Concise Encyclopedia of Semantics 9780080959696, 0080959695, 9781282886797, 1282886797

Concise Encyclopedia of Semantics is a comprehensive new reference work aiming to systematically describe all aspects of the study of meaning in human languages.


English. 1103 pages. 2009.


Table of contents:
Front Cover
Concise Encyclopedia of Semantics
Copyright Page
The Editor
Alphabetical List of Articles
Introduction
Contributors
A
Accessibility Theory
Bibliography
Acquisition of Meaning by Children
Conventionality and Contrast
In Conversation
Making Inferences
Pragmatics and Meaning
Another Approach
Sources of Meanings
Summary
Bibliography
Anaphora Resolution: Centering Theory
Anaphora Resolution with Centers of Attention
Centering Theory: Modeling Local Coherence with Centers of Attention
Centering Theory and Anaphora Resolution
Unspecified Aspects of Centering
Applications of Centering Theory as a Model of Local Coherence
Bibliography
Anaphora, Cataphora, Exophora, Logophoricity
Defining Anaphora, Cataphora, and Exophora
NP-Anaphora
The Syntactic Approach
The Semantic Approach
The Pragmatic Approach
VP-Anaphora
A Typology of VP-Anaphora
VP-Ellipsis: Properties, Issues, and Analyses
Properties
Issues
Analyses
Logophoricity
Defining Logophoricity
Cross-Linguistic Marking of Logophoricity
A Typology of Languages with Respect to Logophoricity
Some Implicational Universals with Respect to Logophoricity
Bibliography
Antonymy and Incompatibility
Incompatibility and Contrast
Antonymy and Opposition
Gradable Contrariety (Classical Antonymy, Polar Opposition)
Complementarity (Contradiction)
Directional Antonyms
Other Types of Opposition
Research Issues
Contrast and Lexical Development
A Lexical Relation?
Discourse Functions and Constructions
Defining Antonymy
Bibliography
Aristotle and Linguistics
Bibliography
Aspect and Aktionsart
Phases and Boundaries
Aspect Theories and Their Historical Development
Bibliography
Assertion
Bibliography
B
Boole and Algebraic Semantics
Bibliography
C
Categorial Grammar, Semantics in
Introduction
Montague Semantics
Lexical Semantics
Quantifiers and Scope
Anaphora
Reflexives
Pronouns
Computational Semantics for Categorial Grammars
Conclusion
Bibliography
Categorizing Percepts: Vantage Theory
Bibliography
Relevant Website
Category-Specific Knowledge
Principles of Organization
Modality-Specific Hypotheses
Domain-Specific Hypotheses
Feature-Based Hypotheses
Clues from Cognitive Neuropsychology
Explaining Category-Specific Semantic Deficits
Clues from Functional Neuroimaging
Conclusion
See also
Bibliography
Causatives
Defining Causative Constructions
Types of Causative Constructions
The Semantics of Causatives: Two Major Types of Causation
Causative Continuum and Causation Types
Bibliography
Character versus Content
Content/Character Distinction and Semantics
Content/Character Distinction and Philosophy
Bibliography
Classifiers and Noun Classes
Noun Classes
Noun Classifiers
Numeral Classifiers
Classifiers in Possessive Constructions
Verbal Classifiers
Locative Classifiers
Deictic Classifiers
Bibliography
Cognitive Semantics
Cognitive Linguistics and Cognitive Semantics


CONCISE ENCYCLOPEDIA OF

SEMANTICS


CONCISE ENCYCLOPEDIA OF

SEMANTICS

COORDINATING EDITOR
PROFESSOR KEITH BROWN
University of Cambridge, Cambridge, UK

VOLUME EDITOR
PROFESSOR KEITH ALLAN
Monash University, Victoria, Australia

Elsevier Ltd., The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, Netherlands

© 2009 Elsevier Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publishers. Permissions may be sought directly from Elsevier's Rights Department in Oxford, UK: phone (+44) 1865 843830; fax (+44) 1865 853333; e-mail [email protected]. Requests may also be completed online via the homepage (http://www.elsevier.com/locate/permissions).

First edition 2009

Library of Congress Control Number: 2009923453
A catalogue record for this book is available from the British Library
ISBN 978-0-08-095968-9

09 10 11 12 13    10 9 8 7 6 5 4 3 2 1

This book is printed on acid-free paper
Printed and bound in Great Britain

THE EDITOR

Keith Allan was Section Editor for Logical and Lexical Semantics in the Encyclopedia of Languages and Linguistics, 2nd edition. Currently Professor of Linguistics at Monash University, he is the author of Linguistic Meaning (Routledge & Kegan Paul, 1986), Natural Language Semantics (Blackwell, 2001) and The Western Classical Tradition in Linguistics (Equinox, 2007; 2nd edn 2009), and co-author (with Kate Burridge) of Euphemism and Dysphemism: Language Used as Shield and Weapon (Oxford University Press, 1991) and Forbidden Words: Taboo and the Censoring of Language (Cambridge University Press, 2006). He was Semantics Editor of the Oxford International Encyclopedia of Linguistics, 1st & 2nd editions (Oxford University Press, 1991, 2003), and Co-editor (with Kasia Jaszczolt) of The Cambridge Handbook of Pragmatics (Cambridge University Press, in preparation). Additionally, he has been Co-editor of the Australian Journal of Linguistics since 2007, and he co-edited two special issues of Language Sciences devoted to Vantage Theory (2009).



ALPHABETICAL LIST OF ARTICLES

Accessibility Theory Acquisition of Meaning by Children Anaphora Resolution: Centering Theory Anaphora, Cataphora, Exophora, Logophoricity Antonymy and Incompatibility Aristotle and Linguistics Aspect and Aktionsart Assertion Boole and Algebraic Semantics Categorial Grammar, Semantics in Categorizing Percepts: Vantage Theory Category-Specific Knowledge Causatives Character versus Content Classifiers and Noun Classes Cognitive Semantics Coherence: Psycholinguistic Approach Cohesion and Coherence Collocations Color Terms Comparatives Comparative Constructions Componential Analysis Compositionality Concepts Concessive Clauses Conditionals Connectives in Text Connotation Constants and Variables Context Context and Common Ground Context Principle Conventions in Language Cooperative Principle Coreference: Identity and Similarity Counterfactuals Default Semantics

Definite and Indefinite Definite and Indefinite Articles Definite and Indefinite Descriptions Definition in Lexicology Definitions Demonstratives Dictionaries Dictionaries and Encyclopedias: Relationship Diminutives and Augmentatives Direct Reference Disambiguation Discourse Anaphora Discourse Domain Discourse Parsing, Automatic Discourse Representation Theory Discourse Semantics Donkey Sentences Dthat Dynamic Semantics Event-Based Semantics Evidentiality Evolution of Semantics Existence Expression Meaning vs Utterance/Speaker Meaning Extensionality and Intensionality Face Factivity False Friends Field Work Methods in Semantics Folk Etymology Formal Semantics Frame Semantics Future Tense and Future Time Reference Game-Theoretical Semantics Gender General Semantics Generating Referring Expressions Generative Lexicon


Generative Semantics Generic Reference Generics, Habituals and Iteratives Grammatical Meaning Honorifics Human Reasoning and Language Interpretation Hyponymy and Hyperonymy Ideational Theories of Meaning Ideophones Idioms Implicature Indefinite Pronouns Indeterminacy Indexicality Inference: Abduction, Induction, Deduction Ingressives Intensifying Reflexives Intention and Semantics Interpreted Logical Forms Interrogatives Irony Jargon Lexical Acquisition Lexical Conceptual Structure Lexical Conditions Lexical Fields Lexical Meaning, Cognitive Dependency of Lexical Semantics Lexicology Lexicon/Dictionary: Computational Approaches Lexicon: Structure Logic and Language Logical and Linguistic Notation Logical Consequence Logical Form Mass Expressions Meaning Postulates Meaning, Sense, and Reference Memes Mentalese Meronymy Metalanguage versus Object Language Metaphor and Conceptual Blending Metonymy Modal Logic Monotonicity and Generalized Quantifiers Montague Semantics Mood and Modality Mood, Clause Types, and Illocutionary Force Multivalued Logics Natural Language Understanding, Automatic Natural Semantic Metalanguage Natural versus Nonnatural Meaning Negation Neo-Gricean Pragmatics

Neologisms Nominalism Nonmonotonic Inference Nonstandard Language Use Number Numerals Onomasiology and Lexical Variation Operators in Semantics and Typed Logics Partitives Perfects, Resultatives, and Experientials Performative Clauses Philosophical Theories of Meaning Phrastic, Neustic, Tropic: Hare’s Trichotomy Plurality Polarity Items Politeness Politeness Strategies as Linguistic Variables Polysemy and Homonymy Possible Worlds Pragmatic Determinants of What Is Said Pragmatic Presupposition Pragmatics and Semantics Pre-20th Century Theories of Meaning Presupposition Projection Problem for Presupposition Pronouns Proper and Common Names, Impairments of Proper Names Proper Names: Philosophical Aspects Propositional and Predicate Logic Propositional Attitude Ascription Propositional Attitudes Propositions Prosody Prototype Semantics Psychology, Semantics in Quantifiers Reference and Meaning, Causal Theories Reference: Philosophical Theories Referential versus Attributive Register Representation in Language and Mind Rhetoric, Classical Rigid Designation Role and Reference Grammar, Semantics in Scope and Binding Selectional Restrictions Semantic Change Semantic Change, the Internet and Text Messaging Semantic Maps Semantic Primitives Semantic Value Semantics–Pragmatics Boundary Sense and Reference Serial Verb Constructions


Situation Semantics Sound Symbolism Spatial Expressions Specificity Speech Act Verbs Speech Acts Speech Acts and AI Planning Theory Speech Acts and Grammar Stereotype Semantics Summarization of Text: Automatic Synesthesia Synesthesia and Language Synonymy Syntax–Semantics Interface

Taboo Words Taboo, Euphemism, and Political Correctness Temporal Logic Tense Thematic Structure Thesauruses Thought and Language Truth Conditional Semantics and Meaning Type versus Token Use Theories of Meaning Vagueness Vagueness: Philosophical Aspects Virtual Objects WordNet(s)


INTRODUCTION

The Concise Encyclopedia of Semantics gathers into one compact volume 214 articles from the world's leading experts. All aspects of semantics are appraised, making the scope of coverage unrivalled for a single volume. The lightly re-edited articles were selected from the wealth of scholarly work compiled for The Encyclopedia of Languages and Linguistics, Second Edition (Brown 2006). This introduction explains the expansive scope of the Concise Encyclopedia of Semantics.

Semantics is the study of meaning in human languages; more precisely, it is the study and representation of the meaning of every kind of constituent and expression in language (morph, word, phrase, clause, sentence, text/discourse), and also of the meaning relationships among them. To say Her frown means she's angry is to talk about the frown as a sign of anger; a language expression is not the sign of its meaning, but an arbitrary (though conventional) symbol (or set of symbols) for the meaning. Semantics studies the interpretation of these symbols in their various combinations. More often than not, full understanding requires some knowledge of context; consequently, one might be misled on overhearing the following (adapted from Rachel Giora 2003: 175): "Emma come first. Den I come. Den two asses come together. I come once-a-more. Two asses, they come together again. I come again and pee twice. Then I come one lasta time."

The addressee knew that his Italian companion was telling (in his quaint English) how to spell Mississippi. More prosaically, the various meanings of English bank are, necessarily, elicited with reference to different contexts. A comprehensive volume on semantics cannot ignore context and common ground, even though these take us into pragmatics (the context-dependent assignment of meaning to language expressions used in acts of speaking and writing). Common ground includes such assumptions as that the interlocutor is normally an intelligent being, that a speaker (let this be shorthand for "speaker and/or writer") does not need to spell out those things obvious to the sensory receptors of the hearer ("hearer and/or reader"), or those which can easily be reasoned out on the basis of knowing the language and conventions for its use and from experience of the world the interlocutors inhabit. Common ground allows meaning to be underspecified by a speaker, so that language understanding is a constructive process in which a lot of inferencing is expected from the hearer (the person whom the speaker intends to be the (or a) recipient of the speaker's message and consequently to react to it).

Most people in our community hold two true beliefs: that meanings are a property of words and that word meanings are stored in dictionaries. Lexical semantics focuses on the semantic content of words and morphemes and the semantic relations among lexical items. This obviously leads us to consider certain semantically oriented aspects of lexicology. A dictionary (or lexicon, the terms are not differentiated here) gives the decontextualized sense of a word, abstracted from innumerable usages of it; the dictionary user must puzzle out for him- or herself what the speaker uses the word to refer to in the particular text in which it appears.

Speakers refer to things – physical objects, abstract entities, places, states, events – that have existed (happened) in the past, things that exist (are happening) at present, and things that they predict will exist (happen) in the future. They also talk about things that could be or could have been if the world were different than it was, is, or is expected to be. Speakers talk about things in the fictional worlds and times of books and films; about things represented in paintings and photographs; about things that they deny exist; even, occasionally, about impossible things such
as the largest prime number or My brother is an only child. The existential status of entities referred to, and the nature of the world and time being spoken of, are very significant aspects of meaning that need to be accounted for within semantic theory. Semantics must meet the challenge of connecting the language expressions used to talk about all these different kinds of things to the very things spoken about; that is, language forms must be linked to a model of the world and time spoken of (M_w,t).

To give the sense (roughly, "decontextualized meaning") of a language expression e_O in the natural language being described (the object-language) is to translate it into a language expression e_M in the metalanguage, the language of the semantic representation, which may be the same as the object-language (e.g., dog means "canine animal"), another natural language (e.g., Hausa kare means "dog") or something more formal (e.g., ∀x[DOG(x) ↔ λy(ANIMAL(y) ∧ CANINE(y))(x)]).
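Since several articles in this volume presuppose such logical notation, it is worth spelling that last example out. The display below is simply a LaTeX transcription of the formula just given (no particular article's formalism is implied), together with the beta-reduction step showing that it amounts to "x is a dog if and only if x is a canine animal":

```latex
% Sense of "dog" stated in a formal metalanguage:
\[
\forall x\,\bigl[\mathrm{DOG}(x) \leftrightarrow \lambda y\,(\mathrm{ANIMAL}(y) \wedge \mathrm{CANINE}(y))(x)\bigr]
\]
% Applying the lambda term to x (beta-reduction) gives:
\[
\forall x\,\bigl[\mathrm{DOG}(x) \leftrightarrow \mathrm{ANIMAL}(x) \wedge \mathrm{CANINE}(x)\bigr]
\]
```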
Meaning is compositional. The meaning of a text (or discourse, the terms are used interchangeably here) is composed from the meanings of its constituent utterances (including their punctuation or prosody – stress, disjunctures, intonation, tone of voice) and the sense of the sentence used in each utterance. The senses of phrases and sentences are computed from the senses of their constituents, with the most primitive chunks of meaning being taken from a lexicon (dictionary). The lexicon contains every language expression whose sense cannot be computed from its constituent parts, e.g., paddle must be listed because its meaning does not derive from p+addle or pad(d)+le, but traveler need not be listed because it derives from travel+er.

Within twentieth century linguistics, studies of meaning progressed from lexical semantics to assigning senses to sentences, then to assigning denotation/reference to utterances and meanings to speech acts, culminating in studies of text (discourse) meaning and the analysis of meaning within conversations. The very distinction between sense and reference (roughly, "what, in a given world and time, is spoken about") drags in contexts and speakers, speech acts and hearers, and so pragmatics. Indexicals link language expressions to the situations of utterance and interpretation, and an indexical used as a form of address invokes socio-cultural matters such as face and the use of honorifics. For example, in French it would normally be inappropriate for a child to address the teacher with je t'en prie ("you're welcome; please do") instead of the more respectful je vous en prie; in Japanese a socially distant third person can be insulted through the use of an in-group pronoun or verb form as in Ano yaroo ga soo iiyagatta ("That guy said so [impolite form]"). There are many languages where the indexical and other lexical and morphosyntactic choices indicate the status and familiarity of the speaker relative to the addressee and/or who (and sometimes what) is spoken of. We can generalize this to a choice of discourse style. There's a link, here, to tabooed language. So along with articles on sense and many approaches to reference, the Concise Encyclopedia of Semantics has papers on aspects of sociocultural behavior and pragmatics.

Semantics was traditionally concerned only with literal meaning and with sense, denotation (what the language expression is normally used to refer to), and reference. Yet much everyday language relies for its communicative force on the figurative language of metaphor and metonymy, which drive reinterpretation and the creation of many novel expressions. Often language is enlivened with sound symbolism, which undermines the claim that the form-meaning correlation is completely arbitrary. There is also connotation: effects that arise from encyclopedic knowledge about the denotation (or reference) of a language expression and also from experiences, beliefs, and prejudices about the contexts in which the expression is typically used, e.g., the differences between bunny and rabbit, between Nigger and African American, between frak and fuck. Connotations reflect social and stylistic aspects of meaning. Avoiding words with dysphemistic connotations gives rise to euphemisms such as the n-word, the f-word, the c-word (as if there were only one English word beginning with each of these letters).

The study of semantics evolved on the one hand from the compiling of dictionaries and on the other from developments in rhetoric, dialectic, and rational argument among philosophers in Ancient Greece; these combined with interest in literary analysis to inspire the study of grammar in the Ancient World. Throughout history there has been a strong correlation between investigations of semantics and philosophical inquiry into rational argument and the meanings of language expressions analyzed and tested in systems of logic. There are therefore many essays on logic and the philosophy of language in the Concise Encyclopedia of Semantics. By and large, formal semantics developed from these areas of research.

The philosophical tradition bequeathed to linguistic semantics a branch of the discipline with strong adherence to truth-conditional semantics. In order to understand and evaluate the meaning of It is raining or Kangaroos are marsupials you need to know the conditions under which these statements would be true. Knowing these conditions allows you to make such inferences as that you will get wet if you go out into the rain and that female kangaroos have pouches to hold neonates. One problem is, as mentioned earlier, connecting the
language used to the world and time being spoken of. A greater problem is providing an acceptable semantics for non-truth-functional sentences (or utterances) like Be quiet! and What's your name? and expressive idioms such as Thanks or the ejaculation Shit! It has long been recognized that not all sentences (or utterances) are truth-bearing. Aristotle noted that 'Not all sentences are statements; only such as have in them either truth or falsity. Thus a prayer is a sentence, but neither true nor false' (On Interpretation 17a,1, Aristotle 1984). Later, the Stoics distinguished a 'judgment' (axíōma) as either true or false, whereas none of an interrogation, inquiry, imperative, adjurative, optative, hypothetical, nor vocative has a truth value (Diogenes Laertius 1925 Lives VII: 65–68). For more than two millennia, logicians and language philosophers concentrated their minds on statements and the valid inferences to be drawn from them, to the virtual exclusion of other propositional types (questions, commands, etc.). Then Austin 1962 noted that people actually perform acts through certain forms of utterance (for example, make a promise by saying I promise, offer thanks with Thank you). Searle 1975 identified five macro-classes of such speech acts in the following words: 'we tell people how things are, we try to get them to do things, we commit ourselves to doing things, we express our feelings and attitudes and we bring about changes through our utterances.' Speech acts are the very source for (potentially verifiable and manipulable) language data; they are, however, quintessentially pragmatic. Nonetheless, several of the articles in the Concise Encyclopedia of Semantics investigate the more semantic aspects of speech acts.

Since Aristotle's time, formal logics (systems establishing the principles of reliable inference) have been used in representing meaning. Whereas a logic functions primarily as an abstract reasoning device, a natural language exists for use as a practical means of communication about our responses as human beings to our experiences. The semantic descriptions of natural language need to reflect this characteristic. Standard logics define the truth values of propositions connected by special uses of and, or, if . . . then. The meanings of general vocabulary items like man, know, yesterday, etc. are given by meaning postulates only in nonstandard logics. A formal nonstandard logic should make a useful metalanguage for natural language semantics if its terms and processes are fully defined, explicit, and rigorously adhered to. However, there is the problem that the metalanguage for natural language semantics needs to be at least as comprehensive, and of the same notational class, as natural language itself, and no existing logical system yet achieves this goal.

All the following four criteria need to be met by a formal metalanguage for natural language semantics. (1) All the terms and processes of the formal metalanguage must be explicitly defined and strictly adhered to. Ideally, the vocabulary will be a specified set of symbols whose forms and correlated meanings are fully defined; all possible combinations of vocabulary items in the metalanguage will be generated from fully specified syntactic axioms and rules of syntax; and the meanings of syntactically well-formed structures will be fully specified by semantic axioms and rules for the metalanguage.
Proper formalization of the metalanguage should permit proofs of particular conclusions about semantic structure and so prevent mistakes derived from faulty assumptions and/or inference procedures. Such standards of rigor and exactitude tend to be ignored when using an informal metalanguage such as a natural language; however, none of the advantages of a formal system is necessarily unobtainable in an informal metalanguage. (2) The metalanguage must be applicable to the whole of the object-language and not just a selected fragment of it. (3) The formal metalanguage must be able to assign denotations to senses, i.e., link e_O to worlds and times (potentially) spoken of. (4) The products of the metalanguage should combine explicitness of statement with clarity of expression, so as to genuinely illuminate the meaningful properties and meaning relations of any and every expression within the object-language in terms which correlate with everyday notions of meaning in language.

The basic requirement of semantic analysis is to satisfactorily communicate the meaning of language expression e_O from the object-language into expression e_M in the metalanguage, bearing in mind that the metalanguage is meant to be understood by human beings who normally communicate in a natural language of which they have fluent command. If you understand neither Polish nor Swahili there is little point using Swahili as a metalanguage for the semantic analysis of Polish (or vice versa); e.g., to say To jest pies means "Ni mbwa" will not help you at all (using English as a metalanguage, they mean "It's a dog"). In practice, scholars either provide natural language glosses for exotic metalanguage expressions, or assume some existing knowledge of the semantics of the symbols and expressions being used: e.g., ∀ means "for all", ↔ means "if and only if", ∧ means "logical and", λy(P(y))(x) means "x is a member of set P".

Lexical semantics comprehends content words like nouns, verbs, and adjectives, and grammatical elements like connectives, articles, modal and serial verbs; it also extends to the meanings of grammatical operators like number, tense, mood, and aspect. The semantics of quantifiers (e.g., all, most, some) needs to take syntax into account in order to ascertain the relative scope of the quantifier, especially where there is more than one quantifier in a clause (compare Everyone in the room knew two languages with Two languages were known by everyone in the room: their salient meanings seem to differ). Semantics cannot ignore the contribution that
morphosyntax makes to meaning because, of course, the morphosyntactic dissimilarity makes The hunter killed the crocodile (in Latin, venator crocodillum occidit) mean something different from The crocodile killed the hunter (venatorem crocodillus occidit). Although only a handful of grammatical theories are represented in the Concise Encyclopedia of Semantics, the semantic components of those described here concentrate on the meanings of sentences and the propositions which they contain. Theories of formal semantics do likewise. And although semantic relations such as antonymy, synonymy, and meronymy are usually associated with lexical semantics, these relations apply to the semantics of larger syntactic structures too (for instance, venatorem crocodillus occidit, crocodillus venatorem occidit, crocodillus occidit venatorem, occidit crocodillus venatorem are all synonymous).

To admit into semantic theory the semantic analysis of sentences leads directly to a concern with connected sentences and hence to longer texts. The Concise Encyclopedia of Semantics therefore includes a handful of articles which focus on the meanings of texts and discourses. An important aspect of texts is the intertextual relations, which include anaphoric relations often manifest through indexicals. Anaphors typically indicate coreference (Sue_1 screamed at her attacker_2 and then she_1 hit him_2) but often merely semantic identity or similarity (Sue bought a white shirt and Harry a black one, although Sue had said she didn't like the color). Consideration of texts raises matters of cohesion and coherence. Roughly speaking, a discourse is judged coherent where (the model of) the world spoken of (M_w,t) is internally consistent and generally accords with accepted human knowledge. Discourse semantics needs to be able to represent M_w,t as a product of the meaningful contributions of such formal strategies as the choice of vocabulary, syntactic construction, and prosody (or its graphic counterpart, punctuation). A model of communicative behavior explaining exactly how discourse meaning is composed from the language expressions within the text requires input from many branches of linguistics.

Formal and mathematical systems are essential tools of research when computers are applied to lexicological, textual, and other semantic analysis. A discourse parser takes as input a text, and automatically derives its discourse structure representation. This requires the assembly of complex algorithms, speech recognizers and generators, lexica, sets of morphological and morphophonemic rules, grammars, parsers, logical form builders, and inference engines, all networked with vast amounts of encyclopedic knowledge. Computational lexicology develops machine-readable dictionaries from which to extract semantic definitions and semantic relations for use in natural language processing applications such as disambiguation, meaning overlap, information extraction, question answering, and text summarization. With the huge increase in on-line text and multimedia information in recent years, demand for automatic summarization systems has grown. The goal of automatic summarization is to extract information content from a source document so as to present the gist in a condensed form, in a manner sensitive to the needs of the user and task. Articles in the Concise Encyclopedia of Semantics deal with such aspects of computational linguistics.
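As a concrete, if deliberately naive, illustration of the extractive approach to summarization just described, the sketch below scores each sentence by the document frequency of its content words and returns the top-ranked sentences in their original order. It is a toy under stated assumptions (the stopword list, the scoring scheme, and the function name summarize are all invented for illustration), not the architecture of any system discussed in the articles:

```python
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    """Naive extractive summarizer: rank sentences by the document
    frequency of their content words; keep the top n in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Crude stopword list; a real system would use a curated resource.
    stop = {'the', 'a', 'an', 'of', 'to', 'and', 'in', 'is', 'it', 'that'}
    freq = Counter(w for w in re.findall(r'[a-z]+', text.lower())
                   if w not in stop)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Re-emit the extracted sentences in document order, for coherence.
    return ' '.join(s for s in sentences if s in top)
```

A production summarizer would add sentence-position features, redundancy control, and sensitivity to the user's task, which is precisely where the semantic and discourse-level representations surveyed in this volume come in.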
Aristotelian logic concentrated on entailments of propositions; Frege 1892 drew attention to their (or the speaker's) presuppositions. Only if Sue has stopped smoking is true does it entail Sue no longer smokes; but whether it is true or false, it (or the speaker) presupposes (or pretends to presuppose) that there is someone in the world spoken of (M_w,t) identifiable as Sue, and also that Sue used to smoke. Grice 1975 famously identified certain non-monotonic (defeasible) inferences, accessible from conversational sequences, that arise as a speaker's implicatures if s/he is abiding by the cooperative principle. For instance, a (male) colleague turns up late for a meeting and on entry immediately says I'm sorry, my car broke down. In so saying, he uses a conversational implicature from which he expects it to be understood that he is apologizing for being late, not for the fact that his car broke down, and that mention of the car break-down is intended to explain his being late, because car break-downs disrupt journey schedules. Even if none of his colleagues knew he was coming by car, he does not have to spell this premise out; it is implicit in (and non-monotonically entailed by) what he has said. Such mundane enrichment of what is said rests upon knowledge of social and cultural conventions and the cognitive principles that govern our thinking, all of which need to be accounted for in a semantic theory that comprehends utterance meaning.

Since meaning is in the head, the cognitive, psychological, and neurological aspects of meaning are a significant consideration. These range from how children acquire meanings, through the relation between meanings and concepts, to the impairment of meaning in people suffering brain disorders. The Concise Encyclopedia of Semantics includes articles on all these topics.

Aristotle divided up human experience into ten categories, each associated with a grammatical class. He believed that the nature of the mind determines that all humans have similar phenomenal and conceptual experiences; a view that was adopted by, among many others, a number of seventeenth century rationalists. One, John Wilkins 1668, created symbols which
characterize and label each 'thing and notion' so as to represent its place in the natural order relative to all other things and notions. Wilkins also proposed a pronunciation system and syntax for this 'philosophical language' and wrote a dictionary translating English words into it, to produce a most comprehensive componential analysis of the language. However, twentieth century componential analysis owed nothing directly to Wilkins and others interested in a 'universal character'; perhaps the closest heirs are thesauri and Wierzbicka's natural semantic metalanguage.

There were several sources for twentieth century componential analysis: one was Prague school distinctive feature analysis of inflexional morphology; another was anthropology, where universal concepts like BE-THE-MOTHER-OF were used in giving the meaning of kin terms; a third was semantic field theory. Seemingly closed fields such as case inflexions or kin terms should permit exhaustive componential analysis in which every term within the field is characterized by a unique subset of the universal set of semantic components defining the field. However, these systems invariably leak into other fields when meaning extensions and figurative usage are considered. Furthermore, an exhaustive componential analysis of the entire vocabulary of a language is probably unachievable, because it proves impossible to define the boundaries – and hence all the components – of every field. There is also a problem with the notion 'component'. For instance, MALE is not so much a 'component' of bull as an inferred property of a typical bull, such that it is true to say if something is a bull, then it is male. Nonetheless, many lexical semanticists favor componential analysis, as will be seen from the articles in this compilation.

I have sought to explain in this Introduction the scope of coverage in the Concise Encyclopedia of Semantics, which is unrivalled for a single volume. The subject matter comprehends lexical semantics, lexicology, semantic relations, and cognitive, psychological, computational, formal and functional approaches, with excursions into text and discourse, context, pragmatics, the syntax-semantics interface, and the semantics of grammar. This anthology is a Pandora's box of scholarly delights for readers of all kinds who wish to acquaint themselves with recent work by the world's leading authorities within the broad field of semantic inquiry and research.

Keith Allan

References

Aristotle (1984). The Complete Works of Aristotle: The Revised Oxford Translation. Barnes, Jonathan (ed.). Bollingen Series 71. Princeton: Princeton University Press.
Austin, John L (1962). How to Do Things with Words. Oxford: Clarendon Press.
Brown, E Keith (general editor) (2006). Encyclopedia of Languages and Linguistics (2nd edition) (14 vols). Oxford: Elsevier.
Diogenes Laertius (1925). Lives of Eminent Philosophers (vol. 2). Hicks, Robert D (transl.). Loeb Classical Library. London: Heinemann.
Frege, Gottlob (1892). 'Über Sinn und Bedeutung.' Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. Reprinted as 'On sense and reference.' In Geach, Peter & Black, Max (eds.) Translations from the Philosophical Writings of Gottlob Frege. Oxford: Blackwell. 1960: 56–78.
Giora, Rachel (2003). On Our Mind: Salience, Context, and Figurative Language. New York: Oxford University Press.
Grice, H Paul (1975). 'Logic and conversation.' In Cole, Peter & Morgan, Jerry L (eds.) Syntax and Semantics 3: Speech Acts. New York: Academic Press. 41–58. Reprinted in Grice, H Paul, Studies in the Way of Words. Cambridge, MA: Harvard University Press. 1986.
Searle, John R (1975). 'A taxonomy of illocutionary acts.' In Gunderson, Keith (ed.) Language, Mind, and Knowledge. Minneapolis: University of Minnesota Press. 344–369. Reprinted in Language in Society 5, 1976: 1–23, and in Searle, John R, Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press. 1979.
Wilkins, John (1668). Essay Towards a Real Character and a Philosophical Language. London: S. Gellibrand and John Martin for the Royal Society [Menston: Scolar Press facsimile. 1968].


CONTRIBUTORS [Author locations were correct at the time the article was submitted]

B Abbott Michigan State University, East Lansing, MI, USA

D Biber Northern Arizona University, Flagstaff, AZ, USA

A Y Aikhenvald La Trobe University, Bundoora, Australia

D Blair University of Western Ontario, Canada

V Akman Bilkent University, Ankara, Turkey

P Blanchette University of Notre Dame, Notre Dame, IN, USA

K Allan Monash University, Victoria, Australia

E Borg University of Reading, Reading, UK

U Ansaldo Universiteit van Amsterdam, Amsterdam, The Netherlands M Ariel Tel Aviv University, Tel Aviv, Israel S Attardo Youngstown State University, Youngstown, OH, USA J Ayto London, UK A Barber The Open University, Milton Keynes, UK F Bargiela-Chiappini Nottingham Trent University, Nottingham, UK C Barker University of California, San Diego, CA, USA S Barker University of Nottingham, Nottingham, UK L Bauer Victoria University of Wellington, Wellington, New Zealand

D Braun University of Rochester, Rochester, NY, USA C M Bretones Callejas University of Almeria, Almeria, Spain H Bunt Katholieke Universiteit Brabant, Le Tilburg, The Netherlands K Burridge Monash University, Victoria, Australia C Caffi Genoa University, Genoa, Italy G Callaghan Wilfrid Laurier University, Waterloo, Ontario, Canada D Cameron Worcester College, Oxford, UK R Cann University of Edinburgh, Edinburgh, UK B Caplan University of Manitoba, Winnipeg, Canada A Caramazza Harvard University, Cambridge, MA, USA

H Béjoint Université Lyon 2, Lyon, France

G Carlson University of Rochester, Rochester, NY, USA

A Bezuidenhout University of South Carolina, Columbia, SC, USA

P Chamizo-Domínguez Universidad de Málaga, Málaga, Spain


G Chierchia Università degli Studi di Milano-Bicocca, Milan, Italy L Clapp Illinois Wesleyan University, Bloomington, IL, USA E V Clark Stanford University, Stanford, CA, USA H H Clark Stanford University, Stanford, CA, USA E Corazza University of Nottingham, Nottingham, UK

J van Eijck Centre for Mathematics and Computer Science, Amsterdam, The Netherlands and Research Institute for Language and Speech, Utrecht, The Netherlands V Evans University of Sussex, Brighton, UK C Fabricius-Hansen Germanistisk Institutt, Oslo, Norway D Farkas University of Santa Cruz, Santa Cruz, CA, USA C Fellbaum Princeton University, Princeton, NJ, USA

G G Corbett University of Surrey, Guildford, UK

C J Fillmore University of California at Berkeley, Berkeley, CA, USA

F Cornish University of Toulouse-Le Mirail, Toulouse, France

K Frankish The Open University, Milton Keynes, UK

S Coulson University of California, San Diego, CA, USA

A Galton University of Exeter, Exeter, UK

A Cowie University of Leeds, Leeds, UK

D Geeraerts University of Leuven, Leuven, Belgium

S Crawford Lancaster University, Lancaster, UK M Cresswell The University of Auckland, Auckland, New Zealand D J Cunningham Indiana University, Bloomington, IN, USA R Cytowic Washington, DC, USA Ö Dahl Stockholm University, Stockholm, Sweden R Dale Macquarie University, Sydney, NSW, Australia L Degand Université catholique de Louvain, Louvain-la-Neuve, Belgium A Deumert Monash University, Victoria, Australia

C Goddard University of New England, Armidale, Australia M S Green University of Virginia, Charlottesville, VA, USA D Gregory University of Sheffield, Sheffield, UK J Groenendijk Universiteit van Amsterdam, Amsterdam, The Netherlands C Gussenhoven Queen Mary, University of London, London, UK J Gvozdanović Universität Heidelberg, Heidelberg, Germany G Haßler Potsdam, Germany W Hanks University of California at Berkeley, Berkeley, CA, USA

J Dever University of Texas, Austin, TX, USA

P Hanks Brandeis University, Waltham, MA, USA, and Berlin-Brandenburg Academy of Sciences, Berlin, Germany

H Diessel Friedrich-Schiller-Universität Jena, Jena, Germany

R A Harris University of Waterloo, Waterloo, Canada

E Eaker University of Western Ontario, London, Ontario, Canada

R R K Hartmann University of Exeter and University of Birmingham, UK

P Edmonds Sharp Laboratories of Europe, Oxford, UK

B Hellwig School of Oriental and African Studies, London, UK

F Egan Rutgers University, New Brunswick, NJ, USA

J Hintikka University of Helsinki, Helsinki, Finland

J Hoeksema University of Groningen, Groningen, The Netherlands

R Krishnamurthy Aston University, Birmingham, UK

J Holmes Victoria University of Wellington, Wellington, New Zealand

P Lasersohn University of Illinois at Urbana-Champaign, Urbana, IL, USA

Y Huang University of Reading, Reading, UK

S Laurence University of Sheffield, Sheffield, UK

M Hymers Dalhousie University, Halifax, Nova Scotia, Canada

G Lavers University of Western Ontario, London, Ontario, Canada

C Ilie Örebro University, Örebro, Sweden T M V Janssen ILLC, University of Amsterdam, Amsterdam, The Netherlands K Jaszczolt University of Cambridge, Cambridge, UK A K Joshi University of Pennsylvania, Philadelphia, PA, USA J S Jun Hankuk University of Foreign Studies, Seoul, Korea P Juvonen Stockholm University, Stockholm, Sweden S Kaufmann Northwestern University, Evanston, IL, USA R Keefe University of Sheffield, Sheffield, UK E L Keenan University of California, Los Angeles, CA, USA C Kennedy Northwestern University, Evanston, IL, USA C Kilian-Hatz Universität zu Köln, Köln, Germany G Klima Fordham University, Bronx, NY, USA J-P Koenig University at Buffalo, Buffalo, NY, USA

J Lawler University of Michigan, Ann Arbor, MI, USA A Lehrer University of Arizona, Tucson, AZ, USA E Lepore Rutgers University, Piscataway, NJ, USA K Lindblom Stony Brook University, Stony Brook, NY, USA J Lindstedt University of Helsinki, Helsinki, Finland K C Litkowski CL Research, Damascus, MD, USA G Longworth Birkbeck College, University of London, London, England, UK E J Lowe University of Durham, Durham, UK P Lutzeier University of Surrey, Surrey, UK H Pander Maat Universiteit Utrecht, Utrecht, The Netherlands B Z Mahon Harvard University, Cambridge, MA, USA I Mani Georgetown University, Washington, DC, USA

M Koptjevskaja-Tamm Stockholm University, Stockholm, Sweden

D Marcu University of Southern California, Marina del Rey, CA, USA

A Koskela University of Sussex, Brighton, UK

E Margolis Rice University, Houston, TX, USA

M Kölbel University of Birmingham, Birmingham, UK

G Martí ICREA and Universitat de Barcelona, Barcelona, Spain

E König Free University of Berlin, Berlin, Germany

R M Martin Dalhousie University, Halifax, Nova Scotia, Canada

W Koyama Rikkyo University, Tokyo, Japan

D McCarthy University of Sussex, Brighton, UK

J D McCawley† University of Chicago, Chicago, IL, USA

H-J Sasse University of Cologne, Cologne, Germany

J Meibauer Universität Mainz, Mainz, Germany

A Saxena Uppsala University, Uppsala, Sweden

E Miltsakaki University of Pennsylvania, Philadelphia, PA, USA

C Semenza University of Trieste, Trieste, Italy

M Montague University of California, Irvine, CA, USA M L Murphy University of Sussex, Brighton, UK B Nerlich University of Nottingham, Nottingham, UK R T Oehrle Pacific Grove, CA, USA N Oldager Technical University of Denmark, Lyngby, Denmark

P A M Seuren Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands B Sherman Princeton University, Princeton, NJ, USA M Shibatani Rice University, Houston, TX, USA J J Song University of Otago, Dunedin, New Zealand

G Ostertag Nassau Community College, Garden City, NY, USA

C Spencer Howard University, Washington, DC, USA

D L Payne University of Oregon, Eugene, OR, USA

R J Stainton University of Western Ontario, London, Ontario, Canada

B Pizziconi University of London, SOAS, London, UK

L Stassen Radboud University, Nijmegen, The Netherlands

G Powell University College London, London, UK

M Steedman University of Edinburgh, Edinburgh, UK

R Prasad University of Pennsylvania, Philadelphia, PA, USA S Predelli University of Nottingham, Nottingham, UK K Proost Institut für Deutsche Sprache, Mannheim, Germany

K Stenning Edinburgh University, Edinburgh, UK M Stokhof Universiteit van Amsterdam, Amsterdam, The Netherlands

J Pustejovsky Brandeis University, Waltham, MA, USA

A Sullivan Memorial University of Newfoundland, St John’s NL, Canada

A Ramsay University of Manchester, Manchester, UK

A Szabolcsi New York University, New York, NY, USA

I E Reay University of Glasgow, Glasgow, UK

J R Taylor University of Otago, Dunedin, New Zealand

M Reimer University of Arizona, Tucson, AZ, USA

C Temürcü University of Antwerp, Antwerp, Belgium

P Salo University of Helsinki, Helsinki, Finland T Sanders Universiteit Utrecht, Utrecht, The Netherlands G Sandu University of Helsinki, Helsinki, Finland A Sanford University of Glasgow, Glasgow, Scotland, UK

† Deceased

E C Traugott Palo Alto, CA, USA J van der Auwera University of Antwerp, Antwerp, Belgium R van der Sandt Radboud University, Nijmegen, The Netherlands R D Van Valin Jr University at Buffalo, The State University of New York, Buffalo, NY, USA

J Ward University of London, London, UK

A Wierzbicka Australian National University, Canberra, Australia

S Wechsler University of Texas, Austin, TX, USA

D P Ziegeler University of Manchester, Manchester, UK

L Wetzel Georgetown University, Washington, DC, USA

J Zlatev Lund University, Lund, Sweden


A

Accessibility Theory

M Ariel, Tel Aviv University, Tel Aviv, Israel

© 2006 Elsevier Ltd. All rights reserved.

Natural discourse does not start from scratch. Speakers routinely integrate new information with contextual assumptions, roughly, information that they can take for granted, and so they need not assert it (Sperber and Wilson, 1986/1995). Referring to discourse entities, an inherent feature of human interactions, is no different. Although some discourse entities are (treated as) new (a kiss in [1]), most are (treated as) identifiable (e.g., the review, Helen, her in [1], and her heart, a first-mention, in [2]). Thus, part of the nonasserted material is information about discourse entities that the speaker would like the addressee to retrieve (for citations of SBC [Santa Barbara corpus], see Du Bois et al., 2000, 2003; [. . .] = a short fragment deleted):

LINDA: LORI: (2) DORIS:

when you were reading the review, you talked about the affair between Helen and Paul, [. . .] all that happened was, was a kiss. [. . .] He kissed her, (SBC: 023). they had an autopsy done on her. And her heart, was just hard, (SBC: 001).

Accessibility Theory (Ariel, 1985a, 1985b, 1988a, 1990, 2001), in effect a development of Sanford and Garrod (1981) and Givo´n (1983) (and see also Chafe, 1994), assumes a logically prior distinction between identifiable/Given entities (coded as definite) and nonidentifiable/Given entities (coded as indefinite). Identifiable entities are ones for which the addressee is assumed to be able to access mental representations (see Du Bois, 1980; Heim, 1982). Accessibility theory seeks to account for the selection and interpretation of all definite referring expressions. The theory does not assume (as fundamental) the first versus subsequent mention distinction, and provides one and the same account for expressions considered referential (e.g., proper names), often used for discourse first-mentions, as well as for expressions considered



For example, the affair between Helen and Paul in (1) is a long definite description. The prediction is that it indicates a mental representation that is not as accessible as the shorter the review or he. Indeed, the review is what the interlocutors have been discussing. But the affair, as such, was not explicitly mentioned in the conversation, and in fact, according to Lori, it’s not even clear that there was one. He (a pronoun) refers to the highly accessible Paul, who was just mentioned. Now, the correlations between specific referring expressions and specific degrees of mental accessibility are not arbitrary. This is why (3) is virtually a universal. By and large, the accessibility marking scale is governed by three coding principles: informativity, rigidity, and attenuation. Informativity predicts that more informative expressions be used when the degree of accessibility is relatively low. It is only reasonable for the speaker to provide the addressee with more information if the mental representation is not (highly) activated, so he can better identify the intended entity from among the many he entertains at a low degree of accessibility. Rigidity predicts that a (more) uniquely referring expression (such as a proper name), rather than a relatively nonrigid expression (such as a pronoun), should be used when degree of accessibility is low (cf. Helen, Paul with her, he in [1]). Finally, attenuation predicts that greater phonological size (including the presence of stress) correlates with lower degrees of accessibility, whereas smaller phonological size correlates with higher degrees of accessibility (cf. definite descriptions vs. pronouns, and even more so with zero). The three principles overlap to a large extent. Quite often, informative expressions are also relatively rigid and unattenuated. However, this is not invariably so. The newspaper and United States of America are as informative and rigid as the paper and US(A), respectively, but they are not as attenuated. Accordingly, the lower accessibility markers are found in contexts where a lower degree of accessibility is the case



What I have to do, is take off the distributor wirei, and splice iti in with the fuel pump wirej. Because my . . . fuel pumpj is now electric, (SBC: 007).

(5) In the reference, each author is referred to by name and initials. There is a single exception – to avoid the possibility of confusion, first names are always included for David Payne, Doris Payne, John Payne, Judith Payne and Thomas Payne (Dixon, 1994: xvi–xvii).

In (4), the more topical entity is coreferred to by it, the nontopic by an informative lexical NP (my fuel pump). In (5), presumably equally accessible entities are all referred to by lower accessibility markers (full names), because they compete with each other (initial þ Payne is not rigid enough in this context). It is important to remember, however, that accessibility theory makes claims about correlations between referring expressions and degree of accessibility, measured as a total concept, rather than by any one of its components (e.g., topic, distance, or competition). In other words, the prediction is that accessibility marker selection is determined by weighing together a whole complex of accessibility factors, which together determine what the degree of accessibility of a given discourse entity is at the current stage of the discourse (see Toole, 1996; Ariel, 1999). This is why, for example, even speakers are not invariably referred to by the highest accessibility markers (zero in Hebrew). Although the speaker is a highly salient discourse entity, if she’s not topical or if it’s competing with another antecedent, it may be referred to by an independent pronoun. Finally, accessibility theory is universal (see Ariel, 1990: 4.2), although not all languages have exactly the same set of referring expressions, and even when these seem to be identical, they may rate differently for the three coding principles (informativity, rigidity, and attenuation, e.g., cf. English and Japanese pronouns). Provided they are comparable, all referring expressions are predicted to indicate the same relative, though not absolute, degrees of accessibility. Thus, in all languages zeroes indicate a higher degree of accessibility than pronouns, but not all languages allow cross-sentential zero anaphora. Accessibility theory applies to all genres/registers (see Ariel, 2007). In fact, because accessibility related discourse patterns are so common in diverse registers and

Finally, accessibility theory is universal (see Ariel, 1990: 4.2), although not all languages have exactly the same set of referring expressions, and even when these seem to be identical, they may rate differently on the three coding principles (informativity, rigidity, and attenuation; cf. English and Japanese pronouns). Provided they are comparable, all referring expressions are predicted to indicate the same relative, though not absolute, degrees of accessibility. Thus, in all languages zeroes indicate a higher degree of accessibility than pronouns, but not all languages allow cross-sentential zero anaphora. Accessibility theory applies to all genres/registers (see Ariel, 2007). In fact, because accessibility-related discourse patterns are so common in diverse registers and languages, we can account for various cross-linguistic grammaticization paths: for example, the recurrent creation of verbal person agreement markers for first/second persons, but not for third persons (via the cliticization of the high accessibility markers used for the very salient speaker and addressee; see Ariel, 1998b, 2000), as well as universal constraints on the use of resumptive pronouns (see Ariel, 1999). At the same time, accessibility constraints may be violated to create special pragmatic effects (e.g., Jamie the old lady (SBC: 002) is too low an accessibility marker when used by Jamie's husband in her presence).

See also: Anaphora, Cataphora, Exophora, Logophoricity; Anaphora Resolution: Centering Theory; Cohesion and Coherence; Context and Common Ground; Coreference: Identity and Similarity; Definite and Indefinite Articles; Demonstratives; Discourse Anaphora; Pronouns.

Bibliography

Ariel M (1985a). Givenness marking. Ph.D. thesis, Tel-Aviv University.
Ariel M (1985b). 'The discourse functions of Given information.' Theoretical Linguistics 12, 99–113.
Ariel M (1988a). 'Referring and accessibility.' Journal of Linguistics 24, 65–87.
Ariel M (1988b). 'Retrieving propositions from context: why and how.' Journal of Pragmatics 12, 567–600.
Ariel M (1990). Accessing noun phrase antecedents. London: Routledge.
Ariel M (1994). 'Interpreting anaphoric expressions: a cognitive versus a pragmatic approach.' Journal of Linguistics 30(1), 3–42.
Ariel M (1996). 'Referring expressions and the +/− coreference distinction.' In Fretheim T & Gundel J K (eds.) Reference and referent accessibility. Amsterdam: John Benjamins. 13–35.
Ariel M (1998a). 'The linguistic status of the "here and now".' Cognitive Linguistics 9, 189–237.
Ariel M (1998b). 'Three grammaticalization paths for the development of person verbal agreement in Hebrew.' In Koenig J-P (ed.) Discourse and cognition: bridging the gap. Stanford: CSLI/Cambridge University Press. 93–111.
Ariel M (1999). 'Cognitive universals and linguistic conventions: the case of resumptive pronouns.' Studies in Language 23, 217–269.
Ariel M (2000). 'The development of person agreement markers: from pronouns to higher accessibility markers.' In Barlow M & Kemmer S (eds.) Usage-based models of language. Stanford: CSLI. 197–260.
Ariel M (2001). 'Accessibility theory: overview.' In Sanders T, Schilperoord J & Spooren W (eds.) Text representation: linguistic and psycholinguistic aspects. Amsterdam: John Benjamins. 27–87.
Ariel M (2007). 'A grammar in every register? The case of definite descriptions.' In Hedberg N & Zacharsky R (eds.) Topics in the grammar-pragmatics interface: papers in honor of Jeanette K. Gundel. Amsterdam: John Benjamins. 265–292.
Chafe W L (1994). Discourse, consciousness, and time: the flow and displacement of conscious experience in speaking and writing. Chicago: The University of Chicago Press.
Chafe W L (ed.) (1980). The pear stories: cognitive, cultural, and linguistic aspects of narrative production. Vol. III of Freedle R (ed.) Advances in discourse processes. Norwood, NJ: Ablex.
Clancy P M (1980). 'Referential choice in English and Japanese narrative discourse.' In Chafe W L (ed.) The pear stories: cognitive, cultural, and linguistic aspects of narrative production. Vol. III of Freedle R (ed.) Advances in discourse processes. Norwood, NJ: Ablex. 127–202.
Clark H H & Marshall C (1981). 'Definite reference and mutual knowledge.' In Joshi A K, Webber B L & Sag I A (eds.) Elements of discourse understanding. Cambridge: Cambridge University Press. 10–63.
Dixon R M W (1994). Ergativity. Cambridge: Cambridge University Press.
Du Bois J W (1980). 'Beyond definiteness: the trace of identity in discourse.' In Chafe W L (ed.) The pear stories: cognitive, cultural, and linguistic aspects of narrative production. Vol. III of Freedle R (ed.) Advances in discourse processes. Norwood, NJ: Ablex. 203–274.
Du Bois J W, Chafe W L, Meyer C & Thompson S A (2000). Santa Barbara corpus of spoken American English, part 1. Philadelphia: Linguistic Data Consortium.
Du Bois J W, Chafe W L, Meyer C, Thompson S A & Nii M (2003). Santa Barbara corpus of spoken American English, part 2. Philadelphia: Linguistic Data Consortium.
Fretheim T & Gundel J K (eds.) (1996). Reference and referent accessibility. Amsterdam: John Benjamins.
Givón T (ed.) (1983). Topic continuity in discourse: a quantitative cross-language study. Amsterdam: John Benjamins.
Heim I (1982). The semantics of definite and indefinite noun phrases. Ph.D. diss., University of Massachusetts.
Sanford A J & Garrod S C (1981). Understanding written language. Chichester: John Wiley and Sons.
Sperber D & Wilson D (1986/1995). Relevance. Oxford: Blackwell.
Toole J (1996). 'The effect of genre on referential choice.' In Fretheim T & Gundel J K (eds.) Reference and referent accessibility. Amsterdam: John Benjamins. 263–290.

Acquisition of Meaning by Children

E V Clark, Stanford University, Stanford, CA, USA

© 2006 Elsevier Ltd. All rights reserved.

How do children assign meanings to words? This task is central to the acquisition of a language: words allow for the expression of the speaker’s intentions, they combine to form larger constructions, and the conventional meanings they have license their use for making references in context. Without them, there is no language. In the acquisition of meaning, children must solve the general mapping problem of how to line up word forms with word meanings. The forms are the words they hear from other (mainly adult) speakers. The meanings they must discern in part from consistencies in speaker usage in context from one occasion to the next and in part from inferences licensed by the speaker on each occasion. Possible meanings for unfamiliar words, then, are built up partly from children’s conceptual representations of events and partly from the social interactions at the heart of adult-child conversation. One critical task for children is that of working out the conventional meanings of individual words (e.g., cup, team, friend, truth). Yet, doing so is not enough: syntactic constructions also carry meanings that combine with the meanings contributed by the

actual words used (causative constructions, as in They broke the cup or The boy made the pony jump; the locative construction, as in She put the carving on the shelf; the resultative construction, as in He washed the floor clean). However, children start mapping word meanings before they begin combining words. Languages differ in how they lexicalize information – how they combine particular elements of meaning into words – and in the kinds of grammatical information that have to be expressed. They may package information about events differently; for example, combining motion and direction in a single word (depart) or not (go + toward), combining motion and manner (stroll), or not (walk slowly). They also differ in the grammatical distinctions made in each utterance. Some always indicate whether an activity was completed; others leave that to be inferred. Some always indicate whether the speaker is reporting from direct observation, or, for example, from the report of someone else. Some indicate whether object-properties are inherent or temporary. The grammatical distinctions that languages draw on vary, as do the ways in which they lexicalize information about objects and events. Mapping meanings onto words is not simply a matter of equating meanings with conceptual categories. Children have to select and organize conceptual information as they


work out what the conventional meanings are for the words they are learning. How do children arrive at the meanings they first assign to unfamiliar words? How do they identify their intended referents? And how do they arrive at the relations that link word meanings in different ways? The general conversational context itself serves to identify relevant information on each occasion for children trying to work out the meaning of an unfamiliar word. Adult language use presents them with critical information about how words are used, their conventional meanings, and the connections among words in particular domains.

Conventionality and Contrast

Adult speakers observe two general pragmatic principles when they converse. First, they adhere to the conventions of the language they are speaking and in so doing make sure their addressees identify the meanings intended in their utterances. The principle of conventionality takes the following form: 'For certain meanings, there is a form that speakers expect to be used in the language community.' So if there is a conventional term that means what the speaker wishes to convey, that is the term to use. If the speaker fails to use it or uses it in an unusual way, that speaker risks being misunderstood. For conventions to be effective, conventional meanings must be given priority over any nonconventional ones. The second general principle speakers observe is that of contrast: 'Speakers take every difference in form to mark a difference in meaning.' When speakers choose a word, they do so for a reason, so any change in word choice means they are expressing a different meaning. These two principles work hand-in-hand with the Cooperative principle in conversation and its attendant maxims of quality (be truthful), quantity (be as informative as required), relation (make your contribution relevant), and manner (avoid ambiguity; Grice, 1989). Acting in a cooperative manner demands that one observe the conventions of the language in order to be understood. At the same time, if there is no conventional term available for the meaning to be expressed, speakers can coin one, provided they do so in such a way that the addressee will be able to interpret the coinage as intended (Clark, 1993).

In Conversation

Adults talk to young children from the very start, and what they say is usually tied closely to specific objects and activities. This feature of conversation presents young infants with opportunities to discern different

intentions, marked by different utterances from early on. Infants attend to adult intentions and goals as early as 12 months of age. They show this, for example, by tracking adult gaze and adult pointing toward objects (e.g., Carpenter et al., 1998), so if they are also attentive to the routine words and phrases used on each type of occasion, they have a starting point for discerning rational choices among contrasting terms and gestures. Consider the general conditions for conversational exchange: joint attention, physical co-presence, and conversational co-presence. Adults observe these conditions and indeed impose them, as they talk to very young children. They work to get 1- and 2-year-olds to attend, for instance when planning to tell them about an unfamiliar object, and only then do they talk to them about whatever object or event is visibly present (Clark, 2001). By first establishing joint attention, adults set young children up to identify and then to help add to common ground. Children can do this by ratifying offers of new words by repeating them or else indicating in some other way that they have taken up an unfamiliar term (Clark, 2003). When adults offer unfamiliar words, they do so in the conversational context; that is, with children who are already attending to whatever is in the locus of joint attention. This feature, along with any familiar terms that are co-present in the conversation, allows children to make a preliminary mapping by identifying the intended referent, whether it is an object or an action (Tomasello, 2002). In effect, the conditions on conversation narrow down the possible meanings that young children might consider for a new term to whatever is in the current joint focus of attention. However, adults do more in conversation. They accompany their offers of unfamiliar words with additional information about the intended referent on that occasion and about how the target word is related to other terms in the same semantic field. Among the semantic relations adults commonly offer are inclusion (An X is a kind of Y), meronomy or partonomy (An X is part of Y), possession (X belongs to Y), and function (X is used for Y; Clark and Wong, 2002). After offering one term, adults often offer others that happen to contrast in that context, so a dimensional term like tall may be followed up by short, wide, narrow, and long (Rogers, 1978). In fact, the meanings of words for unfamiliar actions may also be inferred in part from their co-occurrence with terms for familiar objects affected by those actions, and the meanings of words for unfamiliar objects may be inferred in part from the verbs with which the nouns in question occur (e.g., Goodman et al., 1998; Bowerman, 2005). All this information offers ways for children


to link new terms to any relevant words they already know. Children learn from child-directed speech about general properties of the lexicon – taxonomic relations, nonoverlapping categories within levels, opposites, overlaps in meaning (through hierarchical connections) vs. in reference, and so on. In short, adults are the experts in providing the conventional terms used for specific meanings in the speech community. The novices, children, ask them innumerable What’s that? questions from around age 2;0–2;6 on and treat them as reliable sources for how to talk about new things (e.g., Diesendruck and Markson, 2001). Moreover, when young children make errors, adults frequently check up, through side sequences and embedded corrections, on what they intended to say, and so present children with the conventional forms to aim for (Chouinard and Clark, 2003).

Making Inferences

When children hear a new term for some object or activity, they can infer in context that the term probably applies to the object or activity to which they are attending. However, the information that adults often follow up with allows children to make more detailed inferences about the candidate meaning. Mention of class membership – for example, A sparrow is a bird – tells them that they can add the term sparrow to the set of terms they already know for birds, perhaps just chicken and duck. Comments on the size, characteristic song, or flight each allow further inferences about how sparrows differ from ducks and chickens. What evidence is there that young children take in such information? In spontaneous conversations, they give evidence of attending to what adults say in several ways. First, they repeat new terms in their next conversational turn, either as single words or embedded in a larger utterance; second, they acknowledge the adult offer with forms like yeah, uh-huh, and mmh; and third, they continue to talk about the relevant semantic domain (Clark, 2004). Children's readiness to make inferences from added information offered by adults has also been examined in word-learning experiments. In one study, children aged just 2 years old were taught words for two sets of objects (A and B) that were similar in appearance and had the same function. After teaching the first word for the first set (A), the experimenter introduced the second set of objects while saying just once, ''Bs are a kind of A.'' He then proceeded to teach the second word, B. Children were then tested by asking them to find all the As and then all the Bs. For the first request, they typically selected As; for the second,

they consistently (and correctly) picked only Bs (Clark and Grossman, 1998). In short, the one statement of an inclusion relation was enough for even young 2-year-olds to make use of it in this task. In another condition, again teaching two new words for two sets that resembled each other, children inferred that there could be an inclusion relation but had no way to tell which way it should go, so some included A in B, and some B in A. Children rely on contrast in context to make inferences about the most probable reference for a newly introduced word. For example, if they already know what the object they are attending to is called, they are more likely to infer that a new term denotes a subordinate, a part, or some other property of it (Taylor and Gelman, 1989). This propensity was exploited directly in studies of whether children could decide in context whether a new word was intended to denote an object or an activity. Young 2-year-olds were presented with the same object doing different actions, with one action labeled with the new term, or else several objects, one labeled with the new term and all doing the same action. The children readily inferred that the new word denoted an activity in the first case and an object in the second (e.g., Tomasello, 2002). Young children are also able to discern the intended from the accidental. When shown various actions being demonstrated, infants aged 18 months imitated intended actions (marked by utterances like 'There') more frequently than unintended ones (signaled by utterances like 'Oops'). By age 2, young children know to ignore errors in wording, for example, and attend only to the final formulation of what someone is saying. In one study, for example, children were taught a word for a set of objects; then the experimenter exclaimed, ''Oh, I made a mistake: these aren't As, they're Bs'' and proceeded to teach the word B in place of the earlier A. When tested, even children who were just 2 years old knew that they did not know what A, the first word, meant (e.g., Clark and Grossman, 1998). All the inferences presented so far have been overt inferences about unfamiliar word meanings, made on the spot by children exposed to the new words. Yet, although adults make clear offers of new words, marking them as new by introducing them in formulaic deictic frames (e.g., This is a . . .), with utterance-final stress, many of the other words they use will be unfamiliar to very young children. How do children assign meanings to all those words? The answer lies in the covert use of Roger Brown's (1958) 'original word game.' Basically, the child notices an unfamiliar word, makes inferences in context about its probable meaning and acts on that, and then adjusts those


inferences in light of the adult's responses. Consider these scenarios by way of illustration:

(a) Young child watching parent in the kitchen, with several drink containers on the counter
Mother (to older sibling): Hand me that mug, will you?
(Child, wondering what a mug is, watches sibling pick up a mug)
Mother: Thanks
(Child infers for now that mug denotes something that has a handle, is a solid color, and is made of ceramic)

Sometimes, the inferences that children make are informed slightly more directly by the parent's direct responses, as in (b).

(b) Young child holding two plastic animals, a cat and a dog
Father: Can you give me the spaniel?
(Child, uncertain what spaniel means, holds out the cat)
Father: No, the spaniel please.
(Child infers that spaniel must refer to a kind of dog rather than a kind of cat, and so hands over the plastic dog instead)

In both cases, the child makes preliminary or tentative inferences that can then be adjusted or changed in light of adult follow-up utterances, further exposures in other contexts, and additional, often explicit information about inclusion, parts, properties, or functions. Of course, inferences like these can also be made about terms for actions, relations, and states, as well as about those for objects, parts, and properties.

Pragmatics and Meaning In the conversational exchanges considered so far, adult and child both follow the cooperative principle characterized by Grice (1989), as seen by their observation of joint attention, physical co-presence, and conversational co-presence. In addition, each participant in the exchange must add to common ground and keep account of the common ground that has been accumulated so far (H. Clark, 1996). All of this requires that speakers keep careful track of the intentions and goals being conveyed within an exchange (Tomasello, 1995; Bloom, 2000). Infants are attentive to nonlinguistic goals very early. For example, if 14-month-olds are shown an unusual action that achieves a goal – for example, an adult bending down to touch a panel switch with her forehead – they imitate it. If 18-month-olds watch an adult try and fail to place a metal hoop on a prong, the infants will produce the action successfully, even though they have never seen it completed (Meltzoff, 1995). That is, infants infer that the adult intended to

turn on the light or intended to hang up the hoop. Intentions are what is critical, Meltzoff demonstrated, not just observation of the relevant actions, because infants do not re-enact these events when the actions are performed by a mechanical hand. In much the same way, infants attend to the words that adults use. Upon hearing a word, they infer that the speaker is referring to the entity physically present in the locus of joint attention. If instead the speaker produces a different word, they infer that the speaker is now referring to something else and therefore has a different goal in speaking. That is, each linguistic expression chosen indexes a different intention, thus exemplifying the speaker's reliance on contrast, as well as on conventionality (Clark, 1993). This recognition then licenses young children to use words to express their intentions and in this way to convey specific goals. Adult usage provides the model for how to do so within conversational exchanges. Infants also grasp quite early that the words used to express certain meanings are fixed and conventional. For example, they know that adults who wish to refer to a squirrel use the term 'squirrel' or to refer to a sycamore tree use the term 'sycamore,' and so on. As a result, when they notice adults who fail to use 'squirrel' when looking at a squirrel, but instead use another expression, they can readily infer that the speaker must therefore mean something else. In effect, young children, just like adults, assume that if the speaker intends to talk about a squirrel, he will use the conventional term for it. If instead, he uses something else, then he must intend to convey some other meaning. As a result, in situations where children already know terms for some of the objects they can see, they expect the adult to use a familiar term for any familiar object. If the adult instead produces an unfamiliar term, in the presence of an unfamiliar object, they will infer that he intended to refer to the object for which they do not yet have a term. So they use contrast, together with the assumption that conventional terms always take priority, to interpret the speaker's intentions on such occasions. The result is that they consistently assign unfamiliar terms to as-yet unnamed objects or actions. This pragmatic strategy for interpreting intentions and thereby making a first assignment of meaning to an unfamiliar word helps young children in many settings. Take the case of an adult looking at a single familiar object that is well known to the child. The adult, clearly talking about that object, does not use the expected conventional term. What can the child infer from that? There are two common options: (1) the unfamiliar expression denotes a superordinate or subordinate category,


or (2) it denotes a part or property of the familiar object. Then, the remainder of the utterance can typically provide the child with important clues about the correct inference. For example, production of a familiar term for a known object is typically followed by a part term accompanied by a possessive pronoun (as in his ear), whereas such expressions as is a kind of or is a are associated with assignments to class membership in a superordinate category (Clark and Wong, 2002; Saylor and Sabbagh, 2004). Use of a verb like looks or feels (as in it looks smooth, it feels soft) often accompanies the introduction of property terms, and when the unfamiliar term is introduced before the familiar one with kind of (a spaniel is a kind of dog), the child readily infers that the new term, here spaniel, designates a subordinate category. Finally, children as young as age 2 rely on a combination of syntax cues and physical co-presence in identifying generic noun phrases; for example, when asked something like What noise do dogs make? with just one dog in sight. What these findings indicate is that even very young children are highly attentive to the locus of joint attention and to whatever is co-present physically and conversationally. When one adds in whatever linguistic knowledge children have already built up about word meanings and constructions, it becomes clear that they have an extensive base from which to make inferences about possible, plausible meanings of unfamiliar words. This holds whether the words are presented explicitly as ‘new’ by adult speakers or whether children simply flag them en passant as unfamiliar and therefore in need of having some meaning assigned. At the same time, young children may have a much less firm grasp on the meanings of many of their words than adult speakers, and incidental or even irrelevant pragmatic factors may affect their interpretations and responses. Take the case of the Piagetian conservation task where the experimenter ‘checks up’ on the 5- or 6-year-old near-conserver’s answer by asking, for the second time, whether the amount that has just been transferred to a new container or transformed into a different array ‘‘is still the same.’’ Children on the verge of conserving typically change their initially correct answers from ‘yes’ to ‘no’ at this point. They do so because, pragmatically, asking the same question a second time signals that the initial answer was unsatisfactory (Siegal, 1997).

Another Approach

In another approach to the acquisition of lexical meaning, some researchers have proposed that the task is so complex for young children that they must

start out with the help of some a priori constraints. These constraints are designed to limit the kinds of meanings children can attribute to new words. What form would these constraints take, and what evidence is there for them? Among the constraints proposed are whole object – 'Words pick out whole objects' – and mutual exclusivity: 'Each referent is picked out by just one word' (e.g., Markman, 1989). The whole object constraint predicts that young children should assume that any unfamiliar word picks out a whole object and not, for example, a part or property of that object. The mutual exclusivity constraint predicts that young children should assume that an unfamiliar word must pick out something other than whatever has a name that is already known to the child. So this constraint predicts that children will systematically reject second terms they hear apparently applied to an already labeled referent, as well as fail to learn second terms. The predictions from these and other constraints have been tested in a variety of word-learning experiments where the target referents are objects. In fact, the whole object and mutual exclusivity constraints apply only to words for objects, so they would have infants treat all unfamiliar words as if they designated only objects and never actions, relations, or properties. How do such constraints work, given that they conflict with many properties of word meanings? For example, mutual exclusivity would cause children to not learn inclusion relations in taxonomies, because they would need to apply two or more terms to the same referent category in learning that an X can be called a dog, specifically a subtype called a spaniel, and that a dog is also a kind of animal. The whole object constraint would cause children to not learn terms for parts and properties. It would militate against children learning any terms for activities or relations. One could propose that such constraints apply only in the early stages of acquisition, after which they are overridden. However, then one has to specify what leads to their being overridden; in other words, what the necessary and sufficient conditions are for each constraint to be dropped so children can start to learn words for activities and relations, say, from adult usage, or words for parts and properties, as well as words for objects. Under this view of meaning acquisition, children could start out observing the various constraints and then drop each one at a certain stage in development so as to be able to learn other kinds of meanings that were until then blocked by the constraints. In short, children should at first ignore much of what their parents say about words and word meanings


and reject second labels whenever they are offered to mark a different perspective, for example. They should also look for words only to pick out objects, mistakenly treating any that might, in context, seem to designate categories of actions or relations as words for objects instead. Is this a realistic picture of development? No, because it calls for selectively ignoring or rejecting a large amount of what adults do with language as they talk about the world to their children, offer them words for objects and events in the locus of joint attention, and provide extensive commentary on parts, properties, motion, and functions associated with specific category members. The constraints approach ignores conditions imposed on conversational exchanges, such as joint attention, and physical and conversational co-presence, and what they contribute to assigning meaning. It also conflicts with adult usage, which offers a range of perspectives on specific objects and events. A piece of fruit can be just that, fruit, or it can be an apple, dessert, or a snack, depending on the perspective chosen (Clark, 1997). Yet, these factors must all be taken into account in both designing and interpreting studies of meaning acquisition.

Sources of Meanings

Children draw on conceptual categories already known to them and on information offered in context, both nonlinguistic and linguistic, when they assign a first meaning to new words. Infants build up and organize conceptual categories of the objects, relations, and events they observe months before they try to use words to evoke the relevant categories. As they assign candidate meanings, they rely on these conceptual categories to connect category instances and words as they start in on language (Slobin, 1985). However, because languages differ, children learn, for purposes of talking, to attend automatically to some aspects of events and ignore others; for example, whether an action is complete or not or whether the speaker witnessed an event for herself or simply heard about it. It is therefore important to distinguish between conceptual knowledge about events and the knowledge speakers draw on when they talk about those events (Slobin, 1996). Children try to make sense of what the adult wants. This means relying on any potentially useful source of information for interpreting and responding to adult utterances. What children know about the conceptual categories that appear to be at the focus of joint attention seems to provide initial strategies for coping when they do not yet understand all the words. The physical and conversational contexts, with joint attention,

identify the relevant 'space' in which to act. This holds just as much for responding to half-grasped requests as for direct offers of unfamiliar words. Children attend to what is physically present, to any familiar words, and to any conceptual preferences. These preferences may include choosing greater amounts over lesser ones, assuming that the first event mentioned is the first to occur, and exchanging one state of affairs for another (Clark, 1997). Such coping strategies may be consistent with the conventional meanings of certain words, so children will appear to understand them when in fact they do not. The match of coping strategies and meanings offers one measure of complexity in acquisition: Matches should be simpler to acquire than cases of mismatch. Children can draw on what they already know about objects and events, relations, and properties for their world so far. Their current knowledge about both conceptual categories and about their language at each stage offers potential meanings, in context, assignable to unfamiliar words. These preliminary meanings can be refined, added to, and reshaped by adult usage on subsequent occasions. This way, children learn more about the meanings that abut each word, the contrasts relevant in particular semantic domains, and the number of terms in a domain that have to be distinguished from one another. To succeed in this effort, children have to identify consistent word uses for specific event-, relation-, and object-types. They have to learn what the conventions are for usage in the speech community where they are growing up (e.g., Eckert, 2003).

Summary

As children learn new words, they rely on what they know so far – the conceptual and linguistic knowledge they have already built up – to assign them some meaning in context. These initial meanings draw equally on their own conceptual categories and on adult patterns of word use within the current conversational exchange. In effect, joint attention, along with what is co-present physically and conversationally, places pragmatic limits on what the meaning of an unfamiliar word is most likely to be. In addition, adults often provide further information about the referent object or action, linking the word just offered to other words for relevant properties and actions, and thereby situating the new word in relation to terms already known to the child. This framing by adults for new word meanings licenses a variety of inferences by the child about what to keep track of as relevant to each particular word meaning (Clark, 2002). Adults here are the experts and constitute


both a source and resource for finding out about unfamiliar word meanings.

See also: Cognitive Semantics; Context and Common Ground; Cooperative Principle; Inference: Abduction, Induction, Deduction; Lexical Fields; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Lexicon: Structure; Pragmatic Determinants of What Is Said; Psychology, Semantics in; Sense and Reference.

Bibliography

Bloom P (2000). How children learn the meanings of words. Cambridge, MA: MIT Press.
Bowerman M (2005). 'Why can't you "open" a nut or "break" a cooked noodle? Learning covert object categories in action word meanings.' In Gershkoff-Stowe L & Rakison D (eds.) Building object categories in developmental time. Mahwah, NJ: Lawrence Erlbaum.
Brown R (1958). Words and things. New York: Free Press.
Carpenter M, Nagell K & Tomasello M (1998). 'Social cognition, joint attention, and communicative competence from 9 to 15 months of age.' Monographs of the Society for Research in Child Development 63(176).
Chouinard M M & Clark E V (2003). 'Adult reformulations of child errors as negative evidence.' Journal of Child Language 30, 637–669.
Clark E V (1993). The lexicon in acquisition. Cambridge: Cambridge University Press.
Clark E V (1997). 'Conceptual perspective and lexical choice in acquisition.' Cognition 64, 1–37.
Clark E V (2001). 'Grounding and attention in the acquisition of language.' In Andronis M, Ball C, Elston H & Neuvel S (eds.) Papers from the 37th meeting of the Chicago Linguistic Society, vol. 1. Chicago: Chicago Linguistic Society. 95–116.
Clark E V (2002). 'Making use of pragmatic inferences in the acquisition of meaning.' In Beaver D, Kaufmann S, Clark B Z & Casillas L (eds.) The construction of meaning. Stanford, CA: CSLI Publications. 45–58.
Clark E V (2003). First language acquisition. Cambridge: Cambridge University Press.
Clark E V (2004). 'Pragmatics and language acquisition.' In Horn L R & Ward G (eds.) Handbook of pragmatics. Oxford: Blackwell. 562–577.
Clark E V & Grossman J B (1998). 'Pragmatic directions and children's word learning.' Journal of Child Language 25, 1–18.
Clark E V & Wong A D-W (2002). 'Pragmatic directions about language use: words and word meanings.' Language in Society 31, 181–212.
Clark H H (1996). Using language. Cambridge: Cambridge University Press.
Diesendruck G & Markson L (2001). 'Children's avoidance of lexical overlap: a pragmatic account.' Developmental Psychology 37, 630–641.
Eckert P (2003). 'Social variation in America.' Publication of the American Dialect Society 88, 99–121.
Goodman J C, McDonough L & Brown N B (1998). 'The role of semantic context and memory in the acquisition of novel nouns.' Child Development 69, 1330–1344.
Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Markman E M (1989). Categorization and naming in children: problems of induction. Cambridge, MA: MIT Press.
Meltzoff A N (1995). 'Understanding the intentions of others: re-enactment of intended acts by eighteen-month-old children.' Developmental Psychology 31, 838–850.
Rogers D (1978). 'Information about word-meaning in the speech of parents to young children.' In Campbell R N & Smith P T (eds.) Recent advances in the psychology of language. London: Plenum. 187–198.
Saylor M M & Sabbagh M A (2004). 'Different kinds of information affect word learning in the preschool years: the case of part-term learning.' Child Development 75, 395–408.
Siegal M (1997). Knowing children: experiments in conversation and cognition (2nd edn.). Hove, Sussex: Psychology Press.
Slobin D I (1985). 'Crosslinguistic evidence for the language-making capacity.' In Slobin D I (ed.) The crosslinguistic study of language acquisition, vol. 2. Hillsdale, NJ: Lawrence Erlbaum. 1157–1249.
Slobin D I (1996). 'From "thought and language" to "thinking for speaking".' In Gumperz J J & Levinson S C (eds.) Rethinking linguistic relativity. Cambridge: Cambridge University Press. 70–96.
Taylor M & Gelman S A (1989). 'Incorporating new words into the lexicon: preliminary evidence for language hierarchies in two-year-old children.' Child Development 60, 625–636.
Tomasello M (1995). 'Joint attention as social cognition.' In Moore C & Dunham P J (eds.) Joint attention: its origins and role in development. Hillsdale, NJ: Lawrence Erlbaum. 103–130.
Tomasello M (2002). 'Perceiving intentions and learning words in the second year of life.' In Bowerman M & Levinson S C (eds.) Language acquisition and conceptual development. Cambridge: Cambridge University Press. 132–158.


Anaphora Resolution: Centering Theory

A K Joshi, R Prasad and E Miltsakaki, University of Pennsylvania, Philadelphia, PA, USA

© 2006 Elsevier Ltd. All rights reserved.

Anaphora Resolution with Centers of Attention

Anaphora resolution in discourse – a coherent sequence of utterances – is the task or process of identifying the referents of expressions that we use to denote discourse entities, i.e., objects, individuals, properties, and relations that have been introduced and talked about in the prior discourse. The importance of modeling this process cannot be overstated. Computing the meaning of a discourse is commonly understood as partly the process of connecting the information in the upcoming utterance with the information contained in the prior discourse. Before we can do this, however, we need to assign an interpretation to all the elements of the utterance and then to the utterance as a whole. In many cases, the interpretation of some elements in the sentence can only be assigned relative to the prior discourse context – anaphoric expressions comprise one such class of elements.

In early approaches to anaphoric reference in AI and linguistics, the task of anaphora resolution was divided between syntax, which provided filters such as grammatical agreement constraints, and open-ended semantic inference, which drew on, among other things, world knowledge and inference procedures to identify the appropriate referent. However, it was soon recognized that while syntactic constraints did very little to constrain the search for anaphoric referents, the mechanism of open-ended semantic inference was too knowledge intensive and complex – requiring reasoning over the entire space of discourse at once – and therefore computationally infeasible.

In 1977, a different view of anaphora resolution arose out of the work of Barbara Grosz (Grosz, 1977) that rests on a fundamental and singularly important assumption regarding the attentional status of discourse entities: at any given point of the discourse, the discourse participants' attention is centered on a set of entities, a proper subset of all the entities being talked about in the discourse. In addition, for a given utterance, the discourse participants' attention is centered on a singleton entity, and the rest of the utterance makes a predication about this entity. The notion of the center of attention specific to utterances is very similar to the notion of 'topic' in linguistics, where it is defined as what is 'talked about' in the utterance. The

approach for anaphora resolution with this Centering view is that the search for the referents of anaphoric expressions should be restricted to the set of centered entities, the assumption being that in discourse, it is these entities that we are most likely to continue to talk about and refer to with the use of anaphoric expressions. Furthermore, a partial ordering is imposed on the elements of the set, so that some entities are more centered than others. Such a preference ordering on the possible candidate referents for anaphoric expressions significantly simplifies the 'nature' of inference that would be needed and at the same time minimizes the 'amount' of inference. Another significant proposal was that the set of centered entities can be partially determined by the linguistic structure of the utterance itself. The consequences of all these ideas were tremendous, because the view meant that it was possible to set aside, to a significant extent, the role of open-ended inferencing for anaphora resolution and look instead to more easily identifiable surface features of the utterance for at least part of the solution and explanation of the problem.

While Grosz laid out the general framework for the Centering process, her work did not suggest the exact mechanisms whereby the centered entities could be identified. Candy Sidner (1979) extended Grosz's framework by precisely defining the notion of the utterance-based center linguistically and also provided a mechanism for using centers to identify referents of pronouns. Sidner invoked several Centering structures – singleton sets called 'discourse focus' and 'actor focus,' and a set called the 'potential foci,' which can contain one or more elements. The 'discourse focus' is equivalent to the center of the utterance, i.e., the entity about which some predication is made by the utterance. The 'actor focus' is the discourse entity that is predicated as the agent of the event in the utterance. The 'discourse focus' is identified by using a set of rules that refer to the linguistic structure of the utterance as well as the state of the existing data structures when the utterance containing the pronoun is processed. A referent for a pronoun is identified primarily with the actor focus or the discourse focus, unless it is ruled out by some specified criteria, in which case an alternate candidate referent is considered from the set of 'potential foci,' which contains entities other than the two primary foci. A significant aspect of Sidner's work is that she did not rule out the role of inference in pronoun interpretation, but instead only constrained it in nature and amount. The nature of inference needed is different from earlier, open-ended inference systems


in that it only involves checking for contradictions once a candidate referent is chosen by using the structurally determined preference ordering: this allows for a much simpler knowledge base and reasoning procedures. The amount of inference needed is also reduced because of the preference ordering, so as soon as an entity is identified for which no contradictions arise, no other inferencing is needed.
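The control flow this implies can be sketched in a few lines. The following Python fragment is an illustrative simplification, not Sidner's actual algorithm: the candidate ordering is reduced to a flat preference list, and the contradiction check is a toy animacy test standing in for her (unspecified) inference procedures.

```python
# Minimal sketch of preference-ordered pronoun resolution with a
# constrained contradiction check. The candidate ordering and the toy
# 'contradicts' test are hypothetical simplifications of Sidner's model.

def resolve_pronoun(pronoun, actor_focus, discourse_focus,
                    potential_foci, contradicts):
    """Return the first candidate, in preference order, that survives
    the contradiction check; inference stops as soon as one is found."""
    candidates = [actor_focus, discourse_focus] + list(potential_foci)
    for candidate in candidates:
        if candidate is not None and not contradicts(pronoun, candidate):
            return candidate
    return None  # no candidate survives; fall back to other strategies

# Toy contradiction check: animacy agreement only.
def contradicts(pronoun, candidate):
    return pronoun["animate"] != candidate["animate"]

he = {"form": "he", "animate": True}
terry = {"name": "Terry", "animate": True}
sailboat = {"name": "sailboat", "animate": False}
print(resolve_pronoun(he, terry, sailboat, [], contradicts)["name"])  # Terry
```

The point of the sketch is the shape of the computation: candidates are tried in a structurally determined order, and reasoning is limited to vetting one candidate at a time rather than weighing all of them at once.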

Centering Theory: Modeling Local Coherence with Centers of Attention

Centering Theory (CT) arose from the work of Aravind Joshi and Steve Kuhn (Joshi and Kuhn, 1979), where the concepts of the 'center' and 'Centering' were first introduced as a way to specify an almost monadic calculus approach to discourse interpretation. Joshi and Kuhn showed that inferences of a certain class are more easily computed by using a monadic representation for utterances. However, they were also interested in computing the difficulty of deriving the necessary inferences. While not explicitly stated by Joshi and Kuhn, the Centering process was assumed to be a local phenomenon operating over successive utterances. In the meantime, Grosz's work on global and local discourse processing had also been formalized by Grosz and Sidner (1986), and it was possible to place CT in its proper place in a complete theory of discourse processing. Grosz and Sidner provided a framework for discourse structure as a composite of three interacting constituents: a linguistic structure, an intentional structure, and an attentional state. The linguistic structure is determined by the intentional structure and comprises the utterances of the discourse grouped together hierarchically into discourse segments. The attentional state is an abstraction of the discourse participants' center of attention as the discourse unfolds. Each discourse segment is associated with a fixed attentional state relevant to the overall discourse – the global attentional state. A local attentional state is associated with each utterance within the segment. The local attentional state is inherently dynamic and can remain constant or can change from utterance to utterance within the segment. Centering Theory (Grosz et al., 1983, 1995) was proposed as a model of the local attentional state, i.e., of the dynamic attentional state within the discourse segment. Following up on the concerns of Joshi and Kuhn, it explicated more clearly and formally the particular linguistic and attentional state factors that contribute to the ease or difficulty of interpreting a discourse segment. The notion of inferential complexity or difficulty was recast in terms of 'coherence.' The first factor that contributes to coherence was given

as a further explication of Joshi and Kuhn's 'change of center' rule and accounts for the difference in coherence between the discourse segments (1) and (2).

(1a) John went to his favorite music store to buy a piano.
(1b) He had frequented the store for many years.
(1c) He was excited that he could finally buy a piano.
(1d) He arrived just as the store was closing for the day.

(2a) John went to his favorite music store to buy a piano.
(2b) It was a store John had frequented for many years.
(2c) He was excited that he could finally buy a piano.
(2d) It was closing just as John arrived.

Discourse (1) is intuitively more coherent than discourse (2). This difference may be seen to arise from the number of changes in the center. Discourse (1) centers a single individual, 'John,' describing various actions he took and his reactions to them. In contrast, discourse (2) seems to flip back and forth between 'John' and 'the store.' These 'changes in aboutness' or 'changes of centers' make discourse (2) less coherent than discourse (1).

The second observation that CT captures with discourses (1) and (2) establishes the correlation of center changes and the degree of coherence with the linguistic form of the utterances. Both discourses convey the same information but in different ways. They differ not in content or what is said but in expression or how it is said. The variation in 'changes of attentional state' that they exhibit arises from different choices of the way in which they express the same propositional content. The different linguistic choices further engender different inference demands on the hearer or reader, and these differences in inference load underlie certain differences in coherence between them.

In addition to the different linguistic choices pertaining to the realization of the propositional content of the utterance as a whole, CT also identifies different linguistic choices made for realizing particular elements within the propositional content of the utterance. These are choices in referring expression form. Pronouns and definite descriptions are not equivalent with respect to their effect on coherence. CT characterizes the perceived coherence of the use of pronouns and definite descriptions by relating different choices to the inferences they require the hearer or reader to make. In (3), the variations of a discourse illustrate this relationship.

(3a) Terry really goofs sometimes.
(3b) Yesterday was a beautiful day and he was excited about trying out his new sailboat.

(3c) He wanted Tony to join him on a sailing expedition.
(3d) He called him at 6 A.M.
(3e) He was sick and furious at being woken up so early.
(3e′) Tony was sick and furious at being woken up so early.
(3f) He told Terry to get lost and hung up.
(3g) Of course, he hadn't intended to upset Tony.
(3g′) Of course, Terry hadn't intended to upset Tony.
(3g″) Of course, Terry hadn't intended to upset him.

In discourse (3), it is the use of the pronoun in utterance (3e) that is in question. While we can tell that the pronoun He refers to 'Tony,' the use of the pronoun here is potentially confusing. CT claims that this is because until utterance (3d), 'Terry' has been the 'center of attention' and therefore the most likely referent of the pronoun. This claim rests on the assumption that hearers expect speakers to continue talking about the entity that is in the 'center of attention.' The confusion results because we tend to assign the reference of the pronoun to the center of attention as soon as we encounter it but have to backtrack (a phenomenon called 'garden path') when we process the rest of the sentence and find something that contradicts our assumption. In this particular example, we backtrack when we get to the word sick and, from the prior utterances in the discourse, reason that it must be 'Tony' and not 'Terry' who is sick. As the careful reader will have noticed, the assumed preferences for determining the referents of pronouns in CT are reminiscent of Sidner's model. We return to this comparison at the end of this section, where we discuss the relation between anaphora resolution and Centering theory.

The confusion arising from (3e) is removed if the pronoun is replaced with the full noun phrase 'Tony,' as shown in (3e′). The conjecture in CT, therefore, is that when the center of attention shifts to another entity, the form of referring expression used to denote the new centered entity has consequences for the processing load required for interpreting the utterance. A pronoun used to refer to the new centered entity increases the processing load, because it causes backtracking from the interpretation of the old centered entity and thus from the interpretation of the utterance itself. A full noun phrase, on the other hand, shifts the center of attention before the rest of the utterance is processed and therefore entails less processing.

The three variants (3g), (3g′), and (3g″) provide an illustration of yet another type of difference in coherence due to the form of referring expression. This arises when multiple entities are talked about from one utterance to the next. By the time (3f) is processed, the center has shifted from 'Terry' to 'Tony,'

so that in (3g) we expect 'Tony' to be the center of attention. This expectation is borne out in (3g), since 'Tony' is indeed mentioned again. However, what makes this sentence very odd and hard to process is that 'Terry' is also mentioned in (3g), but while the centered 'Tony' is referred to with a full noun phrase, the noncentered 'Terry' is referred to with a pronoun. This increased processing is reduced when a full noun phrase is used for 'Terry' instead of the pronoun, as in (3g′) or (3g″), so that we are able to shift the center before processing the rest of the utterance, thus avoiding any backtracking. The type of coherence variation found in these utterances is due to the fact that both the centered entity in (3f) and another entity are mentioned again in (3g) and its variants, but in (3g) it is the noncentered entity from (3f) that is referred to with a pronoun.

CT provides a set of definitions, constraints, and rules to formalize the three-way relationship discussed above, i.e., the relationship between attentional state, the degree of coherence, and linguistic form (for the realization of full propositional content as well as for the realization of discourse entities). The CT definitions, constraints, and rules are given here.

Definitions:
D1. Each utterance U in a discourse segment is assigned a set of forward-looking centers, Cf(U), where centers are discourse entities realized in the utterance.
D2. Each utterance other than the segment-initial utterance is assigned a single backward-looking center, Cb(U).
D3. The backward-looking center of utterance Un+1 connects with one of the forward-looking centers of Un.
D4. The elements of Cf(Un) are partially ordered to reflect relative prominence or salience in Un. In English, the Cf is ordered according to grammatical role.
D5. The more highly ranked an element of Cf(Un), the more likely it is to be Cb(Un+1).
D6. The most highly ranked element of Cf(Un) is called the preferred center, Cp(Un).
D7. A transition relation holds between each utterance pair Un and Un+1 in a segment. There are four types of transitions, which describe center continuation, center retention, and two types of center shifting. The transitions are shown in Table 1.

Constraints:
C1. There is precisely one backward-looking center Cb(Un).


C2. Cb(Un+1) is the highest-ranked element of Cf(Un) that is realized in Un+1.

Constraint C1 says that there is one central discourse entity that the utterance is about. Constraint C2 states that the ranking or ordering of the forward-looking centers in Un determines which of them realized in Un+1 will become the backward-looking center of Un+1.

Rules:
Rule 1. If some element of Cf(Un) is realized as a pronoun in Un+1, then so is Cb(Un+1).
Rule 2. With respect to Table 1, sequences of the CONTINUE transition are preferred to sequences of the RETAIN transition, which are preferred to sequences of the SMOOTH-SHIFT transition, which are preferred to sequences of the ROUGH-SHIFT transition.

Rule 1 is often called the 'Pronoun Rule.' It is important to note that the inference load due to Rule 1 is not part of the inference load characterized by the transitions. Rule 1 is thus independent of the transitions. This independence of Rule 1 is an important consideration when one thinks of the relation between CT and anaphora resolution. The inference load due to Rule 1 can be regarded as a binary measure that simply states whether or not Rule 1 has been violated. With this rule, we can now explain the varying degrees of coherence for utterances (3g), (3g′), and (3g″) in discourse (3). The Centering analysis for this discourse is shown in Table 2. After 'Tony' is established as the center (the Cb) in (3e), this center continues in (3f), but with the reintroduction of 'Terry' as a potential center. In (3g), both 'Tony' and 'Terry' are mentioned, but since 'Tony' is higher ranked than 'Terry' in (3f), it is 'Tony' that is retained as the Cb in (3g). However, this utterance creates a Rule 1 violation, because the Cb, 'Tony,' is not realized with a pronoun, whereas 'Terry,' which is not the Cb, is. The only difference between (3g) and (3g′ and 3g″) is that the latter do not violate Rule 1, as the transitions remain the same. The oddness of (3g) is therefore explained by Rule 1.

Table 1 Centering transitions

                          Cb(Ui+1) = Cb(Ui) OR Cb(Ui) undefined    Cb(Ui+1) ≠ Cb(Ui)
Cb(Ui+1) = Cp(Ui+1)       CONTINUE                                 SMOOTH-SHIFT
Cb(Ui+1) ≠ Cp(Ui+1)       RETAIN                                   ROUGH-SHIFT

Rule 2 provides a formal characterization of the perceived differences in coherence for discourse segments in terms of an ordering on transition sequences. The less frequent the shifts in a discourse, the more coherent it is. Discourse (1) above is characterized by CONTINUE transitions throughout the segment (CONTINUE, CONTINUE, CONTINUE; see Table 3), describing a highly coherent discourse, whereas discourse (2) is characterized by switches between RETAIN and CONTINUE (RETAIN, CONTINUE, RETAIN; see Table 4), describing a less coherent discourse.
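These definitions, constraints, and transitions are concrete enough to implement directly. The following Python sketch computes the Cb and the Table 1 transition for each utterance pair, assuming utterances have already been reduced to ranked Cf lists (highest-ranked entity first); realization details, including Rule 1's pronoun check, are deliberately left out.

```python
# Sketch of the Centering transition computation from D1-D7, C1-C2,
# and Table 1. Utterances are reduced here to ordered Cf lists
# (highest-ranked entity first); everything else is simplified.

def transition(cb_prev, cf_prev, cf_curr):
    """Return (Cb, transition type) for the current utterance."""
    # C2: Cb is the highest-ranked element of the previous Cf
    # that is realized in the current utterance.
    cb = next((e for e in cf_prev if e in cf_curr), None)
    cp = cf_curr[0]                              # D6: preferred center
    same_cb = cb_prev is None or cb_prev == cb   # Cb unchanged or undefined
    if cb == cp:
        return cb, "CONTINUE" if same_cb else "SMOOTH-SHIFT"
    return cb, "RETAIN" if same_cb else "ROUGH-SHIFT"

# Discourse (1): Cf lists for (1a)-(1d), subjects ranked first.
cfs = [["John", "store", "piano"],
       ["John", "store"],
       ["John", "piano"],
       ["John", "store"]]
cb = None
for cf_prev, cf_curr in zip(cfs, cfs[1:]):
    cb, t = transition(cb, cf_prev, cf_curr)
    print(cb, t)        # prints 'John CONTINUE' three times (cf. Table 3)
```

Run on the Cf lists for discourse (1), the sketch reproduces the three CONTINUE transitions of Table 3; swapping in the Cf lists for discourse (2), with store ranked first in (2b) and (2d), reproduces the RETAIN, CONTINUE, RETAIN sequence of Table 4.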

Table 2 Centering analysis for discourse (3)

(3a)  Terry really goofs sometimes.
      Cf = {Terry}, Cp = Terry, Cb = ?, Transition = undef.
(3b)  Yesterday was a beautiful day and he was excited about trying out his new sailboat.
      Cf = {Terry, sailboat}, Cp = Terry, Cb = Terry, Transition = Continue
(3c)  He wanted Tony to join him in a sailing expedition.
      Cf = {Terry, Tony, expedition}, Cp = Terry, Cb = Terry, Transition = Continue
(3d)  He called him at 6 A.M.
      Cf = {Terry, Tony}, Cp = Terry, Cb = Terry, Transition = Continue
(3e)  He was sick and furious at being woken up so early.
      Cf = {Tony}, Cp = Tony, Cb = Tony, Transition = Smooth-shift
(3e′) Tony was sick and furious at being woken up so early.
      Cf = {Tony}, Cp = Tony, Cb = Tony, Transition = Smooth-shift
(3f)  He told Terry to get lost and hung up.
      Cf = {Tony, Terry}, Cp = Tony, Cb = Tony, Transition = Continue
(3g)  Of course, he hadn't intended to upset Tony.
      Cf = {Terry, Tony}, Cp = Terry, Cb = Tony, Transition = Retain
(3g′) Of course, Terry hadn't intended to upset Tony.
      Cf = {Terry, Tony}, Cp = Terry, Cb = Tony, Transition = Retain
(3g″) Of course, Terry hadn't intended to upset him.
      Cf = {Terry, Tony}, Cp = Terry, Cb = Tony, Transition = Retain

Table 3 Centering analysis for discourse (1)

(1a) John went to his favorite music store to buy a piano.
     Cf = {John, store, piano}, Cp = John, Cb = ?, Transition = undef.
(1b) He had frequented the store for many years.
     Cf = {John, store}, Cp = John, Cb = John, Transition = Continue
(1c) He was excited that he could finally buy a piano.
     Cf = {John, piano}, Cp = John, Cb = John, Transition = Continue
(1d) He arrived just as the store was closing for the day.
     Cf = {John, store}, Cp = John, Cb = John, Transition = Continue

Table 4 Centering analysis for discourse (2)

(2a) John went to his favorite music store to buy a piano.
     Cf = {John, store, piano}, Cp = John, Cb = ?, Transition = undef.
(2b) It was a store John had frequented for many years.
     Cf = {store, John}, Cp = store, Cb = John, Transition = Retain
(2c) He was excited that he could finally buy a piano.
     Cf = {John, piano}, Cp = John, Cb = John, Transition = Continue
(2d) It was closing just as John arrived.
     Cf = {store, John}, Cp = store, Cb = John, Transition = Retain

Centering Theory and Anaphora Resolution

As stated at the beginning of this article, the main goal of CT is to characterize certain aspects of local coherence. Differences in coherence result from changes in the center of attention, captured by the Centering transitions and transition ordering, and from the different expressions in which centers are realized. In particular, pronouns and definite descriptions engender different inference demands on the hearer. CT, however, is not to be seen as a theory of anaphora resolution. The incorporation of referring expressions in the account of local coherence has led many researchers to use CT as part of anaphora resolution algorithms. This has led to some interesting research; at the same time, it has led to some confusion in the literature associated with CT. The first point to appreciate is that there is undoubtedly a very relevant connection between CT and anaphora resolution. As the careful reader will have deduced, the garden-path effect with the interpretation of the pronouns illustrated in discourse (3) is reminiscent of the preference ordering utilized by Sidner for the reference resolution of pronouns. In Sidner's model, the 'center of attention' is equivalent to the 'discourse focus,' and like Sidner, CT utilizes the preference for the 'center of attention' to continue over successive utterances. The relative preference for the 'actor focus' as the next center of attention is also captured with the 'preferred center' in CT. At first glance, it may seem that Sidner's use of the 'center of attention' to determine the referents of pronouns and CT's use of the same to explain how incorrect referents are assigned to pronouns result in a paradox. But a closer look shows that this is not really so, because like CT, Sidner also allows for garden paths on the referents of pronouns by further invoking inference procedures (albeit unspecified) to check for contradictions. So Sidner's goals and CT's goals are very much alike, in that both assume a similar preference for the 'initial' resolution of pronouns,

which can be contradicted by further information. The difference between the two is that CT goes further to formalize the nature and difficulty of the contradictory inferences in terms of utterance-pair transitions and uses the formal system as a way to compute the degree of coherence of a discourse segment. Anaphora resolution algorithms that want to obviate the need for inference procedures and want to model the preferential rules for pronoun resolution should use the common part underlying the two described models. Sidner's inference rules for computing contradictions should be left out (or at least relegated to another, interacting component), as should the part of CT that deals with the computation of coherence with the transitions and transition orderings. More formally, the common aspects of Sidner's model and CT are captured in CT with (i) the list of forward-looking centers, (ii) the backward-looking center, (iii) the preferred center, and (iv) Rule 1, the 'Pronoun Rule.' These data structures and rules are sufficient to set the initial preference for the referents of pronouns. Furthermore, corpus studies of the form of referring expressions in naturally occurring data have shown that to a large extent speakers adhere to the preference orderings and Rule 1, so that much mileage can be achieved by building these preferences into anaphora resolution algorithms, as Sidner conjectured. However, while some anaphora resolution algorithms have used these very data structures and shown good results, others have used CT in totality, i.e., together with the transitions and transition orderings, to compute the referents of pronouns (for example, the Centering algorithm for pronoun resolution, called the BFP algorithm, in Brennan et al., 1987). In addition to being theoretically misguided, the latter approach also yields contradictory results for the initial preferential resolution of pronouns (Kehler, 1997). An Optimality Theory-based version of the BFP algorithm and a comprehensive overview of Centering, together with a historical development of Centering Theory and its applications, can be found in Beaver (2004).
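The 'common core' just described is small enough to sketch directly. The Python fragment below is one possible, deliberately simplified rendering: the function names and the feature representation are invented for illustration, and everything beyond the Cf list, the Cb, the Cp, and Rule 1 is left out.

# A toy sketch of preference-based pronoun resolution from the shared
# core of Sidner's model and CT; no inference component is modeled.

def resolve_pronoun(pronoun_feats, cf_prev):
    """Return antecedent candidates from Cf(Un), best first (D4/C2).

    cf_prev: list of (entity, feats) pairs ranked by salience in Un.
    pronoun_feats: agreement features of the pronoun in Un+1.
    """
    return [ent for ent, feats in cf_prev
            if all(feats.get(k) == v for k, v in pronoun_feats.items())]

def violates_rule_1(cb, realizations):
    """Rule 1: if any element of Cf(Un) is realized as a pronoun in
    Un+1, then the Cb of Un+1 must be realized as a pronoun too.

    realizations: dict mapping realized entities to 'pronoun' or 'np'.
    """
    any_pronoun = 'pronoun' in realizations.values()
    return any_pronoun and realizations.get(cb) != 'pronoun'

# After (3f), Cf = {Tony, Terry} with Tony ranked highest, so a
# masculine pronoun is preferentially resolved to Tony:
cf_3f = [('Tony', {'gender': 'masc'}), ('Terry', {'gender': 'masc'})]
print(resolve_pronoun({'gender': 'masc'}, cf_3f))   # ['Tony', 'Terry']

# In (3g) the Cb 'Tony' is a full NP while 'Terry' is a pronoun:
print(violates_rule_1('Tony', {'Tony': 'np', 'Terry': 'pronoun'}))  # True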

Unspecified Aspects of Centering

Some parameters and constants in Centering, from the perspectives of both anaphora resolution and local coherence, were left unspecified in the original models. Two of these in particular have led to a great deal of research. The first is the determination of the preference ordering on the list of forward-looking centers, that is, the determination of the relative salience of discourse entities in an utterance. This is crucial for the initial interpretation assignments for pronouns. Cross-linguistic investigation of the


mechanisms that languages use to realize discourse functions like 'topic' shows that different ranking criteria need to be used for different languages. In English, relative salience is largely predicted by grammatical role, as was correctly assumed in CT. Other languages use other mechanisms. In Japanese, which uses the morphemes wa and ga to distinguish topics from subjects and special forms of the verb for marking empathy, topic- and empathy-marked entities are ranked higher than subjects. German uses word order in some syntactic contexts to indicate salience, positioning higher-ranked entities before lower-ranked ones. Other languages on which such research has been conducted include Finnish, Greek, Hindi, Italian, Russian, and Turkish.

The second is the specification of what constitutes the utterance, which in CT is the linguistic locus of the local attentional state. Discourse centers, both backward looking and forward looking, are computed for each utterance. That is, each utterance serves as a center update unit. In attempts to characterize the linguistic encoding of a center update unit, complications arise from complex sentence structures. Up-to-date research on this issue has suggested that complex sentences may project different center update units, depending on their internal structure. In early theoretical work on characterizing the center update unit in Centering, it was suggested that complex sentences be broken into clauses, each of which forms an autonomous center update unit, with the possible exception of relative clauses and complement clauses. Treating adverbial clauses as autonomous center update units predicts that a pronoun in a fronted adverbial clause, as in (4c), is anaphorically dependent on an entity already introduced in the immediately prior discourse and not on the subject of the main clause it is attached to.

(4a) (Jim) Kerni began reading a lot about the history and philosophy of Communism
(4b) but never Øi felt there was anything he as an individual could do about it
(4c) When hei attended the Christian Anti-Communist Crusade school here about six months ago
(4d) Jimi (Kern) became convinced that he as an individual could do something constructive in the ideological battle
(4e) and Øi set out to do it

This view on backward anaphora was also professed in earlier work by Kuno, who asserted that there was no genuine backward anaphora: the referent of an apparent cataphoric pronoun must have appeared in the previous discourse. Empirical data later showed that this view of backward anaphora could not be maintained. Corpus studies have shown that cataphoric pronouns can appear discourse-initially.

Experimental work focusing on complex sentences of the type that includes adverbial clauses has suggested that adverbial clauses are processed as a single unit with the matrix clause. Specifically, native speakers of English tend to interpret the ambiguous subject pronoun in (5) as the groom, i.e., the subject of the preceding clause, even when the adverbial in the second main clause is semantically varied (however, as a result, moreover, then, etc.). This pattern contrasts with the interpretation of the subject pronoun in (6), for which no consistent tendency is identified, indicating that in this case the interpretation of the pronoun is most likely determined by the semantics of the predicates of the main and adverbial clauses and the relation between them.

(5) The groom hit the best man. However, he . . .
(6) The groom hit the best man although he . . .

Other experimental work on the interpretation of a subject pronoun following a complex sentence indicates that referents in subject position in adverbial clauses are not favored for the interpretation of a subsequent pronoun. In (7) and (8), for example, the subject pronoun is interpreted as the conductor, i.e., the referent of the matrix clause, even when the adverbial clause is postposed with respect to the main clause.

(7) After the tenor opened his music score the conductor sneezed three times. He . . .
(8) The conductor sneezed three times after the tenor opened his music score. He . . .

Data such as (5) and (6) would be a challenge for a Centering-based anaphora resolution algorithm that processes one clause at a time, because there is no way of distinguishing between them. At the same time, these data are consistent with Centering and Centering's Pronoun Rule, under the assumption that adverbial clauses are not processed as independent update units. Under this assumption, Centering would predict the pattern observed in (5), (7), and (8). Centering's Pronoun Rule would not make a prediction for (6) with respect to the entities introduced in the main clause, because they belong to the same unit as the pronoun. Additional evidence for treating the entire sentence as a single update unit comes from corpus work exploring the various parameter settings available for Centering and the number of Centering rule violations each of them induces. This type of work suggests that treating the whole complex sentence as a center update unit leads to fewer violations of the Pronoun Rule. Studies of Centering in relative clauses present conflicting results, which need further research to be reconciled. On the one hand are discourses, like (9), that suggest that entities mentioned in relative


clauses, as in (9b), are less salient than entities mentioned in the main clause, as in (9a), as indicated by the subsequent use of a full noun phrase in (9c). In fact, a pronoun used instead of the full noun phrase would probably be interpreted as Mr. Taylor, i.e., the entity in the main clause.

(9a) Mr. Taylori, 45 years old, succeeds Robert D. Kilpatrickj, 64,
(9b) whoj is retiring, as reported earlier.
(9c) Mr. Kilpatrickj will remain a director.
(9d) Hei . . . #Hej . . .

On the other hand are discourses, like (10), that show the opposite pattern from that in (9). Such data have come from work that looked at different types of relative clauses, specifically nonrestrictive clauses and restrictive clauses with a definite or an indefinite head. Complementary patterns in the use of pronouns and definite descriptions show that nonrestrictive clauses and restrictive clauses with an indefinite head pattern alike and form an autonomous (but embedded and accessible) center update unit. In (10), the subject pronoun in (10c) refers, without any garden-path effects, to the subject referent of the preceding relative clause and not to the subject referent of the main clause, indicating that in this case the relative clause probably introduces a new update unit that is accessible to (10c) for center establishment.

(10a) This Mosesi was irresistible to a man like Simkinj
(10b) whoj loved to pity and to poke fun at the same time.
(10c) Hej was a reality-instructor.

Applications of Centering Theory as a Model of Local Coherence

Some research has illustrated the appropriate and correct application of Centering Theory. The four Centering transitions shown in Table 1 define four degrees of coherence within a discourse segment. A textual segment characterized by a sequence of CONTINUE transitions demonstrates the highest degree of coherence and is perceived as a segment focusing on a single entity. Topic retentions and smooth shifts to new topics are captured in the RETAIN and SMOOTH-SHIFT transitions. Indeed, numerous corpus studies have identified CONTINUE, RETAIN, and SMOOTH-SHIFT transitions. As expected, ROUGH-SHIFT transitions are rarely identified in corpora of written text, which presumably maintain a high level of coherence. An exception to this pattern has been observed in texts whose coherence is under evaluation and therefore cannot be assumed; a typical example of such text is student essays. Indeed, in a study of essays written by students, it was shown that an excessive number of ROUGH-SHIFT transitions per

paragraph in students' essays correlated with low essay scores provided by writing experts. A closer analysis of the essays revealed that the incoherence detected by the ROUGH-SHIFT measure was not due to violations of Centering's Pronoun Rule or other infelicitous uses of pronominal forms. The distribution of nominal and pronominal forms over ROUGH-SHIFT transitions revealed that pronominal forms were in fact avoided in ROUGH-SHIFT transitions. This observation indicates that the incoherence found in the student essays was not due to the processing load imposed on the reader to resolve anaphoric references. Instead, the incoherence in the essays was due to discontinuities caused by the introduction of a rapid succession of new, undeveloped topics with no links to the prior discourse. In other words, ROUGH-SHIFTs picked up textual incoherence due to topic discontinuities. Studies such as the one just described are supportive of the formulation of Centering as a model of local discourse coherence. They also show that the Centering model can be used successfully for practical applications, e.g., to improve automated systems of writing evaluation in testing and education. In fact, it has been shown that adding a Centering-based metric of coherence to an existing electronic essay scoring system (the e-rater system developed at the Educational Testing Service) improved the performance of the system by better approximating human expert scores. In addition, a Centering-based system of writing evaluation has exceptional pedagogical value, because the model offers the capability of directing students' attention to specific locations within an essay where topic discontinuities occur. It can illuminate broken topic and focus chains within the text of an essay by drawing the student's attention to the noun phrases playing the roles of Cb's and Cp's. Supplementary instructional comments could guide the student into revising the relevant section by paying attention to topic discontinuities.

See also: Anaphora, Cataphora, Exophora, Logophoricity; Coherence: Psycholinguistic Approach; Cohesion and Coherence; Context; Coreference: Identity and Similarity; Discourse Anaphora; Discourse Domain; Discourse Parsing, Automatic; Discourse Semantics.

Bibliography

Baldwin B F (1995). COGNIAC: a discourse processing engine. Ph.D. diss., University of Pennsylvania.
Beaver D (2004). 'The optimization of discourse anaphora.' Linguistics and Philosophy 27(1), 3–56.
Brennan S E, Friedman M W & Pollard C J (1987). 'A Centering approach to pronouns.' In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, Stanford, CA. 155–162.

Cooreman A & Sanford A (1996). Focus and syntactic subordination in discourse (technical report). Human Communication Research Center, University of Edinburgh.
Di Eugenio B (1996). 'Centering in Italian.' In Walker M A, Joshi A K & Prince E F (eds.) Centering Theory in discourse. New York: Oxford University Press. 115–138.
Givón T (1983). 'Topic continuity in discourse: a quantitative cross-language study.' In Topic continuity in discourse: an introduction. Amsterdam: John Benjamins. 1–42.
Gordon P C, Grosz B J & Gilliom L A (1993). 'Pronouns, names and the Centering of attention in discourse.' Cognitive Science 17(3), 311–347.
Grosz B J (1977). The representation and use of focus in dialogue understanding (technical report no. 151). Menlo Park, CA: SRI International.
Grosz B J & Sidner C L (1986). 'Attention, intentions and the structure of discourse.' Computational Linguistics 12, 175–204.
Grosz B J, Joshi A K & Weinstein S (1983). 'Providing a unified account of definite noun phrases in discourse.' In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Cambridge, MA. 44–50.
Grosz B J, Joshi A K & Weinstein S (1995). 'Centering: a framework for modeling the local coherence of discourse.' Computational Linguistics 21(2), 203–225.
Hudson-D'Zmura S B (1988). The structure of discourse and anaphor resolution: the discourse center and the role of nouns and pronouns. Ph.D. diss., University of Rochester.
Joshi A K & Kuhn S (1979). 'Centered logic: the role of entity centered sentence representation in natural language inferencing.' In Proceedings of the 6th International Joint Conference on Artificial Intelligence, Tokyo. 435–439.
Kehler A (1997). 'Current theories of Centering for pronoun interpretation: a critical evaluation.' Computational Linguistics 23(3), 467–475.
Miltsakaki E (2002). 'Toward an aposynthesis of topic continuity and intrasentential anaphora.' Computational Linguistics 28(3), 319–355.
Miltsakaki E (2004). 'Not all subjects are born equal: a look at complex sentence structure.' In The processing and acquisition of reference. Cambridge, MA: MIT Press.

Miltsakaki E & Kukich K (2004). 'Evaluation of text coherence for electronic essay scoring systems.' Natural Language Engineering 10(1), 25–55.
Poesio M, Stevenson R, Di Eugenio B & Hitzeman J (2004). 'Centering: a parametric theory and its instantiations.' Computational Linguistics 30(3), 309–363.
Prasad R (2003). Constraints on the generation of referring expressions: with special reference to Hindi. Ph.D. diss., University of Pennsylvania.
Prasad R & Strube M (2000). 'Discourse salience and pronoun resolution in Hindi.' In Williams A & Kaiser E (eds.) Penn working papers in linguistics: current work in linguistics, vol. 6, no. 3. 189–208.
Prince E F (1999). 'Subject pro-drop in Yiddish.' In Bosch P & van der Sandt R (eds.) Focus: linguistic, cognitive and computational perspectives. Cambridge: Cambridge University Press. 82–101.
Rambow O (1993). 'Pragmatic aspects of scrambling and topicalization in German.' Paper presented at the Institute for Research in Cognitive Science workshop on Centering Theory in naturally-occurring discourse, University of Pennsylvania, Philadelphia, May 20–28.
Reinhart T (1981). 'Pragmatics and linguistics: an analysis of sentence topics.' Philosophica 27(1), 53–94.
Sidner C L (1979). Toward a computational theory of definite anaphora comprehension in English (technical report no. AI-TR-537). Cambridge, MA: MIT Press.
Strube M & Hahn U (1998). 'Never look back: an alternative to Centering.' In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, Montreal, Quebec. 1251–1257.
Suri L Z, DeCristofaro J D & McCoy K F (1999). 'A methodology for extending focusing frameworks.' Computational Linguistics 25(2), 173–194.
Turan U D (1995). Null vs. overt subjects in Turkish discourse: a Centering analysis. Ph.D. diss., University of Pennsylvania.
Walker M A, Iida M & Cote S (1994). 'Japanese discourse and the process of Centering.' Computational Linguistics 20(2), 193–232.
Walker M A, Joshi A K & Prince E F (1998). Centering Theory in discourse. New York: Oxford University Press.

Anaphora, Cataphora, Exophora, Logophoricity

Y Huang, University of Reading, Reading, UK

© 2006 Elsevier Ltd. All rights reserved.

Defining Anaphora, Cataphora, and Exophora

The term 'anaphora/anaphor/anaphoric' has three distinct senses in contemporary linguistics. In the

first place, it can be used for reference to a relation between two linguistic elements, in which the interpretation of one (called an anaphor) is in some way determined by the interpretation of the other (called an antecedent) (Huang, 2000a; Huang, 2004). Linguistic elements that can be employed to encode an anaphoric relation in this general sense range from gaps (or empty categories) through reflexives to various reference-tracking systems like gender/


class, switch function and switch reference. Second, the term can be used in Chomsky's generative syntax to refer to an NP with the features [+anaphor, −pronominal], as against a pronominal, an NP with the features [−anaphor, +pronominal]. Thus the reflexive in (1) is treated as an anaphor, and the (other-referring) pronoun in (2) as a pronominal in the Chomskyan sense.

(1) Russell1 admired himself1.
(2) Russell1 admired him2.

Third, the term can be used to refer to an anaphor whose antecedent comes earlier, as in (3), as opposed to 'cataphora/cataphor/cataphoric,' where the antecedent comes later, as in (4).

(3) After John1 got promoted, his1 salary went up.
(4) After he1 got promoted, John's1 salary went up.

Both anaphora and cataphora in this sense can be subsumed under the term 'endophora/endophor/endophoric,' referring to the relation in which the anaphor/cataphor and its antecedent are within what is said or written. In contrast, by the term 'exophora/exophor/exophoric' is meant that the 'antecedent' of an anaphor, or more accurately a deictic expression, lies outside what is said or written. This is illustrated in (5).

(5) (Referents in the physical context, and with selecting gestures) He1's not the managing director. He2 is.

NP-Anaphora

In terms of syntactic category, anaphora can be divided into two main categories: NP- (including N̄-) anaphora, and VP-anaphora. In an NP-anaphoric relation, both the anaphor and its antecedent are NPs, and both are potentially referring expressions, as in (6). NP-anaphora can be expressed by gaps, pronouns, reflexives, names, and descriptions. By contrast, in an N̄-anaphoric relation, both the anaphor and its antecedent are an N̄ rather than an NP, and neither is a potentially referring expression, as in (7). Linguistic elements that can be used as an N̄-anaphor include gaps, pronouns and nouns.

(6) Alain Robert1 said that he1 enjoyed scaling the world's skyscrapers with no safety net.
(7) John's brother is an anti-war campaigner, and Bill's Ø is an anti-globalization activist.

Anaphora can be intrasentential, in which case the anaphor and its antecedent occur within a single simplex or complex sentence. It can also be discoursal, in which case the anaphor and its antecedent

cross sentence boundaries (see Huang, 2000a, 2000b for discussion of discourse anaphora). Regarding intrasentential NP-anaphora, three main theoretical approaches can be identified: syntactic, semantic, and pragmatic.

The Syntactic Approach

The central idea underlying a syntactic analysis is that anaphora is largely a syntactic phenomenon, and as such reference must be made to conditions and constraints that are essentially syntactic in nature. This approach is best represented by Chomsky's (1981, 1995) binding theory within the principles-and-parameters theory and its minimalist descendant. Chomsky distinguishes two types of abstract feature for NPs: anaphors and pronominals. An anaphor is a feature representation of an NP which must be referentially dependent and which must be bound within an appropriately defined minimal syntactic domain; a pronominal is a feature representation of an NP which may be referentially dependent but which must be free within such a domain. Interpreting anaphors and pronominals as two independent binary features, Chomsky hypothesizes that one ideally expects to find four types of NP in a language – both overt and nonovert.

(8) Chomsky's typology of NPs
                                Overt                  Empty
a. [+anaphor, −pronominal]      reflexive/reciprocal   NP-trace
b. [−anaphor, +pronominal]      pronoun                pro
c. [+anaphor, +pronominal]      –                      PRO
d. [−anaphor, −pronominal]      name                   wh-trace

Of the four types of overt NP listed above, anaphors, pronominals, and r[eferential]-expressions are subject to binding conditions A, B, and C, respectively.

(9) Chomsky's binding conditions
A. An anaphor is bound in a local domain.
B. A pronominal is free in a local domain.
C. An r-expression is free.

Binding is defined on anaphoric expressions in configurational terms, appealing to purely structural concepts like c-command, government, and locality. The binding theory accounts for the syntactic distribution of anaphoric expressions in (10). It is also applicable to three of the four empty categories listed in (8): NP-trace, pro, and wh-trace, but that work will not be reviewed here (cf. Huang, 1992, 1995).

(10) a. Russell1 admired himself1.
     b. Russell1 admired him2.
     c. Russell1 admired Russell2.
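The complementary logic of conditions A–C can be made concrete with a deliberately crude sketch. The Python below collapses 'local domain' to the minimal clause and ignores c-command and government entirely; the representation is invented for exposition and is not a serious binding-theory implementation.

# Toy check of binding conditions A-C over a flat clause encoding.

def check_binding(np_type, clause, antecedent_clause):
    """np_type: 'anaphor' | 'pronominal' | 'r-expression'.
    clause: index of the clause containing the NP.
    antecedent_clause: clause of a coindexed antecedent, or None.
    """
    if np_type == 'anaphor':          # condition A: bound locally
        return antecedent_clause == clause
    if np_type == 'pronominal':       # condition B: free locally
        return antecedent_clause != clause
    return antecedent_clause is None  # condition C: free everywhere

# (10a) Russell1 admired himself1: the anaphor is locally bound.
print(check_binding('anaphor', 0, 0))            # True
# *Russell1 admired him1 would breach condition B:
print(check_binding('pronominal', 0, 0))         # False
# (10c) Russell1 admired Russell2: the name is free, as C requires.
print(check_binding('r-expression', 0, None))    # True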

There are, however, serious problems at the very heart of Chomsky's binding theory. Cross-linguistically, the distribution of reflexives violates binding condition A in both directions: on the one hand, a reflexive can be bound outside its local domain (the so-called long-distance reflexive or anaphor), as in Chinese, Russian and Telugu (see 11); on the other, it may not be bound within its local domain, as in Dutch and Norwegian (see 12).

(11) (Mandarin Chinese)
Xiaoming gaosu Xiaohua Xiaolan bu xihuan ziji.
Xiaoming tell Xiaohua Xiaolan not like self
'Xiaoming1 tells Xiaohua2 that Xiaolan3 does not like self1/*2/3.'

(12) (Norwegian; Hellan, 1988)
*Jon foraktet seg.
Jon despises self
'Jon despises himself.'

Binding condition B is also frustrated, for in many of the world's languages (such as Catalan [Catalan-Valencian-Balear], Gumbaynggir [Kumbainggar] and Haitian Creole [Haitian Creole French]), a pronominal can be happily bound in its local domain. Next, given Chomsky's formulation of binding conditions A and B, it is predicted that anaphors and pronominals will be in strict complementary distribution, because the two binding conditions are precise mirror images of each other. But this predicted complementarity cannot be maintained. Even in a 'syntactic' language such as English, it is not difficult to find syntactic environments where the complementarity breaks down (see 13).

(13) a. Blair1 saw a picture of himself1/him1 in the Times.
     b. Mary1 saw a snake near herself1/her1.
     c. [Pavarotti and Domingo]1 adore each other's1/their1 performances.
     d. Pavarotti1 said that tenors like himself1/him1 would not sing operas like that.

Finally, even a cursory inspection of languages such as English, Korean, and Vietnamese indicates that binding condition C cannot be taken as a primitive of grammar, either (see Huang, 1996, 2000a, 2004 for further discussion).

The Semantic Approach

In contrast to the ‘geometric,’ syntactic approach, the semantic approach views anaphora as essentially a semantic phenomenon. Accordingly, binding is defined in argument structure terms. Reinhart and

Reuland's (1993) theory of reflexivity, for example, belongs to this camp.

(14) Reinhart and Reuland's binding conditions:
A. A reflexive-marked syntactic predicate is reflexive.
B. A reflexive semantic predicate is reflexive-marked.

(15) Reinhart and Reuland's typology of overt NPs
                           SELF   SE   pronoun
Reflexivizing function      +     −     −
Referential independence    −     −     +

SELF = a morphologically complex reflexive, e.g., himself, zichzelf
SE = a morphologically simplex reflexive, e.g., ziji, zich, seg

In Reinhart and Reuland's view, reflexivity is not a property of NPs but of predicates. The binding theory is designed not to capture the mirror-image distribution of anaphors and pronominals, but to regulate the domain of reflexivity for a predicate. More specifically, what the theory predicts is that if a predicate is lexically reflexive, it may not be reflexive-marked by a morphologically complex SELF anaphor in the overt syntax. On the other hand, if a predicate is not lexically reflexive (that is, if it contains a semantic predicate), it may become reflexive only via the marking of one of its coarguments by such an anaphor. This explains why (12) is ungrammatical. Since the predicate in (12) is not lexically reflexive, it must be reflexive-marked in order to comply with Reinhart and Reuland's binding condition B. But the SE-anaphor seg is not a reflexivizer, and consequently cannot reflexive-mark the predicate, hence the ungrammaticality of (12). Reinhart and Reuland's theory, however, is not without problems of its own. First, cross-linguistic evidence has been presented that the marking of reflexivity is not limited to the two ways identified by Reinhart and Reuland: in addition to being marked lexically and syntactically, reflexivity can also be indicated morphologically. Secondly, and more worrisome, the central empirical prediction of the reflexivity analysis, namely that only a reflexive predicate can and must be reflexive-marked, is falsified in both directions. On the one hand, a predicate that is both syntactically and semantically reflexive can be nonreflexive-marked in the sense of Reinhart and Reuland, as in (16); on the other, a nonreflexive predicate can be reflexive-marked, as in (17) (see Huang, 2000a, 2004 for further discussion).

(16) (Icelandic)
Jon elskar sig.
Jon loves self
'Jon loves himself.'

(17) (Mandarin Chinese)
Xiaoming shuo taziji hen xihuan yinyue.
Xiaoming say 3SG-self very like music
'Xiaoming says that he likes music very much.'

Note also that neither Chomsky's binding theory nor Reinhart and Reuland's reflexivity theory is designed to say anything about the anaphoric relations in examples like (3), (4), and (6).

The Pragmatic Approach

As an alternative to the syntactic and semantic approaches, there is the pragmatic approach. The most influential of the pragmatic analyses is the neo-Gricean pragmatic theory of anaphora developed by Levinson (1987, 1991, 2000) and Huang (1991, 1994, 2000a, 2000b, 2004). Central to this theory is the assumption that anaphora is essentially pragmatic in nature, though the extent to which anaphora is pragmatic varies typologically. Consequently, anaphora can largely be determined by the systematic interaction of some general neo-Gricean pragmatic principles such as Levinson's (2000) Q-, I-, and M-principles (18), depending on the language user's knowledge of the range of options available in the grammar, and of the systematic use or avoidance of particular anaphoric expressions or structures on particular occasions.

(18) Levinson's Q-, I- and M-principles (simplified)
a. The Q-principle
   Speaker: Do not say less than is required (bearing I in mind).
   Recipient: What is not said is not the case.
b. The I-principle
   Speaker: Do not say more than is required (bearing Q in mind).
   Recipient: What is generally said is stereotypically and specifically exemplified.
c. The M-principle
   Speaker: Do not use a marked expression without reason.
   Recipient: What is said in a marked way is not unmarked.

Applying the Q-, I-, and M-principles to the domain of anaphoric reference, we can derive a general neo-Gricean pragmatic apparatus for the interpretation of various anaphoric expressions, given in (19).

(19) Huang's (2000a, 2004) revised neo-Gricean pragmatic apparatus (simplified)
i. The use of an anaphoric expression x I-implicates a local coreferential interpretation, unless (ii) or (iii).
ii. There is an anaphoric Q-scale <x, y>, in which case the use of y Q-implicates the complement of the I-implicature associated with the use of x in terms of reference.

iii. There is an anaphoric M-scale {x, y}, in which case the use of y M-implicates the complement of the I-implicature associated with the use of x, in terms of either reference or expectedness.

Needless to say, any interpretation generated by (19) is subject to the general consistency constraints applicable to Gricean conversational implicatures. These constraints include world knowledge, contextual information, and semantic entailments. Substantial cross-linguistic evidence has been presented to show that empirically the revised neo-Gricean pragmatic theory of anaphora is more adequate than both the syntactic and the semantic approach. Regarding (10), on this account, Chomsky's binding conditions B and C can be reduced to pragmatics. In somewhat simplified terms, this can be achieved in the following way. If binding condition A is taken to be either grammatically constructed (as in English-type, syntactic languages) or pragmatically specified (as in Chinese [Mandarin Chinese]-type, pragmatic languages), then binding condition B is the direct result of the application of the Q-principle. Given this principle, the use of a semantically weaker pronoun where a semantically stronger reflexive could occur gives rise to a conversational implicature which conveys the negation of the more informative, coreferential interpretation associated with the use of the reflexive, as in (10b). By the same reasoning, binding condition C can also be eliminated: wherever a reflexive could occur, the use of a semantically weaker proper name Q-implicates the nonapplicability of the more informative, coreferential interpretation associated with the use of the reflexive. This is exactly what has happened in (10c). Moreover, the revised neo-Gricean pragmatic theory can provide an elegant account of many of the anaphoric patterns that have always embarrassed a generative analysis, such as cases where a pronoun is bound in its local domain. It can also accommodate examples like (3), (4), and (6), which lie outside the scope of either Chomsky's binding theory or Reinhart and Reuland's reflexivity theory. Finally, from a conceptual point of view, the revised neo-Gricean pragmatic theory of anaphora has important implications for current thinking about universals, innateness, and learnability.
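Viewed computationally, the apparatus in (19) is a small decision procedure, and a toy Python version can make its logic explicit. The encoding below is invented for illustration: the reflexive_possible flag stands in for the full grammatical question of whether a reflexive could occur in the same slot, the Q-scale is assumed to be <reflexive, pronoun> (extended to names, as in the discussion of condition C above), and the consistency constraints are omitted.

# A toy rendering of the revised neo-Gricean apparatus in (19).

def preferred_interpretation(expression, reflexive_possible):
    """Return the initially preferred reading of an anaphoric
    expression, before world knowledge and consistency checks."""
    if expression == 'reflexive':
        # I-principle: the reflexive I-implicates local coreference.
        return 'locally coreferential'
    if reflexive_possible:
        # Q-principle: a semantically weaker form used where the
        # stronger reflexive could occur implicates the complement
        # reading (the pragmatic analogues of conditions B and C).
        return 'locally disjoint'
    return 'resolved by context (I-principle)'

print(preferred_interpretation('reflexive', True))   # locally coreferential
print(preferred_interpretation('pronoun', True))     # locally disjoint
print(preferred_interpretation('name', True))        # locally disjoint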

VP-Anaphora

A Typology of VP-Anaphora

There are five types of VP-anaphora. First, VP-ellipsis, in which the VP of the second and subsequent clauses is somewhat elided, as in (20). Second, gapping, in which some element (typically a repeated,


finite verb) of the second and subsequent conjuncts of a coordinate construction is dropped, as in (21). Third, sluicing, which involves the deletion of an IP within an embedded CP, resulting in an elliptical construction where an IP contains an embedded interrogative CP consisting only of a wh-phrase, as in (22). Fourth, stripping – an elliptical construction in which the ellipsis clause usually contains only one constituent, as in (23). And finally, there is null complement anaphora – an elliptical construction in which a VP or IP complement of a verb is omitted, as in (24).

(20) John adores his girlfriend, and Peter does, too.
(21) Reading maketh a full man; conference a ready man; and writing an exact man. (Bacon)
(22) Nigel wrote something about the internet, but I don't know what.
(23) Pavarotti will sing Nessun Dorma again, but not at Covent Garden.
(24) Lucy wanted to sit on a tree overhanging the river, but her mother didn't approve.

VP-Ellipsis: Properties, Issues, and Analyses

Of the five types of VP-anaphora listed above, VP-ellipsis has attracted the most attention.

Properties

VP-ellipsis has a number of distinct properties, notably: it can occur either in a coordinate or a subordinate clause; it engenders a sloppy reading; it exhibits a locality effect on the sloppy reading; the sloppy reading is subject to the c-command condition; VP-ellipsis may operate across sentence boundaries; it may take a pragmatic antecedent; and it may have a split antecedent.

Issues

Two issues are of particular interest in the analysis of VP-ellipsis. The first and more traditional one is concerned with the availability and distribution of the strict and sloppy interpretations. This can be illustrated by a consideration of (20) above. In (20), the second, elided conjunct is ambiguous: it can be understood either in the manner of (25a) – the so-called strict reading – or in the manner of (25b) – the so-called sloppy reading.

(25) a. John adores John's girlfriend, and Peter adores John's girlfriend.
     b. John adores John's girlfriend, and Peter adores Peter's girlfriend.
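The dichotomy can be pictured as a choice between two properties recovered for the elided VP, in the spirit of the semantic approach discussed below (cf. Dalrymple, Shieber and Pereira, 1991). The Python fragment here is only schematic, with an invented representation of predications.

# Strict vs. sloppy readings of (20) as two recovered VP properties.

girlfriend = {'John': "John's girlfriend", 'Peter': "Peter's girlfriend"}

# Strict: the property keeps the pronoun's referent fixed to John.
strict = lambda x: ('adores', x, girlfriend['John'])
# Sloppy: the property re-binds the pronoun to the new subject.
sloppy = lambda x: ('adores', x, girlfriend[x])

print(strict('Peter'))   # ('adores', 'Peter', "John's girlfriend"), cf. (25a)
print(sloppy('Peter'))   # ('adores', 'Peter', "Peter's girlfriend"), cf. (25b)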

Second, there are what Fiengo and May (1994) called the eliminative puzzles – the question of why VP-ellipsis reduces the number of possible interpretations of sentences relative to their nonelided counterparts. Of particular concern here are three types of eliminative puzzle: the many-pronouns puzzle, as in

(26), the many-clauses puzzle, as in (27), and the Dahl puzzle, as in (28).

(26) John said that he adored his girlfriend, and Peter did, too.
(27) John adores his girlfriend, Peter does, too, but David doesn't.
(28) John thinks that he is intelligent, Peter does, too, but his girlfriend doesn't.

Analyses

Two general approaches can be identified: syntactic and semantic. Central to the syntactic analyses is the claim that VP-ellipsis can best be resolved at some level of syntactic structure. On this view, the interpretation of VP-ellipsis involves the reconstruction of the target clause on the basis of the syntactic structure of the source clause. Semantic representations which are derived syntactically are then assigned to the elided VP in a manner parallel to the antecedent VP. Currently this approach is best represented by Fiengo and May (1994). By way of contrast to the syntactic, reconstructionist approach, the semantic approach takes the view that VP-ellipsis can best be resolved at a purely semantic level of representation. Under this approach, the interpretation of VP-ellipsis bears on the identification of a property of the antecedent VP and the assignment of this property to the elided VP. Dalrymple, Shieber and Pereira (1991), for instance, belong to this camp. Also noteworthy is a second distinction in the study of VP-ellipsis that is independent of the syntactic/semantic approach distinction we have just seen. Dalrymple, Shieber and Pereira (1991) noted that a distinction can be made between those analyses which trace the strict/sloppy dichotomy to an ambiguity in the representation of the source clause and those which attribute it to the process of recovering a property or relation for the target clause. Let us call the first type the source ambiguity analysis, and the second type the process ambiguity analysis. Regarding the source ambiguity account, within the syntactic camp the ambiguity of interpretation can be derived in a partially interpretative way either by deleting the phrase structure of the target clause under the condition of identical semantic interpretation, or by copying the syntactic structure of the source clause to the target clause while requiring identical semantic representation for the two VPs. Within the semantic camp, the multiplicity of interpretation can be obtained in a purely interpretative manner by assuming an unambiguous syntactic analysis of the source clause (see Huang, 2000a for further discussion). In addition to the syntactic and semantic approaches mentioned above, Kehler (2002) has more recently put forward an analysis based on the notion of


discourse coherence. According to this analysis, VP-ellipsis can partially be accounted for in terms of various neo-Humean coherence relations such as resemblance, cause–effect, and contiguity.

Logophoricity

Defining Logophoricity

Logophoricity can be regarded as a special case of NP-anaphora. The term 'logophoricity' is used for reference to the phenomenon whereby the point of view of an internal protagonist of a sentence or discourse, as opposed to that of the current, external speaker, is being reported using some morphological and/or syntactic means (e.g., Huang, 2000a, 2002). The term 'point of view' is employed here in a technical sense and is intended to encompass words, thoughts, knowledge, emotion, and perception. The concept of logophoricity was introduced in the study of African languages such as Aghem, Ewe and Tuburi (Tupuri) (see Hagège, 1974; Clements, 1975), where there is a separate paradigm of logophoric pronouns which is used for such a purpose. By way of illustration, consider (29) and (30) from Donno Sɔ, taken from Culy (1994).

(29) Oumar Anta inyemeñ waa be gi.
     Oumar Anta LOG-ACC seen AUX said
     'Oumar1 said that Anta2 had seen him1.'
(30) Oumar Anta woñ waa be gi.
     Oumar Anta 3SG-ACC seen AUX said
     'Oumar1 said that Anta2 had seen him3.'

In (29) the use of the logophoric pronoun encodes a coreferential reading between it and the matrix subject, thus reporting Oumar’s speech from his perspective. By contrast, in (30) the employment of the regular pronoun indicates a disjoint reference, hence reporting Oumar’s speech from the point of view of the current, external speaker. Cross-Linguistic Marking of Logophoricity

Cross-linguistically, logophoricity may be morphologically and/or syntactically expressed by one or more of the following mechanisms: (i) logophoric pronouns, as in (29) above; (ii) logophoric addressee pronouns, as in (31); (iii) logophoric verbal affixes, as in (32); and (iv) long-distance reflexives, as in (11) above.

(31) Logophoric addressee pronouns (Mapun; Frajzyngier, 1985)
     n- sat n-wur taji gwar dim n Kaano.
     I say BEN-3SG not ADDR go PREP Kano
     'I told him1 that he1 may not go to Kano.'
(32) Logophoric verbal affixes (Gokana; Hyman and Comrie, 1981)
     à nyíma kɔ aè dɔ-è.
     he know that he fell-LOG
     'He1 knows that he1 fell.'

Logophoric marking devices (i)–(iii) are found largely in African languages, and logophoric marking mechanism (iv) is located in a wide range of languages throughout the world. Furthermore, these devices can be ranked according to the following hierarchy (Huang, 2000a, 2002).

(33) Hierarchy of grammatical mechanisms for logophoric marking
a. Logophoric pronouns/addressee pronouns/verbal affixes [+logophoric, +coreference]
b. Long-distance reflexives [−logophoric, +coreference]

What (33) basically says is this: for logophoric marking, a logophoric pronoun, addressee pronoun, or verbal affix will be used if there is one; otherwise, a long-distance reflexive will be used. A second point to be borne in mind is that logophoricity and coreference are two distinct, though closely related, notions: logophoricity entails coreference, but not vice versa.
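The preference just stated is a two-step choice, which the following toy Python function makes explicit. The inventory encoding is invented for illustration, and no ranking among the three [+logophoric] devices themselves is implied by (33).

# A sketch of the marking preference in (33).

def logophoric_marking(inventory):
    """inventory: set of [+logophoric] devices a language possesses."""
    devices = {'logophoric pronoun', 'addressee pronoun', 'verbal affix'}
    available = devices & inventory
    if available:                       # tier (a): [+logophoric, +coreference]
        return sorted(available)
    return ['long-distance reflexive']  # tier (b): [-logophoric, +coreference]

print(logophoric_marking({'verbal affix'}))   # ['verbal affix'], Gokana-like
print(logophoric_marking(set()))              # ['long-distance reflexive']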

A Typology of Languages with Respect to Logophoricity

Following suggestions made by von Roncador (1992) and Culy (1994), languages can be grouped into three types with respect to logophoricity: full or pure logophoric languages – languages which have special morphological and/or syntactic forms that are employed only in logophoric domains (e.g., Babungo [Vengo], Pero, and Ekepeye [Ekpeye]); nonlogophoric languages – languages which have no such morphological and/or syntactic forms (e.g., Arabic, English, and Mambar); and semi- or mixed logophoric languages – languages which either allow logophors to be used for nonlogophoric purposes (e.g., Igbo, Idoma, and Yoruba) or allow the extended use of reflexives in logophoric contexts (e.g., Bangla [Bengali], Icelandic, and Japanese). It is particularly interesting, as Culy (1994) observed, that while logophoric languages are found in many places throughout the world, full or pure logophoric languages seem to be found only in Africa. Furthermore, while full or pure logophoric languages are not in a contiguous area, logophoric languages as a whole are in a contiguous area. This geographic distribution of logophoric languages is fascinating as well as surprising, and for the time being remains unexplained.

Some Implicational Universals with Respect to Logophoricity

In Huang (2000a, 2002), a number of implicational universals with respect to logophoricity are proposed.


The first of these is concerned with the person distinction of logophoric pronouns.

(34) Person hierarchy for logophoric pronouns
     3 > 2 > 1
First-person logophoric pronouns imply second-person logophoric pronouns, and second-person logophoric pronouns imply third-person logophoric pronouns.

Given (34), it is predicted that in all languages with logophoric pronouns, logophoric pronouns can be third-person; in some, they can also be second-person; and in a few, they can be first-person as well (see also Hyman and Comrie, 1981; Wiesemann, 1986). This pattern of person distinction holds also for logophoric long-distance reflexives. As pointed out in Huang (2000a, 2002), there is a functional/pragmatic explanation for (34): for referential disambiguation, the third-person distinction is the most useful and the first-person distinction the least, with the second-person distinction lying in between, because third person is closer to nonperson than either first or second person. It is therefore hardly surprising that first-person logophoric pronouns are rare in languages, given that logophoric pronouns are one of the most common devices by which the current, external speaker (usually encoded in terms of a first-person pronoun) reflects the perspective of anyone else (usually an internal protagonist) but him- or herself. Second, there is an implicational universal for the number distinction of logophoric pronouns.

(35) Number hierarchy for logophoric pronouns
     Singulars > plurals
Plural logophoric pronouns imply singular logophoric pronouns.

The implicational universal in (35) summarizes the general pattern of number specification for logophoric pronouns. While all languages with logophoric pronouns allow singular logophoric pronouns, only some permit plural logophoric pronouns as well (see also Hyman and Comrie, 1981; Wiesemann, 1986). This pattern of number specification holds also for logophoric long-distance reflexives. Next, mention should be made of logocentric triggers, namely those NPs that can act as an antecedent for a logophoric pronoun or logophoric long-distance reflexive. First, logocentric triggers are generally constrained to be a core argument of the logocentric predicate of the matrix clause. Secondly, they are typically subjects. However, logocentric triggers can also be some other, nonsubject argument, provided that this argument represents the 'source' of the proposition or the 'experiencer' of the mental state that is

being reported. Two types of construction are particularly common in African logophoric languages. The first involves the predicate 'hear from.' The second involves 'psychological' predicates expressing emotional states and attitudes, of which the 'experiencer' serves as direct object or object of a preposition. (36) gives the implicational universal for logocentric triggers.

(36) Hierarchy for logocentric triggers
     Surface structure: subject > object > others
     Semantic role: agent > experiencer/benefactive > others

The higher an NP is on the hierarchy, the more likely it is to function as an antecedent for a logophoric pronoun or a logophoric long-distance reflexive (see also Hyman and Comrie, 1981). Given that the subject of the matrix clause is typically the NP that is highest on the hierarchy (and, incidentally, the most animate), it is hardly surprising that it is the typical antecedent for a logophoric pronoun or a logophoric long-distance reflexive. Taken together, the above three hierarchies predict that the most basic, unmarked pattern of logophoric marking is one which encodes logophoricity by the use of a third-person, singular logophoric pronoun that refers to a human subject. Finally, it should be pointed out that logophoric pronouns and logophoric long-distance reflexives usually occur in a logophoric domain, that is, a stretch of discourse in which the internal protagonist's perspective is being represented. The logophoric domain is commonly created by a logocentric licenser, which consists mainly of a logocentric predicate. Logocentric predicates can be distinguished largely on a semantic basis. The most common types of logocentric predicates are predicates of speech and thought, but other types, such as predicates of mental state, knowledge, and direct perception, can also trigger a logophoric domain. While languages differ in precisely which types of predicate they allow to function as a logocentric licenser, cross-linguistically there exists an implicational universal for logocentric predicates (see also Stirling, 1993; Culy, 1994; Huang, 1994, 2000a).

(37) An implicational universal for logocentric predicates
     Speech predicates > epistemic predicates > psychological predicates > knowledge predicates > perceptive predicates

What (37) basically states is this: if a language allows (some) predicates of one class to establish a logophoric domain, then it will also allow (some) predicates of every class higher on the hierarchy to do the same. Thus, if a language has logophoric marking with predicates of, say, psychological state,


then it will necessarily have it with predicates of thought and communication.
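Because (37) is implicational, its empirical content reduces to a simple check: the logocentric predicate classes a language attests should form a contiguous top segment of the hierarchy. The Python sketch below is illustrative only; the sample class sets are invented placeholders, not descriptions of particular languages.

# Checking a language's logocentric predicate classes against (37).

HIERARCHY = ['speech', 'epistemic', 'psychological', 'knowledge',
             'perceptive']

def consistent_with_37(attested):
    """True if every class ranked above the lowest attested class
    is also attested, as the implicational universal predicts."""
    lowest = max(HIERARCHY.index(c) for c in attested)
    return all(c in attested for c in HIERARCHY[:lowest + 1])

print(consistent_with_37({'speech', 'epistemic', 'psychological'}))  # True
print(consistent_with_37({'speech', 'knowledge'}))                   # False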

See also: Anaphora Resolution: Centering Theory; Coreference: Identity and Similarity; Discourse Anaphora; Donkey Sentences; Intensifying Reflexives; Scope and Binding.

Bibliography

Chomsky N (1981). Lectures on government and binding. Dordrecht: Foris.
Chomsky N (1995). The minimalist program. Cambridge, MA: MIT Press.
Clements G N (1975). 'The logophoric pronoun in Ewe: its role in discourse.' Journal of West African Languages 2, 141–177.
Culy C (1994). 'Aspects of logophoric marking.' Linguistics 32, 1055–1094.
Dalrymple M, Shieber S & Pereira F (1991). 'Ellipsis and higher-order unification.' Linguistics and Philosophy 14, 399–452.
Fiengo R & May R (1994). Indices and identity. Cambridge, MA: MIT Press.
Frajzyngier Z (1985). 'Logophoric systems in Chadic.' Journal of African Languages and Linguistics 7, 23–37.
Hagège C (1974). 'Les pronoms logophoriques.' Bulletin de la Société de Linguistique de Paris 69, 287–310.
Hellan L (1988). Anaphora in Norwegian and the theory of grammar. Dordrecht: Foris.
Huang Y (1991). 'A neo-Gricean pragmatic theory of anaphora.' Journal of Linguistics 27, 301–335.
Huang Y (1992). 'Against Chomsky's typology of empty categories.' Journal of Pragmatics 17, 1–29.
Huang Y (1994). The syntax and pragmatics of anaphora. Cambridge: Cambridge University Press.

Huang Y (1995). 'On null subjects and null objects in generative grammar.' Linguistics 33, 1081–1123.
Huang Y (1996). 'A note on the head-movement analysis of long-distance reflexives.' Linguistics 34, 833–840.
Huang Y (2000a). Anaphora: a cross-linguistic study. Oxford: Oxford University Press.
Huang Y (2000b). 'Discourse anaphora: four theoretical models.' Journal of Pragmatics 32, 151–176.
Huang Y (2002). 'Logophoric marking in East Asian languages.' In Güldemann T & von Roncador M (eds.) Reported discourse: a meeting ground for different linguistic domains. Amsterdam: John Benjamins. 211–224.
Huang Y (2004). 'Anaphora and the pragmatics–syntax interface.' In Horn L & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 288–314.
Hyman L & Comrie B (1981). 'Logophoric reference in Gokana.' Journal of African Languages and Linguistics 3, 19–37.
Kehler A (2002). Coherence, reference, and the theory of grammar. Stanford, CA: CSLI.
Levinson S C (1987). 'Pragmatics and the grammar of anaphora.' Journal of Linguistics 23, 379–434.
Levinson S C (1991). 'Pragmatic reduction of the binding conditions revisited.' Journal of Linguistics 27, 107–161.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Reinhart T & Reuland E (1993). 'Reflexivity.' Linguistic Inquiry 24, 657–720.
Stirling L (1993). Switch-reference and discourse representation. Cambridge: Cambridge University Press.
von Roncador M (1992). 'Types of logophoric marking in African languages.' Journal of African Languages and Linguistics 13, 163–182.
Wiesemann U (1986). 'Grammaticalized coreference.' In Wiesemann U (ed.) Pronominal systems. Tübingen: Gunter Narr. 437–464.

Antonymy and Incompatibility

M L Murphy, University of Sussex, Brighton, UK

© 2006 Elsevier Ltd. All rights reserved.

Incompatibility is the semantic relation in which two expressions have necessarily different reference. So, for example, bottle and overture are incompatible because nothing can be a bottle and an overture simultaneously. Often, incompatibility is used more particularly to mean incompatible items within a semantic field, such as bottle/can/box. In some traditions, the term contrast (set) is used for this relation. Antonymy involves special binary cases of incompatibility. These lexical-semantic relations are further defined below, and then the main research issues are

outlined. Slashes symbolize these relations – e.g., cold/hot.

Incompatibility and Contrast

Logical incompatibility (bottle/overture as well as bottle/can/box) is a semantic, rather than a lexical, relation. That is, the meanings are incompatible, rather than the words themselves, because the relation holds between specific senses of the word (so bottle 'container' and bottle 'courage' are incompatible with each other and with different items generally) and because formal properties of the words involved (such as pronunciation or register) are irrelevant to the relation. Logical incompatibility is rarely a focus


of lexicological study, which usually concerns semantic paradigms, because no paradigm can be found in a group such as argon/bottle/Democrat/overture. This logical sense of incompatible contrasts with another sense that is specifically used for incompatibility within a semantic field, as for club/diamond/heart/spade. We can further distinguish logical-semantic incompatibility from a more pragmatic notion of contrast. Incompatibility definitionally includes no cases with referential overlap, but contrast can. For instance, if green and blue can both describe a particular shade of turquoise, then they are not incompatible, but they do contrast, because they refer to categories with different, partly incompatible boundaries in the color field. In this sense, the opposite of contrast is inclusion: blue/green contrast, while loden and lime do not contrast with green but are included in its sense (see Hyponymy and Hyperonymy).
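These relations can be given a toy extensional model: treat each sense as a set of referents, and the definitions above become set-theoretic tests. The mini color domain in this Python sketch is invented purely for illustration.

# Incompatibility, contrast, and inclusion as tests on referent sets.

blue  = {'navy', 'azure', 'turquoise'}
green = {'lime', 'loden', 'turquoise'}
lime  = {'lime'}

def incompatible(a, b):   # necessarily different reference
    return not (a & b)

def contrast(a, b):       # different, partly incompatible boundaries
    return bool(a - b) and bool(b - a)

def includes(a, b):       # inclusion: b's sense falls within a's
    return b <= a

print(incompatible(blue, green))   # False: both can cover turquoise
print(contrast(blue, green))       # True
print(includes(green, lime))       # True: lime is included in green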

Antonymy and Opposition

The term antonymy is used in two ways. Some people (Lyons, 1977; Lehrer and Lehrer, 1982) use it specifically to refer to contrast between gradable predicates, so that cold/hot are antonyms, but aunt/uncle are not. Others (Jones, 2002; Murphy, 2003) use it to refer to any lexical pairs that constitute semantic opposites. In this general sense, antonymy can be defined as a relation in which two words share all relevant properties except for one that causes them to be incompatible. This is called minimal difference. For example, aunt and uncle share all semantic properties (parent's sibling[-in-law]) except sex, and cold/hot share all (extreme of temperature scale, neutral as to what it modifies) except which extreme is denoted. Antonymy contrasts with relations such as synonymy and hyponymy in its binarity and in that it is frequently argued to relate words as well as senses. For example, while hot and freezing are semantically incompatible, freezing is not the antonym of hot – only cold is. Several logical antonym types have been identified. Antonym taxonomies vary in which types they include, but the three below occur in most taxonomies.

Gradable Contrariety (Classical Antonymy, Polar Opposition)

As noted, some theorists use antonym to refer only to contrasting gradable predicates such as cold/hot, bad/good, big/little. Such predicates are gradable in that they can hold to varying degrees (e.g., hardly/very cold, hotter than), and their relation is contrary in that the assertion of one entails the negation of the other (if something is hot, then it is not cold), but not vice versa (if something is not hot, it's not necessarily the case that it's cold – it could be tepid). Thus, there is some neutral semantic ground between the two antonyms. Gradable contraries are particularly interesting because they often exhibit markedness phenomena (Lehrer, 1985).

Complementarity (Contradiction)

Complementary (also called contradictory) antonyms are those whose senses completely bisect some domain (and thus are non-gradable). For example, every integer is either odd or even, so even/odd is a complementary pair. The assertion of one entails the negation of the other and vice versa: X is odd contradicts X is even, and X is even is synonymous with X is not odd. Complementarity is comparable to many cases of non-binary incompatibility that exhaust their referential domain (e.g., animal/mineral/vegetable). In this case, the entailment and contradiction relations are between the assertion or negation of one member of the set and the disjunction of the others. For instance, X is an animal contradicts X is a vegetable or a mineral, and X is not an animal entails X is a vegetable or a mineral. So-called gradable complementaries, such as dishonest/honest, lie between complementarity and contrariety. They seem to contradict each other (X is not honest entails X is dishonest, and vice versa), but a middle ground seems to exist, because we can assert that a person is neither honest nor dishonest. Even classic examples of complementarity, such as dead/alive, sometimes take on gradable qualities (e.g., he's more dead than alive). One solution to the problem of categorizing gradable complementaries is to treat the words involved as polysemous, having relative and absolute senses in contrary and complementary relations, respectively.
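The two entailment patterns just described can be stated compactly. The schema below is our summary in standard logical notation (with ⊨ for entailment), not part of the traditional taxonomies; P and Q stand for the two members of an opposed pair:

```latex
% Entailment patterns for contraries vs. complementaries.
% P, Q: the two members of an opposed pair; \models = entails.
\[
\begin{array}{ll}
\text{Contraries (\textit{hot/cold}):} &
  P(x) \models \neg Q(x), \qquad \neg Q(x) \not\models P(x)
  \quad \text{(middle ground possible: \textit{tepid})}\\[2pt]
\text{Complementaries (\textit{odd/even}):} &
  P(x) \models \neg Q(x), \qquad \neg Q(x) \models P(x)
  \quad \text{(no middle ground)}
\end{array}
\]
```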

Directional Antonyms

Most taxonomies include at least one more type of antonym, which covers all or some of the following subtypes. Converse antonyms follow the pattern if X is p to Y, then Y is q to X. This covers examples such as give (to)/receive (from), child/parent, and above/below, in which each member describes the same relation or activity from a different perspective (e.g., using either X or Y as the landmark). For instance, X gives Y to Z entails Z receives Y from X. Reversive opposites involve the undoing of some action: tie/untie, build/demolish. Both of these types are sometimes collected, along with other miscellaneous examples (e.g., left/right, come/go), in a general category of directional antonyms.

Other Types of Opposition

Many words that are considered to be opposites do not fit the above categories. For example, learn/teach seem to go together in a converse-like way, but they are not different perspectives on the same action, but rather two actions that typically co-occur. Gender opposites (female/male, aunt/uncle) are often considered to be complementary, but the existence of hermaphrodites calls that judgment into question. Other binary opposites are seemingly part of larger contrast sets, for example happy/sad (rather than happy/angry or angry/sad), raising the question of whether they are true contraries (on the same scale) or not (Ogden, 1967). Other pairs of words that go together, such as gin/tonic, can be argued to be minimally different (they are similar in being part of the same drink, but different in being alcoholic/non-alcoholic), but many definitions of antonymy would not include such pairings. Still others, such as Heaven/Hell, contrast on so many levels (reward/punishment, ecstasy/agony, up/down, cool/hot) that they seem to flout the definition of antonyms as minimally different (Cruse, 1986; Murphy, 2003).

Research Issues

Interest in antonymy and incompatibility has shifted away from the logical properties of the relations and their subtypes and toward their roles in lexical organization and discourse. In part, this shift has been fueled by corpus methodology, which makes large-scale studies of antonym distribution possible.

Contrast and Lexical Development

Contrast relations are often seen as constraining lexical development, both at the individual (lexical acquisition) and the community (semantic change) levels. In the structural semantics tradition (Lyons, 1977; Coseriu and Geckeler, 1981; Cruse, 1986), meaning is often defined as at least partly dependent on the relations between words. The very existence of two words in the same lexical field forces their meanings to differentiate, and the existence of contrast relations between words makes it likely that they will referentially drift apart and their contrast relation will be extended to other senses for those words. For example, black/white are contrasted as color names, but also as racial terms that represent a complex of physical, genotypical, and cultural characteristics, and also mean 'evil/good' (black/white magic). These findings suggest that language users appreciate the relation as a relation between words, not just between senses, as discussed next.

A Lexical Relation?

As noted, incompatibility is a semantic relation among words' senses, but it can be argued that antonymy is a relation between words. In this case, we can use opposite to refer to the semantic relation of binary incompatibility (or contrast) and antonym to refer to the conventionalized pairing of two particular words with opposite meanings. The effects of such conventionalized pairing can be seen in several ways. First, some of the strongest associations in word association tests are antonymous stimulus-response pairs. If asked to respond with the first word that comes to mind, most people respond white to black and sad to happy, but have less consistent or non-antonymous responses for words without clear antonyms, such as grey or joy. Second, people use antonyms together in discourse at greater than chance rates and at greater rates than they use other possible semantic opposites. So, weak is more likely to be paired with strong in text than with another of its semantic opposites, powerful (Charles and Miller, 1989). One might argue that such preferred pairings arise because they are semantically more perfect opposites, but a completely semantic explanation is not sufficient, because there is evidence that word form plays a role, too. For example, homosexual was established before heterosexual, but none of its semantic opposites gained status as an antonym for homosexual until a morphologically analogous term was invented (Murphy, 2003). Acknowledging the conventionalization of antonym pairs raises several questions: How are such pairs established in the language, how are they acquired by individuals, and how are they mentally represented? Corpus studies have been a major tool in answering these questions and raising new ones. For example, Fellbaum (1995) has shown that the pairing of antonyms is not a strictly paradigmatic matter in which nouns have noun antonyms and verbs have verb antonyms, etc. So while the nouns death and life co-occur in sentences at higher than expected rates, so do the morphologically related pairs deadly (Adj) and life (N), dead (N) and live (V), and die (V) and live (Adj). Because antonyms co-occur and because their opposition is somewhat insensitive to part-of-speech, they defy traditional definitions of a 'paradigmatic' relation, leading some to claim that antonymy is a syntagmatic relation (while the semantic relation of opposition can still be classified as paradigmatic).
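The greater-than-chance co-occurrence findings reported above rest on a simple corpus statistic: compare a pair's observed sentence-level co-occurrence count with what independent occurrence rates would predict. The sketch below is our illustration only; the counts and function name are hypothetical, not drawn from Charles and Miller's or Fellbaum's studies.

```python
from math import log2

def cooccurrence_score(pair_count, count_a, count_b, n_sentences):
    """Pointwise mutual information for a word pair.

    Compares the observed rate of sentences containing both words
    with the rate expected if the words occurred independently.
    Positive scores mean greater-than-chance co-occurrence.
    """
    p_ab = pair_count / n_sentences    # observed joint rate
    p_a = count_a / n_sentences        # rate of word a alone
    p_b = count_b / n_sentences        # rate of word b alone
    return log2(p_ab / (p_a * p_b))

# Hypothetical counts from a 1-million-sentence corpus: on figures like
# these, weak/strong scores higher than weak/powerful, matching the
# reported preference for conventionalized antonym pairs.
print(cooccurrence_score(pair_count=420, count_a=3000, count_b=9000,
                         n_sentences=1_000_000))   # weak/strong
print(cooccurrence_score(pair_count=35, count_a=3000, count_b=7000,
                         n_sentences=1_000_000))   # weak/powerful
```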

Discourse Functions and Constructions

Corpus studies of antonym co-occurrence also allow for the examination of how and where antonyms are used in discourse. The most comprehensive work on this to date has been Jones's (2002) description of discourse functions of antonyms, each of which is associated with particular morpho-syntactic frames, or constructions. The two major functions account for about 70% of antonym use. In coordinated antonymy (e.g., X and Y; neither X nor Y), the contrast between the antonyms is suspended in order to say something that is true of both X and Y. In sentence (1), the difference between indoors/outdoors is rendered irrelevant (for the use of the lights) in context.

(1) These lights can be used indoors or outdoors.

In the other major function, ancillary antonymy, one antonym pair creates or underscores a second opposition. For example, in sentence (2), the use of the conventionalized pair long/short heightens the contrast between acquaintances and friends.

(2) He was long on acquaintances but short on friends.

Jones’ other seven types of antonyms may be more or less common in different genres of text and speech. Because the corpora searched contained formal writing, a remaining question is whether antonymy is used similarly in speech to create emphasis, cohesion, and interest. Defining Antonymy

Finally, attention to antonymy as an arguably syntagmatic relation with particular discourse uses has brought into question earlier definitions that rely on logical notions such as incompatibility and entailment. The broader use of the term antonym to include all types of opposite relations raises questions regarding how the relation as a whole can be defined: What counts as opposition? Is antonymic opposition a semantic or pragmatic issue? How can we account for perceptions that some antonym pairs seem 'more antonymous' than others? Cruse (1994) has suggested that antonym is a prototype category, and that the prototypical antonym pair is diametrically opposed, symmetrical and binary, and exhausts the superordinate domain. This predicts that pairs such as alive/dead will be perceived as 'better antonyms' than examples such as happy/sad or short/tall, but that these others are still close enough to the prototype to count as antonyms in most contexts. Murphy (2003), on the other hand, has proposed that antonym relations are based on a pragmatic principle of contrast by minimal difference, and so, theoretically, almost any two words could count as opposites, if used in an appropriate context.

See also: False Friends; Hyponymy and Hyperonymy; Lexical Fields; Lexicon: Structure; Lexicon/Dictionary: Computational Approaches; Negation; Synonymy; WordNet(s).

Bibliography

Charles W G & Miller G A (1989). 'Contexts of antonymous adjectives.' Applied Psycholinguistics 10, 357–375.
Coseriu E & Geckeler H (1981). Trends in structural semantics. Tübingen: Narr.
Cruse D A (1986). Lexical semantics. Cambridge: Cambridge University Press.
Cruse D A (1994). 'Prototype theory and lexical relations.' Rivista di linguistica 6, 167–188.
Fellbaum C (1995). 'Co-occurrence and antonymy.' International Journal of Lexicography 8, 281–303.
Horn L (2001). A natural history of negation (rev. edn.). Stanford: CSLI Publications.
Jones S (2002). Antonymy: a corpus-based approach. London: Routledge.
Lehrer A (1985). 'Markedness and antonymy.' Journal of Linguistics 21, 397–429.
Lehrer A & Lehrer K (1982). 'Antonymy.' Linguistics and Philosophy 5, 483–501.
Lyons J (1977). Semantics (2 vols.). Cambridge: Cambridge University Press.
Murphy M L (2003). Semantic relations and the lexicon. Cambridge: Cambridge University Press.
Ogden C K (1967). Opposition: a linguistic and psychological analysis. Bloomington: Indiana University Press. [Orig. pub. 1932 by the Orthological Institute.]

Aristotle and Linguistics
P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
© 2006 Elsevier Ltd. All rights reserved.

The study of language has always had two kinds of practitioners, the practical and the theoretical linguists. Aristotle was no doubt the first theoretical linguist (in addition to being the first in many other subjects), but he also contributed essentially to the development of practical linguistics. His role in the history of linguistics has been highlighted in a few publications (e.g., Seuren, 1998; Allan, 2004). Aristotle was born in Stagira, in Ancient Macedonia, in 384 B.C.E. His father was the personal physician and a close friend of the king of Macedonia, Amyntas II. An exceptionally gifted boy to begin with, Aristotle joined Plato's Academy in Athens at the age of 17, to remain there until Plato's death in 347. Having been passed over as Plato's successor, he left Athens to live, first, in Asia Minor and then in Lesbos. In 343–342, Amyntas' son and successor, Philip II of Macedonia, invited him to come and teach his son Alexander, then 14 years old. This he did for 2 years. In 336, Alexander succeeded his father and immediately conquered the whole of Greece. Under Alexander's political protection, Aristotle returned to Athens in 335 and founded his school of philosophy, the Lyceum. There he taught until 323, when news of Alexander's death reached Athens. No longer certain of Macedonian protection, he left Athens overnight and sought refuge in Chalcis, just north of Athens, where a Macedonian garrison was stationed. One year later, in 322, he died of an intestinal disease.

His first great contribution to the study of language—not often mentioned—is the fact that he demythologized language. Rather than seeing language as a magical instrument to cast spells, entrance people, and call up past, present, and future spirits, he saw language as an object of rational inquiry, a means of expressing and communicating thoughts about anything in the world. The 'semiotic triangle' of (a) language as the expression of (b) thoughts that are intentionally related with (c) elements in the world, famously depicted in Ogden and Richards (1923: 11), is due to Aristotle. This is Aristotle's most general and perhaps also his most important contribution to the study of language, even if it is not often mentioned by modern authors, for whom it has become a matter of course that language can be seen as a valid object of rational inquiry.

In a more analytical sense, Aristotle's role in the development of linguistics is in large part due to his theory of truth. For him, truth and falsity are properties of either thoughts or sentences. A classic statement is (Metaphysics 1027b25):

For falsity and truth are not properties of actual things in the world (so that, for example, good things could be called true and bad things false), but properties of thought.

A few pages earlier, he defines truth as follows (Metaphysics 1011b26):

We begin by defining truth and falsehood. Falsehood consists in saying of that which is that it is not, or of that which is not that it is. Truth consists in saying of that which is that it is, or of that which is not that it is not.

Here Aristotle introduces not as a simple truth-functional inverter of truth values: a toggle between true and false. This has momentous consequences.
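In modern notation (ours, not Aristotle's), the two points just made can be rendered side by side: truth is a correspondence between what is said and what is the case, and not simply toggles a truth value.

```latex
% Our restatement in modern notation: Aristotle's truth clauses,
% and negation as a truth-functional toggle between two values.
\[
\mathrm{True}(\varphi) \iff \text{it is the case that } \varphi,
\qquad
v(\neg \varphi) = 1 - v(\varphi), \quad v(\varphi) \in \{0, 1\}
\]
```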

Aristotle's truth theory is known as the correspondence theory of truth, in that it requires a correspondence between what is the case in the world on the one hand and what is said or thought on the other. To make this notion of correspondence more explicit, some form of analysis is needed. Aristotle made a beginning with that. He analyzes the 'world' as consisting of things that are named by any of the 10 categories substance, quantity, quality, relation, place, time, position, state, action, or affection (Categories 1b25–30). Within the category 'substance,' there is a hierarchy from the primary substances (individual existing entities) through a range of secondary substances, from species and genus to any higher order. The secondary substances together with the remaining 9 categories are properties or things that things are ("everything except primary substances is either predicable of a primary substance or present in it"; Categories 2a33). On the other hand, he analyzes sentences as resulting from the application of a katêgoroúmenon (Latin praedicatum) to something. The something to which the predicate is applied he calls hypokeímenon (literally 'that which underlies'; Latin subiectum or suppositum). Primary substances (entities) can be the object only of predicate application – that is, can only be hypokeímena (Categories 2b39–40). All other things can be either hypokeímena or properties, denoted by a predicate. Yet in orderly talk about the universe, it is proper to take lower categories of substance as the things predicates apply to and reserve predicates themselves for the denoting of higher-order substances and other categories of being (Categories 3a1–5). The combination of a predicate with a term denoting the hypokeímenon Aristotle calls prótasis (Latin propositio). A proposition is true just in case the property assigned to the hypokeímenon actually adheres to it; otherwise it is false. Moreover, a true proposition is made false, and vice versa, by the prefixing of not ("it is not the case that"). The term prótasis occurs for the first time on the first page of Prior Analytics, which contains his doctrine of syllogisms (Prior Analytics 24a16):

A proposition (prótasis) is an affirmative or negative expression that says something of something.

A proposition is divided into terms (Prior Analytics 24b16):

A term (hóron) I call that into which a proposition is analyzed, such as the predicate (katêgoroúmenon) and that to which the predicate is applied.

One notes that Aristotle lacked a word for what we call the subject term of a sentence. During the late Middle Ages, the Latin subiectum began to be used in that sense—an innovation that has persisted until the present time (Seuren, 1998: 121–124). This was the first semantic analysis of sentence structure in history, presaged by, and probably unthinkable without, Plato's incipient analysis of sentence meaning in his dialogue The Sophist. It is important to note that Aristotle's analysis of the proposition does not correspond to the modern syntactic analysis in terms of subject and predicate, but rather to what is known as topic-comment analysis. The identification of Aristotle's sentence constituent for the denoting of a hypokeímenon with "grammatical subject," characterized by nominative case, and of Aristotle's predicate with "grammatical predicate," may have been suggested by Aristotle, as when he says that a morphological verb "always is a sign of something said of something else" (On Interpretation 16b7). But it was carried through systematically a few decades after Aristotle's death by the linguists of Alexandria, whose task it was to develop teaching material for the Egyptian schools where local children had to learn Greek in the shortest possible time (Seuren, 1998: 21–22). Unfortunately, this identification was, though convenient, rash and ill-considered. It persisted more or less unchallenged until the middle of the 19th century, when some, mostly German, scholars discovered that the Aristotelian subject–predicate distinction does not coincide with the syntactic subject–predicate analysis universally applied in linguistics. For in actual discourse, very often what should be the subject according to Aristotle's definition is not the subject recognized in grammatical analysis, and likewise for the predicate. Steinthal, for example, observed (1860: 101–102):

One should not be misled by the similarity of the terms. Both logic and grammar speak of subject and predicate, but only rarely do the logician and the grammarian speak of the same word as either the subject or the predicate. ... Consider the sentence Coffee grows in Africa. There can be no doubt where the grammarian will locate subject and predicate. But the logician? I do not think the logician could say anything but that 'Africa' contains the concept that should be connected with 'coffee grows'. Logically one should say, therefore, 'the growth of coffee is in Africa'.

Observations like this gave rise to a long debate, which lasted more than 80 years. At the end, it was decided to keep the terms subject and predicate for the syntactic analysis and speak of topic and comment for the semantic analysis in the Aristotelian sense (see Seuren, 1998: 120–133 for a detailed discussion). Syntax, in the modern sense, is largely absent from Aristotle's writings. He does, however, distinguish between different sentence types (On Interpretation 17a1–12):

Every sentence is meaningful, not in virtue of some natural force but by convention. But not all sentences are assertions, only those in which there is question of truth or falsity. In some sentences that is not so. Wishes, for example, are sentences but they are not true or false. We will leave all other sentence types out of consideration, as they are more properly studied in rhetoric or poetics. But assertions are the topic of the present study [i.e., logic]. The primary assertive sentence type is the simple affirmation, the secondary is the simple negation. All other, complex, assertions are made one by conjunction. Every assertion must contain a verb or a conjugated form of a verb. For a phrase like "man" is not yet an assertion, as long as no verb in the present, past, or future tense is added.

Some word classes are already there. Thus, at the outset of On Interpretation, he defines ónoma (noun) as "a stretch of sound, meaningful by convention, without any reference to time and not containing any internal element that is meaningful in itself" (On Interpretation 16a19–21). Rhêma (verb) is defined as "that which, in addition to its proper meaning, carries with it the notion of time, without containing any internal element that is meaningful in itself; it always is a sign of something said of something else" (On Interpretation 16b6–8). In his Rhetoric, at 1406a19, Aristotle uses the term epítheton for adjective. All other terms for word classes are of a later date, with many of them having been created by the Alexandrian linguists. The term ptôsis is found relatively frequently, in the sense of nominal or verbal morphological modification, as in Categories 1a13–15: "Things are said to be named 'derivatively' when they derive their name from some other word that differs in morphological form (ptôsei), such as the grammarian from the word grammar or the courageous from the word courage." The literal meaning of ptôsis is 'fall' (Latin: casus). Its use in the sense of morphological modification is based on the metaphor that the word 'as such' stands upright (in the 'upright case' or orthê ptôsis; Latin: casus rectus). Its other falls are represented by forms that are modified morphologically according to some paradigm. The Alexandrians began to reserve the term ptôsis for the nominal cases of nominative (the form of your own name), genitive (the form of your father's name), dative (the name of the person you give something to), accusative (the name of the person you take to court), and vocative (the name of the person you call). These terms smell of the classroom, not of philosophy.

See also: Pre-20th Century Theories of Meaning; Propositional and Predicate Logic; Rhetoric, Classical.


Bibliography

Allan K (2004). 'Aristotle's footprints in the linguist's garden.' Language Sciences 26(4), 317–342.
Ogden C K & Richards I A (1923). The meaning of meaning. A study of the influence of language upon thought and of the science of symbolism. London: Routledge & Kegan Paul.
Seuren P A M (1998). Western linguistics: an historical introduction. Oxford: Blackwell.
Steinthal H (1860). Charakteristik der hauptsächlichsten Typen des Sprachbaues (Neubearbeitung von Dr. Franz Misteli). Berlin: Dümmler.

Aspect and Aktionsart
H-J Sasse, University of Cologne, Cologne, Germany
© 2006 Elsevier Ltd. All rights reserved.

Phases and Boundaries

The notions of 'aspect' (also called 'viewpoint' or 'perspective') and 'aktionsart' (also called 'lexical aspect,' 'aspectual class,' 'aspectual character,' 'actionality,' 'situation type' as well as a few other terms) are concerned with the temporal semantics of an utterance in terms of the time intervals (also termed 'phases') conceptualized in the construal of the situation expressed by that utterance. The fundamental criterion is the inclusion or noninclusion of starting points and/or end-points ('boundaries') in the conceptualization of the situation. In a sequence of utterances such as the following, a number of different types of situation boundaries are involved:

Yesterday we went to the beach. We knew the weather was fine. My son built a sand castle, then we swam for half an hour, and then we went home.

The predicate of the first clause, went to the beach, may be said to be associated with at least three different types of boundedness on three different levels of representation. First, it is bounded by virtue of it being one link in a chain of narrated events which happen in a temporal sequence, each event bounding the preceding and the following one (x and then y and then z). This type of bounding will not concern us at the moment. Second, on the abstract lexical-semantic level, the event {go to the beach} has a built-in goal, the location beach, which terminates the event when reached. This is a different type of boundedness, embedded in the semantics of the expression (constructional semantics plus the lexical semantics of the constituent parts of the construction). Third, the simple past went (as opposed to the progressive were going) suggests that the goal is actually reached. This provides us with the third level of bounding, the actual temporal marking at the utterance level, which spells out the intrinsic bounding, already prepared, as it were, by the presence of the directional locative phrase.

In the mainstream of both classical and modern aspectology, the third type of bounding, the act of completing the event, indicated by means of the simple past, would be regarded as a matter of aspect, while the second type of boundedness, the intrinsic one, would be regarded as a matter of aktionsart. As our little example shows, the act of realizing inherent boundedness at the utterance level is performed in English by the aspectual values of the tense morphemes, which act as aspect operators ('aspect grams'). Had we said we were going to the beach, this would imply that the end-point was not yet reached: we were still on our way, that is, in the preliminary phase (henceforth pre-phase) of going to the beach. Thus, one may say that the abstract phrase go + definite directional is semantically designated for an end-point, but is (at the abstract lexical-semantic level) underspecified for the actual realization of this end-point. There are two formally distinct 'tenses' in English that regulate the choice between the pre-phase interpretation and the interpretation of the end-point being actually reached: the simple past and the progressive. In traditional terms the simple past would be considered as an instance of the perfective aspect (boundary indicating), and the progressive as an instance of the imperfective aspect (non-boundary indicating). The aktionsart under discussion here is called telic (Greek 'having an inherent goal/end-point'). Telic events fall into several subtypes depending on how much pre-phase they admit; the type we are concerned with here and which allows for a fairly separate activity phase is called accomplishments, in contrast to achievements, such as reach or die, which allow for very little if any pre-phase (on aktionsart terminology see below).

Aktionsart is traditionally defined as a lexically inherent semantic property of the simple verb, characterizing the phasal time structure of the event it denotes, i.e., the various intervals of its development (beginning, end, repetition, etc.). As we see here, this traditional view is misleading. Aktionsart is actually a property of the verb plus its argument frame. Thus, several distinct frames may lead to the same aktionsart interpretation, and quantificational or determinational features of arguments in the same frame may lead to different aktionsart interpretations, both independent of the verb's 'basic' aktionsart. While in the example discussed above the built-in end-point of the telic accomplishment predicate is a directional, in the third clause of our little story it is an object (a sand castle) that is created by the activity expressed by the verb; once the object is ready, the situation has reached its natural goal. The implications of the simple past and the progressive are the same as in the first example: he built a sand castle suggests that the sand castle is finished, while he was building a sand castle suggests that he was in the pre-phase of building it. The end-point may also be completely inherent in a simple verb, as in turn: if we say he was turning he hasn't completed the turn, but if we say he turned, the turn is done. As for quantificational features that may come into play, {build a sand castle} is a telic event, while {build sand castles} is not: the bare plural does not suggest a limited number of sand castles that would naturally terminate the event when ready. In our little story, we have a simple predicate of this kind: swim in the third sentence. This verb is atelic: it does not have a built-in end-point. Boundedness here comes about through the durative time adverbial for half an hour, which sets a time frame with a clear end-point. However, this end-point is different from that established by the spatial goal or the completed entity in the first two clauses. This finds its linguistic correlate in the fact that one cannot say we were swimming for half an hour implying that we were in a pre-phase of completing the 30 minutes. This state of affairs is characteristic of a class of verbal expressions whose aktionsart value is called activities. These are characterized by the fact that they do not have natural end-points but, rather, arbitrary time limits. One cannot swim for eternity: at one time, one starts doing it, and after a while, one stops. Here, the contrast between simple past and progressive has quite different implications. The simple past we swam indicates that the entire act, including its beginning, its active phase, and its end, happened at a certain time in the past. The progressive we were swimming indicates that only the active (ongoing) phase is described, while the arbitrary beginning and end-point are out of focus; they are not implied by the grammatical form. Let us finally look at the verb know in the second sentence. It is clear without further consideration that the situation expressed in this sentence is not bounded by the surrounding events. We probably already knew that the weather was fine before we started to go, and nothing is implied about when we learned of it; likewise, nothing is implied about whether or when we will stop knowing it. In fact, no boundaries are conceptualized here at all. This is characteristic of the aktionsart called states. The same form (simple past) is used here as with the complete, boundary-including phase of activities, but in a different reading: since no boundaries are implied, it pertains to the persistent state of knowing in the past. The progressive is not compatible with this aktionsart.

In summary, each aktionsart has its characteristic pattern involving two types of information concerning the aspect operators (aspect grams): co-occurrence restrictions between aktionsart and aspect operators, and the semantic effects induced by a certain aktionsart for the aspect operators. Aspect systems in the languages of the world are typically characterized by this kind of interaction between the two dimensions of aktionsart and grammatical aspect, which is manifested in the interplay of different temporal bounding or unbounding characteristics: those inherent in the aktionsart, which are part of its semantics, and those provided by grammatical aspectual devices such as the progressive/simple past distinction in English. The details of this interplay are a matter of the individual language's lexicogrammar, whose organization in terms of verbal semantics and inventory of grammatical devices sets the stage for the actual phenomenon of interlocking between these two dimensions.
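The interaction pattern just summarized can be made concrete for English. The sketch below is our illustration, encoding only the facts discussed in this section; it records, for each aktionsart, whether an aspect operator is available and which reading it yields.

```python
# Aktionsart x aspect-operator interaction for English, restricted
# to the readings discussed above. Keys are (aktionsart, operator);
# None marks a co-occurrence restriction.
INTERACTION = {
    ("accomplishment", "simple_past"): "end-point reached (he built a sand castle)",
    ("accomplishment", "progressive"): "pre-phase only (he was building a sand castle)",
    ("activity",       "simple_past"): "whole act incl. arbitrary boundaries (we swam)",
    ("activity",       "progressive"): "ongoing phase, boundaries out of focus (we were swimming)",
    ("state",          "simple_past"): "persistent state in the past (we knew)",
    ("state",          "progressive"): None,  # *we were knowing: restriction
}

def reading(aktionsart, operator):
    """Return the interaction sense, or None if the combination is out."""
    return INTERACTION.get((aktionsart, operator))

assert reading("state", "progressive") is None
print(reading("accomplishment", "progressive"))
```

On a 'selection theory' of the kind discussed below, a table of this sort is exactly what the set of 'readings' of each aspect gram amounts to.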

Aspect Theories and Their Historical Development

An excellent introduction to the early history of aspectology is found in Pollak (1988). The grammatical description of the Slavic verb system served both as an instigator and as a pacemaker for later developments of the theory. The first descriptions of aspectual oppositions in Slavic languages go back to the early 17th century. The Prague scholar Benedikt von Nudožer was the first to describe grammatical aspect as a complementary system of two perspectives or viewpoints, which later came to be called perfective and imperfective (these are loan translations of Russian sovershennyi vid 'completed viewpoint' and nesovershennyi vid 'uncompleted viewpoint' respectively). The Russian term vid 'viewpoint', which provided the basis for the loan translation 'aspect,' was first used by N. I. Greč in his Russian grammar of 1827. These early works shaped the concept of a system of 'aspects' or 'aktionsarten' (the terms being used synonymously by that time) of roughly the following form: the speaker has a choice between two options: he may present a situation as an undivided whole, including both its initial and final boundaries (perfective); or he may present a situation from an internal perspective, ignoring the boundaries of a situation (imperfective). This concept was then applied by Indo-Europeanists to morphological oppositions in other languages where a similar semantic distinction was found, such as Greek aorist versus imperfect, Latin perfect versus imperfect, French passé simple versus imparfait, English simple past versus progressive past, etc.

The first attempt to differentiate between grammatical aspect and the lexically inherent time-constitutional characteristics of verbs was Agrell (1908), which exploited, for that purpose, the presence of the competing synonymous terms 'aspect' and 'aktionsart' by distinguishing them in much the same manner as we do today. His main point was that grammatical aspect cuts across aktionsarten, e.g., the imperfective aspect is compatible with the nondurative aktionsart (durative versus nondurative was the terminological predecessor of what was later called atelic versus telic), thus showing that one needs two dimensions to deal with the whole range of phenomena. Even though this convinced many scholars in the first half of the 20th century of the necessity to differentiate between grammatical aspect and lexical aktionsarten, the aktionsart theory enjoyed limited success, perhaps due to the fact that aktionsarten continued to be defined as formal, i.e., derivational categories. The classic Slavist aktionsart terminology as we still find it today in handbooks of Russian and other Slavic languages (e.g., ingressive [beginning of action], egressive [end of action], delimitative [a certain temporal extension], iterative [repetition or habituality], semelfactive [single instance]) is all associated with certain derivational devices, mostly verb prefixes, occasionally also suffixes. The situation was further complicated by the oddities of the Slavic system with its enormous overlap in the formal devices used to mark grammatical aspect and aktionsarten. Attempts at a proper systematization of phasal time constitution and its interrelation with grammatical aspect operators (grams) was thus hampered by the presence of two independent approaches: a logical/philosophical one where aktionsarten were taken to be abstract schemata characterizing all predicate expressions, and a morphologically oriented one that defined aktionsarten in terms of derivational morphology, ignoring the question of how the simplex, nonderived items would fit into an overall schema of time constitution. (For a brief characterization of the situation in the early 1980s see Krifka, 1989: 102–106.) Under these circumstances, research on aktionsarten would possibly have come to a dead end, were it not for the fact that Zeno Vendler's famous 'time schemata' of 1957 (better known in the version of 1967 and extremely influential since then) brought a breath of fresh air into attempts at coming to grips with aktionsart semantics. Not originally intending to provide a classification of lexical verbs (this was often misunderstood because of his use of the term 'verb'), Vendler divides the universe of 'verbs' into four 'time schemata': activity terms, accomplishment terms, achievement terms, and state terms. The first two make up a class of processes, consisting of successive phases and allowing the use of the progressive in English. Activities and accomplishments are distinguished by the fact that activity terms are atelic ('do not have a set terminal point'), while accomplishment terms are telic and share this feature with achievements. Chief criterion for telicity is the compatibility with durative time adverbials (for an hour) and frame adverbials (in an hour); activities are compatible with the former, but not with the latter, while the reverse is true for accomplishments. Vendler's standard examples show that he does not classify lexical verbs in the sense of the classic aktionsart approach but uses actual verb phrases:

Activities: run, push a cart
Accomplishments: run a mile, draw a circle, grow up
Achievements: win a race, reach the summit
States: love somebody, have something
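Vendler's adverbial diagnostics lend themselves to a simple feature encoding. The toy classifier below is our illustration only (real aspectual composition is far more involved, as the following paragraphs show); it derives the four classes from two binary features, using the for an hour / in an hour tests for telicity:

```python
# Toy Vendler classifier: two binary features yield the four classes.
#   process: has successive phases (allows the English progressive)
#   telic:   has a set terminal point (accepts "in an hour",
#            rejects "for an hour")
VENDLER = {
    (True,  False): "activity",        # run, push a cart
    (True,  True):  "accomplishment",  # run a mile, draw a circle
    (False, True):  "achievement",     # win a race, reach the summit
    (False, False): "state",           # love somebody, have something
}

def classify(process: bool, telic: bool) -> str:
    return VENDLER[(process, telic)]

# "swim" takes "for half an hour" but not "in half an hour",
# so it is an atelic process:
print(classify(process=True, telic=False))   # activity
```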

The time schema approach had a great impact on research into aspect and time constitution in formal grammar and early computational linguistics, mainly because of its refinement by David Dowty (Dowty, 1979). The post-Vendlerian approach splits aspectologists into two groups, which I have termed 'unidimensionalists' and 'multidimensionalists' (Sasse, 2002). The unidimensional approach proceeds from the assumption that aspect and aktionsart are essentially the same phenomenon (predominantly called 'aspect' in these approaches). What traditional aspectology would have analyzed as the interaction between the two is usually treated as 'recategorization' in a unidimensional approach, e.g., the progressive aspect in English used with a telic verb as in he's dying could be looked upon as an achievement recategorized as an activity. This approach is inadequate as it does not allow a proper treatment of the subtle semantic nuances that arise from the interaction between lexical aktionsart characteristics of verbs and verb phrases on the one hand and the semantics of aspectual markers in languages with richer grammatical aspect systems on the other. Moreover, the affinities between perfectivity and telicity on the one hand, and imperfectivity and nontelicity on the other, leave one type of interaction unaccounted for, namely activities, which are typically characterized by their compatibility with both perfectivity and imperfectivity. Nevertheless, important work in particular on the compositionality of the aspectual interpretation of sentences (i.e., the influence of quantificational and determinational characteristics of noun phrases involved – cf. the examples in the previous section) has been done by unidimensionalists.

A different approach to these issues on the basis of a multidimensional understanding of aspect and aktionsart was taken by Krifka (1989). In multidimensional approaches, two levels are assumed: one of lexical aktionsart, and one of grammatical aspect, the latter interacting with the former. How this might work was informally demonstrated in the previous section. There are several variants of this approach. A 'selection theory' would assume every aktionsart to have a characteristic interaction pattern for the aspect categories, resulting in 'interaction senses'; both the perfective and the imperfective aspect would have a number of specific 'readings,' depending on the interaction with the aktionsart they combine with. A 'cumulative theory' would assume total independence, resulting in the cumulation of aktionsarten and aspect semantics. It seems that the unidimensional approach is gradually losing ground. Modern aspectology is characterized by numerous different multidimensional approaches, where aktionsart systems of various sizes are proposed, all elaborating on the Vendler categories, as we did in the previous section (cf. Breu, 1994; Bertinetto, 1997; Smith, 1997). An excellent table comparing some of these systems is found in Tatevosov (2002: 320–321); see also Sasse (2002).

See also: Generics, Habituals and Iteratives; Grammatical Meaning; Ingressives; Perfects, Resultatives, and Experientials; Role and Reference Grammar, Semantics in; Serial Verb Constructions; Temporal Logic; Tense.

Bibliography

Agrell S (1908). Aspektänderung und Aktionsartbildung beim polnischen Zeitworte: ein Beitrag zum Studium der indogermanischen Praeverbia und ihrer Bedeutungsfunktionen. Lund: Ohlsson.
Bertinetto P (1997). Il dominio tempo-aspettuale: demarcazioni, intersezioni, contrasti. Turin: Rosenberg & Sellier.
Breu W (1994). 'Interactions between lexical, temporal and aspectual meanings.' Studies in Language 18, 23–44.
Bybee J L & Dahl Ö (1989). 'The creation of tense and aspect systems in the languages of the world.' Studies in Language 13, 51–103.
Bybee J L, Perkins R & Pagliuca W (1992). The evolution of grammar: tense, aspect, and modality in the languages of the world. Chicago: University of Chicago Press.
Comrie B (1976). Aspect. Cambridge: Cambridge University Press.
Dahl Ö (1985). Tense and aspect systems. Oxford: Blackwell.
Dowty D (1979). Word meaning and Montague grammar: the semantics of verbs and times in generative semantics and in Montague's PTQ. Dordrecht: Reidel.
Krifka M (1989). Nominalreferenz und Zeitkonstitution: zur Semantik von Massentermen, Pluraltermen und Aspektklassen. Munich: Fink.
Pollak W (1988). Studien zum Verbalaspekt, mit besonderer Berücksichtigung des Französischen. Berne: Peter Lang.
Sasse H-J (2002). 'Recent activity in the theory of aspect: accomplishments, achievements, or just non-progressive state?' Linguistic Typology 6, 199–271.
Smith C (1997). The parameter of aspect (2nd edn.). Dordrecht: Kluwer.
Tatevosov S (2002). 'The parameter of actionality.' Linguistic Typology 6, 317–401.
Vendler Z (1967). Linguistics in philosophy. Ithaca, NY: Cornell University Press.

Assertion
M S Green, University of Virginia, Charlottesville, VA, USA
© 2006 Elsevier Ltd. All rights reserved.

Plato in such works as Cratylus takes a sentence to be a series of names. This view is untenable because it is not the case that in uttering a series of names one thereby says something either true or false. "Ephesus, 483, Neptune" says nothing either true or false.

By contrast, an indicative sentence, such as "Ephesus was a large city," is true, and it seems to include elements other than names. Aristotle seems to be making just this point when in De Interpretatione he remarks:

Falsity and truth have to do with combination and separation. Thus, names and verbs by themselves—for instance "man" or "white" when nothing further is added—are like the thoughts that are without combination and separation. So far they are neither true nor false (1963: 16a9).


Aristotle, however, does not distinguish in his use of "statement" among (1) the sentence used to express a proposition, (2) the proposition thereby expressed, or (3) the use of that proposition in the performance of a speech act, such as an assertion. These are three distinct phenomena, but it took over two millennia before the point was formulated clearly by the philosopher-logician Gottlob Frege. According to Frege, a proposition (such as the proposition that global warming is accelerating) is an abstract object, the existence of which does not depend upon any mind grasping it or any sentence expressing it (1984). By contrast, a sentence expresses a proposition if it is indicative and meaningful. Unlike propositions, sentences exist only as part of a conventional linguistic system and so in a clear sense depend upon human activities. Nevertheless, one can utter a sentence expressing a proposition without making an assertion. One might utter such a sentence in the course of testing a microphone, or in rehearsing one's lines for a play, or for that matter in investigating a proposition to see what its consequences are without endorsing it. For instance, one might put forth the proposition that global warming will cause the melting of Greenland's glaciers to see what would follow (the submersion of Florida, etc.) without claiming that global warming will in fact go so far as to melt Greenland's glaciers. There are three distinct items then: an indicative sentence, a proposition expressed by that sentence, and the use of that sentence and proposition expressed to make an assertion. 'Statement,' 'claim,' 'judgment,' and even 'proposition' are often used interchangeably among these three notions, and in the history of philosophy no small amount of mischief has resulted from such ambiguous usage. (For a trenchant discussion of such ambiguity, see Geach, 1972; on the distinction between illocutionary force and semantic content, see Green, 2000.)

Isolating assertions from propositions and indicative sentences is only the beginning of wisdom about the former. An assertion is a speech act. Just as one can make a promise by saying, "I promise to do so and so," so too one can assert P with such words as "I hereby assert that P." (Austin, 1962 placed assertion on his list of 'expositives'; Vanderveken, 1990 locates assertion on his list of 'assertives.') Further, just as a promise can only be made if certain conditions have been met (you cannot promise to do what is obviously outside your control), so too an assertion can only be made under certain conditions. For instance, it is doubtful that you can assert what is obvious to both you and your audience to be false. An attempt to do so will most likely be taken facetiously. Further, Strawson (1950) held that a sentence can make presuppositions that if not met prevent that sentence from being usable for assertions. If Iceland has no monarchy, then according to Strawson my utterance of, "The present queen of Iceland is beautiful," will fail to make an assertion in spite of the fact that I have in good faith uttered a meaningful indicative sentence.

To assert a proposition is at the very least to put it forth as true. Searle (1969) tried to capture this dimension by construing an assertion of proposition P as an "undertaking to the effect that P." That may be a necessary, but is not a sufficient condition: My betting a large sum on P's coming out true is an undertaking to the effect that P is true, but it is no assertion thereof. (For instance, if I bet on P without believing P true, I am not a liar; if I assert P under those conditions, I am.) Searle, however, elsewhere wrote that an assertion has word-to-world direction of fit. According to this conception, inherent in the practice of asserting is the norm that the speaker's words are supposed to track how things are. This feature distinguishes assertion from, say, commands, the aim of which is not to track the world but rather to get the world (in particular, one or more of its inhabitants) to conform to what the command enjoins. Commands are thus commonly said to have world-to-word direction of fit. This notion of word-to-world direction of fit may be elaborated further by the way in which one sticks one's neck out by making an assertion. One who asserts a proposition P is right or wrong on the issue of P, depending upon whether P is true: one is right on that issue if P is indeed the case and wrong if it is not. In thus exposing oneself to liability to error on the issue of P, one is doing more than if one had just uttered a sentence expressing P. However, liability to error on the issue of a particular proposition still does not distinguish assertion from other speech acts involving propositions. One who guesses that P is also right or wrong on the issue of P, depending on whether P is the case. How may we distinguish assertion from other proposition-involving speech acts? Williamson (1996) contended that one who asserts P is thereby open to the challenge, "How do you know?" This much may not be said of other proposition-involving speech acts. For instance, it would be inappropriate to respond to one who conjectures, guesses, or supposes for the sake of argument that a black hole inhabits the center of the Milky Way, with the question, "How do you know?" Unlike these other speech acts, an assertion purports to convey knowledge. One who makes an assertion incurs a risk through the aforementioned liability to error and, if Williamson is correct, is exposed to a conversational challenge. What if such a challenge is made?


According to the view of Brandom (1983, 1994), in that case its issuer is obliged to respond with reasons that would justify the contested claim. Those reasons might invoke items of experience or the authority of others. The issuer of the assertion might even show that the speaker raising the challenge is committed to that proposition by her own convictions. If, however, the assertor is unable to convince the interlocutor of the assertion, others will become unable to defer to his or her authority if their own assertion of that same proposition is challenged.

In light of incurring a risk of error and exposing oneself to a conversational challenge, it might seem that asserting is more trouble than it is worth. Yet, asserting is the bread and butter of conversational life, so it presumably has redeeming points. First, one whose assertions turn out to be reliably correct, or at least widely accepted, garners credibility. That in turn is a source of social authority: We look to reliable assertors to get things right. Second, it might be held that knowledge has intrinsic worth. Although making an assertion that is borne out is not a sufficient condition for knowledge (you might have gotten lucky or not have believed what you claimed), it is often associated with it. For this reason, getting things right by means of a correct assertion might be thought to be its own reward. Third, an assertion that is not challenged, or is challenged but the assertor responds with an adequate defense of the claim, may be entered into conversational common ground. A conversation begins with a (possibly empty) set of propositions that interlocutors hold in common while being mutually aware that they hold this information in common. Among fans at a baseball game this common ground might include propositions about current weather conditions, the teams' score, and perhaps a few items in the current national news. Much more will typically be common ground among members of a tightly knit family, less among people waiting at a major bus terminal (Clark, 1996). Let Si be a commitment store for interlocutor i, containing all those propositions to which interlocutor i is committed. Where 1, ..., n are interlocutors, we may define S1∩...∩n as S1 ∩ ... ∩ Sn. Even if P ∈ Si∩j, it does not follow that P is in the common ground of i and j, for neither may have any commitments with regard to the other's commitment to P. Rather, where s1, ..., sn are a group of interlocutors, P is common ground among s1, ..., sn (written P ∈ S1...n) iff (a) for all si ∈ {s1, ..., sn}, P ∈ Si, and (b) for all si ∈ {s1, ..., sn}, (a) ∈ Si.
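The definition can be paraphrased operationally: P is common ground when every interlocutor is committed to P, and every interlocutor is also committed to the fact that everyone is so committed. Below is a minimal sketch of this idea; the set-based modeling choices and names are ours, not Green's or Clark's.

```python
# Commitment stores as sets of propositions; propositions as strings.
# common_ground implements the two clauses of the definition above:
# (a) every store contains P; (b) every store also contains clause (a)
# itself, represented here by a proposition naming that shared commitment.

def common_ground(p, stores):
    everyone_committed = all(p in s for s in stores)                  # clause (a)
    mutually_aware = all(f"everyone committed to {p}" in s
                         for s in stores)                             # clause (b)
    return everyone_committed and mutually_aware

s1 = {"the child's illness is viral",
      "everyone committed to the child's illness is viral"}
s2 = {"the child's illness is viral",
      "everyone committed to the child's illness is viral"}

# P is in every store AND each store records the shared commitment, so P
# is common ground and can later be presupposed ("Since antibiotics...").
print(common_ground("the child's illness is viral", [s1, s2]))  # True
```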

S1...n will in general be a proper subset of S1∩...∩n. When s1, ..., sn are engaged in a conversation and si asserts that P, then so long as no other member of s1, ..., sn demurs, P ∈ S1...n. This may bring about progress on a question at issue (Where is the bus station?; or Why is the child ill?) or may aid in the elaboration of a plan (of getting to the bus station or curing the child), and it enables speakers at a later time to presuppose P in their speech acts. For instance, once it is common ground that the child's illness is viral, we may presuppose this fact, as shown by the acceptability of a remark such as, "Since antibiotics will be useless on her, we need to ease her discomfort some other way." The use of the phrase, "Since antibiotics will be useless on her," would be inappropriate if it were not already common ground that the child's illness is viral and that antibiotics do not treat viral infections (see Green, 2000 for a fuller discussion).

In addition to enhancing inquiry and planning and to laying the foundation for later presuppositions, an assertion may generate other pragmatic effects. For instance, one who asserts P may also suggest, insinuate, or implicate a distinct proposition through the meaning of what one says: If one asserts that Mary was poor but honest, the use of "but" suggests that there is some tension between poverty and honesty. According to Grice's theory of implicature (1989), in that case one does not, however, assert that there is such a tension. Rather, what one asserts is true only in case Mary is poor and Mary is honest. Again, if someone asserts that Mary lost a contact lens, he or she will normally be taken to suggest that Mary lost one of her own contact lenses, rather than someone else's contact lens. This implicature, due not to the conventional meaning of any particular locution but rather to norms governing conversational practice, is also no part of what he or she asserts. When a speaker asserts P, he or she often means (specifically, speaker means) a good deal more without thereby asserting those other contents.

Assertions are not just beholden to the norm of accuracy about the world; they are also held to the norm of sincerity. One who asserts what one does not believe is a liar. By contrast, it is no lie conversationally to implicate what one does not believe. If in response to your question, "Where is Mary?" I reply that she is somewhere in Spain, I may implicate that I am in no position to be more specific. If in fact I do know that she is in Pamplona, I have been evasive, misleading, and perhaps even mendacious but no liar (Adler, 1997). Similarly an assertion of P, while representing oneself as believing that P, is not also an assertion that one believes that P. Were that so, one would literally contradict oneself in saying, "P but I don't believe that P." As G. E. Moore (1993) observed, however, although this utterance does seem absurd in some way, it is not a self-contradiction. What it says could well be true. Although an assertion of P is not an assertion that one believes that P, that assertion does express, manifest, or display one's belief that P. Current research is actively engaged with the relation between assertion and the states of mind that it manifests (Williams, 1996; Davis, 2003; Green and Williams, 2006).

See also: Counterfactuals; Mood, Clause Types, and Illocutionary Force; Negation; Phrastic, Neustic, Tropic: Hare's Trichotomy; Pragmatic Determinants of What Is Said; Propositions; Speech Acts and Grammar; Speech Acts.

Bibliography

Adler J (1997). 'Lying, deceiving, or falsely implicating.' Journal of Philosophy 94, 435–452.
Aristotle (1963). De Interpretatione. Ackrill J L (trans.). Oxford: Oxford University Press.
Austin J L (1962). How to do things with words. Urmson J O & Sbisà M (eds.). Cambridge, MA: Harvard University Press.
Brandom R (1983). 'Asserting.' Noûs 17, 637–650.
Brandom R (1994). Making it explicit. Cambridge, MA: Harvard University Press.
Carston R (2003). Thoughts and utterances. Oxford: Blackwell.
Clark H (1996). Using language. Cambridge: Cambridge University Press.
Davis W (2003). Meaning, expression and thought. Cambridge: Cambridge University Press.
Frege G (1984). 'The thought: a logical inquiry.' In McGuinness B (ed.) Collected papers on mathematics, logic and philosophy. Black M et al. (trans.). Oxford: Blackwell.
Geach P (1972). 'Assertion.' In Geach P (ed.) Logic matters. Oxford: Blackwell. 254–269.
Green M (2000). 'Illocutionary force and semantic content.' Linguistics and Philosophy 23, 435–473.
Green M & Williams J (2006). Moore's paradox: new essays on belief, rationality and the first person. Oxford: Oxford University Press.
Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
McDowell J (1998). 'Meaning, communication, and knowledge.' In Meaning, knowledge and reality. Cambridge, MA: Harvard University Press.
Moore G E (1993). 'Moore's paradox.' In Baldwin T (ed.) G. E. Moore: selected writings. London: Routledge. 207–212.
Plato (1998). Cratylus. Reeve C D C (trans.). Indianapolis: Hackett.
Searle J (1969). Speech acts. Cambridge: Cambridge University Press.
Strawson P (1950). 'On referring.' Mind 59, 320–344.
Vanderveken D (1990). Meaning and speech acts. Cambridge: Cambridge University Press.
Williamson T (1996). 'Knowing and asserting.' The Philosophical Review 105, 489–523.


B

Boole and Algebraic Semantics

E L Keenan, University of California, Los Angeles, CA, USA
A Szabolcsi, New York University, New York, NY, USA

© 2006 Elsevier Ltd. All rights reserved.

In 1854 George Boole, a largely self-educated British mathematician, published a remarkable book, The laws of thought, in which he presented an algebraic formulation of "those operations of the mind by which reasoning is performed" (Bell, 1965: 1). Since then, boolean algebra has become a rich subbranch of mathematics (Koppelberg, 1989), with extensive applications in computer science and, to a lesser extent, linguistics (Keenan and Faltz, 1985). Here we illustrate the core boolean notions currently used in the study of natural language semantics. Most such applications postdate Boole's work by more than a century, though Boole (1952: 59) anticipated some of the linguistic observations, pointing out, for example, that Animals are either rational or irrational does not mean the same as Either animals are rational or animals are irrational; similarly, Men are, if wise, then temperate does not mean If all men are wise then all men are temperate. Generative grammarians rediscovered such truths in the latter third of the 20th century.

We begin with the basic notion of a partially ordered set (poset) and characterize richer structures with linguistic applications as posets satisfying additional conditions (Szabolcsi, 1997; Landman, 1991). A poset consists of a domain D of objects on which is defined a binary relation R, called a partial order relation, which is reflexive (for all x in D, xRx), transitive (xRy and yRz implies xRz), and antisymmetric (xRy and yRx implies x = y). For example, the ordinary arithmetical ≤ relation is a partial order: n ≤ n, for any natural number n; if n ≤ m and m ≤ p, then n ≤ p; and if n ≤ m and m ≤ n, then n = m. Similarly, the subset relation ⊆ is reflexive: any set A is a subset of itself. And if A ⊆ B and B ⊆ C, then A ⊆ C, so ⊆ is transitive. And finally, if A ⊆ B and B ⊆ A, then A = B, that is, A and B are the same set, since they have the

same members. So partial order relations are quite familiar from elementary mathematics. A case of interest to us is the arithmetical ≤ restricted to {0, 1}. Here 0 ≤ 1, 0 ≤ 0 and 1 ≤ 1, but 1 is not ≤ 0. Representing the truth value 'False' as 0 and 'True' as 1, we can say that a conditional sentence 'if P then Q' is True if and only if TV(P) ≤ TV(Q), where TV(P) is the truth value of P, etc. Thus we think of sentences of the True/False sort as denoting in a set {0, 1} on which is defined a partial order, ≤. The denotations of expressions in other categories defined in terms of {0, 1} inherit this order. For example, one-place predicates (P1s), such as is even or lives in Brooklyn, can be presented as properties of the elements of the set E of objects under discussion. Such a property p looks at each entity x in E and says 'True' or 'False' depending on whether x has p or not. So we represent properties p, q as functions from E into {0, 1}, and we define p ≤ q if and only if (iff) for all x in E, p(x) ≤ q(x), which just means if p is True of x, then so is q. The ≤ relation just defined on functions (from E into {0, 1}) is provably a partial order. Other expressions similarly find their denotations in a set with a natural partial order (often denoted with a symbol like '≤').

A crucial example for linguists concerns the denotations of count NPs (Noun Phrases), such as some poets, most poets, etc., as they occur in sentences (Ss) like Some poets daydream. We interpret this S as True iff there is an entity x that both the 'poet' property p and the 'daydreams' property d map to 1. Similarly, No poets daydream is True iff there is no such x. And Most poets daydream is True iff the set of x such that p(x) = d(x) = 1 outnumbers the set such that p(x) = 1 and d(x) = 0. That is, the set of poets that daydream is larger than the set that don't. And for F, G possible NP denotations (called generalized quantifiers), we define F ≤ G iff for all properties p, F(p) ≤ G(p). This relation is again a partial order.

As NP denotations map one poset (properties) to another (truth values), it makes sense to ask whether a given function F preserves the order (if p ≤ q, then F(p) ≤ F(q)), reverses it (if p ≤ q, then F(q) ≤ F(p)), or does neither. Some/all/most poets preserve the order, since, for example, is laughing loudly ≤ is laughing and Some poet is laughing loudly ≤ Some poet is laughing, which just means, recall, that if the first sentence is True, then the second is. In contrast, no poet reverses the order, since, in the same conditions, No poet is laughing implies No poet is laughing loudly. The reader can verify that fewer than five poets, neither poet, at most six poets, and neither John nor Bill are all order reversing. And here is an unexpected linguistic correlation: reversing order correlates well with those subject NPs that license negative-polarity items, such as ever:

(1a) No student here has ever been to Pinsk.
(1b) *Some student here has ever been to Pinsk.
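These definitions can be made concrete over a small finite universe. The following sketch (all names are ours, chosen for illustration) represents properties as functions from entities to truth values, defines the pointwise order, and lets one verify that a quantifier like some poet preserves the order while no poet reverses it:

```haskell
-- A toy universe of entities.
data E = John | Mary | Bill deriving (Eq, Ord, Enum, Bounded, Show)

universe :: [E]
universe = [minBound .. maxBound]

-- Properties: functions from E into the truth values, here Bool.
type P1 = E -> Bool

-- The pointwise order: leq p q iff p(x) <= q(x) for every x in E.
leq :: P1 -> P1 -> Bool
leq p q = and [p x <= q x | x <- universe]

-- Generalized quantifiers map properties to truth values.
type GQ = P1 -> Bool

some, every, no :: P1 -> GQ
some  p d = or  [p x && d x | x <- universe]
every p d = and [not (p x) || d x | x <- universe]
no    p d = not (some p d)

-- F preserves the order if p <= q implies F(p) <= F(q); it reverses
-- the order if p <= q implies F(q) <= F(p).
preserves, reverses :: GQ -> [P1] -> Bool
preserves f ps = and [f p <= f q | p <- ps, q <- ps, p `leq` q]
reverses  f ps = and [f q <= f p | p <- ps, q <- ps, p `leq` q]
```

With a property poet :: P1 fixed as restrictor, preserves (some poet) and reverses (no poet) come out True over any stock of sample properties, in line with the licensing contrast in (1).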

Observe that as a second linguistic application, modifying adjectives combine with property-denoting expressions (nouns) to form property-denoting expressions and can be represented semantically by functions f from properties to properties. For example, tall combines with student to form tall student, and semantically it maps the property of being a student to that of being a tall student. And overwhelmingly, when f is an adjective function and p a property, f(p) ≤ p. All tall students are students, etc.

In fact, the denotation sets for the expressions we have discussed possess a structure much richer than a mere partial order: they are (boolean) lattices. A lattice is a poset in which for all elements x, y of the domain, the set {x, y} has a least upper bound (lub), noted (x ∨ y) and read as 'x join y,' and a greatest lower bound (glb), noted (x ∧ y) and read as 'x meet y.' An upper bound (ub) for a subset K of a poset is an element z that every element of K is ≤ to. An ub z for K is a lub for K iff z ≤ every ub for K. Dually, a lower bound (lb) for K is an element w ≤ every element of K; such a w is a glb for K iff every lb for K is ≤ w. For example, in the truth value lattice {0, 1}, lubs are given by the standard truth table for disjunction: 1 ∨ 1 = 1, 1 ∨ 0 = 1, 0 ∨ 1 = 1, and 0 ∨ 0 = 0. That is, a disjunction of two false Ss is False, but True otherwise. Similarly, glbs are given by the truth table for conjunction: a conjunction of Ss is True iff each conjunct is, and False otherwise. So here the denotation of or is given by ∨, and that for and by ∧. And this is quite generally the case. In our lattices of functions, for example, f ∨ g, the lub of {f, g}, is that function mapping each argument x to f(x) ∨ g(x). Similarly, f ∧ g maps each x to f(x) ∧ g(x). So, for example, in the lattice of properties, the glb of {POET, DOCTOR} is that property which an entity x has iff POET(x) = 1 and DOCTOR(x) = 1, that is, x is both a poet and a doctor. So, in general, we see that the lattice structure provides denotations for the operations of conjunction and disjunction, regardless of the category of expression we are combining. We might emphasize that the kinds of objects denoted by Ss, P1s, Adjectives, NPs, etc., are quite different, but in each category conjunctions and disjunctions are generally interpreted by glbs and lubs of the conjuncts and disjuncts. So Boole's original intuition that these operations represent properties of mind – how we look at things – rather than properties specific to any one of these categories, is supported.

And we are not done: boolean lattices present an additional operation, complement, which provides a denotation for negation. Note that negation does combine with expressions in a variety of categories: with Adjectives in a bright but not very diligent student, with P1s in Most of the students drink but don't smoke, etc. Formally, a lattice is said to be bounded if its domain has a glb (noted 0) and a lub (noted 1). Such a lattice is complemented if for every x there is a y such that x ∧ y = 0 and x ∨ y = 1. If for each x there is exactly one such y, it is noted ¬x and called the complement of x. In {0, 1}, for example, ¬0 = 1 and ¬1 = 0. In our function lattices, ¬f is that function mapping each x to ¬(f(x)). In distributive lattices (ones satisfying x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z) and x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)), each x has a unique complement. A lattice is called boolean if it is a complemented distributive lattice. And, again, a linguistic generalization: the negation of an expression d in general denotes the complement of the denotation of d. Given uniqueness of complements, ¬ is a function from the lattice to itself, one that reverses the order: if x ≤ y, then ¬y ≤ ¬x. We expect, correctly then, that negation licenses negative-polarity items in the predicate, and it does: He hasn't ever been to Pinsk is natural, *He has ever been to Pinsk is not. Reversing the order on denotations, then, is what ordinary negation has in common with NPs such as no poet, neither John nor Bill, etc., which as we saw earlier also license negative-polarity items.

The boolean lattices we have so far invoked have further common properties. They are, for example, complete, meaning that each subset, not just ones of the form {x, y}, has a glb and a lub. They are also atomic (Keenan and Faltz, 1985: 56). In addition, different categories have some distinctive properties – which, with one exception, space limitations prevent us from reviewing (see also Keenan, 1983). The exception is the lattice of count NP denotations, needed for expressions such as most poets and five of John's students. This lattice has the property of having a set of complete, independent (free) generators, called individuals (denotable by definite singular NPs, such as John, Mary, this poet). This means that any function from properties to truth values is in fact a boolean function (meet, join, complement) of individuals (Keenan and Faltz, 1985: 92).
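Continuing the sketch above (again with invented names), the free generators are the Montagovian individuals, and other NP denotations arise from them by pointwise meet, join, and complement:

```haskell
-- The individual generated by an entity: the GQ true of just the
-- properties that entity has.
indiv :: E -> GQ
indiv a = \p -> p a

-- Pointwise boolean operations on generalized quantifiers.
gqAnd, gqOr :: GQ -> GQ -> GQ
gqAnd f g = \p -> f p && g p
gqOr  f g = \p -> f p || g p

gqNot :: GQ -> GQ
gqNot f = \p -> not (f p)

-- 'John and Mary' as a meet of individuals; 'neither John nor Bill'
-- as the complement of a join.
johnAndMary, neitherJohnNorBill :: GQ
johnAndMary        = indiv John `gqAnd` indiv Mary
neitherJohnNorBill = gqNot (indiv John `gqOr` indiv Bill)
```

Over a universe of n entities this boolean closure yields all 2^(2^n) possible NP denotations, 65 536 when n = 4, as the text goes on to note.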


And this implies that the truth value of an S of the form [[Det N] + P1], for P1 noncollective, is booleanly computable if we know which individuals have the N and the P1 properties. The truth of Ss like Most of the students laughed, No students laughed, etc., is determined once that information is given. This semantic reduction to individuals is a major simplification, in that the number of individuals is the number of elements in E, whereas the number of possible NP denotations is that of the power set of the power set of E. So speaking of an E with just four elements, we find there are just four individuals but 65 536 NP denotations.

These freely generated algebras show up in another, unexpected syntactic way. Szabolcsi and Zwarts (1993) observed that negation determines a context that limits the class of questions (relative clauses, etc.) we can grammatically form. Thus, the questions in (2) are natural, but those in (3), in which the predicates are negated, are not:

(2) How tall is John? How much did the car cost?
(3) *How tall isn't John? *How much didn't the car cost?

It is tempting to say simply that we cannot question out of negative contexts, but that is not correct. Both questions in (4) are acceptable:

(4) How many of the books on the list did/didn't you read?

A more accurate statement is that negation blocks questioning from domains that lack individuals (free generators), such as amounts and degrees. So, as with the distribution of negative-polarity items, we find an unexpected grammatical sensitivity to boolean structure.

Much ongoing work in algebraic semantics focuses on NPs (and their predicates) that are not boolean compounds of individuals. The predicates in the Ss in (5) force us to interpret their subjects as groups.

(5a) John and Mary respect each other/are a nice couple.
(5b) Russell and Whitehead wrote Principia mathematica together.
(5c) The students gathered in the courtyard/surrounded the building.
(5d) Six teaching assistants graded 120 papers between them.

Respecting each other (being a nice couple, etc.) holds of a group of individuals if certain conditions among them obtain. But it does not make sense to say *John respects each other (*He is a nice couple, etc.), so we must interpret and somewhat differently from the glb operator discussed earlier. We note that the other boolean connectives – such as either . . . or . . . and neither . . . nor . . . – do not admit of a reinterpretation in the way that and does (Winter, 2001). *Either John or Mary respect each other is nonsense: the disjunctive subject still forces a lub interpretation in which respect each other would hold of at least one of the disjuncts.

First attempts to provide denotations for the subject NPs in (5) involve enriching the understood domain E of entities with a partial order relation called part-of, to capture the sense in which the individual John is part of the denotation of John and Mary in (5a) or some individual student is part of the group of students in (5c), etc. The group itself is a new type of object, one that is the lub of its parts. And new types of predicates, such as those in (5), can select these new objects as arguments. Thus, the domain of a model is no longer a mere set E but is a join semi-lattice, a set equipped with a part-of partial order in which each nonempty subset has a lub (see Link, 1983, 1998; Landman, 1991).
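A sketch of such an enriched domain (our own encoding: groups as nonempty sets of the atomic entities from the earlier sketches, part-of as set inclusion, and lub as union, so that every nonempty collection of groups has a lub):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- Plural individuals: sets of atoms ordered by part-of.
type Group = Set E

partOf :: Group -> Group -> Bool
partOf = Set.isSubsetOf

-- The lub of any nonempty collection of groups is its union.
lub :: [Group] -> Group
lub = Set.unions

-- 'John and Mary' now denotes the lub of two atoms, a new kind of
-- object that collective predicates can select.
johnAndMaryGroup :: Group
johnAndMaryGroup = lub [Set.singleton John, Set.singleton Mary]

-- A toy collective predicate: true of the group as a whole, not of
-- its proper parts.
areANiceCouple :: Group -> Bool
areANiceCouple g = g == johnAndMaryGroup
```

The design point is that areANiceCouple is simply undefined-as-false on the singleton parts of the group, which is why *John respects each other has no sensible interpretation.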

Yet other new types of arguments are mass terms (6a) and event nominals (6b).

(6a) Water and alcohol don't mix.
(6b) 4000 ships passed through the lock last year. (Krifka, 1991)

Mass term denotations have a natural part-of relation: if I pour a cup of coffee from a full pot, the coffee that remains, as well as that in my cup, is part of the original coffee. So mass term denotations are in some way ontologically uniform, with the result that definitional properties of a whole also apply to their parts – the coffee I poured and the coffee that remains are both coffee. This contrasts with predicates in (5), where respect each other, gather in the courtyard, etc., do not make sense even when applied to the proper parts of their arguments. In general, mass terms are much less well understood than count terms (see Pelletier and Schubert, 1989; Link, 1998).

Last, observe that (6b) is ambiguous. It has a count reading, on which there are 4000 ships each of which passed through the lock (at least once) last year. But it also has an event reading, of interest here, on which it means that there were 4000 events of ships passing through the lock. If, for example, each ship in our fleet of 2000 did so twice, then there were 4000 passings but only 2000 ships that passed. Now, the event in (6b) has the individual passing events as parts, so such complex events exhibit something of the ontological uniformity of mass terms. But there are limits. The subevents of a single passing (throwing lines to the tugboats, etc.) are not themselves passings. So events present a part-of partial order with limited uniformity, and at least some events can be represented as the lubs of their parts.


But in distinction to pure mass terms, events are ontologically complex, requiring time and place coordinates, Agent and Patient participants, etc., resulting in a considerable enrichment of our naïve ontology (see Parsons, 1990; Schein, 1993; and Landman, 2000).

See also: Categorial Grammar, Semantics in; Dynamic Semantics; Event-Based Semantics; Formal Semantics; Monotonicity and Generalized Quantifiers; Negation; Operators in Semantics and Typed Logics; Plurality; Polarity Items; Quantifiers; Scope and Binding.

Bibliography

Bell E (1937). Men of mathematics. New York, NY: Simon and Schuster.
Boole G (1854). The laws of thought. Reprinted (1952) as vol. 2 in George Boole's collected logical works. La Salle, IL: Open Court.
Carlson G (1977). 'A unified analysis of the English bare plural.' Linguistics and Philosophy 1, 413–456.
Keenan E L (1983). 'Facing the truth: some advantages of direct interpretation.' Linguistics and Philosophy 6, 335–371.
Keenan E L & Faltz L M (1985). Boolean semantics for natural language. Dordrecht: D. Reidel.
Koppelberg S (1989). In Monk J D & Bonnet R (eds.) Handbook of boolean algebras, vol. 1. Amsterdam: North-Holland.
Krifka M (1991). 'Four thousand ships passed through the lock: object-induced measure functions on events.' Linguistics and Philosophy 13, 487–520.
Krifka M (1992). 'Thematic relations as links between nominal reference and temporal constitution.' In Sag I A & Szabolcsi A (eds.) Lexical matters. Stanford, CA: CSLI Publications. 29–53.
Landman F (1991). Structures for semantics. Dordrecht: Kluwer.
Landman F (2000). Events and plurality. Dordrecht: Kluwer.
Link G (1983). 'A logical analysis of plurals and mass terms: a lattice-theoretic approach.' In Bäuerle R et al. (eds.) Meaning, use and interpretation in language. Berlin: de Gruyter. 302–323.
Link G (1998). Algebraic semantics in language and philosophy. Stanford: CSLI.
Parsons T (1990). Events in the semantics of English: a study in subatomic semantics. Cambridge, MA: MIT Press.
Pelletier F J & Schubert L K (1989). 'Mass expressions.' In Gabbay D & Guenthner F (eds.) Handbook of philosophical logic, vol. IV. Dordrecht: D. Reidel. 327–407.
Schein B (1993). Plurals and events. Cambridge, MA: MIT Press.
Szabolcsi A (ed.) (1997). Ways of scope taking. Dordrecht: Kluwer.
Szabolcsi A & Zwarts F (1993). 'Weak islands and an algebraic semantics for scope taking.' Natural Language Semantics 1, 235–284.
Winter Y (2001). Flexibility principles in boolean semantics. Cambridge, MA: MIT Press.

C

Categorial Grammar, Semantics in

M Steedman, University of Edinburgh, Edinburgh, UK

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Categorial Grammar (CG) has always held a strong attraction for semanticists, because of the close relationship that it exhibits between syntactic and semantic types, both of which it characterizes as either functions or arguments. For example, if the semantic type of entities is e and that of propositions is t, then the semantic type (t^e)^e of the translation loved′ of the English transitive verb 'loved' as a function from (object) entities of type e to functions from (subject) entities of type e to propositions of type t is reflected in one standard notation in the following syntactic type:

(1) loved := (S\NP)/NP : λx.λy.loved′ x y

Again, there are other notations, notably the one in which the translation precedes the syntactic category to the left of the colon. However, because the syntactic combinatorics determines semantic composition, the present notation is helpfully consistent with English orthography. The standard forward and backward functional application rules of categorial grammar in (3) then allow us to write syntactic and semantic derivations in parallel.

(3) The functional application rules
(a) X/Y : f    Y : a  ⇒  X : f a    (>)
(b) Y : a    X\Y : f  ⇒  X : f a    (<)

. . . indexed >B in derivations:

(19) The forward composition rule
X/Y : f    Y/Z : g  ⇒  X/Z : λz.f(gz)    (>B)

(20)

At first glance this looks quite promising, although as an account of scope inversion it will require modification. The same apparatus of type-raising and composition underlies the standard CCG account of wh-extraction, on the assumption that wh-elements are functions over S/NP and an indirect question has category Q and type t^e (we ignore the semantics of 'wonder' and the related issue of functional vs. global interpretations of questions):

(21)

It also allows such extraction to be unbounded, as in (22a), and predicts a number of constraints on extraction, including the Fixed Subject Constraint (22b) and the Across-the-Board Condition on extraction out of coordination (22c):

(22a) I wonder [who]Q/(S/NP) [Johnnie said that Frankie shot]S/NP
(22b) I wonder [who]Q/(S/NP) *[Frankie said that shot Johnnie]
(22c) [Frankie loved,]S/NP and [I heard that she shot,]S/NP [her man.]S\(S/NP)

We therefore predict Rodman’s 1976 observation that similar constraints appear to apply to whextraction and quantifier scoping, as we saw in connection with examples (11) and the ‘all wide scope’ reading for a right node raised quantifier (as in (18)). However, there is a difficulty. The grammar as stated does not yet yield the other across-the-board Geach reading for (18), the one where ‘someone’ has narrow scope in both conjuncts. This suggests that derivations like (20) are not the way – or at least, not the only way – that existentials gain wide scope. Following preliminary work in a type-driven categorial framework by van Benthem and by Partee and Rooth, Hendriks (1993, and much earlier work), made a heroic proposal to make scope assignment entirely type driven, using a battery of type-changing rules over and above Montagovian type raising, notably including ‘argument lifting’ and ‘argument lowering.’ For example, in order to obtain the narrow scope object reading for sentences (9) and (18), Hendriks, would apparently need to subject the category of the transitive verb(s) to ‘argument lifting’ to make them functions over a type-raised object type. (The coordination rule must be correspondingly semantically generalized.) Moortgat (in unpublished technical reports) and Carpenter attempted to lexicalize Hendriks’s system using a ‘scoping connective,’ writing the standard syntactic type-lifted NP generalized quantifier category as np*s, axiomatized within a version of the Lambek calculus, of which Carpenter shows Hendriks’s rules to all be theorems. Pereira and Carpenter persuasively suggest that Keller’s earlier idea of ‘Nested Cooper Storage’ is essentially a syntactic reification of the way the categorial grammar makes such properties as the bound variable constraint emerge from the combinatorics of the grammar itself. However, such schemes are still very expressive. For example, of the remaining five readings for (12), four, according to which the companies either do or do not

48 Categorial Grammar, Semantics in

share the same two representatives, and in which the specific sets of three samples either do or do not vary with the representatives, are immediately available. However, as Park pointed out, it is much less clear that a fifth reading exists, as claimed by Shieber, Pereira, and Dalrymple, according to which three companies outscopes four samples, which in turn outscopes two representatives – that is, where both representatives of any given company saw the same four samples, but not the same four as the two representatives of either of the other two companies. There are a number of other constraints that remain hard to explain on any of these accounts. The most striking is the fact that most quantifiers fail to invert scope at all, in the sense of binding a syntactically commanding existential as the true universals each and every and their derivatives do in sentences like (10). For example, there seems to be no inverting reading for the following: (23) Some linguist speaks at least three languages. (9  3/#  39)

The existentials also appear to differ from the universals in being immune to 'island effects' in their ability to yield wide scope readings, thus:

(24a) Every witness claimed that Johnnie had dated some woman. (∀∃/∃∀)
(24b) Every witness claimed that some woman had dated Johnnie. (∀∃/∃∀)

Accordingly, Park, following Fodor and Sag, suggested that the apparent wide-scope readings of existentials in examples like (9) arose from a referential reading for the nonuniversal quantifiers, according to which they referred to a contextually available individual, which gave the appearance of having scope everywhere. When combined with CCG syntax, this immediately explained their inability to invert scope and their indifference to islands, as well as making the across-the-board narrow scope reading for (18) available and explaining the impossibility of mixed scope readings as proposed by Geach under his combinatory categorial approach. Steedman and Farkas proposed ways of obtaining both wide-scope discourse-referential and narrow-scope dependent readings from a single underspecified representation of a referential reading. The former account identifies these representations with a generalization of Skolem terms within a CCG framework and investigates several other ways in which syntax limits available scopings.

The assumption that all quantifiers other than the true universals are (possibly dependent) referential expressions rather than generalized quantifiers also explains their otherwise anomalous ability to apparently bind pronouns that they do not structurally command, as in the famous 'Donkey Sentences':

(25) Every farmer who owns a donkey feeds it.

The preferred reading of the sentence, according to which each farmer feeds the donkey(s) he or she owns, arises not from quantifier binding of it by a noncommanding existential generalized quantifier, but from discourse anaphora to a dependent donkey represented by a Skolem term in the variable bound by the universal, which does command the pronoun via the category defined at (15a). (This approach to the problem avoids a number of complications that accrue to the standard 'E-type pronoun' and Discourse Representation Theory (DRT) accounts.)
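The mechanism can be seen in the logical form. On the Skolem-term analysis, the preferred reading of (25) comes out roughly as follows, where sk_donkey is an illustrative Skolem function (our notation) returning, for each farmer, a donkey he or she owns:

```latex
\forall x\,[(farmer'(x) \land own'(sk_{donkey}(x),\,x))
  \rightarrow feed'(sk_{donkey}(x),\,x)]
```

Because sk_donkey(x) contains the universally bound variable, the donkey covaries with the farmer even though no existential quantifier ever takes scope over the pronoun.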

Anaphora

Reflexives

Bach and Partee influentially proposed that reflexive pronouns like 'herself' in sentences like Frankie cooled herself had the type of a type-raised NP. In present notation this can be captured in the following lexical entry (26) (we ignore the complication of object-bound reflexives for the moment):

(26) herself := (S\NP)\((S\NP)/NP) : λv.λx.self′ v x

Thus, we have: (27)

This suggestion (which is also implicit in Geach's account) was taken up by Szabolcsi, Jacobson, Morrill, and Carpenter. In order to prevent the type-raised reflexive category from allowing unbounded reflexive binding analogous to the unbounded quantifier scope inversion in (11a) (contrary to Condition A of the Chomskyan Binding Theory) and in order to enforce nonlocal binding of pronouns (Condition B, discussed later), Bach and Partee added two extra Cooper stores to the grammar, one of which restricted anaphors like reflexives to local binding. Szabolcsi and Jacobson assumed that Condition A followed from other semantic considerations (although not, of course, from profane structural command relations defined over logical forms), but Morrill restricted his version of category (26) to lexical categories using slash modalities, whereas Steedman lexicalized anaphor binding on the verb, obtaining Condition A as a consequence of the intrinsic locality of lexicalization and Condition C (profanely stated).


All of these authors generalized their accounts to cover non-subject-bound local anaphors, and 'pied-piping' of elements like prepositions, as in the following:

(28a) I introduced Frankie and Johnnie to each other.
(28b) *I introduced each other to Frankie and Johnnie.

Pronouns

Bach and Partee treated pronouns like 'her' as type-raised NPs introducing a distinguished variable both into the logical form and into a third 'Superpronoun' Cooper store from which the variable could in due course be unstored to become bound by a (nonlocal) antecedent. This system was used to account for a very wide range of phenomena involving pronouns, including simple intrasentential anaphora, quantifier-bound variable anaphora, 'Donkey Anaphora' of the kind exhibited in (25), and 'Sloppy Anaphora' of the kind exhibited in the following 'Paycheck Sentence', in which 'it' can refer to a different paycheck, as well as the same one:

(29) The woman who put her paycheck in the bank was wiser than the woman who put it in a teapot.

However, adding even one store to a grammar (or the equivalent automaton) can bring about dramatic increases in expressive power, and subsequent accounts again attempted to eliminate storage in favor of more compositional mechanisms. Szabolcsi treated the pronoun as having a category drawn from a family including the following:

(30a) him := ((S\NP)\(S\NP))\(S/NP) : λf.λg.λx.f(gx)x
(30b) he := ((S\NP)\(S\NP))/(S\NP) : λf.λg.λx.f(gx)x

This yielded derivations like (31) and (32). Hepple and Jacobson developed a related account in which the pronoun itself was an identity function, as follows:

(33) him := NP^NP : λx.x

The superscript NP is a syntactically inert functional type-constructor, to which none of the standard combinatory rules apply. A ‘Geach Rule’ G, corresponding to unary application of the composition combinator B, applies to pass NP-marking up the derivation to a point where a combinatory rule called Z with a semantics identical to that in Szabolcsi’s pronominal schema (30) (although in other respects the theories differ) applies to permit the remainder of the derivation to proceed as in Szabolcsi’s (31), thus:

In both Szabolcsi’s and Jacobson’s proposals, a ‘family’ of Z-rules or Z-related categories (and in the latter case G-rules) is needed to allow for cases like the following, in which multiple binding relations may not be across adjacent elements and may either nest or intercalate – see Jacobson 1999:(23),(29): (35) Frankiei said Johnnie j thinks hej/shei saw heri/ himj.

In this respect, anaphoric relations stand in sharp contrast to standard syntactic multiple dependencies like the following, which generally have to do one or the other:

(36a) Which violinᵢ is this sonataⱼ easy to playⱼ onᵢ?
(36b) *Which sonataᵢ is this violinⱼ easy to playᵢ onⱼ?

Jacobson shows that GZ CCG supports the interpretation of 'Pronouns of Laziness' in 'Paycheck Sentences' like (29) among several other varieties of anaphora. A number of problems remain open for this attractive account of anaphora. One is that it is not yet clear whether the approach can be extended to donkey anaphora of the kind exhibited in (25) (Szabolcsi, 1989: 315; Jacobson, 1999: n. 1). On the assumption from the last section that existentials are referential rather than quantificational, which itself seems to be a forced move in any surface derivational semantics, it seems likely that an extension is possible, because the syntactic position of the donkey is one out of which universal quantifiers can invert scope, and the types seem otherwise compatible with composition via the GZ machinery, in combination with a 'wrapping' combinator – essentially as in Jacobson's analysis of paycheck sentences. The problem of Quantifier Scope in variable-free CCG is discussed by Barker.

A more serious open problem that both Szabolcsi and Jacobson draw attention to is that neither account of anaphora is compatible with the standard account of syntactic extraction in CCG (see Jacobson, 1999: n. 17). For example, on the earlier standard assumptions about the type of an indirect question, the following derivation in Szabolcsi's version blocks:

The problem is that the type of the category of 'he saw', ((S\NP)\(S\NP))/NP, is not the same as that of 'Everyone saw,' S/NP in (21). The same is true for the corresponding type S^NP/NP in Jacobson's system. As Jacobson pointed out, the GZ mechanism offers no semantically coherent way for the category of 'what,' Q/(S/NP), to shift into something that will combine with S^NP/NP. Jacobson suggested that the CCG account of extraction could be dispensed with and replaced with an autonomous relative clause formation mechanism, such as the feature-passing mechanism of Generalized Phrase Structure Grammar (GPSG). However, to capture the full range of extraction phenomena covered in CCG, including multiple long-range dependencies, is known to be problematic in GPSG. The only constrained way of doing it that is known is equivalent to reimplementing the 'mildly context sensitive' CCG mechanism as a Linear Indexed Grammar via a form of storage. However, the GZ-plus-WRAP apparatus already amounts to an apparatus of at least this power, because CCG is known to be weakly equivalent to a Linear Indexed Grammar, and Geached NP arguments can stack. It is not immediately clear what combining two such mechanisms in a single grammar would do to its expressive power.

The fact that anaphora is notoriously free from the syntactic constraints that limit relativization, as example (35) shows, makes it seem possible to conclude that Pronoun Anaphora should not be treated in syntax at all. Differences of opinion on this question are not confined to Categorial Grammarians.
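Whatever one's view of that question, the semantic core of the G and Z rules is compact enough to state directly; a sketch in the same style as before (the names gRule and zRule are ours):

```haskell
-- The Geach rule G: unary composition, passing a dependency through.
gRule :: (b -> c) -> (a -> b) -> (a -> c)
gRule f h = \x -> f (h x)

-- The binding rule Z, with the semantics given in (30): identify the
-- pronoun's open slot with the function's own argument,
-- i.e. \f.\g.\x. f (g x) x.
zRule :: (b -> a -> c) -> (a -> b) -> (a -> c)
zRule f h = \x -> f (h x) x

-- e.g. zRule saw' mother' denotes the property of seeing one's own
-- mother: the pronominal slot is bound to the subject argument.
```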

Computational Semantics for Categorial Grammars

Although we have used first-order predicate calculus as a semantic representation and the λ-calculus as the 'glue language' for assembling logical forms, as is standard, other representations are possible and may be important in computational applications of Categorial Grammar. Calder, Klein and Zeevat and Bos et al. use versions of DRT as a first-order representation language as a basis for inference. Baldridge proposed a 'flatter' Hybrid Logic Dependency Semantics (HLDS), which is better adapted to realization in Language Generation. The properties of such representations are important for computational efficiency, but we will not explore them here.

Conclusion

It appears that the Montagovian vision of a semantically transparent categorial theory of grammar for natural languages is close to realization, with profound implications for our view of both syntax and semantics. The remaining difficulties look soluble and represent the most important challenge for the approach.

See also: Boole and Algebraic Semantics; Compositionality; Dynamic Semantics; Formal Semantics; Game-theoretical Semantics; Lexical Semantics; Monotonicity and Generalized Quantifiers; Montague Semantics; Operators in Semantics and Typed Logics; Quantifiers; Scope and Binding; Syntax-Semantics Interface; Truth Conditional Semantics and Meaning.

Bibliography

Bach E (1979). 'Control in Montague grammar.' Linguistic Inquiry 10, 513–531.
Bach E & Partee B (1980). 'Anaphora and semantic structure.' In Papers from the 16th Regional Meeting of the Chicago Linguistic Society, Parasession on Pronouns and Anaphora. Chicago, IL: University of Chicago, Chicago Linguistic Society.
Baldridge J (2002). Lexically specified derivational control in combinatory categorial grammar. Ph.D. thesis, University of Edinburgh.
Barker C (2005). 'Remarks on Jacobson 1999.' Linguistics and Philosophy 28, 211–242.
van Benthem J (1986). Essays in logical semantics. Dordrecht: Reidel.
Bos J, Clark S, Steedman M, Curran J R & Hockenmaier J (2004). 'Wide-coverage semantic representations from a CCG parser.' In Proceedings of the 20th International Conference on Computational Linguistics (COLING '04), Geneva. ACL.
Calder J, Klein E & Zeevat H (1988). 'Unification categorial grammar: a concise, extendable grammar for natural language processing.' In Proceedings of the 12th Annual Conference of the Association for Computational Linguistics. 83–86.
Carpenter B (1997). Type-logical semantics. Cambridge, MA: MIT Press.
Chierchia G (1984). Topics in the syntax and semantics of infinitives and gerunds. Ph.D. thesis, U. Mass., Amherst.
Cooper R (1983). Quantification and syntactic theory. Dordrecht: Reidel.
Cresswell M J (1973). Logics and languages. London: Methuen.
Curry H B & Feys R (1958). Combinatory logic: Vol. I. Amsterdam: North Holland.
Dowty D (1979). 'Dative movement and Thomason's extensions of Montague grammar.' In Davis S & Mithun M (eds.) Linguistics, philosophy, and Montague grammar. Austin: University of Texas Press. 153–222.
Dowty D (1985). 'On recent analyses of the semantics of control.' Linguistics and Philosophy 8, 291–331.
Farkas D (2001). 'Dependent indefinites and direct scope.' In Condoravdi C & de Lavalette R (eds.) Logical perspectives on language and information. Stanford, CA: CSLI Publications. 41–72.
Fodor J D & Sag I (1982). 'Referential and quantificational indefinites.' Linguistics and Philosophy 5, 355–398.
Geach P (1972). 'A program for syntax.' In Davidson D & Harman G (eds.) Semantics of natural language. Dordrecht: Reidel. 483–497.
Hendriks H (1993). Studied flexibility: categories and types in syntax and semantics. Ph.D. thesis, Universiteit van Amsterdam.
Hepple M (1990). The grammar and processing of order and dependency: a categorial approach. Ph.D. thesis, University of Edinburgh.
Jacobson P (1992a). 'Flexible categorial grammars: questions and prospects.' In Levine R (ed.) Formal grammar. Oxford: Oxford University Press. 129–167.

Jacobson P (1992b). 'The lexical entailment theory of control and the tough construction.' In Sag I & Szabolcsi A (eds.) Lexical matters. Stanford, CA: CSLI Publications. 269–300.
Jacobson P (1999). 'Towards a variable-free semantics.' Linguistics and Philosophy 22, 117–184.
Keenan E & Faltz L (1985). Boolean semantics for natural language. Dordrecht: Reidel.
Kempson R & Cormack A (1981). 'Ambiguity and quantification.' Linguistics and Philosophy 4, 259–309.
Klein E & Sag I A (1985). 'Type-driven translation.' Linguistics and Philosophy 8, 163–201.
Lewis D (1970). 'General semantics.' Synthese 22, 18–67.
Montague R (1973). 'The proper treatment of quantification in ordinary English.' In Hintikka J, Moravcsik J M E & Suppes P (eds.) Approaches to natural language: proceedings of the 1970 Stanford workshop on grammar and semantics. Dordrecht: Reidel. 221–242. [Reprinted in Montague 1974, 247–279.]
Montague R (1974). Formal philosophy: papers of Richard Montague. Thomason R H (ed.). New Haven, CT: Yale University Press.
Moortgat M (1988). Categorial investigations. Ph.D. thesis, Universiteit van Amsterdam. Published by Foris, Dordrecht, 1989.
Morrill G (1988). Extraction and coordination in phrase structure grammar and categorial grammar. Ph.D. thesis, University of Edinburgh.
Morrill G (1990). 'Intensionality and boundedness.' Linguistics and Philosophy 13, 699–726.
Morrill G (1994). Type-logical grammar. Dordrecht: Kluwer.

Park J (1995). 'Quantifier scope and constituency.' In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge MA. San Francisco, CA: Morgan Kaufmann. 205–212.
Partee B & Rooth M (1983). 'Generalised conjunction and type ambiguity.' In Bäuerle R, Schwarze C & von Stechow A (eds.) Meaning, use, and interpretation of language. Berlin: de Gruyter. 361–383.
Pereira F (1990). 'Categorial semantics and scoping.' Computational Linguistics 16, 1–10.
Rodman R (1976). 'Scope phenomena, "movement transformations," and relative clauses.' In Partee B (ed.) Montague grammar. New York: Academic Press. 165–177.
Shaumyan S (1977). Applikativnaja grammatika kak semantičeskaja teorija estestvennyx jazykov. Moscow: Nauka. [Translated as Applicational grammar as a semantic theory of natural language. Edinburgh: Edinburgh University Press.]
Shieber S, Pereira F & Dalrymple M (1996). 'Interactions of scope and ellipsis.' Linguistics and Philosophy 19, 527–552.
Steedman M (1996). Surface structure and interpretation. Cambridge, MA: MIT Press.
Steedman M (2000). The syntactic process. Cambridge, MA: MIT Press.
Szabolcsi A (1989). 'Bound variables in syntax: are there any?' In Bartsch R, van Benthem J & van Emde Boas P (eds.) Semantics and contextual expression. Dordrecht: Foris. 295–318.
Szabolcsi A (1992). 'On combinatory grammar and projection from the lexicon.' In Sag I & Szabolcsi A (eds.) Lexical matters. Stanford, CA: CSLI Publications. 241–268.

Categorizing Percepts: Vantage Theory

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

Vantage theory (VT) is a theory of cognitive categorization in terms of point of view or 'vantage.' The underlying assumption is that categorization reflects human needs and motives. VT was created by the late Robert E. MacLaury as a way of explaining the meanings and development of color terms across languages when he found prototype theory and fuzzy-set logic inadequate to the task (see MacLaury 1986, 1987, 1991, 1995, 1997, 2002). VT explains

. how people construct categories by analogy to the way they form points of view in space–time;
. how categories are organized;
. how categories divide; and
. the relations between categories.

In VT, cognition consists of selective attention to perception. To form a category, selected perceptions and reciprocal emphases on similarity and difference must be integrated in a principled way. A vantage is a point of view constructed by analogy to physical experience as though it were one or more 'space–motion coordinates' on a spatial terrain. Reminiscent of gestalt theory is MacLaury's claim that a category is the sum of its coordinates, plus their arrangement into one or more vantages by selective emphasis. "The maker of the category, in effect, names the ways he constructs it rather than the set of its components as detached from himself" (1997: 153). The categorizer's perspectives can be illustrated by an ornithologist 'zooming in' to see a mallard among the ducks on a lake, or alternatively 'zooming out' to see the assembled mallards, widgeon, and pintails as ducks. The mallard is the 'fixed coordinate'; the rest


a 'mobile coordinate.' In both views, there is a pair of coordinates that we can loosely differentiate as 'species' and 'genus.' Imagine mapping warm-category colors (red, yellow) in an array of colored blocks representing the entire color spectrum. If each of the terms 'red' and 'yellow' is mapped differently, there is a single vantage. If there is coextensive mapping (evidence of a composite 'warm' color) with red focus [see Color Terms], red will dominate at the primary level of concentration, Level 1 in Figure 1, and attention is on 'similarity,' S, as the mobile coordinate. At Level 2 concentration, attention to the mobile coordinate yellow notes its similarity to red (as a warm color). At Level 3, there is attention to D, the 'difference' of fixed coordinate yellow from red. Here, yellow is recessive. Thus does VT model the dominant–recessive pattern of coextensive naming. The dominant vantage includes reinforced attention to similarity; the recessive vantage reinforces attention to difference.

Figure 1 Red focus in the composite 'warm' category; cf. MacLaury (1997: 145).

Thus, a category is composed of

. selected perceptions;
. reciprocal and mutable emphases on similarity and difference; and
. at least one arrangement of these coordinates into levels of concentration – which is the vantage.

VT has been applied to many cognitive fields: the category of person in 16th century Aztec; literacy choices for Yaquis in Arizona; choice of orthography in Japan; semantic extensions in English, French, Spanish, and Zapotec; lexical choices in French; varieties of Japanese women's speech; terms of address in Japanese; the process of argumentation; and foreign language learning.

See also: Category-Specific Knowledge; Cognitive Semantics; Color Terms; Field Work Methods in Semantics; Memes; Mentalese; Polysemy and Homonymy; Psychology, Semantics in; Representation in Language and Mind.

Bibliography

MacLaury R E (1986). Color in Mesoamerica, vol. 1. Ph.D. diss., UCB. No. 8718073. Ann Arbor: UMI University Microfilms.
MacLaury R E (1987). 'Coextensive semantic ranges: different names for distinct vantages of one category.' In Need B, Schiller E & Bosch A (eds.) Papers from the Twenty-Third Annual Meeting of the Chicago Linguistics Society. Chicago: Chicago Linguistics Society. 268–282.
MacLaury R E (1991). 'Social and cognitive motivations of change: measuring variability in color semantics.' Language 67, 34–62.
MacLaury R E (1995). 'Vantage theory.' In Taylor J R & MacLaury R E (eds.) Language and the cognitive construal of the world. Berlin: Mouton de Gruyter. 231–276.
MacLaury R E (1997). Color and cognition in Mesoamerica: constructing categories as vantages. Austin: University of Texas Press.
MacLaury R E (ed.) (2002). Language Sciences 24. Special edition on vantage theory.
Taylor J R & MacLaury R E (eds.) (1995). Language and the cognitive construal of the world. Berlin: Mouton de Gruyter.

Relevant Website

http://serwisy.umcs.lublin.pl/adam.glaz/vt.html

Category-Specific Knowledge

B Z Mahon and A Caramazza, Harvard University, Cambridge, MA, USA

© 2006 Elsevier Ltd. All rights reserved.

Principles of Organization

Theories of the organization of conceptual knowledge in the brain can be distinguished according to their underlying principles. One class of theories, based on the neural structure principle, assumes that the organization of conceptual knowledge is governed by representational constraints internal to the brain itself. Two types of neural constraints have been invoked: modality-specificity and domain-specificity. The second class of theories, based on the correlated structure principle, assumes that the organization of conceptual knowledge in the brain is a reflection of the statistical co-occurrence of object properties in the world. Neuropsychological evidence, and more recently findings from functional neuroimaging, have figured centrally in attempts to evaluate extant theories of the organization of conceptual knowledge. Here we outline the main theoretical perspectives as well as the empirical phenomena that have been used to inform these perspectives.

Modality-Specific Hypotheses

The first class of theories based on the neural structure principle assumes that the principal determinant of the organization of conceptual knowledge is the sensory-motor modality (e.g., visual, motor, verbal) through which the information was acquired or is typically processed. For instance, the knowledge that hammers are shaped like a T would be stored in a semantic subsystem dedicated to representing the visual structure of objects, while the information that hammers are used to pound nails would be represented in a semantic subsystem dedicated to functional knowledge of objects. There have been many proposals based on the modality-specific assumption (Beauvois, 1982; Warrington and McCarthy, 1983, 1987; Warrington and Shallice, 1984; Allport, 1985; Martin et al., 2000; Humphreys and Forde, 2001; Barsalou et al., 2003; Cree and McRae, 2003; Crutch and Warrington, 2003; Gallese and Lakoff, 2005). One way to distinguish between these proposals concerns whether, and to what extent, conceptual knowledge is assumed to be represented independently of sensory-motor processes. At one extreme are theories that assume conceptual content reduces to (i.e., actually is) sensory-motor content (e.g., Allport, 1985; Pulvermüller, 2001; Barsalou et al., 2003; Gallese and Lakoff, 2005). Central to such proposals is the notion of simulation, or the automatic reactivation of sensory-motor information in the course of conceptual processing. Toward the other end of the continuum are modality-based hypotheses of the organization of conceptual knowledge that assume that sensory-motor systems may be damaged without compromising the integrity of conceptual knowledge (Martin et al., 2000; Plaut, 2002; Crutch and Warrington, 2003; for discussion, see Mahon and Caramazza, 2005).

Domain-Specific Hypotheses

A second class of proposals based on the neural structure principle assumes that the principal determinant of the organization of conceptual knowledge is semantic category (e.g., Gelman, 1990; Carey and Spelke, 1994; Caramazza and Shelton, 1998; Kanwisher, 2000). For instance, in this view, it may be argued that conceptual knowledge of conspecifics and conceptual knowledge of animals are represented and processed by functionally dissociable processes/systems. Crucially, in this view, the first order principle of organization of conceptual processing is semantic category and not the modality through which that information is typically processed. One proposal along these lines, the Domain-Specific Hypothesis (Caramazza and Shelton, 1998), argues that conceptual knowledge is organized by specialized (and functionally dissociable) neural circuits innately dedicated to the conceptual processing of different categories of objects. However, not all Domain-Specific theories assume that the organization of the adult semantic system is driven by innate parameters (e.g., Kanwisher, 2000).

Feature-Based Hypotheses

The class of hypotheses based on the correlated structure principle has focused on articulating the structure of semantic memory at the level of semantic features. There are many and sometimes diverging proposals along these lines; common to all of them is the assumption that the relative susceptibility to impairment (under conditions of neurological damage) of different concepts is a function of statistical properties of the semantic features that comprise those concepts. For instance, on some models, the degree to which features are shared by a number of concepts is contrasted with their relative distinctiveness (Devlin et al., 1998; Garrard et al., 2001; Tyler and Moss, 2001). Another dimension that is introduced by some theorists concerns dynamical properties of damage in the system; for instance, Tyler and Moss assume that features that are more correlated with other features will be more resistant to damage, due to greater reciprocal activation (or support) from those features with which they are correlated (but see Caramazza et al., 1990). Distinctive features, on the other hand, will not receive as much reciprocal support, and will thus be more susceptible to damage. More recently, theorists have expanded on the original proposal of Tyler and colleagues, adding dimensions such as familiarity, typicality, and relevance (e.g., Cree and McRae, 2003; Sartori and Lombardi, 2004). Feature-based models of semantic memory have in general emphasized an empirical, bottom-up approach to modeling the organization of semantic memory, usually drawing on feature generation tasks (e.g., Garrard et al., 2001; Tyler and Moss, 2001; Cree and McRae, 2003; Sartori and Lombardi, 2004). For this reason, feature-based models have been useful in generating hypotheses about the types of parameters that may contribute to the organization of conceptual knowledge.
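The statistical notions these models trade in are easy to make concrete; the toy sketch below (entirely invented miniature data, purely illustrative) computes how widely a feature is shared across concepts and how often two features co-occur, the raw ingredients of the distinctiveness and correlation measures discussed above:

```haskell
-- Toy concept-by-feature data (invented for illustration).
concepts :: [(String, [String])]
concepts =
  [ ("dog",    ["has-legs", "has-eyes", "barks"])
  , ("cat",    ["has-legs", "has-eyes", "meows"])
  , ("hammer", ["has-handle", "used-for-pounding"]) ]

-- How many concepts share a feature; distinctive features score low.
sharedBy :: String -> Int
sharedBy f = length [c | (c, fs) <- concepts, f `elem` fs]

-- A crude distinctiveness measure: 1 / number of concepts having f
-- (assumes f occurs in at least one concept).
distinctiveness :: String -> Double
distinctiveness f = 1 / fromIntegral (sharedBy f)

-- Two features are correlated to the extent that they co-occur.
coOccur :: String -> String -> Int
coOccur f g = length [c | (c, fs) <- concepts, f `elem` fs, g `elem` fs]
```

On such measures, 'has-eyes' comes out widely shared (low distinctiveness) while 'barks' is distinctive, the kind of asymmetry between living-thing and artifact features that the models above exploit.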

Clues from Cognitive Neuropsychology

Neuropsychological studies of patients with semantic impairments have figured centrally in developing and evaluating the hypotheses outlined above. Of particular importance has been a clinical profile described as category-specific semantic deficit. Patients with category-specific semantic deficits present with disproportionate or even selective difficulty for conceptual knowledge of stimuli from one semantic category compared to other semantic categories. For instance, the reports of category-specific impairment by Warrington and her collaborators (e.g., Warrington and McCarthy, 1983, 1987; Warrington and Shallice, 1984) documented patients who were impaired for living things compared to nonliving things, or the reverse: greater difficulty with nonliving things than living things. Since those seminal reports, the phenomenon of category-specific semantic deficit has been documented by a number of investigators (for recent reviews of the clinical evidence, see Humphreys and Forde, 2001; Tyler and Moss, 2001; Capitani et al., 2003).

The clinical profile of category-specific semantic deficits is in itself quite remarkable, and can be striking. Consider some aspects of the following case of category-specific semantic deficit for living animate things. Patient EW (Caramazza and Shelton, 1998) was 41% correct (7/16) for naming pictures of animals but was in the normal range for naming pictures of non-animals (e.g., artifacts, fruit/vegetables) when the pictures from the different semantic categories were matched jointly for familiarity and visual complexity. EW was also severely impaired for animals (60%; 36/60 correct) in a task in which the patient was asked to decide, yes or no, whether the depicted stimulus was a real object or not. In contrast, EW performed within the normal range for making the same types of judgments about non-animals. On another task, EW was asked to decide whether a given attribute was true of a given item (e.g., Is it true that eagles lay eggs?). EW was severely impaired for attributes pertaining to animals (65% correct) but within the normal range for non-animals. EW was equivalently impaired for both visual/perceptual and functional/associative knowledge of living things (65% correct for both types of knowledge) but was within the normal range for both types of knowledge for non-animals.

The phenomenon of category-specific semantic deficits frames what has proven to be a rich question:

How could the conceptual system be organized such that various conditions of damage can give rise to conceptual impairments that disproportionately affect specific semantic categories? There is emerging consensus that any viable answer to this question must be able to account for the following three facts (for discussion, see Caramazza and Shelton, 1998; Tyler and Moss, 2001; Capitani et al., 2003; Cree and McRae, 2003; Samson and Pillon, 2003).

Fact I: The grain of the phenomenon. Patients can be disproportionately impaired for either living animate things (i.e., animals) compared to living inanimate things (i.e., fruit/vegetables) (e.g., Hart and Gordon, 1992; Caramazza and Shelton, 1998) or living inanimate things compared to living animate things (e.g., Hart et al., 1985; Crutch and Warrington, 2003; Samson and Pillon, 2003). Patients can also be impaired for nonliving things compared to living things (Hillis and Caramazza, 1991).

Fact II: The profile of the phenomenon. Category-specific semantic deficits are not associated with disproportionate impairments for modalities or types of information (e.g., Caramazza and Shelton, 1998; Laiacona and Capitani, 2001; Farah and Rabinowitz, 2003; Samson and Pillon, 2003). Conversely, disproportionate impairments for modalities or types of information are not necessarily associated with category-specific semantic deficits (e.g., Lambon-Ralph et al., 1998; Miceli et al., 2001).

Fact III: The severity of overall impairment. The direction of category-specific semantic deficits (i.e., living things worse than nonliving things, or vice versa) is not related to the overall severity of semantic impairment (Garrard et al., 1998; Zannino et al., 2002).

Explaining Category-Specific Semantic Deficits

Most of the empirical and theoretical work in category-specific semantic deficits has been driven by an attempt to evaluate a theoretical proposal first advanced by Warrington, Shallice, and McCarthy (Warrington and McCarthy, 1983, 1987; Warrington and Shallice, 1984): the Sensory/Functional Theory. The Sensory/Functional Theory is an extension of the modality-specific semantic hypothesis (Beauvois, 1982) discussed above. In addition to assuming that the semantic system is functionally organized by modality or type of information, the Sensory/Functional Theory assumes that the recognition/identification of items from different semantic categories (e.g., living things compared to nonliving things) differentially depends on different modality-specific semantic subsystems. In general, Sensory/Functional theories assume that the ability to identify/recognize living things differentially depends on visual/perceptual knowledge, while the ability to identify/recognize nonliving things differentially depends on functional/associative knowledge (for data and/or discussion of the assumption that different types or modalities of information are differentially important for different semantic categories, see Farah and McClelland, 1991; Caramazza and Shelton, 1998; Garrard et al., 2001; Tyler and Moss, 2001; Cree and McRae, 2003).

There are several versions of the Sensory/Functional Theory, each of which has emphasized a different correspondence between the type or modality of information and the category of items that differentially depends on that type of information. For instance, it has been proposed that color information is more important for fruit/vegetables than animals (e.g., Humphreys and Forde, 2001; Cree and McRae, 2003; Crutch and Warrington, 2003) while biological motion information is more important for animals than for fruit/vegetables (e.g., Cree and McRae, 2003). Another version of the Sensory/Functional Theory (Humphreys and Forde, 2001) holds that there is greater perceptual crowding (due to greater perceptual overlap) at a modality-specific input level for living things than for nonliving things. Thus, damage to this visual modality-specific input system will disproportionately affect processing of living things compared to nonliving things (see also Tranel et al., 1997; Dixon, 2000; Laws et al., 2002). Common to theories based on the Sensory/Functional Assumption is that at least some category-specific semantic deficits can be explained by assuming damage to the modality or type of information upon which recognition/identification of items from the impaired category differentially depends (for discussion see Humphreys and Forde, 2001). Other authors have argued that the fact that category-specific semantic deficits are not necessarily associated with deficits to a modality or type of knowledge (see Fact II above) indicates that the phenomenon does not provide support for Sensory/Functional theories (for discussion, see Caramazza and Shelton, 1998; Tyler and Moss, 2001; Capitani et al., 2003; Cree and McRae, 2003; Samson and Pillon, 2003).

Caramazza and Shelton (1998) argued for a Domain-Specific interpretation of category-specific semantic deficits that emphasized the hypothesis that the grain of category-specific semantic deficits will be restricted to a limited set of categories. Specifically, because the Domain-Specific Hypothesis (Caramazza and Shelton, 1998) assumes that the organization of conceptual and perceptual processing is determined by innate constraints, the plausible categories of category-specific semantic impairment are 'animals,' 'fruit/vegetables,' 'conspecifics,' and possibly tools. Recent discussion of this proposal (Caramazza and Mahon, 2005; see also Shelton et al., 1998) has capitalized on using the category 'conspecifics' as a test case. Consistent with expectations that follow from the Domain-Specific Hypothesis, patients have been reported who are relatively impaired for knowledge of conspecifics but not for animals or objects (e.g., Kay and Hanley, 1999; Miceli et al., 2000) as well as the reverse: equivalent impairment for animals and objects but spared knowledge of conspecifics (Thompson et al., 2004). Thus, the domain of conspecifics can be spared or impaired independently of both objects and other living things, and importantly, an impairment for conspecifics is not necessarily associated with a general impairment for living things compared to nonliving things.

Another line of research has sought an account of category-specific semantic deficits in terms of feature-based models of semantic memory organization. For instance, the Organized Unitary Content Hypothesis (OUCH) (Caramazza et al., 1990) makes two principal assumptions. First, conceptual features corresponding to object properties that often co-occur will be stored close together in semantic space; and second, focal brain damage can give rise to category-specific semantic deficits either because the conceptual knowledge corresponding to objects with similar properties is stored in adjacent neural areas, or because damage to a given property will propagate damage to highly correlated properties. While the original OUCH model is not inconsistent with the currently available data from category-specific semantic deficits, it is too unconstrained to provide a principled answer to the question of why the various facts are as they are.

Other feature-based models have emphasized the differential susceptibility to impairment of different types of semantic features. These models often assume random (or diffuse) damage to a conceptual system that is not organized by modality or object domain. For instance, in order to account for category-specific semantic deficits, the semantic memory model advanced by Tyler and Moss (2001) makes three assumptions bearing on the relative susceptibility to impairment of different classes of semantic features: (a) Living things have more shared features than nonliving things, or put differently, nonliving things have more distinctive/informative features than living things; (b) For living things, biological function information is highly correlated with shared perceptual properties (e.g., can see/has eyes). For artifacts, function information is highly correlated with distinctive perceptual properties (e.g., used for spearing/has tines). (c) Features that are highly correlated with other features will be more resistant to damage

(Caramazza and Mahon, 2005; see also Shelton et al., 1998) has capitalized on using the category ‘conspecifics’ as a test case. Consistent with expectations that follow from the Domain-Specific Hypothesis, patients have been reported who are relatively impaired for knowledge of conspecifics but not for animals or objects (e.g., Kay and Hanley, 1999; Miceli et al., 2000) as well as the reverse: equivalent impairment for animals and objects but spared knowledge of conspecifics (Thompson et al., 2004). Thus, the domain of conspecifics can be spared or impaired independently of both objects and other living things, and importantly, an impairment for conspecifics is not necessarily associated with a general impairment for living things compared to nonliving things. Another line of research has sought an account of category-specific semantic deficits in terms of featurebased models of semantic memory organization. For instance, the Organized Unitary Content Hypothesis (OUCH) (Caramazza et al., 1990) makes two principal assumptions. First, conceptual features corresponding to object properties that often cooccur will be stored close together in semantic space; and second, focal brain damage can give rise to category-specific semantic deficits either because the conceptual knowledge corresponding to objects with similar properties is stored in adjacent neural areas, or because damage to a given property will propagate damage to highly correlated properties. While the original OUCH model is not inconsistent with the currently available data from categoryspecific semantic deficits, it is too unconstrained to provide a principled answer to the question of why the various facts are as they are. Other feature-based models have emphasized the differential susceptibility to impairment of different types of semantic features. These models often assume random (or diffuse) damage to a conceptual system that is not organized by modality or object domain. For instance, in order to account for categoryspecific semantic deficits, the semantic memory model advanced by Tyler and Moss (2001) makes three assumptions bearing on the relative susceptibility to impairment of different classes of semantic features: (a) Living things have more shared features than nonliving things, or put differently, nonliving things have more distinctive/informative features than living things; (b) For living things, biological function information is highly correlated with shared perceptual properties (e.g., can see/has eyes). For artifacts, function information is highly correlated with distinctive perceptual properties (e.g., used for spearing/has tines). (c) Features that are highly correlated with other features will be more resistant to damage

56 Category-Specific Knowledge

than features that are not highly correlated (see also Devlin et al., 1998; Garrard et al., 2001; Cree and McRae, 2003). This proposal, termed the Conceptual Structure Account, predicts that a disproportionate deficit for living things will be observed when damage is relatively mild, while a disproportionate deficit for nonliving things will only arise when damage is so severe that all that is left in the system are the highly correlated shared perceptual and function features of living things. Recent work investigating the central prediction of the theory through cross sectional analyses of patients at varying stages of Alzheimer’s disease has not found support for this prediction (Garrard et al., 1998; Zannino et al., 2002).
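The division of labor among assumptions (a)-(c) can be made concrete with a small, deliberately toy simulation. Everything below is invented for illustration (the feature inventories, correlation values, damage rule, and the two probe tasks are ours, not Tyler and Moss's implementation); it merely shows how weakly correlated distinctive features produce an early deficit for living things, while a tightly correlated shared cluster lets living things retain residual knowledge under severe damage.

```python
# p(feature survives diffuse damage): weakly correlated features are lost
# first (assumption c); at maximal severity only correlation protects a feature.
def p_survive(corr: float, severity: float) -> float:
    return (1 - severity) + severity * corr

def p_any(correlations, severity):
    """Probability that at least one of the listed features survives."""
    p_all_lost = 1.0
    for corr in correlations:
        p_all_lost *= 1 - p_survive(corr, severity)
    return 1 - p_all_lost

# Invented feature structure: living things have weakly correlated
# distinctive features (assumption a) but a tightly correlated shared
# perceptual/functional cluster (assumption b); an artifact's distinctive
# features are correlated with its function (assumption b).
LIVING = {"distinctive": [0.2], "shared": [0.9, 0.9]}
ARTIFACT = {"distinctive": [0.7], "shared": [0.5]}

for severity in (0.3, 0.95):
    for name, feats in (("living", LIVING), ("artifact", ARTIFACT)):
        identify = p_any(feats["distinctive"], severity)   # naming needs a distinctive feature
        residual = p_any(feats["distinctive"] + feats["shared"], severity)  # any knowledge at all
        print(f"severity {severity:.2f}  {name:8s}  identify={identify:.2f}  residual={residual:.2f}")
```

Run as-is, the mild setting yields a selective identification deficit for living things (0.76 vs. 0.91), while the severe setting erodes artifacts' residual knowledge faster (0.87 vs. 0.99) because only the highly correlated shared features of living things persist. This is the qualitative pattern the Conceptual Structure Account predicts, although, as just noted, the cross-sectional patient data have not borne it out.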

Clues from Functional Neuroimaging

Increasingly, the neuropsychological approach is being complemented by functional neuroimaging studies of category-specificity. There is a large body of evidence from functional neuroimaging that demonstrates differentiation by semantic domain within modality-specific systems specialized for processing object form and object-associated motion. Specifically, within the ventral object processing system, areas on the inferior surface of the temporal lobes process object-associated form and texture, while areas on the lateral surfaces of the temporal lobes process object-associated movement (Kourtzi and Kanwisher, 2000; Beauchamp et al., 2002, 2003). Within both form/texture- and motion-specific areas of the ventral object processing system, there is differentiation by semantic category. On the inferior surface of the temporal lobe (e.g., fusiform gyrus), more lateral areas are differentially involved in the processing of living things, while more medial regions are differentially involved in the processing of nonliving things. Furthermore, human face stimuli, in comparison to non-face stimuli (including animals without faces), differentially activate distinct regions of the inferior temporal cortex (Kanwisher et al., 1999). On the lateral surface of the temporal lobes, more superior regions (e.g., superior temporal sulcus) are differentially involved in the processing of motion associated with living things, while more inferior regions (e.g., middle temporal gyrus) are differentially involved in the processing of motion associated with nonliving things (for review, see Kanwisher, 2000; Martin and Chao, 2001; Beauchamp et al., 2002, 2003; Bookheimer, 2002; Caramazza and Mahon, 2003, 2006).

All of the theoretical frameworks outlined above have been applied to the data from functional neuroimaging. One widely received view, the Sensory/Motor Theory, developed by Martin, Wiggs, Ungerleider, and Haxby (1996; see also Martin et al., 2000), assumes that conceptual knowledge of different categories of objects is stored close to the modality-specific input/output areas that are active when we learn about and interact with those objects. Other authors have interpreted these patterns of activation within a Domain-Specific Framework (e.g., Kanwisher, 2000; Caramazza and Mahon, 2003, 2006), while still others have interpreted these findings within a distributed semantic memory model that emphasizes experience-dependent and/or feature-based properties of concepts (e.g., Tarr and Gauthier, 2000; Levy et al., 2001; Martin and Chao, 2001; Bookheimer, 2002; Devlin et al., 2002). Regardless of what the correct interpretation of these functional neuroimaging data turns out to be, they suggest a theoretical approach in which multiple dimensions of organization can be distinguished. In particular, whether the category-specific foci of activation are interpreted within the Domain-Specific Framework or within a feature-based framework, these data suggest the inference that the organization of conceptual knowledge in the cortex is driven both by the type or modality of the information and by its content-defined semantic category.

Conclusion

The three proposals that we have reviewed (the Sensory/Functional Theory, the Domain-Specific Hypothesis, and the Conceptual Structure Account) are contrary hypotheses of the causes of category-specific semantic deficits. However, the individual assumptions that comprise each account are not necessarily mutually contrary as proposals about the organization of semantic memory. In this context, it is important to note that each of the hypotheses discussed above makes assumptions at a different level in a hierarchy of questions about the organization of conceptual knowledge. At the broadest level is the question of whether or not conceptual knowledge is organized by Domain-Specific constraints. The second question is whether conceptual knowledge is represented in modality-specific semantic stores specialized for processing/storing a specific type of information, or is represented in an amodal, unitary system. The third level in this hierarchy of questions concerns the organization of conceptual knowledge within any given object domain (and/or modality-specific semantic store): the principles invoked by feature-based models may prove useful for articulating answers to this question (for further discussion of the various levels at which specific hypotheses have been articulated, see Caramazza and Mahon, 2003).


Different hypotheses of the organization of conceptual knowledge are more or less successful at accounting for different types of facts. Thus, it is important to consider the specific assumptions made by each hypothesis in the context of a broad range of empirical phenomena. The combination of neuropsychology and functional neuroimaging is beginning to provide promising grounds for raising theoretically motivated questions concerning the organization of conceptual knowledge in the human brain.

Acknowledgments

Preparation of this manuscript was supported in part by NIH grant DC04542 to A. C., and by an NSF Graduate Research Fellowship to B. Z. M. Portions of this article were adapted from Caramazza and Mahon (2003) and Caramazza and Mahon (2006).

See also: Categorizing Percepts: Vantage Theory; Concepts; Evolution of Semantics; Human Reasoning and Language Interpretation; Lexical Meaning, Cognitive Dependency of; Mentalese; Psychology, Semantics in; Synesthesia; Synesthesia and Language.

Bibliography

Allport D A (1985). 'Distributed memory, modular subsystems and dysphasia.' In Newman & Epstein (eds.) Current perspectives in dysphasia. New York: Churchill Livingstone.
Barsalou L W, Simmons W K, Barbey A K & Wilson C D (2003). 'Grounding conceptual knowledge in the modality-specific systems.' Trends in Cognitive Sciences 7, 84–91.
Beauchamp M S, Lee K E, Haxby J V & Martin A (2002). 'Parallel visual motion processing streams for manipulable objects and human movements.' Neuron 34, 149–159.
Beauchamp M S, Lee K E, Haxby J V & Martin A (2003). 'FMRI responses to video and point-light displays of moving humans and manipulable objects.' Journal of Cognitive Neuroscience 15, 991–1001.
Beauvois M F (1982). 'Optic aphasia: a process of interaction between vision and language.' Proceedings of the Royal Society (London) B298, 35–47.
Bookheimer S (2002). 'Functional MRI of language: new approaches to understanding the cortical organization of semantic processing.' Annual Review of Neuroscience 25, 151–188.
Capitani E, Laiacona M, Mahon B & Caramazza A (2003). 'What are the facts of category-specific deficits? A critical review of the clinical evidence.' Cognitive Neuropsychology 20, 213–262.
Caramazza A, Hillis A E, Rapp B C & Romani C (1990). 'The multiple semantics hypothesis: Multiple confusions?' Cognitive Neuropsychology 7, 161–189.

Caramazza A & Shelton J R (1998). ‘Domain specific knowledge systems in the brain: the animate-inanimate distinction.’ Journal of Cognitive Neuroscience 10, 1–34. Caramazza A & Mahon B Z (2003). ‘The organization of conceptual knowledge: the evidence from categoryspecific semantic deficits.’ Trends in Cognitive Sciences 7, 325–374. Caramazza A & Mahon B Z (2006). ‘The organization of conceptual knowledge in the brain: the future’s past and some future directions.’ Cognitive Neuropsychology 23, 13–38. Carey S & Spelke E (1994). ‘Domain-specific knowledge and conceptual change.’ In Hirschfeld L A & Gelman S A (eds.) Mapping the mind: domain-specificity in cognition and culture. New York: Cambridge University Press. 169–200. Cree G S & McRae K (2003). ‘Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns).’ Journal of Experimental Psychology: General 132, 163–201. Crutch S J & Warrington E K (2003). ‘The selective impairment of fruit and vegetable knowledge: a multiple processing channels account of fine-grain category specificity.’ Cognitive Neuropsychology 20, 355–373. evlin J T, Gonnerman L M, Anderson E S & Seidenberg M S (1998). ‘Category-specific semantic deficits in focal and widespread brain damage: a computational account.’ Journal of Cognitive Neuroscience 10, 77–94. Devlin J T, Russell R P, Davis M H, Price C J, Moss H E, Fadili M J & Tyler L K (2002). ‘Is there an anatomical basis for category-specificity? Semantic memory studies in PET and fMRI.’ Neuropsychologia 40, 54–75. Dixon M J (2000). ‘A new paradigm for investigating category-specific agnosia in the new millennium.’ Brain and Cognition 42, 142–145. Farah M J & McClelland J L (1991). ‘A computational model of semantic memory impairment: modality specific and emergent category specificity.’ Journal of Experimental Psychology: General 120, 339–357. Farah M J & Rabinowitz C (2003). ‘Genetic and environmental influences on the organization of semantic memory in the brain: is ‘‘living things’’ an innate category?’ Cognitive Neuropsychology 20, 401–408. Gallese V & Lakoff G (2005). ‘The brain’s concepts: the role of the sensory-motor system in conceptual knowledge.’ Cognitive Neuropsychology, 22, 455–479. Garrard P, Patterson K, Watson P C & Hodges J R (1998). ‘Category specific semantic loss in dementia of Alzheimer’s type. Functional-anatomical correlations from cross sectional analyses.’ Brain 121, 633–646. Garrard P, Lambon-Ralph M A, Hodges J R & Patterson K (2001). ‘Prototypicality, distinctiveness, and intercorrelation: analyses of semantic attributes of living and nonliving concepts.’ Cognitive Neuropsychology 18, 125–174. Gelman R (1990). ‘First principles organize attention to and learning about relevant data: number and the animateinanimate distinction as examples.’ Cognitive Science 14, 79–106.

Hart J, Berndt R S & Caramazza A (1985). 'Category-specific naming deficit following cerebral infarction.' Nature 316, 439–440.
Hart J & Gordon B (1992). 'Neural subsystems for object knowledge.' Nature 359, 60–64.
Hillis A E & Caramazza A (1991). 'Category-specific naming and comprehension impairment: a double dissociation.' Brain 114, 2081–2094.
Humphreys G W & Forde E M (2001). 'Hierarchies, similarity, and interactivity in object recognition: ''Category-specific'' neuropsychological deficits.' Behavioral and Brain Sciences 24, 453–509.
Kanwisher N (2000). 'Domain specificity in face perception.' Nature Neuroscience 3, 759–763.
Kanwisher N, Stanley D & Harris A (1999). 'The fusiform face area is selective for faces, not animals.' Neuroreport 10, 183–187.
Kay J & Hanley J R (1999). 'Person-specific knowledge and knowledge of biological categories.' Cognitive Neuropsychology 16, 171–180.
Kourtzi Z & Kanwisher N (2000). 'Activation in human MT/MST by static images with implied motion.' Journal of Cognitive Neuroscience 12, 48–55.
Laiacona M & Capitani E (2001). 'A case of prevailing deficit for non-living categories or a case of prevailing sparing of living categories?' Cognitive Neuropsychology 18, 39–70.
Lambon-Ralph M A, Howard D, Nightingale G & Ellis A W (1998). 'Are living and non-living category-specific deficits causally linked to impaired perceptual or associative knowledge? Evidence from a category-specific double dissociation.' Neurocase 4, 311–338.
Laws K R, Gale T M, Frank R & Davey N (2002). 'Visual similarity is greater for line drawings of nonliving than living things: the importance of musical instruments and body parts.' Brain and Cognition 48, 421–423.
Levy I, Hasson U, Avidan G, Hendler T & Malach R (2001). 'Center-periphery organization of human object areas.' Nature Neuroscience 4, 533–539.
Mahon B Z & Caramazza A (2005). 'The orchestration of the sensory-motor systems: clues from neuropsychology.' Cognitive Neuropsychology 22, 480–494.
Martin A & Chao L L (2001). 'Semantic memory and the brain: structure and processes.' Current Opinion in Neurobiology 11, 194–201.
Martin A & Weisberg J (2003). 'Neural foundations for understanding social and mechanical concepts.' Cognitive Neuropsychology 20, 575–587.
Martin A, Ungerleider L G & Haxby J V (2000). 'Category specificity and the brain: the sensory/motor model of semantic representations of objects.' In Gazzaniga M S (ed.) The new cognitive neurosciences. Cambridge, MA: MIT Press.

Martin A, Wiggs C L, Ungerleider L G & Haxby J V (1996). 'Neural correlates of category-specific knowledge.' Nature 379, 649–652.
Miceli G, Capasso R, Daniele A, Esposito T, Magarelli M & Tomaiuolo F (2000). 'Selective deficit for people's names following left temporal damage: an impairment of domain-specific conceptual knowledge.' Cognitive Neuropsychology 17, 489–516.
Miceli G, Fouch E, Capasso R, Shelton J R, Tomaiuolo F & Caramazza A (2001). 'The dissociation of color from form and function knowledge.' Nature Neuroscience 4, 662–667.
Plaut D C (2002). 'Graded modality-specific specialization in semantics: a computational account of optic aphasia.' Cognitive Neuropsychology 19, 603–639.
Pulvermüller F (2001). 'Brain reflections of words and their meaning.' Trends in Cognitive Sciences 5, 517–524.
Samson D & Pillon A (2003). 'A case of impaired knowledge for fruit and vegetables.' Cognitive Neuropsychology 20, 373–401.
Sartori G & Lombardi L (2004). 'Semantic relevance and semantic disorders.' Journal of Cognitive Neuroscience 16, 439–452.
Shelton J R, Fouch E & Caramazza A (1998). 'The selective sparing of body part knowledge: a case study.' Neurocase 4, 339–351.
Tarr M J & Gauthier I (2000). 'FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise.' Nature Neuroscience 3, 764–769.
Thompson S A, Graham K S, Williams G, Patterson K, Kapur N & Hodges J R (2004). 'Dissociating person-specific from general semantic knowledge: roles of the left and right temporal lobes.' Neuropsychologia 42, 359–370.
Tranel D, Logan C G, Frank R J & Damasio A R (1997). 'Explaining category-related effects in the retrieval of conceptual and lexical knowledge for concrete entities.' Neuropsychologia 35, 1329–1339.
Tyler L K & Moss H E (2001). 'Towards a distributed account of conceptual knowledge.' Trends in Cognitive Sciences 5, 244–252.
Warrington E K & McCarthy R (1983). 'Category specific access dysphasia.' Brain 106, 859–878.
Warrington E K & McCarthy R (1987). 'Categories of knowledge: further fractionations and an attempted integration.' Brain 110, 1273–1296.
Warrington E K & Shallice T (1984). 'Category-specific semantic impairment.' Brain 107, 829–854.
Zannino G D, Perri R, Carlesimo G A, Pasqualetti P & Caltagirone C (2002). 'Category-specific impairment in patients with Alzheimer's disease as a function of disease severity: a cross-sectional investigation.' Neuropsychologia 40, 2268–2279.


Causatives
J J Song, University of Otago, Dunedin, New Zealand
© 2006 Elsevier Ltd. All rights reserved.

Defining Causative Constructions

The causative construction is a linguistic expression that denotes a complex situation consisting of two events: (1) the causing event in which the causer does something, and (2) the caused event in which the causee carries out an action or undergoes a change of condition or state as a result of the causer's action. The following example is such a linguistic expression.

(1) The teacher made Matthew paint the house

In (1), the causer (the teacher) did something, and as a result of that action the causee (Matthew) in turn carried out the action of painting the house. The causative construction has two main characteristics. First, the causer noun phrase and the expression of cause must be foregrounded, with the causee noun phrase and the expression of effect backgrounded. The foregrounding of the causer noun phrase and the expression of cause is achieved by putting these two expressions in grammatically more prominent positions in the sentence than the causee noun phrase and the expression of effect. Second, the expression of the causer's action must be without specific meaning; all that is encoded by that expression is the pure notion of cause. For instance, the sentence in (2), although denoting a causative situation similar to (1), is not regarded as an example of the causative construction but rather as an example of what may be referred to broadly as the causal construction.

(2) Matthew painted the house because the teacher instructed him to do so

There are two clear differences between (1) and (2). First, in (1) the causer noun phrase, the teacher, and the expression of cause, made, are the subject and the main predicate of the sentence, respectively (i.e., they are foregrounded). The causee noun phrase and the predicate of effect, on the other hand, appear as a nonsubject noun phrase and a subordinate predicate, respectively (i.e., they are backgrounded). This situation is reversed in (2); the causee noun phrase and the expression of effect appear as the subject and the predicate of the main clause, respectively, with both the causer noun phrase and the expression of cause located in the subordinate clause. Second, in (1) the expression of the causer's action, made, lacks specific lexical content. In (2), on the other hand, the expression of the causer's action, instructed, has specific lexical content.

Types of Causative Constructions

The most widely known classification of causatives is based on the formal fusion between the predicate of cause and that of effect. In this classification, three different types of causative are recognized: (1) lexical, (2) morphological, and (3) syntactic. The lexical causative type involves suppletion (no formal similarity between the noncausative verb and its causative counterpart). In this type, the formal fusion of the expression of cause and of effect is maximal, with the effect that the causative verb cannot be analyzed into two morphemes. Examples of this type include English die vs. kill and German sterben 'to die' vs. töten 'to kill.' In the morphological type, the expression of cause is in the form of a derivational affix, with the expression of effect realized by a basic verb to which that affix is attached. In Japanese, for example, the suffix -(s)ase can apply to basic verbs to derive causative verbs, for example, ik- '[X] to go' vs. ik-ase- 'to cause [X] to go.' The causative morpheme can be in the form of not only suffixes but also prefixes, infixes, and circumfixes. In the syntactic type, the expression of cause and of effect are separate verbs, and they occur in different clauses. This type has already been exemplified by (1). Swahili provides another good example (Vitale, 1981: 153).

(3) Ahmed a-li-m-fanya mbwa a-l-e samaki mkubwa
    Ahmed he-PAST-him-make dog he-eat-SUBJ fish large
    'Ahmed made the dog eat a large fish'

The three causative types must be understood to serve only as reference points. There are languages that fall somewhere between any two of the ideal types. For instance, Japanese lexical causative verbs lie between the lexical type and the morphological type because they exhibit degrees of physical resemblance – from almost identical to totally different – to their corresponding noncausative verbs, for example, tome- 'to cause [X] to stop' vs. tomar- '[X] to stop,' oros- 'to bring down' vs. ori- 'to come down,' age- 'to raise' vs. agar- 'to rise,' and koros- 'to kill' vs. sin- 'to die.'
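The regularity of the morphological type can be shown with a toy word-formation rule. The sketch below implements the standard textbook allomorphy for Japanese -(s)ase (-ase after consonant-final stems, -sase after vowel-final stems); the stem tabe- 'to eat' is our added example, and real Japanese morphophonology is of course richer than this.

```python
VOWELS = set("aeiou")

def causativize(stem: str) -> str:
    """Derive a Japanese-style causative stem with -(s)ase:
    -sase after vowel-final stems, -ase after consonant-final stems."""
    suffix = "sase" if stem[-1] in VOWELS else "ase"
    return f"{stem}-{suffix}"

# ik- '[X] to go' -> ik-ase- 'to cause [X] to go' (from the text);
# tabe- 'to eat' is an added illustrative vowel-final stem.
for stem in ("ik", "tabe"):
    print(stem, "->", causativize(stem))
```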

The Semantics of Causatives: Two Major Types of Causation

As previously described, the causative construction is a linguistic expression that denotes a situation consisting of two events: (1) the causing event in which the causer does something, and (2) the caused event in which the causee carries out an action or undergoes a change of condition or state as a result of the causer's action. There are two mixed but distinct levels of description contained in this definition: the level of events and the level of participants. The first level is where the relationship between the causing event and the caused event is captured. The second level concerns the interaction between the causer and the causee. Most descriptions of the semantics of causatives revolve around these two levels of description. Two major causation types – the distinction between direct and indirect causation, and the distinction between manipulative and directive causation – are discussed in this article because they are most highly relevant to the three causative types (lexical, morphological, and syntactic) previously described. The first semantic type of causation is based on the level of events, and the second is based on the level of participants.

The distinction between direct and indirect causation hinges on the temporal distance between the causing event and the caused event. If the caused event is temporally adjacent to the causing event, without any other event intervening between them, the overall causative situation may be regarded as direct. For example, if X makes Y fall into the river by pushing Y, the causing event of X pushing Y immediately precedes the caused event of Y's falling into the river. There is no intervening or intermediary event that plays a role in the realization of the caused event; in direct causation, the caused event is immediately temporally adjacent to the causing event. As a matter of fact, the temporal distance between cause and effect in direct causation may be so close that it sometimes becomes difficult perceptually, if not conceptually, to divide the whole causative situation into the causing event and the caused event (e.g., the cat jumped as John slammed the door). Thus, direct causation represents a causative situation in which the causing event and the caused event abut temporally on one another, the former immediately preceding the latter.

Indirect causation, on the other hand, involves a situation in which the caused event may not immediately follow the causing event in temporal terms. There will be at least one event intervening between the causing and caused events. In order for this to be the case, however, the temporal distance between the two events must be great enough for the whole causative situation to be divided clearly into the causing event and the caused event. For example, X fiddles with Y's car, and days later Y is injured in a car accident due to the failure of the car. In this situation, the causing event is X's fiddling with Y's car and the caused event is Y's getting injured in the accident. But these events are separated temporally from one another by the intermediary event (the failure of the car). The intervening event plays an important role in bringing about the caused event. Note that, although this causative situation is indirect, the caused event is connected temporally with the causing event in an inevitable flow or chain of events: Y's accident caused by the failure of the car and the failure of the car in turn caused by X's fiddling with it (e.g., Croft, 1991). There can potentially be more than one event intervening between the causing event and the caused event in indirect causation.

The other level of description involves the major participants of the causative situation, namely the causer and the causee. Depending on the nature and extent of the causer's relationship with the causee in the realization of the caused event, the causative situation may be either manipulative or directive. If the causer acts physically on the causee, then the causative situation is regarded as manipulative. The causer manipulates the causee in bringing about the caused event. The situation used previously to exemplify direct causation is also manipulative because the causer physically pushes the causee into the river. In other words, this particular causative situation represents direct and manipulative causation. The causer may rely on an intermediary physical process or means in effecting the caused event. For example, if X causes Y to fall by pushing a shopping trolley straight into Y, the causer effects the caused event through some physical means, as in the case of direct manipulative causation already discussed. But this intermediary physical process also represents an independent event intervening between the causing event and the caused event – in fact, this intermediary event itself constitutes a causative situation consisting of a causing event (X exerting physical force directly on the shopping trolley) and a caused event (the shopping trolley rolling straight into Y). The causative situation in question may thus be regarded as indirect and manipulative causation.

The causer may also draw on a nonphysical (e.g., verbal or social) means in causing the causee to carry out the required action or to undergo the required change of condition or state. For example, if medical doctor X causes patient Y to lie down for a medical examination by giving Y an instruction or direction to do so, the causative situation is directive causation. This particular situation is also direct in that there is no other event intervening between the causing event and the caused event – Y's lying down is immediately temporally adjacent to X's uttering the instruction. Again, directive causation may also be indirect rather than direct. For example, if X causes Y to type a letter by giving Z an instruction to cause Y to do the typing, then we are dealing with indirect directive causation (e.g., I had the letter typed by Tim by asking Mary to tell him to do so). The caused event is separated from the causing event by the intervening event of Z asking Y to comply with X's original instruction.
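Since the two distinctions are logically independent, the four scenarios just discussed occupy the four cells of a 2 × 2 grid. The sketch below merely encodes that grid; the descriptions paraphrase the article's own examples, and the encoding itself is ours, for illustration.

```python
from dataclasses import dataclass

@dataclass
class CausativeSituation:
    description: str
    direct: bool        # no intervening event between causing and caused event
    manipulative: bool  # the causer acts physically on the causee

SITUATIONS = [
    CausativeSituation("X pushes Y into the river", True, True),
    CausativeSituation("X pushes a trolley that knocks Y over", False, True),
    CausativeSituation("Doctor X tells patient Y to lie down", True, False),
    CausativeSituation("X has Z tell Y to type the letter", False, False),
]

for s in SITUATIONS:
    temporal = "direct" if s.direct else "indirect"
    relation = "manipulative" if s.manipulative else "directive"
    print(f"{s.description}: {temporal}, {relation}")
```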

Causative Continuum and Causation Types

There is a strong correlation between the causative and the causation types. The three causative types – lexical, morphological, and syntactic – can be interpreted as forming a continuum of formal fusion or physical propinquity between the expressions of cause and of effect, as schematized in Figure 1. There is a strong tendency for manipulative or direct causation to be mapped onto the causative types on the left of the continuum in preference to those on the right of the continuum. Directive or indirect causation, on the other hand, is far more likely to be expressed by the causative types on the right of the continuum than by those on the left of the continuum. This is often cited in the literature as an excellent example in support of iconic motivations in language. Iconic motivation (or iconicity) is the principle that the structure of language should, as closely as possible, reflect the structure of what is expressed by language (e.g., Haiman, 1985).

Figure 1 Continuum of formal fusion.

Recently, the correlation between the causative and causation types has been reinterpreted as that between the degree of difficulty in bringing about the caused event and the degree of transparency in expressing the notion of causation (Shibatani, 2002). For example, directive (as opposed to manipulative) causation involves a nonphysical (verbal or social) means of causing the causee to carry out the required action or to undergo the required change of condition or state. Directive causation entails a higher degree of difficulty in bringing about the caused event than manipulative causation. For one thing, in directive causation the causer relies on the causee's cooperation; the (prospective) causee can refuse to comply with the (prospective) causer's wish or demand. This higher degree of difficulty in bringing about the caused event is then claimed to be reflected by the tendency for directive causation to be expressed by the causative types to the right, rather than the left, on the continuum. The notion of causation is much more transparently encoded in the syntactic causative (i.e., a separate lexical verb of cause) than in the lexical causative, where the notion of causation is not expressed by a separate morpheme, let alone by a separate verb.

Moreover, there is a large amount of cross-linguistic evidence in support of the case marking of the causee being determined by semantic factors relating to the agency, control, affectedness, or even topicality of the main participants of the causative situation (e.g., Cole, 1983). In Bolivian Quechua, for example, the causee noun phrase is marked by the accusative case if the causee is directly under the causer's authority and has no control over his or her action. If, however, the causee has control over his or her action but complies voluntarily with the causer's wish, the causee noun phrase appears in the instrumental case. Some linguists have made an attempt to reinterpret such variable case marking to reflect the conceptual integration of the causee in the causative event as a whole (Kemmer and Verhagen, 1994). This fits in well with the view that the simple noncausative clause pattern serves as a structural model for morphological causatives (Song, 1996). The causative of intransitive verbs is based on the transitive clause pattern, and the causative of transitive verbs is based on either the ditransitive clause pattern or the transitive clause pattern with an adjunct.

See also: Aspect and Aktionsart; Concessive Clauses; Counterfactuals; Event-Based Semantics; Perfects, Resultatives, and Experientials; Role and Reference Grammar, Semantics in; Serial Verb Constructions.

Bibliography

Cole P (1983). 'The grammatical role of the causee in universal grammar.' International Journal of American Linguistics 49, 115–133.
Comrie B (1976). 'The syntax of causative constructions: cross-language similarities and divergences.' In Shibatani M (ed.) Syntax and semantics 6: the grammar of causative constructions. New York: Academic Press. 261–312.
Comrie B (1989). Language universals and linguistic typology (2nd edn.). Oxford: Blackwell.
Comrie B & Polinsky M (eds.) (1993). Causatives and transitivity. Amsterdam & Philadelphia: John Benjamins.
Croft W (1991). Syntactic categories and grammatical relations: the cognitive organization of information. Chicago: University of Chicago Press.
Dixon R M W (2000). 'A typology of causatives: form, syntax and meaning.' In Dixon R M W & Aikhenvald A Y (eds.) Changing valency: case studies in transitivity. Cambridge, UK: Cambridge University Press. 30–83.
Haiman J (1985). Natural syntax: iconicity and erosion. Cambridge, UK: Cambridge University Press.
Kemmer S & Verhagen A (1994). 'The grammar of causatives and the conceptual structure of events.' Cognitive Linguistics 5, 115–156.
Saksena A (1982). 'Contact in causation.' Language 58, 820–831.
Shibatani M (1976). 'The grammar of causative constructions: A conspectus.' In Shibatani M (ed.) Syntax and semantics 6: the grammar of causative constructions. New York: Academic Press. 1–40.
Shibatani M (2002). 'Introduction: some basic issues in the grammar of causation.' In Shibatani M (ed.) The grammar of causation and interpersonal manipulation. Amsterdam & Philadelphia: John Benjamins. 1–22.
Song J J (1995). 'Review of B. Comrie and M. Polinsky (eds.) Causatives and transitivity.' Lingua 97, 211–232.
Song J J (1996). Causatives and causation: a universal-typological perspective. London & New York: Addison Wesley Longman.
Song J J (2001). Linguistic typology: morphology and syntax. Harlow and London: Pearson Education.
Talmy L (1976). 'Semantic causative types.' In Shibatani M (ed.) Syntax and semantics 6: the grammar of causative constructions. New York: Academic Press. 43–116.
Vitale A J (1981). Swahili syntax. Dordrecht & Cinnaminson: Foris Publications.

Character versus Content
C Spencer, Howard University, Washington, DC, USA
© 2006 Elsevier Ltd. All rights reserved.

David Kaplan introduced the content/character distinction in his monograph Demonstratives (1989a) to distinguish between two aspects of the meaning of (1) indexical and demonstrative pronouns (e.g., ‘I’, ‘here,’ ‘now,’ ‘this,’ and ‘that’) and (2) sentences containing them. Roughly, the content of an occurrence of an indexical or demonstrative is the individual to which it refers, and its character is the rule that determines its referent as a function of context. Thus, an indexical has different contents in different contexts, but its character is the same in all contexts. For instance, the character of ‘I’ is the rule, or function, that maps a context of utterance to the speaker of that context. This function determines that the content of Sally’s utterance of ‘I’ is Sally.
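The two-layer architecture can be written out directly: a character is a function from contexts of use to contents, and the content of a sentence is itself a function from circumstances of evaluation to truth values. The following sketch is our own toy rendering of this picture (the Context and Circumstance fields and the miniature world-states are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:          # a context of use
    speaker: str
    time: int
    place: str

@dataclass
class Circumstance:     # a circumstance of evaluation
    cold: set           # pairs (individual, time) that are cold in this world-state

# Character of 'I': a rule mapping any context of use to the speaker of
# that context. The character is constant; the content varies by context.
def character_I(ctx: Context) -> str:
    return ctx.speaker

# Character of 'I am cold now': maps a context of use to a content; the
# content is a function from circumstances of evaluation to truth values.
def character_I_am_cold_now(ctx: Context):
    speaker, t = character_I(ctx), ctx.time   # referents fixed by the context
    return lambda circ: (speaker, t) in circ.cold

ctx = Context(speaker="Sally", time=5, place="Oslo")
content = character_I_am_cold_now(ctx)        # what Sally said at time t=5

actual = Circumstance(cold={("Sally", 5)})
counterfactual = Circumstance(cold=set())
print(content(actual))          # True: Sally is cold at t in this circumstance
print(content(counterfactual))  # False: the same content, evaluated elsewhere
```

Evaluating one fixed content against several circumstances, for several different contexts of use, yields exactly the two-dimensional matrix described in the next section.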

Content/Character Distinction and Semantics

Sentences containing indexicals or demonstratives are context-dependent in two ways. First, contexts help to determine what these sentences say. Second, contexts determine whether what is said is true or false. For instance, suppose Sally says, 'I'm cold now' at time t. The context supplies Sally as the referent for 'I' and time t as the referent for 'now,' so it helps to determine what Sally said. Other facts about the context, specifically whether Sally is cold at time t, determine whether she said something true or false. Different contexts can play these different roles, as they do when we ask whether what Sally said in one context would be true in a slightly different context. A central virtue of Kaplan's semantics is that it distinguishes between these two roles of context. For Kaplan, a context of use plays the first role, of supplying contents for indexical expressions, and a circumstance of evaluation plays the second. A context of use is just a context in which an indexical expression may be used, and which supplies a content for the indexical expression. A circumstance of evaluation is an actual or merely possible situation in which the content of an utterance is evaluated for truth or falsehood.

A semantic framework like Kaplan's, which captures the double-dependence of meaning on context, is sometimes called a two-dimensional semantics. In the two-dimensional framework, a meaningful entity such as a linguistic expression or an utterance determines not a single semantic value but a two-dimensional matrix of semantic values. Figure 1 represents Kaplan's semantics in this way. In Figure 1, the vertical axis of the matrix displays contexts of use (u1-u3) and the horizontal axis displays circumstances of evaluation (c1-c3). Each cell in the matrix gives the extension of the linguistic expression e as used in the specified context of use and evaluated in the specified circumstance of evaluation. In this matrix, the cell in row n and column m gives the semantic value of e in the context of use specified at the beginning of row n and evaluated in the circumstance of evaluation specified at the top of column m. If e is a sentence, cells will be filled in with truth values as illustrated.

Figure 1 Two-dimensional matrix.

Kaplan offers a syntax and semantics for a formal language containing indexicals, demonstratives, and a variety of modal operators. In this formal system, a context of use is an ordered n-tuple of contextual features to which indexicals or demonstratives are sensitive, such as the speaker, time, world, and location of the context. A circumstance of evaluation is an ordered n-tuple of a possible world-state or world-history, a time, and perhaps other elements as would be required given the sentential operators in the language. For Kaplan, all contexts of use are proper, which means that the speaker of the context must be located at the time, place, and world of the context. Circumstances of evaluation, however, need not be proper.

Contexts of use and circumstances of evaluation play a role in the specification of the character and content of an expression. The character of any linguistic expression e is a function from contexts of use to contents appropriate for e, i.e., an individual if e is a singular term, a proposition if e is a sentence, and sets of n-tuples of individuals if e is an n-place predicate. Indexical expressions only have contents relative to a context of use. So Kaplan speaks of the content of an occurrence of an expression rather than the content of the expression itself. Contents are evaluated in circumstances of evaluation, and these evaluations yield extensions appropriate to the kind of content under evaluation. So we also can characterize the content of an occurrence of e as a function from circumstances of evaluation to extensions of a type appropriate to e. For instance, the extensions for sentences are truth values, for indexicals, individuals, and for n-place predicates, n-tuples of individuals. For individuals and n-place predicates, these will be constant functions (i.e. the function delivers the same extension in every circumstance of evaluation). It is often simpler to think of contents as individuals (for singular terms), propositions (for sentences) and sets of n-tuples of individuals (for n-place predicates), and Kaplan typically talks about contents in this way. Both ways of thinking of contents are semantically equivalent.

For Kaplan, indexicals and demonstratives are both directly referential and rigidly designating. They are directly referential because they contribute only their referents to the propositions expressed by sentences containing them. They are rigidly designating because, once they secure a content in a context of use, they retain that content in every circumstance of evaluation. Indexicals and demonstratives contrast with the typical definite description in both respects. Definite descriptions typically contribute a descriptive condition to a proposition rather than an individual, and this descriptive condition is typically satisfied by different individuals in different worlds of evaluation. Although Kaplan's view that demonstratives are directly referential is widely accepted, some recent discussions of complex demonstratives (i.e. expressions of the form 'that F') have defended a quantificational approach, and some considerations in favor of such an approach may apply to the pure demonstratives 'this' and 'that' (King, 2001).

Kaplan's semantics has technical virtues lacking in earlier treatments of natural language indexicality. It shares with other double-indexing accounts (Kamp, 1971) a technical superiority to single-index theories, which evaluate sentences relative to a single index, which is an ordered n-tuple of features of a context, such as a speaker, time, location, and world. Such theories cannot account for the interaction of indexicals and certain sentence operators. To evaluate the sentence (1), for instance, we need to consider the truth value of the constituent sentence, 'the man who is now President of the United States no longer hold[s] that office' in situations occurring after the sentence is uttered.

(1) Someday, the man who is now President of the United States will no longer hold that office.

with the typical definite description in both respects. Definite descriptions typically contribute a descriptive condition to a proposition rather than an individual, and this descriptive condition is typically satisfied by different individuals in different worlds of evaluation. Although Kaplan’s view that demonstratives are directly referential is widely accepted, some recent discussions of complex demonstratives (i.e. expressions of the form ‘that F’) have defended a quantificational approach, and some considerations in favor of such an approach may apply to the pure demonstratives ‘this’ and ‘that’ (King, 2001). Kaplan’s semantics has technical virtues lacking in earlier treatments of natural language indexicality. It shares with other double-indexing accounts (Kamp, 1971) a technical superiority to single-index theories, which evaluate sentences relative to a single index, which is an ordered n-tuple of features of a context, such as a speaker, time, location, and world. Such theories cannot account for the interaction of indexicals and certain sentence operators. To evaluate the sentence (1), for instance, we need to consider the truth value of the constituent sentence, ‘the man who is now President of the United States no longer hold[s] that office’ in situations occurring after the sentence is uttered. (1) Someday, the man who is now President of the United States will no longer hold that office.

But the indexical ‘now’ in that constituent sentence must still refer to the time (1) is used, and not the time at which the constituent sentence is evaluated. As Hans Kamp has argued, only a double-indexing theory will correctly predict the truth conditions for (1) (Kamp, 1971).

Content/Character Distinction and Philosophy The content/character distinction sheds light on some specifically philosophical issues involving contextsensitivity in thought and language. These applications involve philosophically significant assumptions, and are more controversial than the applications to the semantics of indexicals and demonstratives. First, content and character play two roles that Gottlob Frege initially envisioned for the meaning, or sense, of a sentence, one semantic and the other more broadly psychological (Frege, 1892). Frege thought that the sense of a sentence should both determine its truth condition and provide the cognitive significance of beliefs expressible with that sentence. Although Frege expected that one entity, the sense, could play both roles, indexical and demonstrative belief undermines this expectation, since it

64 Character versus Content

appears to require two different entities to play the two roles. Different people who have a belief they could express by saying ‘I’m cold’ will be in the same psychological/functional state. They will all be shivering and trying to get warmer. But because each person who thinks, ‘I’m cold’ is a constituent of the content of that thought, all of these thoughts will differ in content. The psychological role of an indexical belief appears to be more closely tied to the character of the sentence the thinker would use to express that belief than to the content of the belief. But the content, rather than the character, is more directly relevant to the truth condition of an occurrence of a sentence containing an indexical. Second, Kaplan has suggested that the content/character distinction helps to explain the relation between the epistemological notions of logical truth and the a priori, on the one hand, and the metaphysical notions of necessity and contingency on the other. Other philosophers have put broadly similar applications of the two-dimensional framework into service to the same end (Stalnaker, 1978, cf. Stalnaker, 2004; Chalmers, 1996; Jackson, 1998). As is evident to anyone who understands sentence (2), it cannot be uttered falsely. Therefore, sentence (2) is in a certain sense a logical or a priori truth. Yet it does not express a necessary truth, since occurrences of (3) will typically be false. (2) I am here now. (3) Necessarily, I am here now.

Kaplan has suggested that we explain the special status of (2) as follows: metaphysically speaking, (2) is contingently true in virtue of its content. But it has its special epistemic status in virtue of its character: the character of (2) requires that it express a truth in every context of use. Other sentences which may express the same content as a particular occurrence of (2), but a different character, such as (4), do not have the same special epistemic status. (4) GWB is in Washington, DC on June 16, 2004.

Because (4) and some occurrences of (2) share a content but differ in their epistemic status, it is natural to conclude that contents cannot be the bearers of this special epistemic property. Critics of this account of the a priori (Soames, 2005) say that the content/character distinction cannot underwrite the general account of a priori knowledge that some of its defenders (Chalmers, 1996; Jackson, 1998) have claimed. Third, some philosophers have used Kaplan’s content/character distinction to distinguish narrow content (i.e. content determined by the internal state of the thinker) from wide content (i.e. content determined by the internal state of the thinker and his or her

environment) (Fodor, 1987; see also Chalmers, 1996; Jackson, 1998, for a related application of two-dimensional semantics to these ends). They suggest that narrow content is loosely modeled on Kaplan’s characters, and wide content on Kaplan’s contents. That characters seem to capture something important about the psychological roles of belief makes them particularly attractive candidates to model the purely internal aspects of thought. Critics of the approach contend that although characters help to characterize internal states of thinkers, they are not themselves determined by such states (Stalnaker, 1989). See also: Context and Common Ground; Demonstratives;

Direct Reference; Discourse Domain; Dthat; Indexicality; Modal Logic; Possible Worlds; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Rigid Designation; Situation Semantics.

Bibliography Almog J, Perry J & Wettstein H (1989). Themes from Kaplan. New York: Oxford University Press. Chalmers D (1996). The conscious mind. New York: Oxford University Press. Fodor J A (1987). Psychosemantics: the problem of meaning in the philosophy of mind. Cambridge, MA: MIT Press. Frege G (1892). ‘Ueber sinn und bedeutung.’ Zeitschr. fur Philos. und Philos. Kritik 100. Feigl H (trans.). 190–202. Jackson F (1998). From metaphysics to ethics. New York: Oxford University Press. Kamp H (1971). ‘Formal properties of ‘‘now’’.’ Theoria 37, 227–273. Kaplan D (1989a). ‘Demonstratives.’ In Almog, Perry, & Wettstein (eds.). 481–564. Kaplan D (1989b). ‘Afterthoughts.’ In Almog, Perry, & Wettstein (eds.). 565–614. King J (2001). Complex demonstratives. Cambridge, MA: MIT Press. Kripke S (1980). Naming and necessity. Cambridge, MA: Harvard University Press. Lewis D K (1980). ‘Index, context, and content.’ In Kanger S & Ohman S (eds.) Philosophy and grammar. Dordrecht: Reidel. Soames S (2005). Reference and description: the case against two-dimensionalism. Princeton, NJ: Princeton University Press. Stalnaker R C (1978). ‘Assertion.’ In Cole P (ed.) Syntax and ssemantics, vol. 9: pragmatics. New York: Academic Press, Inc. 315–322. Stalnaker R C (1989). ‘On what’s in the head.’ Philosophical Perspectives 3, Philosophy of Mind and Action Theory. 287–316. Stalnaker R C (2004). ‘Assertion revisited: on the interpretation of two-dimensional modal semantics.’ Philosophical Studies 118(1–2), 299–322.

Classifiers and Noun Classes 65

Classifiers and Noun Classes A Y Aikhenvald, La Trobe University, Bundoora, Australia ß 2006 Elsevier Ltd. All rights reserved.

Almost all languages have some grammatical means for the linguistic categorization of nouns and nominals. The continuum of noun categorization devices covers a range of devices, from the lexical numeral classifiers of Southeast Asia to the highly grammaticalized gender agreement classes of Indo-European languages. They have a similar semantic basis, and one can develop from the other. They provide a unique insight into how people categorize the world through their language in terms of universal semantic parameters involving humanness, animacy, sex, shape, form, consistency, and functional properties. Noun categorization devices are morphemes that occur in surface structures under specifiable conditions, and denote some salient perceived or imputed characteristics of the entity to which an associated noun refers (Allan, 1977: 285). They are restricted to classifier constructions, morphosyntactic units (e.g., noun phrases of different kinds, verb phrases, or clauses) that require the presence of a particular kind of morpheme, the choice of which is dictated by the semantic characteristics of the referent of the nominal head of a noun phrase. Noun categorization devices come in various guises. We distinguish noun classes, noun classifiers, numeral classifiers, classifiers in possessive constructions, and verbal classifiers. Two relatively rare types are locative and deictic classifiers. They share a common semantic core and differ in the morphosyntactic contexts of their use and in their preferred semantic features.

Noun Classes Some languages have grammatical agreement classes based on such core semantic properties as animacy, sex, and humanness, and sometimes also shape. The number of noun classes (also known as genders, or gender classes) varies – from two, as in Portuguese or French, to 10 or so, as in Bantu, or even to several dozen, as in some languages of South America. Noun classes can to a greater or lesser extent be semantically transparent, and their assignment can be based on semantic, morphological, and/or phonological criteria. They are realized through agreement with a modifier or the predicate outside the noun itself. Examples (1) and (2), from Portuguese, illustrate masculine and feminine genders, which are marked

on the noun itself and on the accompanying article and adjective. (1) o menin-o ARTICLE: childMASC.SG MASC.SG ‘the beautiful boy’ (2) a menin-a ARTICLE: child-FEM.SG FEM.SG ‘the beautiful girl’

bonit-o beautifulMASC.SG bonit-a beautiful-FEM.SG

The cross-linguistic properties of noun classes are the following: 1. There is a limited, countable number of classes. 2. Each noun in the language belongs to one (or sometimes more than one) class. 3. There is always some semantic basis to the grouping of nouns into gender classes, but languages vary in how much semantic basis there is. This usually includes animacy, humanness and sex, and sometimes also shape and size. 4. Some constituent outside the noun itself must agree in gender with a noun. Agreement can be with other words in the noun phrase (adjectives, numbers, demonstratives, articles, etc.) and/or with the predicate of the clause, or an adverb. In some languages there is a marker of noun class on every noun; in some languages nouns bear no marker. Noun class systems are typically found in languages with a fusional or agglutinating (not an isolating) profile. Languages often have portmanteau morphemes combining information about noun class with number, person, case, etc. The semantics of noun classes in the languages of the world involves the following parameters: . Sex: feminine vs. masculine, as in many Afroasiatic languages, in East-Nilotic, and in Central Khoisan . Human vs. nonhuman, as in some Dravidian languages of India . Rational (humans, gods, demons) vs. nonrational, as in Tamil and other Dravidian languages . Animate vs. inanimate, as in Siouan, from North America The term neuter is often used to refer to irrational, inanimate gender or to a residue gender with no clear semantic basis. Languages can combine these parameters. Zande and Ma (Ubangi, Niger-Congo) distinguish masculine, feminine, nonhuman animate, and inanimate. Godoberi (Ghodoberi) (Northeast-Caucasian) has feminine, masculine, and nonrational genders.

66 Classifiers and Noun Classes

Primarily sex-based genders can have additional shape- and size-related meanings. In languages of the Sepik region of New Guinea, feminine is associated with short, wide, and round, and masculine with long, tall, and narrow objects (e.g., Ndu family; Alamblak). Feminine is associated with small size and diminutives in Afroasiatic and East-Nilotic languages; masculine includes long, thick, solid objects. Hollow, round, deep, flat, and thin objects are feminine in Kordofanian and Central Khoisan languages (Heine, 1982: 190–191). Unusually large objects are feminine in Dumo, a Sko language from New Guinea (see the summary in Aikhenvald, 2000: 277). In some languages, most nouns are assigned to just one noun class; in other languages, different noun classes can be chosen to highlight a particular property of a referent. Manambu, a Ndu language from the Sepik area, has two genders. The masculine gender includes male referents, and feminine gender includes females. But the gender choice depends on other factors and can vary: if the referent is exceptionally long, or large, it is assigned masculine gender; if it is small and round, it is feminine. Rules for the semantic assignment of noun classes can be more complex. The Australian language Dyirbal (Dixon, 1972: 308–312) has four noun classes. Three are associated with one or more basic concepts: Class I – male humans, nonhuman animates; Class II – female humans, water, fire, fighting; Class III – nonflesh food. Class IV is a residue class covering everything else. There are also two rules for transferring gender membership. By the first, an object can be assigned to a gender by its mythological association rather than by its actual semantics. Birds are classed as feminine by mythological association, since women’s souls are believed to enter birds after death. The second transfer rule is that if a subset of a certain group of objects has a particular important property, e.g., being dangerous, it can be assigned to a different class from the other nouns in that group. Most trees without edible parts belong to Class IV, but stinging trees are placed in Class II. A typical gender system in Australian languages contains four terms that can be broadly labeled as masculine, feminine, vegetable, and residual (Dixon, 2002: 449–514). Andian (Northeast Caucasian) languages have a special noun class for insects, and Bantu languages for places (also see Corbett, 1991). The degree of semantic motivation for noun classes varies from language to language. Noun classes in Bantu languages constitute an example of a semantically opaque system. Table 1 summarizes a basic semantic grid common to Bantu noun class systems (Spitulnik, 1989: 207) based on the interaction of shape, size, and humanness. However, these

Table 1 Noun classes in Bantu

Class    Semantics
1/2      Humans, a few other animates
3/4      Plants, plant parts, foods, nonpaired body parts, miscellaneous
5/6      Fruits, paired body parts, miscellaneous inanimates
7/8      Miscellaneous inanimates
9/10     Animals, miscellaneous inanimates, a few humans
11/10    Long objects, abstract entities, miscellaneous inanimates
12/13    Small objects, birds
6        Masses
14       Abstract qualities, states, masses, collectives
15       Infinitives

However, these parameters provide only a partial semantic motivation for the noun classes in individual Bantu languages. (In the Bantuist tradition, every countable noun is assigned to two classes: one singular and one plural.) In modern Bantu languages, noun class assignment is often much less semantically motivated, though the semantic nucleus is still discernible. Thus, in Babungo, Class 1/2 is basically human; however, it is a much bigger class than it was in Proto-Bantu, and also contains many animals, some birds and insects, body parts, plants, and household and other objects, e.g., necklace, pot, book, rainbow (Schaub, 1985: 175). Shape and size also appear as semantic parameters: in ChiBemba, class 7/8 is associated with large size and carries pejorative overtones, while class 12/13 includes small objects and has overtones of endearment (also see Denny, 1976; Aikhenvald, 2000: 281–283).

In a seminal study, Zubin and Köpcke (1986) provided a semantic rationale for the gender assignment of nouns of different semantic groups in German. Masculine and feminine genders mark the terms for male and female adults of each species of domestic and game animals (following the natural sex principle), and neuter is assigned to non-sex-specific generic and juvenile terms. Masculine gender is used for types of cloth, for precipitation and wind, and for minerals. Disciplines and types of knowledge have feminine gender, and games and types of metal – with the exception of alloys – have neuter gender. This is contrary to a common assumption that there is no real semantic basis for gender assignment in the well-known Indo-European languages.

Noun class assignment is typically more opaque for inanimates and for nonhuman animates than for humans and high animates. In the Australian language Bininj Gun-Wok (Evans, 2003: 185–199), the masculine class includes male humans, the names of certain malevolent beings mostly associated with the sky, items associated with painting (a male activity), and also some mammals, some snakes, and some birds and fish. Feminine class includes female humans, and also some reptiles, fish, and birds. Vegetable class includes all terms for nonflesh foods, but also a few bird names. Finally, the neuter, or residue, class is the most semantically heterogeneous – it includes items that do not fit into other classes, e.g., most body parts, generic terms for plants, and terms for various inanimate objects.

In Jingulu (Pensalfini, 2003: 159–168), nouns divide into four classes, only some of which are more or less semantically transparent. The vegetable class mostly includes objects that are long, thin, or pointed. This class happens to include most vegetables, as well as body parts such as the colon, penis, and neck; instruments such as spears, fire drills, and barbed wire; natural phenomena such as lightning and rainbows; and roads and trenches. The feminine class includes female humans and higher animates, and also words for axes, the sun, and most smaller songbirds. The semantic content of the remaining two classes, masculine and neuter, is much harder to define: masculine is mostly used for the rest of the animates and neuter for the rest of the inanimates, except that flat and/or rounded inanimates – such as most trees and eggs, and body parts such as the liver and the brow – are masculine.

Noun Classifiers

Noun classifiers categorize the noun with which they co-occur and are independent of any other element in a noun phrase or in a clause. They are often independent words with generic semantics. Thus, in Yidiny, an Australian language, one would not generally say 'the girl dug up the yam'; it is more felicitous to include generics and say 'the person girl dug up the vegetable yam' (Dixon, 1982: 185), as in (3). Classifier constructions are in square brackets.

(3) [mayi jimirr]            [bama-al yaburu-Ngu]       julaal
    [vegetable+ABS yam+ABS]  [CL:PERSON-ERG girl-ERG]   dig-PAST
    'The person girl dug up the vegetable yam'

Not every noun in a language takes a noun classifier. And a noun may occur with more than one classifier. In Minangkabau, a Western Austronesian language from Sumatra, different noun classifiers may be used with the same noun to express different meanings, e.g., batang limau (CL:TREE lemon) 'lemon-tree', buah limau (CL:FRUIT lemon) 'lemon-fruit.' In this respect they are similar to derivational devices.

The choice of a noun classifier is predominantly semantic, based on social status, function, and nature, and also on physical properties, e.g., shape. But in some cases the semantic link between a noun classifier and a noun is not obvious. In most languages of the Daly area in Australia, honey takes the noun classifier for flesh food. The choice of noun classifier in Jacaltec, a Mayan language from Guatemala, is often obscured by extension through perceptual analogy; for instance, ice is assigned to the rock class (see Craig, 1986: 275–276).

Noun classifiers are found in numerous Australian languages, in Western Austronesian languages, in Tai languages, and in Mayan languages (Aikhenvald, 2000). In Yidiny (Australian) (Dixon, 1977: 480 ff.; 1982: 192 ff.), a language with 20 noun classifiers, these are of two kinds:

• Inherent nature classifiers divide into humans (waguja 'man,' bunya 'woman,' and a superordinate bama 'person,' as in [3]); fauna (jarruy 'bird,' man gum 'frog,' munyimunyi 'ant'); flora (jugi 'tree,' narra 'vine'); natural objects (buri 'fire,' walba 'stone,' jabu 'earth'); and artefacts (gala 'spear,' bundu 'bag,' baji 'canoe').
• Function classifiers are minya 'edible flesh food,' mayi 'edible nonflesh food,' bulmba 'habitable,' bana 'drinkable,' wirra 'movable,' gugu 'purposeful noise.'

A distinction between flesh and nonflesh food is typical for Australian languages with noun classifiers (Dixon, 2002: 454–459). Noun classifiers for humans often involve social functions. In Mayan languages of the Kanjobalan branch, as in Jacaltec, humans are classified according to their social status, kinship relation, or age. Mam has classifiers for men and women; for young and old men and women; for old men and women to whom respect is due; and for someone of the same status as the speaker. There is also a classifier for babies, and just one nonhuman classifier. In Australian languages, noun classifiers that refer to social status include such distinctions as initiated man. Murinhpatha (Australian) (Walsh, 1997: 256) has a classifier for Aboriginal people (which also covers human spirits) and another for non-Aboriginal people, which includes all other animates.

Nouns with nonhuman, or inanimate, referents are classified in terms of inherent nature-based properties from the natural domains of human interaction: animals, birds, fish, plants, water, fire, minerals, and artefacts. Individual systems may vary. There is often a general term for birds and fish, as in Minangkabau (Western Austronesian), while Ngan"gityemerri (Australian) and Akatek (Mayan) have a generic noun classifier for animals. Classifiers in Murrinh-Patha, from Australia, cover fresh water and associated concepts, flowers and fruits of plants, spears, offensive weapons, fire and things associated with fire, time and space, and speech and language, and there is a residue classifier. There is usually a noun classifier for culturally important concepts. Mayan languages have a noun classifier for corn, a traditionally important crop, and for domesticated dogs, while Daly languages, in northern Australia, have classifiers for spears, digging sticks, and spear throwers.

Noun classifiers often have to be distinguished from generic nouns. In Yidiny, a test for what can be used as a classifier is provided by the way interrogative-indefinite pronouns are used: there is one that means 'what generic?' and another meaning 'generic being known, what specific?' Another decisive criterion is how obligatory the classifiers are, and whether it is possible to formulate explicit rules for their omission.

Incipient structures superficially similar to noun classifiers can be found in Indo-European languages. In English it is possible to use a proper name together with a descriptive noun phrase, such as that evil man Adolf Hitler, but this type of apposition is rather marked and used to achieve rhetorical effect. Lexicosyntactic mechanisms of this kind may well be a historical source of noun categorization devices. Noun classifiers should be distinguished from derivational components in class nouns, such as berry in English strawberry, blackberry, etc., with their limited productivity, high degree of lexicalization, and the fact that they are restricted to a closed subclass of noun roots.

Numeral Classifiers

Numeral classifiers are morphemes that only appear next to a numeral, or a quantifier; they may categorize the referent of a noun in terms of its animacy, shape, and other inherent properties. Uzbek, a Turkic language, has 14 numeral classifiers. A classifier for humans is shown in (4). Inanimate objects are classified by their form, as shown in (5) (Beckwith, 1998).

(4) bir  nafar      âdam
    one  CL:HUMAN   person
    'one person'

(5) bir  bâs              karâm
    one  CL:HEAD.SHAPED   cabbage
    'one (head of) cabbage'

Numeral classifiers are relatively frequent in isolating languages of Southeast Asia; in the agglutinating North Amazonian languages of South America; in Japanese, Korean, and Turkic; and in the fusional Dravidian and Indic languages.

In a language with a large set of numeral classifiers, the way they are used often varies from speaker to speaker, depending on the speaker's social status and competence (Adams, 1989). In this (and in the ways they are acquired by children), they are much more similar to the use of lexical items than to a limited set of noun classes.

Not every noun in the language has to be associated with a numeral classifier. Some nouns take no classifier at all, and some nouns take more than one classifier, depending on which property of the noun's referent is in focus. Numeral classifiers are always determined by the semantics of the noun referent. Typical semantic parameters are animacy, physical properties (such as dimensionality, shape, consistency, nature), functional properties (e.g., object with a handle), and arrangement (e.g., bunch). There can also be specific classifiers for culturally important items, e.g., canoe, house. A few languages (e.g., Kana, a Cross-River language from Nigeria, and a number of New Guinea languages) (Aikhenvald, 2000: 287–288) have no classifier for animates or humans: when counted, these are classified by shape or by function. For instance, a human is assigned to a class of vertically positioned or elongated objects.

A typical problem with numeral classifiers concerns differentiating between sortal classifiers, which just characterize a referent, and mensural classifiers, which contain information about how the referent is measured. As Ahrens (1994: 204) put it, classifiers can classify only a limited and specific group of nouns, while measure words can be used as a measure for a wide variety of nouns.

Almost every language, whether it has numeral classifiers or not, has quantifiers, the choice of which may depend on the semantics of the noun. This often depends on whether the noun referent is countable or not. For instance, in English much is used with noncountable nouns, and many with countable nouns; other languages have just one word covering 'much' and 'many.' The choice of quantifying expressions may also depend on the properties of the referent noun; for instance, in English we include head in five head of cattle, stack in three stacks of books, flock in two flocks of birds, and so on. These quantifying expressions are not numeral classifiers, because they do not fill an obligatory slot in the numeral-noun construction, but are instead used in a type of construction that is also employed for other purposes. For instance, quantifier constructions in English such as three head of cattle are in fact a subtype of genitive constructions. This is the main reason that English is not a numeral classifier language. The quantifiers also have a lexical meaning of their own.


Classifiers in Possessive Constructions

Classifiers in possessive constructions are of three kinds. Relational classifiers categorize the ways in which noun referents relate to, or can be manipulated by, the possessor – whether they are to be eaten, drunk, worn, etc. They tend to occur in languages that distinguish alienable and inalienable possession. In Fijian (Lichtenberk, 1983: 157–158), different classifiers are used to categorize kava as something one is going to drink, as in (6), or as something one has grown or is going to sell, as in (7).

(6) na       me-qu              yaqona
    ARTICLE  CL:DRINKABLE-my    kava
    'my kava (which I intend to drink)'

(7) na       no-qu            yaqona
    ARTICLE  CL:GENERAL-my    kava
    'my kava (that I grew, or that I will sell)'

Oceanic languages typically have from two to five relational classifiers, while Kipeá-Karirí, an extinct Macro-Jê language from Brazil, had 12. Categorization of the possessive relationship via a relational classifier is based on functional interaction between possessor and possessed. The primary semantic division of referents is into consumable and nonconsumable, as in Fijian, or general and alimentary, as in Manam (Lichtenberk, 1983; Dixon, 1988: 136). Consumable objects can be further classified according to the way in which they are consumed (eaten, drunk, chewed), or prepared (e.g., cooked or roasted). Nonconsumable objects are classified according to how they have been acquired (e.g., found, or received as a gift, as in Kipeá-Karirí). Value is a semantic parameter used in relational classifiers in Oceanic languages. Humans can be classified by their social function, that is, social status or kinship relationship, as in Ponapean, a Micronesian language.

Possessed classifiers characterize a possessed noun itself, based on the physical properties (shape, form, consistency, function) or animacy of its referent, as in Panare (a South American language from the Carib family) (Aikhenvald, 2000: 128), shown in (8).

(8) y-uku-n                  wanë
    1sg-CL:LIQUID-GENITIVE   honey
    'my honey (mixed with water for drinking)'

Possessed classifiers can also be in a generic-specific relationship with the noun they categorize (this is similar to the noun classifiers mentioned in this article). In some Carib languages, 'my papaya' can only be phrased as 'my fruit papaya,' as in (9), from Macushí:

(9) u-yekkari           ma'pîya
    1sg-CL:FRUIT.FOOD   papaya
    'my papaya'

Generic possessed classifiers are often function-based. Uto-Aztecan languages have possessed classifiers for pets and domesticated plants. Only one language, Dâw (from the Makú family in South America), has possessor classifiers characterizing the possessor in possessive constructions in terms of animacy.

Verbal Classifiers

Verbal classifiers, also called verb-incorporated classifiers, appear on the verb, categorizing a noun, which is typically in S (intransitive subject) or O (direct object) function, in terms of its animacy, shape, size, structure, and position. Example (10), from Waris, a Papuan language of the Border family (Brown, 1981: 96), shows how the classifier put- 'round object' is used with the verb 'get' to characterize its O argument, coconut, as a round object.

(10) sa       ka-m     put-ra-ho-o
     coconut  1sg-to   VERBAL.CL:ROUND-get-BENEFACTIVE-IMPERATIVE
     'Give me a coconut (literally coconut to-me round.one-give)'

Suppletive (or partly analyzable) classificatory verbs are a subtype of verbal classifiers. Classificatory verbs can categorize the S/O argument in terms of its inherent properties (e.g., animacy, shape, form, and consistency), as in Athapascan languages of North America, such as Mescalero Apache. Different arrangements of tobacco are reflected in the form of a classificatory verb whose basic meaning is 'give' (Rushforth, 1991), as shown in Table 2.

Alternatively, classificatory existential verbs can categorize the S/O argument in terms of its orientation or stance in space, as well as its inherent properties, as in Dakota and Nevome, from North America, and in Papuan languages of the Engan family in the Highlands of New Guinea. In Enga, a verb meaning 'stand' is used with referents judged to be tall, large, strong, powerful, standing, or supporting, e.g., men, houses, trees; and 'sit' is used with referents judged to be small, squat, horizontal, or weak, e.g., women, possums, ponds.

Table 2 Examples of the use of 'give' in Mescalero Apache

1. Nát'uhí shán'aa      'Give me (a plug of) tobacco'
2. Nát'uhí shánkaa      'Give me (a can, box, pack) of tobacco'
3. Nát'uhí shán tįį     'Give me (a bag) of tobacco'
4. Nát'uhí shántįį      'Give me (a stick) of tobacco'
5. Nát'uhí shánjaash    'Give me (loose, plural) tobacco'


Cross-linguistically, classificatory verbs tend to belong to the semantic groups of handling, motion, and existence/location. That classificatory verbs should combine reference to inherent properties of referents, and to their orientation, is not surprising. Shape, form, and other inherent properties of objects correlate with their stance in space. Certain positions and states are only applicable to objects of particular kinds; for instance, a tree usually stands, and only liquids can flow.

However, classificatory verbs differ from the lexical selection of a verb in terms of the physical properties or the position of an object. Most languages have lexical items similar to English drink (which implies a liquid O), or chew (which implies an O of chewable consistency). Unlike these verbs, classificatory verbs make consistent paradigmatic distinctions in the choice of semantic features for their S/O argument throughout the verbal lexicon. In other words, while English distinguishes liquid and nonliquid objects only for verbs of drinking, classificatory verbs provide a set of paradigmatic oppositions for the choice of verb sets depending on the physical properties of all kinds of S/O. Similarly, posture verbs in many languages tend to occur with objects of a certain shape. For instance, in Russian, long, vertical objects usually stand, and long, horizontal ones lie. However, the correlations between the choice of the verb and the physical properties of the object are not paradigmatic; these verbs cannot be considered classificatory.

Locative Classifiers

Locative classifiers occur with locative prepositions and postpositions, and categorize the head noun in terms of its animacy or physical properties, including form and shape. These are found in South American Indian languages of the Carib family, and in Palikur, an Arawak language from Brazil: e.g., pi-wan min (2sg-arm LOC.CL+VERTICAL) 'on your (vertical) arm'; ah peu (tree LOC.CL+BRANCHLIKE) 'on (branchlike) tree'.

Deictic Classifiers

Deictic classifiers occur on deictics within a noun phrase and categorize the noun referent in terms of its inherent properties and position in space, such as horizontal or vertical. They are found in Siouan languages from North America, e.g., Mandan dE-mãk 'this one (lying)'; dE-nak 'this one (sitting).' Nouns are typically classified by their canonical position, which correlates with their shape and extendedness; for instance, in Pilagá (a Guaicuruan language, from Argentina), fire and stones are classified as horizontal, and buildings and animals as sitting.

All noun categorization devices use the same set of core parameters, which include:

• animacy;
• physical properties covering shape and dimensionality (one-, two-, or three-dimensional objects, including long, flat, and round referents) and direction; size; consistency (flexible, hard or rigid, liquid); material (what the object is made of, e.g., clothlike);
• functional properties (to do with specific uses of objects or kinds of action typically performed on them), including social status, which can be considered a subtype of functional categorization;
• arrangement (that is, configuration of objects, e.g., a coil of rope or a bunch).

Various kinds of noun categorization devices opt for different preferred semantic parameters: animacy and humanness are predominant in noun classes, while noun classifiers often categorize referents in terms of their function and social status. Numeral classifiers typically categorize referents by shape (e.g., round or vertical), while verbal classifiers may also involve orientation (vertical or horizontal). Semantic parameters employed in noun categorization systems follow some tendencies. If a language has classifiers for three-dimensional objects, it is likely to also have classifiers for two-dimensional ones. A summary of preferred semantic parameters depending on the type of noun categorization device is given in Table 3 (for their cognitive correlates, see also Bisang, 2002). These preferences represent only tendencies. Generic-specific relations are characteristic of noun classifiers, verbal classifiers, and sometimes possessed classifiers, but not of other types (they are rare in numeral classifiers).

Table 3 Preferred semantic parameters in classifiers

Classifier               Typical semantics                                                               Generic-specific relation
Noun classes             Animacy, humanness, physical properties, rarely nature or function             No
Numeral classifiers      Animacy, humanness, physical properties, nature, rarely functional properties  Rare
Noun classifiers         Social status, functional properties, nature                                   Yes
Verbal classifiers       Physical properties, rarely animacy, nature                                    Yes
Relational classifiers   Functional properties                                                          No
Possessed classifiers    Physical properties, nature, animacy, functional properties                    Yes
Locative classifiers     Physical properties, rarely animacy                                            No
Deictic classifiers      Directionality, physical properties                                            No

The semantic complexity of an individual noun class or classifier varies. Some are semantically simple, e.g., the classifier 'person' in Malay and Minangkabau used with all humans. Others undergo semantic extensions, and their choice is less straightforward. Consider the semantic structure of the classifier -hon in Japanese (Matsumoto, 1993: 676–681). In its most common use, it covers saliently one-dimensional objects, e.g., long, thin, rigid objects such as sticks, canes, pencils, candles, trees, dead snakes, and dried fish. It also covers martial arts contests with swords (which are long and rigid), hits in baseball, shots in basketball, Judo matches, rolls of tape, telephone calls, radio and TV programs, letters, movies, medical injections, bananas, carrots, pants, guitars, and teeth. This heterogeneity results from various processes of semantic extension and metonymy.

Extensions can be based on certain rules for transferring class membership, as in Dyirbal (see the section 'Noun Classes'). According to these principles, idealized models of the world – for instance, myths and beliefs – can account for other chaining links within the structure of a class. In Dyirbal, birds belong to feminine Class II, because they are believed to be the spirits of dead human females. A further type of extension is the Domain of Experience Principle, which links members thought to be associated with the same experience domain. Thus, fish in Dyirbal belong to Class I, since they are animate, and so do fishing implements, because they are associated with the same activity. These domains are often culture-specific, and subject to change with sociocultural changes. The numeral classifier tay in Korean was originally used with reference to traditional vehicles, and then was extended to introduced European artifacts with wheels. It was further extended to any electric machinery, and to other kinds of machines or instruments, including even the piano. In Austroasiatic languages, shape parameters in inanimate categorization account for typical semantic extensions of terms for plants and their component parts when employed as classifiers, such as small and roundish (from the word for 'seed'), round (from 'fruit'), bulky (from 'tuber'), flat and sheetlike (from 'flower,' 'leaf,' 'fiber'), and long (from 'stalk,' 'stick,' 'sprout') (Conklin, 1981: 341).

An instructive example of prototype-and-extension in a multiple classifier system comes from the classifier tua in Thai (used with numerals, demonstratives, and adjectives). The structure of the category is shown in Figure 1; arrows indicate extensions from a prototypical member to a less prototypical one (Carpenter, 1987: 45–46).

Figure 1 Structure of the tua category in Thai.

The prototypical referent classified with tua is a four-legged animal, such as a dog or a water buffalo. The classifier extends to include trousers and shirts, due to their shape: trousers are leglike, and shirts have armlike sleeves. Because of shared function, and the bodylike shape, this classifier also applies to jackets and skirts and even to dresses, underwear, and bathing suits. The general four-legged shape of items of furniture, such as tables and chairs, accounts for their inclusion in the category covered by the classifier tua. Other kinds of furniture were then added because of their shared function with tables and chairs. 'Letter (of the alphabet)' in Thai is a compound tua nangseu 'body book', so a combination of shape and repetition of the generic compound head caused letters to be classified with tua. Numbers were included either on the basis of shape or by their shared function with letters. Ghosts were included because of their similarity with the two-limbed shape of a human body.

Semantic extensions of classifiers can be manipulated by language planners. Following an order of King Mongkut issued in 1854, 'noble' animals, such as elephants and horses, were to be counted without any classifier; the classifier tua could be used only for animals of a 'lower' status. In Setswana, a Bantu language with a large set of noun classes, it is now considered politically incorrect to refer to ethnic minorities, such as the Chinese or the Bushmen, using noun class 5/6 (which includes substances, such as dirt or clay, and abstract nouns); all humans have to be referred to with the 'human' class 1/2 (see Table 1).

Noun categorization devices are hardly ever semantically redundant. They are often used to distinguish what can be encoded with different lexemes in some languages. For instance, in Burmese a river can be viewed as a place, as a line (on a map), as a section, as a sacred object, or as a connection. These meanings are distinguished through the use of different numeral classifiers – this is shown in Table 4 (Becker, 1975: 113).

Table 4 Categorization of an inanimate noun in Burmese with a classifier

Noun   Numeral   Classifier   Translation
miyi   te        ya           river one place (e.g., destination for a picnic)
miyi   te        tan          river one line (e.g., on a map)
miyi   te        hmwa         river one section (e.g., a fishing area)
miyi   te        sin          river one distant arc (e.g., a path to the sea)
miyi   te        ywE          river one connection (e.g., connecting two villages)
miyi   te        pa           river one sacred object (e.g., in mythology)
miyi   te        khu          river one conceptual unit (e.g., in a discussion of rivers in general)
miyi   te        miyi         river one river (the unmarked case)

In Apache, a plug, a box, a stick, and a bag of tobacco are distinguished through the use of different classificatory verbs. In languages with overt noun class marking, variability in marking noun class on the same root is a way of creating new words. In Bantu languages, such as Swahili, most stems usually occur with a prefix of one class. Prefixes can be substituted to mark a characteristic of an object. M-zee means 'old person' and has the human class prefix m-. It can be replaced by ki- (inanimate class) to yield ki-zee 'scruffy old person'. In Dyirbal, the word 'man' can be used with the feminine class marker, instead of masculine, to point out the female characteristics of a hermaphrodite. In Manambu, 'head' is usually feminine because of its round shape, but it is treated as masculine when a person has a headache, since then the head feels heavy and unusually big.

We have seen that, semantically, noun categorization devices are heterogeneous, nonhierarchically organized systems that employ both universal and culture-specific parameters. The ways these parameters work are conditioned and restricted by cognitive mechanisms and the sociocultural environment. Among universal parameters are animacy, humanness, and physical properties, e.g., shape, dimensionality, consistency. Culture-specific parameters can cover certain functional properties and social organization. Classificatory parameters associated with function rather than physical properties are more sensitive to cultural and other nonlinguistic factors. Human categorization, as a sort of 'social' function, depends entirely on social structure. Functional categorization of inanimate and nonhuman objects is directly related to cultural notions. Animacy and sex, when extended metaphorically, are influenced by social stereotypes and beliefs. Correlations between the choice of physical properties encoded in classifiers and nonlinguistic parameters are much less obvious. They may relate to the cultural salience of certain shapes or forms, and they may ultimately be based on typical metaphorical extensions.

See also: Category-Specific Knowledge; Cognitive Semantics; Connotation; Coreference: Identity and Similarity; Diminutives and Augmentatives; Disambiguation; Honorifics; Metaphor and Conceptual Blending; Prototype Semantics; Stereotype Semantics; Vagueness.

Bibliography

Adams K L (1989). Systems of numeral classification in the Mon-Khmer, Nicobarese and Aslian subfamilies of Austroasiatic. Canberra: Pacific Linguistics.
Ahrens K (1994). 'Classifier production in normals and aphasics.' Journal of Chinese Linguistics 22, 203–246.
Aikhenvald A Y (2000). Classifiers: a typology of noun categorization devices. Oxford: Oxford University Press.
Allan K (1977). 'Classifiers.' Language 53, 284–310.
Becker A J (1975). 'A linguistic image of nature: the Burmese numerative classifier system.' Linguistics 165, 109–121.
Beckwith C I (1998). 'Noun specification and classification in Uzbek.' Anthropological Linguistics 40, 124–140.
Bisang W (2002). 'Classification and the evolution of grammatical structures: a universal perspective.' Sprachtypologie und Universalienforschung 55, 289–308.
Brown R (1981). 'Semantic aspects of some Waris predications.' In Franklin K (ed.) Syntax and semantics in Papua New Guinea languages. Ukarumpa: Summer Institute of Linguistics. 93–123.
Carpenter K (1987). How children learn to classify nouns in Thai. Ph.D. diss., Stanford University.
Conklin N F (1981). The semantics and syntax in numeral classification in Tai and Austronesian. Ph.D. diss., University of Michigan.
Corbett G (1991). Gender. Cambridge: Cambridge University Press.
Craig C G (1986). 'Jacaltec noun classifiers: a study in language and culture.' In Craig C G (ed.) Noun classes and categorization. Amsterdam: John Benjamins. 263–294.
Denny J P (1976). 'What are noun classifiers good for?' Papers from the annual regional meeting of the Chicago Linguistic Society 12, 122–132. Chicago: Chicago Linguistic Society.
Dixon R M W (1972). The Dyirbal language of North Queensland. Cambridge: Cambridge University Press.
Dixon R M W (1977). A grammar of Yidiny. Cambridge: Cambridge University Press.
Dixon R M W (1982). Where have all the adjectives gone? and other essays in semantics and syntax. Berlin: Mouton.
Dixon R M W (1988). A grammar of Boumaa Fijian. Chicago: University of Chicago Press.

Dixon R M W (2002). Australian languages: their nature and development. Cambridge: Cambridge University Press.
Evans N (2003). Bininj Gun-Wok: a pan-dialectal grammar of Mayali, Kunwinjku and Kune. Canberra: Pacific Linguistics.
Heine B (1982). 'African noun class systems.' In Seiler H & Lehmann C (eds.) Apprehension: Das sprachliche Erfassen von Gegenständen, Teil I: Bereich und Ordnung der Phänomene (Language Universals Series 1/I). Tübingen: Narr. 189–216.
Lichtenberk F (1983). 'Relational classifiers.' Lingua 60, 147–176.
Matsumoto Y (1993). 'Japanese numeral classifiers: a study on semantic categories and lexical organisation.' Linguistics 31, 667–713.
Pensalfini R (2003). A grammar of Jingulu, an Aboriginal language of the Northern Territory. Canberra: Pacific Linguistics.

Rushforth S (1991). 'Uses of Bearlake and Mescalero (Athapaskan) classificatory verbs.' International Journal of American Linguistics 57, 251–266.
Schaub W (1985). Babungo. London: Croom Helm.
Spitulnik D (1989). 'Levels of semantic restructuring in Bantu noun classification.' In Newman P & Botne R D (eds.) Current approaches to African linguistics, vol. 5. Dordrecht: Foris. 207–220.
Walsh M (1997). 'Nominal classification and generics in Murrinhpatha.' In Harvey M & Reed N (eds.) Nominal classification in Aboriginal Australia. Amsterdam: John Benjamins. 255–292.
Zubin D & Köpcke K M (1986). 'Gender and folk taxonomy: the indexical relation between grammatical and lexical categorization.' In Craig C G (ed.) Noun classes and categorization. Amsterdam: John Benjamins. 139–180.

Cognitive Semantics

J R Taylor, University of Otago, Dunedin, New Zealand

© 2006 Elsevier Ltd. All rights reserved.

Cognitive Linguistics and Cognitive Semantics

Cognitive semantics is part of a wider movement known as 'cognitive linguistics.' Prior to surveying the main characteristics of cognitive semantics, it will be advisable to clarify what is meant by cognitive linguistics. As a matter of fact, the term is open to different interpretations. On a broad understanding, any approach that views language as residing in the minds of its speakers and a linguistic description as a hypothesis about a speaker's mental state would merit the designation 'cognitive.' Chomsky's career has been devoted to pursuing cognitive linguistics on this broad understanding. On the narrower, and more specialized interpretation intended here, cognitive linguistics refers to a movement that emerged in the late 1970s and early 1980s, mainly as a reaction to certain tendencies of Chomskyan, and, more generally, formalist linguistics. Linguists who were prominently associated with the emergence of cognitive linguistics, in this narrow sense, were George Lakoff, Ronald Langacker, and Leonard Talmy.

Rather than a specific theory, cognitive linguistics can best be described as an approach, or cluster of approaches to language study, whose practitioners nevertheless share a basic outlook on the nature of language. Several common aspects can be identified:

• Cognitive linguists are skeptical of the idea, promoted within Chomskyan linguistics, that human language might be associated with a language-specific module of the mind. Their starting point, rather, is that language is embedded in more general cognitive abilities and processes. According to the editorial statement of the monograph series Cognitive linguistics research (published by Mouton de Gruyter, Berlin), the guiding assumption is that 'language is an integral facet of cognition which reflects the interaction of social, cultural, psychological, communicative and functional considerations, and which can only be understood in the context of a realistic view of acquisition, cognitive development and mental processing.' Special attention, therefore, has been directed towards studying language, its structure, acquisition, and use, from the perspective of such topics as perception, categorization, concept formation, spatial cognition, and imagery. Although these capacities might well be subject to highly specialized elaboration in human language, they are not per se linguistic capacities.

• Cognitive linguistics signaled a return to the basic Saussurean insight that language is a symbolic system, which relates signifiers (that is, language in its perceptible form, whether as sound, marks on paper, or gesture) and signifieds (that is, meanings). Indeed, Langacker (1987: 11) characterized a language as 'an open-ended set of linguistic signs [. . .], each of which associates a semantic representation of some kind with a phonological representation.' Importantly, semantic representations, i.e., 'meanings,' are taken to be mental entities, or, perhaps more appropriately, mental processes. Thus, Langacker prefers to refer not to 'concepts' (a term that suggests that meanings are static, clearly individuated entities) but to 'conceptualizations,' where the deverbal nominal emphasizes the dynamic, processual character of the phenomenon.

• A third feature of cognitive linguistics follows from the view of language as a symbolic system, namely that syntax and morphology – patterns for the combination of words and morphemes into larger configurations – are themselves symbolic, and hence inherently meaningful. The same goes for the elements over which syntax and morphology operate – lexical and phrasal categories, for example – as well as the kinds of relations that can hold between these elements, i.e., relations such as subject (of a clause), modification, complementation, apposition, subordination. The view, current in many linguistic theories, that syntax and morphology constitute autonomous levels of linguistic organization is therefore rejected. Indeed, a major thrust of cognitive linguistic research over the past couple of decades has been, precisely, the attempt to offer a conceptual characterization of formal aspects of language organization.

It will be apparent that the orientation of cognitive linguistics, as characterized above, was bound to have considerable influence on the ways in which meanings (whether of words, sentences, syntactic patterns, etc.) have been studied. One aspect has already been mentioned, namely, that meanings are taken to be mental entities. In this, cognitive linguistics contrasts strikingly with other approaches, such as logical approaches, which have focused on logical aspects of sentences and the propositions they express; with truth-conditional approaches, which focus on the relation between propositions and states of affairs in the world; with structuralist approaches, which view meaning in terms of semantic relations within the language; with behaviorist approaches, which view meaning in terms of stimulus-response associations; and, more generally, with theories of meaning as use. What these alternative approaches to meaning have in common is their avoidance of mentalism, i.e., the characterization of meanings as 'things in the head.'

The remainder of this article surveys some important themes and research topics in cognitive semantics. It should be mentioned that the survey is by no means comprehensive; for broader coverage, the reader is referred to the introductions to cognitive linguistics listed at the end of this article. Some topics, such as metaphor and metonymy, are dealt with elsewhere in this encyclopedia and for this reason are discussed only briefly. It should also be borne in mind that cognitive semantics, like cognitive linguistics itself, does not constitute a unified theory, but is better regarded as a cluster of approaches and research themes that nevertheless share a common outlook and set of assumptions.

Meaning Is Encyclopedic in Scope

Many semanticists, especially those who see the language faculty as an encapsulated module of the mind, insist on the need to make a distinction between the dictionary and the encyclopedia, that is, between what one knows in virtue of one's knowledge of a language and what one knows in virtue of one's knowledge of the world. Cognitive semantics denies the validity of such a distinction. On the contrary, meaning is taken to be essentially encyclopedic in scope. A person's linguistic knowledge would therefore, in principle, be coextensive with the person's total world knowledge. An individual word, to be sure, provides access to only a small segment of encyclopedic knowledge. No clear bounds, however, can be set on how far the relevant knowledge network extends.

The encyclopedic nature of linguistic semantics is captured in the notions of profile, base, domain, and Idealized Cognitive Model (or ICM). The terms 'profile' and 'base' are due to Langacker (1987). A linguistic expression intrinsically evokes a knowledge structure, some facet of which is profiled. Take the word hypotenuse. The word designates a straight line. Whatever we predicate of hypotenuse is predicated of a hypotenuse qua straight line, as when we assert The hypotenuse is 3 cm long. Obviously, the notion of a straight line does not exhaust the meaning of the word. The straight line in question is part of a larger structure, namely, a right-angled triangle. Although hypotenuse does not designate the triangle, the notion of a triangle is essential for the understanding of the word (Figure 1).

Notice that the concept designated by the word cannot be identified with the profile – as mentioned, the profile is simply a straight line. The concept resides in the profiling of a facet of the base. For other examples that illustrate the profile-base relation, consider words such as thumb (profiled against the conception of a human hand), top (profiled against a schematic notion of a three-dimensional entity), island (a mass of land profiled against the surrounding water). In fact, it is axiomatic, in cognitive semantics, that all expressions achieve their meaning through profiling against the relevant background knowledge.


Figure 1 Notion of hypotenuse.

Returning to the hypotenuse example, it will be apparent that the base – the notion of a triangle – itself presupposes broader knowledge configurations, namely, those pertaining to planar geometry, which themselves are based in notions of space and shape. These broader knowledge configurations are referred to as 'domains.' Some domains may be basic, in the sense that they are not reducible to other domains. Examples include time, space, color, temperature, weight, etc. Otherwise, a knowledge structure of any degree of complexity can function as a domain, for example, the rules of a game, a scientific theory, kinship networks, gender stereotypes, educational, political, and legal systems. Domains may also be constituted by deeply held beliefs about life, nature, causation, the supernatural, and so on.

Most concepts are characterized against a 'matrix' of more than one domain. Uncle, for example, profiles a male human being against the base of a (portion of a) kinship network, specifically, that part of the network that relates the uncle to his nephews/nieces. The notion of kinship itself rests on notions of gender, procreation, marriage, inheritance, etc. At the same time, uncle profiles a human being, which is understood against multiple domains pertaining to life forms, to three-dimensional bodies and their various parts, with their features of weight, extension, shape, and so on. If we add to this the fact that, in many societies, uncles may have special rights and obligations vis-à-vis their nephews/nieces, we may appreciate that even a single word, if its meaning is fully explored, can take us into the farthest reaches of our knowledge and cultural beliefs.

It will be apparent that the distinction between base and domain is not a clear-cut one. The base may be defined as a knowledge structure that is inherently involved in profiling, whereas domains constitute background, more generalized knowledge.

Terminology in this area is also confusing because different authors have favored a range of terms for domain-based knowledge. Some scholars have used the not always clearly distinguishable terms 'scene,' 'scenario,' 'script,' and 'frame' to refer in particular to knowledge about expected sequences of events. Thus, anger refers not just to an emotional state, but is understood against an expected scenario that includes such stages as provocation, response, attempts at control, likely outcomes, and so on. Likewise, paying the restaurant bill evokes the 'restaurant script' – knowledge of the kinds of things one does, and the things that happen, when one visits culturally instituted establishments known as 'restaurants.' The notion of paying also invokes the frame of a commercial transaction, with its various participants, conventions, and activities. Mention might also be made of Searle's (1992) notions of 'the Network' and 'the Background,' whereby a particular belief takes its place within a network of other beliefs, and against the background of capacities, abilities, and general know-how.

Of special importance is Lakoff's (1987) notion of 'Idealized Cognitive Model,' or ICM – a notion that bears some affinity with the concept of 'folk theory' (again, different scholars prefer different terms). ICMs capture the fact that knowledge about a particular domain may be to some extent idealized and may not fit the actual states of affairs that we encounter on specific occasions. Consider the words bachelor and spinster. We might define these as 'adult unmarried male' and 'adult unmarried female,' respectively. The concepts, thus defined, presuppose an ICM of marriage practices in our society. According to the ICM, a person reaches a more-or-less clearly definable marriageable age. People who pass the marriageable age without marrying are referred to as bachelors and spinsters, as the case may be. The ICM attributes different motives to men and women who do not marry. Men do so out of choice, women out of necessity.

As will be appreciated, the ICM is idealized, in that it presupposes that all citizens are heterosexual and that all are equally available for marriage. It thus ignores the existence of celibate priests and of couples who live together without marrying. The discrepancy between model and reality can give rise to prototype effects. The fact that the Pope is not a representative example of the bachelor category derives from the fact that Catholic clergy are not covered by the ICM. Appeal to the ICM can also explain the different connotations of bachelor and spinster. Although one might not want to subscribe to the sexist framing of the ICM, it does offer an explanation for why eligible bachelor is an accepted collocation, whereas eligible spinster is not.


As mentioned, the meaning of a word may need to be characterized against a matrix of several domains. However, not all uses of a word need invoke each of the domains in equal measure. Certain uses may activate only some domains whereas others are backgrounded or eclipsed. The notion of a kinship network is likely to be prominent in most uses of uncle, yet when parents use the word to introduce one of their adult male friends to their child, the kinship domain is eclipsed. For another example of selective domain activation, consider the concept of a book. When you drop a book, the status of the book as a (heavy) material object is activated; when you read a book, the status of the book as a printed text is activated; when you translate a book, the status of the book as a text in a given language is foregrounded. Note that begin a book can be interpreted in various ways, according to which of the domains is activated. The activity that one begins with respect to the book could be reading, writing, editing, translating, or even (if you are a bookworm, literally!) eating.

The above examples not only illustrate the importance of domains and related notions in the study of word meanings, they also show why it has been deemed necessary to favor an encyclopedic approach to semantics. The reason is, namely, that we need to appeal to domain-based knowledge in order to account for how words are used and for the ways in which complex expressions are judged. Often, the very possibility of interpreting an expression, and of accepting it as semantically well-formed, can only be explained by reference to appropriate background knowledge.

A common objection to an encyclopedic semantics is that one cannot reasonably claim that everything a person knows about the concept designated by a word is relevant to the use of the word. It is certainly true that some facets of background knowledge may be central, and more intrinsic to a concept, while others might be more peripheral or even idiosyncratic to an individual speaker. Nevertheless, even extrinsic knowledge might become relevant to a word's use, for example, in discourse between intimates or family members. Moreover, the study of semantic change teaches us that even highly peripheral and circumstantial knowledge pertaining to a concept can sometimes leave its mark on the semantic development of a word. Langacker (1987: 160) has remarked that Jimmy Carter's presidency had a notable, if transient, effect on the semantics of peanut. Equally, Margaret Thatcher's premiership probably influenced the semantic development of handbag, at least for British speakers.

The notion of domain is relevant to two important themes in cognitive semantic research, namely metaphor and metonymy. 'Metaphor' has been analyzed in terms of the structuring of one domain of experience (usually, a more abstract, intangible domain) in terms of a more concrete, and more directly experienced domain. For example, time is commonly conceptualized in terms of space and motion, as when we speak of a long time, or say that Christmas is approaching, or even that it is just around the corner. More recently, metaphor has been studied under the more general rubric of 'conceptual blending,' whereby components of two or more input domains are incorporated into a new conceptualization, the blend. Whereas metaphor involves elements from more than one domain, 'metonymy,' in contrast, concerns elements within a single domain. Thus, we can use the name of an author to refer to books written by the author, as when we enquire whether someone has read any Dickens. The transfer of reference from person to product is possible because both are linked within domain-based knowledge pertaining to books and their authorship.

Categorization

Every situation and every entity that we encounter is uniquely different from every other. In order to be able to function in our physical and social worlds, we need to reduce this information overload. We do this by regarding some situations and some entities as being essentially 'the same.' Having categorized an entity in a certain way, we know how we should behave towards it and what properties it is likely to have. It is significant that whenever we encounter something whose categorization is unclear we typically feel uneasy. 'What is it?', we want to know.

Categorization is not a peculiarly human ability. Any creature, if it is to survive, needs at the very least to categorize its environment in terms of edible or inedible, harmful or benign. Humans have developed phenomenal categorization abilities. We operate with literally thousands, if not hundreds of thousands of categories. Moreover, our categories are flexible enough to accommodate new experiences, and we are able to create new categories as the need arises.

To know a word is to know, among other things, the range of entities and situations to which the word can be appropriately applied. To this extent, the study of word meanings is the study of the categories that these words denote. And it is not only words that can be said to designate categories. It can be argued that syntactic configurations, for example, those associated with intransitive, transitive, and ditransitive constructions, designate distinct categorizations of events and their participants.


What is the basis for categorization? Intuitively, we might want to say that things get placed in the same category because of their similarity. Similarity, however, is a slippery notion. One approach would be to define similarity in terms of the sharing of some common feature(s) or attribute(s). Similarity, then, would reduce to a matter of partial identity. Feature-based theories of categorization often require that all members of a category share all the relevant features. A corollary of this approach is that categories are well-defined, that is, it is a clear-cut matter whether a given entity does, or does not, belong in the category. It also follows that all members have equal status within the category. There are a number of problems associated with this approach. One is that the categories designated by linguistic expressions may exhibit a prototype structure. Some members of the category might be more representative than others, while the boundary of the category may not be clearly defined. In a wellknown passage, though without introducing the prototype concept, Wittgenstein (1953: x66) drew attention to categorization by family resemblance. Imagine a family photograph. Some members of the family might have the family nose, others might have the family chin, others might have the family buck teeth. No member of the family need exhibit all the family traits, yet each exhibits at least one; moreover, some members might exhibit different traits from others. Wittgenstein illustrated the notion on the example of the kinds of things we call ‘games,’ or Spiele (Wittgenstein was writing in German). Some (but not all) games are ‘amusing,’ some require skill, some involve luck, some involve competition and have winners and losers. The family resemblance notion has been usefully applied to the study of word meaning. Thus, some uses of climb (as in The plane climbed to 30 000 feet) exhibit the feature ‘ascend,’ some (such as The mountaineers climbed along the cliff ) exhibit the feature ‘move laboriously using one’s limbs.’ Considered by themselves, these two uses have very little in common. We see the relation, however, when we consider some further uses of climb (as in The boy climbed the tree), which exhibit both of the features. A fundamental problem with feature-based theories of categorization concerns the nature of the features themselves. As Wittgenstein pointed out, skill in chess is not the same as skill in tennis. The concept of skill therefore raises the very same issues of how categories are to be defined as were raised by the notion of game, which the notion of skill is supposed to explicate. Understanding similarity in terms of partial identity is problematic for another reason. Practically any two objects can be regarded as similar in some respect (for example, both may weigh less

than 100 kg., or both may cost between $5 and $5000), but this similarity does not mean that they constitute a viable or useful category. An alternative approach would be that categorization is driven by the role of the entities within broader knowledge configurations, that is, by domain-based knowledge and ICMs. Sometimes, apparently similar activities might be categorized differently, as when making marks on paper might be called, in some cases, ‘writing’, in other cases, ‘drawing.’ The distinction is based on knowledge pertaining to the nature and purpose of ‘writing’ and ‘drawing.’ On the other hand, seemingly very different activities might be brought under the same category. In terms of the actions performed, making marks with a pen on a piece of paper has little in common with depressing small, square-shaped pads on a keyboard. But given the appropriate domain-based knowledge, both can be regarded as instances of ‘writing.’ Categories, as Murphy and Medin (1985) have aptly remarked, are ultimately based in ‘theories’ (that is, in ICMs). The matter may be illustrated by the distinction (admittedly, not always a clear-cut one) between ‘natural kinds’ and ‘nominal kinds.’ Natural kinds are believed to be given by nature and are presumed to have a defining ‘essence’; moreover, we are inclined to defer to the scientists for an elucidation of their defining essence. Nominal kinds, in contrast, are often defined vis-a`-vis human concerns, and their perceptual properties and/or their function is often paramount in their categorization. Remarkably, even very young children are sensitive to the difference (Keil, 1989). Suppose a zebra had its stripes painted out; would it thereby become a horse? Or suppose a giraffe had its neck surgically shortened; would it cease to be a giraffe? Even very young children respond: ‘No.’ Changes to the appearance of the entities would not alter their defining essence. But suppose you saw off the back of a chair. Does the chair become a stool? Arguably, it does. In this case, a ‘superficial’ aspect is crucial to categorization. The dynamics of categorization may be illustrated by considering the relationship between a linguistic expression (e.g., the word fruit) and its possible referents (e.g., an apple). We can address the relation from two perspectives. We can ask, for this word, what are the things in the world to which the word can be applied? Alternatively, we can ask, for this thing, what are the linguistic expressions that can refer to it? The first perspective (the ‘referential’ perspective: ‘To what does this word apply?’) operationalizes the notion of prototype. Fruit designates, primarily, such things as apples, pears, and bananas – these are the fruit prototypes. Less commonly, the word might be used to refer to olives and tomatoes. The second


perspective (the 'onomasiological,' or naming perspective: 'What is this thing to be called?') operationalizes the notion of basic level. It is evident that one and the same thing can be named by terms that differ in their specificity vs. generality. For example, the thing you are now sitting on might be called a chair, an office chair, a piece of furniture, an artifact, or even a thing. All of these designations could be equally 'correct.' Yet, in the absence of special reasons to the contrary, you would probably call the thing a chair. (This, for example, is probably the answer you would give if a foreign learner wanted to know what the thing is called in English.) Chair is a basic level term, the basic level being the level in a taxonomy at which things are normally named. The basic level has this special status because categorization at this level provides maximum information about an entity. Thus, at the basic level, chairs contrast with tables, beds, and cupboards – very different kinds of things, in terms of their appearance, use, and function. Terms at a lower level in a taxonomy, e.g., kitchen chair vs. office chair, do not exhibit such a sharp contrast, while terms at a higher level are too general to give much information at all about an entity. Not surprisingly, basic level terms turn out to be frequently used; they are generally quite short and morphologically simple, and they are learned early in language acquisition.
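The contrast between classical, feature-checking categorization and graded, family-resemblance categorization can be made concrete in a short illustration. The following Python sketch is purely illustrative, and the feature sets are invented for the purpose: classical membership demands all defining features, whereas a prototype-style measure grades membership by feature overlap.

# A minimal sketch (not from the article) contrasting classical,
# all-features category membership with graded, prototype-style
# membership based on feature overlap. Feature names are invented.

GAME_FEATURES = {"amusing", "skill", "luck", "competition", "winners"}

exemplars = {
    "chess":     {"skill", "competition", "winners"},
    "solitaire": {"amusing", "luck"},
    "tag":       {"amusing", "competition"},
}

def classical_member(features, category=GAME_FEATURES):
    """Classical view: a member must exhibit ALL defining features."""
    return category <= features

def typicality(features, category=GAME_FEATURES):
    """Prototype view: membership is graded by feature overlap."""
    return len(features & category) / len(category)

for name, feats in exemplars.items():
    print(name, classical_member(feats), round(typicality(feats), 2))
# No exemplar passes the classical test, yet each overlaps the
# category to some degree -- a family resemblance structure.

On the classical test, nothing here counts as a game; on the graded measure, each exemplar belongs to the category to some degree, which is exactly the family resemblance pattern Wittgenstein described.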

The Usage-Basis of Cognitive Semantics

Langacker has described cognitive linguistics as a 'usage-based' approach. The claim can be understood in two ways. On the one hand, it could be a statement about the methodology of cognitive linguistic research. Usage-based research would be research based on authentic data, as documented in a corpus, recorded in the field, or elicited in controlled situations, rather than on invented, constructed data. Although different researchers might prefer different methodologies, a glance at practically any publication by leading figures in the field, such as Lakoff, Langacker, and Talmy, will show that cognitive linguistics, as a movement, cannot reasonably be said to be 'usage-based' in this sense.

On a second interpretation, usage-based refers to the presumed nature of linguistic knowledge and the manner in which it is acquired, mentally represented, and accessed. The claim, namely, is that a language is learned 'bottom-up' through exposure to usage events. A usage event presents the language user/learner with an actual vocalization in association with a fine-grained, context-dependent conceptualization. Acquisition proceeds through generalization over usage events. Necessarily, many of the

context-dependent particularities of the usage events will be filtered out, leaving only a schematic representation of both the phonology and the semantics. In this respect, cognitive linguistics contrasts strikingly with ‘top-down’ theories of acquisition, whereby the basic ‘architecture’ of a language is presumed to be genetically given, exposure to usage data being needed merely to trigger the appropriate settings of innately given parameters. The usage-based approach raises two questions, which have loomed large in cognitive semantics research. These concern (a) the units over which schematization occurs, and (b) the extent of schematization. Let us first consider the second of these issues. One of the most vibrant areas of cognitive semantic research has been the study of lexical polysemy. It is a common observation that words exhibit a range of different meanings according to the contexts in which they are used. Indeed, the extent of polysemy appears to be roughly proportional to the frequency with which a word is used. Not surprisingly, among the most highly polysemous words in English are the prepositions. Consider the preposition on. Given such uses as the book on the table and the cat on the mat, it is easy to see how a schematic, de-contextualized image of the on-relation could emerge. It involves locating one object with respect to another in terms of such aspects as contact, verticality, and support. But the preposition has many other uses, as exemplified by the fly on the ceiling, the picture on the wall, the leaves on the tree, the writing on the blackboard, the washing on the clothes-line, the shoes on my feet, the ring on my finger. Do we proceed with further abstraction and schematization, coming up with a characterization of the on-relation that is compatible with all of these uses? Or do we identify a set of discrete meanings, which we may then attempt to relate in a prototype or a family resemblance category? If we adopt this latter approach, another question arises, namely, just how many distinct meanings are to be postulated. Three? Ten? Several dozen? Do we want to say that the water on the floor and the cat on the mat exemplify different senses of on, on the grounds that the relation between cat and mat is not quite the same as that between the water and the floor? Needless to say, the issue becomes even more critical when we take into consideration the vast range of nonspatial uses of the preposition: on television, be on a diet, be on drugs, on Monday, and countless more. In general, as is consistent with a usage-based orientation, cognitive semanticists have tended to focus on the particularities of low-level generalizations, an approach that has frequently been censured for the


'polysemy explosion' that it engenders. Nevertheless, the role of more schematic representations is not denied. Langacker, in this connection, draws attention to the 'rule-list fallacy.' The fallacy resides in the notion that rules (high-level generalizations), once acquired, necessarily expunge knowledge of the lower-level generalizations on whose basis the rules have been abstracted. It is entirely plausible that high- and low-level generalizations might co-exist in the mental grammar. Indeed, knowledge of low-level generalizations – not too far removed, in terms of their schematicity, from actually encountered usage-events – may be needed in order to account for speakers' fluency in their language.

The topic interacts with a more general issue, namely, the relative roles of 'computation' vs. 'storage' in language knowledge and language use. Humans are not generally very good at computation, but we are quite adept at storing and retrieving specific information. Consider arithmetical operations. We can, to be sure, compute the product of 12 and 12 by applying general rules, but the process is slow and laborious and subject to error, and some people may need the help of pencil and paper. It is far easier, quicker, and more reliable to access the ready-made solution, if we have learned it, namely, that 12 × 12 = 144. The point of the analogy is that in order for speech production and understanding to proceed smoothly and rapidly, it may well be the case that we access ready-made patterns and preformed chunks, which have been learned in their specific detail, even though these larger units could be assembled in accordance with general principles.

The role of formulaic language in fluency and idiomaticity has been investigated especially by linguists engaged in corpus-based lexicography and second language acquisition research. Their findings lend support to the view that linguistic knowledge may indeed be represented at a relatively low level. We might suppose, therefore, that the ring on my finger is judged to be acceptable, not because some highly schematic, underspecified sense of on has been contextually elaborated, nor because some rather specific sense of on has been selected, but simply because speakers have encountered, and learned, such an expression.

These considerations lead into the second aspect of a usage-based model: what are the units over which schematization takes place? The study of lexical semantics has typically been based on the assumption that schematization takes place over word-sized units. Indeed, the above discussion was framed in terms of how many meanings the preposition on might have. The study of idioms and related phenomena, such as collocations, constructions, and formulaic expressions, casts doubt on the validity of this assumption.
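The 'computation vs. storage' trade-off invoked above can be sketched in code. The following Python fragment is an informal analogy only (the function and the counter are invented for the illustration): a result such as 12 × 12 is computed by rule the first time and thereafter retrieved ready-made, much as preformed linguistic chunks may be.

# A small illustration (mine, not the article's) of the 'computation
# vs. storage' trade-off using memoization: a result computed once by
# rule is thereafter retrieved ready-made, like 12 x 12 = 144.

from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def product(a: int, b: int) -> int:
    """Compute a * b 'by rule' (repeated addition); cache the result."""
    global calls
    calls += 1
    return sum(a for _ in range(b))

product(12, 12)   # computed step by step (the slow path)
product(12, 12)   # retrieved from storage (the fast path)
print(calls)      # -> 1: the rule ran only once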

Corpus-based studies, in particular, have drawn attention to the fact that words may need to be characterized in terms of the constructions in which they occur, and, conversely, that constructions need to be characterized in terms of the words that are eligible to occur in them. It might be inappropriate, therefore, to speak of the 'mental lexicon,' understood as a list of words with their phonological and semantic properties. A more appropriate concept might be the 'mental phrasicon,' or the 'mental constructicon.' It would certainly be consistent with a usage-based model to assume that language is represented as schematizations over the units in terms of which language is encountered – not individual words as such, but phrases, constructions, and even utterance-length units.
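As a toy illustration of schematization over multiword units, the Python sketch below (the mini-corpus and the frequency threshold are invented) counts recurrent bigrams in a handful of usage events and stores those that recur as ready-made chunks, a crude stand-in for a 'mental phrasicon.'

# A toy sketch (assumptions mine) of schematization over units larger
# than the word: recurrent bigrams in a mini-corpus of usage events
# are stored as ready-made chunks, a crude 'mental phrasicon'.

from collections import Counter

usage_events = [
    "the ring on my finger",
    "the ring on my finger sparkled",
    "a ring on my finger",
]

bigrams = Counter()
for event in usage_events:
    words = event.split()
    bigrams.update(zip(words, words[1:]))

# Keep any bigram attested more than once as a stored chunk.
phrasicon = {bg for bg, n in bigrams.items() if n > 1}
print(sorted(phrasicon))
# e.g. ('ring', 'on'), ('on', 'my'), ('my', 'finger') recur and are stored.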

Construal

Linguistic meaning has often been approached in terms of the correspondence between an expression and the situation that it designates. Given the expression The cat is on the mat, and a situation in which there is a mat with a cat on it, we might be inclined to say that the linguistic expression fully and accurately describes the observed situation. The matter, however, is not so straightforward. For any conceived situation, certain facets will have been ignored for the purpose of its linguistic expression. Where was the mat? How big was it? What color was it? Was it laid out flat or was it rolled up? Was the cat in the center of the mat? Was the cat sitting or lying? And so on. Secondly, the speaker is able to categorize the situation at different levels of schematicity. Instead of saying that the cat is on the mat, the speaker could have stated that the animal is sprawled out on my new purchase. The speaker's decision to include or exclude certain facets of the scene, and to categorize the scene and its participants in a certain way, are symptomatic of the broader phenomenon of 'construal,' namely, the way in which a conceived situation is mentally structured for the purpose of its linguistic expression. There is a sense in which the whole cognitive semantics enterprise is concerned with how speakers construe a conceived situation and how this construal receives linguistic expression, as a function of the conventional resources of a particular language. Some important facets of construal are discussed below.

Figure-Ground Organization

A feature of our perceptual mechanism is that a perceived scene is structured in terms of ‘figure’ and ‘ground.’ Certain aspects of a scene are likely to be especially prominent and specifically attended to, whereas others are relegated to the background


context. Given the situation of the cat and the mat, we are likely to say that the cat is on the mat, rather than that the mat is under the cat. Both wordings might be equally true in terms of their correspondence with the situation. Yet one would normally be preferred over the other. This preference arises because we would most likely select the cat as the figure, whose location is described with respect to the mat, rather than the other way round. Figure-ground organization is ubiquitous in perception, most obviously in visual perception, but also in other modalities. When we listen to a lecture, the speaker's voice is (hopefully) the auditory figure, which stands out against the sound of the air conditioning and of people coughing and shuffling.

A number of aspects influence the figure-ground alignment. The figure, as the primary object of attention, is likely to be moveable and variable, it can act, or be acted on, independently of the ground, and it is likely to be more information-rich (for the perceiver) than the ground. Moreover, animate entities – especially if human – are likely to attract our attention as figure vis-à-vis inanimate entities. The ground, in contrast, is likely to be static relative to the figure, it is presupposed, and provides the context for the characterization of the figure. It must be emphasized, however, that while certain inherent features of a scene may strongly suggest a certain figure-ground alignment, we can often choose to reverse the relation. While at a lecture, we could consciously direct our attention to a background noise, relegating the speaker's voice to the ground.

Figure-ground organization is built into language at many levels. The contrast between an active clause and its passive counterpart can be understood in such terms. The farmer shot the rabbit presents the farmer as the figure – we are interested in what the farmer did. The rabbit was shot (by the farmer) presents the rabbit as figure – we are interested in what happened to the rabbit. Note that what is at issue in these examples is not so much how the scene as such might be visually perceived, but how it is mentally organized by the speaker for its linguistic encoding. Figure-ground asymmetry is also relevant to the encoding of reciprocal relations. If A resembles B, then B obviously resembles A. Yet we would be far more likely to observe that a boy resembles his grandfather than to say that an old man resembles his grandson. We take the old man as the ground, against which the growing boy is assessed, rather than vice versa.

Force Dynamics

Another aspect of construal is illustrated by the contrast between The ball rolled along the floor and

The ball kept rolling along the floor. There would be no way to differentiate these sentences in terms of objective features of the situations that they designate. Whenever the one sentence can truthfully be applied to a situation, so can the other. Yet the two sentences construe the situation differently. The difference was investigated by Talmy in terms of his notion of 'force dynamics.' We view entities as having an inherent tendency either for motion (or change) or for rest (or inaction). When entities interact, their inherent force dynamic tendencies also interact. The force of one entity may overcome, or fail to overcome the force of another, or the two forces may be in equilibrium. Typically, in a force-dynamic interaction, our attention goes on a figure entity (the agonist), whose behavior is tracked relative to an antagonist.

The ball rolled along the floor presents the motion of the ball as resulting from its inherent tendency towards motion. But if we say that the ball kept rolling along the floor, we assume a force opposing the ball's activity, which, however, was not strong enough to overcome the ball's tendency towards motion. It is the verb keep that introduces a force-dynamic interaction into the situation, as we construe it. It conveys that the tendency towards motion of the agonist (i.e., the ball) was able to overcome an (unnamed) opposing force. The opposing force may, of course, be explicitly stated: The ball kept rolling, despite our attempt to halt it. Force-dynamic interaction holds even with respect to a 'static' situation. I kept silent designates the continuation of a static situation. The stasis, however, results from the fact that an (unnamed) antagonist was not powerful enough to cause the situation to change.

Quite a few lexical items have an implicit force-dynamic content, such as keep, prevent, despite, and even finally and (to) manage. Thus, I finally managed to start my car not only conveys that I did start my car, but also that I had to overcome an opposing force. Force dynamics offers an interesting perspective on causation. Prototypically, causation (as expressed by verbs such as cause or make) involves the agonist (the causer) exerting force that overcomes the inactivity of the antagonist. Variants of this scenario include letting and preventing. Let conveys that the agonist fails to engage with the antagonist, while prevent conveys that the agonist overcomes the disposition towards action of the antagonist. Another fruitful field of application has been in the study of modality (Sweetser, 1990). Thus, I couldn't leave conveys that an unnamed antagonist (whether this be another person, a law or proscription, an ethical consideration, a broken leg, or even the fact of a locked door) overcame my disposition to leave.


Similarly, I had to leave presents my leaving as resulting from a force that overcame my disposition to remain where I was.
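Talmy's force-dynamic schema lends itself to a simple formalization. The Python sketch below is a deliberately crude rendering (the numeric 'strengths' and entity names are inventions with no theoretical status): an agonist with an intrinsic tendency is pitted against an antagonist, and the stronger force determines the outcome, as in the keep examples above.

# A schematic sketch of Talmy-style force dynamics (numbers and names
# are my own, purely illustrative). An agonist with an intrinsic
# tendency meets an antagonist; the stronger force wins out.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    tendency: str   # 'motion' or 'rest'
    strength: int

def outcome(agonist: Entity, antagonist: Entity) -> str:
    if agonist.strength > antagonist.strength:
        return f"{agonist.name} realizes its tendency toward {agonist.tendency}"
    return f"{antagonist.name} overcomes {agonist.name}"

ball = Entity("the ball", tendency="motion", strength=3)
friction = Entity("friction", tendency="rest", strength=1)

# 'The ball kept rolling': the opposing force is too weak to prevail.
print(outcome(ball, friction))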

Objective vs. Subjective Construal

Any conceptualization involves a relation between the subject of conceptualization (the person entertaining the conceptualization) and the object of conceptualization (the situation that is conceptualized). In The cat is on the mat, the object of conceptualization is, obviously, the location of the cat vis-à-vis the mat. Although not explicitly mentioned in the sentence, the subject of conceptualization is relevant to the conceptualization in a number of ways. Firstly, the use of the definite noun phrases the cat and the mat conveys that the referents of these expressions are uniquely identifiable to the speaker, and also that the speaker expects the hearer to be able to uniquely identify the referents. (It's not just a cat, but the cat.) Also, the use of the tensed verb is conveys that the situation is claimed to hold at the time the speaker utters the expression. Since the speaker's role is not itself the object of conceptualization, we may say that the speaker is being construed subjectively.

Langacker has illustrated the notion of objective vs. subjective construal by means of an analogy. For persons who need to wear them, their spectacles are not usually the object of their visual experience. Spectacles function simply as an aid to the seeing process but are not themselves seen. Their role is therefore a subjective one. A person can, to be sure, take off their spectacles and visually examine them, in which case, the spectacles are viewed objectively.

'Objectification,' then, is the process whereby some facet of the subject of conceptualization becomes the object of conceptualization. 'Don't talk to your mother like that,' a woman says to her child. Here, the speaker makes herself the object of conceptualization by referring to herself in the third person. 'Subjectification,' in contrast, is the process whereby some facet of the object of conceptualization gets to be located in the subject of conceptualization. Take, as an example, the contrast between Jim walked over the hill and Jim lives over the hill. The first sentence profiles the motion of the figure entity vis-à-vis the ground. The second merely designates the location of the figure. The location, however, is presented as one that lies at the end of a path that goes over the hill. Importantly, the path is not traced by the object of conceptualization, that is, by Jim. Rather, it is the subject of conceptualization who mentally traces the path.

Subjectification has been identified as an important component of grammaticalization. Consider the use of (be) going to as a marker of the future. Ellen is going to the store can be construed objectively – Ellen

is currently engaged in the process of moving towards the store. If we continue to observe Ellen's motion, we will probably find that she ends up at the store. We can easily see how (be) going to is likely to take on connotations of prediction. Indeed, Ellen is going to the store might be interpreted in just such a way, not as a statement about Ellen's current activity, but as a prediction about the future. Similarly, It's going to rain and You're going to fall have the force of a prediction, extrapolated from the observation of current circumstances. Notice, in these examples, that in spite of the use of the verb go, there is no objective movement, whether literal or metaphorical, towards the future situation. Rather, it is the conceptualizer who mentally traces the future evolution of the present situation. The idea of motion, contained in the verb go, has been subjectified, that is, it has been located in the subject of conceptualization.

A special manifestation of subjectification is the phenomenon of 'fictive motion.' This typically involves the use of a basically dynamic expression to designate an objectively static situation. Go, we might say, is basically a motion verb, or, more generally, a change of state verb (I went to the airport, The milk went sour, The lights went red). But consider a statement that the road goes through the mountains. No motion is involved here – the road is merely configured in a certain way; it does not (objectively) go anywhere. The idea of motion implied by go can, however, be attributed to the subject of conceptualization. One mentally traces the path followed by the road through the mountains. Mental motion on the part of the conceptualizer is also invoked in reference to the road from London to Oxford, which, of course, could be the very same entity, objectively speaking, as the road from Oxford to London. Similarly, one and the same entity could be referred to, either as the gate into the garden or the gate out of the garden.

Linguistic Conventions

Although speakers may construe a situation in many alternate ways, their options are to some extent constrained by the linguistic resources available to them. The matter can be illustrated with respect to language-specific lexicalization patterns. Talmy has drawn attention to alternative ways in which a motion event can be linguistically encoded. Consider the English expression I flew across the Atlantic. In English (and in other Germanic languages), we prefer to encode the manner of motion by means of the verb (fly), the path of the motion being expressed in a prepositional phrase (across the Atlantic). In Romance languages, an alternative construal is preferred. Path is encoded by the verb, manner by means of an adverbial phrase: J'ai traversé l'Atlantique en avion


‘I crossed the Atlantic by plane.’ Notice that, in the French sentence, the statement of the manner of motion is optional; the French speaker does not have to state how the Atlantic was crossed, merely that it was crossed. Comparison of the ways in which speakers of different languages give linguistic expression to visually presented situations, and of the ways in which texts in one language are translated into another, supports the notion that situations tend to be construed in a manner that is compatible with the construals made available by the conventional resources of different languages (Slobin, 1996). For example, speakers of English (and Germanic languages) will tend to specify the manner of motion in much finer detail than speakers of Romance languages.
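The two lexicalization patterns can be made concrete with a small sketch. The Python fragment below is illustrative only (the event representation and the tiny verb tables are my own inventions): the same motion event, decomposed into figure, manner, and path, is rendered Germanic-style, with manner in the verb, or Romance-style, with path in the verb and manner in an optional adjunct.

# A sketch (my own simplification) of Talmy's two lexicalization
# patterns for a motion event with components {figure, manner, path}.

motion_event = {"figure": "I", "manner": "flying", "path": "across the Atlantic"}

def germanic(e):
    # Manner in the verb, path in a prepositional phrase.
    verb = {"flying": "flew"}[e["manner"]]
    return f"{e['figure']} {verb} {e['path']}"

def romance(e, state_manner=False):
    # Path in the verb, manner in an optional adjunct.
    verb = {"across the Atlantic": "crossed the Atlantic"}[e["path"]]
    adjunct = " by plane" if state_manner else ""
    return f"{e['figure']} {verb}{adjunct}"

print(germanic(motion_event))          # I flew across the Atlantic
print(romance(motion_event, True))     # I crossed the Atlantic by plane

Note that in the Romance-style rendering the manner component is an optional add-on, mirroring the observation that the French speaker need not state how the Atlantic was crossed.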

Embodiment

An important theme in cognitive semantic research has been the insight that the relation between words and the world is mediated by the language user him/herself. The language user is a physical being, with its various parts, existing in time and space, who is subject to a gravitational field, and who engages in bodily interaction with entities in the environment. Quite a number of our concepts are directly related to aspects of our bodily experience. To put the matter somewhat fancifully: if we humans were creatures with a different mode of existence, if, for example, we were gelatinous, airborne creatures, floating around in the stratosphere, it is doubtful whether we could ever have access to many of the concepts that are lexicalized in presently existing human languages. Thus, to understand the concept of what it means for an object to be heavy, we have to have experienced the sensation of holding, lifting, or trying to move, a heavy object. The notion of heavy cannot be fully explicated in purely propositional terms, nor in terms of verbal paraphrase.

A characteristic of basic level terms, in particular, is that, very often, they are understood in terms of how we would typically interact with the entities in question. Consider the concept of chair. We understand the concept, not simply in terms of what chairs look like, nor even in terms of their various parts and how they are interrelated, but in terms of what we do with our bodies with respect to them, namely, we sit on them, and they support our body weight. We have no such 'embodied' conceptualization of more schematic concepts such as 'thing' or 'artifact.' We do not understand these categories in terms of how we characteristically interact with them.

The role of bodily experiences has been elaborated in the theory of image schemas (Johnson, 1987;

Lakoff, 1987). ‘Image schemas’ are common recurring patterns of bodily experience. Examples include notions of containment, support, balance, orientation (up/down), whole/part, motion along a path from a source to a goal, and many more. (Force dynamic interactions, discussed above, may also be understood in image schematic terms.) Take the notion of balance. We experience balance when trying to stand on one leg, when learning to ride a bicycle, or when trying to remain upright in a strong wind. The notion involves the distribution of weights around a central axis. (Balance, therefore, is understood in force-dynamic terms.) The notion can be applied to many domains of experience. We can speak of a balanced diet, a balanced argument, a political balance of power, and of the balance of a picture or photograph. One could, no doubt, analyze these expressions as examples of metaphor. This approach, however, might be to miss the embodied, nonpropositional nature of the concept. Our experience of balancing provides a primitive, experiential schema that can be instantiated in many different domains.

Compositionality

A particularly contentious issue in semantics concerns the question of compositionality. According to the compositionality principle, the properties (here: the semantic properties) of the whole can be computed from the properties of the parts and the manner of their combination. From one point of view, compositionality is a self-evident fact about human language. The cat is on the mat means what it does in virtue of the meanings of the component words, and the fact that the words stand in certain syntactic configurations. Speakers of English can work out what the sentence means; they do not have to have specifically learned this sentence. Unless compositionality were a feature of language, speakers would not be able to construct, and to understand, novel sentences. The very fact of linguistic creativity suggests that compositionality has got to be the case. Not surprisingly, therefore, in many linguistic theories, the compositionality of natural languages is axiomatic, and the study of semantics is to a large extent the study of the processes of semantic composition.

Cognitive linguists, however, have drawn attention to some serious problems with the notion. It is, of course, generally accepted that idioms are problematic for the compositionality principle. Indeed, idioms are commonly defined as expressions that are not compositional. The expression spill the beans 'inadvertently reveal confidential information' is idiomatic precisely because the expression is not compositional, that is, its meaning cannot be worked


out on the basis of the meanings that spill and beans have elsewhere in the language. Leaving aside obviously idiomatic expressions – which, by definition, are noncompositional in their semantics – it is remarkable that the interpretation of an expression typically goes beyond, and may even be at variance with, the information that is linguistically encoded. Langacker (1987: 279–282) discussed the example the football under the table. The expression is clearly not idiomatic, neither would it seem to be problematic for the compositionality principle. Take a moment, however, to visualize the described configuration. Probably, you will imagine a table standing in its canonical position, with its legs on the floor, and the football resting on the floor, approximately in the center of the polygon defined by the bottom of the table's legs. Note, however, that these specific details of the visualization were not encoded in the expression – they have been supplied on the basis of encyclopedic knowledge about tables. The purely compositional meaning of the expression has been enriched by encyclopedic knowledge.

There is more to this example, however. If you think about it carefully, you will see that the enriched interpretation is in an important sense at variance with the compositional meaning. If by 'X is under Y,' we mean that X is at a place lower than the place of Y, the football, strictly speaking, is not actually under the table at all. The football, namely, is not at a place that is lower than the lowest part of the table. In interpreting even this seemingly unproblematic expression, we have had to go beyond, and to distort, its strictly compositional meaning.

This state of affairs is not unexpected on a usage-based model. The resources of a language – lexical, syntactic, phraseological – are abstractions over encountered uses. The meanings abstracted from previous usage events are necessarily schematic, and may not fit precisely the requirements of the situation at hand. In giving linguistic expression to a conceptualization, we search for the linguistic resources that most closely match our intentions, accepting that some discrepancies and imprecisions are likely to occur. We trust to the inferencing powers of our interlocutors to achieve the fit between the expression and the intended conceptualization.
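For contrast with the enriched interpretation just described, the strictly compositional view can be caricatured in a few lines of Python. The model below is invented and deliberately minimal: word denotations are sets of entities, on is a set of ordered pairs, and the truth of the whole is computed mechanically from these parts, with none of the encyclopedic enrichment the discussion above points to.

# A bare-bones illustration (not a serious semantics) of the
# compositionality principle: the truth of 'the cat is on the mat'
# is computed from word denotations plus the mode of combination.

entities = {"cat1", "mat1", "table1"}
on = {("cat1", "mat1")}          # pairs standing in the on-relation

lexicon = {"cat": {"cat1"}, "mat": {"mat1"}, "table": {"table1"}}

def is_on(subject_noun: str, object_noun: str) -> bool:
    """Compose: pick the (unique) referents and test the relation."""
    subj = next(iter(lexicon[subject_noun]))
    obj = next(iter(lexicon[object_noun]))
    return (subj, obj) in on

print(is_on("cat", "mat"))    # True
print(is_on("cat", "table"))  # False: composition alone decides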

The Conceptual Basis of Syntactic Categories

In many linguistic theories, syntax constitutes an autonomous level of organization, which mediates between phonology and semantics. As pointed out, cognitive linguistics rejects this approach. Rather, syntactic organization is itself taken to be inherently meaningful.

Several things flow from this conception of syntactic organization. First, the notion of 'meaningless' morphemes gains little support. It is sometimes said, for example, that the preposition of is a dummy element in expressions such as the destruction of the city, inserted by the syntax in order to satisfy the constraint that a noun cannot take a noun phrase as its complement. The cognitive semantic view of the matter would be that of does indeed have a meaning, albeit a fairly abstract one; specifically, of profiles an intrinsic relation between entities. Just as talk of a student entails some subject matter that is studied, and talk of a photograph entails some thing that was photographed, so talk of destruction entails some entity that was destroyed. These inherent relations between entities are profiled by the same preposition: destruction of the city, a student of physics, a photograph of me.

More far-reaching, perhaps, are the implications of the cognitive linguistic approach for the study of word classes. It is almost a truism, in modern linguistics, that word classes – noun, verb, adjective, etc. – must be defined, not in semantic terms, but in terms of their distribution. The word explosion is a noun, not because of what it means, but because it distributes like a noun – it can take a determiner, it pluralizes, and so on. Such an approach is tantamount to claiming that syntax constitutes an autonomous level of linguistic organization, independent of semantics. Many cognitive linguists, committed to the symbolic view of language, have been skeptical of this approach and have reexamined the traditional view that word classes are indeed semantically based.

There are a number of ways in which the conceptual approach can be implemented. One is a prototype approach. Prototypically, nouns designate concrete, enduring, individuated objects, while verbs designate rapid changes of state (Givón, 1984). A problem with this approach is that explosion, while semantically not at all a prototypical noun, is nevertheless a noun, whose distributional properties are fully comparable with those of supposedly prototypical nouns, such as table and chair. A second approach is functional (Croft, 1991). Nominals designate what is being talked about; adjectivals specify nominals in greater detail; verbal predications make assertions about nominals; while adverbials specify verbal predications more precisely. Each of these functionally defined categories has prototypical instances. Less prototypical instances often bear distinctive morphological markings. Thus, explosion betrays its nonprototypical status as an entity to be talked about by its derivational morphology.

Langacker's aim has been more ambitious. It is to offer unified conceptual definitions of the major


lexical and syntactic categories. Essentially, the claim is that the syntactic category of a word is determined by the nature of its profile. Conversely, the status of a word as noun, verb, adjective, etc., imposes a certain kind of profile on the associated semantic representation. A first distinction is between nominal vs. relational profiles. A good way to understand the distinction is by reference to autonomous vs. dependent conceptualizations. A concept is 'autonomous' if it can be entertained without necessary reference to other entities. Of course, there can be no such thing as a fully autonomous concept, given the ubiquity of domain-based knowledge and of the profile-base relation in the understanding of concepts. Nevertheless, relatively autonomous concepts can be proposed, for example, the concept of hypotenuse. As stated earlier, the word hypotenuse profiles a straight line. Although the concept is understood against the base of a right-angled triangle, the word does not profile the triangle, nor the relation of the hypotenuse to the triangle. It is in this sense that nominal profiles are autonomous.

Compare, now, the preposition on. The word profiles a kind of (prototypically: spatial) relation between two entities, often referred to as the 'trajector' and the 'landmark' of the relation. The trajector can be thought of as the figure, i.e., the more prominent element in the relation, the landmark as the ground, i.e., the less prominent participant. Without some schematic notion of the trajector and landmark, the notion of 'on' lacks coherence. It is in this sense that the conceptualization associated with on is 'dependent' – it inherently requires reference to other entities.

Relational profiles are subject to further distinctions. On designates an atemporal relation – the time at which the relation holds, or over which it holds, is not profiled. Verbs, on the other hand, inherently designate temporal relations. Like, as a verb, designates a relation between a trajector (the verb's subject) and a landmark (its direct object). The temporality of the relation is a facet of the profile. Another distinction concerns the nature of the trajector and landmark. These may themselves be either nominal or relational, and, if relational, temporal or atemporal. Prepositions (before lunch) take a nominal landmark, subordinating conjunctions (before we had lunch) take as their landmark a temporal relation (i.e., a clause). Figure 2 depicts a taxonomy of the major lexical categories based on the nature of their profiles.

The combination of smaller units into larger configurations can now be understood in terms of the way in which the profiles of the smaller units can be combined. Figure 3 illustrates the assembly of the book on the table. (The role of the determiners is

Figure 2 Taxonomy of main lexical categories. Reproduced from Taylor J R (2002) Cognitive grammar. Oxford: Oxford University Press, with permission from Oxford University Press.

Figure 3 Combination of smaller units into larger configurations. Reproduced from Taylor J R (2002) Cognitive grammar. Oxford: Oxford University Press, with permission from Oxford University Press.


ignored so as not to unduly complicate the discussion.) In accordance with conventions established by Langacker, nominal profiles are represented by circles, relations by lines between circles, while profiled entities (whether nominal or relational) are depicted in bold. The table, having a nominal profile, is able to function as the landmark of on. The resulting expression, on the table, inherits the relational profile of on. The book is able to function as the trajector of this expression, whereby the resulting expression, the book on the table, inherits the nominal profile of book. The composite expression thus designates a book; the book, however, is one which is taken to be on the table. The pattern illustrated in Figure 3 is valid, not only for the assembly of the specific expression in question, but also, mutatis mutandis, for the assembly of any nominal modified by a prepositional phrase. The pattern, therefore, is able to function as a schema that sanctions expressions of a similar internal structure.
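The assembly that Figure 3 depicts can be approximated in code. The Python sketch below is a rough formalization of my own, not Langacker's notation: a relational profile carries trajector and landmark slots; filling the landmark yields on the table, and filling the trajector yields a composite whose designatum is inherited from the trajector nominal.

# A rough sketch (my formalization, not Langacker's) of profile
# combination for 'the book on the table': a relational profile takes
# a landmark and a trajector, and the composite inherits a profile.

from dataclasses import dataclass

@dataclass
class Nominal:
    head: str                      # profiles a thing

@dataclass
class Relation:
    name: str                      # profiles a relation (e.g. 'on')
    trajector: object = None       # figure-like participant
    landmark: object = None        # ground-like participant

book, table = Nominal("book"), Nominal("table")

on_the_table = Relation("on", landmark=table)    # still relational
book_on_the_table = Relation("on", trajector=book, landmark=table)

# The composite inherits the nominal profile of its trajector:
composite_profile = book_on_the_table.trajector.head
print(composite_profile)   # 'book' -- the expression designates a book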

Relativism vs. Nativism

The cognitive semantics program highlights the tension between relativism and nativism. The relativist position is that a language brings with it certain categorizations and conceptualizations of the world. The language that one speaks therefore imposes certain construals of the world. It will be evident that a number of themes in cognitive semantics are liable to emphasize the language-specific, and indeed the culture-specific character of semantic structures. For example, emphasis on the symbolic nature of language, in particular the proposal to ground syntactic categories and syntactic relations in conceptual terms, would lead one to suppose that different syntactic structures, as manifested in different languages, would be based in different conceptualizations of the world. Equally, a focus on the role of domain-based knowledge in the characterization of meaning is likely to accentuate the culture-specific nature of linguistic semantics.

On the other hand, several themes in cognitive linguistics are likely to be compatible with the nativist position, according to which the commonalities of human languages reflect a common, universal cognitive endowment. For example, the claim that language is embedded in general cognitive processes and abilities – if combined with the not unreasonable assumption that all humans share roughly the same cognitive capacities – would tend to highlight what is common to the conceptualizations symbolized in different languages. All languages, it may be presumed, manifest embodiment, image schematic and force dynamic construals, Figure-Ground asymmetries, and nominal vs. relational profiling.

Aware of the tension between relativism and nativism, Langacker has scrupulously distinguished between 'conceptual structure' and 'semantic structure.' Conceptual structure – how people perceive and cognize their world (including the inner world of the imagination) – is taken to be universal and based in shared capacities. Semantic structure, on the other hand, pertains to the way in which conceptual structure is formatted in order that it is consistent with the conventionalized resources of a given language. Compare the ways in which English and French speakers refer to bodily sensations of cold. English requires an attributive adjectival construction (I am cold), French requires a possessive construction (J'ai froid 'I have cold'). Although the experience is construed differently in the two languages, one cannot on this basis alone draw the inference that English and French speakers differ in their phenomenological experience of 'being cold.'

In order to substantiate the claim that the different semantic, syntactic, and lexical resources of different languages do influence conceptualizations of the world, it would be necessary to go beyond the purely linguistic evidence and document correlations between linguistic organization and nonlinguistic cognition. Currently, the matter is hotly debated. Evidence is, however, emerging that the different construals conventionalized in different languages may sometimes have repercussions in nonlinguistic domains, giving some support to the relativist position. For example, in English (as in many other languages), we may state the location of an object by saying that it is in front of us, to our left, or behind another object or person. In some languages, for example, in Guugu Yimithirr (Guguyimidjir; spoken in Northern Queensland, Australia) and Tzeltal (spoken in Mexico), these resources are not available. Rather, an object's location has to be stated with respect to the cardinal points (to the north, etc.) or to some fixed geophysical landmark (upstream, mountain-wards). Such differences in the linguistic construal of spatial relations have been shown to correlate with nonlinguistic spatial cognition, for example, speakers' proficiency in dead-reckoning, that is, their ability to track their current location in terms of distance and direction from their home base (Levinson, 2003).
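The difference between relative and absolute spatial description can be simulated. In the Python sketch below (the coordinate conventions and function names are assumptions of the example, not drawn from the literature), the same object offset is described both in cardinal terms, which ignore the speaker's orientation, and in egocentric terms, which change as the speaker turns.

# A small sketch (assumptions mine) of relative vs. absolute spatial
# description. Given a speaker's facing direction, the same offset can
# be described egocentrically ('to my left') or in cardinal terms.

import math

def describe(dx: float, dy: float, facing_deg: float):
    """dx, dy: object offset in a north-up map frame (east = +x)."""
    absolute = "north" if dy > 0 else "south"
    if abs(dx) > abs(dy):
        absolute = "east" if dx > 0 else "west"
    # Rotate the offset into the speaker's body frame.
    rad = math.radians(facing_deg)            # facing, clockwise from north
    rel_x = dx * math.cos(rad) - dy * math.sin(rad)
    relative = "to my right" if rel_x > 0 else "to my left"
    return absolute, relative

# Object due west of a speaker who faces north:
print(describe(-1.0, 0.0, facing_deg=0))    # ('west', 'to my left')
# Same object, speaker now faces south: the absolute term is unchanged.
print(describe(-1.0, 0.0, facing_deg=180))  # ('west', 'to my right')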

Conclusion

Meaning is central to linguistic enquiry. Meaning, after all, is what language is all about. Yet meaning is a notoriously difficult topic to analyze. What is meaning, and how are we to study it? Some semanticists have studied meaning in terms of relations between language and situations in the world. Others have focused on relations within a


language, explicating meanings in terms of paradigmatic relations of contrast, synonymy, hyponymy, entailment, and so on, and syntagmatic relations of collocation and co-occurrence. Yet others have tried to reduce meaning to matters of observable linguistic behavior. Cognitive semanticists have grasped the nettle and taken seriously the notion that meanings are 'in the head,' and are to be equated with the conceptualizations entertained by language users. Cognitive semantics offers the researcher a theoretical framework and a set of analytical tools for exploring this difficult issue.

See also: Concepts; Connotation; Evolution of Semantics; Folk Etymology; Frame Semantics; Human Reasoning and Language Interpretation; Ideational Theories of Meaning; Idioms; Intention and Semantics; Lexical Conceptual Structure; Lexical Meaning, Cognitive Dependency of; Mentalese; Metaphor and Conceptual Blending; Metonymy; Natural Semantic Metalanguage; Onomasiology and Lexical Variation; Polysemy and Homonymy; Presupposition; Prototype Semantics; Psychology, Semantics in; Representation in Language and Mind; Semantic Maps; Spatial Expressions; Thought and Language; Virtual Objects.

Bibliography

Barlow M & Kemmer S (2000). Usage-based models of language. Stanford: CSLI Publications.
Croft W (1991). Syntactic categories and grammatical relations: the cognitive organization of information. Chicago: University of Chicago Press.
Croft W & Cruse D A (2004). Cognitive linguistics. Cambridge: Cambridge University Press.
Cuyckens H, Dirven R & Taylor J (eds.) (2003). Cognitive approaches to lexical semantics. Berlin: Mouton de Gruyter.
Givón T (1984). Syntax: a functional-typological approach 1. Amsterdam: John Benjamins.
Johnson M (1987). The body in the mind: the bodily basis of meaning, imagination, and reason. Chicago: University of Chicago Press.
Kay P (1997). Words and the grammar of context. Chicago: University of Chicago Press.

Keil F (1989). Concepts, kinds, and conceptual development. Cambridge, MA: MIT Press.
Lakoff G (1987). Women, fire, and dangerous things: what categories reveal about the mind. Chicago: University of Chicago Press.
Lakoff G & Johnson M (1980). Metaphors we live by. Chicago: University of Chicago Press.
Langacker R W (1987). Foundations of cognitive grammar 1: Theoretical prerequisites. Stanford: Stanford University Press.
Langacker R W (1990). Concept, image, and symbol: the cognitive basis of grammar. Berlin: Mouton de Gruyter.
Langacker R W (1991). Foundations of cognitive grammar 2: Descriptive application. Stanford: Stanford University Press.
Langacker R W (1999). Grammar and conceptualization. Berlin: Mouton de Gruyter.
Lee D (2001). Cognitive linguistics: an introduction. Oxford: Oxford University Press.
Levinson S (2003). Space in language and cognition: explorations in cognitive diversity. Cambridge: Cambridge University Press.
Murphy G & Medin D (1985). 'The role of theories in conceptual coherence.' Psychological Review 92, 289–316.
Searle J (1992). The rediscovery of the mind. Cambridge, MA: MIT Press.
Slobin D (1996). 'From "thought and language" to "thinking for speaking".' In Gumperz J & Levinson S (eds.) Rethinking linguistic relativity. Cambridge: Cambridge University Press. 70–96.
Sweetser E (1990). From etymology to pragmatics: metaphorical and cultural aspects of semantic structure. Cambridge: Cambridge University Press.
Talmy L (2000). Towards a cognitive semantics 1: Conceptual structuring systems. Cambridge, MA: MIT Press.
Talmy L (2003). Towards a cognitive semantics 2: Typology and process in concept structuring. Cambridge, MA: MIT Press.
Taylor J R (2002). Cognitive grammar. Oxford: Oxford University Press.
Taylor J R (2003). Linguistic categorization (3rd edn.). Oxford: Oxford University Press. First edition: 1989.
Ungerer F & Schmid H-J (1996). An introduction to cognitive linguistics. London: Longman.
Wittgenstein L (1953). Philosophical investigations. Anscombe G E M (trans.). Oxford: Blackwell.

Coherence: Psycholinguistic Approach

A Sanford, University of Glasgow, Glasgow, Scotland, UK

© 2006 Elsevier Ltd. All rights reserved.

Coherence in Text and in the Mind

A text is coherent to the extent that it is intelligible, that there are no aspects of the text that do not relate

to the message, and that there is no sense that things are missing from the text. We may judge a text as incoherent if these conditions are not met. There are two important sources of information that contribute to coherence: text cues and psychological constraints. Text cues are simply those cues that are in the text itself, while psychological constraints refer to processes of thought or inference that add


to what is given by the text. Of course, if as a result of the way it is written we have to make too many poorly-guided inferences to understand a message, then we may say that the text itself appears incoherent. From a psychological perspective, a coherent text may be thought of as one that brings to mind just the right things to facilitate easy understanding, while an incoherent text is one that fails to do that, leaving the reader or listener with no sense of understanding the message. Texts that present the reader or listener with a difficult task may be judged more or less coherent.

This raises an interesting question: how to define a text as distinct from a random concatenation of sentences. There has been a tradition in text linguistics that claims that coherence is an intrinsic defining property of a text. Pieces of writing that do not conform to the principles underlying coherence are taken either to be defective (suboptimal) or not texts at all. For instance, a text that is coherent must have clauses that are clearly connected to one another. Second, the clauses must logically relate to one another, and third, each sentence must somehow be relevant to the overall topic of the discourse. Some of these requirements can be met from what is actually written in the text itself. For instance, texts can contain explicit cohesion markers that provide links between the clauses of a text. But the other requirements, such as clauses logically relating to one another, and the clauses being relevant to the overall topic of the discourse, are plainly psychological; they require the reader/listener to perceive relevance. We shall amply illustrate this point in this article.

The psychological view is that coherence is something that depends on the mental activity of the reader or listener, on their capacity to understand the message that the producer of the text is trying to convey. The text can be thought of as providing clues as to what the message is, but the reader has to use these cues. So, from a psychological perspective, we may ask what mental processes lead to the development of a coherent mental representation of the text (knowledge of the message), and what clues in texts help to guide these processes appropriately (see Gernsbacher and Givón, 1995, for a broad perspective).

Cohesion Markers

One thing that can be seen in texts is a so-called cohesion marker (see Halliday and Hasan, 1976). This marker may be a connective, like and, but, when, while, because, so, therefore, hence, and so on. Another form of connection is anaphora – using a term that relates a concept back to one that was previously introduced, as in (1):

(1) John came home because he was missing his mother.

Here, because is a connective that links the two clauses, and he is an anaphoric pronoun that refers back to John. Both of these devices provide some glue to connect the two clauses, and help bind them into a coherent whole. The devices are visible in the text itself. There are many other cues that signal relationships between the parts of text, expressions like first (which cues that there will be a successor), next, later, finally, after that (signaling temporal progressions), similarly, and in the same way (signaling various ways in which clauses or phrases may be related to one another). Such cues are only cues, of course, and they are neither sufficient nor necessary for a text to appear coherent. So, a text with ample coherence markers may be quite incoherent:

One thing that can be seen in texts is a so-called cohesion marker (see Halliday and Hasan, 1976). This marker may be a connective, like and, but, when, while, because, so, therefore, hence, and so on. Another form of connection is anaphora – using a term that relates a concept back to one that was previously introduced, as in (1): (1) John came home because he was missing his mother.

(2) John ate a banana. The banana that was on the plate was brown, and brown is a good color for hair. The hair of the dog is a drink to counteract a hangover.

Such texts are not truly coherent, in the sense that they do not produce an obvious message. So the presence of cohesion markers is not enough to guarantee coherence. The clauses in a text need to be sensibly related and to form a sensible whole. Of course, what is sensible depends upon comparing what is being said in the text with what the reader knows about the world. It is clearly a matter of psychology, not just of what is in the text. Cohesion markers are not necessary for finding a text to be coherent, either. For instance, consider the following: (3) Mr. Smith was killed the other night. The steering on the car was faulty.

Although there is no stated connection between the two sentences, readers infer that the second sentence provides the reason for the state of affairs depicted in the first sentence, and that makes the text coherent. There is no cue to this in the text itself. In (4) there is such a cue: (4) Mr. Smith was killed the other night, because the steering on the car was faulty.


connective is used than when it is not. The mental representations of sentences that have clauses linked by causal connectives seem to be more stable as well, since they are better remembered than those that are not directly linked. So although it may be possible to infer the link between two clauses, an explicit cue does help, and of course, may sometimes be necessary.

The Psychological Concept of a Connected, Coherent, Discourse Representation

An almost universal view, within psychology, of how text is processed is that the text expresses ideas that become connected to form a coherent whole. A parallel idea in text linguistics is that each part of a text is linked to at least one other part by some sort of relation (called rhetorical predicates). The idea is similar: that coherence results from being able to appropriately relate each bit of the discourse to some other, so that a connected whole results; however, the psychological approach is concerned with studying how the connections are established, and what are the mental entities that come to be related to each other. The end point is the mental representation of the discourse and is part of memory. Because memory is fallible, the final representation will be incomplete too.

As discussed above, connectives (explicit or inferred) are partly responsible for the local coherence of pieces of text (adjacent pairs of sentences, say). But a text will give rise to a coherent mental representation at several levels. It is possible to illustrate some aspects of connectivity with the following example:

(5) (a) Harry was trying to clean up the house.
(b) He asked Mary to put the book in the box on the table.
(c) She said she was too busy to do that.
(d) She had to write out a check for her friend Jill because she had no money left.
(e) Harry nearly exploded.
(f) He thought that they spent too much money on charities as it was.
(g) Mary suddenly burst into tears.

Connecting Individuals: Anaphoric Reference

The same individuals appear over and again in this simple story. Harry in (a) is He in (b), Harry in (e), and He in (f). Identifying Harry in this way is important, since that way we can connect the actions and reactions of that individual. Sometimes Harry is used in preference to He, as in (e). Using a name like this occurs especially when the individual concerned has not been at the center of the unfolding text (in focus) for a while; psycholinguistic work has shown that

the use of a name is especially useful when a character is being 'reintroduced,' as Harry is in (e). Use of He would still be intelligible, but slower processing of the sentence would result, showing a difficulty with using a pronoun. Processing difficulties are minimized with the character Harry because there is only one male character in the story. But the situation with Mary is different because there are two female characters, with Jill introduced in (d). In fact, in sentence (d), the reader has to work out who she is from general knowledge, not from the text. In this case, because it would be a person with money who could give money to someone without money, the second she is treated as coreferential with Jill, not Mary. Psychological work has shown that such inferential processes, although they seem automatic, are time-consuming.

A further anaphoric connector in the passage is worthy of note. First, in (c), there is that. Here the expression refers to the event Mary puts the book in the box on the table. Terms like this and that can refer to huge conglomerations of events, as in a very complex story leading to the statement This was too much for Harry. So, to summarize, anaphoric devices are vital to producing a coherent mental representation of who did what. A major review of psychological work on anaphora is Garnham (2000).
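Some of the factors just discussed – gender agreement, recency, and focus – can be combined in a toy resolver. The Python sketch below is illustrative only (the discourse record and the ranking scheme are invented), and it ignores the world knowledge that, as noted above, real resolution often requires.

# A toy pronoun resolver (my own illustration of the factors the
# article mentions): match gender, then prefer focused and recent
# candidates. Real resolution also draws on world knowledge.

discourse = [  # entities in order of mention; in_focus marks centers
    {"name": "Harry", "gender": "m", "in_focus": True},
    {"name": "Mary",  "gender": "f", "in_focus": True},
    {"name": "Jill",  "gender": "f", "in_focus": False},
]

def resolve(pronoun_gender: str):
    candidates = [e for e in discourse if e["gender"] == pronoun_gender]
    # Prefer focused entities; break ties by recency (later mention wins).
    candidates.sort(key=lambda e: (e["in_focus"], discourse.index(e)))
    return candidates[-1]["name"] if candidates else None

print(resolve("m"))  # 'Harry' -- the only male candidate, as in the story
print(resolve("f"))  # 'Mary' -- focused, even though Jill is more recent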

Causal Connectivity

With narrative texts especially, the reader has to establish causal links between the various parts of the text, and the whole structure gives rise to global coherence. So, in (a), Harry is given a goal. In (b), a further action is introduced. How is this interpreted? Most people interpret the action as being part of realizing this goal. However, there is nothing in the text to indicate this. In our example, there is hardly anything to tell the reader what the causal structure is. In the passage below, we have included some connectives that fill out the causal structure:

(6) Harry was trying to clear up the house.
TO HELP WITH HIS GOAL He asked Mary to put the book in the box on the bookshelf.
HOWEVER (BLOCKING HIS GOAL) She said that she was too busy to do that.
THE REASON WAS She had to write out a check for her friend Jill because she had no money left.
AS A RESULT OF THIS REASON Harry nearly exploded.
THE REASON WAS He thought they spent too much on charities as it was.
AS A RESULT Mary suddenly burst into tears.

In order to achieve a very minimal understanding of this text, the information provided in capitals, or


something like it, must be inferred by the reader and incorporated into their mental representation of the discourse. A number of studies have shown that judgments of how clearly the clauses of texts are causally related predict a number of performance measures during reading, including the time taken to read sentences, judgments of how well sentences fit into texts, the judged coherence of texts as a whole, and the ease of recalling texts (see Langston and Trabasso, 1999). When people understand texts, they appear to do so by forming mental representations consisting of causal chains, and the robustness of the causal chains reflects coherence.
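A causal-chain representation of story (5)/(6) can be given a minimal data structure. The sketch below is an invented illustration, not a psychological model: events are nodes, inferred causal links are edges, and the text counts as 'coherent' in this toy sense when the events fall into a single connected cluster.

# A minimal sketch (structure invented) of a causal-chain discourse
# representation: events as nodes, inferred causal links as edges.
# A single connected cluster corresponds to a coherent chain.

causal_links = [
    ("Harry wants the house clean", "Harry asks Mary to move the book"),
    ("Harry asks Mary to move the book", "Mary refuses"),
    ("Mary is too busy", "Mary refuses"),
    ("Mary writes Jill a check", "Mary is too busy"),
    ("Mary refuses", "Harry nearly explodes"),
    ("Harry nearly explodes", "Mary bursts into tears"),
]

def components(links):
    """Merge events into clusters connected by causal links."""
    clusters = []
    for a, b in links:
        hits = [c for c in clusters if a in c or b in c]
        merged = {a, b}.union(*hits)
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters

print(len(components(causal_links)))  # 1: the events form one coherent chain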

Studies of Inferential Activity

Necessity and Elaboration

Everywhere in discourse, readers are called upon to make inferences; without them, there would be no coherence. A key distinction is made between inferences that are necessary for coherence and inferences that are not necessary, but rather just fill out the picture being portrayed by the writer. Inferences about causal relations and anaphoric relations are generally considered necessary inferences. In general, when a necessary inference is made, it can be shown to take time. One classic case (Haviland and Clark, 1974) is: (7) Harry took the picnic things from the trunk. (8) The beer was warm.

On reading (8), to understand how The beer fits into things, the reader must infer that beer was part of the picnic supplies. Sentence (8) took longer to read after (7) than it did after (9): (9) Harry took the beer from the trunk.

Thus measurable time is needed to make the necessary bridging inference. Bridging inferences are assumed to be made only when necessary, when a gap in the connectivity of clauses is detected. There is no inference that beer might be part of the picnic things when just sentence (7) is read; rather, the inference is triggered when (8) is encountered. So such inferences are also called backwards inferences. Necessary backwards-bridging inferences are contrasted with forward elaborative inferences. For instance, given (10), what might one infer? (10) Unable to control his rage, the angry husband threw the valuable antique vase at the wall.

There are many possibilities, but a highly plausible one is that the vase broke. If on reading (10) this inference were made, then it would be an elaboration over what was said, and as it is not made because it is needed, it is called a forward inference. If this were followed by (11), the forward inference would facilitate comprehension: (11) It cost well over $100 to replace.

But such an inference would not help us understand (12): (12) He had been keeping his emotions bottled for weeks.

There has been much debate over whether such elaborative inferences are typical, and if and when they are made. Clearly there are many such inferences that might be made. For instance, given (10), one might infer that the wife of the angry husband might be in some danger, that the husband might become more violent, that he felt ashamed afterwards, etc. Do we make all, or any, of such plausible inferences? Because such inferences are indeed so plausible, it might be supposed that they are routinely made. In order to test whether an inference has been made, a variety of priming tasks have been used. With these, a test word is presented after a short passage. For instance, after (10), the test word BROKE might be presented. This word would also be presented after a sentence in which information pertinent to breaking is absent. Subjects are asked to read out loud the test word when it appears. The critical question is whether the word is read more rapidly after a priming sentence such as (10), when compared with a nonpriming sentence. If the word is read more rapidly, then it has been primed, and this suggests that the inference that the vase had broken had been made. Several different tests have been devised based on this idea. Under some circumstances, there has been weak evidence for such forward inferences happening immediately, though the general view is that they are made only under very constrained conditions and are not typical. The paucity of evidence for elaborative inferences was summed up by McKoon and Ratcliff (1992), who put forward the idea that during reading, immediate forward elaborative inferences are typically minimal, and inferences are largely restricted to the necessary, backward, variety. However, in the long term, elaborative inferences have to be made, since comprehension is only possible when a mental model of what the text is about is constructed. We shall go on to look at some aspects of such a model.

Situation-Specific Information: Scenario-Theory

For a discourse to be understood, it has to be situated with respect to background knowledge. For instance: (13) John drove to London yesterday. (14) The car broke down halfway.


Superficially, this is similar to examples (7) and (8), in that a backwards-bridging inference could be made to link The car in (14) to drove in (13). Such a backwards inference would make (14) time-consuming to read. However, several studies have shown that the time to read (14) is no greater after (13) than it is after (15), where the car is mentioned explicitly: (15) John took his car to London yesterday.

The key difference between The car in (14) and The beer in (8) is that a car is typically definitional of driving, whereas beer is just a plausible option for picnic things. So, for entities that are part of the meaning of actions, those entities appear to be included in the representation of the sentence depicting the action. The concept car is part of the representation of the action drove to a place. Sanford and Garrod (1981, 1998) put forward the idea that when we read a text, we identify as quickly as possible the background situation in which the propositions in the text are grounded; they further assumed that much of what we know is represented in a situation-specific way, in structures that they termed scenarios. Driving is one example, where the concept car, and expected actions, are represented. Another well-known illustration is having a meal in a restaurant, where the events, the order of events (find table, get waiter, order meal, eat courses in expected order, get bill, pay, leave, etc.), and the principal actors (customer, waiter, wine-waiter) are represented. In general, if a new entity is introduced into a text, either it will already be part of the prior representation (scenario), or a backwards inference will have to be made. Using situation-based knowledge is essential for developing a coherent representation, and a simple example is: (16) Fred put the wallpaper on the table. Then he rested his mug of coffee on the paper.

This pair of sentences is immediately coherent; nothing seems out of place. However, (17) depicts an unrealistic state of affairs, and this is immediately recognized: (17) Fred put the wallpaper on the wall. Then he rested his mug of coffee on the paper.

Sentences (16) and (17) depict different situations: putting wallpaper on a table leaves the paper as a horizontal surface, while putting it on the wall leaves it in a vertical plane, so that the cup would just fall down. The implication is that people try to build a representation of what is being depicted with each pair of sentences, and in order to do that, they have to use situation-specific knowledge.

Keeping Track of Things: Situation Models

The kind of situation-specific knowledge discussed above is stereotyped, and connecting language input to representations of specific situations is essential for adequate understanding, and hence coherence. But this is plainly not enough, in that texts do not simply refer to static situations; rather, as they unfold they depict a dynamic passage of events. Even the simple example (6) serves to illustrate that, which is why the development of a causal chain is so important for a coherent representation. A bold attempt to grasp the nettle of this more dynamic aspect of comprehension is found in the concept of the situation model (see Zwaan and Radvansky, 1998 for a detailed overview). Zwaan and Radvansky propose that as texts unfold, readers may keep track of a number of things. Consider first space. In some stories, people move around in space, and there is evidence that readers keep track of these movements. So, it turns out that readers have faster access to mental representations of rooms where protagonists are, or toward which protagonists are heading, than to representations of other rooms. This suggests two things: readers take the perspective of protagonists (see Duchan et al., 1995, for an introduction to this issue), and they update the focus of interest to where the protagonist is, or is said to be heading. Of course, not all stories are about events in space, but when they are, readers generally update where the protagonist is in their mental representation of the text. Several researchers have suggested that there are at least five dimensions to situations that could be encoded by the reader: space (as above), time, causation (briefly discussed above), intentionality, and protagonist. It has been plausibly argued that the comprehension of narratives revolves around keeping track of the goals and plans of protagonists (intentionality). It has been shown by probing readers that they appear to make inferences based on what motivates the actions of protagonists, even when this information is not directly available. The protagonist category refers to the people and things in the mental representation. They are the entities that are being updated with respect to place, time, and intentionality. This category leads to a further aspect that has to be understood about characters if coherence is to be achieved: the emotions of the protagonists. Experimental evidence suggests that the emotional states of protagonists might be inferred as forward inferences (see the classic work of Gernsbacher et al., 1992).
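A minimal way to picture such a representation is as a record, updated dimension by dimension as each clause comes in. The sketch below is an illustration of the idea only: the field names follow the dimensions listed above, but the data types, the update method, and the extra emotion field are our own assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch (assumed representation): a situation model tracking a
# protagonist along the dimensions discussed above, plus inferred emotion.

@dataclass
class SituationModel:
    protagonist: str
    location: str | None = None                                   # space
    time: str | None = None                                       # time
    goals: list[str] = field(default_factory=list)                # intentionality
    causes: list[tuple[str, str]] = field(default_factory=list)   # causation
    emotion: str | None = None                                    # affective state

    def update(self, **changes):
        """Fold a new clause's contribution into the model."""
        for dim, value in changes.items():
            current = getattr(self, dim)
            if isinstance(current, list):
                current.append(value)
            else:
                setattr(self, dim, value)

model = SituationModel("John")
model.update(location="kitchen", time="yesterday", goals="fetch wallet")
model.update(location="bedroom")   # the reader shifts focus with the protagonist
print(model.location)              # 'bedroom': the currently foregrounded room
```

The point of the sketch is simply that each dimension is overwritten or extended independently, which matches the finding that readers have fastest access to whatever location or goal is currently foregrounded.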


Multiple Viewpoints

A further aspect of coherence is that with many texts, alternative versions of reality may have to be entertained if the text is to be understood. Consider (18): (18) John put his wallet on the dresser. Unbeknownst to John, Mary later moved it to his bedside table. When he came in from gardening, John went to get his wallet. First he tried his bedside table.

John’s action doesn’t make sense in our situation model, because we represent what John believes is the location of his wallet, and we also represent where the wallet actually is. The capacity to capture the beliefs of others and how these beliefs relate to what we know to be reality is called having a ‘Theory of Mind.’ Without the capacity to make these representations, texts like (18) could not be understood as anomalous, and would display apparent coherence that was unwarranted. Dealing with multiple viewpoints like this has received some attention, and given its prevalence in narratives and real-life situations, deserves more. However, one major discovery is that some people do not have the capacity to handle these multiple perspectives (it is particularly a feature of autism; see Baron-Cohen, 1995; for a very detailed analysis of multiple viewpoints and coherence, see Fauconnier, 1994).
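The anomaly in (18) only becomes visible if reality and John's belief are stored separately. A toy rendering of that bookkeeping follows; the two-dictionary representation is purely our own illustrative assumption.

```python
# Illustrative sketch: keeping reality and a character's beliefs apart,
# as a 'Theory of Mind' requires. Representation assumed, not from the text.

reality      = {"wallet": "bedside table"}   # Mary moved it
johns_belief = {"wallet": "dresser"}         # John never saw the move

def expected_first_search(belief, item):
    # A reader modeling John predicts he searches where *he* thinks it is.
    return belief[item]

predicted = expected_first_search(johns_belief, "wallet")
actual_first_try = "bedside table"           # what the text says John did

# The mismatch between prediction and text is what makes (18) anomalous:
print(predicted != actual_first_try)         # True
```

A comprehender who collapsed the two dictionaries into one would find nothing odd about (18), which is the unwarranted "apparent coherence" described above.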

Coherence and Selective Processing

We have portrayed coherence as the establishment of connections between things mentioned in a text, actions mentioned in a text, and world knowledge supporting interpretation. There is an extra ingredient that has to be included, and that is selectivity in processing. Not all parts of texts are equally important, and a key aspect of coherence is being able to sort out what is important from what is mere detail (or even irrelevant). Coherence derives not merely from being able to join up what is in a text and relate it to knowledge, but from being able to attend selectively to what is salient – the gist of a message.

Selective Processing

Ideas as to what constitutes the gist of a text have been around since the seminal work of Walter Kintsch and his colleagues (see Kintsch, 1977, for a sample of this important work). Essentially, the idea was that what is in texts can be expressed as propositions (or idea units). Furthermore, some propositions are dependents of others, and the less dependent a proposition is, the more closely it corresponds to gist. Consider the following: (19) Harry was tired. He planned to go on a package holiday to Greece.

In the second sentence, we learn that Harry planned to go on holiday (the key proposition), that the type of holiday was a package holiday (a dependent proposition: there can be no package holiday without there being a holiday), and that the holiday was in Greece (another dependent proposition). Many experiments on summarizing texts, and on remembering texts, showed that summaries and memories tended to lose dependent propositions. Thus a ‘gist’ representation of (19) might include Harry planned a holiday, but might exclude the package-deal component, or the fact that he planned to take it in Greece (unless these details become more salient later). There are cues in the structure of texts themselves as to what might be the most important information. For instance, discourse focus is a device for indicating where salient information lies. In (20), the salient information is with the hat, because that specifies which man was arrested. In (21), however, with the hat is merely an incidental attribute of the man. (20) Q: Which man was it who was in trouble? A: The man with the hat was arrested. (21) Q: What was it that happened last night? A: The man with the hat was arrested.

Recent experiments have shown that if subjects read a text similar to (20) once, and then immediately afterwards read the same thing again, but with hat changed to cap, they tend to notice the change. But if they read (21), they tend not to notice the change. Focus influences what is important, and how deeply the text is processed, showing how it controls patterns of understanding (see Sanford and Sturt, 2002, for a review of shallow processing). Other aspects of text structure influence what is taken into account in producing a coherent representation. Emphasis may be put on quite different aspects of a state of affairs by simple devices in language, such as negation and quantification. For instance, here are two ways of depicting the fat content of a product: (22) This product is 90% fat free. (23) This product contains 10% fat.

Experiments have shown that people judge products to be less greasy and more healthy if the first of these descriptions is used (even if they taste the product). This is because the first description focuses on the amount of nonfat, whereas the second focuses on the amount of fat. Such focusing can happen implicitly, if terms are negative. Experimental work has shown that both of these statements are coherent, but in the fat-free case, people do not interpret the amount of fat-freeness against how much fat would be good or bad in a product. The fat-free formulation inhibits the use of world knowledge, while the %-fat formulation does not inhibit it. Thus 75% fat free and 95% fat free are both considered to be more or less equally good, while 25% fat is considered to be much less healthy than 5% fat (see Sanford and Moxey, 2003, for a review of these arguments for a variety of situations). Linking together different elements in a text and linking text to knowledge are important for a coherent interpretation of a text; so too is being selective: choosing perspectives to build coherent structures around, and avoiding elaborative inferences that are not relevant. To achieve these goals, the writer or speaker has to choose the right linguistic devices and forms of words to guide the reader/listener into making the right inferences, using the sort of knowledge the producer intended. The capacity of a producer to do this effectively is what makes the discourse they are producing appear coherent.

See also: Anaphora Resolution: Centering Theory; Cohesion and Coherence; Context; Context and Common Ground; Context Principle; Discourse Anaphora; Discourse Domain; Discourse Parsing, Automatic; Discourse Representation Theory; Discourse Semantics; Human Reasoning and Language Interpretation; Irony; Natural Language Understanding, Automatic; Nonstandard Language Use; Psychology, Semantics in.

Bibliography

Baron-Cohen S (1995). Mindblindness: an essay on autism and Theory of Mind. Cambridge, MA: MIT Press. Duchan J F, Bruder G A & Hewitt L E (1995). Deixis in narrative: a cognitive science perspective. Hillsdale, NJ: Lawrence Erlbaum Associates.

Fauconnier G (1994). Mental spaces. New York: Cambridge University Press. Garnham A (2000). Mental models and the interpretation of anaphora. Hove: Psychology Press. Gernsbacher M A & Givón T (eds.) (1995). Coherence in spontaneous text. Philadelphia: John Benjamins. Gernsbacher M A, Goldsmith H H & Robertson R R W (1992). 'Do readers mentally represent fictional characters' emotional states?' Cognition and Emotion 6, 89–111. Halliday M A K & Hasan R (1976). Cohesion in English. London: Longman. Haviland S E & Clark H H (1974). 'What's new? Acquiring new information as a process in comprehension.' Journal of Verbal Learning and Verbal Behavior 13, 512–521. Kintsch W (1977). Memory and cognition. New York: Wiley. Langston M & Trabasso T (1999). 'Modelling causal integration and availability of information during comprehension of narrative texts.' In van Oostendorp H & Goldman S R (eds.) The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum Associates. McKoon G & Ratcliff R (1992). 'Inferences during reading.' Psychological Review 99, 440–466. Sanford A J & Garrod S C (1981). Understanding written language: explorations beyond the sentence. Chichester: John Wiley and Sons. Sanford A J & Garrod S C (1998). 'The role of scenario mapping in text comprehension.' Discourse Processes 26, 159–190. Sanford A J & Moxey L M (2003). 'New perspectives on the expression of quantity.' Current Directions in Psychological Science 12, 240–242. Sanford A J & Sturt P (2002). 'Depth of processing in language comprehension: not noticing the evidence.' Trends in Cognitive Sciences 6, 382–386. Zwaan R A & Radvansky G A (1998). 'Situation models in language comprehension and memory.' Psychological Bulletin 123, 162–185.

Cohesion and Coherence

T Sanders and H Pander Maat, Utrecht Institute of Linguistics OTS, Utrecht University, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Discourse is more than a random set of utterances: it shows connectedness. A central objective of linguists working on the discourse level is to characterize this connectedness. Linguists have traditionally approached this problem by looking at overt linguistic elements and structures. In their famous Cohesion in English, Halliday and Hasan (1976) describe text connectedness in terms of reference, substitution, ellipsis, conjunction, and lexical cohesion. According to Halliday and Hasan (1976: 13), these explicit clues make a text a text. Cohesion occurs "when the interpretation of some element in the discourse is dependent on that of another" (Halliday and Hasan, 1976: 4). The following types of cohesion are distinguished.

• Reference: two linguistic elements are related in what they refer to. Jan lives near the park. He often goes there.

• Substitution: a linguistic element is not repeated but is replaced by a substitution item. Daan loves strawberry ice-creams. He has one every day.


• Ellipsis: one of the identical linguistic elements is omitted. All the children had an ice-cream today. Eva chose strawberry. Arthur had orange and Willem too.

• Conjunction: a semantic relation is explicitly marked. Eva walked into town, because she wanted an ice-cream.

• Lexical cohesion: two elements share a lexical field (collocation). Why does this little boy wriggle all the time? Girls don't wriggle (Halliday and Hasan, 1976: 285). It was hot. Daan was lining up for an ice-cream.
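For computationally minded readers, this five-way classification can be pictured as a typed link between a dependent element and the element it is interpreted by. The sketch below merely restates Halliday and Hasan's categories as a data structure, using the examples above; it is an illustration, not an implemented cohesion analyzer.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch: Halliday and Hasan's (1976) cohesion types as typed
# ties between a dependent element and its interpretive antecedent.

class CohesionType(Enum):
    REFERENCE = "reference"
    SUBSTITUTION = "substitution"
    ELLIPSIS = "ellipsis"
    CONJUNCTION = "conjunction"
    LEXICAL = "lexical cohesion"

@dataclass
class Tie:
    kind: CohesionType
    dependent: str      # the element needing interpretation
    antecedent: str     # what it is interpreted against

ties = [
    Tie(CohesionType.REFERENCE, "He ... there", "Jan ... the park"),
    Tie(CohesionType.SUBSTITUTION, "one", "strawberry ice-cream"),
    Tie(CohesionType.ELLIPSIS, "Willem too", "had an ice-cream"),
    Tie(CohesionType.CONJUNCTION, "because", "Eva walked into town"),
    Tie(CohesionType.LEXICAL, "wriggle ... wriggle", "boy / girls"),
]
for tie in ties:
    print(f"{tie.kind.value}: {tie.dependent!r} -> {tie.antecedent!r}")
```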

While lexical cohesion is obviously achieved by the selection of vocabulary, the other types of cohesion are considered grammatical cohesion. The notion of lexical cohesion might need some further explanation. Collocation is the most problematic part of lexical cohesion (Halliday and Hasan, 1976: 284). The analysis of the first example of lexical cohesion above would be that girls and boys have a relationship of complementarity and are therefore related by lexical cohesion. The basis of lexical cohesion is in fact extended to any pair of lexical items that stand next to each other in some recognizable lexicosemantic relation. Let us now consider the second example of lexical cohesion mentioned above. Do hot weather and ice-cream belong to the same lexical field? Do they share a lexicosemantic relationship? If we want to account for the connectedness in this example, we would have to assume that such a shared lexicosemantic relationship holds, since the other forms of cohesion do not hold. The clearest cases of lexical cohesion are those in which a lexical item is replaced by another item that is systematically related to the first one. The class of general noun, for instance, is a small set of nouns having generalized reference within the major noun classes, such as 'human noun': people, person, man, woman, child, boy, girl. Cohesion achieved by anaphoric reference items like the man or the girl is very similar to cohesion achieved by reference with pronouns like he or she, although Halliday and Hasan (1976: 276) state explicitly what the difference is: "the form with general noun, the man, opens up another possibility, that of introducing an interpersonal element into the meaning, which is absent in the case of the personal pronoun." This interesting observation points forward to similar observations formulated in theories developed much later, as in Accessibility Theory (Ariel, 1990) and Mental Space Theory (Fauconnier, 1994; Fauconnier and Sweetser, 1996; Sanders and Redeker, 1996). This is only one example in which Cohesion in English shows itself to be a seminal work, in some respects ahead of its time. After the publication of Cohesion in English, the notion of cohesion was widely accepted as a tool for the analysis of text beyond the sentence level. It was used to characterize text structure, but also to study language development and written composition (Lintermann-Rygh, 1985). Martin's English text (1992) is a more recent elaboration of the cohesion work. It also starts from a systemic functional approach to language and claims to provide a comprehensive set of discourse analyses for any English text. Useful and seminal as the cohesion approach may be, there seem to be some principled problems with it. For instance, the notion of lexical cohesion is hard to define. The intuition that 'hot weather' and 'ice-cream' belong to the same lexical field may be shared by many people in modern Western culture, but now consider example (1). (1) The winter of 1963 was very cold. Many barn owls died.

Here it is much harder to imagine that 'cold winters' and 'barn owls,' or even 'dying barn owls,' should be related by a lexical field. Still, relating these items is necessary to account for the connectedness in (1). This problem is hardly solved by Halliday and Hasan's (1976: 290) advice "to use common sense, combined with the knowledge that we have, as speakers of a language, of the nature and structure of its vocabulary." Examples like (1) constitute a major problem for a cohesion approach: this short text presents no interpretation difficulties whatsoever, but there is no overt linguistic signal either. This suggests that cohesion is not a necessary condition for connectedness. Such a conclusion is corroborated by cases like (2), from a Dutch electronic newspaper (Sanders and Spooren, 2007), to which we added the segment-indices (a) and (b). (2a) Greenpeace heeft in het Zuid-Duitse Beieren een nucleair transport verstoord. (2b) Demonstranten ketenden zich vast aan de rails. (Telegraaf-i, April 10, 2001) (2a) 'Greenpeace has impeded a nuclear transportation in the Southern German state Bayern.' (2b) 'Demonstrators chained themselves to the rails.' This short electronic news item does not create any interpretative difficulties. However, in order to understand the fragment correctly, a massive amount of inferencing has to take place. For instance, we need to infer that the nuclear transportation was not disturbed by the organization Greenpeace, but by members of that organization; that the protesters are members of the organization; that the nuclear transportation took place by train, etc. Some of these inferences are based on world knowledge, for instance that organizations consist of people and that people, but not organizations, can carry out actions like the one described here. Others are based on discourse structural characteristics. One example is the phrase the rails. This definite noun phrase suggests that its referent is given in some way. But because there is no explicit candidate antecedent, the reader is invited to link it up with transportation, the most plausible interpretation being that the transportation takes place by a vehicle on rails, i.e., a train. It is clear by now that the cohesion approach to connectedness is inadequate. Instead, the dominant view has come to be that the connectedness of discourse is a characteristic of the mental representation of the text rather than of the text itself. The connectedness thus conceived is often called coherence (see Coherence: Psycholinguistic Approach). Language users establish coherence by actively relating the different information units in the text. Generally speaking, there are two respects in which texts can cohere:

1. Referential coherence: smaller linguistic units (often nominal groups) may relate to the same mental referent (see Discourse Anaphora);
2. Relational coherence: text segments (most often conceived of as clauses) are connected by coherence relations like Cause–Consequence between them.

Although there is a principled difference between the cohesion and the coherence approaches to discourse, the two are more related than one might think. We need to realize that coherence phenomena may be of a cognitive nature, but that their reconstruction is often based on linguistic signals in the text itself. Both coherence phenomena under consideration – referential and relational coherence – have clear linguistic indicators that can be taken as processing instructions. For referential coherence these are devices such as pronouns and demonstratives, and for relational coherence these are connectives and (other) lexical markers of relations, such as cue phrases and signaling phrases. A major research issue is the relation between the linguistic surface code (what Givón, 1995, calls 'grammar as a processing instructor') and aspects of the discourse representation. In the domain of referential coherence, this relation can be illustrated by the finding that different referential devices correspond to different degrees of activation for the referent in question.

disturbed by the organization Greenpeace, but by members of that organization; that the protesters are members of the organization; that the nuclear transportation took place by train, etc. Some of these inferences are based on world knowledge, for instance that organizations consist of people and that people, but not organizations, can carry out actions like the one described here. Others are based on discourse structural characteristics. One example is the phrase the rails. This definite noun phrase suggests that its referent is given in some way. But because there is no explicit candidate antecedent, the reader is invited to link it up with transportation, the most plausible interpretation being that the transportation takes place by a vehicle on rails, i.e., a train. It is clear by now that the cohesion approach to connectedness is inadequate. Instead, the dominant view has come to be that the connectedness of discourse is a characteristic of the mental representation of the text rather than of the text itself. The connectedness thus conceived is often called coherence (see Coherence: Psycholinguistic Approach). Language users establish coherence by actively relating the different information units in the text. Generally speaking, there are two respects in which texts can cohere: 1. Referential coherence: smaller linguistic units (often nominal groups) may relate to the same mental referent (see Discourse Anaphora); 2. Relational coherence: text segments (most often conceived of as clauses) are connected by coherence relations like Cause–Consequence between them. Although there is a principled difference between the cohesion and the coherence approaches to discourse, the two are more related than one might think. We need to realize that coherence phenomena may be of a cognitive nature, but that their reconstruction is often based on linguistic signals in the text itself. Both coherence phenomena under consideration – referential and relational coherence – have clear linguistic indicators that can be taken as processing instructions. For referential coherence these are devices such as pronouns and demonstratives, and for relational coherence these are connectives and (other) lexical markers of relations, such as cue phrases and signaling phrases. A major research issue is the relation between the linguistic surface code (what Givo´n, 1995, calls ‘grammar as a processing instructor’) and aspects of the discourse representation. In the domain of referential coherence, this relation can be illustrated by the finding that different referential devices correspond to different degrees of activation for the referent in question. For instance, a

discourse topic may be referred to quite elaborately in the first sentence but once the referent has been identified, pronominal forms suffice. This is not a coincidence. Many linguists have noted this regularity (e.g., Ariel, 1990; Givo´n, 1992; Chafe, 1994). Ariel (1990, 2001), for instance, has argued that this type of pattern in grammatical coding should be understood to guide processing. In her accessibility theory, ‘high accessibility markers’ use little linguistic material and signal the default choice of continued activation. By contrast, ‘low accessibility markers’ contain more linguistic material and signal the introduction of a new referent (see Accessibility Theory). We now turn to (signals of) relational coherence. Coherence relations taken into account for the connectedness in readers’ cognitive text representation (cf. Hobbs, 1979; Sanders et al., 1992). They arealso termed rhetorical relations (Mann and Thompson, 1986, 1988, 1992) or clause relations, which constitute discourse patterns at a higher text level (Hoey, 1983). Coherence relations are meaning relations connecting two text segments. A defining characteristic for these relations is that the interpretation of the related segments needs to provide more information than is provided by the sum of the segments taken in isolation. Examples are relations like Cause-Consequence, List, and ProblemSolution. These relations are conceptual and they can, but need not, be made explicit by linguistic markers, so-called connectives (because, so, however, although) and lexical cue phrases (for that reason, as a result, on the other hand) (see Connectives in Text). In the last decade, a significant part of research on coherence relations has focused on the question of how the many different sets of relations should be organized (Hovy, 1990; Knott and Dale, 1994). Sanders et al. (1992) have started to define the ‘relations among the relations,’ relying on the intuition that some coherence relations are more alike than others. For instance, the relations in (3), (4), and (5) all express (a certain type of) causality; they express relations of Cause–Consequence/Volitional result (3), Argument–Claim/Conclusion (4) and Speech Act Causality (5): ‘This is boring watching this stupid bird all the time. I propose we go home now!’ The relations expressed in (6) and (7), however, do not express causal, but rather additive relations. Furthermore, a negative relation is expressed in (6). All other examples express positive relations, and (7) expresses an enumeration relation. (3) The buzzard was looking for prey. The bird was soaring in the air for hours. (4) The bird has been soaring in the air for hours now. It must be a buzzard.

Cohesion and Coherence 95 (5) The buzzard has been soaring in the air for hours now. Let’s finally go home! (6) The buzzard was soaring in the air for hours. Yesterday we did not see it all day. (7) The buzzard was soaring in the air for hours. There was a peregrine falcon in the area, too.

Sweetser (1990) introduced a distinction dominant in many existing classification proposals, namely that between content relations (also sometimes called ideational, external, or semantic relations), epistemic relations, and speech act relations. In the first type of relation, segments are related because of their propositional content, i.e., the locutionary meaning of the segments. They describe events that cohere in the world. If this distinction is applied to the set of examples above, the causal relation (3) is a content relation, whereas (4) is an epistemic relation, and (5) a speech act relation. This systematic difference between types of relation has been noted by many students of discourse coherence (see Connectives in Text). Still, there is a lively debate about whether this distinction should be conceived of in terms of domains, or rather in terms of subjectivity; often, semantic differences between connectives are used as linguistic evidence for proposals [see contributions to special issues and edited volumes like Spooren and Risselada (1997); Risselada and Spooren (1998); Sanders, Schilperoord and Spooren (2001); and Knott, Sanders and Oberlander (2001); further see Pander Maat (1999)]. Others have argued that coherence is a multilevel phenomenon, so that two segments may be simultaneously related on different levels (Moore and Pollack, 1992; Bateman and Rondhuis, 1997); see Sanders and Spooren (1999) for discussion. So far, we have discussed connectedness as it occurs in both spoken/dialogical discourse and written/ monological text. However, the connectedness of spoken discourse is established by many other means than the ones discussed so far. Aspects of discourse structure that are specific to spoken language include the occurrence of adjacency pairs, i.e., minimal pairs like Question-Answer and SummonsResponse (Sacks, Schegloff and Jefferson, 1974), and prosody. These topics are subject to ongoing investigations (see especially Ford, Fox and Thompson, 2001) that we consider important because they relate linguistic subdisciplines like grammar and the study of conversation. In addition, it is clear that linguistic signals of coherence, such as connectives, have additional functions in conversations. For instance, connectives function to express coherence relations between

segments, like but in example (8), which expresses a contrastive relation. (8) The buzzard was soaring in the air for hours. But yesterday we did not see it all day.

In conversations, this use of connectives is also found, but at the same time, connectives frequently function as sequential markers: for instance, they signal the move from a digression back to the main line of the conversation or even signal turn-taking. In this type of use, connectives are often referred to as discourse markers (Schiffrin, 2001). In sum, we have discussed the principled difference between two answers to the question ‘how to account for connectedness of text and discourse?’ We have seen that, while cohesion seeks the answer in overt textual signals, a coherence approach considers connectedness to be of a cognitive nature. A coherence approach opens the way to a fruitful interaction between text linguistics, discourse psychology, and cognitive science, but at the same does not neglect the attention for linguistic detail characterizing the cohesion approach. The coherence paradigm is dominant in most recent work on the structure and the processing of discourse (see, among many others, Hobbs, 1990; Garnham and Oakhill, 1992; Sanders, Spooren and Noordman, 1992; Gernsbacher and Givo´n, 1995; Noordman and Vonk, 1997; Kintsch, 1998; Kehler, 2002). In our view it is this type of paradigm, located at the intersection of linguistics and discourse-processing research, that will lead to significant progress in the field of discourse studies. See also: Accessibility Theory; Coherence: Psycholinguis-

tic Approach; Connectives in Text; Context and Common Ground; Coreference: Identity and Similarity; Discourse Anaphora; Discourse Domain; Discourse Parsing, Automatic; Discourse Representation Theory; Discourse Semantics; Rhetoric, Classical.

Bibliography Ariel M (1990). Accessing noun-phrase antecedents. London: Routledge. Ariel M (2001). ‘Accessibility theory: an overview.’ In Sanders T, Schilperoord J & Spooren W (eds.) Text representation: linguistic and psycholinguistic aspects. Amsterdam: John Benjamins. 29–87. Bateman J A & Rondhuis K J (1997). ‘Coherence relations: towards a general specification.’ Discourse Processes 24, 3–49. Chafe W L (1994). Discourse, consciousness, and time. The flow and displacement of conscious experience in speaking and writing. Chicago: Chicago University Press.

96 Cohesion and Coherence Fauconnier G (1994). Mental spaces: Aspects of meaning construction in natural language. Cambridge: Cambridge University Press. Fauconnier G & Sweetser E (eds.) (1996). Spaces, worlds and grammar. Chicago: The University of Chicago Press. Ford C E, Fox B A & Thompson S A (eds.) (2001). The language of turn and sequence. Oxford: Oxford University Press. Garnham A & Oakhill J (eds.) (1992). Discourse Representation and Text Processing. A Special Issue of Language and Cognitive Processes. Hove, UK: Lawrence Erlbaum Associates. Gernsbacher M A & Givo´n T (eds.) (1995). Coherence in spontaneous text. Amsterdam: John Benjamins. Givo´n T (1992). ‘The grammar of referential coherence as mental processing constructions.’ Linguistics 30, 5–55. Givo´n T (1995). ‘Coherence in text vs. coherence in mind.’ In Gernsbacher M A & Givo´n T (eds.) Coherence in spontaneous text. Amsterdam: John Benjamins. 59–115. Halliday M A K & Hasan R (1976). Cohesion in English. London: Longman. Hobbs J R (1979). ‘Coherence and coreference.’ Cognitive Science 3, 67–90. Hobbs J R (1990). Literature and cognition. Menlo Park, CA: CSLI. Hoey M (1983). On the surface of discourse. London: George Allen & Unwin. Hovy E H (1990). ‘Parsimonious and profligate approaches to the question of discourse structure relations.’ In Proceedings of the 5th International Workshop on Natural Language Generation. Lintermann-Rygh I (1985). ‘Connector density – an indicator of essay quality?’ Text 5, 347–357. Kehler A (2002). Coherence, reference and the theory of grammar. Chicago: Chicago University Press. Kintsch W (1998). Comprehension. A paradigm for cognition. Cambridge: Cambridge University Press. Knott A & Dale R (1994). ‘Using linguistic phenomena to motivate a set of coherence relations.’ Discourse Processes 18, 35–62. Knott A, Sanders T & Oberlander J (eds.) (2001). Levels of Representation in Discourse Relations. Special Issue of Cognitive Linguistics. Berlin: Mouton de Gruyter. Martin J R (1992). English text. System and structure. Philadelphia: John Benjamins. Mann W C & Thompson S A (1986). ‘Relational propositions in discourse.’ Discourse Processes 9, 57–90. Mann W C & Thompson S A (1988). ‘Rhetorical Structure Theory: toward a functional theory of text organization.’ Text 8, 243–281.

Mann W C & Thompson S A (eds.) (1992). Discourse description. Diverse analyses of a fund-raising text. Amsterdam: John Benjamins. Moore J D & Pollack M E (1992). ‘A problem for RST: the need for multi-level discourse analysis.’ Computational Linguistics 18, 537–544. Noordman L G M & Vonk W (1997). ‘The different functions of a conjunction in constructing a representation of the discourse.’ In Fayol M & Costermans J (eds.) Processing interclausal relationships in production and comprehension of text. Hillsdale, NJ: Erlbaum. 75–93. Noordman L G M & Vonk W (1998). ‘Memory-based processing in understanding causal information.’ Discourse Processes 26, 191–212. Pander Maat H L W (1999). ‘The differential linguistic realization of comparative and additive coherence relations.’ Cognitive Linguistics 10(2), 147–184. Risselada R & Spooren W (eds.) (1998). The function of discourse markers. Special Issue of Journal of Pragmatics. Amsterdam: Elsevier. Sacks H, Schegloff E A & Jefferson G (1974). ‘A simplest systematics for the organization of turn-taking for conversation.’ Language 50, 696–735. Sanders J & Redeker G (1996). ‘Perspective and the representation of speech and thought in narrative discourse.’ In Fauconnier G & Sweetser E (eds.) Spaces, Worlds and Grammars. Chicago: University of Chicago Press. 290–317. Sanders T, Schilperoord J & Spooren W (eds.) (2001). Text representation: linguistic and psycholinguistic aspects. Amsterdam: John Benjamins. Sanders T & Spooren W (1999). ‘Communicative intentions and coherence relations.’ In Bublitz W, Lenk U & Ventola E (eds.) Coherence in text and discourse. Amsterdam: John Benjamins. 235–250. Sanders T & Spooren W (2007). ‘Discourse and text structure.’ In Geeraerts D & Cuykens H (eds.) Handbook of cognitive linguistics. Oxford: Oxford University Press 916–943. Sanders T, Spooren W & Noordman L (1992). ‘Toward a taxonomy of coherence relations.’ Discourse Processes 15, 1–35. Schiffrin D (2001). ‘Discourse markers: language, meaning, and context.’ In Schiffrin D, Tannen D & Hamilton D (eds.) The handbook of discourse analysis. Malden, MA: Blackwell. 54–75. Spooren W & Risselada R (eds.) (1997). Discourse markers. Special Issue of Discourse Processes. Mawah, NJ: Erlbaum. Sweetser E E (1990). From etymology to pragmatics. Cambridge: Cambridge University Press.


Collocations

R Krishnamurthy, Aston University, Birmingham, UK

© 2006 Elsevier Ltd. All rights reserved.

Historical Use of the Term Collocation

The fact that certain words co-occurred frequently was noticed in Biblical concordances (e.g., Cruden listed the occurrences of dry with ground in 1769). Style and usage guides in the 19th and 20th centuries (e.g., Fowler's The King's English) addressed only the overuse of collocations, labeling them clichés and criticizing their use, especially by journalists (e.g., Myles na Gopaleen (see O'Nolan, 1977: 225–6), in a more humorous vein: 'When and again have I asked you not to do that? Time . . . What is our civilization much? Vaunted. What is the public? Gullible. What interests? Vested.').

Collocation in Modern Linguistics

In modern linguistics, collocation refers to the fact that certain lexical items tend to co-occur more frequently in natural language use than syntax and semantics alone would dictate. Collocation was first given theoretical prominence by J. R. Firth, who separated it from cognitive and semantic ideas of word meaning, calling it an "abstraction at the syntagmatic level" (Firth 1957a: 196), and accorded it a distinct status in his account of the linguistic levels at which meaning can arise. Firth implicitly indicated that collocation required a quantitative basis, giving actual numbers of co-occurrences in some texts. Halliday (1976) saw collocation as a cohesive device, identified the need for a measure of significant proximity between collocating items, and said that collocation could only be discussed in terms of probability, thus validating the need for quantitative analyses and the use of statistics. Sinclair (Sinclair et al., 1970) performed the first computational investigation of collocation, comparing written and spoken corpora, identifying ±5 words as the span of significant proximity, and experimenting with statistical measures and lemmatization. Halliday (1966) and Sinclair (1966) thought that collocation could enable a lexical analysis of language independent of grammar. Sinclair (1991) suggested that lexical items could be defined by their collocational environments, and saw collocation as part of the idiom principle (lexically determined choices), as opposed to the open choice principle (grammatically determined choices). Leech (1974: 20) included 'collocative' in his categories of meaning, but marginalized it as an idiosyncratic property of individual words, incapable of contributing to generalizations. Sinclair (1987c) and Stubbs (1996) suggested that all lexical items have collocations; and Hoey (2004) accommodated collocation within a model of 'lexical priming,' suggesting that most sentences are made up of interlocking collocations, and can therefore be seen as reproductions of earlier sentences.

Collocation and Lexicography

The pedagogical value of collocation was recognized by English teachers in the 1930s. English collocations were described in detail by Harold Palmer in a report on phraseology research with A. S. Hornby, using the term fairly loosely to cover longer phrases, proverbs, and so on, as well as individual word combinations. Palmer and Hornby showed a major interest in the classification of collocations in grammatical and semantic terms but also used collocations to indicate the relevant senses of words in word lists (draw 1. e.g., a picture 2. e.g., a line), and in their dictionary examples (a practice continued in Hornby's (1948) and subsequent editions of the Oxford advanced learner's dictionary). Early EFL dictionaries avoided using the term collocation, e.g., Hornby (1974) referred to "special uses of an adjective with a preposition" (liable: for, be ~ to sth), and a "special grammatical way in which the headword is used" (meantime: in the ~). Procter (1978), in the Longman dictionary of contemporary English, referred to "ways in which English words are used together, whether loosely bound or occurring in fixed phrases" and "special phrases in which a word is usually (or always) found"; however, the dictionary also had a section headed 'Collocations,' defined as "a group of words which are often used together to form a natural-sounding combination," and stated that they are shown in three ways: in example sentences, in explanations in Usage Notes, or in heavy black type inside round brackets if they are very frequent or almost a fixed phrase ("but not an idiom"). These are signaled by 'in the phr.' or similar rubrics, and Procter (1978) gave the example a mountain fastness. Later EFL dictionaries (Cobuild, Cambridge, Macmillan, etc.) continued to incorporate collocations in their dictionaries, including them in definitions and examples and typographically highlighting them in phrases. Sinclair's Introduction to the Cobuild dictionary (1987b), in the section on 'Word and Environment,' speaks of "the way in which the patterns of words with each other are related to the meanings and uses of the words" and says that "the sense of a word is bound up with a particular usage . . . a close association of words or a grouping of words into a set phrase" and "(a word) only has a particular meaning when it is in a particular environment." Examples such as hard luck, hard facts, hard evidence, strong evidence, tough luck, and sad facts are discussed. In Sinclair (1987b), collocates are defined as "words which co-occur significantly with headwords," and regular or significant collocation as "lexical items occurring within five words . . . of the headword" with a greater frequency than expected, which "was established only on the basis of corpus evidence." For the first time in lexicography, a statistical notion of collocation had been introduced. Collocation is used to distinguish senses: "Different sets of collocates found with these different senses pinpoint the fact that they are different senses"; "Collocation . . . frequently reinforces meaning distinctions"; and lexical sets used in disambiguation are "signalled by coincidence of collocation" (Sinclair, 1987a). Collocation can also be a marker of metaphoricity: the presence of modifiers and qualifiers indicates metaphorical uses of treadmill and blanket, e.g., . . . the corporate treadmill; . . . the treadmill of office life; a security blanket for new democracies; a blanket of snow (ibid.). Collocation is the "lexical realisation of the situational context" (ibid.). In the central patterns of English, "meaning was only created by choosing two or more words simultaneously" (ibid.). However, the flexibility of collocation (sometimes crossing sentence boundaries) can cause problems in the wording of definitions: often, "no particular group of collocates occurs in a structured relationship with the word" and therefore "there is no suitable pattern ready for use as a vehicle of explanation" (ibid.). The difficulty of eliciting collocates by intuition is discussed; we tend to think of semantic sets: feet suggests "legs, toes, head or shoe, sandals, sock, or walk, run," whereas significant corpus collocates of feet are "tall, high, long, and numbers" (ibid.). Prompted by hint, we produce "subtle, small, clue"; the corpus indicates "give, take, no." The difference between left-hand and right-hand collocates is exemplified by open: the most frequent words before open are "the, to, an, is, an, wide, was, door, more, eyes" and after open are "to, and, the, for, up, space, a, it, in, door" (ibid.). Lexicographers can also use collocations to distinguish between near-synonyms, e.g., the difference between electric (collocates: specific devices such as guitar, chair, light, car, motor, windows, oven, all 'powered by electricity'), and electrical (collocates: more generic terms such as engineering, equipment, goods, appliances, power, activity, signals, systems, etc., 'concerning or involving electricity').

Finding Collocations in a Corpus

Initially, collocates for dictionary headwords were identified manually by lexicographers wading through pages of printouts of concordance lines. This was clearly unsatisfactory, and only impressionistic views were feasible. Right-sorted concordances obscured left-context collocates and vice versa. The fixed-length context of printouts prevented the observation of collocates beyond a few words. Subsequent software developments have enabled the automatic measurement of statistically significant co-occurrences. These are within a specifiable and adjustable span or window of context, using different measures of statistical significance, principally mutual information (or MI-score) and t-score. MI-score privileges lower-frequency, high-attraction collocates (e.g., dentist with hygienist, optician, and molar) while t-score favors higher-frequency collocates (e.g., dentist with chair), including significant grammatical words (e.g., dentist with a, and your). The software can also display the collocate's positional distribution if required, and recursive options are available to investigate the detailed phraseology of collocating items. Software has also become more publicly available, from MicroConcord to Wordsmith Tools and Collocate. Kilgarriff and Tugwell's WordSketch (Kilgarriff et al., 2004) was used in creating the Macmillan English dictionary (Rundell, 2002) and offers clause-functional information about collocations, e.g., wear + objects: suit, dress, hat, etc. + prepositional phrases (after of: armor, clothing, jeans, etc.; after with: pride, sleeve, collar, etc.; after on: sleeve, wrist, finger, etc.; after over: shirt, head, dress, etc.); similarly, fish is the subject of the verbs swim, catch, fry, etc.; the object of the verbs catch, eat, feed, etc.; and modified by the adjectives tropical, bony, oily, and so on. Lexicographers are in general less concerned about the detailed classification of collocations, although their judgments affect both the placement and the specific treatment of the combinations. Hornby's attempts (e.g., Hornby, 1948, 1974) at classification (focusing on verbs) later used transformations and meaning distinctions as well as surface patterns, and Hunston and Francis (2000) listed the linguistic and lexicological terminology that has developed subsequently for collocational units: lexical phrases, composites, gambits, routine formulae, phrasemes, etc., and referred to the work of Moon (e.g. 1998) and Mel'čuk (e.g. 1998) in discussing degrees of fixity and variation, which does impact on lexicography. However, one of Firth's (1957b) original terms, 'colligation,' used to describe the habitual co-occurrence of grammatical elements, has not achieved the same widespread usage as 'collocation.' One manifestation of colligation, the phrasal verb (a combination of verb and particle, adverb or preposition, forming a semantic unit), has been highlighted in EFL dictionaries. Several EFL publishers have produced separate dictionaries of phrasal verbs. There have been some dictionaries of collocations, but so far each has had its own limitations: not wholly corpus-based (e.g., Benson et al., 1986; Hill and Lewis, 1997), based on a small corpus (e.g., Kjellmer, 1994), or limited coverage (the recent Oxford collocations dictionary for students of English (Lea, 2002)).
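The MI- and t-score measures mentioned above can be stated compactly. For a node word w1 and candidate collocate w2 in a corpus of N running words, with f(w1) and f(w2) their frequencies and f(w1, w2) their co-occurrence frequency within the chosen span, the standard formulations (as in Church and Hanks, 1989) are sketched below. The figures used in the demonstration are invented for illustration; span extraction is not shown, and the counts are assumed to come from co-occurrence within the five-word span discussed above.

```python
import math

# Sketch of the two association measures in their standard forms
# (cf. Church and Hanks, 1989). Toy counts only.

def mi_score(f_pair, f1, f2, n):
    """Mutual information: how much more often the pair occurs than chance,
    log2( f(w1,w2) * N / (f(w1) * f(w2)) ); privileges rare, strongly
    attracted collocates."""
    return math.log2(f_pair * n / (f1 * f2))

def t_score(f_pair, f1, f2, n):
    """t-score: observed minus expected co-occurrence, scaled by sqrt of
    the observed count; favors high-frequency collocates."""
    expected = f1 * f2 / n
    return (f_pair - expected) / math.sqrt(f_pair)

n = 1_000_000                                  # corpus size in running words
f_dentist, f_molar, f_chair = 200, 30, 4_000   # invented frequencies
print(mi_score(12, f_dentist, f_molar, n))     # high MI: dentist + molar
print(t_score(25, f_dentist, f_chair, n))      # solid t-score: dentist + chair
```

Run on real counts, the two measures surface different collocate lists from the same concordance, which is why corpus tools typically report both.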

Collocation in Computational Linguistics, Pedagogy, and Translation

Interest in collocation has increased substantially in the past decade, as evidenced by workshops at lexicographical, linguistic, pedagogical, and translation conferences. For computational purposes, the relevant features of collocations are that they are "arbitrary, domain independent, recurrent, and cohesive lexical clusters" (Smadja, 1993) and "of limited semantic compositionality" (Manning and Schütze, 1999). But the greatest interest has been generated in the language-teaching profession, with numerous conference and journal papers. Lewis (2000) encapsulates the main concerns: students do not recognize collocations in their input, and hence fail to produce them; collocation represents fluency (which precedes accuracy, represented by grammar); transparent versus 'arbitrary' (or idiomatic) combinations, with familiar words in rarer combinations (a heavy smoker is not a fat person); transformation can be misleading (extremely disappointed but rarely extreme disappointment); students may generalize more easily from corpus concordance examples than from canonical versions in dictionaries (exploring versus explaining); collocation as a bridge between the artificial separation of lexis and grammar; collocation extends knowledge of familiar words (easier than acquiring new words in isolation); and longer chunks are more useful and easier to store than isolated words.

Conclusions and the Future

For many fields, it seems that collocation has a great future. The applications of collocation in language teaching have been one of the notable recent successes. Its more detailed exploration in large language corpora requires a significant advance in software. The exact parameters are not fully established, and the statistical measures can be improved. Research to identify word-senses by the clustering of collocates was initiated in the 1960s (Sinclair et al., 1970), but has still not become sufficiently robust for automatic processing. The identification of lexical sets by collocation, signaled in Sinclair (1966; Sinclair et al., 1970) and Halliday (1966), is yet to be achieved, as is a corpus-generated thesaurus. The theoretical impetus of collocation has yet to reach the level of a language-pervasive system, although Hoey's notion of Lexical Priming heads in that direction.

See also: Coherence: Psycholinguistic Approach; Cohesion and Coherence; Connotation; Context; Context and Common Ground; Definition in Lexicology; Dictionaries and Encyclopedias: Relationship; Disambiguation; False Friends; Frame Semantics; Generative Lexicon; Idioms; Jargon; Lexical Conditions; Lexical Fields; Lexical Semantics; Lexicology; Lexicon/Dictionary: Computational Approaches; Lexicon: Structure; Meronymy; Partitives; Polysemy and Homonymy; Selectional Restrictions; Taboo Words; Taboo, Euphemism, and Political Correctness.

Bibliography

Benson M, Benson E & Ilson R (1986). The BBI combinatory dictionary of English. New York: John Benjamins. Church K W & Hanks P (1989). 'Word association norms, mutual information, and lexicography.' In Proceedings of the 27th annual meeting of the Association for Computational Linguistics, reprinted in Computational Linguistics 16(1), 1990. Church K W, Gale W, Hanks P & Hindle D (1990). 'Using statistics in lexical analysis.' In Zernik U (ed.) Lexical acquisition: using on-line resources to build a lexicon. Lawrence Erlbaum Associates. Clear J (1993). 'From Firth principles: computational tools for the study of collocation.' In Baker M, Francis G & Tognini-Bonelli E (eds.) Text and technology. Amsterdam: John Benjamins. Collocate (2005). Written by Michael Barlow. Houston: Athelstan. For details see http://athel.com/product_info.php?products_id=29&osCsid=8c5d654da554afcb0348ee65eb143265. Cowie A P (1999). English dictionaries for foreign learners – a history. Oxford: Clarendon Press. Firth J R (1957a). 'Modes of meaning.' In Papers in linguistics 1934–51. London: Oxford University Press. Firth J R (1957b). 'A synopsis of linguistic theory 1930–55.' In Studies in linguistic analysis. (Special volume of the Philological Society). Oxford: Blackwell. Reprinted in Palmer F (ed.) (1968) Selected papers of J. R. Firth 1952–59. Halliday M A K (1966). 'Lexis as a linguistic level.' In Bazell C E, Catford J C, Halliday M A K & Robins R H (eds.) In memory of J. R. Firth. London: Longman. Halliday M A K & Hasan R (1976). Cohesion in English. London: Longman. Hill J & Lewis M (1997). LTP Dictionary of Selected Collocations. Hove: LTP.

Hoey M (2004). 'Textual colligation – a special kind of lexical priming.' Language and Computers 49(1), 171–194. Hornby A S (ed.) (1948). Oxford advanced learner's dictionary of current English (1st edn.). Oxford: Oxford University Press. Hornby A S (ed.) (1974). Oxford advanced learner's dictionary of current English (3rd edn.). Oxford: Oxford University Press. Kenny D (1998). 'Creatures of habit? What translators usually do with words.' Meta 43(4), 515–523. Kilgarriff A, Rychly P, Smrz P & Tugwell D (2004). 'The sketch engine.' In Williams G & Vessier S (eds.) Proceedings of Euralex 2004. Lorient, France: Université de Bretagne Sud. For more details and access to software, please see http://www.sketchengine.co.uk/. Kjellmer G (1994). A dictionary of English collocations. Oxford: Clarendon Press. Lea D (ed.) (2002). Oxford collocations dictionary for students of English. Oxford: Oxford University Press. For details see http://www.oup.com/elt/catalogue/isbn/0-19-431243-7?cc=gb. Leech G (1974). Semantics. London: Penguin. Lewis M (2000). Teaching collocation. Hove: Language Teaching Publications. Louw B (1993). 'Irony in the text or insincerity in the writer? The diagnostic potential of semantic prosodies.' In Baker M et al. (eds.) Text and technology. Amsterdam: John Benjamins. Manning C D & Schütze H (1999). Foundations of statistical natural language processing. Cambridge, MA: MIT Press. Mel'čuk I (1998). 'Collocations and lexical functions.' In Cowie A P (ed.) Phraseology. Theory, analysis, and applications. Oxford: Clarendon Press. 23–53. MicroConcord (1993). Written by Scott M & Johns T. Oxford: OUP. See http://users.ox.ac.uk/ctitext2/resguide/resources/m125.html for details and http://www.liv.ac.uk/ms2928/software/ for free download.

Moon R (1998). Fixed expressions and idioms in English: a corpus-based approach. Oxford: Oxford University Press. O’Nolan K (ed.) (1977). The best of Myles – a selection from ‘Cruiskeen Lawn’. London: Pan Books. Palmer H E (1933). Second interim report on English collocations. Tokyo: Kaitakusha. Procter P (ed.) (1978). Longman dictionary of contemporary English (1st edn.). Harlow: Longman. Rundell M (ed.) (2002). Macmillan English dictionary. Basingstoke: Macmillan. Sinclair J M (1966). ‘Beginning the study of lexis.’ In Bazell C E, Catford J C, Halliday M A K & Robins R H (eds.) In memory of J. R. Firth. London: Longman. Sinclair J M (ed.) (1987a). Looking up: an account of the COBUILD project in lexical computing. London: Collins ELT. Sinclair J M (1987b). ‘Introduction.’ In Sinclair J M (ed.) Collins Cobuild English language dictionary, 1st edn. London/Glasgow: Collins. Sinclair J M (1987c). ‘Collocation: a progress report.’ In Steele R & Threadgold T (eds.) Language topics. Amsterdam/Philadelphia: Benjamins. Sinclair J M (1991). Corpus, concordance, collocation. Oxford: Oxford University Press. Sinclair J M, Jones S & Daley R (1970). English lexical studies. Report to OSTI on Project C/LP/08. Now published (2004) as Krishnamurthy (ed.) English collocation studies: the OSTI Report. London: Continuum. Smadja F (1993). ‘Retrieving collocations from text: Xtract.’ Computational Linguistics 19(1), 143–177. Smadja F, McKeown K & Hatzivassiloglou V (1996). ‘Translating collocations for bilingual lexicons: a statistical approach.’ Computational Linguistics 22(1), 1–38. Stubbs M (1996). Text and corpus analysis. Oxford: Blackwell. Wordsmith Tools (1996). Written by Scott M. Oxford: OUP. For details and downloads, see http://www.lexically.net/wordsmith/.

Color Terms D L Payne, University of Oregon, Eugene, OR, USA © 2006 Elsevier Ltd. All rights reserved.

Color Perception Color is a shorthand way of referring to the psychological interpretation of retinal and neuronal perception of reflected visible light (Lenneberg and Roberts, 1956; Hardin, 1988). Colors are commonly thought of as being composed of three properties: 1) hue (perception of wavelength interactions), 2) brightness or luminescence on a dark-light scale (based on reflectivity of a surface), and 3) saturation (perception of

purity of one dominant wavelength). The highest degree of luminescence is ‘white’ or ‘bright,’ while the lowest degree (no reflectivity) is ‘black’ or ‘dark’ (Figure 1). If there is very low or no saturation, the color is interpreted as ‘gray’ (Figure 2).
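These three parameters have a rough computational analogue in hue-saturation-value color coordinates. The sketch below is an added editorial illustration (not part of the original article) using Python’s standard colorsys module; HSV is only a crude stand-in for the psychophysical notions at issue, but it mirrors the white/black/gray limiting cases just described.

    # A minimal sketch: hue-saturation-value coordinates as a rough
    # computational analogue of the hue/saturation/brightness parameters.
    # colorsys is in the Python standard library; HSV only approximates
    # the psychophysical notions described in the text.
    import colorsys

    def describe(r, g, b):
        """Classify an RGB triple (floats in 0-1) by the three parameters."""
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if v < 0.1:
            return "black/dark (lowest luminescence)"
        if s < 0.1:
            return ("white/bright (highest luminescence)" if v > 0.9
                    else "gray (very low saturation)")
        return f"hue {h * 360:.0f} deg, saturation {s:.2f}, brightness {v:.2f}"

    print(describe(0.0, 0.0, 0.0))   # black/dark
    print(describe(1.0, 1.0, 1.0))   # white/bright
    print(describe(0.5, 0.5, 0.5))   # gray
    print(describe(0.9, 0.1, 0.1))   # a saturated, fairly bright red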

Color Vocabulary Color terms are not the same thing as the psychophysical perception of wavelength and reflectivity, but are Saussurean ‘signs’ which name color concepts. Individuals from two distinct language-culture groups may perceive given light-wave experiences


Figure 1 Luminescence.

Figure 2 Saturation.

similarly but use very distinct patterns of color terms to talk about their experiences. For example, it is unlikely that a native English speaker would use a single color term to name the entire range of colors that are named by the term nirô, or a single term for the range named by pôs in Maa, the language of the Maasai, in Kenya and Tanzania (Figure 3). Conversely, many English speakers might use the single word brown for the hues that Maa speakers divide into nirô, múgíé, moríjoi, and several other categories. Essentially, all languages have two or more lexical items that name color concepts as their basic sense (but see Levinson, 2002). The Dani (Irian Jaya) word mola names a color concept roughly corresponding to a combination of ‘red + white + yellow.’ The Yagua (Peru) rúuną́y names ‘red.’ Some color terms may derive from the names of objects, such as English olive, which names a tree and its fruit; only by metonymic extension does it name the grayish-green color corresponding to the prototypical fruit of the olive tree. Some color terms are contextually

restricted. Thus, English blond primarily applies to human hair colors, and cannot be used for the same hue range in paint found, for example, on cars or walls. The Maa ómò is restricted to the color of certain light-brown sheep. Even for terms that are not contextually restricted, their reference on particular occasions of use is likely to be severely affected by context. The meaning of black in black sky versus in black crow is not likely to be the same ‘black.’ Red is unlikely to designate the same hue-saturation-brightness values in red lipstick and in red hair (under natural circumstances). Color terms often have emotional or social connotations, such as the widely attested association of ‘red’ with anger. Color terms are common in idioms for human beings. Sometimes languages include in their ‘color’ category words that cannot be defined only by hue, saturation, and brightness parameters. The Maa emúá ‘color’ category contains both hue-saturation-brightness terms and color-plus-design terms such as arôs ‘spotted black and white,’ keshúroi ‘red and white/brown and white’ with ‘white’ on or near the face, sámpù ‘thinly striped, typically with tan and white’ (Figure 4), etc. Pukótì ‘blend of black and white, so well blended that from a distance the whole may appear blue’ is a hyponym (subcase) of pôs ‘blue,’ parallel to sagárarámì ‘light blue/purple’ (from the name of a seed pod), and kií ‘blue’ (from ‘whetting stone’) (Payne et al., 2003). On different occasions, the same speaker may name a given hue-saturation-brightness value with different terms. In part, this led MacLaury (1996; 2002) to argue that speakers may switch perspectives in observing a phenomenon; they may look at two items from the vantage point of either how similar, or how different, they are. Perspective-switching allows for flexible cognitive categorizations, hence alternative namings, and eventually may lead to different lexicalizations across speech communities.

Color Term Universals An enduring question concerns whether universal constraints underlie inventories of color terms. If so, do explanations lie in physiology or the nature of cognition? Bloomfield (1933: 140) advanced the relativist idea that languages can ‘mark off’ different portions of the wavelength continuum quite arbitrarily. For him, color naming should be entirely culture-specific. A related question concerns to what extent color vocabulary may affect individuals’ cognitive perceptions of color (cf. Whorf, 1956; Kay and Kempton, 1984). Scientific cross-cultural studies of color terms began with the optician Magnus (1880), who drew evolutionary conclusions about vocabulary development.


Figure 3 Maa color naming. See http://darkwing.uoregon.edu/~dlpayne/maasai/MaaColorNaming-.htm. This figure reflects a color-naming task done by Vincent Konene Ole-Konchellah, a Maa (Maasai) speaker of Kenya, il-Wuasinkishu section. When the task was done, the color circles were randomized within a field. They are re-arranged here according to the names applied to the colors. In other Maa-speaking areas some terms, e.g., síntêt and pôs, may designate different colors. Maa has many additional color terms which Ole-Konchellah just did not employ in this task.

Figure 4 Animal hide displaying the Maa (Maasai) color term sámpù ‘thinly striped, typically with tan and white.’

The anthropologist Rivers (1901) drew evolutionary conclusions about social and mental development. Employing Lenneberg and Roberts’s (1956) procedures for researching Zuni (New Mexico) color terms, Berlin and Kay (1969) (henceforth BK) addressed the universals question. They distinguished basic color terms (BCTs) from color terms generally, and argued against an extreme relativist position, instead positing universal constraints on the evolution of basic terms.

BK defined a BCT as a word that refers to color first and foremost; is not a composite of other color terms; is not a sub-case hyponym of a more general term; is not contextually restricted; and is salient, as judged by being readily used and widely known throughout a language community. By these criteria, we identify Yagua as having four basic color roots (though of differing parts of speech): pupá- ‘white,’ dakuuy ‘be dark, black,’ rúuną́y ‘red colored,’ súnų- ‘green-blue.’ A concept partially corresponding to ‘yellow’ can be


expressed, but this involves modifying súnų- ‘green-blue’ with a suffix that probably derives from -diiy ‘near’ (súnųdiipó ‘pale, yellowish,’ súnųdíway ‘be yellowish, pale, anemic’; Powlison, 1995). Secondary criteria, appealed to in problematic cases, include whether the term (a) has the same grammatical properties as other BCTs; (b) is not derived from the name of an object; and (c) is not recently borrowed. Secondary criteria can be synchronically irrelevant for determining basic status, even if historically true. English orange was borrowed from French and still is the name of a fruit tree, but orange is considered a BCT in modern English because it meets the primary criteria. BK tested the hypothesis that there are constraints on development of BCTs using an array of about 330 Munsell color chips and 20 languages, relying on bilingual speakers living in California. The BCTs of each language were identified and elicited from the speakers. They were then asked to use the color chips to identify the best example (focal hue) of each term identified as a BCT in their respective languages. In a separate step speakers plotted the range of each BCT on an array of the color chips. The 20-language sample was supplemented by data on 78 more languages extracted from dictionaries and field-workers’ notes. BK concluded that though BCTs could show marked differences in range, there was a high degree of stability for focal hues across languages: only about 30 of the chips were nominated as focal hues. These concentrated around the focal hues of English black, white, red, green, yellow, blue, gray, brown, orange, purple, pink. Some languages had a term that covered blue + green (cf. Yagua súnų-), but BK’s results showed that the focal hue of this term tended to be either ‘blue’ or ‘green,’ but not half-way in between. They concluded that languages could be placed along a continuum of seven stages of BCT development, and that an implicational hierarchy governed the order in which new BCTs could be added, ending with a maximum of 11 BCTs (Figure 5). These claims opposed the view that languages could vary without limit. Further empirical evidence argued that, for people with normal trichromatic vision, certain focal centers are psychologically salient even when a person’s

language has no BCT corresponding to those focal colors (Heider, 1972; Rosch, 1975). Rosch showed that in Dani, with just two BCTs, speakers were better able to hold certain colors in memory than others, even when the memorable colors did not correspond to a focal center of one of the two Dani color terms. Importantly, the memorable colors corresponded quite closely to the BK ‘best examples’ from other languages. This result argues that the focal colors BK identified are psychologically salient, with the implication that at least the centers of color term categories were not dependent on culture or language. Again this result countered a strong form of the Whorfian hypothesis.

Figure 5 Berlin and Kay’s (1969) hypothesized stages in development of BCTs. If a language has any BCT to the right on the hierarchy, it was predicted to have all BCTs to the left. (A Stage VII language need have only some of ‘gray, pink, orange, purple.’)

Subsequent scholars have challenged the BK study on several grounds, including Western cultural bias, non-random sampling procedures, bilingual interference, transcription and data errors, and inadequate experimental methodologies (Hickerson, 1971; Saunders and van Brakel, 1997). Dedrick (1998) provides an even-handed review of the research from a philosophy of science perspective. The BK study was nevertheless hugely influential in initiating an enduring research tradition, spurring investigation of hundreds of additional languages (Borg, 1999). Major cross-language studies include MacLaury (1996) and the World Color Survey (Kay et al., forthcoming). Together these motivated revisions to the universalist claims (cf. Kay et al., 1997), including the following.
• In addition to ‘blue + green,’ the developmental sequence was revised to include more composites (Kay and McDaniel, 1978) (Figure 6). This was partially based on the discovery that ‘white’ was not a focal hue in all two-color BCT systems. For example, though the range of the Dani mola includes ‘white + red + yellow,’ it had a focal hue within the ‘red’ range. A more insightful characterization is that mola is a WARM color term, and neither a ‘white’ nor a ‘red’ term. The complementary term is mili, which is a ‘black + green + blue,’ or DARK-COOL composite. ‘Yellow + green,’ ‘white + yellow,’ and ‘black + blue’ composites have also been documented. In some languages a ‘green + blue’ composite may persist even after ‘brown,’ ‘purple,’ or both have achieved BCT status. Acknowledging composites accounted for how speakers can use BCTs to name any hue-saturation-brightness value, whereas BK would have predicted that some phenomenological color values would go unnamed.
• Composite color categories may have their foci in one salient hue or another, or may have multiple foci. This difference may vary by speaker.
• In the revised developmental sequence, the colors of Stages VI and VII were viewed as derived. The developmental sequence thus contained category types: composite, unique hue and achromatic (‘red, yellow, green, blue, white, black’), binary hue (‘orange’ as a combination of ‘yellow’ and ‘red,’ ‘purple’ as a combination of ‘red’ and ‘blue’), and derived (‘brown,’ ‘pink’).
• Developmentally, ‘brown, purple, pink, orange’ and especially ‘gray’ may appear earlier than predicted by BK (Greenfield, 1986). Indeed, the supposition that BCTs always come about by splitting hue-based categories into smaller hue-based categories is wrong, as brightness and saturation parameters can play a role. For example, a desaturated ‘gray’ might surface early in the sequence, and subsequently be reinterpreted as ‘blue’ (independently of any ‘green + blue’ composite) (MacLaury, 1999).
• Languages may lexicalize BCTs along a brightness parameter. The Bellonese (Solomon Islands) system has three ‘mothers’ or ‘big names’ of colors: susungu for bright, light colors (other than light greens and green-yellows), ‘ungi for dark colors (except pitch-black), and unga for the rest of the spectrum (plus other non-BCTs) (Kuschel and Monberg, 1974; cf. MacLaury, 1996).
• Though color categories cannot be defined by their boundaries, there are still restrictions on boundaries. Suppose one color category has its focus in ‘red’ and another has its focus in ‘yellow.’ If a speaker of such a language moves gradually from the red focus to the yellow one, there will be some point after which the speaker simply can no longer affirm that the hue could be considered ‘red’: a hue boundary has been passed (Dedrick, 1998).
• Some languages have more than 11 BCTs. Russian has 12, including goluboj ‘light, pale blue’ and sinij ‘dark, bright blue.’ Hungarian has both piros ‘light red’ and vörös ‘dark red’ BCTs (MacLaury et al., 1997).
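The original implicational claim behind Figure 5 can be stated mechanically. The following sketch is an added illustration, not part of the BK apparatus: it encodes the 1969 stages and flags an inventory that has a ‘later’ BCT without all ‘earlier’ ones, deliberately ignoring composites and the revisions just listed.

    # An added illustration of Berlin and Kay's (1969) implicational
    # hierarchy (Figure 5): any BCT on the right of the hierarchy implies
    # all BCTs to its left. Composites and the later revisions discussed
    # in the text are ignored.
    HIERARCHY = [
        {"black", "white"},                    # Stage I
        {"red"},                               # Stage II
        {"green", "yellow"},                   # Stages III-IV (either order)
        {"blue"},                              # Stage V
        {"brown"},                             # Stage VI
        {"gray", "pink", "orange", "purple"},  # Stage VII (any subset)
    ]

    def violates_hierarchy(bcts):
        """True if some BCT appears without all the terms to its left."""
        for i, level in enumerate(HIERARCHY[:-1]):
            later = set().union(*HIERARCHY[i + 1:])
            if later & bcts and not level <= bcts:
                return True
        return False

    print(violates_hierarchy({"black", "white"}))          # Dani-style: False
    print(violates_hierarchy({"black", "white", "blue"}))  # skipped stages: True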

Explaining Basic Color Terms The claim that universals partially govern development of BCTs appears to receive strong statistical support (Kay et al., 1997; and the forthcoming World Color Survey). Even so, what can ultimately explain the constrained developmental patterns remains unresolved. Kay and McDaniel (1978) argued that unique hue terms like white, black, red, green, yellow, and blue could be explained by an opponency theory, derived from the nature of the human eye and basic neural responses (which concerns whether a given retinal cell is maximally excited or inhibited by a given wavelength; Hering, 1920/1964; Hardin, 1988). Appeal was then made to fuzzy set theory (Zadeh, 1965) to account for binary and derived color terms like brown, orange, purple, pink and gray. But this set of explanations cannot account well for composite color terms that combine fundamental perceptual categories such as ‘yellow + red,’ ‘green + blue,’ and ‘white + yellow.’ ‘Yellow + green + blue’ composites are particularly troubling, since certain retinal cells appear to be maximally excited by focal blue hues but maximally inhibited by focal yellow. Disconcertingly, the proposal did not explain how categories change over time – one of the principal claims of the BK research paradigm was precisely that systems do change. Rosch’s findings led to explanations for color categorization in terms of central prototypes grounded in perception. Such an explanation works well for perceptually salient focal colors, but does not account for BCTs like purple, which tend not to have a salient focus; nor does it account for category boundary phenomena in color naming tasks. Arguments have been advanced that composite color terms for LIGHT-WARM and DARK-COOL

Figure 6 Kay and McDaniel’s (1978) revised BCT color sequence. Arrows represent splitting of composite categories. Gray is ‘wild,’ able to appear anywhere, but it is more likely to appear later.


may be linked to colors typically associated with day and night (Goddard, 1998); and other color terms may develop based on the color of culturally important objects (Saunders and van Brakel, 1997) (the position of cultural relativists). A difficulty for a culturally grounded explanation of DARK-COOL and LIGHT-WARM terms, however, is that BCTs for these notions do not often correspond to lexical terms for ‘night’ and ‘day’ or ‘sun,’ respectively. Most troubling, these accounts cannot explain the strong statistical patterns seen in large data sets such as the World Color Survey or MacLaury’s Mesoamerican study. Almost certainly, any reductionist one-factor explanation will ultimately fail to explain all of the patterns of BCT development in the world’s languages. See also: Categorizing Percepts: Vantage Theory; Cognitive Semantics; Field Work Methods in Semantics; Prototype Semantics; Synesthesia; Synesthesia and Language.

Bibliography Berlin B & Kay P (1969). Basic color terms, their universality and evolution. Berkeley: University of California Press [Reprinted 1991/1999. Stanford: CSLI Publications, with expanded bibliography by Luisa Maffi, and color chart by Hale Color Consultants.] Bloomfield L (1933). Language. New York: Holt. Borg A (ed.) (1999). The language of color in the Mediterranean. Stockholm: Almqvist & Wiksell. Dedrick D (1998). Naming the rainbow: colour language, colour science, and culture. Dordrecht: Kluwer. Goddard C (1998). Semantic analysis: a practical introduction. Oxford: Oxford University Press. Greenfield P J (1986). ‘What is grey, brown, pink, and sometimes purple: the range of ‘wild card’ color terms.’ American Anthropologist 24, 908–916. Hardin C L (1988). Color for philosophers: unweaving the rainbow. Indianapolis/Cambridge, MA: Hackett. Heider E R (1972). ‘Universals in color naming and memory.’ Journal of Experimental Psychology 93, 1–20. Hering E (1920/1964). Outlines of a theory of the light sense. Cambridge, MA: Harvard University Press. Hickerson N P (1971). ‘Review of Berlin and Kay (1969).’ International Journal of American Linguistics 37, 257–270. Kay P, Berlin B, Maffi L & Merrifield W (1997). ‘Color naming across languages.’ In Hardin C L & Maffi L (eds.) Color categories in thought and language. Cambridge: Cambridge University Press. 21–55. Kay P, Berlin B, Maffi L & Merrifield W (forthcoming). World color survey. Chicago: University of Chicago Press (Distributed by CSLI).

Kay P & Kempton W (1984). ‘What is the Sapir-Whorf Hypothesis?’ American Anthropologist 86, 65–79. Kay P & McDaniel C K (1978). ‘The linguistic significance of basic color terms.’ Language 54, 610–646. Kuschel R & Monberg T (1974). ‘‘We don’t talk much about colour here’: a study of colour semantics on Bellona Island.’ Man 9, 213–242. Lenneberg E H & Roberts J M (1956). The language of experience: a study in methodology, Memoir 13, International Journal of American Linguistics. Baltimore: Waverly. Levinson S C (2002). ‘Yélî Dnye and the theory of basic colour terms.’ Journal of Linguistic Anthropology 10, 3–55. MacLaury R E (1996). Color and cognition in Mesoamerica: constructing categories as vantages. Austin: University of Texas Press. MacLaury R E (1999). ‘Basic color terms: twenty-five years after.’ In Borg A (ed.) The language of color in the Mediterranean. Stockholm: Almqvist and Wiksell. 1–37. MacLaury R E (2002). ‘Introducing vantage theory.’ Language Sciences 24, 493–536. MacLaury R E, Almási J & Kövecses Z (1997). ‘Hungarian Piros and Vörös: color from points of view.’ Semiotica 114, 67–81. Magnus H (1880). Untersuchung über den Farbensinn der Naturvölker. Jena: Gustav Fischer. Payne D L, Ole-Kotikash L & Ole-Mapena K (2003). ‘Maa color terms and their use as human descriptors.’ Anthropological Linguistics 45, 169–200. Powlison P (1995). Nijyami Niquejadamusiy – May Niquejadamuju (Diccionario Yagua–Castellano) [Yagua–Spanish dictionary]. Lima: Instituto Lingüístico de Verano. Rivers W H R (1901). ‘Introduction: colour vision.’ In Haddon A C (ed.) Reports of the Cambridge Anthropological Expedition to Torres Straits 2: Physiology and Psychology. Cambridge: Cambridge University Press. 1–132. Rosch E H (1975). ‘Cognitive reference points.’ Cognitive Psychology 4, 328–350. Saunders B & van Brakel J (1997). ‘Are there nontrivial constraints on colour categorization?’ Behavioral and Brain Sciences 20, 167–228. Whorf B L (1956). ‘The relation of habitual thought and behavior to language.’ In Carroll J B (ed.) Language, thought and reality: selected writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press. 134–159. Zadeh L (1965). ‘Fuzzy sets.’ Information and Control 8, 338–353.

Relevant Website http://www.icsi.berkeley.edu/wcs – World Color Survey Site.


Comparatives C Kennedy, Northwestern University, Evanston, IL, USA © 2006 Elsevier Ltd. All rights reserved.

Introduction The ability to establish orderings among objects and make comparisons between them according to the amount or degree to which they possess some property is a basic component of human cognition. Natural languages reflect this fact: all languages have syntactic categories that express gradable concepts, and all languages have designated comparative constructions, which are used to express explicit orderings between two objects with respect to the degree or amount to which they possess some property (Sapir, 1944). In many languages, comparatives are based on specialized morphology and syntax. English exemplifies this type of system. It uses the morphemes more/-er, less, and as specifically for the purpose of establishing orderings of superiority, inferiority, and equality, respectively, and the morphemes than and as to mark the standard against which an object is compared:
(1a) Mercury is closer to the sun than Venus.
(1b) The Mars Pathfinder mission was less expensive than previous missions to Mars.
(1c) Uranus doesn’t have as many rings as Saturn.

In the case of properties for which specific measure units are defined, it is also possible to express differences between objects with respect to the degree to which they possess some property, even when the predicate from which the comparative is formed does not permit explicit measurement:
(2a) Mercury is 0.26 AU closer to the sun than Venus.
(2b) ??Mercury is 0.46 AU close to the sun.

Languages such as English also allow for the possibility of expressing more complex comparisons by permitting a range of phrase types after than and as. For example, (3a) expresses a comparison between the degrees to which the same object possesses different properties, (3b) compares the degrees to which different objects possess different properties, and (3c) relates the actual degree that an object possesses a property to an expected degree.
(3a) More meteorites vaporize in the atmosphere than fall to the ground.
(3b) The crater was deeper than a 50-story building is tall.
(3c) The flight to Jupiter did not take as long as we expected.

Finally, many languages also have related degree constructions that do not directly compare two objects but instead provide information about the degree to which an object possesses a gradable property by relating this degree to a standard based on some other property or relation. The English examples in (4) using the morphemes too, enough and so exemplify this sort of construction.
(4a) The equipment is too old to be of much use to us.
(4b) Current spacecraft are not fast enough to reach the speed of light.
(4c) The black hole at the center of the galaxy is so dense that nothing can escape the pull of its gravity, not even light.

Example (4b) denies that the speed of current spacecraft is as great as the speed required to equal the speed of light.

Gradability A discussion of the semantics of comparison must begin with the semantics of gradable predicates more generally. Not all properties can be used in comparatives, as shown by the contrast between the examples in (1) and (5).
(5a) ??Giordano Bruno is deader than Galileo.
(5b) ??The new spacecraft is more octagonal than the old one.
(5c) ??Carter is as former a president as Ford.

The crucial difference between predicates such as expensive and close, on the one hand, and dead, octagonal, and former, on the other, is that the first, but not the second, are gradable – they express properties that support (nontrivial) orderings. Comparatives thus provide a test for determining whether a predicate is inherently gradable or not. The most common analysis of gradable predicates assigns them a unique semantic type that directly represents their order-inducing feature; they are analyzed as expressions that map their arguments onto abstract representations of measurement, or scales. Scales have three crucial parameters, the values of which must be specified in the lexical entry of particular gradable predicates: a set of degrees, which represent measurement values; a dimension, which indicates the property being measured (cost, temperature, speed, volume, height, etc.); and an ordering relation on the set of degrees, which distinguishes between predicates that describe increasing properties (e.g., tall) and those that describe decreasing properties (e.g., short) (see Sapir, 1944; Bartsch and Vennemann, 1973; Cresswell, 1977; Seuren,


1978; von Stechow, 1984a; Bierwisch, 1989; Klein, 1991; Kennedy, 1999; Schwarzschild and Wilkinson, 2002). The standard implementation of this general view claims that gradable predicates have (at least) two arguments: an individual and a degree. Gradable predicates further contain as part of their meanings a measure function and a partial ordering relation such that the value of the measure function applied to the individual argument returns a degree on the relevant scale that is at least as great as the value of the degree argument. The adjective expensive, for example, expresses a relation between an object x and a degree of cost d such that the cost of x is at least as great as d. In order to derive a property of individuals, it is necessary to first saturate the degree argument. In the case of the positive (unmarked) form, the value of the degree argument is contextually fixed to an implicit norm or standard of comparison, whose value may vary depending on a number of different contextual factors (such as properties of the subject, the type of predicate, and so forth; see Vagueness). For example, the truth conditions of a sentence such as (6a) can be represented as in (6b), where size is a function from objects to degrees of size and ds is the contextually determined standard – the cutoff point for what counts as large in the context of utterance.
(6a) Titan is large.
(6b) size(t) ≥ ds

In the context of the various objects in the solar system, the value of ds is typically such that (6a) is false. If we are talking about Saturn’s moons, however, then ds is such that (6a) is true. This sort of variability is a defining feature of gradable adjectives as members of the larger class of vague predicates.

Comparison In contrast to the positive form, comparatives (and degree constructions in general) explicitly fix the value of the degree argument of the predicate. There are a number of implementations of this basic idea (see von Stechow, 1984a, for a comprehensive survey), but most share the core assumption that the comparative morphemes fix the value of the degree argument of the comparative-marked predicate by requiring it to stand in a particular relation – > for more, < for less, and  for as – to a second degree, the comparative standard, which is provided by the comparative clause (the complement of than or as). One common strategy is to assign the comparative morpheme essentially the same semantic type as a quantificational determiner – it denotes a relation between two sets of degrees. One of these sets is derived by

abstracting over the degree argument of the comparative predicate; the second is derived by abstracting over the degree argument of a corresponding predicate in the comparative clause. This analysis presupposes that the comparative clause contains such a predicate. In some cases, it is present in the surface form (see (3b)), but typically, in particular whenever it is identical to the comparative predicate, it is eliminated from the surface form by an obligatory deletion operation. For example, in the analysis developed in Heim (2000), more (than) denotes a relation between two sets of degrees such that the maximal element of the first (provided by the main clause) is ordered above the maximal element of the second (provided by the comparative clause). At the relevant level of semantic representation, a sentence such as (7) has the constituency indicated in (8a) (where the predicate repeated in the comparative clause, struck through in the original because it is elided from the surface form, is restored here) and the truth conditions in (8b).
(7) Titan is larger than Hyperion.
(8a) [Titan is d large] more than [Hyperion is d′ large]
(8b) max{d | large(t) ≥ d} > max{d′ | large(h) ≥ d′}

Note that because the truth conditions of the comparative form do not involve reference to a contextual norm, the comparative does not entail the corresponding positive. Thus (8a), for example, can be true even in a context in which (6a) is false. Differential comparatives such as (2a) can be accounted for by modifying the basic semantics to include a measure of the difference between the respective (maximal) degrees contributed by the two arguments of the comparative morpheme (von Stechow, 1984a; Schwarzschild and Wilkinson, 2002). Such differences always correspond to closed intervals on a scale and so are measurable even if the degrees introduced by the base-gradable predicate themselves are not (Seuren, 1978; von Stechow, 1984b; Kennedy, 2001). Because the standard of comparison is derived by abstracting over a degree variable in the comparative clause, this approach allows for the expression of arbitrarily complex comparisons such as those in (3). There are some limits, however. First, the comparative clause is a wh-construction, so the syntactic operation that builds the abstraction structure is constrained by the principles governing long-distance dependencies (see Kennedy, 2002, for an overview). Second, it is also constrained by its semantics; because the comparative clause is the argument of a maximalization operator, it must introduce a set of degrees that has a maximal element. Among other things, this correctly predicts that negation (and other decreasing operators) are excluded from the comparative clause (von Stechow, 1984a; Rullmann, 1995):

(9a) ??Venus is brighter than Mars isn’t.
(9b) max{d | bright(v) ≥ d} > max{d′ | ¬[bright(m) ≥ d′]}

The set of degrees d′ such that Mars is not as bright as d′ includes all the degrees of brightness greater than the one that represents Mars’s brightness. Because this set has no maximal element, the maximality operator in (9b) fails to return a value. The hypothesis that the comparative clause is subject to a maximalization operation has an additional logical consequence (von Stechow, 1984a; Klein, 1991; Rullmann, 1995); for any (ordered) sets of degrees D and D′, if D ⊆ D′, then max(D′) ≥ max(D). The comparative clause is thus a downward-entailing context and so is correctly predicted to license negative-polarity items and conjunctive interpretations of disjunction (Seuren, 1973; Hoeksema, 1984; but cf. Schwarzschild and Wilkinson, 2002):
(10a) The ozone layer is thinner today than it has ever been before.
(10b) We observed more sunspot activity in the last 10 days than anyone has observed in years.
(11a) Jupiter is larger than Saturn or Uranus. ⇒
(11b) Jupiter is larger than Saturn, and Jupiter is larger than Uranus.
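The maximality-based truth conditions can be made concrete with a toy model. The following sketch is an editorial illustration (the measure values and the names clause, maximize, and more_than are invented): positive comparative clauses contribute a degree set with a maximum, while a negated clause contributes a set unbounded above, so the comparative in (9) comes out undefined.

    # A minimal sketch of the max-based analysis. A clause 'x is d-A'
    # contributes the degrees up to A(x); a negated clause contributes
    # everything above A(x), a set with no maximal element, so
    # maximalization fails -- as in (9). Illustration only.

    large = {"titan": 7, "hyperion": 3}    # toy measure function (degrees)
    bright = {"venus": 9, "mars": 5}

    def clause(meas, x, negated=False):
        """'closed' stands for {d | meas(x) >= d};
        'open_above' for {d | not meas(x) >= d}, unbounded above."""
        return ("open_above" if negated else "closed", meas[x])

    def maximize(degree_set):
        kind, bound = degree_set
        if kind == "closed":
            return bound                   # max{d | meas(x) >= d} = meas(x)
        return None                        # no maximal element: undefined

    def more_than(main, standard):
        m, s = maximize(main), maximize(standard)
        if m is None or s is None:
            return None                    # the comparative is undefined
        return m > s

    # (7)/(8b): Titan is larger than Hyperion -> True
    print(more_than(clause(large, "titan"), clause(large, "hyperion")))
    # (9): ??Venus is brighter than Mars isn't -> None (undefined)
    print(more_than(clause(bright, "venus"),
                    clause(bright, "mars", negated=True)))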

Finally, the assumption that the comparative is a type of quantificational expression leads to the expectation that it should participate in scopal interactions with other logical operators. The ambiguity of (12), which has the (sensible) de re interpretation in (13a) and an (unlikely) de dicto interpretation in (13b), bears out this prediction.
(12) Kim thinks Earth is larger than it is.
(13a) max{d | think(large(e) ≥ d)(k)} > max{d′ | large(e) ≥ d′}
(13b) think(max{d | large(e) ≥ d} > max{d′ | large(e) ≥ d′})(k)

The extent to which comparatives interact with other operators and the implications of such interactions for the compositional semantics of comparatives and gradable predicates is a focus of current investigation (see Larson, 1988; Kennedy, 1999; Heim, 2000; Bhatt and Pancheva, 2004).

Comparison Cross-Linguistically As previously noted, there are in fact several distinct semantic analyses of comparatives that differ in their details but share the core assumption that gradable adjectives map objects to ordered sets of degrees. For example, one alternative analyzes the truth conditions of a sentence such as (7) as in (14); roughly, there is a degree d such that Titan is at least as large as

d but Hyperion is not as large as d (Seuren, 1973; Klein, 1980; Larson, 1988).
(14) ∃d[[large(t) ≥ d] ∧ ¬[large(h) ≥ d]]

Analysis (14) does not express an explicit ordering between two degrees but instead takes advantage of the implicit ordering on the scale of the predicate to derive truth conditions equivalent to (8b) – given the inherent ordering, (14) holds whenever the maximal degree of Titan’s largeness exceeds that of Hyperion (and vice versa). The fact that the underlying semantics of gradable predicates supports multiple equivalent logical analyses of comparatives appears at first to be a frustrating obstacle to the discovery of the ‘right’ semantics of the comparative. In fact, however, this may be a positive result when we take into account the extremely varied syntactic modes of expressing comparison in the world’s languages (see Stassen, 1985), which include forms that superficially resemble the logical representation in (14), such as the example from Hixkaryána in (15).
(15) Kaw-ohra naha Waraka, kaw naha Kaywerye
     tall-NOT he-is Waraka, tall he-is Kaywerye
     ‘Kaywerye is taller than Waraka’
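The equivalence between (8b) and (14) can be checked mechanically on a finite toy scale. The sketch below is an added illustration (the scale and measure values are invented); it verifies that the max-based and existential formulations agree on every assignment of degrees.

    # An added toy check that the existential analysis (14) and the
    # max-based analysis (8b) are equivalent on a finite scale of degrees.
    import itertools

    SCALE = range(1, 11)                   # toy degrees 1..10

    def max_analysis(meas, x, y):
        # (8b): max{d | meas(x) >= d} > max{d' | meas(y) >= d'}
        return meas[x] > meas[y]

    def exists_analysis(meas, x, y):
        # (14): there is a d with meas(x) >= d and not meas(y) >= d
        return any(meas[x] >= d and not meas[y] >= d for d in SCALE)

    for dx, dy in itertools.product(SCALE, repeat=2):
        m = {"x": dx, "y": dy}
        assert max_analysis(m, "x", "y") == exists_analysis(m, "x", "y")
    print("(8b) and (14) agree on all", len(SCALE) ** 2, "assignments")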

Although it may turn out to be difficult to find clear empirical evidence to choose between competing, equivalent logical representations of comparatives within a particular language such as English, it may also turn out that a study of the various expressions of comparison in different languages will show that all the possible options provided by the underlying semantics of gradability are in fact attested. Comparatives, therefore, provide a potentially fruitful and important empirical domain for investigating broader typological questions about the mapping between (universal) semantic categories and (language-specific) syntactic ones.

See also: Antonymy and Incompatibility; Comparative Constructions; Monotonicity and Generalized Quantifiers; Negation; Quantifiers; Vagueness.

Bibliography Bartsch R & Vennemann T (1973). Semantic structures: A study in the relation between syntax and semantics. Frankfurt: Athenäum Verlag. Bhatt R & Pancheva R (2004). ‘Late merger of degree clauses.’ Linguistic Inquiry 35, 1–46. Bierwisch M (1989). ‘The semantics of gradation.’ In Bierwisch M & Lang E (eds.) Dimensional adjectives. Berlin: Springer-Verlag. 71–261.

Cresswell M J (1977). ‘The semantics of degree.’ In Partee B (ed.) Montague grammar. New York: Academic Press. 261–292. Heim I (2000). ‘Degree operators and scope.’ In Jackson B & Matthews T (eds.) Proceedings of semantics and linguistic theory, 10. Ithaca, NY: CLC Publications. 40–64. Hoeksema J (1984). ‘Negative polarity and the comparative.’ Natural Language & Linguistic Theory 1, 403–434. Kennedy C (1999). Projecting the adjective: The syntax and semantics of gradability and comparison. New York: Garland Press. Kennedy C (2001). ‘Polar opposition and the ontology of ‘‘degrees.’’’ Linguistics and Philosophy 24, 33–70. Kennedy C (2002). ‘Comparative deletion and optimality in syntax.’ Natural Language & Linguistic Theory 20.3, 553–621. Klein E (1980). ‘A semantics for positive and comparative adjectives.’ Linguistics and Philosophy 4, 1–45. Klein E (1991). ‘Comparatives.’ In von Stechow A & Wunderlich D (eds.) Semantik: Ein internationales Handbuch der zeitgenössischen Forschung. Berlin: Walter de Gruyter. 673–691. Larson R K (1988). ‘Scope and comparatives.’ Linguistics and Philosophy 11, 1–26.

Rullmann H (1995). Maximality in the semantics of wh-constructions. Ph.D. diss., University of Massachusetts, Amherst. Sapir E (1944). ‘Grading: A study in semantics.’ Philosophy of Science 11, 93–116. Schwarzschild R & Wilkinson K (2002). ‘Quantifiers in comparatives: A semantics of degree based on intervals.’ Natural Language Semantics 10, 1–41. Seuren P A (1973). ‘The comparative.’ In Kiefer F & Ruwet N (eds.) Generative grammar in Europe. Dordrecht: Reidel. 528–564. Seuren P A (1978). ‘The structure and selection of positive and negative gradable adjectives.’ In Farkas D, Jacobsen W J & Todrys K (eds.) Papers from the parasession on the lexicon. Chicago: Chicago Linguistic Society. 336–346. Stassen L (1985). Comparison and universal grammar. Oxford: Basil Blackwell. von Stechow A (1984a). ‘Comparing semantic theories of comparison.’ Journal of Semantics 3, 1–77. von Stechow A (1984b). ‘My reply to Cresswell’s, Hellan’s, Hoeksema’s and Seuren’s comments.’ Journal of Semantics 3, 183–199.

Comparative Constructions L Stassen, Radboud University, Nijmegen, The Netherlands © 2006 Elsevier Ltd. All rights reserved.

Definition of the Domain In semantic or cognitive terms, comparison can be defined as a mental act by which two objects are assigned a position on a predicative scale. Should this position be the same for both objects, then we have a case of the comparison of equality. If the positions on the scale are different, then we speak of the comparison of inequality. In both cases, however, the notion essentially involves three things: a predicative scale, which, in language, is usually encoded as a gradable predicate, and two objects. Although these objects can, in principle, be complex, the practice of typological linguistic research has been to restrict them to primary objects, which are typically encoded in the form of noun phrases. Thus, a comparative construction typically contains a predicate and two noun phrases, one of which is the object of comparison (the comparee NP), while the other functions as the ‘yard stick’ of the comparison (the standard NP). In short, prototypical instances of comparative constructions in the languages of the world

are sentences that are equivalent to the English sentences in (1), in which the noun phrase following the items as and than is the standard NP:

(1) English (Indo-European, Germanic)
(1a) John is as tall as Lucy
(1b) John is taller than Lucy

The Comparison of Inequality: Parameters Modern literature on the typology of comparison has concentrated largely on the comparison of inequality. Relevant publications include Ultan (1972), Andersen (1983), and Stassen (1984, 1985). The last of these authors presents a typology of comparative constructions that is based on a sample of 110 languages and that boils down to four major types. A basic parameter in this typology is the encoding of the standard NP. First, one can make a distinction between instances of fixed-case comparatives and derived-case comparatives. In the former type, the standard NP is always in the same case, regardless of the case of the comparee NP. In the latter type, the standard NP derives its case assignment from the case of the comparee NP. Classical Latin is an example of a language in which both types were allowed.


The sentences in (2) illustrate a construction type in which the standard NP is dependent on the comparee NP for its case marking. In contrast, sentence (3) shows a construction type in which the standard NP is invariably in the ablative case. As a result, sentence (3) is ambiguous between the readings of (2a) and (2b).

(2) Latin (Indo-European, Italic)
(2a) Brutum ego non minus amo quam Caesar
     B.-ACC 1SG.NOM not less love.1SG.PRES than C.-NOM
     ‘I love Brutus no less than Caesar (loves Brutus)’
     (Kühner and Stegmann, 1955: 466)
(2b) Brutum ego non minus amo quam Caesarem
     B.-ACC 1SG.NOM not less love.1SG.PRES than C.-ACC
     ‘I love Brutus no less than (I love) Caesar’
     (Kühner and Stegmann, 1955: 466)

(3) Latin (Indo-European, Italic)
    Brutum ego non minus amo Caesare
    B.-ACC 1SG.NOM not less love.1SG.PRES C.-ABL
    (Kühner and Stegmann, 1955: 466)

Both types of comparative constructions can be subcategorized further, on the basis of additional parameters. Within the fixed-case comparatives, a first distinction is that between direct-object comparatives and locational comparatives. Direct-object comparatives (or, as Stassen [1985] calls them, Exceed-Comparatives) have as their characteristic that the standard NP is constructed as the direct object of a transitive verb with the meaning ‘to exceed’ or ‘to surpass.’ Thus, the construction typically includes two predicates, one of which is the comparative predicate, and another which is the ‘exceed’ verb. The comparee NP is the subject of the ‘exceed’ verb. Concentrations of the Exceed comparative are found in Sub-Saharan Africa, in China and Southeast Asia, and in Eastern Austronesia. Duala, a Bantu language from Cameroon, presents an instance of the Exceed comparative, as does Thai.

(4) Duala (Niger-Kordofanian, Northwest Bantu)
    nin ndabo e kolo buka nine
    this house it big exceed that
    ‘This house is bigger than that’
    (Ittmann, 1939: 187)

(5) Thai (Austro-Asiatic, Kam-Tai)
    kǎw sǔuN kwàa kon túk kon
    he tall exceed man each man
    ‘He is taller than anyone’
    (Warotamasikkhadit, 1972: 71)

Locational comparatives, on the other hand, are characterized by the fact that the standard NP is invariably constructed in a case form that has a locational/adverbial function. Depending on the exact nature of this function, adverbial comparatives can be divided into three further subtypes. Separative comparatives mark the standard NP as the source of a movement, with a marker meaning ‘from’ or ‘out of.’ Allative comparatives construct the standard NP as the goal of a movement (‘to, toward,’ ‘over, beyond’) or as a benefactive (‘for’). Finally, locative comparatives encode the standard NP as a location, in which an object is at rest (‘in,’ ‘on,’ ‘at,’ ‘upon’). Concentrations of (the various subtypes of) the Locational Comparative are found in Africa above the Sahara, in Eurasia (including the Middle East and India, but with the exception of the modern languages of Continental Europe), Eskimo, some Western North American languages, Mayan, Quechuan, Carib, Polynesian, and some (but not many) Australian and Papuan languages. Illustrations of the various subtypes of locational comparatives are:

(6) Mundari (Austro-Asiatic, Munda)
    sadom-ete hati mananga-i
    horse-from elephant big-3SG.PRES
    ‘The elephant is bigger than the horse’
    (Hoffmann, 1903: 110)

(7) Estonian (Uralic, Balto-Finnic)
    kevad on sügis-est ilusam
    spring is fall-from more.beautiful
    ‘The spring is more beautiful than the fall’
    (Oinas, 1966: 140)

(8) Maasai (Nilo-Saharan, Nilotic)
    sapuk olkondi to lkibulekeny
    big hartebeest to waterbuck
    ‘The hartebeest is bigger than the waterbuck’
    (Tucker and Mpaayi, 1955: 93)

(9) Tamachek’ (Afro-Asiatic, Berber)
    kemmou tehousid foull oult ma m
    you pretty.2SG.FEM upon sister of you
    ‘You are prettier than your sister’
    (Hanoteau, 1896: 52)

(10) Tubu (Nilo-Saharan, Saharan)
     sa-umma gere du mado
     eye-his blood on red
     ‘His eye is redder than blood’
     (Lukas, 1953: 45)

Turning now to the derived-case comparatives, in which the case marking of the standard NP is derived from – or ‘parasitic on’ – the case marking of the comparee NP, we note that, again, two subtypes can be distinguished. First, there is the conjoined


comparative. Here, the comparative construction consists of two structurally independent clauses, one of which contains the comparee NP, while the other contains the standard NP. Furthermore, the two clauses show a structural parallelism, in that the grammatical function of the comparee NP in one of the clauses is duplicated by the grammatical function of the standard NP in the other clause. If, for example, the comparee functions as the grammatical subject in its clause, the standard NP will also have subject status in its clause. Since the construction has two clauses, it follows that the construction will also have two independent predicates. In other words, the comparative predicate is expressed twice. There are two ways in which this double expression may be effectuated. The language may employ antonymous predicates in the two clauses (‘good-bad,’ ‘strong-weak’). Alternatively, the two predicates may show a positive-negative polarity (‘good-not good,’ ‘strong-not strong’). An example of the first variant is found in Amele; the second variant has been attested for Menomini. Sentence (13) illustrates one of the comparative constructions in Malay. Here the standard NP and the comparee NP are conjoined as sentence topics, and the following clause predicates the property of the comparee NP only; that is, in this (rather infrequent) variant of the Conjoined Comparative, the comparative predicate is expressed only once. In geographical terms, the conjoined comparative seems to be concentrated in the Southern Pacific, including Australian, Papuan, and Eastern Austronesian languages, but it is also common in large parts of the Americas, and there are also some cases in Eastern Africa.

(11) Amele (Papuan, Madang)
     jo i ben, jo eu nag
     house this big, house that small
     ‘This house is bigger than that house’
     (Roberts, 1987: 135)

(12) Menomini (Algonquian)
     Tata’hkes-ew, nenah teh kan
     strong-3SG, I and not
     ‘He is stronger than me’
     (Bloomfield, 1962: 506)

(13) Malay (Austronesian, West Indonesian)
     kayu, batu, běrat batu
     wood, stone, heavy stone
     ‘Stone is heavier than wood’
     (Lewis, 1968: 157)

A second subtype of derived-case comparison is defined negatively, in that the standard NP has derived case, but the construction does not have the form of a coordination of clauses. Instead, the construction features a specific comparative particle

that accompanies the standard NP. With a few, mostly West-Indonesian exceptions, this particle comparative appears to be restricted to Europe. The English than comparative is a case in point. Other examples are the comparative construction in French, with its comparative particle que, and the comparative construction in Hungarian, which features the particle mint ‘than, like.’

(14) French (Indo-European, Romance)
     tu es plus jolie que ta soeur
     you are more pretty than your sister
     ‘You are prettier than your sister’
     (B. Bichakjian, personal communication)

(15) Hungarian (Uralic, Ugric)
     Istvan magasa-bb mint Peter
     I.NOM tall-more than P.NOM
     ‘Istvan is taller than Peter’
     (E. Moravcsik, personal communication)

In summary, the typology of comparison of inequality developed in Stassen (1984, 1985) can be presented as follows:

(16)
FIXED CASE
    direct object: EXCEED
    locational: SEPARATIVE, ALLATIVE, LOCATIVE
DERIVED CASE
    conjoined: CONJOINED
    nonconjoined: PARTICLE

Predicate Marking in Comparative Constructions Apart from, or in addition to, case assignment of the standard NP, a further possible parameter in the typology of comparative constructions might be considered to be the presence or absence of comparative marking on the predicate. In the vast majority of languages, such overt marking is absent; predicative adjectives in comparatives retain their unmarked, ‘positive’ form. Some languages, however, mark a predicative adjective in a comparative construction by means of a special affix (e.g., -er in English, German, and Dutch, -ior in Latin, -bb in Hungarian, -ago in Basque) or a special adverb (more in English, plus in French). Especially in the case of comparative affixes, the etymological origin is largely unknown. As for the areal distribution of this predicate marking in comparatives, it can be observed that it is an almost exclusively European phenomenon, and that it is particularly frequent in languages that have a particle comparative construction. For a tentative explanation of this latter correlation, see Stassen (1985, Chap. 15).


Explanation of the Typology of Comparative Constructions Stassen (1985) advances the claim that the typology of comparative constructions is derived from (and hence predicted by) the typology of temporal sequencing. That is, the type(s) of comparative construction that a language may employ is argued to be limited by the options that the language has in the encoding of (simultaneous or consecutive) sequences of events. A first indication in favor of this hypothesis is that at least one of the attested comparative types, viz., the conjoined comparative, has the overt form of a temporal sequence (in this case, a simultaneous coordination). Moreover, for most of the other comparative types a correlation with a possible encoding of some temporal sequence can be established as well. Stassen (1985) produces detailed evidence for the correctness of the following set of universals of comparative type choice (a schematic rendering of these implications follows the examples below):
a. If a language has an adverbial comparative, then that language allows deranking (i.e., nonfinite subordination) of one of the clauses in a temporal sequence, even when the two clauses in that sequence have different subjects.
b. If a language has an Exceed-Comparative, then that language allows deranking of one of the clauses in a temporal sequence only if the two clauses have identical subjects.
c. If a language has a conjoined comparative, then that language does not allow deranking of clauses in temporal sequences at all.
The parallelism between these various options in temporal sequence encoding and corresponding comparative types is illustrated by examples from Naga, Dagbane, and Kayapó:

(17) Naga (Sino-Tibetan, Tibeto-Burman)
(17a) A de kepu ki themma lu a vu-we
      I words speak on man that me strike-INDIC
      ‘As I spoke these words, that man struck me’
      (Grierson (ed.), 1903: 417)
(17b) Themma hau lu ki vi-we
      man this that on good-INDIC
      ‘This man is better than that man’
      (Grierson (ed.), 1903: 415)

(18) Dagbane (Niger-Kordofanian, Gur)
(18a) Nana san-la o-suli n-dum nira
      scorpion take-HAB his-tail PREF-sting people
      ‘The scorpion stings people with its tail’
      (Fisch, 1912: 32)

(18b) O-make dpeoo n-gare-ma
      he-has strength PREF-exceed-me
      ‘He is stronger than me’
      (Fisch, 1912: 20)

(19) Kayapó (Ge)
(19a) Ga-ja nium-no
      2SG-stand 3SG-lie down
      ‘You are standing, and/while he is lying down’
      (Maria, 1914: 238)
(19b) Gan ga-prik ba i-pri
      2SG 2SG-big 1SG 1SG-small
      ‘You are bigger than me’
      (Maria, 1914: 237)
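Read as one-directional implications, universals a–c can be turned around to say which comparative types a given temporal-sequencing profile leaves available. The sketch below is an editorial illustration, not Stassen’s own formalization; the three profile labels are simplifications of the text, and the particle comparative, the recalcitrant case discussed below, is deliberately left out.

    # A schematic rendering of universals a-c as contrapositives: each
    # deranking profile rules out the comparative types whose implication
    # it would violate. Illustration only; labels simplified from the
    # text, and particle comparatives are omitted.
    ADVERBIAL = {"separative", "allative", "locative"}

    def permitted_types(profile):
        """Map a temporal-sequencing profile to the comparative types
        that universals a-c leave available."""
        if profile == "deranks, different subjects allowed":
            return ADVERBIAL | {"exceed"}   # (c) excludes conjoined
        if profile == "deranks, identical subjects only":
            return {"exceed"}               # (a) excludes adverbial
        if profile == "no deranking":
            return {"conjoined"}            # (a) and (b) exclude the rest
        raise ValueError(profile)

    # Exceed languages such as Duala and Thai should allow (at least)
    # same-subject deranking:
    print("exceed" in permitted_types("deranks, identical subjects only"))  # True
    # A strictly coordinating language is predicted not to have a
    # locational comparative:
    print("locative" in permitted_types("no deranking"))                    # False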

Given that the universals listed above meet with very few and ‘incidental’ counterexamples, Stassen (1985) concludes that the typology of comparative constructions is modeled on the typology of temporal sequencing, so that, in effect, comparative constructions appear to be a special case of the encoding of temporal sequences. A residual problem for this modeling analysis of comparative types is presented by the particle comparatives. Like conjoined comparatives, particle comparatives form a case of derived-case comparison, but unlike conjoined comparatives their surface structure form is not that of a coordination. Nonetheless, there are indications that even particle comparatives are coordinate in origin. In a number of cases, the particle used in particle comparatives has a clear source in a coordinating conjunction or adverb (e.g., karo ‘than/but’ in Javanese, dan ‘than/then’ in Dutch, baino ‘than/but’ in Basque, asa ‘than/then’ in Toba Batak, noria ‘than/after that’ in Goajiro, ngem ‘than/but’ in Ilocano, na ‘than/nor’ in Scottish Gaelic, nor ‘than/nor’ in Scottish English, ē ‘than/or’ in Classical Greek). Furthermore, particle comparatives in at least some languages share a number of syntactic properties with coordinations. For example, the Dutch comparative allows Gapping, a rule which is commonly thought to be restricted to coordinate structures.

(20) Dutch (Indo-European, Germanic)
(20a) Ik verzamel boeken en mijn broer verzamelt platen
      I collect books and my brother collects records
      ‘I collect books and my brother collects records’
      (own data)
(20b) Ik verzamel boeken en mijn broer Ø platen
      (own data)

(21) Dutch (Indo-European, Germanic)
(21a) Ik koop meer boeken dan mijn broer platen koopt
      I buy more books than my brother records buys
      ‘I buy more books than my brother buys records’
      (own data)
(21b) Ik koop meer boeken dan mijn broer platen Ø
      (own data)

One might argue, then, that particle comparatives must be seen as grammaticalizations from an underlying sequential construction. In this way, the particle comparative does not have to present a counterexample to the modeling analysis of comparative constructions, although it certainly forms a recalcitrant case.

See also: Antonymy and Incompatibility; Comparatives.

Bibliography Andersen P K (1983). Word order typology and comparative constructions. Amsterdam: Benjamins. Bloomfield L (1962). The Menomini language. New Haven: Yale University Press. Fisch R (1912). Grammatik der Dagomba-Sprache. Berlin: Reimer. Grierson G A (1903). Linguistic survey of India, vol. III: Tibeto-Burman family, part II: Specimens of the Bodo, Naga and Kachin groups. Calcutta: Government Printing Office. Hanoteau A (1896). Essai de grammaire de la langue tamachek’. Algiers: Jourdan. Hoffmann J (1903). Mundari grammar. Calcutta: Bengal Secretariat Press. Ittmann J (1939). Grammatik des Duala. Berlin: Reimer. Kühner R & Stegmann C (1955). Ausführliche Grammatik der lateinischen Sprache: Satzlehre. Leverkusen: Gottschalk. Lewis M B (1968). Malay. London: St. Paul’s House. Lukas J (1953). Die Sprache der Tubu in der zentralen Sahara. Berlin: Akademie-Verlag. Maria P A (1914). ‘Essai de grammaire Kaiapó, langue des Indiens Kaiapó, Brésil.’ Anthropos 9, 233–240. Oinas F J (1966). Basic course in Estonian. Bloomington: Indiana University. Roberts John R (1987). Amele. London: Croom Helm. Stassen L (1984). ‘The comparative compared.’ Journal of Semantics 3, 143–182. Stassen L (1985). Comparison and universal grammar. Oxford: Blackwell. Tucker A N & Tompo ole Mpaayi J (1955). A Maasai grammar with vocabulary. London: Longmans, Green. Ultan R (1972). ‘Some features of basic comparative constructions.’ Working Papers On Language Universals (Stanford) 9, 117–162. Warotamasikkhadhit U (1972). Thai syntax. The Hague: Mouton.

Componential Analysis D Geeraerts, University of Leuven, Leuven, Belgium © 2006 Elsevier Ltd. All rights reserved.

Componential Analysis Componential analysis is an approach that describes word meanings as a combination of elementary meaning components called semantic features or semantic components. The set of basic features is supposed to be finite. These basic features are primitive in the sense that they are the undefined building blocks of lexical-semantic definitions. Hence, the term ‘semantic primitives’ (or sometimes ‘atomic predicates’) is used to refer to the basic features. The advantage of having definitional elements that themselves remain undefined resides in the possibility of avoiding circularity: if the definitional language and the defined language are identical, words are ultimately defined in terms of themselves – in which case the explanatory value of definitions seems to wholly disappear. More particularly, definitional circularity would seem to imply that it is impossible to

step outside the realm of language and to explain how language is related to the world. This motivation for having undefined primitive elements imposes an important restriction on the set of primitive features. In fact, if achieving noncircularity is the point, the set of primitives should be smaller than the set of words to be defined: there is no reductive or explanatory value in a set of undefined defining elements that is as large as the set of concepts to be defined. Furthermore, the idea was put forward that the restricted set of primitive features might be universal, just like in phonology. This universality is not, however, a necessary consequence of the primitive nature of features: the definitional set of features could well be language specific.

The European Tradition of Componential Analysis

Componential analysis was developed in the second half of the 1950s and the beginning of the 1960s by European and American linguists, at least to some


extent independently of each other. Although the first step in the direction of componential analysis can be found in the work of Louis Hjelmslev (Hjelmslev, 1953), its full development does not emerge in Europe before the early 1960s, in the work of Bernard Pottier (Pottier, 1964; Pottier, 1965), Eugenio Coseriu (Coseriu, 1964; Coseriu, 1967) and Algirdas Greimas (Greimas, 1966). The fundamental idea behind these studies is that the items in a lexical field are mutually distinguished by functional oppositions. In this sense, componential analysis grew out of a desire to provide a systematic analysis of the semantic relations within a lexical field. Methodologically speaking, componential analysis has a double background. First, it links up with the traditional lexicographical practice of defining concepts in an analytical way, by splitting them up into more basic concepts; thus, a definition of ram as ‘male sheep’ uses the differentiating feature ‘male’ to distinguish the term ram from other items in the field of words referring to sheep. In the Aristotelian and Thomistic tradition, this manner of defining is known as a definition per genus proximum et differentias specificas, i.e., (roughly) ‘by stating the superordinate class to which something belongs, together with the specific characteristics that differentiate it from the other members of the class.’ Second, the background of the componential idea can be traced to structural phonology, where the sound inventory of natural languages had been successfully described by means of a restricted number of oppositions. On the basis of this phonological model, the structuralist semanticians set out to look for functional oppositions within a lexical field, oppositions that are represented, as in phonology, by means of a binary plus/minus notation. Pottier (1964) provides an example in his analysis of a field consisting (among others) of the terms pouf, tabouret, chaise, fauteuil, and canapé; the term that delimits the field as a superordinate term is siège, ‘sitting equipment with legs.’ These five words can be contrasted mutually by means of distinctive oppositions. Consider the following set:

s1 ‘for sitting’
s2 ‘with legs’
s3 ‘with back’
s4 ‘for a single person’
s5 ‘with arms’
s6 ‘made from hard material’

We can then define the items in the field:

S1 chaise    +s1, +s2, +s3, +s4, -s5, +s6
S2 fauteuil  +s1, +s2, +s3, +s4, +s5, +s6
S3 tabouret  +s1, +s2, -s3, +s4, -s5, +s6
S4 canapé    +s1, +s2, +s3, -s4, +s5, +s6
S5 pouf      +s1, +s2, -s3, +s4, -s5, -s6
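Pottier’s oppositional analysis lends itself to a direct computational rendering. The following Python fragment is our own illustrative sketch, not part of the structuralist apparatus itself: the encoding of sèmes as Boolean features and the function names are invented for the example.

    # A minimal sketch of Pottier's field analysis: each lexeme is mapped
    # to its sememe, a dict from semes s1..s6 to +/- values (True/False).
    FIELD = {
        "chaise":   {"s1": True, "s2": True, "s3": True,  "s4": True,  "s5": False, "s6": True},
        "fauteuil": {"s1": True, "s2": True, "s3": True,  "s4": True,  "s5": True,  "s6": True},
        "tabouret": {"s1": True, "s2": True, "s3": False, "s4": True,  "s5": False, "s6": True},
        "canapé":   {"s1": True, "s2": True, "s3": True,  "s4": False, "s5": True,  "s6": True},
        "pouf":     {"s1": True, "s2": True, "s3": False, "s4": True,  "s5": False, "s6": False},
    }

    def distinctive_oppositions(word1, word2):
        """Return the semes on which two lexemes differ."""
        a, b = FIELD[word1], FIELD[word2]
        return [s for s in a if a[s] != b[s]]

    def shared_semes():
        """Semes with the same value across the whole field."""
        sememes = list(FIELD.values())
        return [s for s in sememes[0] if all(v[s] == sememes[0][s] for v in sememes)]

    print(distinctive_oppositions("chaise", "fauteuil"))  # ['s5']
    print(shared_semes())                                 # ['s1', 's2']

Run as given, the sketch reports that chaise and fauteuil are distinguished by s5 (‘with arms’) alone, and that s1 and s2 are common to the entire field; those two features make up the meaning of the superordinate term siège, the archisémème discussed below.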

The work of the structuralist semanticians of the European school tends to be rich in terminological distinctions, and this is also the case in Pottier’s work. The values of the oppositional dimensions (s1, s2, etc.) are called sèmes, and the meaning of a lexème (lexical item) is a sémème (S1, S2, etc.). Siège, then, is the archilexème, and the meaning of this archilexème (in this case, features s1 and s2) is the archisémème. The archisémème is present in the sémèmes of any of the separate lexèmes in the field. This is not yet the whole story, since fonctèmes (relevant for the description of grammatical meaning aspects, such as word class) and classèmes (sèmes that recur throughout the entire vocabulary) should also be taken into account. This terminological abundance has, however, hardly found its way into the customary semantic vocabulary (although the English counterparts of the French terms, such as ‘sememe’ and ‘seme,’ may occasionally be met with). This illustrates the fact that, as mentioned before, the European branch of componential analysis has remained more or less isolated. Specifically, it has not played an important role in the developments that grew out of the American branch, such as the incorporation of componential analysis into generative grammar. Besides the ones mentioned above, other names that are of importance within the European tradition are those of Horst Geckeler (Geckeler, 1971), who specifically continues the lines set out by Coseriu, Klaus Heger (Heger, 1964), Kurt Baldinger (Baldinger, 1980), and Leonhard Lipka (Lipka, 2002). Through the work of Greimas, European structuralist semantics has had a considerable impact outside linguistics, especially in literary studies.

The American Tradition of Componential Analysis

In America, the componential method emerged from anthropological linguistic studies. In a rudimentary way, this is the case with Conklin (1955), whereas a thorough empirical, formal, and theoretical elaboration is provided by Goodenough (1956) and especially Lounsbury (1956). The major breakthrough of componential analysis did not, however, occur until the appearance of Jerrold J. Katz and Jerry A. Fodor’s seminal article ‘The structure of a semantic theory’ (Katz and Fodor, 1963). It was Katz in particular who extended and defended the theory afterward; see especially Katz (1972).


Figure 1 Componential analysis of bachelor (after Katz and Fodor, 1963).

Rather than analyzing a lexical field, Katz and Fodor gave an example of the way in which the meanings of a single word, when analyzed componentially, can be represented as part of a formalized dictionary. Such a formalized dictionary (to distinguish it from ordinary dictionaries, it is sometimes referred to by the term ‘lexicon’) would then be part of a formal grammar. What the entry for the English word bachelor would look like is demonstrated in Figure 1. Next to word form and word class, two kinds of semantic components can be found in the diagram: markers and distinguishers (indicated with parentheses and square brackets respectively). Markers constitute what is called the systematic part of the meaning of an item. Like Pottier’s classèmes, they recur throughout the lexicon. Specifically, they are supposed to represent those features in terms of which selection restrictions (semantic restrictions on the combinatory possibilities of words) are formulated. Distinguishers represent what is idiosyncratic rather than systematic about the meaning of an item; they only appear on the lowest level of the formalized representation. The Katzian approach has had to endure heavy attacks (among others from Bolinger, 1965; Weinreich, 1966; and Bierwisch, 1969), and Katz’s views gradually moved to the background of the ongoing discussions. The Katzian distinction between markers and distinguishers, for instance, was generally found not to be well established, and was consequently abandoned. Conversely, various other distinctions between types of features were proposed, two kinds of which may be mentioned separately. To begin with, binary features of the plus/minus type were supplemented with nonbinary features, which represent cases where the distinctive dimension can have more than two values. Leech (1974), for instance, suggested a distinctive dimension ‘metal’ with multiple values, in order to distinguish between gold, copper, iron,

mercury, and so on. Further, a distinction between elementary and complex features was drawn to stress the fact that a concept with distinctive value in one lexical field might itself have to be subjected to further decomposition, until the ultimate level of basic features was reached. Other developments triggered by the Katzian approach included attempts to combine componential analysis with other forms of semantic analysis, e.g., with lexical field theory (Lehrer, 1974; Lutzeier, 1981). One should bear in mind that suggestions such as those enumerated here, although leading away from the original Katzian model, were by and large situated within the very framework that was designed by Katz and Fodor, i.e., that of a formalized componential meaning representation as part of a formal grammar.

The Contemporary Situation

Basically, the contemporary attitude of linguists towards componential analysis takes one of three forms: componential analysis may be used as a descriptive formalism, as an epistemological necessity, or as a heuristic instrument. To begin with, there are various approaches in formal grammar that use some form of semantic decomposition as a descriptive device: see for instance Dowty (1979) and Pustejovsky (1995), which incorporated ideas from componential analysis in the framework of logical semantics. With the exception of researchers such as Ray Jackendoff (Jackendoff, 2002), who engages actively with cognitive psychology, the approaches mentioned here tend to pay minimal attention to the methodological question of how to establish the basic, primitive nature of semantic features. If the original Katzian approach combines the idea of primitiveness with the idea of formalization, most of the approaches in this first contemporary group stress the formalization aspect more than the systematic quest for primitives. The converse is the case in Anna Wierzbicka’s natural semantic metalanguage approach (see Natural Semantic Metalanguage), which is not much interested in formalization of lexical and grammatical analyses, but which systematically tries to establish the basic set of primitive concepts. Third, at the other extreme, cognitive semantics and related approaches within contemporary semantics question the componential approach itself: what is the justification for assuming that lexical meanings are to be represented in a fragmented way, as a collection of more basic semantic elements? The antidecompositional reasoning takes many forms (see Fillmore,


1975, for one of the most influential statements), but one of the basic arguments is the following. The appeal of noncircular definitions seemed to be that they could explain how the gap between linguistic meaning and extralinguistic reality is bridged: if determining whether a concept A applies to thing B entails checking whether the features that make up the definition of A apply to B as an extralinguistic entity, then words are related to the world through the intermediary of primitive features. But obviously, this does not explain how the basic features themselves bridge the gap. More generally, the ‘referential connection’ problem for words remains unsolved as long as it is not solved for the primitives. And conversely, if the ‘referential connection’ problem could be solved for primitive features, the same solution might very well be applicable to words as a whole. So, if noncircularity does not solve the referential problem as such, decomposition is not a priori to be preferred over nondecompositional approaches, and psychological evidence for one or the other can be taken into account (see Aitchison, 2003, for an overview of the psychological issues). However, even within those approaches that do not consider semantic decomposition to be epistemologically indispensable, componential analysis may be used as a heuristic device. For instance, in Geeraerts et al. (1994), a work that is firmly situated within the tradition of cognitive semantics, the internal prototypical structure of lexical categories is analyzed on the basis of a componential analysis of the referents of the words in question. It would seem, in other words, that there is widespread agreement in linguistics about the usefulness of componential analysis as a descriptive and heuristic tool, but the associated epistemological view that there is a primitive set of basic features is generally treated with much more caution.

See also: Category-Specific Knowledge; Classifiers and Noun Classes; Cognitive Semantics; Compositionality; Concepts; Definition in Lexicology; Dictionaries and Encyclopedias: Relationship; Evolution of Semantics; False Friends; Hyponymy and Hyperonymy; Idioms; Lexical Fields; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Lexicon: Structure; Natural Semantic Metalanguage; Neologisms; Semantic Primitives; Synonymy; WordNet(s).

Bibliography

Aitchison J (2003). Words in the mind: an introduction to the mental lexicon (3rd edn.). Oxford: Blackwell.
Baldinger K (1980). Semantic theory. Oxford: Blackwell.
Bierwisch M (1969). ‘On certain problems of semantic representations.’ Foundations of Language 5, 153–184.
Bolinger D (1965). ‘The atomization of meaning.’ Language 41, 555–573.
Conklin H (1955). ‘Hanunóo color categories.’ Southwestern Journal of Anthropology 11, 339–344.
Coseriu E (1964). ‘Pour une sémantique diachronique structurale.’ Travaux de Linguistique et de Littérature 2, 139–186.
Coseriu E (1967). ‘Lexikalische Solidaritäten.’ Poetica 1, 293–303.
Dowty D (1979). Word meaning and Montague grammar. Dordrecht: Reidel.
Fillmore C (1975). ‘An alternative to checklist theories of meaning.’ In Cogen C, Thompson H & Wright J (eds.) Proceedings of the First Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA: Berkeley Linguistics Society. 123–131.
Geckeler H (1971). Zur Wortfelddiskussion. Munich: Fink.
Geeraerts D, Grondelaers S & Bakema P (1994). The structure of lexical variation. Berlin: Mouton de Gruyter.
Goodenough W (1956). ‘Componential analysis and the study of meaning.’ Language 32, 195–216.
Greimas A (1966). Sémantique structurale. Paris: Larousse.
Heger K (1964). Monem, Wort, Satz und Text. Tübingen: Niemeyer.
Hjelmslev L (1953). Prolegomena to a theory of language. Bloomington: Indiana University Press.
Jackendoff R (2002). Foundations of language. Oxford: Oxford University Press.
Katz J J (1972). Semantic theory. New York: Harper and Row.
Katz J J & Fodor J A (1963). ‘The structure of a semantic theory.’ Language 39, 170–210.
Leech G (1974). Semantics. Harmondsworth, England: Penguin.
Lehrer A (1974). Lexical fields and semantic structure. Amsterdam: North Holland.
Lipka L (2002). English lexicology. Tübingen: Niemeyer.
Lounsbury F (1956). ‘A semantic analysis of Pawnee kinship usage.’ Language 32, 158–194.
Lutzeier P (1981). Wort und Feld. Tübingen: Niemeyer.
Pottier B (1964). ‘Vers une sémantique moderne.’ Travaux de Linguistique et de Littérature 2, 107–137.
Pottier B (1965). ‘La définition sémantique dans les dictionnaires.’ Travaux de Linguistique et de Littérature 3, 33–39.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Weinreich U (1966). ‘Explorations in semantic theory.’ In Sebeok T A (ed.) Current Trends in Linguistics 3. The Hague: Mouton. 395–477.


Compositionality
G Sandu and P Salo, University of Helsinki, Helsinki, Finland
© 2006 Elsevier Ltd. All rights reserved.

According to the principle of compositionality, the meaning of a complex expression depends only on the meanings of its constituents and on the way these constituents have been put together. The kind of dependence involved here is usually a functional one:

Principle of Compositionality (PC): The meaning of a complex expression is a function of the meanings of its constituents and of the rule by which they were combined.

PC is rather vague unless one specifies the meanings of ‘is a function of’ and ‘meaning(s)’, something that is easier said than done. A more rigorous formulation of these notions is possible for formal languages and is due to Richard Montague. Montague (1974) defined compositionality as the requirement of the existence of a homomorphism between syntax and semantics, both to be understood as ‘structures’ in the mathematical sense. To keep technicalities down to a minimum, Montague’s requirement of a compositional interpretation was that for each syntactic operation ‘Oi’ that applies to n expressions ‘e1’, . . ., ‘en’ in order to form the complex expression ‘Oi(e1, . . ., en)’, the interpretation of the complex expression ‘Oi(e1, . . ., en)’ is the result of the application of the semantic operation ‘Ci’, which is the interpretation of ‘Oi’, to the interpretations m1, . . ., mn of ‘e1’, . . ., ‘en’, respectively. In other words, the interpretation of ‘Oi(e1, . . ., en)’ is Ci(m1, . . ., mn).

An immediate consequence of PC is the ‘Substitutivity Condition’: substituting a constituent with its synonym in a given expression does not change the meaning of the resulting expression. Thus, PC is violated if a complex expression has meaning but some of its component expressions do not (the Domain Condition) or if the Substitutivity Condition fails. As one can see, PC is by itself rather weak, and so it comes as no surprise that in the case of formal languages, one can always devise a trivial compositional interpretation by assigning some arbitrary entities to the primitive expressions of the language and then associating arbitrarily the syntactic operations of the language with corresponding operations on the domain of those entities. This way of implementing the principle can hardly be of any interest, although it has led some philosophers and logicians to claim that PC is methodologically empty.
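In symbols (merely restating the condition above, with m for the function assigning meanings to expressions), Montague’s homomorphism requirement can be written as a single equation:

    m(Oi(e1, . . ., en)) = Ci(m(e1), . . ., m(en))

Nothing here goes beyond the prose formulation; the equation simply makes visible that m maps the syntactic algebra homomorphically onto the semantic one.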

A slightly more interesting case is the one in which one has an intended semantic interpretation in mind, that is, an interpretation with an intended domain of entities for the primitive expressions of the language to be mapped into, and a class of intended operations to serve as the appropriate interpretations of the syntactic operations of the language. A case in point is Horwich’s (1998) interpretation. His formal language was intended to serve as a regimentation for a fragment of English that contains proper names (‘John,’ ‘Peter,’ etc.), common nouns (‘dogs,’ ‘cows,’ etc.), and verb phrases (‘talks,’ ‘walks,’ ‘barks,’ etc.) as primitive expressions together with grammatical operations on them. For simplicity, let us assume predication is such a grammatical operation, marked in this case by an empty space. Thus the syntax contains clauses of the form:

If ‘n’ is a proper name and ‘v’ is a verb phrase, then ‘n v’ is a complex expression.

The intended semantic interpretation consists of a domain of entities that serve as the intended meanings of the proper names and verb phrases (whatever they are; they are marked by capitals), together with an operation – say, P – that interprets the grammatical operation of predication (whatever that is). The only thing one needs to worry about in this case is to see to it that the operation of predication is defined for the entities mentioned above. The relevant semantic clauses now have this form:

The interpretation of ‘n v’ is the result of the application of P to the entities assigned to ‘n’ and ‘v’, respectively.
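The toy fragment just described can be mimicked in a few lines of code. The following Python fragment is our own minimal sketch of such a trivially compositional interpretation; the stand-in meaning entities and the particular choice of P are invented for the illustration and are not Horwich’s:

    # A toy compositional interpretation for the fragment described above.
    # Meanings of primitives are arbitrary stand-in entities (here, strings);
    # P interprets the syntactic operation of predication.
    LEXICON = {
        "John": "JOHN", "Peter": "PETER",    # proper names
        "talks": "TALKS", "walks": "WALKS",  # verb phrases
    }

    def P(name_meaning, verb_meaning):
        """Stand-in for the predication operation on meaning entities."""
        return ("PRED", name_meaning, verb_meaning)

    def interpret(expression):
        """Homomorphic clause: the meaning of 'n v' is P applied to
        the meanings assigned to 'n' and 'v', respectively."""
        n, v = expression.split()
        return P(LEXICON[n], LEXICON[v])

    print(interpret("John talks"))  # ('PRED', 'JOHN', 'TALKS')

Compositionality holds trivially here: the meaning of every complex expression is defined as the value of a fixed operation on the meanings of its parts, and nothing constrains what those meanings are.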

Thus, the interpretation of the sentence ‘John talks’ is the result of the application of P to JOHN and TALKS. This interpretation is trivially compositional in that the interpretation of every compound ‘n v’ has been defined as the result of the application of the operation assigned to the syntactic operation of concatenation to the interpretations of ‘n’ and ‘v’, respectively.

The more challenging cases for PC are those in which one has an intended interpretation for the complex expressions and would like to find a compositional interpretation that agrees with it. In contrast to the previous case, the meanings of the complex entities are no longer defined but are given at the outset. We have here a typical combination of PC with the Context Principle (CP): An expression has a meaning only in the context in which it occurs. The combination was largely explored in the work of Gottlob Frege and in Donald Davidson’s theory of meaning, which assumed the form of a theory of truth. Davidson took whole sentences to be the meaning-carrying units in language, and truth to be


a primitive, undefinable semantic property that is best understood. Truth being undefinable, the strategy applied above, which ensured a trivial implementation of PC, is no longer available. Instead, PC acquires the status of a methodological constraint on an empirical theory of truth for the target language: the division of a sentence into parts and their association with appropriate semantic entities in a compositional theory becomes a theoretical business that has no other role except to show how they contribute to the computation of the truth of the sentences of the target language in which they occur.

The literature on formal semantics for natural language has plenty of cases of the application of the Context Principle. We consider just two examples. In game-theoretical semantics (GTS), one starts with a standard first-order language and defines truth only for the sentences of that language. The truth of every such sentence (in prenex normal form) is defined via a second-order sentence, known as its Skolem form (for instance, the Skolem form of ∀x∃y φ(x, y) is ∃f∀x φ(x, f(x))). This interpretation is clearly not compositional, since it violates the Domain Condition. One can now ask whether there is a compositional interpretation that agrees with the given game-theoretical interpretation of sentences. It is known that the answer is positive, but only assuming certain nontrivial mathematical principles (the Axiom of Choice). The second example concerns Dynamic Predicate Logic. The starting point is the same language as in GTS – that is, a standard first-order language – but we now want a compositional interpretation in which, e.g., an existential quantifier occurring in the antecedent of a conditional binds a free variable occurring in the consequent of the conditional and in addition has the force of a universal quantifier (as in the well-known If a farmer owns a donkey, he beats it). There is a compositional interpretation that has the required property, that of Dynamic Predicate Logic (Groenendijk and Stokhof, 1991).

From a technical point of view, the situation described in the two examples may be depicted as an extension problem (Hodges, 1998). One starts with an intended interpretation I, which either (a) fixes only the interpretation of certain complex expressions (e.g., sentences) or (b) puts some relational constraints on the interpretation of complex expressions. One then wants to find a compositional interpretation I′ that agrees with the independently understood interpretation I. Hodges’s Extension Theorem solves case (a). It shows that any partial interpretation for a grammar can be extended to a total compositional interpretation. This shows that the combination of PC with CP (in its form [a]) is trivially satisfiable. The more interesting cases are those falling under (b). This is the situation that typically arises in the case of empirical linguistics where the intended

interpretation is supposed to be motivated by empirical argument. As an illustration, consider the much-discussed ‘pet fish’ problem. There is some empirical evidence to the effect that the meanings of concept words are prototypes. A prototype is either a good exemplar of the category or a statistical average of all or some instances of the category (Smith and Medin, 1981). A given instance x is then categorized as X if x resembles the prototype of X more than any other prototype. Given two expressions X (e.g., ‘pet’) and Y (‘fish’), one asks whether there is an interpretation that assigns to the complex concept word XY (‘pet fish’) a prototype that is the composition of the prototype assigned to X and the prototype assigned to Y. One also wants the meaning function to satisfy certain basic properties that are required for explanatory purposes; e.g., it should be the case that if x is XY, it must also be X and Y. We thus want every x to resemble the prototype of XY no less than it resembles the prototypes of X and Y. It has been argued that there is no such interpretation, that is, there is no operation of composition that yields a prototype as the interpretation of XY with the desired properties when applied to the two prototypes that are the interpretations of X and Y respectively (Fodor, 1998; Osherson and Smith, 1981). The moral to be drawn from all this should have been somehow anticipated from our discussion of formal languages. When the intended interpretation puts constraints only on the meanings of primitive expressions and on the operations governing them, PC follows rather trivially, provided the semantic entities of complex expressions are not constrained in any way. When the intended interpretation concerns only the meanings of complex expressions, Hodges’s extension theorem shows that a compositional semantics can still be found, at least in some cases, provided that one does not constrain the meanings of the primitive expressions or syntactical operations on them. In natural language, however, the situation is hardly so simple, as one meets constraints at every level. It is no wonder, then, that Fodor and Lepore (2002) argued that most theories of concepts or mental architecture in cognitive science are in contradiction with PC. The case of prototype semantics was only one example, but the same considerations apply to the theory that the meaning of a word is its use or the criteria for its application, etc. PC is often defended as the best explanation of the empirical phenomenon of systematicity: Any competent speaker of a given language who has in his repertoire the complex expressions P, R, and Q has also in his repertoire the complex expressions in which P, R, and Q are permuted (provided they are grammatical). For instance, anybody who understands the sentence


‘Mary loves John’ also understands the sentence ‘John loves Mary’. Fodor and his collaborators argued extensively that PC is the best explanation of the systematicity of language, but this is an issue that will not be tackled here (cf. Fodor and Pylyshyn, 1988; Fodor, 2001; Fodor and Lepore, 2002; Fodor, 2003; Aizawa, 2002). PC should not be confused with the principles of productivity or generativity of language, which require that the expressions of a language be generated from a finite set of basic expressions and syntactical rules. Although it presupposes that the language under interpretation has a certain syntactic structure, PC does not take a stand on how that structure should be specified (phrase structure rules, derivational histories, etc.), as long as it is given a compositional interpretation. See also: Cohesion and Coherence; Context; Context Principle; Default Semantics; Discourse Representation Theory; Dynamic Semantics; Formal Semantics; Gametheoretical Semantics; Interpreted Logical Forms; Operators in Semantics and Typed Logics; Philosophical Theories of Meaning; Propositional Attitude Ascription; Propositional Attitudes; Prototype Semantics; Selectional Restrictions; Stereotype Semantics.

Bibliography

Aizawa K (2002). The systematicity argument. Amsterdam: Kluwer.
Bloom P (1994). ‘Generativity within language and other domains.’ Cognition 51(2), 177–189.
Chomsky N (1957). Syntactic structures. The Hague: Mouton.
Fodor J A (1998). Concepts: where cognitive science went wrong. Oxford: Clarendon Press.
Fodor J A (2001). ‘Language, thought and compositionality.’ Mind and Language 16(1), 1–15.
Fodor J A (2003). Hume variations. Oxford: Oxford University Press.
Fodor J A & Lepore E (2002). The compositionality papers. Oxford: Clarendon Press.
Fodor J A & Pylyshyn Z (1988). ‘Connectionism and cognitive architecture: a critical analysis.’ Cognition 28, 3–71.
Groenendijk J & Stokhof M (1991). ‘Dynamic predicate logic.’ Linguistics and Philosophy 14, 39–100.
Hintikka J & Kulas J (1983). The game of language. Dordrecht: Reidel.
Hodges W (1998). ‘Compositionality is not the problem.’ Logic and Logical Philosophy 6, 7–33.
Horwich P (1998). Meaning. Oxford: Clarendon Press.
Janssen T M V (1997). ‘Compositionality.’ In van Benthem J & ter Meulen A (eds.) Handbook of logic and language. Amsterdam: Elsevier. 417–473.
McLaughlin B (1993). ‘The classicism/connectionism battle to win souls.’ Philosophical Studies 70, 45–72.
Montague R (1974). Formal philosophy: selected papers of Richard Montague. New Haven: Yale University Press.
Osherson D N & Smith E E (1981). ‘On the adequacy of prototype theory as a theory of concepts.’ Cognition 9, 35–58.
Pelletier F J (1994). ‘The principle of semantic compositionality.’ Topoi 13, 11–24.
Rips L J (1995). ‘The current status of research on concept combination.’ Mind and Language 10(1/2), 72–104.
Smith E E & Medin D L (1981). Categories and concepts. Cambridge, MA: Harvard University Press.
Smolensky P (1987). ‘The constituent structure of mental states: a reply to Fodor and Pylyshyn.’ Southern Journal of Philosophy 26, 137–160.
Zadrozny W (1994). ‘From compositional to systematic semantics.’ Linguistics and Philosophy 17, 329–342.

Concepts
E Margolis, Rice University, Houston, TX, USA
S Laurence, University of Sheffield, Sheffield, UK
© 2006 Elsevier Ltd. All rights reserved.

In cognitive science, concepts are generally understood to be structured mental representations with subpropositional content. The concept CHAIR, for example, is a mental representation with the content chair. It is implicated in thoughts about chairs and is accessed in categorization processes that function to determine whether something is a chair. Theories of concepts are directed to explaining, among other things, the character of these processes and the

structure of the representations involved. In the study of conceptual structure, four broad approaches should be distinguished: (1) the classical theory, (2) probabilistic theories, (3) the theory-theory, and (4) conceptual atomism. For recent overviews of theories of concepts, see Margolis and Laurence (1999) and Murphy (2002).

The Classical Theory

According to the classical theory, concepts have definitional structure. A concept’s constituents encode


conditions that are individually necessary and jointly sufficient for its application. A standard illustration of the theory is the concept BACHELOR, which is claimed to be composed of the representations UNMARRIED, ADULT, and MALE. Each of these is supposed to specify a condition that something must meet in order to be a bachelor and, if anything meets them all, it is a bachelor. The classical theory has always been an enormously attractive theory. Many theorists find it to be intuitively plausible that our concepts are definable. In addition, the theory brings with it a natural and compelling model of how concepts are learned. They are learned by assembling them from their constituents. The classical theory also offers a straightforward account of categorization. Something is deemed to fall under a concept just in case it satisfies each and every condition that the concept’s constituents encode. Finally, the theory appeals to the very same resources to explain the referential properties of a concept. A concept refers to those things that have each and every feature specified by its constituents. Of course, all of these explanations depend upon there being a separate treatment of the primitive (i.e., unstructured) representations that ultimately make up the concepts we possess. But the classical theory supposes that a separate treatment can be given, perhaps one that grounds all of our concepts in perceptual primitives in accordance with traditional empiricist models of the mind. The classical theory has come under considerable pressure in the last thirty years or so. In philosophy, the classical theory has been subjected to a number of criticisms but perhaps the most fundamental is that attempts to provide definitions for concepts have had a poor track record. There are few – if any – examples of uncontroversial definitional analyses. The problem isn’t just confined to philosophically interesting concepts (e.g., JUSTICE) but extends to concepts of the most ordinary kind, such as GAME, PAINT, and even BACHELOR (Wittgenstein, 1953; Fodor et al., 1980). What’s more, Quine’s (1951) influential critique of the analytic-synthetic distinction has led many philosophers to suppose that the problem with giving definitions is insurmountable. For psychologists, the main objection to the classical theory has been that it appears to be at odds with what are known as ‘typicality effects.’ Typicality effects include a broad range of phenomena centered around the fact that certain exemplars are taken to be more representative or typical (Rosch and Mervis, 1975; Rosch, 1978). For instance, apples are judged to be more typical than plums with respect to the category of fruit, and subjects are quicker to judge that apples are a kind of fruit than to judge that plums

are and make fewer errors in forming such judgments. Though not strictly inconsistent with these findings, the classical theory does nothing to explain them.
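The classical model of categorization is simple enough to render as code. The following Python fragment is a minimal sketch of our own; the BACHELOR definition follows the standard illustration above, while the feature encoding and names are invented for the example:

    # Classical-theory categorization: a concept encodes individually
    # necessary and jointly sufficient conditions; something falls under
    # the concept just in case it satisfies every one of them.
    CONCEPTS = {
        "BACHELOR": [
            lambda x: not x.get("married", False),  # UNMARRIED
            lambda x: x.get("adult", False),        # ADULT
            lambda x: x.get("male", False),         # MALE
        ],
    }

    def falls_under(concept, thing):
        return all(condition(thing) for condition in CONCEPTS[concept])

    fred = {"married": False, "adult": True, "male": True}
    print(falls_under("BACHELOR", fred))  # True

On this picture, reference and categorization coincide: the concept applies to exactly those things that pass every condition on the checklist.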

Probabilistic Theories

In response to the failings of the classical theory, Eleanor Rosch and others began exploring the possibility that concepts have a structure that is described as graded, probabilistic, or similarity-based (Smith and Medin, 1981). The difference between these approaches and the classical theory is that the constituents of a concept are no longer assumed to express features that its members have by definition. Instead, they are supposed to express features that its members tend to have. For example, a standard treatment for the concept BIRD incorporates constituents picking out the features has wings, flies, eats worms, etc., but probabilistic theories don’t require all of these features to be possessed by something to count as a bird. Instead, something falls under the concept when it satisfies a sufficient (weighted) number of them (or on some accounts, something falls under the concept to a degree corresponding to how many are satisfied; then nothing is a bird absolutely but only a bird to degree n). Like the classical theory, probabilistic theories explain concept learning as a process where a concept is assembled from its constituents. And like the classical theory, probabilistic theories offer a unified treatment of reference and categorization. A concept refers to those things that satisfy enough of the features it encodes, and something is judged to fall under a concept when it satisfies enough of them as well. Categorization, on this account, is often described as a similarity comparison process. An item is categorized as belonging to a given category when the representations for each are deemed sufficiently similar, where this may be measured in terms of the number of constituents that they share.

One advantage of probabilistic theories is that a commitment to probabilistic structure may explain why definitions are so hard to come by. More important, however, is the way that probabilistic structure readily accommodates and explains typicality effects. This is achieved by maintaining that typicality, like categorization, is a similarity comparison process. On this model, the reason apples are judged to be more typical than plums is that the concept APPLE shares more of its constituents with FRUIT. Likewise, this is why apples are judged to be a kind of fruit faster than plums are.

Probabilistic theories continue to enjoy widespread support in cognitive science, but they aren’t without their own problems. One concern is that many concepts appear to lack probabilistic structure, especially concepts that correspond to phrases as opposed to words. For example, Fodor (1981, 1998) notes that while GRANDMOTHER may have probabilistic structure (encoding the features gray-haired, old, kind, etc.), there is no such structure for GRANDMOTHERS MOST OF WHOSE GRANDCHILDREN ARE MARRIED TO DENTISTS. Fodor also challenges probabilistic theories on the grounds that even when phrasal concepts do have probabilistic structure, their structure doesn’t appear to be compositionally determined. This is a problem, since it’s the compositionality of the conceptual system that explains the productivity of thought, viz., the fact that there is no upper bound on the number of distinct thoughts that humans can entertain. Fodor points out that the probabilistic structure associated with PET FISH encodes features (colorful, tiny, lives in a bowl, etc.) that aren’t drawn from the probabilistic structures associated with PET (furry, cuddly, etc.) and FISH (gray, lives in the ocean, etc.). Another common criticism of probabilistic theories is that they leave out too much. They don’t sufficiently incorporate the causal information that people appeal to in categorization and don’t do justice to the fact that reflective categorization isn’t always based on similarity (Murphy and Medin, 1985; Keil, 1989; Rips, 1989). For example, when time is short and when given little information about two animals apart from the fact that they look alike, people may judge that they are both members of the same category. But when asked for a more thoughtful answer about whether, for example, a dog that is surgically altered to look like a raccoon is a dog or a raccoon, the answer for most of us – and even for children – is that it remains a dog (see Gelman, 2003, for an overview of related literature).
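The similarity comparison process described in this section can likewise be given a schematic rendering. The following Python fragment is our own toy illustration; the features, weights, and the 0.5 threshold are invented for expository purposes and do not come from the psychological literature:

    # Probabilistic-theory categorization as similarity comparison:
    # a concept is a weighted feature set; something falls under it
    # when it shares a sufficient (weighted) proportion of the features.
    BIRD = {"has wings": 0.9, "flies": 0.7, "eats worms": 0.5, "lays eggs": 0.8}

    def similarity(thing_features, concept):
        """Weighted share of the concept's features the thing possesses."""
        total = sum(concept.values())
        matched = sum(w for f, w in concept.items() if f in thing_features)
        return matched / total

    THRESHOLD = 0.5  # invented cutoff for the illustration

    robin = {"has wings", "flies", "eats worms", "lays eggs"}
    penguin = {"has wings", "lays eggs"}

    print(similarity(robin, BIRD))                 # 1.0
    print(similarity(penguin, BIRD) >= THRESHOLD)  # True: categorized as a bird

The same comparison underwrites typicality judgments: the robin comes out more typical than the penguin because it shares more of the (heavily weighted) features encoded by BIRD.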

The Theory-Theory

The theory-theory is largely a reaction to the last problem associated with probabilistic theories. It explains categorization, particularly reflective categorization, as a process of causal-explanatory reasoning. On this approach, conceptual structure is a matter of how a concept is related to other concepts in relatively stable causal-explanatory frameworks. The designation ‘theory-theory’ sometimes implies little more than this. For some psychologists, it is meant to indicate that the explanatory frameworks are comparable to explicit scientific theories and that the mechanisms for acquiring them are identical with the cognitive mechanisms that underlie scientific reasoning. On this more extreme version of the theory-theory, conceptual development is likened to radical theory change in science (Carey, 1985; Gopnik and Meltzoff, 1997).

Many objections to the theory-theory are directed to its more extreme forms, particularly the commitment about conceptual development. The claim that infants are like little scientists has generated a great deal of criticism (e.g., Segal, 1996; Stich and Nichols, 1998). One objection focuses on particular examples, especially of concepts that are fundamental to human cognition (e.g., OBJECT, AGENT, and BELIEF). Although theory-theorists often cite these as examples where substantial conceptual change occurs – change that is supposed to illustrate the theory-theory’s model of cognitive development – others would argue that these are innate concepts that remain invariant in important respects throughout development (e.g., Leslie, 1994). A more basic objection to the theory-theory is that the appeal to causal-explanatory reasoning is minimally informative. It may be true that categorization is somewhat like scientific reasoning, but scientific reasoning is itself in need of a great deal of clarification. The result is that the model of categorization is extremely sketchy and somewhat mysterious. A third objection to the theory-theory, one that has been especially influential in philosophy, is that it makes it difficult to maintain that different people have the same concepts. This objection is directed to versions of the theory-theory that are especially lenient in what counts as a theory. On these versions, just about any belief or inferential disposition associated with a concept is part of a ‘theory.’ The problem with this approach, however, is that people are bound to have different beliefs than one another and hence different theories. But since a concept’s identity and content are supposed to be a matter of its role in one’s mental theories, people will be unable to share concepts (Fodor and Lepore, 1992).

Conceptual Atomism

The last of the four theories of conceptual structure is that lexical concepts – word-sized concepts – have no structure at all (Fodor, 1998; Millikan, 2000). Concepts such as BIRD, CHAIR, NUMBER, and RUN are all primitives. Of course, conceptual atomism needs an account of how these primitive concepts are to be distinguished from one another and how their contents are fixed. A standard approach is to appeal to the mind-world causal relations between a concept and the object or property it refers to. Conceptual atomism is motivated in light of the problems with other theories, especially the problem of providing definitions (the classical theory), the problem of compositionality (probabilistic theories), and the problem of shared concepts (the theory-theory). If concepts lack structure, then it is no surprise that we have difficulty providing definitions for them. Also, it doesn’t matter that probabilistic structure doesn’t compose, since complex concepts can


still be composed on the basis of atomic constituents. And sharing a concept is no longer a challenge. It isn’t a matter of having the same beliefs so much as having representations that stand in the same mind-world causal relations. Conceptual atomism is sometimes rejected outright on the grounds that unstructured concepts can’t be learned and hence that atomism implies an untenably strong form of concept nativism. The main concern with conceptual atomism, however, is that without structure, there is nothing to explain how concepts are implicated in categorization and other psychological processes. Nonetheless, atomists see this as an advantage rather than a problem, maintaining that people can have the same concept despite widely varying psychological dispositions. For this reason, the structures that are accessed in categorization and other psychological processes are said to be associated with a concept but not constitutive of it.

See also: Category-Specific Knowledge; Cognitive Semantics; Color Terms; Definitions; Evolution of Semantics; Human Reasoning and Language Interpretation; Ideational Theories of Meaning; Lexical Conceptual Structure; Lexical Meaning, Cognitive Dependency of; Mentalese; Metaphor and Conceptual Blending; Possible Worlds; Prototype Semantics; Psychology, Semantics in; Reference and Meaning, Causal Theories; Representation in Language and Mind; Selectional Restrictions; Semantic Maps; Semantic Primitives; Stereotype Semantics; Vagueness.

Bibliography

Carey S (1985). Conceptual change in childhood. Cambridge, MA: MIT Press.
Fodor J A (1981). ‘The present status of the innateness controversy.’ In his Representations: philosophical essays on the foundations of cognitive science. Cambridge, MA: MIT Press. 257–316.
Fodor J A (1998). Concepts: where cognitive science went wrong. New York: Oxford University Press.
Fodor J A, Garrett M, Walker E & Parkes C (1980). ‘Against definitions.’ Cognition 8, 263–367.
Fodor J A & Lepore E (1992). Holism: a shopper’s guide. Cambridge, MA: Basil Blackwell.
Gelman S (2003). The essential child. New York: Oxford University Press.
Gopnik A & Meltzoff A (1997). Words, thoughts, and theories. Cambridge, MA: MIT Press.
Keil F (1989). Concepts, kinds, and cognitive development. Cambridge, MA: MIT Press.
Leslie A (1994). ‘ToMM, ToBy, and agency: core architecture and domain specificity.’ In Hirschfeld L & Gelman S (eds.) Mapping the mind: domain specificity in cognition and culture. New York: Cambridge University Press. 119–148.
Margolis E & Laurence S (1999). Concepts: core readings. Cambridge, MA: MIT Press.
Millikan R (2000). On clear and confused ideas. New York: Cambridge University Press.
Murphy G (2002). The big book of concepts. Cambridge, MA: MIT Press.
Murphy G & Medin D (1985). ‘The role of theories in conceptual coherence.’ Psychological Review 92(3), 289–316.
Quine W (1951). ‘Two dogmas of empiricism.’ In his From a logical point of view: nine logico-philosophical essays. Cambridge, MA: Harvard University Press. 20–46.
Rips L (1989). ‘Similarity, typicality, and categorization.’ In Vosniadou S & Ortony A (eds.) Similarity and analogical reasoning. New York: Cambridge University Press. 21–59.
Rosch E (1978). ‘Principles of categorization.’ In Rosch E & Lloyd B (eds.) Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum Associates. 27–48.
Rosch E & Mervis C (1975). ‘Family resemblances: studies in the internal structure of categories.’ Cognitive Psychology 7, 573–605.
Segal G (1996). ‘The modularity of theory of mind.’ In Carruthers P & Smith P (eds.) Theories of theories of mind. Cambridge: Cambridge University Press. 141–158.
Smith E & Medin D (1981). Categories and concepts. Cambridge, MA: Harvard University Press.
Stich S & Nichols S (1998). ‘Theory-theory to the max.’ Mind and Language 13(3), 421–449.
Wittgenstein L (1953). Philosophical investigations. Anscombe G E M (trans.). Oxford: Blackwell.

Concessive Clauses
E König, Freie Universität Berlin, Berlin, Germany
© 2006 Elsevier Ltd. All rights reserved.

Together with terms like ‘temporal,’ ‘conditional,’ ‘causal,’ ‘instrumental,’ and ‘purposive,’ the term ‘concessive’ belongs to the terminological inventory

that traditional grammar makes available for the characterization and classification of adverbials and adverbial clauses. Concessive clauses are separately identifiable on formal grounds in a wide variety of languages, but many other types of adverbial clauses may also have a concessive use. As one type of adverbial clause, concessive clauses share numerous


syntactic properties with other adverbial clauses, from which they are distinguished mainly on the basis of semantic criteria. They also manifest, however, specific formal properties in addition to their semantic properties.

Meaning and Syntactic Properties

In uttering a complex sentence with a concessive clause, i.e., a sentence of the type Even though p, q (e.g., Even though it is raining, Fred is going out for a walk), a speaker is committed to the truth of both clauses p (It is raining) and q (Fred is going out for a walk) and asserts these two propositions against the background of an assumption that the two types of situations, p and q, are generally incompatible. This background assumption or presupposition can roughly be described as follows: if p, then normally not-q (If it is raining, one normally does not go out for a walk or The more it rains, the less people go out for a walk) (cf. König, 1988; Azar, 1997). How it is to be spelled out precisely is still a puzzle. What is clear, however, is that the situation described by a sentence with a concessive clause is an exception to a general tendency and therefore remarkable.

Concessive clauses generally occur in all positions where adverbial clauses are permitted in a language. In English, for example, they may either precede or follow the main clause. Concessive clauses differ, however, from other types of adverbial clauses in a number of ways:

a. In contrast to most other types of adverbial clauses, there does not seem to be a concessive interrogative adverb in any language, analogous to English when, where, why, how, etc.
b. Concessive clauses cannot be the focus of a focusing adjunct (focus particle) such as only, even, just, and especially (cf. Only because it is raining versus *Only although it was raining . . .).
c. Concessive clauses cannot occur as focus in a cleft sentence (*It was although it was raining that . . .).
d. Concessive clauses cannot be the focus of a negation or a polar interrogative (cf. Was he harassed because he was a journalist? versus Was he harassed, although he was a journalist?).

All of these divergent properties seem to be manifestations of a single syntactic constraint on the use of concessive clauses: they cannot be focused against the background of the rest of the sentence, a property that they share with causal clauses introduced by since and resultative clauses introduced by so that. This constraint with regard to focusability is generally taken to indicate that the relevant clauses are less tightly integrated into a main clause than other types of

adverbial clauses. To a certain extent, sentences with concessive clauses exhibit properties of paratactic rather than subordinate structures and in spoken discourse concessive relations are much more frequently expressed by paratactic structures, particularly by adversative conjunctions such as but. Whether this constraint is also a sign of greater semantic complexity relative to other types of adverbial clauses is not so clear. A certain support for the assumption that concessive constructions are especially complex semantically can be derived from the fact that they tend to develop in connection with the introduction of written forms of a language and are also acquired much later by children than other types of adverbial clauses.

Concessive Connectives

Concessive relations between two clauses or between a clause and an adverbial are not only expressed by conjunctions such as even though and although in English, but can also be signaled by prepositions such as English despite and in spite of, and by conjunctional adverbs such as English nevertheless, nonetheless, still, and yet. The near-synonymy of the following constructions shows that the term concessive is applicable to all three groups of connectives and that the selection of a specific subcategory depends on the syntactic environment: Fred is going out for a walk although it is raining. – Fred is going out for a walk in spite of the rain. – It is raining. Nevertheless Fred is going out for a walk.

A cross-linguistic investigation of all three types of concessive connectives provides interesting information on the affinity between concessivity and other semantic domains, as well as on the historical development of concessive connectives. Such a comparison shows that concessive connectives are typically composite in nature (e.g., al-though, never-the-less) and that in most cases earlier and more basic meanings can easily be identified for these components. These earlier meanings as well as the other uses of the components that enter into the formal make-up of concessive connectives provide important insights into the relatedness of concessivity to other semantic domains. Five different types of connectives can be distinguished on the basis of their etymology and their historical development:

a. Members of the first group derive from notions such as ‘obstinacy,’ ‘contempt,’ and ‘spite,’ that is, from notions originally applicable only to human agents or experiencers. Examples are English in spite of; Spanish a pesar de (cf. pesar ‘sorrow, regret’); Dutch ondanks (‘ingratitude, thoughtlessness’) and in weerwil van; Italian malgrado che; and Finnish huolimatta (‘heedless, careless’).


b. There is a close relationship between concessivity and free-choice quantification as expressed in English by any or whatever. In a wide variety of languages, concessive connectives contain a component that is also used as a free-choice quantifier: English albeit, however, and anyway; Latin quamquam; Hungarian habár (cf. ha ‘if’; ki ‘who’; bárki ‘whoever’); and Maori ahakoa (‘whatever-indeed’).
c. In many languages, concessive connectives are composed of an originally conditional or temporal connective (e.g., French quand) and/or an additive focus particle (e.g., English also and even). This type, probably the most frequently occurring in the world’s languages, is exemplified by English even though, German (Standard German) wenn-gleich, French quand même, and Bengali jodi-o (‘if-also’).
d. Concessive connectives may also derive from expressions originally used for emphatic affirmation. Expressions with the original meaning ‘true’, ‘indeed’, ‘fact,’ or ‘well’ are frequently grammaticalized as concessive connectives. English true, German zwar (‘it is true’), Bahasa Indonesian sungguh-pun (‘true-even’), and Mandarin (Mandarin Chinese) gùrán (‘of course, true, to be sure’) are cases in point. Typically, such connectives are more often used in a more general adversative sense, rather than the more specific concessive sense (cf. English True p, but q).
e. Members of the fifth type all derive from expressions originally used to assert remarkable co-occurrence or coexistence in some form or another. This type is exemplified by English nevertheless, notwithstanding, still; French n’empêche que (‘does not prevent’); Portuguese contudo (‘with everything’); Turkish bununla beraber (‘together with this’); and Hopi naama-hin (‘together thus’).

As is shown by this typology, the historical development of concessive connectives and the original, or at least earlier, meaning of their components directly reflect various aspects of the meaning of these connectives: the factual character of these constructions, the presupposition (or ‘implicature’) of general dissonance (incompatibility, conflict) between two situation types, and the remarkable fact of their co-occurrence in specific instances. Moreover, this typology also reveals a close relationship between concessive constructions and certain types of conditionals.

Relationship to Other Types of Adverbial Clauses

Further insights into the form and meaning of concessive constructions can be gained by comparing

them to and delimiting them from other types of adverbial clauses, notably conditionals and clauses of cause and reason. As was already mentioned, concessive clauses are closely related to certain types of conditionals and frequently derive from such conditionals. In a wide variety of grammar handbooks and specific analyses of the relevant area, both traditional and modern, the following sentence types are also discussed under the heading ‘concessive’: Whatever his prospects of finding a job are, he is going to marry Susan next month. – Whether or not he finds a job, he is going to marry Susan next month. – Even if he does not find a job, he will marry Susan next month. A closer look at these sentences reveals, however, that they are basically conditionals. Where they differ from standard forms of conditionals (if p, then q) is in the nature of the antecedent. Instead of relating a simple antecedent to a consequent, as standard conditionals do, the ‘concessive conditionals,’ as they are also and more appropriately called, relate a series of antecedent conditions to a consequent (He will marry Susan next month). This series of antecedent conditions can be expressed by a quantification (e.g., wh-ever), by a disjunction (e.g., ‘p or not-p’), or by a scalar expression that denotes an extreme (e.g., highly unlikely) value on a scale. In addition to being similar to standard conditionals, these concessive conditionals share certain properties with the factual concessive sentences discussed thus far. In each of the three types of concessive conditionals, a conditional relation is asserted for a series of antecedents that includes an unlikely and thus remarkable case and it is this dissonance and conflict that have led to labels such as ‘unconditionals,’ ‘irrelevance conditionals,’ ‘hypothetical concessives,’ just to mention only those most frequently used. In order to draw a clear terminological distinction between the factual concessive clauses introduced in English by although or even though and the three types of conditionals under discussion, it seems advisable to reserve the term concessive for the former and to use the label ‘concessive conditional’ for the latter. Concessive conditionals with focus particles, i.e., conditionals of the type even if p, q, are particularly difficult to keep apart from factual concessive clauses. In the core cases, the distinction seems to be clear enough: The distinction is expressed by the connective (e.g., English even if versus even though, Japanese temo versus noni), by the mood (subjunctive versus indicative) of the adverbial clause (e.g., Spanish aunque llueva ‘even if it rains’ versus aunque llueve ‘even though it is raining’), or by some other inflectional contrast marked on the verb. The boundary between these two types of constructions, however, seems to be a fluid one in a wide variety of languages. In many,

Concessive Clauses 125

and perhaps all, languages, concessive conditionals with focus particles can be used in a factual sense, i.e., in exactly the same way as genuine concessive clauses (e.g., English Even if he IS my brother, I am not going to give him any more money). Furthermore, as pointed out above, concessive conditionals with focus particles frequently develop into genuine concessive constructions. English though, for instance, was still used in the sense of ‘even if’ at the time of Shakespeare, as the following quotation from Hamlet shows: I’ll speak to it though hell itself should gape and bid me hold my peace. In Modern English, by contrast, though is used only in a factual, concessive sense, apart from certain relics like as though. The fact that in some languages (e.g., French) the subjunctive is used in standard concessive clauses (i.e., after bienque, quoique) is a further indication of such developments from conditionals, for which the use of the subjunctive is more clearly motivated. Sentences with concessive clauses have always been considered to be related to, and in fact to be in some way opposed to, clauses of cause and reason. This intuition is most clearly reflected in terms such as ‘anticause,’ ‘incausal,’ ‘inoperant cause,’ and ‘hidden causality’ that have often been proposed as more suitable replacements for the traditional term concessive. That concessivity expresses the negation of a causal relationship is suggested by the fact that in some languages (Cambodian (Central Khmer), Japanese, Lezgian (Lezgi), Mundari, and Indonesian) concessive connectives can be derived from causal connectives through the addition of certain particles. Moreover, negated causal expressions are frequently used as markers of concessivity. Connectives such as German ungeachtet, English unimpressed and regardless, French n’empeˆche que, and Dutch ondanks are cases in point. As is shown by the equivalence of the following pair of sentences, the external negation of a causal construction may be equivalent to a concessive construction with a negated main clause: This house is no less comfortable because it dispenses with air conditioning. — This house is no less comfortable, although it dispenses with air conditioning. In the first sentence, which is to be read as one tone group, the negation affects the whole sentence (‘It is not the case that this house is less comfortable because. . .’). In the second example, by contrast, only the main clause is negated and it is exactly in such a situation that a causal construction may be paraphrased by a suitably negated concessive one. This ‘equivalence’ between the external negation of a sentence with an operator O and the internal negation of a sentence with an operator O0 (not((because p) q)  (although p) not-q) looks like a case of duality (Ko¨nig, 1991; Di Meola, 1998;

Iten, 1998), but since such paraphrases are not possible for sentences with external negations of concessive clauses (not ((although) p, q)), the relevant relationship between causality and concessivity cannot be assumed to be an instance of this general phenomenon (cf. Ko¨nig and Siemund, 2000).
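Stated schematically, in our own reconstruction of the formula just given (with because and although standing in for the operators O and O′):

```latex
% Near-duality of causal and concessive operators (our reconstruction):
% the external negation of a causal matches a concessive with internally
% negated main clause, but the converse paraphrase is unavailable.
\neg\,[(\mathit{because}\ p)\ q] \;\equiv\; (\mathit{although}\ p)\ \neg q
\qquad\text{whereas}\qquad
\neg\,[(\mathit{although}\ p)\ q] \;\not\equiv\; (\mathit{because}\ p)\ \neg q
```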

Types of Concessive Clauses

Under the right contextual conditions, many types of adverbial clauses may receive a concessive interpretation:

a. Temporal clauses (There was a funny smile on D.'s face as if D. were pulling his leg by pretending to fall in with his plan, when he had not the least intention to fall in with it.);
b. Comparative clauses (Poor as he is, he spends a lot of money on horses.);
c. Conditionals (If the aim seems ambitious, it is not unrealistic. Considering his age, he seems quite fit.), concessive conditionals, etc.

These interpretations are, however, the result of certain processes of interpretative enrichment on the basis of contextual inferences, and none of the relevant clauses would be considered a concessive clause in the narrow sense of the term. Concessive clauses, identifiable as a separate category in numerous languages on the basis of the formal properties discussed above, are never augmented in their interpretation in this way and thus seem to constitute an end-point beyond which such interpretative processes never go.

Thus far, concessive constructions have been differentiated only from other types of adverbial clauses. Further distinctions can be drawn within that category itself, and these distinctions seem to be a consequence of the general phenomenon that adverbial relations can be applied in parallel ways to different conceptual domains or levels (e.g., the content domain, the epistemic domain, the illocutionary domain, and the textual domain) in the sense of Sweetser (1990) and Crevels (2000). Not all concessive constructions allow the inference that the two sentences asserted to be true are instances of situations that do not normally go together. In many cases, it is not the factual content of the two clauses that is incompatible. The incompatibility may lie in the conclusions or arguments that are based on these assertions. Such rhetorical concessives, as they are often called (cf. Anscombre and Ducrot, 1977; Azar, 1997), are typically introduced by a connective of type (d) and/or by the adversative conjunction but and may thus be indistinguishable from adversative sentences (True he is still very young, but he has proved very reliable so far.). In English, the modal verb may is another frequently used indicator of this type of concessive construction, but although and though may also be used in this function (He may be a professor but he is an idiot.). Sentences of this type are used to concede the first assertion and to emphasize the second. It is for these constructions that the term concessive is particularly appropriate.

Another subtype of concessive clause that is frequently singled out in descriptions of European languages is the so-called 'rectifying' concessive clause (e.g., A. Yes, it has come at last, the summons I know you have longed for. — B. I, too, though it has come in a way I cannot welcome.). Whereas in the standard case the content of the main clause is emphasized and made remarkable through the addition of the concessive clause, the content of the main clause is weakened whenever a rectifying clause follows. In English, such rectifying clauses are marked by although, though, but then, except, not that, etc.; in French, encore que invariably indicates such a function of weakening the import of a preceding assertion (cf. Ranger, 1997). Concessive clauses of this type always follow the main clause and are only loosely linked to that main clause. Moreover, they typically exhibit main clause word order in those languages in which main and subordinate clauses are distinguished on the basis of word order (German Er wird das sicherlich akzeptieren, obwohl bei ihm kann man das nie wissen. 'He will certainly accept that, although you never know with this guy.').

What such discussions about subdivisions within the class of concessive clauses and adverbials clearly show is that one cannot assume synonymy for all the concessive connectives that a language has at its disposal. Concessive prepositions (e.g., English despite, in spite of) and certain conjunctions (e.g., English even though) are not used in a rectifying or rhetorical function, some conjunctions (e.g., French encore que) are used exclusively for rectification, and still others (e.g., English although) can be used in all functions. What is also clearly revealed is that different subtypes of concessive clauses manifest different degrees of subordination to, and integration into, a main clause.

See also: Comparative Constructions; Comparatives; Conditionals; Connectives in Text; Counterfactuals; Discourse Domain; Discourse Parsing, Automatic; Grammatical Meaning; Logical Consequence; Modal Logic; Mood and Modality; Nonmonotonic Inference; Presupposition; Projection Problem; Temporal Logic; Virtual Objects.

Bibliography

Anscombre J C & Ducrot O (1977). 'Deux mais en français.' Lingua 43, 23–40.
Azar M (1997). 'Concession relations as argumentation.' Text 17(3).
Blakemore D (1989). 'Denial and contrast: A relevance theoretic analysis of but.' Linguistics and Philosophy 12, 15–37.
Couper-Kuhlen E & Kortmann B (eds.) (2000). Cause, condition, concession, contrast. Berlin: Mouton de Gruyter.
Crevels M (2000). 'Concessives on different semantic levels.' In Couper-Kuhlen E & Kortmann B (eds.), 313–339.
Di Meola C (1998). 'Zur Definition einer logisch-semantischen Kategorie: Konzessivität als versteckte Kausalität.' Linguistische Berichte 175, 329–352.
Haiman J (1974). 'Concessives, conditionals, and verbs of volition.' Foundations of Language 11, 341–359.
Harris M B (1988). 'Concessive clauses in English and Romance.' In Haiman J & Thompson S A (eds.) Clause-combining in grammar and discourse. Amsterdam: Benjamins. 71–99.
Haspelmath M & König E (1998). 'Concessive conditionals in the languages of Europe.' In van der Auwera J (ed.) Adverbial constructions in the languages of Europe. Berlin: Mouton. 563–640.
Iten C (1998). 'Because and although: A case of duality?' In Rouchota V & Jucker A (eds.) Current issues in relevance theory. Amsterdam: Benjamins. 1–24.
König E (1985). 'On the history of concessive connectives in English: Diachronic and synchronic evidence.' Lingua 66, 1–19.
König E (1988). 'Concessive connectives and concessive sentences: Cross-linguistic regularities and pragmatic principles.' In Hawkins J (ed.) Explaining language universals. Oxford: Blackwell. 145–166.
König E (1991). 'Concessive relations as the dual of causal relations.' In Zaefferer D (ed.) Semantic universals and universal semantics. Dordrecht: Foris. 190–209.
König E & Siemund P (2000). 'Causal and concessive clauses: Formal and semantic relations.' In Couper-Kuhlen E & Kortmann B (eds.), 341–360.
Nakajima H (1998). 'Concessive expressions and complementizer selection.' Linguistic Inquiry 29, 333–338.
Ranger G (1997). 'An enunciative study of rectifying concessive constructions not that, except and only.' Anglophonia 2, 107–127.
Rudolph E (1996). Contrast: Adversative and concessive relations and their expressions in English, German, Spanish, Portuguese on sentence and text level. Berlin: Mouton de Gruyter.
Sweetser E (1990). From etymology to pragmatics. Cambridge: Cambridge University Press.
Traugott E C, ter Meulen A, Reilly J S & Ferguson C A (eds.) (1986). On conditionals. Cambridge: Cambridge University Press.


Conditionals

S Kaufmann, Northwestern University, Evanston, IL, USA

© 2006 Elsevier Ltd. All rights reserved.

Form and Meaning

Conditionals are complex sentences built up from two constituent clauses, called the antecedent and the consequent; alternatively, the terms protasis and apodosis are found in the linguistic literature. English conditionals are typically of the form if A, (then) B, where A and B are the antecedent and consequent, respectively. Some examples are given in (1).

(1a) If the sun comes out, Sue will go on a hike.
(1b) If the sun came out, Sue went on a hike.
(1c) If the sun had come out, Sue would have gone on a hike.

In the linguistic and philosophical literature, a distinction is commonly drawn between indicative conditionals, such as (1a) and (1b), and subjunctive or counterfactual conditionals, like (1c). This classification is not uncontroversial: some authors would draw the major dividing line between (1a) and (1c) on the one hand and (1b) on the other. However, we adopt the standard classification and focus on indicative conditionals (see also Counterfactuals). The class of indicatives may be further divided into predictive and nonpredictive conditionals, illustrated in (1a) and (1b), respectively. Despite subtle differences, these share a common semantic core and have similar logical properties. We do not distinguish between them in this discussion. In general, if A, B asserts that B follows from, or is a consequence of, A, without asserting either A or B. Often the relation in question is causal (A causes B) or inferential (B is inferable from A). Other uses include the statement that B is relevant if A is true (2a), conditional speech acts (2b), and metalinguistic comments on the consequent (2c).

(2a) If you want to meet, I am in my office now.
(2b) If you will be late, give me a call.
(2c) If you excuse my saying so, she is downright incompetent.

The form if A, B is neither necessary nor sufficient for the expression of conditionality. Inverted forms, as in (3a), are used as conditional antecedents. Sentences like (3b) and (3c) also typically have conditional interpretations.

(3a) Should the sun come out, Sue will go on a hike.
(3b) Buy one – get one free.
(3c) Give me $10 and I will fix your bike.

On the other hand, some if-then sentences do not fit the semantic characterization and are not considered conditionals, as in (4).

(4) If these problems are difficult, they are also fascinating.

Despite these marginal counterexamples, if is clearly the prototypical conditional marker in English. Other languages show more diversity in their expression of conditionality. The German conditional marker falls is freely interchangeable with wenn 'when/if', which also functions as a temporal conjunction. Japanese employs a family of verbal suffixes and particles (-ba, -tara, -tewa, nara, to), each of which adds subtle semantic and pragmatic constraints to the conditional meaning and some of which may also express temporal relations without conditionality (-tara 'and then'; A to B 'upon A, B'). Languages also vary in the extent to which they overtly mark (non)counterfactuality. In Japanese, the distinction is usually inferred from context; Classical Greek, on the other hand, has an elaborate inventory of markers of different degrees of hypotheticality. In all languages, the interpretation of conditionals is determined and constrained by expressions of temporal relations, modality, quantification, and a variety of pragmatic factors. For instance, the differences in (1a) through (1c) arise from the interaction of the marker if with the tenses and modal auxiliaries in the constituent clauses. For descriptive surveys of conditionals in English and other languages, see Traugott et al. (1986), Athanasiadou and Dirven (1997), Dancygier (1998), and Declerck and Reed (2001).

Truth-Conditional Semantics

The formal semantic approach in linguistics and philosophical logic is concerned with the truth conditions of sentences and their logical behavior. Conditionals are among the most extensively studied linguistic constructions in this tradition and pose specific challenges, which have been addressed in a number of ways.

Material Conditional

In classical Fregean logic, if A, B is interpreted as the material conditional (also called material implication) '→':

(5) A → B is true iff either A is false, or B is true, or both.


The material conditional is a truth function on a par with conjunction and disjunction. However, while there is general agreement that the latter are well suited to capture the truth conditions of and and or, the logical properties of the material conditional do not match those of conditional sentences well. For example, A → B and A → ¬B are mutually consistent, and the falsehood of A is sufficient for the truth of both, hence of their conjunction. But (6b) is intuitively contradictory and does not follow from (6a). Likewise, the negation of A → B is equivalent to A ∧ ¬B, but (6c) and (6d) are not intuitively equivalent.

(6a) Today is Saturday.
(6b) If today is Friday, it is raining, and if today is Friday, it is not raining.
(6c) It is not the case that if the team wins, I will be happy.
(6d) The team will win and I will be unhappy.

Strictly truth-functional theories employ the material conditional in spite of these shortcomings, since no other truth function comes any closer to capturing our intuitions about conditionals. One way to reconcile the approach with linguistic intuitions is to augment the truth conditions with pragmatic conditions on use. Jackson (1987), building on Grice's original proposals, appealed to probabilistic 'assertibility' conditions. For if A then B to be assertible, two conditions must be met: A → B must be highly probable, and it must remain highly probable in the event that A turns out true. Jackson noted that this comes down to the requirement that the conditional probability of B given A be high.
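These truth-functional properties can be checked directly. The following is a minimal sketch of our own; the function and variable names are invented for illustration:

```python
# Sketch: the material conditional as a truth function, and the two
# counterintuitive properties noted above. Names are illustrative only.

def material_conditional(a: bool, b: bool) -> bool:
    """A -> B is true iff A is false or B is true (definition (5))."""
    return (not a) or b

# Full truth table for A -> B.
for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:<5} B={b!s:<5} A->B={material_conditional(a, b)}")

# If A is false, A -> B and A -> not-B are both true, hence so is their
# conjunction -- although (6b) is intuitively contradictory.
a, b = False, True  # as asserted in (6a), 'today is Friday' is false
print(material_conditional(a, b) and material_conditional(a, not b))  # True

# The negation of A -> B is truth-functionally A and not-B, which would
# wrongly equate (6c) with (6d).
for a in (True, False):
    for b in (True, False):
        assert (not material_conditional(a, b)) == (a and not b)
```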

Strictly truth-functional theories employ the material conditional in spite of these shortcomings, since no other truth function comes any closer to capturing our intuitions about conditionals. One way to reconcile the approach with linguistic intuitions is to augment the truth conditions with pragmatic conditions on use. Jackson (1987), building on Grice’s original proposals, appealed to probabilistic ‘assertibility’ conditions. For if A then B to be assertible, two conditions must be met: A ! B must be highly probable, and it must remain highly probable in the event that A turns out true. Jackson noted that this comes down to the requirement that the conditional probability of B given A be high. (Variably) Strict Implication

An alternative reaction to the problems of the material conditional is to conclude that conditionals do not express truth functions. Instead, most current theories assume that if A then B asserts that A cannot be true without B also being true. This is typically spelled out in the framework of possible worlds: (7) If A then B is true at a possible world w relative to an accessibility relation R iff for all possible worlds w0 such that wRw0 and A is true at w0, B is true at w0.

The relation R determines the modal base (Kratzer, 1981), the set of possible worlds that are relevant to the truth of the conditional at w. Definition (7) subsumes the material conditional as the special case that R is the identity relation, so the only world relevant at w is w itself. At the other end of the spectrum lies strict implication, under which all possible worlds are relevant and the conditional is true iff B is a logical consequence of A.

These extreme cases are rarely relevant in linguistic usage. Usually, conditionals are evaluated against speakers’ beliefs, the conversational common ground, the information available in a given situation, possible future courses of events in branching time, or other background assumptions. All of these interpretations correspond formally to different choices of the accessibility relation. The fact that the intended reading need not be overtly marked is a source of versatility and context dependence. A given conditional can be simultaneously true with respect to one modal base and false with respect to another. Thus, (8) may be objectively true, but believed to be false by a speaker with insufficient information or false beliefs. (8) If this material is heated to 500 C, it will burn.

The definition in (7) makes room for variation and context dependence of the modal base and overcomes some of the limitations of the material conditional. However, like the latter, it fails to account for the invalidity of certain nonmonotonic inference patterns involving conditionals. For instance, under both analyses, a true conditional remains true under Strengthening of the Antecedent (if A then B entails if C and A then B). Intuitively, however, it is possible for (8) to be true while (9) is false. (9) If this material is placed in a vacuum chamber and heated to 500 C, it will burn.

There are several ways of addressing this problem. We will describe two of them, each departing from definition (7) in a different direction. Relative Likelihood

The first approach takes examples (8) and (9) to show that in cases like (8), not all A-worlds in the modal base are relevant for the truth of the conditional, but only those that satisfy implicit defaults or ‘normalcy’ assumptions. The listener will assume that air was present (as in [8]) unless this is explicitly denied in the antecedent (as in [9]). Kratzer (1981) represented such assumptions as an ordering source, a set of propositions that are ‘normally’ true at w. This set induces a preorder on the worlds in the modal base: w00 is at least as normal as w0 iff all the propositions in the ordering source that are true at w0 are also true at w00. The interpretation of conditionals is sensitive to the relation in (10). (10) If A then B is true at w relative to a model base MB iff for every A-world w0 in MB, there is an AB-world in MB that is at least as normal as w0 and not equalled or outranked in normalcy by any A-world in MB at which B is false.


This offers a solution to the problem posed by (8) and (9). Suppose the material is normally not placed in a vacuum chamber. Then every antecedent-world at which it is, is outranked in normalcy by one at which it is not; thus, (8) may be true while (9) is false. Formally, the order induced by the ordering source is similar to the relation of ‘comparative similarity’ between possible worlds that is at the center of the Stalnaker/Lewis theory of counterfactuals (see the article Counterfactuals for details; Lewis, 1981, for a comparison; and Stalnaker, 1975, for an account of indicative conditionals that refers to this notion). The term ‘relative likelihood’ is applied to such orders in artificial intelligence (Halpern, 2003). Like the modal base, the ordering source is subject to underspecification and context dependence. Different ordering sources correspond to different readings of the conditional. Besides normalcy, Kratzer (1981) considers ordering sources that rank worlds according to desires, obligations, and other criteria. Probability

The second approach to dealing with the nonmonotonicity of conditionals does not manipulate the modal base but instead rejects the universal quantification over possible worlds as ill suited for modeling the notion of consequence that speakers employ in interpreting conditionals. On this account, if A then B asserts not that all A-worlds are B-worlds but rather that the conditional probability of B, given A, is high. In other words, the posterior probability of B upon learning A would be high, or, alternatively, a world that is randomly chosen from among the A-worlds would likely be one at which B is true. Different modal bases and ordering sources correspond to different (subjective or objective) probability distributions over possible worlds. Adams (1975) developed a theory of probabilistic entailment in which just those inference patterns that are problematic for the classical account, such as Strengthening of the Antecedent, are no longer predicted to be valid. The intuitive appeal of the probabilistic approach is offset somewhat by the fact that it necessitates a rather profound rethinking of the logical basis of semantic theory. Lewis (1976) showed that a conditional probability cannot in general be interpreted as the probability that a proposition is true, hence that the central premise of the probabilistic account is at odds with the idea that conditionals denote propositions (for detailed discussions see Edgington, 1995; Eells and Skyrms, 1994). Some authors conclude that conditionals do not have truth values (Adams, 1975) or that the conditional probability is only relevant to their use and independent of their truth conditions (Jackson, 1987). Another approach is to

assign nonstandard truth values to conditionals in such a way that the problem is avoided (Jeffrey, 1991; Kaufmann, 2005).
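For concreteness, here is a toy probabilistic model of our own (the probability values are invented) in which P(B|A) is high while P(B|A and C) is zero, mirroring the judgments about (8) and (9):

```python
# Sketch: the probabilistic reading takes 'if A then B' to be acceptable
# when the conditional probability P(B|A) is high. The probability space
# below is invented for illustration.

def prob(event, space):
    return sum(p for w, p in space if event(w))

def cond_prob(B, A, space):
    pa = prob(A, space)
    return prob(lambda w: A(w) and B(w), space) / pa if pa else None

space = [
    ({"heated", "air", "burns"}, 0.49),  # heating normally happens in air
    ({"heated", "vacuum"},       0.01),  # rarely, in a vacuum chamber
    ({"air"},                    0.50),  # no heating at all
]
heated = lambda w: "heated" in w
in_vacuum = lambda w: "vacuum" in w
burns = lambda w: "burns" in w

print(cond_prob(burns, heated, space))                                # 0.98
print(cond_prob(burns, lambda w: heated(w) and in_vacuum(w), space))  # 0.0

# P(B|A) is high while P(B|A and C) is low, so Strengthening of the
# Antecedent fails, matching the judgment that (8) is true and (9) false.
```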

Summary

Kratzer's theory is the most influential one in linguistics. The probabilistic approach has been studied extensively in philosophy and, more recently, artificial intelligence. Many other options have been explored. In addition to the works cited above, for overviews and specific proposals the reader is referred to Bennett (2003); Gärdenfors (1988); Harper and Hooker (1976); Harper et al. (1981); Jackson (1991); Nute (1980, 1984); Sanford (1989); Stalnaker (1984); Veltman (1985); and Woods (1997). It is not always clear whether there are empirical facts of a purely linguistic nature that would decisively favor one approach over another. With such criteria lacking, the choice depends on the purpose of the analysis at hand and other extralinguistic considerations (e.g., assumptions about rational behavior or psychological reality, or tractability in computational modeling).

See also: Counterfactuals; Donkey Sentences; Formal Semantics; Implicature; Inference: Abduction, Induction, Deduction; Logical Consequence; Modal Logic; Possible Worlds; Propositional and Predicate Logic; Truth Conditional Semantics and Meaning.

Bibliography

Adams E (1975). The logic of conditionals. Dordrecht/Boston: D. Reidel.
Athanasiadou A & Dirven R (eds.) (1997). On conditionals again. Amsterdam Studies in the Theory and History of Linguistic Science, vol. 143. Amsterdam/Philadelphia: John Benjamins.
Bennett J (2003). A philosophical guide to conditionals. Oxford: Oxford University Press.
Dancygier B (1998). Conditionals and prediction. Cambridge: Cambridge University Press.
Declerck R & Reed S (2001). Conditionals: a comprehensive empirical analysis. Topics in English Linguistics 37. Berlin/New York: Mouton de Gruyter.
Edgington D (1995). 'On conditionals.' Mind 104(414), 235–329.
Eells E & Skyrms B (eds.) (1994). Probabilities and conditionals: belief revision and rational decision. Cambridge: Cambridge University Press.
Gärdenfors P (1988). Knowledge in flux: modeling the dynamics of epistemic states. Cambridge, MA: MIT Press.
Halpern J Y (2003). Reasoning about uncertainty. Cambridge, MA: MIT Press.
Harper W L & Hooker C A (eds.) (1976). Foundations of probability theory, statistical inference, and statistical theories of science. The University of Western Ontario Series in Philosophy of Science, vol. 1. Dordrecht/Boston: D. Reidel.
Harper W L, Stalnaker R & Pearce G (eds.) (1981). Ifs: conditionals, belief, decision, chance, and time. The University of Western Ontario Series in Philosophy of Science, vol. 15. Dordrecht/Boston: D. Reidel.
Jackson F (1987). Conditionals. Oxford/New York: Basil Blackwell.
Jackson F (ed.) (1991). Conditionals. Oxford: Oxford University Press.
Jeffrey R C (1991). 'Matter-of-fact conditionals.' In The symposia read at the joint session of the Aristotelian Society and the Mind Association at the University of Durham. Supp. vol. 65. 161–183.
Kaufmann S (2005). 'Conditional predictions: a probabilistic account.' To appear in Linguistics and Philosophy.
Kratzer A (1981). 'The notional category of modality.' In Eikmeyer J & Rieser H (eds.) Words, worlds, and contexts. Berlin/New York: Walter de Gruyter. 38–74.
Lewis D (1976). 'Probabilities of conditionals and conditional probabilities.' Philosophical Review 85, 297–315.
Lewis D (1981). 'Ordering semantics and premise semantics for counterfactuals.' Journal of Philosophical Logic 10(2), 217–234.
Nute D (1980). Topics in conditional logic. Dordrecht/Boston: D. Reidel.
Nute D (1984). 'Conditional logic.' In Gabbay D & Guenthner F (eds.) Handbook of philosophical logic, vol. 2: Extensions of classical logic. Dordrecht: D. Reidel. 387–439.
Sanford D (1989). If P, then Q: conditionals and the foundations of reasoning. London/New York: Routledge.
Stalnaker R (1975). 'Indicative conditionals.' Philosophia 5, 269–286.
Stalnaker R (1984). Inquiry. Cambridge, MA: MIT Press/Bradford Books.
Traugott E C, ter Meulen A, Snitzer Reilly J & Ferguson C A (eds.) (1986). On conditionals. Cambridge: Cambridge University Press.
Veltman F (1985). Logics for conditionals. Ph.D. diss., University of Amsterdam.
Woods M (1997). Conditionals. Oxford: Clarendon Press.

Connectives in Text

H Pander Maat and T Sanders, Universiteit Utrecht, Utrecht, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Connectives are one-word items or fixed word combinations that express the relation between clauses, sentences, or utterances in the discourse of a particular speaker. More generally, a connective indicates how its host utterance is relevant to the context. Often, discourse segments are not explicitly introduced by connectives. However, languages provide extensive repertoires of connective expressions. This article reviews work on connectives in monological stretches of discourse; a considerable part of this work, which has become an important topic for linguists, discourse analysts, and psychologists, has also been motivated by a more general interest in the structure of discourse and of linguistic communication. The category of ‘connectives’ differs from that of ‘discourse markers’ in several respects (Schourup, 1999). Whereas discourse markers are commonly regarded as not affecting the truth conditions of their host sentences and are typically only loosely connected to their host sentence in terms of syntactic structure, connectives may be either truth-functional or non-truth-functional and may be tightly integrated in the syntactic structure of the sentence. Common examples of discourse markers are anyway and well.

Common examples of connectives are conjunctions (and, because, so), but in many languages adverbial expressions may also relate sentences, such as hence in English, daarom in Dutch, or darum in German. The categories of discourse markers and connectives overlap for non-truth-functional conjunctions such as so. Some authors have argued that discourse markers may evolve out of connectives (Traugott, 1995). For instance, Günthner (2000) has shown that the German subordinating conjunction obwohl is increasingly used for introducing sentences with main clause word order; this use no longer expresses the usual concessive relation but rather a restriction on the validity of the preceding utterance (see also Concessive Clauses). Again, the degree of syntactic integration seems to be the defining feature here for discourse marker status. In this article, the discussion is confined to written discourse. Interesting as it may be, analyses of the use of connectives in conversations are not pursued, other than mentioning the work of Günthner (2000) and comparing the work of Schiffrin (1987), Ford (1994), and Couper-Kuhlen (1996). Two questions regarding connectives are addressed:

1. What do they mean and how can their meanings be classified?
2. What role do they play in language learning, language acquisition, and discourse processing?


The Semantics of Connectives

The meaning of connectives has been investigated by linguists from different theoretical traditions, such as relevance theory (Blakemore, 1988a; Carston, 1993), discourse representation theory (Jayez and Rossari, 2001; Txurruka, 2003; see also Discourse Representation Theory), systemic functional linguistics (Halliday and Hasan, 1976; Martin, 1992), and cognitive linguistics (Pander Maat, 1999; Pit, 2003). Some linguists concentrate on a particular connective (e.g., Txurruka, 2003) and others compare or classify connectives (e.g., Knott and Sanders, 1998). A much debated issue in analyses of particular connectives is whether the different contexts of use for a connective can be subsumed under a general, abstract meaning (the minimalist view) or whether different uses should be distinguished (the maximalist position). Consider and (Carston, 1993; Pander Maat, 2001; Txurruka, 2003). The general issue here is whether and has any specific content beyond its traditional truth-functional interpretation of '&'. Authors who answer this question affirmatively propose fairly abstract meanings; for instance, Pander Maat (2001) and Txurruka (2003) seemed to agree that and signals the existence of a common discourse topic for the related utterances, although they offered different conceptualizations for the notion of discourse topic and used different methodologies. Connectives other than and – for instance, but – have an even wider range of uses. As an example of the minimalist approach, Janssen (1995) proposed such a general meaning for Dutch maar ('but'), which covers meanings that are lexicalized differently in other languages: German has both aber and sondern, and Spanish has pero and sino; the second item expresses only so-called substitutive adversativity, as in he is not rich but poor (see Rudolph, 1996: 131–144). Janssen argued that this substitutive interpretation shares the notion of restriction with the more general adversative meaning in he is short but brave. Another example of a unification attempt is provided by Vlemings (2003), who explored the relation between discourse marker uses of French donc ('so') and its use as a marker of argumentative inference. Unlike the inferential uses, the discourse marker donc may be used in isolated utterances – that is, without referring back to the linguistic context. For instance, a mother may suddenly say tais-toi donc! to her noisy child in the theater. Vlemings argued that on a higher level of abstraction, even the discourse marker use may fit into the inferential scheme of modus ponens, when the extralinguistic context is considered as providing the minor premise.

It is quite difficult to formulate unified accounts of various extensions of connectives in such a way that they can be tested empirically. Typically, unification proposals allow for interactions between contextual features and the categorical meaning that may result in quite specific and very subtle interpretations. In our view, corpus data and reader response data should be used to lend some empirical substance to the postulated interpretational procedures. After all, the decisive issue is not whether some unifying semantic bridge can be constructed between various uses: this is only to be expected, given that they are diachronically related. What really counts is whether the postulated abstract meaning is relevant for language users producing and understanding particular uses of the connective. Another approach to the analysis of connectives compares and classifies connective meanings in terms of the discourse relations they may express. This line of work takes as its starting point a classification of coherence relations in discourse. For example, Sanders et al. (1992) have proposed several cross-cutting parameters to order the set of coherence relations. Consider the following examples:

(1) Gareth gets on well with Betty. He loves disco music; she loves it too.
(2) Gareth and Edith have different musical tastes. He loves disco music, while she despises it.
(3) Gareth grew up during the 1970s, so he loves disco music.
(4) Edith grew up during the 1970s, but she despises disco music.

The relations in Examples (1) and (2) have something in common, because they both compare two individuals. The relations in Examples (3) and (4) can also be argued to have something in common, because they both rely on a piece of presupposed knowledge, namely, that people who grew up during the 1970s are likely to love disco music. Note also that the difference between Examples (1) and (2) resembles the difference between Examples (3) and (4). If you negate the last clause in Examples (1) and (3), the resulting relations are the same as those in Examples (2) and (4), respectively, and the conjunctions must be changed to restore coherence. We can hypothesize that the relations in these examples are composite constructs, defined in terms of two primitive bivalued parameters: one parameter might distinguish between comparative (or additive) and causal relations, and the other might distinguish between positive and negative relations. Elaborating this simple account of relation semantics involves framing suitable definitions for these and other parameters. Combinations of parameter values yield a considerable range of relational types. One of these further parameters might be called the discourse level of the relation: many authors have proposed a distinction between a semantic (or ideational, or external) level of relations and a pragmatic (or internal) level (the first of them being van Dijk (1979) and Halliday and Hasan (1976); see also Redeker (1990) and Sanders et al. (1992)). 'Semantic' relations hold between the propositional content of the two related discourse segments, i.e., between their locutionary meanings; 'pragmatic' relations, on the other hand, involve the illocutionary meaning of one or both of the related segments. To illustrate, consider the following examples:

(5) The neighbors left for Paris last Friday. So they are not at home.
(6) The lights in the neighbors' living room are out. So they are not at home.

The relation in Example (5) can be interpreted as semantic because it connects two events in the world; our knowledge allows us to relate the segments as coherent in the world. A relation like that in Example (5) could be paraphrased as 'the cause in the first segment (S1) leads to the fact reported in the second segment (S2)' (Sanders, 1997). In Example (6), however, the two discourse segments are related because we understand the second part as a conclusion from evidence in the first, and not because there is a causal relation between two states of affairs in the world: it is not because the lights are out that the neighbors are not at home. The causal relation in Example (6) could be paraphrased as 'the description in S1 gives rise to the conclusion or inference formulated in S2'. Hence the relation in this example is pragmatic. The use of the connective so in both examples suggests that it can operate at different levels of representation within a discourse, indicating a causal relation either at the level of propositional content or at the level of speaker intentions or illocutionary force (see the various contributions to Knott et al. (2001)). A classification like this may be used both for analyzing discourse relations and for analyzing the meanings of connectives. One of the questions it leads to is: to what degree do connectives have specialized meanings, in that they express specific values of classificational parameters? For instance, but expresses all kinds of contrastive (negative) relations (Spooren, 1989), whereas although seems to specialize in causal negative relations (Pander Maat, 1998). A classification also invites the analysis of semantic contrasts and parallels. For instance, when but is taken to indicate negativity, may and be considered to be its parallel on the positive side? Finally, we may ask whether different parameters are entirely independent. For instance, the orthogonal design of the classification implies that both positive and negative relations may be semantic and pragmatic. However, an inspection of negative relations when indicated by connectives shows that expectations are being denied here. Expectations can be seen as a kind of inference of speakers and hearers, so that it may be asked whether negative relations are not pragmatic (epistemic) by definition (Pander Maat, 1999).

We now focus on the particular relational subdomain of positive causal relations. There has been a lively debate about the degree to which causal connectives indicate the discourse level of the causal relation. As mentioned previously, many authors have presented a dichotomy between semantic and pragmatic levels of relations. Some authors propose finer distinctions, such as Sweetser's (1990; see also Verstraete, 1998) threefold distinction between content relations (roughly the same as semantic relations; see Examples (7) and (8)), epistemic relations (see Example (9)), and speech act relations (see Example (10)):

(7) The baby is crying because it is hungry.
(8) I went home because I felt tired.
(9) The baby is hungry, because it is crying.
(10) What are you doing tonight, because there is a good movie on.

That is, the level of coherence may be further distinguished according to whether inferences or speech acts are being related to earlier information. The notion of epistemic coherence has also been approached in terms of polyphony (Ducrot, 1983) and mental spaces (Verhagen, 2000; Dancygier and Sweetser, 2000). A number of publications have dealt with the question whether connectives specialize in expressing semantic or pragmatic relations. In English, because may express both interpretations, whereas since seems to be specialized for pragmatic interpretations. Similar observations have been made for German, French, and Dutch connectives (see Pit, 2003), with the difference that these languages have a more differentiated repertoire of connectives than English seems to have. The same kinds of differences have been observed for forward-looking causal connectives such as English that's why and so, Dutch daarom and daardoor, and French de ce fait and c'est pourquoi (on the one hand) and alors and donc (on the other) (Pander Maat and Degand, 2001; Jayez and Rossari, 2001). The general picture emerging from these studies is that connectives do specialize, although their semantic interrelations are more complex than a simple one-to-one assignment from connectives to classes of coherence relations would suggest (Knott and Dale, 1994; Knott and Sanders, 1998). Besides these empirical issues, researchers in this field have embarked on a conceptual debate on how this distinction should be defined. As another approach to the differences exemplified in Examples (7)–(10), a number of authors have tried to provide a unified analysis of discrete distinctions, such as semantic-pragmatic and content-epistemic-speech act, in terms of a scalar feature, i.e., degree of subjectivity (Pander Maat and Sanders, 2001; Pander Maat and Degand, 2001; Pit, 2003). This proposal was motivated by a number of recurrent problems that could not be solved in terms of discrete twofold or threefold distinctions. For one thing, there appear to be two kinds of content causal relations: volitional causality concerns reasons for actions (see Example (8)) and nonvolitional causality concerns causes and consequences. Many languages encode this distinction between volitional and nonvolitional causality in their connective repertoire, but this distinction cannot be accounted for in the current frameworks. Second, many connectives that may express epistemic relations may also express volitional relations, but not nonvolitional relations. This suggests that volitional and epistemic relations have something in common that is lacking in nonvolitional relations. Third, some connectives (e.g., Dutch daarom and French c'est pourquoi) may be used to express epistemic relations based on forward causality (see Example (11)) but not epistemic relations based on backward causality ('abductive relations'; see Example (12)). And connectives that may express abductions (e.g., Dutch dus, French donc, and English so) may also express speech act relations (see Example (10)):

(11) The baby misses its mother. It will probably start crying soon.
(12) The baby misses its mother. It has been crying for half an hour now.

Figure 1 The scale of subjectivity.

Observations such as these have led to the hypothesis that connective meanings encode different degrees of subjectivity. Subjectivity is defined here as the degree to which a subjective participant is implicitly responsible for the causal relation. The present speaker is taken to represent the maximal degree of subjectivity for a participant, so that subjectivity can also be called the degree of speaker involvement. The scale of subjectivity seems to run as shown in Figure 1. The subjectivity hypothesis has been supported by corpus findings indicating that, for instance, Dutch dus and French donc ('so') systematically occur more in subjective environments than their counterparts daarom and c'est pourquoi do, though there is a certain overlap (Pander Maat and Degand, 2001). In a larger study, Pit (2003) found analogical subjectivity differences between omdat and want in Dutch, parce que and car in French, and weil and denn in German. Of course, the subjectivity of the connective context is not by itself evidence for a claim concerning the subjective meaning of the connective, though it certainly points in a certain direction. Hence it is important to back these claims up further with experimental evidence. For instance, Pander Maat and Sanders (2001) had participants choose the most appropriate connectives to fit into contexts that differed on subjectivity variables. Such a scalar conceptualization allows for a number of specific predictions concerning the semantics of connectives. For instance, connectives should occupy a contiguous area on the scale: we should not encounter connectives that encode volitional relations and noncausal epistemics, but not causal epistemics. An important advantage of such an approach is that it may explain how the choice of a connective may contribute to the relational interpretation. For instance, somebody motivating his actions using donc strongly presumes that his decisions follow a generally acceptable pattern of reasoning.

Relevance theorists have chosen a quite different approach for handling semantic differences such as the ones in Examples (7)–(10). These theorists are critical of the entire notion of connectives 'expressing' discourse relations (Blakemore, 1988b; Rouchota, 1998). Instead, they propose two distinctions. The first is between conceptual and procedural meaning. Conceptual connectives such as because, like most words, encode constituents of conceptual representations of utterances. Other causal expressions, such as because of and bring about, do exactly the same. By contrast, 'procedural' connectives such as but and moreover encode information about the inferential process needed to provide an optimally relevant interpretation of an utterance. The second distinction is the one between truth-conditional and non-truth-conditional meaning (see also Truth Conditional Semantics and Meaning). Generally, conceptual connectives affect the truth conditions of utterances. An utterance X because Y differs in its truth conditions from X and Y, because it is true only if Y causes X. But procedural connectives need not affect truth conditions. For instance, but is taken to suggest some contrast between the conjoined utterances. But is this contrast part of the truth conditions? This does not seem to be the case, because the truth conditions of the following two complex utterances do not differ:

(13) If Tom is here and Anne isn't then we cannot play bridge.
(14) If Tom is here but Anne isn't then we cannot play bridge.

Both utterances state that we cannot play bridge if (a) Tom is here and (b) Anne is not here. It seems only natural that procedural connectives do not contribute to truth conditions, since the question of truth or falsity does not arise for the computations they encode: the computation may just run successfully or fail. Yet, other procedural connectives do not give such clear results in the test of conditional embedding. For instance, so has been dubbed procedural by relevance theorists (Blakemore, 1988a), but in the following examples it certainly seems to affect truth conditions in that it introduces a cause–consequence relation into the propositional interpretation:

(15) The mushrooms served at the restaurant were poisonous. So, Bill died.
(16) If the mushrooms served at the restaurant were poisonous and so Bill died, then his family should be compensated.

Conversely, conceptual connectives have speech act uses in which their effects on truth conditions are less than clear:

(17) Will Mary resign? Because she keeps on fighting with the boss.

Rouchota (1998: 39) proposed to deal with this kind of use in terms of what relevance theorists have called ‘higher level explicatures,’ i.e., a secondary proposition (see the italicized part in Example (18)) framing the primary one:

(18) I ask you whether Mary will resign because she keeps on fighting with the boss.

By considering the secondary proposition as truth evaluable in its own right, one could rescue the notion that conceptual connectives affect truth conditions. To sum up, the relevance theoretic approach tries to capture the differences in the 'levels of interpretation' in terms of two dichotomies, ending up with four categories of connectives:

• Conceptual/basic proposition (e.g., because in Example (7))
• Conceptual/higher level explicature (e.g., because in Examples (10) and (17))
• Procedural/affecting truth conditions (e.g., so in Example (15))
• Procedural/not affecting truth conditions (e.g., but in Example (14))

How does this approach relate to the subjectivity approach sketched previously? There is certainly a relation between subjective involvement and procedural meaning, in that procedures focus on inferential computations that have no counterpart in external reality but are related to cognitive operations performed by subjects of consciousness. A major difference between the relevance theoretic approach and the work reviewed previously is that relevance theorists assume that the semantic status of connectives is fixed: because encodes a concept, whereas so encodes a procedure, no matter their context of use. The theorists on subjectivity, including others such as Jayez and Rossari (2001), consider connectives as flexible operations that may take different kinds of entities as their input (real-world situations, beliefs/epistemic states, and speech acts), although not every input is allowed. That is, so may operate both on epistemic entities and real-world situations, but imposes constraints on the types of input (in the subjectivity approach, these constraints relate to subjectivity) and on the kinds of relations between the inputs. It is unclear how such flexibility could be handled in the relevance theoretic view. Another difference between the two approaches is that the work on subjectivity uses corpus research as its main methodology, whereas relevance theorists work with constructed examples.
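The contiguity prediction of the subjectivity approach can be made concrete in a small sketch of our own. The scale positions follow the discussion above, but the lexical entries are simplified illustrations, not the authors' corpus results:

```python
# Sketch: relation types ordered on a scale of speaker involvement; each
# connective licenses a contiguous interval on that scale. The entries
# below are invented examples for illustration only.

SCALE = ["nonvolitional", "volitional", "epistemic", "speech_act"]

# Hypothetical lexicon: (lowest, highest) licensed scale positions.
CONNECTIVES = {
    "daardoor (Dutch)": ("nonvolitional", "nonvolitional"),
    "daarom (Dutch)":   ("volitional", "epistemic"),
    "dus (Dutch)":      ("epistemic", "speech_act"),
    "so (English)":     ("volitional", "speech_act"),
}

def licenses(connective, relation_type):
    lo, hi = CONNECTIVES[connective]
    i = SCALE.index(relation_type)
    return SCALE.index(lo) <= i <= SCALE.index(hi)

# The prediction: no connective licenses two positions on the scale while
# excluding one in between; modeling meanings as (lo, hi) intervals builds
# this contiguity in by construction.
for lo, hi in CONNECTIVES.values():
    assert SCALE.index(lo) <= SCALE.index(hi)

print(licenses("dus (Dutch)", "speech_act"))      # True
print(licenses("daardoor (Dutch)", "epistemic"))  # False
```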

Connectives in Language Development and Discourse Processing

If categorizations of coherence relations and connectives indeed have cognitive significance, they should show relevance in areas such as language development – both diachronic (grammaticalization (cf. Sweetser, 1990; Traugott, 1988; Traugott and Heine, 1991)) and synchronic (language acquisition) – and in discourse processing. In all three areas, substantial studies are under way, and there already exists suggestive evidence. Research on first-language acquisition suggests that the order in which children acquire connectives reflects increasing complexity, which can be accounted for in terms of the relational categories mentioned previously: additives (and) before causals (because) and positives (and, because) before negatives (but, although) (Bloom, 1991; Spooren, 1997; Evers-Vermeul, 2005). However, concerning the level of the relations, the results are less clear. In a corpus of naturalistic data, Kyratzis et al. (1990) found that speech act causal relations are used frequently even at a very early age, whereas epistemic causal relations are acquired very late (they hardly occur, even in the oldest age group (6.7–12.0 years) studied by Kyratzis et al.). An interesting issue is how to relate the relational idea – that cognitive complexity of coherence relations predicts acquisition order – to so-called usage-based or input-based accounts of language acquisition (Tomasello, 2000; Evers-Vermeul, 2005).

In research on diachronic development, the classification categories of connectives also prove to be relevant. Sweetser (1990) originally introduced her three-domain distinction to cover the semantics of a number of related phenomena such as verbs of perception, modal elements, and connectives. She argued that these linguistic elements have developed diachronically from a content meaning through metaphorical extension to the more subjective epistemic and speech act domains. Examples of such developments in the realm of connectives have been presented by König and Traugott (1988) and Traugott (1995): the connective still originally meant 'now as formerly'; the change is that simultaneity has become denial of expectation; while developed from a marker exclusively expressing simultaneity (at the time that) to a marker that can nowadays be used to express contrast and concession; German weil had the same root meaning, but developed into a causal connective. Traugott (1995: 31) considered this a case of 'subjectification': "meanings become increasingly based in the speaker's subjective belief state/attitude toward the proposition." Compare Examples (19) and (20):

(19) Mary read while Bill sang.
(20) Mary liked oysters while Bill hated them.

Is there any psycholinguistic work on discourse processing showing the relevance of these ideas of connectives as processing instructors? The function of linguistic markers has been examined in various online processing studies; these studies have primarily aimed at the investigation of the processing role of the signals per se, rather than at more sophisticated ideas such as the exact working of 'space building'. The experimental work typically includes the comparison of reading times of identical textual fragments that have different linguistic signals preceding them. Studies on the role of connectives and signaling phrases show that these linguistic signals affect the construction of the text representation (cf. Cozijn, 2000; Millis and Just, 1994; Noordman and Vonk, 1997; Sanders and Noordman, 2000). Millis and Just (1994), for instance, investigated the influence of connectives such as because by questioning participants immediately after they read a sentence. After participants had read two clauses that were either linked or not linked by a connective, they judged whether a probe word had been mentioned in one of the clauses. The recognition time to probes from the first clause was consistently faster when the clauses were linked by a connective. The presence of the connective also led to faster and more accurate responses to comprehension questions. These results suggest that the connective does influence the representation immediately after reading. Using eye movement techniques, Cozijn (2000) studied the exact location of the various effects of using because. Using because implies making a causal link between the related segments. Cozijn found that words immediately following the connective were read faster, whereas reading slowed down for words at the end of the clause, compared to the no-connective condition. This suggests that connectives help to integrate linguistic material (thus leading to faster reading when the connective is present), and at the same time they instruct the reader to draw a causal inference (thus slowing down clause-final reading). According to Noordman and Vonk (1997), these findings illustrate the integration and inference functions of text connectives.

In sum, several studies show the influence of linguistic markers on text processing. However, studies of the influence on text representation show a much less consistent pattern (for an overview, see Degand et al., 1999; Sanders and Noordman, 2000; Degand and Sanders, 2002). On the one hand, some results show that linguistic marking of coherence relations improves the mental text representation. This becomes apparent from better recall performance, a faster and more accurate response on a prompted recall task, a faster response on a verification task, and better answers on comprehension questions. On the other hand, a number of studies have indicated that linguistic markers do not have this facilitating role, as shown by a lack of effect on the amount of information recalled or a lack of better answers on multiple-choice comprehension questions. Some authors even claim a negative impact of connectives on text comprehension (Millis et al., 1993). There are several plausible explanations for the reported contradictions (Degand and Sanders, 2002). One is that the category of linguistic markers under investigation is not well defined. For instance, in the signaling literature, different types of signals seem to be conflated. A second explanation is that some experimental methods, such as the recall task, are simply too global to measure the effect of relational markers. Other methods, such as recognition, question answering, or sorting (Kintsch, 1998), might be more sensitive in this respect. Indeed, Degand et al. (1999) and Degand and Sanders (2002) provided evidence for the claim that under average conditions (i.e., in natural texts of normal text length and with a moderate number of connectives), causal connectives do contribute significantly to the comprehension of the text. In sum, connectives and cue phrases seem to affect both the construction process and the representation, once the text has been processed, but the effects are rather subtle and specific measurement techniques are needed to assess them. Having briefly discussed the role of connectives (and other signaling phrases) in discourse processing, a preliminary conclusion might be that they can indeed be treated as linguistic markers that instruct readers how to connect the new discourse segment with the previous one (Britton, 1994). Given what is known about text processing, it is plausible to expect that text connectives also play a role in discourse production. Bestgen (1998) argued that this is the case, and showed how connectives such as and can be traces of production difficulties that occur when a new topic is introduced. This illustrates the segmentation function of connectives.

Conclusion

The various approaches to the study of connectives represent a perplexing variety of conceptual frameworks and methodologies. Consider, for instance, the issue of what counts as data for the various approaches. Some theorists rely entirely on intuitions regarding the appropriateness of connectives in constructed examples. Others concentrate on naturally occurring language use, but without actually testing their hypotheses against corpus data. Still others present corpus data that have been coded and quantified. And finally, researchers in discourse processing and production tend to prefer experimentally elicited reactions to texts, or experimentally elicited language use.

Each kind of data has strengths and weaknesses. Constructed examples, for instance, can be disputed for reasons of biased sampling; but only constructed examples may make us aware of what cannot be done with connectives. In-depth analyses of naturally occurring discourse may provide insights into the intricate interaction between semantic features and interactional conditions, but do not enable us systematically to tease out the contributions of these two factors. Corpus research may compare larger numbers of connective uses in terms of both linguistic and contextual factors, but every corpus analyst in this field acknowledges that the interpretation of discourse relations may differ among coders, so that substantial energy has to be spent in ensuring a satisfactory degree of intercoder agreement. Finally, experimental research into connective effects is a superior way of supporting causal models, but the often very short texts used in these experiments have sometimes rightly been criticized for their lack of external validity (Graesser et al., 1997).

From a methodological point of view, it can be concluded that the integration of cognitively plausible theories with empirical testing is the ultimate aim, a situation that has not yet been realized (Sanders and Spooren, in press). One way to realize this goal is to proceed with the thorough investigation of corpora of actual language use. Digital corpora enable researchers to do this on a larger scale than ever before. It is especially important to extend corpus research in the direction of spoken discourse. The text-linguistic results presented here are largely based on the study of written discourse. To what extent can they be generalized to spoken discourse? And what do the specific insights from the linguistic analysis of spoken discourse add to the picture we have so far?

Integration of text-linguistic and psycholinguistic insights is a second way to realize the goal of interaction between theory and empirical testing. The subtle semantic distinctions proposed by linguists on the one hand, and the processing effects revealed by psycholinguistic research on the other, still need to be linked. For instance, the general processing effects of because and and have been investigated (e.g., Millis et al., 1995), but we are not aware of any experimental research into the processing instructions encoded by connectives that differ in specificity (e.g., but versus although), or between connectives dubbed conceptual and procedural in relevance theory (e.g., because versus after all), or into the difference between connectives and corresponding lexical signals (e.g., because versus the reason is that …). Psycholinguistic work based on such linguistically sophisticated analyses would lead to further progress in the research field of connectives.

See also: Concessive Clauses; Conditionals; Counterfactuals; Discourse Domain; Discourse Parsing, Automatic; Discourse Representation Theory; Discourse Semantics; Presupposition; Projection Problem; Selectional Restrictions; Syntax-Semantics Interface; Truth Conditional Semantics and Meaning.

Bibliography

Bestgen Y (1998). ‘Segmentation markers as trace and signal of discourse structure.’ Journal of Pragmatics 29, 753–763.
Blakemore D (1988a). ‘So as a constraint on relevance.’ In Kempson R (ed.) Mental representation: the interface between language and reality. Cambridge: Cambridge University Press. 183–195.
Blakemore D (1988b). ‘The organization of discourse.’ In Newmeyer F (ed.) Linguistics: the Cambridge survey, vol. IV. Cambridge: Cambridge University Press. 229–250.
Bloom L (1991). Language development from two to three. Cambridge: Cambridge University Press.
Britton B K (1994). ‘Understanding expository text: building mental structures to induce insights.’ In Gernsbacher M A (ed.) Handbook of psycholinguistics. San Diego: Academic Press. 640–674.
Carston R (1993). ‘Conjunction, explanation and relevance.’ Lingua 90, 27–48.
Couper-Kühlen E (1996). ‘Intonation and clause combining in discourse: the case of because.’ Pragmatics 6, 389–426.
Cozijn R (2000). Integration and inference in understanding causal sentences. Ph.D. diss., Tilburg University.
Dancygier B & Sweetser E (2000). ‘Constructions with if, since, and because: causality, epistemic stance, and clause order.’ In Couper-Kühlen E & Kortmann B (eds.) Cause, condition, concession and contrast: cognitive and discourse perspectives. Berlin: Mouton de Gruyter. 111–142.
Degand L & Sanders T (2002). ‘The impact of relational markers on expository text comprehension in L1 and L2.’ Reading and Writing 15, 739–757.
Degand L, Lefèvre N & Bestgen Y (1999). ‘The impact of connectives and anaphoric expressions on expository discourse comprehension.’ Document Design 1, 39–51.
Ducrot O (1983). ‘Puisque: essai de description polyphonique. Analyses grammaticales du français.’ Revue Romane 24, 166–185.
Evers-Vermeul J (2005). Connections between form and function of Dutch connectives. Change and acquisition as windows on form–function relations. Ph.D. diss., Utrecht Institute of Linguistics OTS, Universiteit Utrecht.
Ford C E (1994). ‘Dialogic aspects of talk and writing: because on the interactive-edited continuum.’ Text 14, 531–554.
Graesser A C, Millis K K & Zwaan R A (1997). ‘Discourse comprehension.’ In Spence J, Darley J & Foss D (eds.) Annual review of psychology 48. Palo Alto, CA: Annual Reviews Inc. 163–189.
Günthner S (2000). ‘From concessive connector to discourse marker: the use of obwohl in everyday German interaction.’ In Couper-Kühlen E & Kortmann B (eds.) Cause, condition, concession, contrast, cognitive and discourse perspectives. New York: Mouton de Gruyter. 439–468.
Halliday M A K & Hasan R (1976). Cohesion in English. London/New York: Longman.
Janssen T A J M (1995). ‘Heterosemy or polyfunctionality? The case of Dutch maar ‘but, only, just’.’ In Shannon T F & Snapper J P (eds.) The Berkeley conference on Dutch linguistics 1993. Lanham, MD: University Press of America. 71–85.
Jayez J & Rossari C (2001). ‘The discourse-level sensitivity of consequence discourse markers in French.’ Cognitive Linguistics 12, 275–290.
Kintsch W (1998). Comprehension. A paradigm for cognition. Cambridge: Cambridge University Press.
Knott A & Dale R (1994). ‘Using linguistic phenomena to motivate a set of coherence relations.’ Discourse Processes 18, 35–62.
Knott A & Sanders T (1998). ‘The classification of coherence relations and their linguistic markers: an exploration of two languages.’ Journal of Pragmatics 30, 135–175.
Knott A, Sanders T & Oberlander J (eds.) (2001). Levels of representation in discourse relations. Special issue of Cognitive Linguistics. Berlin: Mouton de Gruyter.
König E & Traugott E C (1988). ‘Pragmatic strengthening and semantic change: the conventionalizing of conversational implicature.’ In Hüllen W & Schulze R (eds.) Understanding the lexicon: meaning, sense and world knowledge in lexical semantics. Tübingen: Max Niemeyer Verlag. 110–124.
Kyratzis A, Guo J & Ervin-Tripp S (1990). ‘Pragmatic conventions influencing children’s use of causal constructions in natural discourse.’ In The proceedings of the 16th annual meeting of the Berkeley Linguistics Society. Berkeley, CA: BLS. 205–214.
Martin J R (1992). English text. System and structure. Philadelphia/Amsterdam: John Benjamins.
Millis K K & Just M A (1994). ‘The influence of connectives on sentence comprehension.’ Journal of Memory and Language 33, 128–147.
Millis K K, Graesser A C & Haberlandt K (1993). ‘The impact of connectives on the memory for expository texts.’ Applied Cognitive Psychology 7, 317–339.
Millis K K, Golding J M & Barker G (1995). ‘Causal connectives increase inference generation.’ Discourse Processes 20, 29–49.
Noordman L G M & Vonk W (1997). ‘The different functions of a conjunction in constructing a representation of the discourse.’ In Costermans J & Fayol M (eds.) Processing interclausal relationships. Studies in the production and comprehension of text. Mahwah, NJ: Lawrence Erlbaum Assoc. 75–93.
Pander Maat H L W (1998). ‘Classifying negative coherence relations on the basis of linguistic evidence.’ Journal of Pragmatics 30, 177–204.
Pander Maat H L W (1999). ‘The differential linguistic realization of comparative and additive coherence relations.’ Cognitive Linguistics 10, 147–184.
Pander Maat H L W (2001). ‘Unstressed ‘en/and’ as a marker of joint relevance.’ In Sanders T, Schilperoord J & Spooren W (eds.) Text representation: linguistic and psycholinguistic aspects. Amsterdam: John Benjamins. 197–230.
Pander Maat H L W & Degand L (2001). ‘Scaling causal relations and connectives in terms of speaker involvement.’ Cognitive Linguistics 12, 211–245.
Pander Maat H L W & Sanders T J M (2001). ‘Subjectivity in causal connectives: an empirical study of language in use.’ Cognitive Linguistics 12, 247–273.
Pit M (2003). How to express yourself with a causal connective. Amsterdam and New York: Editions Rodopi B.V.
Redeker G (1990). ‘Ideational and pragmatic markers of discourse structure.’ Journal of Pragmatics 14, 367–381.
Redeker G (1991). ‘Review article: linguistic markers of discourse structure.’ Linguistics 29, 1139–1172.
Rouchota V (1998). ‘Connectives, coherence and relevance.’ In Rouchota V & Jucker A H (eds.) Current issues in relevance theory. Amsterdam/Philadelphia: John Benjamins. 11–58.
Rudolph E (1996). Contrast. Adversative and concessive expressions on sentence and text level. Berlin and New York: Walter de Gruyter.
Sanders T (1997). ‘Semantic and pragmatic sources of coherence: on the categorization of coherence relations in context.’ Discourse Processes 24, 119–147.
Sanders T J M & Noordman L G M (2000). ‘The role of coherence relations and their linguistic markers in text processing.’ Discourse Processes 29, 37–60.
Sanders T & Spooren W (in press). ‘Discourse and text structure.’ In Geeraerts D & Cuykens H (eds.) Handbook of cognitive linguistics. Oxford: Oxford University Press.
Sanders T, Spooren W & Noordman L (1992). ‘Coherence relations in a cognitive theory of discourse representation.’ Cognitive Linguistics 4, 93–133.
Schiffrin D (1987). Discourse markers. Cambridge: Cambridge University Press.
Schourup L (1999). ‘Discourse markers.’ Lingua 107, 227–265.
Spooren W P M (1989). Some aspects of the form and interpretation of global contrastive coherence relations. Ph.D. diss., Nijmegen University, The Netherlands.
Spooren W (1997). ‘The processing of underspecified coherence relations.’ Discourse Processes 24, 149–168.
Sweetser E E (1990). From etymology to pragmatics. Metaphorical and cultural aspects of semantic structure. Cambridge: Cambridge University Press.
Tomasello M (2000). ‘First steps toward a usage-based theory of language acquisition.’ Cognitive Linguistics 11, 61–82.
Traugott E C (1988). ‘Pragmatic strengthening and grammaticalization.’ Proceedings of the Berkeley Linguistics Society 14, 406–416.
Traugott E C (1995). ‘Subjectification in grammaticalization.’ In Stein D & Wight S (eds.) Subjectivity and subjectivisation: linguistic perspective. Cambridge: Cambridge University Press. 31–54.
Traugott E C & Heine B (eds.) (1991). Approaches to grammaticalization. Amsterdam: Benjamins.
Txurruka I G (2003). ‘The natural language conjunction and.’ Linguistics and Philosophy 26, 255–285.
van Dijk T A (1979). ‘Pragmatic connectives.’ Journal of Pragmatics 3, 447–456.
Verhagen A (2000). ‘Concession implies causality, though in some other space.’ In Couper-Kühlen E & Kortmann B (eds.) Cause, condition, concession, contrast. Cognitive and discourse perspectives. New York: Mouton de Gruyter. 361–380.
Verstraete J C (1998). ‘A semiotic model for the description of levels in conjunction: external, internal-modal and internal-speech functional.’ Functions of Language 5, 179–211.
Vlemings J (2003). ‘The discourse use of French donc in imperative sentences.’ Journal of Pragmatics 35, 1095–1112.

Connotation

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

Based on the distinction made by medieval schoolmen such as Duns Scotus (ca. 1300) between connotatum and significatum, Mill (1843) contrasts connotation with denotation much as Frege (1892) contrasts Sinn with Bedeutung and later writers sense with reference and intension with extension – although these
contrastive pairs are not quite synonymous with one another. There is, however, another use of connote that distinguishes it from sense and intension. It is similar to what Leech (1981) calls affective meaning: ‘The connotations of a language expression are pragmatic effects that arise from encyclopedic knowledge about its denotation (or referent) and also from experiences, beliefs, and prejudices about the contexts in which the expression is typically used.’


Connotations vary between contexts and speech communities independently of sense, denotation, and reference. For example, Mike and Michael can have the same reference but different connotations. Just as John is an unsuitable name for your new-born daughter, so is Springtime in Paris an inappropriate name for a 1200-cc Harley-Davidson motorbike or an auto-repair shop. Wheels and Deals might be a good name for a used car mart, but not for a new strain of corn or for a maternity boutique. People are well aware of these facts, and marketing folk exploit such knowledge to the full. Lehrer (1992b) reviewed the proper names of car models, rock bands, beauty salons, streets, and university buildings and found that baptisms (Kripke, 1972) are mostly systematic and that each genre develops its own naming themes and styles based on connotation. Take the different connotations of (1) and (2):

(1) Tom’s dog killed Jane’s rabbit
(2) Tom’s doggie killed Jane’s bunny

As Gazdar (1979: 3) pointed out (without, incidentally, mentioning connotation), in (2) the speaker “is either a child, someone posing as a child, someone who thinks that they are addressing a child, or someone posing as someone who thinks that they are addressing a child.” A dog can be referred to by any of the nouns in (3).

(3) dog, dish-licker, bow-wow, cur, mutt, mongrel, whelp, hound

Dish-licker smacks of dog-racing jargon; and bow-wow is either racing slang or baby-talk (compare gee-gee). Cur is pejorative, along with mutt, mongrel, and whelp, which have additional senses, as does hound, except that it connotes a noble animal. Dog, however, connotes nothing in particular, being the unmarked lexeme among the others. Because of the blocking principle (Aronoff, 1976; Lehrer, 1992a), it is rare to find words that are synonymous enough that they can substitute for one another in every context; each takes on distinctive connotations from the various contexts in which it is used.

The effects of connotation can be either euphemism or its opposite, dysphemism (Bolinger, 1980; Allan and Burridge, 1991). For instance, racist dysphemisms occur when a speaker refers to or implicates the hearer or some third person’s race, ethnicity, or nationality in such terms as to cause a face affront. Among the racist dysphemisms of English are frog for a French person, kraut and hun for a German, chink for a Chinese person, and slant(ie) for any East Asian. Many such racist terms can be disarmed by being used, without irony, as in-group solidarity markers by the targeted group. For example, Folb (1980: 248) glosses nigger as follows:

nigger Form of address and identification among blacks (can connote affection, playful derision, genuine anger, or mere identification of another black person; often used emphatically in conversation).

Similarly, Greek Australians often refer to themselves as wogs, although the term is perceived as derogatory when used by skips – Anglo-Celtic Australians (sourced, perhaps, from a 1960s television series Skippy the Bush Kangaroo).

The connotations of some existing word often lead to its replacement by another word or phrase, usually with some revision of meaning. Gendered words such as man(kind) ‘human beings, people’, chairman ‘chairperson, chair’, or actress ‘female actor’ have been regarded as discriminatory and are dispreferred by many and taboo to a few. Waiter or waitress have given way to server or waitperson; salesman has given way to salesperson. These novelties are usually euphemistic and have positive connotations. It is not the denotation but the connotations of offending vocabulary items that are dispreferred. This is also true for a whole batch of euphemisms for avoiding the mildly distasteful, upgrading what is favored and downgrading what is not. Consider (4), from Time Australia (April 17, 1989: 36):

(4) “Bribes, graft and expenses-paid vacations are never talked about on Capitol Hill. Honorariums, campaign contributions and per diem travel reimbursements are.”

The terms honorarium, campaign contributions, and per diem travel reimbursements are used as alternatives to the dispreferred expressions bribes, graft, and expenses-paid vacations because they have positive instead of negative connotations. We’ll have to let you go replaces you’re fired, even when the dehiring is merited. People arrested but not yet charged are helping the police with their enquiries. If a soldier is hit by incontinent ordnance (a kind of friendly fire) she or he may suffer a ballistically induced aperture in the subcutaneous environment or worse. If you jump out of a 10th-storey window, you will not just go splat when you hit the ground, you will suffer sudden deceleration trauma. The final remark on the hospital chart of a case of negative patient care outcome was “Patient failed to fulfill his wellness potential” (Lutz, 1989: 66); it is not reported whether this resulted from therapeutic misadventure. Simple dieting will involve you in negative expenditure, but you will have to pay for nutritional avoidance therapy. A sanitation engineer sounds more exalted than a garbage collector; the vermin control officer has replaced the rat catcher. A preloved object sounds more attractive than a second-hand or used one does; they can be found in an opportunity shop, which specializes in reutilization marketing. We are, at best, comfortably off ourselves; other people are wealthy or even filthy rich.

Many words suffer pejorization through society’s perception of a word’s tainted denotatum contaminating the connotations of the word itself. Throughout the ages, country folk have been held in low esteem by townies. Latin urbanus ‘townsman’ gives rise to urbane ‘sophisticated, elegant, refined’ versus rusticus ‘rustic’ with connotations of ‘clownish, awkward, boorish.’ Boorish means ‘ill-mannered, loutish, uncouth’ and derives from the noun boor (Old English (ge)būr ‘dweller, husbandman, farmer, countryman’; cf. Dutch boer ‘farmer’); the old meaning lives on in neighbor (‘near-dweller’). English churl once meant ‘countryman of the lowest rank,’ and by the Early Middle English period churlish already meant ‘ill-tempered, rude, ungracious.’

Taboos on the names of gods seek to avoid metaphysical malevolence by counteracting possible blasphemies (even, perhaps, profanities) that arouse their terrible wrath. Thus, in the Holy Communion service of the Anglican Church, the minister says, “Thou shalt not take the Name of the Lord thy God in vain: for the Lord will not hold him guiltless, that taketh his Name in vain” (The Book of Common Prayer, 1662). To avoid blasphemy, the word God is avoided in euphemistic expletives such as the archaic ’Od’s life! Zounds! by gad! ’Sbodlikins! and Odrabbet it! and in the more contemporary Gosh! Golly! Cor! Gordon’ighlanders! For goodness sake! and Good gracious! These examples demonstrate the generation of new vocabulary by various kinds of remodeling, including clippings and substitutions of phonetically similar words. Jesus is end-clipped to Jeeze! and Gee! (which is also the initial of God); Gee whiz! is a remodeling of either jeeze or jesus. More adventurous remodelings are By jingo! Jiminy cricket! Christmas! Crust! Crumbs! and Crikey! Note that the denotation of Gee! Jeepers! and Jesus! is identical. From a purely rational viewpoint, if one of them is blasphemous, then all of them are. What is different is that the first two have connotations that are markedly different from the last. Connotation is seen to be a vocabulary generator – or, more precisely, reactions to connotations are.

The connotations of taboo terms are contaminated by the taboo topics that the terms denote; but, by definition, euphemisms are not – or not yet – contaminated. Euphemisms often degenerate into taboo terms through contamination by the taboo topic. For example, in 45 B.C.E., Cicero observed that Latin penis ‘tail’ had earlier been a euphemism for mentula ‘penis’: “But nowadays,” he wrote, “penis is among the obscenities” (Epistulae ad Familiares IX, xxii). As a learned term borrowed from Latin, it is not among the obscenities of present-day English; it is an orthophemism (Allan and Burridge, 2006). English undertaker once meant ‘odd-job man, someone who undertakes to do things’, which was used as a euphemism for the person who takes care of funerals; like most ambiguous taboo terms, the meaning of undertaker narrowed to the taboo sense alone and is now being replaced by the euphemism funeral director. Such euphemisms often start off with a modifying word, funeral in funeral undertaker; then the modifier is dropped as the taboo connotations displace other senses of the head noun. Other examples are deranged, derived from mentally deranged (‘mentally disordered’), and asylum from lunatic asylum (‘place of refuge for lunatics’).

The connotations of taboo terms such as those for body parts connected with sexual reproduction, defecation, and the correlative effluvia give rise to a belief, be it ever so vague, that the language expression somehow reflects the essential nature of the taboo topics they denote. This is exactly why the terms themselves are often said to be unpleasant or ugly-sounding and why they are miscalled dirty words; it is the result of the powerful hold that naturalist beliefs of an intrinsic link between form and content have upon the community. A word or longer expression with bad connotations (however unjustifiable) will suffer pejorization; hence, the proverb give a dog a bad name. Polysemous words with taboo senses are downgraded by semantic narrowing to the taboo senses alone. Words with a taboo homonym tend to be downgraded out of use. It is rare for euphemisms to be degraded into taboo terms and later come back from the abyss after they have lost their taboo sense. During the 17th and 18th centuries the verb occupy meant ‘copulate’ (for reasons you can well imagine); at the same time, its nontaboo senses lapsed. Occupy reentered the lexicon in its current sense of ‘inhabit, take up’ only after it had ceased to be used dysphemistically. Where there is little likelihood of a speaker being misunderstood, the homonyms of a taboo term are likely to persist in the language.

Connotations are not a function of the form alone. For instance, queen ‘regina’ is under no threat from the homonym meaning ‘gay male’ simply because one denotatum is necessarily female and the other is necessarily male. The converse holds for the end-clipped American epithet mother ‘motherfucker’. Similarly, we experience no constraint in saying it’s queer, but we generally avoid saying he’s queer if we mean ‘he’s peculiar’, preferring he’s eccentric or he’s a bit odd. Bull meaning ‘bullshit’ is dissimilated from bull ‘male, typically bovine, animal’ because it heads an uncountable NP instead of a countable one.


In this article, it has been shown that the connotations of a language expression are pragmatic effects that arise from encyclopedic knowledge about its denotation (or reference) and also from experiences, beliefs, and prejudices about the contexts in which the expression is typically used. The connotation of a language expression is clearly distinct from its sense, denotation, and reference. To identify the connotations of a term is to identify the community’s attitude toward it. For instance, the connotations of English octopus and the Japanese translation equivalent tako are very different: an octopus is a sinister, alien creature; tako is edible and endearing (Backhouse, 2003). Connotation is intimately involved with notions of appropriateness in language use, conditioning the choice of vocabulary (including proper names) and style of address. Connotation is involved in choosing expressions that upgrade, downgrade, and insult. It plays a part in the loaded weapon of dysphemism and the euphemistic avoidance of dispreferred expressions judged discriminatory, blasphemous, obscene, or merely tasteless. Reactions to connotation motivate semantic extension and the generation of new vocabulary.

See also: Direct Reference; Expression Meaning vs Utterance/Speaker Meaning; Extensionality and Intensionality; Folk Etymology; Gender; Honorifics; Idioms; Irony; Jargon; Meaning, Sense, and Reference; Pragmatic Determinants of What Is Said; Pragmatic Presupposition; Reference: Philosophical Theories; Sense and Reference; Taboo, Euphemism, and Political Correctness; Taboo Words.

Bibliography

Allan K & Burridge K (1991). Euphemism and dysphemism: language used as shield and weapon. New York: Oxford University Press.
Allan K & Burridge K (2006). Forbidden words: taboo and the censoring of language. Cambridge, UK: Cambridge University Press.
Aronoff M (1976). Word formation in generative grammar. Cambridge, MA: MIT Press.
Backhouse A E (2003). ‘Connotation.’ In Frawley W (ed.) International encyclopedia of linguistics. New York: Oxford University Press. 4: 9–10.
Bolinger D L (1980). Language: the loaded weapon. London: Longman.
Cicero (1959). Letters to his friends (Epistulae ad familiares). Williams W G (trans.). London: Heinemann.
Folb E (1980). Runnin’ down some lines: the language and culture of black teenagers. Cambridge, MA: Harvard University Press.
Frege G (1892). ‘Über Sinn und Bedeutung.’ Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. Reprinted as ‘On sense and reference.’ In Geach P & Black M (eds.) (1960) Translations from the philosophical writings of Gottlob Frege. Oxford: Blackwell. 56–78.
Gazdar G (1979). Pragmatics: implicature, presupposition, and logical form. New York: Academic Press.
Kripke S (1972). ‘Naming and necessity.’ In Davidson D & Harman G (eds.) Semantics of natural language. Dordrecht: Reidel. 253–355. (Republished as Naming and necessity. Oxford: Blackwell, 1980.)
Leech G N (1981). Semantics: a study of meaning (2nd edn.). Harmondsworth, UK: Penguin.
Lehrer A (1992a). ‘Blocking and the principle of conventionality.’ In Proceedings of the Western Conference on Linguistics WECOL 92. Fresno, CA: Dept of Linguistics, California State University.
Lehrer A (1992b). ‘Names and naming: why we need fields and frames.’ In Lehrer A & Kittay E F (eds.) Frames, fields, and contrasts. Hillsdale, NJ: Lawrence Erlbaum. 123–142.
Lutz W (1989). Doublespeak: from “revenue enhancement” to “terminal living”; how government, business, advertisers, and others use language to deceive you. New York: Harper and Row.
Mill J S (1843). A system of logic. London: Longmans.

Constants and Variables

P Chamizo-Domínguez, Universidad de Málaga, Málaga, Spain

© 2006 Elsevier Ltd. All rights reserved.

‘Constant’ and ‘variable’ are terms used in linguistics, logic, mathematics, physics, economics, and elsewhere. In linguistics, constant and variable are antonymous terms that are used in two different ways: one derives from the jargon of logic, the second from consideration of meanings and change in the form of words.

In propositional calculus, a logical constant is the part of a formula that expresses truth conditions in a combination of propositions and is represented by connectives such as ‘and,’ ‘or,’ ‘if … then,’ and so on. Truth-values of propositions depend on the truth-values of the constants used. By contrast, a variable is, so to speak, an empty place that can be replaced by any assertion about reality. A typical combined proposition like ‘p & q’ is true only if both ‘p’ and ‘q’ are true, whereas it is false in the other three possible cases.

Given that logic was developed as a calculus, this analysis can be applied to ordinary language as long as we pay attention only to the connective functions of logical constants, and not their translations into ordinary language. So, for instance, logicians consider the conjunctions ‘and’ and ‘but’ as synonyms for calculus purposes, and both are symbolized ‘&.’ Thus, according to classical propositional calculus, (1) and (2) have the same logical form ‘p & q.’

(1) John visited the barber’s shop and got a nice haircut.
(2) John visited the barber’s shop but his hair remained hideous.
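The four cases can be checked mechanically. The following fragment is a minimal illustrative sketch (in Python; it is not part of the original article) that prints the truth table for the constant ‘&’, confirming that the combined proposition is true in exactly one of the four assignments, and hence that (1) and (2) receive identical truth conditions:

```python
from itertools import product

# Enumerate the four possible assignments of truth-values to 'p' and 'q'.
# The logical constant '&' makes the combined proposition true only when
# both conjuncts are true; it is false in the other three cases.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:<5}  q={q!s:<5}  p & q: {p and q}")
```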

This reduction disregards the fact that ‘and’ and ‘but’ are more than mere connectives in ordinary language, and they are not synonymous. Although, truth-conditionally, ‘and’ and ‘but’ are identical with the logical constant ‘&,’ they differ from a pragmatic point of view. ‘And’ often has the meaning of ‘and then’ in ordinary language, while ‘but’ implies that ‘q’ is an objection to what is expected given ‘p.’

In predicate logic, a sentence such as (3) is analyzed as describing a state of affairs in which a property ‘is tall’ is predicated of an individual ‘Jack.’

(3) Jack is tall.

Individuals are symbolized by variables x, y, …; predicates are symbolized by variables P, Q, …. However, in (3) the individual is named and so is represented by the ‘individual constant’ j; the predicate is also named and so is symbolized by a predicate constant T. Thus, in the language of predicate logic, (3) is translated T(j). Individual constants may be mnemonic, like this, or be drawn from the letters a, b, …. Thus, (4) may be symbolized as G(a).

(4) The book is green.
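To make the notation concrete, here is a minimal sketch of how T(j) and G(a) can be evaluated against a toy model (again illustrative only and not from the original article; the model and all values in it are invented for the example). Predicate constants are interpreted by their extensions, that is, the sets of individuals of which they hold:

```python
# Individual constants name individuals in the model.
individuals = {"j": "Jack", "a": "the book"}

# Predicate constants are interpreted by their extensions: the set of
# individuals of which the predicate holds (a toy model, invented here).
T = {"Jack"}        # 'is tall'
G = {"the book"}    # 'is green'

def holds(predicate: set, constant: str) -> bool:
    """Evaluate P(c): true iff the individual named by c is in P's extension."""
    return individuals[constant] in predicate

print(holds(T, "j"))  # T(j), 'Jack is tall' -> True
print(holds(G, "a"))  # G(a), 'The book is green' -> True
```

Replacing an individual constant with a variable, as in the formula P(x) below, amounts to leaving the argument slot open until a value is supplied.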

Generalizing over (3) and (4) is the (variable-using) formula P(x). Generally speaking, the proper nouns and predicates of ordinary language function as constants whereas pronouns work as variables because they are used to refer to many different things. Ordinary language terms are only relatively constant or variable. So, for instance, the Spanish possessive pronoun suyo, in an utterance such as El libro es suyo, is more variable than its five English equivalents his, hers, its, yours, and theirs. From a narrowly semantic point of view, monosemous words could be considered constants and polysemous words variables.

From a morphological point of view, adverbs are typically constant in form (i.e., ‘invariable’), whereas verbs and nouns are variable. In English, adjectives do not inflect for number or gender and may be considered morphologically constant; in languages such as Latin, Spanish, and Russian, they are variable. Isolating languages such as Chinese or Vietnamese characteristically have morphological constants. Inflecting languages with variable morphology typically have freer word order because the syntactic function of words is indicated by word endings. There are six possible sequences (with different nuances of meaning) for the Latin words puer amat puellam but normally only one for the English sentence The boy loves the girl.

See also: Formal Semantics; Logic and Language; Logical and Linguistic Notation; Metalanguage versus Object Language; Propositional and Predicate Logic.

Bibliography

Martin-Löf P (1996). ‘On the meanings of the logical constants and the justifications of the logical laws.’ Nordic Journal of Philosophical Logic 1, 11–60. http://www.hf.uio.no/ijikk/filosofi/njpl/vol1no1/meaning/meaning.html.

Context

W Hanks, University of California at Berkeley, Berkeley, CA, USA

© 2006 Elsevier Ltd. All rights reserved.

Introduction

One of the central foci of research on language over the last several decades has been the relation between language and context. Work in linguistic anthropology, sociolinguistics, pragmatics, psycholinguistics, and philosophy of language has demonstrated a wide variety of ways in which language and verbally communicated information of various sorts are informed and even shaped by the social and interpersonal contexts in which speech occurs (see Duranti and Goodwin, 1992). Overlapping lines of research have also demonstrated various ways in which language constitutes context, including the social effects described in speech act theory (see Speech Acts), the formulation and attribution of beliefs in relevance
theory (Grice, 1989; Levinson, 2000; Sperber and Wilson, 1995) and the ‘creative use’ of indexical terms such as pronouns, deictics, and other shifters (Silverstein, 1976). The focus on context, as both a constraining factor and a product of discourse, has led to increasingly fine-grained approaches to speech, since it is primarily in the formation of spoken or written utterances that language and context are articulated. The significance of these developments for linguistics lies in the increased precision with which linguistic systems, cognitive processes, and language use are co-articulated. For anthropology, it lies primarily in the fact that communicative practice is integral to social practice more generally. Language is a factor, if not a defining one, in most of social life, and ideas about language have had a basic impact on social theory for the last century. Given the scope of these developments, it is unsurprising that there are various approaches to context corresponding to the disciplinary predilections of researchers. Speech act theory zeroed in on the relation between speech forms and circumstances as captured in felicity conditions and the doctrine of (illocutionary) force (Austin, 1962). Gricean approaches to conversation focus on inference and belief ascription under the assumption that speech is a cooperative engagement, subject to the maxims of quality, quantity, relation, and manner (Grice, 1989). Relevance theory shares a focus on inference as a central feature of speech, but dispenses with the Gricean cooperative principle (see Cooperative Principle), maxims, and the tasks of calculating and testing for implicatures (see Implicature). In their place, it proposes to explain inferential processes in terms of a single principle of relevance according to which logical, encyclopedic, and lexical information are combined. Speech act, implicature, and relevance theories are all closely associated with linguistics and have in common that they treat context as built up utterance by utterance in the course of speaking. From a social perspective, ethnomethodology and conversation analysis have made major contributions to our understanding of language in interaction. Both assert that face-to-face interaction is the primordial context for human sociality (Schegloff, 1987: 208) and the most important locus of observation of language. While they may rely on the pragmatic and inferential processes studied by linguists, their focus is different. Conversation analysis (CA) has emphasized the temporal and hence sequential organization of verbal exchange (Sacks et al., 1974), the existence of procedural rules guiding turn taking in talk, the phenomenon of conversational repair, and the microanalysis of actually occurring verbal interaction. Psycholinguists and cognitive linguists treat context
as a matter of mutual knowledge and cognitive representation, hence as a basically mental construct. The approaches mentioned so far have in common that they treat context as a radial structure whose center point is the spoken utterance. They share a commitment to methodological individualism, which prioritizes the individual over the collective and seeks to reduce social structures to individual behaviors. Starting from the perspective of the participant(s) in speech production, they derive context from relevance, mental representation (including attention focus and practical reason), and the momentary emergence of the speech situation. From this viewpoint, context is a local concomitant of talk and interaction, ephemeral and centered on the emergent process of speaking. Whether one places primary emphasis on actual language use as attested in the field or on constructed examples, the ultimate frame of reference and explanation is individual speech activities and the verbal interactions in which they occur.

From the opposite viewpoint, others have developed approaches to language and discourse in which context is neither local nor ephemeral, but global and durable, with greater social and historical scope than any localized act. There are several shifts here worth distinguishing. Whereas the first set of approaches is grounded in linguistics, psychology, and microsociology, the second is based in large-scale social theory and history. Utterance production is not assumed to be the generative center of context, as it is for individualist approaches. Rather, the explanatory frameworks are social and historical conditions that are prior to discourse production and place constraints on it. Standard linguistic description is a case in point, because it claims that individual uses of language depend for their intelligibility on linguistic systems (grammatical and semantic) that are logically prior to any act of speech. To the extent that such perspectives treat discourse production at all, the relevant units are either analytic abstractions (the idealized Speaker of linguistics) or collectivities (communities, classes, social networks, kinds of agent defined by gender, age, profession, residence, etc.). Similarly, the temporal frame of discourse production is not the momentary unfolding of utterances in what individualists call real time, but the conjunctural time of collective systems and historical processes.

Just as there are various local approaches to context, there are also diverse global ones. In a Foucaultian view, for example, if there can be said to be a basic context for language, it is neither interaction nor the individual strips of speech or text familiar to linguists. On the contrary, the frame of reference is ‘discourse,’ meaning large-scale formations of beliefs
and categorizations pervaded by power relations and articulated in ‘assemblages.’ Similarly, Bourdieu has argued that language forms and varieties should be analyzed relative to linguistic markets in which they bear various sorts of symbolic and cultural capital (Bourdieu, 1993). Both Foucault and Bourdieu take as their point of departure collective facts and set out views contradictory to methodological individualism (a predictable corollary of the structuralist orientation of both thinkers). Critical discourse analysis (CDA) offers another clear example (nicely reviewed in Blommaert and Bulcaen, 2000). In this approach, discourse is treated under three perspectives: as text endowed with linguistic form, as ‘discursive practice’ through which texts are produced, distributed, and consumed, and as ‘social practice’ which has various ideological effects, including normativity and hegemony. CDA emphasizes power, exploitation, and inequality as the social conditions of language, tracing them through various contexts including political and economic discourse, racism, advertising and media, and institutional settings such as bureaucracies and education. Notice that, while these forces may be played out in individual speech events, the frame of reference is broader than, and logically prior to, any given event. Moreover, the focus on speakers’ intentions as the source of meaning that is common to methodological individualist approaches is absent in all large-scale approaches.

The linguistic, psychological, and microsociological approaches from which we started are largely complementary to the large-scale approaches just mentioned. The local settings of utterance and face-to-face interaction that are central to the first group are absent or at best marginal in the second. Conversely, the collective facts central to social definitions of context are marginal or simply ignored in individualist approaches. This polarization gives rise to overstatement and many missed opportunities for productive research. It becomes unclear how to articulate different levels of context analytically, or even whether such articulation is an appropriate aim. Given that discourse responds to context at multiple scales, and that any actual social setting can be characterized under either micro- or macro-perspectives, the two are inevitably pitted against one another. In its strong forms, methodological individualism claims that the collective facts studied by sociologists and anthropologists are epiphenomena of individual actions, whereas a proponent of collectivism may claim with equal conviction that individual utterances and face-to-face interactions are the trivial precipitates of larger social forces. Thus, the dichotomies of scale underwrite contradictory claims about what is most basic to context.

While much of the literature bearing on context can be placed on one or the other of these extremes, linguistic anthropology has been the exception because it has attempted to integrate levels. One motivation for this is the empirical fact that speech practices are shaped by and help shape contexts at various levels. Another is the patent inadequacy of all bipolar accounts, which inevitably distort the relative significance of contextual features and produce a vacuum at one level or the other. As an interdisciplinary enterprise, linguistic anthropology has always straddled grammar and actual language use in socially and historically defined settings. The focus on speech requires detailed analysis of locally emergent facts both linguistic and ethnographic (hence ‘micro’), while the focus on linguistic and social-cultural systems requires equally careful analysis of formal and functional regularities whose motivations lie far beyond individuals and their actions (hence ‘macro’). Thus, the ethnography of speaking combined such descriptive units as the speech event, the community, and verbal repertoires in an attempt to bring the ethnographic setting of utterances to bear on their formal and functional properties. In recent decades, linguistic anthropologists examined the relation between language and political economy and what has come to be known as language ideologies, both of which combine phenomena from different scales (Silverstein, 1979; Schieffelin et al., 1998). Any study of context that seeks to account for the formal specificity of utterance practices and their social embedding must reject familiar divisions between micro- and macrolevel phenomena. Context is a theoretical concept, strictly based on relations. There is no ‘context’ that is not ‘context of,’ or ‘context for.’ How one treats it depends on how one construes other basic elements including language, discourse, utterance production and reception, social practice, and so on. It is by now widely recognized that much (if not all) of the meaning production that takes place through language depends fundamentally on context and further, that there is no single definition of how much or what sorts of context are required for language description. There is therefore no reason to expect that any single model or set of processes will be analytically sufficient for all research (and good reason to be skeptical of universal claims). At the same time, it is clear that there are principles and kinds of relations that recurrently organize contexts. We are concerned here with both the semiotic specificity of discourse practices and their social and historical embedding. What are the units and levels of context one needs to distinguish in order to give a rigorous account of language as practice? What are the relations and processes that
give rise to different contextual units and levels? How does one analyze actual contexts without falling into a morass of particulars? My way into these questions is through two broad dimensions of context, which I will call emergence and embedding. The former designates aspects of discourse that arise from production and reception as ongoing processes. It pertains to verbally mediated activity, interaction, copresence, temporality, in short context as a phenomenal, social, and historical actuality. Embedding designates the relation between contextual aspects that pertain to the framing of discourse, its centering or groundedness in broader frameworks. So stated, there is an initial alignment of emergence with the highly local sphere of utterance production, on the one hand, and embedding with larger scale contexts on the other. This is the way the two are usually discussed in the literature on language. Emergence is associated with so-called real time utterance production and interaction, and embedding describes the locatedness of utterances in some broader context. However, emergence can easily be conceived at different temporal levels, as any historian knows, just as embedding applies within the most local fields of utterance production. To see this, let us turn to a more detailed look at each of the dimensions, starting with emergence.

Emergence

Context as a Sheer Situation

In an interesting discussion of the micro–macro problem from the viewpoint of conversation analysis, Schegloff (1987: 208) asserted that interaction, minimally involving two people, is the primordial site of sociality. This view is rooted in a significant history of thought about both sociality and interaction (Schutz, 1970a). The former term is seldom defined precisely, but in Schegloff’s pithy claim, it stood for the human propensity to engage with others, with the invited inference that this propensity is a basic aspect of human society. The primordial importance accorded interaction can be traced to the phenomenological sociology of Alfred Schutz. Schutz set about to merge the social theory of Max Weber with the phenomenology of Edmund Husserl (with a healthy dose of William James and Gestalt psychology). In his view, social subjects develop in a world of intersubjective relationships, in which others are given to them both as objects in space and as other selves (Schutz, 1970a: 163). They partake of a primitive reciprocity in the sense that each exists in relation with the other. They are parties to a mutual ‘we,’ each located in a world also occupied by the other. In interaction relationships (as opposed to relations
among contemporaries or predecessors), the parties are copresent corporeally, which entails their being in the same place at the same time and in what phenomenologists call the ‘natural attitude’ (wide awake, in their senses, with access to common sense). For each party to interaction, the other’s bodily gestures (including utterances) present themselves as expressions that project and make perceptible inner states of consciousness. In interaction, the other’s body is primarily a field of expression presumed to be meaningful, not a mere object perceived. As a corollary of copresence, the two are in the same temporal stream, and each can perceive the other’s expressions as they emerge from moment to moment. The mutual copresence and reciprocal access of interactants takes the shape of a more exacting reciprocity: until further notice and for all practical purposes, each party can put himself in the shoes of the other, taking on, or at least entertaining, the other’s perspective.

Goffman (1972) helped formulate a Schutzian view of context in his influential paper “The neglected situation”. Goffman critiqued the then widespread treatments of social context in terms of correlations between macrolevel sociological variables, such as gender, class, profession, and institutional roles. He argued that situations have their own properties that follow from the fact of copresence between two or more people. A situation is a space of mutual monitoring possibilities within which all copresent individuals have sensory access to one another with the naked senses. Hence, the following conditions apply:

1. There are at least two parties who cooccupy the same objective time (which Schutz [1970a: 165ff.] distinguished from inner time and the constituted experience of space-time), in which perceptions and expressive gestures unfold sequentially.
2. Each party to the situation is present in body, both perceivable and capable of perceiving the other.
3. The situation is a field of mutual monitoring possibilities, which entails the capacity of the cooccupants to notice and attend to each other.

These three conditions imply mutuality (we share this), cooccupancy of the same space-time (we are both here-now), and reciprocity (I perceive you and you perceive me). Notice that a situation is not a field of actual mutuality, reciprocity, and cooccupancy, but a field in which these are alive as potentials. (This is another factor reminiscent of phenomenology in the idea of the ‘horizon’.) It is minimally structured, logically prior to any utterance, and notably lacking any object beyond the copresent parties. From the perspective of language, the situation provides a sort of ‘prior outside’ into which speech and language are projected through utterance acts.


Given what we have said so far, all of dialogic speech could be described as situated insofar as it occurs in situations. The utterance the curry is done is situated, let’s say, when produced by a cook in response to the question when will dinner be ready? Both question and response are situated in the perceptible, interactive relationship between the parties, and all three of the above conditions apply. But Goffman distinguished between ‘merely situated’ and inherently situated factors. The former include linguistic and symbolic structures that are instantiated in utterances but do not really depend on the situation for their definition. In contrast, timing and delivery of utterances reflect the in situ mutual adjustments between interlocutors and are inherently situated. Goffman’s situation, then, represents a layer of context that is prior to language, but with which we can distinguish between merely and inherently situated aspects of the speech stream. A great deal of research over the last decades has demonstrated the significance of this framework, and the ways in which speech transforms and adapts to situations. It has become an article of common sense that context is situated, whatever else it is.

Relevant Settings

The situation thus defined is insufficient to describe interaction because it lacks several basic features. Whereas any situation exists in time, it lacks such temporally grounded distinctions as early vs. late and middle vs. beginning or ending. The latter terms can only be applied to a course of activities in which there are act units and expectations. Furthermore, at the level of the situation each party is potentially aware of the presence of the other, but is not attending to the expressive meaning of the other’s gestures. As a mere field of copresence (what Schutz [1970b] called a ‘pure we relationship’), a situation has no meaningful structure: nothing in particular is going on or is especially relevant. If we layer onto the situation socially identifiable acts, expectations, mutual understanding among parties, and a framework of relevance, we arrive at a contextual unit closer to interaction and considerably more structured. We will call this new unit the setting (Sacks, 1992: 521–522). If a speaker says, I am here to meet with Martin, this is a great party, or I’m asking you to help, (s)he has in each case formulated the setting in which the utterance occurs. In the language of CA, a formulation is a description, hence a categorization, as opposed to indexical expressions that invoke the setting but do not formulate it, since they lack descriptive content. In paradigm cases, the formulation applies reflexively to the very speech setting in which it occurs. What most concerns us at this point is that formulations are internal to interactive context, they display the participants’ judgments of what is relevant and going on, and they illustrate the conversion of a mere situation into a social setting (Schegloff, 1987).

To introduce the concept of relevance is to transform fundamentally the idea of context. On the one hand, judgments of relevance always imply a theme or point of interest from which the relevance relation is established. On the other hand, this relation is rooted in actors’ previous experiences, in light of which the interest arises (Schutz, 1970b: 5). A theme, like a focal point, implies a background or horizon against which it is distinguished and in relation to which it functions as a center point. This in turn implies that any context in which thematic relevance operates is a bilevel structure (usually described as foreground/background, or theme/horizon). The reference to the biographical history of the actors for whom something is thematic effectively expands the temporal scope of context from the vivid present of situated perception to a past remembered and sedimented through habitual experience. In short, as soon as we introduce relevance, context becomes a hierarchical structure connected to a nonlocal history.

In addition to the distinction between theme and horizon, Schutz (1970b) developed a three-way contrast between kinds of relevance, which he calls topical, interpretive, and motivational. The first centers on the object or matter to which actors turn their attention. The second has to do with which aspects of the object are relevant to the question at hand, and which parts of the actor’s background knowledge are brought to bear on it. The third pertains to the actor’s prospective purpose (which Schutz called “in order to motives”) and the past conditions that give rise to that purpose (Schutz’s “because motives”). The combined effect of these three sorts of relevance is to create a multistranded relevance system in the setting, encompassing both memory and anticipation. While Schutz’s probing study makes further distinctions, these will suffice to underscore that interactive context, even at the relatively primitive level of the setting, is hierarchical along several dimensions, both local copresent and nonlocal.

Semiotic Field, Symbolic and Demonstrative

Although we introduced the function of formulation in order to clarify the difference between a situation and a setting, our framework is as yet impoverished from the perspective of language structure and semiotics. This is the next element we must introduce in order to approximate a notion of context adequate for linguistic description. We will do so by way of the theory developed by Karl Bu¨hler (1990 [1934]),

Context 147

which had a profound impact on subsequent linguistic and semiotic approaches to context (particularly the ethnography of speaking, linguistic treatments of deixis and contemporary linguistic anthropology). Bu¨hler distinguished two aspects of the context in which any sign is used: (i) the Symbolfeld ‘symbolic field,’ consisting of words, other signs, and the concepts they represent, and (ii) the Zeigfeld ‘demonstrative field,’ which is the immediate interpersonal setting in which an utterance is produced. These two elements combine in various ways in Bu¨hler’s treatment, for instance anaphora and ‘‘imaginary deixis,’’ and the resulting model of context is pervasively semiotic. It inherits all the features of settings as laid out above, but these are transformed by signs (symbolic, indexical, iconic), sign relations (syntactic, semantic, pragmatic), the presence of objects stood for, and various functions including individuated reference and directivity (purposive orientation of an interlocutor’s attention by word and gesture). Bu¨hler summarized the Zeigfeld as ‘‘Here Now I,’’ thus foregrounding its relation to the linguistic system(s) of the participants. These three terms are the prototypical deictics: they are referring expressions whose conventional meanings belong to the linguistic code, and yet, as indexicals, their reference on any occasion of use depends strictly on the context of utterance. Deixis is the single most obvious way in which context is embedded in the very categories of human languages. Recall that in Sacks’ terms, deictics ‘invoke’ the setting, because they are indexicals, but do not ‘formulate’ it, because they lack descriptive content. Contrast I am here with I am in the dining room of my home. It has also become common in the literature to distinguish between referential indexicals, (e.g., deictics, pronouns, presentatives, certain temporal adverbs) and nonreferential or social indexicality (Silverstein, 1976). The latter would include such phenomena as regional or other recognized accents, stylistic registers, and honorifics (Agha, 1998; Errington, 1988) insofar as these features of language signal aspects of utterance context without actually referring to or describing it. Contrast an utterance spoken with a heavy New England accent, which nonreferentially indexes the provenience of the speaker, with I am from New England, which states it. What is most relevant about indexicality for present purposes is the way that both referential and nonreferential varieties serve to articulate language as a general system with utterance context. The deictic categories of any language, and the combination of those categories into phrases, sentences, and utterances, reveal schematic templates for context. The demonstrative field therefore converts the interactive setting into a field of signs. For Bu¨hler, it

includes gestures and other perceptible aspects of the participants, such as posture, pointing, directed gaze, and the sound of the speaker's voice, all of which orient the subjective attention focus of the participants. Like Goffman and the conversation analysts, Bühler assumes that the participants are in the 'natural attitude': wide awake, oriented, each with a sense of his or her own body, synthesizing sensory data from vision, hearing, and touch in a system of coordinates whose origo is the here-now-I (Bühler, 1990: 169ff.). Within this phenomenal setting, utterances, in both their symbolic and indexical dimensions, both reflect and transform context. They orient participants' attention, thematize objects of reference, formulate, invoke, and construe the setting, and operate on relevance systems; in short, they produce context.

The situation, setting, and demonstrative field are emergent in the sense that they unfold in time. This is one consequence of the fact that linguistic practice produces context in an ongoing fashion. It gives rise to duration, sequence, simultaneity, and synchronization, and it forces us to include memory, anticipation, and teleology in our model of context. Time is central to the study of conversation, and sequence is basic to turn-taking systems, anaphora and thematic coherence, the interactive production of sentences (Goodwin and Goodwin, 1992), and the organization of a host of conversational structures. It is also at the root of the concept of adjacency in conversation analysis, the relation between contiguous units in talk. Notice that emergence entails time but is different from it, since it describes the relation between various units of discourse production. When individuals co-engage in a setting, their perceptual fields are oriented by relevance, and when they coparticipate in a demonstrative field, they are reoriented by signs. So too temporal relations are converted in the passage from situation to setting to demonstrative field. Diachrony is an existential condition of context at any level of analysis, but it denotes different processes according to the level. For example, the inner duration of experience, the simultaneity of pure we-relations, the production of utterances, the taking of turns at talk, the opening, middle, and closing of conversational units, and the prospective projection implied by intention and strategy are different diachronies. Emergence is everywhere in relation to structure, and to describe context as emergent implies that it is structured.

Embedding

The progression from situation to setting to demonstrative field is neither a temporal sequence nor a


set of inclusion relations. It is a matter of logical ordering, from the relatively primitive level of the sphere of perceptual awareness through the semiotically entangled demonstrative field. The setting inherits features of copresence from the situation, transforming it by way of relevance relations and socially recognized units of action. The symbolic–demonstrative field inherits an interperspectival relevance system from the setting, but transforms it by means of multifunctional semiotic systems (most notably language). The model of context implicit in the demonstrative field is the minimal starting point for the study of discourse. Whatever else is true about discourse context, then, it entails bodies and perceptual fields, relevance systems, act types and the expectations they engender, and semiotic systems and the transformations they effect. As emphasized above, all of these contextual formations are emergent in that they involve duration, sequence, simultaneity and, in the more complex formations, memory and anticipation.

In virtually all of ordinary communicative practice, the three levels combine and the distinctions between them are analytic: a setting is merely what is left when we analytically peel away the effects of semiosis; a situation is what remains when we bracket off relevance. Such analytic separation has the advantage of clarifying how discourse contexts hang together and fall apart, and how language and discourse differentially tie into the distinct orders of context. It should not suggest that the three lead separate lives, as it were: in the course of social life, there is no situation that is not tied into a setting and no setting that is cut loose from semiosis. This relation of ordered entailment and tying in we will describe as embedding. To study context is to study embedding. If contextual formation X is embedded in Y, then the following statements are true:

i. Y entails X, but X does not entail Y;
ii. Y inherits certain properties from X but introduces other properties;
iii. Y transforms X, altering inherited properties and introducing new principles of organization (via rearrangement, reweighting, etc.);
iv. If any part of X becomes a thematic focus, either for participants or for analysts, then Y is the relevant horizon.

For example, a setting entails the possibility of mutual monitoring among participants, but a situation does not entail a relevance structure (i). The symbolic–demonstrative field inherits act units and relevance relations from the setting, but reconstitutes it via multifunctional semiosis (ii). The relevance system

of a setting organizes how participants monitor one another in the situation, and the semiotic functions of the symbolic–demonstrative field transform relevance via thematization (iii). If a problem arises in mutual monitoring among copresent participants, then it is dealt with in light of the setting, which establishes the expectations and relevance requisite to identifying or rectifying the problem. Similarly, if a problem of (ir)relevance or failed expectations arises, then it is grasped and dealt with in the light of the ongoing symbolic–demonstrative field, which invokes, formulates, and provides the means to construe the setting (iv). The upshot of these remarks is that contextual embedding is never a mere add-on or external surround to features of discourse or interaction. At whatever level we examine it, context is embedding relations.

Research in linguistic anthropology over recent decades provides abundant and forceful evidence that embedding is not limited to the contextual levels so far adduced. No symbolic–demonstrative field exists in a social vacuum, and however strong the impulse to generalize by way of rules, invariant structures, or procedures, contexts vary more radically than so far suggested, and on parameters not yet mentioned. This comes as no surprise to ethnographers, but it poses a real challenge to linguists, because linguistic systems and practices articulate precisely and in detail with social phenomena beyond the reach of even the most sophisticated semiotics. How are we to explain the impact on context of such systematic phenomena as the difference between expert and novice practice in institutional settings (Cicourel, 2001), the role of ideology in discursive practice, the differential effects of national, ethnic, or class identities on discourse production, the values that attach to different ways of speaking, writing, or other mediated forms of discourse that do not assume the face-to-face situation at the heart of the demonstrative field? For example, it is clear that persons and objects in the demonstrative field have, for the participants, values of various kinds. They are good, bad, beautiful, ugly, mine, yours, costly or cheap, coveted or to be avoided; yet such values derive from social systems and experiences beyond the scope of the demonstrative field. In actual practice, as opposed to theory-driven proxies of practice, discourse circulates in contexts that are themselves embedded in social formations only partly explainable by discourse. In Bühler's demonstrative field, participants and objects are anonymous, the accidental occupants of semiotically defined positions and roles. Yet in ordinary discourse, actual persons, groups, objects, and settings are in play, and these are, for the most part, familiar and valued.


This is one of the most difficult problems in the study of context: in order to achieve a general account, we formulate schematic regularities, yet in order to actually engage in discourse, speakers and addressees must come to grips with emergent particulars. Semiotic accounts seek to overcome this difficulty by distinguishing types (generalities) from tokens (particular instances), or by combining structures (e.g., linguistic systems) with phenomenology (Bühler's solution). The problem with these approaches is that they treat actual practices as merely situated instantiations of general laws, which domesticates particularity by making it a mere instance of the general. At the same time, they preserve a radial definition of context, according to which the individual, utterance, or situation is the center point and all other factors are defined in relation to it. This is a productive solution to certain problems, such as the semantics of indexical reference, where the sign stands for its object in an aboutness relation. But what of an organizational context like a hospital, a university campus, or a courtroom? Most of the interactions that occur in these contexts are shaped in part by institutional frameworks, credentialing processes, and social divisions that exist before and beyond any demonstrative field, that may be nowhere signaled in the discourse, and yet shape the context and constrain participants' access to discourse. We need a way of analyzing contextual dimensions that are not radial to the utterance or the field of copresence, but that shape it significantly. Relegating such factors to the social setting is a convenient shorthand, but it fails to explain how they impinge on discourse. To require that they be relevant to the participants is to require that they be thematized, whereas much of the social formation impinging on discourse contexts is unnoticed. To label them background knowledge begs the question, since most individuals in institutional settings have fragmentary or systematically skewed knowledge of the forces that objectively shape the contexts in which they interact.

Social Field

We have said that when one contextual level or sphere is embedded within another, the embedding level inherits certain properties from the embedded one, that it transforms it, and that it serves as the operative horizon against which the embedded level is grasped. In these terms, we can say that any demonstrative field is embedded in one or more social fields. The term ‘social field’ as used here is adapted from practice sociology and designates a bounded space of positions and position takings, through which values circulate, in which agents have historical trajectories

or careers, and in which they engage on various footings (e.g., competitive, collaborative, collusive, strategic). Thus defined, a social field is neither radial nor discourse-based, although discourse circulates in most fields, but there are interactive settings embedded in any social field. What is different about a social field is its scope (nonlocal), the way it is organized (nonradial), the character of the boundary (credentials and limited access as opposed to the gradual boundaries and relatively open access to demonstrative fields), and the values that circulate in it (e.g., economic, symbolic capital and power as opposed to meaning production through indexicality, reference, and description). Moreover, whereas participants in discourse production are traditionally conceived as individuals (hence interaction means intersubjective engagement), the agent positions in a social field can be occupied by collectivities (e.g., professional organizations, ‘communities,’ classes, departmental staff), whose interactions are typically mediated by writing, electronic media, and other instruments. Under this definition, a hospital, university, profession, academic discipline, a courtroom, a supermarket, an airport, a religious congregation, and a neighborhood are all social fields. This does not mean they are all equivalent or that any one of them may not itself be embedded. It does mean that these and other social formations provide critical embedding contexts that shape radial, interactively centered demonstrative fields. The social field places constraints on who has access to the participant roles of Speaker (Spr), Addressee (Adr), overhearers (ratified and unratified), the sanction to participate in a capacity, the requirement to manage face in specific ways (Goffman, 1967), and so on (see Face). In the demonstrative field as such, there are no constraints on who can play what role in acts of reference, directing of joint attention, or semiotically proper indexicality. It suffices that the participants master the language and be in the natural attitude. But this is not true in a social field, in which access to different positions is constrained, the authority to speak in certain terms and to specific others is restricted, and the capacity to monitor another is a selective right or even a responsibility, not a mere existential condition. In the kinds of organizational fields listed above, there are also many virtual counterpart relations, such as the correspondence between the patient and the X-ray image, the cash register number and the cashier, the evidence and the now-past actions that produced it, the paper and its author. These correspondences create networks of counterpart relations between objects in the immediate demonstrative field and ones that are absent (in other places or other


times). Careful study of deictic practice shows that such counterpart relations play a formative role in how participants resolve indexical reference. The implication is that in order to understand simple indexical practices, we are forced to look beyond the immediate field of copresence, just as participants must do in order to get the point of utterances. In order to explain the actual functioning of the Zeigfeld, then, we are obliged to look beyond it to the social field. These are all embedding effects and the social field is unavoidable in any description of indexical practice. Settings and demonstrative fields are designed so as to project them into further embeddings. Any relevance system ties its thematic focus into a history of other engagements with the object, a horizon of other related objects, a set of judgments regarding what is interpretationally relevant and what can be ignored. Hence, the setting is already rooted in a world beyond itself. Once we introduce semiosis, we have aboutness relations, and not all objects stood for are copresent in the situation. Furthermore, the symbolic categories themselves tie the sign and its object into other signs and objects in absentia, as Saussure put it. Two rather different transformations take place in the embedding of a demonstrative field in a social one. The social field is made actual, we might say, localized, by its articulation via relevance, symbolization, and indexical invocation (all the better if it is explicitly formulated, although this is not necessary, as we have seen). This is a genuine transformation because the social field does not owe its structure or existence to the kind of radial, intentional structures into which it is recruited by signs (the world is not organized in the same way as the language that refers to it). The second mode of embedding is occupancy: the actor occupies a participant role, which occupies an agent position (Dr. Jones speaks as an expert performing a procedure). The copresent setting occupies a socially defined site (the relevance system and actions in progress are procedures in a medical clinic). The referent-object occupies a socially defined position (the needed instrument in an ongoing procedure). Hence, the embedding social field provides a space of positions (including referent positions) and those positions are occupied or taken up by the various elements inherited from the embedded demonstrative field. Embedding is a process in time, and a proper study of context at the level of social fields must attend to the temporal order of occupancies, including the careers of persons, objects, places, and actions in the time course of the organization. The social field has a history that transcends any particular occupancy. The clinic outlasts Dr. Jones, just as it outdistances the

single office he occupies on such and such an occasion. Utterance-to-utterance temporality at the level of the demonstrative field is embedded, and hence transformed, in the broader history of the field. Looking across the communicative practices embedded in one or more social fields, it becomes possible to ask which elements remain relatively invariant across embeddings, and which ones are subject to transformation. The distinctions between Spr, Adr, Object, the semiotic means of thematization, the omni-relevance of perception, and the procedural organization of turn-taking (Schegloff, 1987) may remain constant, for instance, even if differentially realized and constrained in different fields. Such invariance contributes to the partial autonomy of the demonstrative field across embeddings. Inversely, certain features of social fields may function as relatively constant constraints or resources for any demonstrative field that emerges within their scope of embedding. To that degree, these factors contribute to what Bourdieu (1993) called the 'heteronomy' of the embedded fields. Autonomous features of any field derive from the field's own organization, whereas heteronomous (nonautonomous) features derive from its embedding in some other field. Thus, we can ask of any discourse context: in what measure and in which features is it autonomous? It is standard in the literature on language to describe speech contexts as if they were highly autonomous, such as Bühler's generalized Zeigfeld or the hackneyed Speaker–Hearer dyad of linguistics. But this nomothetic bias toward autonomous schemas hides heteronomous effects that are systematic and consequential for a theory of context.

The participants in any process of discourse production are clearly a key part of the context, whether they engage as individuals or groups and whether we treat context in local or nonlocal terms. In the discussion so far, there is an implicit series of embeddings of participants, from the individual subject to intersubjective copresence (situation), to coengagement (setting), to participant roles (demonstrative field), to agent positions (social fields). In a series of influential studies, Goffman (1963, 1981) brought attention to the differential kinds and degrees of involvement that parties to discourse sustain in social practice. He distinguished, for example, unfocused from focused interaction, the former pertaining to mere situations and the latter to settings in which the participants share a common attention focus and orientation (which he dubbed 'encounters'). Given a focused interaction, the question arises as to the degree of intensity of involvement and the distribution of involvement among participants (over time). This in turn led Goffman to distinguish among contexts


according to how they regulate involvement, the embodiment of that regulation in space and physical conduct, the penalties for inappropriate involvement (invasion, exclusion, drifting away, excess intensity), and the overall 'tightness or looseness' of contexts (Goffman, 1963: 198–210). Although this entire discussion is rooted in the phenomenological sense of subjective engagement, it can be analogically projected to the level of social fields and the agent positions they entail. Here, involvement has to do with modes of occupancy of positions, how tight or loose a field or a position is, the degree to which occupying one position precludes or requires engagement with other positions, the vectors of access or exclusion provided by given positions, the means of displaying or concealing involvement, the varieties of collusion or competition that are differentially built into sectors of the field. In short, the embedding of discourse production in social fields defines a space of involvement among agents.

Whereas most of Western language theory has posited a speaking subject endowed with free will and uncurtailed intentionality, social theory has long debated the extent to which social actors and actions are determined by social forces external to them. This has given rise to a host of concepts significant in the study of context, including structuration, subjection, ideological state apparatuses, and habitus (Bourdieu, 1977). Notwithstanding significant differences among them, these ideas have in common the basic observation that social actors, from subjects to collectivities, are not given by nature but are, in critical ways, produced by society. Such ideas turn individualism on its head by asserting that not only is the 'natural subject' not the starting point from which society is produced, but the subject is itself, already, a social production. The importance of this line of thought for a study of context lies in the challenge it poses to any theory of meaning production that starts with individual intentions and phenomenal situations in order to then derive context by addition of external factors (a view pervasive in language sciences). From the vantage point of social fields, the corresponding question would be to what extent engagement in a field shapes the participants not only in their agent-based external engagements, as it were, but more pervasively in their habits, dispositions, and intentions. In other words, there is a tipping point at which context ceases to be conceivable as the layering of structure upon intersubjective copresence and becomes the very production of subjects and the condition of possibility for intersubjectivity. Our definition of embedding as entailment, partial inheritance, transformation, and the necessary horizon for any contextual factor foreshadowed this shift.

In practice theory, the idea of a field is intimately related to the notion of habitus. The former defines the space of positions and position takings and the latter defines the social conformation of agents who engage in the space. There are four principal sources of the idea of habitus, which will help clarify its meaning. First, the Aristotelian idea of hexis, which joins individual desire or disposition with the evaluative judgment of what is good. If these two are aligned, we might say, the person is disposed to act in ways that are good. A linguistic analog might be interactive hexis, as evidenced in the spontaneous desire that good speakers have to be cooperative interlocutors or to say the right thing at the right time. Second, the phenomenological idea of habit and habituation as developed in the writings of Husserl, Merleau-Ponty, and Schutz. The idea is that in the course of ordinary experience we habitually engage in certain ways, we tend to routinize and typify. Under various guises, the idea that Sprs use language in habituated, routine ways has been a staple in the study of language for the last half century or more (cf. phenomenologists, Sapir, Whorf, Garfinkel, conversation analysis, ethnography of speaking). The third source of habitus is the idea, made prominent by Mauss (1973), that human beings conduct themselves physically in culturally patterned, habituated ways. Mauss was concerned with such phenomena as walking gait, posture, ways of carrying oneself, the management of body space in social settings (like waiting in line), socially standardized gestures (whether actually conventional, like the thumbs up gesture, or not), standard ways of holding objects, such as tools, of covering or revealing parts of the body. Mauss' insight was that these myriad aspects of how social actors inhabit and act through their bodies are socially patterned. Notice that, while some of this is explicitly taught to children and sanctioned, such as proper modesty or table manners, other aspects are merely instilled by habit and the tendency of human groups to routinize. The linguistic analogue to this would be utterance production as a corporeal activity, subject to habitual voice modulation, pacing, posture, degrees of involvement, and their embodiments. The fourth source is the scholastic philosophical idea of habitus, meaning mental habits that regulate acts. This idea most decisively entered into practice theory through the writings of the art historian Erwin Panofsky, whose work Bourdieu translated and considered fundamental to his theory of practice. For our purposes, the most salient lines of argument in Panofsky (1976) were these:

1. In a given historical conjuncture, there exist underlying mental habits that guide people's cultural


production in different spheres (such as philosophy and architecture in 12th–13th-century Paris).
2. These habits are instilled through education.
3. They come to guide both how actors act and how they evaluate acts.
4. They are realized in works.

Thus stated, the habitus is a modus operandi, flexible enough to be realized in different works, every one of which is unique, and in different spheres of work, which may differ widely. From a language perspective, the habitus would involve discourse genres (Hanks, 1987; Briggs and Bauman, 1992), routine ways of speaking and interpreting speech, and the habits of mind implicit in standard ways of representing the world in language. As used in practice theory, the term habitus bundles these four sources into a single idea. It therefore claims that there is a basic unity between the disposition to speak in certain ways, the evaluation of speech, the bodily habits enacted in speech production, and the mental habits instilled in speakers as social beings. What unifies this set of features is not logical necessity, but historical necessity. Habitus is individual, since it forms individual persons, and collective, since it is a social formation. It joins the body with the mind rather than asserting the division and priority of either one over the other, as is more typical in language studies long dominated by mentalism. Finally, it is an alternative vision of the speaking subject, openly contradictory of the traditional idea that speakers are freely intending persons whose inner mental states (propositional attitudes, intentions) are the source of discursive meaning.

The relation between habitus and field is subtle and far-reaching. On the one hand, the habitus is usually associated, in writings on practice, with the social provenance of the individual in terms of class, gender, ethnicity, and other macro-sociological divisions. It is inculcated in childhood, primarily in the domestic field and through more or less formal education. It is reinforced and reproduced in ordinary social life in these spheres and also in labor practices, which exert particular influences (agriculture for a farmer; research, writing, and teaching for an academic; painting for a painter; driving for a cabby; and so on). Any form of ritual practice has a potentially strong impact on habitus, by dint of engaging the dispositions, evaluations, and bodily and mental orientations of practitioners through the repeated doing of practice. The important point is that there is a dynamic (if not dialectical) relation between contextual embedding and the formation of the actors who engage contexts. Language and discourse are among

the central modalities through which the dynamic is articulated. How do participants enact their positions in the field so as to achieve their communicative goals? How do they decide on goals and plausible ways of achieving them? Which strategies and moves are permissible or effective in a given field, and which are ineffective or impermissible? The idea that speakers are strategic is widely accepted in discourse studies. Gumperz's (1992) work on discourse strategies effectively shows that with contextualization cues, speakers strategically position themselves and frame the interpretation of their utterances for their own ends. Grice's (1989) theory of implicature rests on a model of the speaker as one who pursues communicative ends through 'implicitation,' by deriving and conveying complex conversational meanings with underspecified statements, formulated so as to be expanded by inference. The speaker in an inferential language game must be strategic if only to properly attain such subtle and understated effects. Similarly, conversation analysis envisions the speaker in interaction as an active maker of context, one who masters the procedural system of turn taking and the conditional relevances of conversational moves, and who knows how to hold the floor, call for a repair, and invite or block certain inferences. In general, the exemplary speaker talks on purpose and pursues practical ends by more or less effective means, in more or less locally defined contexts. To call this 'strategy' invites the assumption that it is elaborately thought out, which is sometimes true, but not always. Whether full-blown strategies or mundane purposive gambits, discourse strategies have a double relation to the field: they may be called forth by context or they may produce it anew. If context bears an unavoidable relation to the habitus of those who occupy it, it is also subject to the purposive projects and strategies they pursue.

Contextualization Processes

In the course of spelling out a minimal architecture for discourse context based on embedding and emergence, we have made reference to a number of processes. As we have emphasized, all of the units involved emerge in time, albeit at different levels, and embedding itself is a dynamic process. At this point I want to draw together and offer a preliminary summary of the processes through which context occurs. The first class of processes involves intentionality, in both senses of representation and purpose. Thus, when a speaker pays attention, thematizes, formulates, or invokes context, he or she converts it into a semiotic object in a standing-for relation. Similarly, when the speaker uses grammatical, intonational, or


gestural means to cue his or her current footing and to contextualize the current utterance, semiotic relations are produced between the expressive stream and the context of its expression. In deictic usage, speakers construe context, signaling both the referent and the perspective under which it is individuated. Austinian performatives (Austin, 1962) rest partly on the intentionality that links the propositional content to the conventional act type, the locutionary act to the illocutionary (see Speech Acts and Grammar). The kinds of creative indexicality revealed by Friedrich (1979), Silverstein (1976), and others all involve the consequential use of signs to invoke contexts and thereby bring them about. Through intentionality, signs and expressions project their objects and thereby alter context. Inferential processes (interpretation, extrapolation, implicitation, contextual enrichment) also operate on expression forms in the light of contexts, with special importance given to relevance structures. All of these processes rely critically on the capacity of participants to produce and evaluate signs of context, and to do so on purpose. Strategy and improvisation are ways of exercising this capacity. But we have also mentioned processes that are not subject to the intentionality of participants, at least not necessarily. This is a different class of phenomena. From situations to settings, demonstrative fields, and embedding social fields, we have said that objects, persons, and groups occupy positions in context. This occupancy is not a standing-for relation, and it may or may not be subject to the purposes of the actor. If a police officer calls to me on the street, I am interpolated into a position, whether or not I wish and whether or not I produce a sign of my position. When I go to the airport and pass through security, I occupy the position of a ticketed passenger to be inspected whether I wish to or not, just as I become a customer when I sit in a restaurant. Occupancy can be described as ‘taking up a position’ but it also designates ‘finding oneself in’ and ‘being put in’ a position. When persons or objects are referred to in discourse, they are thereby thrust into positions and the social relations that define them. The key point is that the positions and the process of occupying them are social facts at least partly independent of the intentional states of the participants. Another contextual process whose source is beyond the scope of intentional action is what might be called over-determination. The social field in which an interaction is embedded does not determine what participants do, or how context emerges. But it does make certain contextual configurations and actions more likely and predictable. It reinforces and calls for them the way the operating room calls for a certain engagement from the medical experts working in it,

or the courtroom calls for specific forms of engagement on the part of its occupants. The acquired habitus of a practitioner of any profession is reinforced constantly by the settings, rights, responsibilities, and routine practices that make up the field. Along with the training that inculcates ways of seeing appropriate to the profession, these aspects of the field reproduce, sanction, and guide contexts and ways of occupying them. We will say that embedding overdetermines context when habitus, field, built space, and sanctioned practice align to impose or induce specific features of context. Organizations, religious and missionary settings provide clear examples of this, but the effect is much more widespread. Social fields also authorize and legitimate certain contexts and modes of engagement, but not others. A cashier has the authority to tell you how much you owe for a product, just as a doctor has the authority to classify your body states and a teacher is authorized to evaluate your work. This authority is enacted in intentional processes, but its source is the field, not the intentional states of individuals. We describe it as a process and not an attribute in order to foreground the dynamic whereby authority is conferred on certain contexts and agents in them. Legitimacy could also be conceived as an attribute of contexts and actions, but is more productively viewed as the process whereby they are aligned to the values of the field. In the same family of phenomena, Ide (2001) distinguished between volitional aspects of discourse production and nonvolitional ‘discernments’ of context. The latter designated the process whereby participants construe and align themselves to the field-based requirements of context, such as when they use Japanese honorifics unreflectively and automatically, out of a habitus-like sense of what is called for. Any of these processes may involve intentionality, but they illustrate the capacity of fields to exert a structuring influence apart from intentionality.

Conclusion

Discourse context cannot be formulated as a set of correlations between global, macrolevel social features and local, microlevel ones: correlation is far too crude for the kinds of articulation in play. It cannot be described as the reproduction of macrolevel types at the token level: speech is productive and inherently situated. It cannot be derived by sheer seat-of-the-pants creative expression guided by purely local intentions and relevance systems: we do not simply fabricate the contexts of our discourse whole cloth. In short, the social horizon of discourse production requires that we use a different vocabulary.


Two key terms in this new lexicon are emergence and embedding, and together they define a space of contextualization more productive and realistic than any of the familiar divisions of scale. Embedding describes the relation that holds between situations, settings, demonstrative fields, social fields, and habitus. These have the status of analytically isolable levels in the overall architecture of context. For any level X embedded in Y, Y entails X, Y inherits features from X and adds others, Y transforms X on several distinguishable dimensions (temporality, participation, weighting of factors), and Y serves as the presumptive horizon of X, held ready for thematization and relevance relations. Embedding is more basic than correlation, instantiation, or reproduction, because it is the objective condition under which these occur. Some features of context follow from the distinct logic of the level at which they arise, whereas others are imposed by embedding. The sheer copresence of the situation, the relevance of the setting, the semiosis of the demonstrative field, the constraints and resources of the social field, the bodily dispositions of the habitus – any of these may be relatively free from the structuring effects of the fields in which they are embedded. To the extent that this is so, the contextual level to which they belong is relatively autonomous. By contrast, to the extent that some process at a given level is determined by its embedding in another field, it is nonautonomous. The functions that make up the demonstrative field, for instance, are relatively autonomous, whereas those that count as mutual monitoring and relevance are not, because they depend on the field in which they are embedded. Similarly, the resolution of ordinary indexical reference is nonautonomous. Emergence is a pervasive feature of context, which is dynamic along several trajectories at several levels. What we might call context time precipitates from the interaction between distinct temporalities at the levels of situation (body time), setting (act time), demonstrative field (time formulated and invoked with signs, themselves produced in time), social field time (careers, historical revaluation of positions, objects, and what is at stake), and habitus time (embodied mental and physical habits, routinizations, alignments of disposition with evaluation). Just as time and participation are defined by copresence at the level of the situation, by cognitive engagement at the level of the setting and agent positions at the level of the field, so too other aspects of context emerge in different temporal streams. It is only in practice, especially communicative practice, that they are synchronized to one another. Context occurs when multiple temporal relations are articulated to one another in the emerging actuality of practice.

Behind the standard divisions of scale in context lies a more basic distinction between context-building processes that presuppose individual intentionality, and those that do not. The latter derive directly from the field in which communicative practice is embedded. Intentionality encompasses purposes (as in I intend [to] X) and aboutness relations (I am talking about X). For any student of language, it is doubtful whether the dispositions, mental habits, and embodiments of the habitus can replace intention as a motor for action, as Bourdieu suggested. It is difficult to imagine a theory of language or discourse context that exempted itself from aboutness relations or purposive action. Yet the habitus and the current state of the field operate on the intentional states of those that occupy them. They provide a ready-made universe of objects and agents, frames of reference, spaces and evaluative stances – the very stuff of context.

See also: Context and Common Ground; Context Principle;

Cooperative Principle; Face; Implicature; Pragmatics and Semantics; Reference: Philosophical Theories; Register; Situation Semantics; Speech Acts and Grammar; Speech Acts; Type versus Token.

Bibliography

Agha A (1998). 'Stereotypes and registers of honorific language.' Language in Society 27(2), 43.
Austin J L (1962). How to do things with words. Cambridge: Cambridge University Press.
Blommaert J & Bulcaen C (2000). 'Critical discourse analysis.' Annual Review of Anthropology 29(1), 447–466.
Bourdieu P (1977). Outline of a theory of practice. Nice R (trans.). Cambridge: Cambridge University Press.
Bourdieu P (1993). The field of cultural production: essays on art and literature. New York: Columbia University Press.
Briggs C L & Bauman R (1992). 'Genre, intertextuality and social power.' Journal of Linguistic Anthropology 2(2), 131–172.
Bühler K (1990 [1934]). Theory of language: the representational function of language. Goodwin D F (trans.). Amsterdam: John Benjamins.
Cicourel A (2001). Le raisonnement médical. Paris: Seuil.
Duranti A & Goodwin C (eds.) (1992). Rethinking context: language as an interactive phenomenon. Cambridge: Cambridge University Press.
Errington J J (1988). Structure and style in Javanese: a semiotic view of linguistic etiquette. Philadelphia: University of Pennsylvania Press.
Friedrich P (1979). Language, context, and the imagination: essays by Paul Friedrich. Stanford: Stanford University Press.
Garfinkel H & Sacks H (1970). 'On formal structures of practical actions.' In McKinney J C & Tiryakian E A (eds.) Theoretical sociology: perspectives and developments. New York: Meredith. 337–366.

Goffman E (1956). The presentation of self in everyday life. Edinburgh: University of Edinburgh Social Sciences Research Centre.
Goffman E (1963). Behavior in public places: notes on the social organization of gatherings. New York: Free Press.
Goffman E (1967). Interaction ritual: essays on face-to-face behavior. Garden City, NY: Anchor Books.
Goffman E (1972). 'The neglected situation.' In Giglioli P P (ed.) Language and social context: selected readings. New York: Penguin.
Goffman E (1981). Forms of talk. Philadelphia: University of Pennsylvania Press.
Goodwin C & Goodwin M H (1992). 'Assessments and the construction of context.' In Duranti A & Goodwin C (eds.) Rethinking context: language as an interactive phenomenon. Cambridge: Cambridge University Press. 147–190.
Grice H P (1989). Studies in the way of words. Cambridge: Harvard University Press.
Gumperz J J (1992). 'Contextualization and understanding.' In Duranti A & Goodwin C (eds.) Rethinking context: language as an interactive phenomenon. Cambridge: Cambridge University Press. 229–252.
Hanks W F (1987). 'Discourse genres in a theory of practice.' American Ethnologist 14(4), 64–88.
Hanks W F (1996). Language and communicative practices. Boulder: Westview Press.
Ide S (2001). 'The speaker's viewpoint and indexicality in a high context culture.' In Kataoka K & Ide S (eds.) Culture, interaction and language. Tokyo: Hituzi Syobo. 3–20.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge: MIT Press.

Mauss M (1973). 'Techniques of the body.' Economy and Society 2(1), 70–88.
Panofsky E (1976 [1957]). Gothic architecture and scholasticism. New York: Meridian Books.
Sacks H (1992). Lectures on conversation. Jefferson G (ed.) (2 vols). Oxford: Blackwell.
Sacks H, Schegloff E A & Jefferson G (1974). 'A simplest systematics for the organization of turn-taking in conversation.' Language 50(4), 696–735.
Schegloff E A (1987). 'Between micro and macro: contexts and other connections.' In Alexander J C, Giesen B & Smelser N J (eds.) The micro–macro link. Berkeley: University of California Press. 207–234.
Schieffelin B B, Woolard K A & Kroskrity P V (1998). Language ideologies: practice and theory. New York: Oxford University Press.
Schutz A (1970a). On phenomenology and social relations. Chicago: University of Chicago Press.
Schutz A (1970b). Reflections on the problem of relevance. New Haven: Yale University Press.
Silverstein M (1976). 'Shifters, verbal categories and cultural description.' In Basso K & Selby H (eds.) Meaning in anthropology. Albuquerque: School of American Research. 11–57.
Silverstein M (1979). 'Language structure and linguistic ideology.' In Clyne P, Hanks W F & Hofbauer C (eds.) Papers from the Fifteenth Regional Meeting of the Chicago Linguistic Society, vol. 2: parasession on linguistic units and levels. Chicago: Chicago Linguistic Society.
Sperber D & Wilson D (1996). Relevance: communication and cognition (2nd edn.). Oxford: Blackwell.

Context and Common Ground

H H Clark, Stanford University, Stanford, CA, USA


© 2006 Elsevier Ltd. All rights reserved.

‘Common knowledge’ as a technical notion was introduced by David Lewis (1969) to account for how people coordinate with each other. Suppose A, B, and C agree to meet at city hall at noon. The three of them take it as common knowledge that they intend to go to city hall at noon if and only if: (1) all three believe that the agreement holds; (2) the agreement indicates to all of them that they believe the agreement holds; and (3) the agreement indicates to all of them that they intend to go to city hall at noon. In Lewis’s terminology, the agreement is the ‘basis’ for A, B, and C’s common knowledge that they intend to go to city hall at noon. Common knowledge is always a property of a community of people, even though the community may consist of just two people. The notion of ‘common ground’ was introduced, in turn, by Robert Stalnaker (1978), based on Lewis’s

'Common knowledge' as a technical notion was introduced by David Lewis (1969) to account for how people coordinate with each other. Suppose A, B, and C agree to meet at city hall at noon. The three of them take it as common knowledge that they intend to go to city hall at noon if and only if: (1) all three believe that the agreement holds; (2) the agreement indicates to all of them that they believe the agreement holds; and (3) the agreement indicates to all of them that they intend to go to city hall at noon. In Lewis's terminology, the agreement is the 'basis' for A, B, and C's common knowledge that they intend to go to city hall at noon. Common knowledge is always a property of a community of people, even though the community may consist of just two people. The notion of 'common ground' was introduced, in turn, by Robert Stalnaker (1978), based on Lewis's


common knowledge, to account for the way in which information accumulates in conversation:

Roughly speaking, the presuppositions of a speaker are the propositions whose truth he takes for granted as part of the background of the conversation . . . Presuppositions are what is taken by the speaker to be the common ground of the participants in the conversation, what is treated as their common knowledge or mutual knowledge [p. 320, Stalnaker's emphases].

In this view, people in conversation take certain propositions to be common ground, and when they make assertions, they add to this common ground. When A tells B, George arrived home yesterday, A takes it as common ground with B who George is, what day it is, and where George lives. A uses the assertion to add to their common ground the proposition that George arrived home the day before. Common ground therefore also includes common (or mutual) beliefs, and common (or mutual) suppositions (Clark and Marshall, 1981; Clark, 1996).

Common ground is a reflexive, or self-referring, notion (Cohen, 1978). If A takes a proposition as common ground with B, then A takes the following statement to be true: A and B have information that the proposition is true and that this entire statement is true. (The sentence This sentence has five words is reflexive in the sense that this sentence refers to the sentence that contains it.) Because of the self-reference, people can, technically, draw an infinity of inferences from what they take to be common ground. Suppose A takes it that A and B mutually believe that George is home. A can infer that B believes that George is home, that B believes that A believes that George is home, that B believes that A believes that B believes that George is home, and so on ad infinitum. In practice, people never draw more than a few of these inferences. These iterated propositions are therefore a derivative and incomplete representation of common ground. The reflexive notion is more basic (Lewis, 1969; Clark and Marshall, 1981; Clark, 1996).
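The reflexive definition and Stalnaker's picture of accumulation both admit of a compact rendering. The display below is a supplied illustration rather than notation drawn from Lewis, Stalnaker, or Clark themselves: $\mathrm{Inf}_X(p)$ abbreviates 'X has information that p', and, on the Stalnakerian side, the common ground is modeled as a context set $c$ of possible worlds.

\[
\mathrm{CG}_{A,B}(p) \;\longleftrightarrow\; \mathrm{Inf}_{A}\bigl(p \wedge \mathrm{CG}_{A,B}(p)\bigr) \wedge \mathrm{Inf}_{B}\bigl(p \wedge \mathrm{CG}_{A,B}(p)\bigr)
\]

Unpacking the self-reference one step at a time yields the derivative iterated hierarchy $\mathrm{Inf}_{A}(p)$, $\mathrm{Inf}_{B}(p)$, $\mathrm{Inf}_{A}\mathrm{Inf}_{B}(p)$, $\mathrm{Inf}_{B}\mathrm{Inf}_{A}(p)$, and so on, of which, as noted above, interlocutors actually compute only the first few members. An assertion of $p$ that is accepted then updates the context set by intersection:

\[
c + p \;=\; c \cap \{\, w : p \text{ is true in } w \,\}
\]

The single reflexive clause is finite while the hierarchy it generates is not, which is one way of seeing why the reflexive notion, and not the iterated one, is treated as basic.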

Bases for Common Ground

In conversation and other joint activities, people have to assess and reassess their common ground, and to do that, they need the right bases. These bases fall into two main categories: community membership and personal experiences (Clark, 1996).

Communal Common Ground

Common ground is information that is common to a community of people. Some of these communities are built around shared practices or expertise, such as the

communities of ophthalmologists, New Zealanders, or English speakers. Once A and B mutually establish that they are both ophthalmologists, New Zealanders, or English speakers, they can take as common ground everything that is taken for granted in these communities. Even if A and B mutually establish that A is a New Zealander and B is not, they can take as common ground everything an outsider would think an insider should know about New Zealand. Common ground based on community membership is called 'communal common ground.'

Everybody belongs to many communities at the same time. Some of these communities are nested (e.g., North Americans, Americans, Californians, San Franciscans, Nob Hill residents), and others are cross-cutting (Californians, lawyers, football fans, Christians). Both nesting and cross-cutting communities lead to gradations in common ground. Any two Californians might readily presuppose common knowledge of the Golden Gate Bridge on San Francisco Bay, but only two San Franciscans would presuppose common knowledge of Crissy Field right next to it.

People have both direct and indirect ways of establishing which communities they jointly belong to. When people meet for the first time, they often begin by exchanging information about their occupations, residences, hobbies, and other identities. They display other communal identities indirectly – in their choice of language, dialect, and vocabulary; their choice of dress and accoutrements; and their age and gender. It is remarkable how many cultural identities people can infer as they talk and how useful these are in establishing communal common ground.

Personal Common Ground

The other main basis for common ground is joint experience. The joint experience may be perceptual. When A and B look at a candle together, they can take their joint experience as a basis for certain mutual beliefs – that there is a candle between them, that it is green, that it smells of bayberry, that it is lit. Or the joint experience may be linguistic or communicative. When A tells B (on April 8), George arrived home yesterday, and once they mutually establish that B has understood A, the two of them can take it as common ground that George arrived home on April 7. Common ground that is based on joint perceptual or linguistic experiences between two people is called their ‘personal common ground’. It often holds only for the two of them. Conversations and other joint activities depend on the orderly accumulation of personal common ground. Suppose A and B are assembling a television

Context and Common Ground 157

stand together. To succeed, they need to establish as common ground what each is going to do next. Part of this they accomplish linguistically, in their spoken exchanges, as when A proposes, Let’s put on this piece next, and B takes up the proposal, Okay. But other parts they accomplish perceptually, as when A hands B a board, screw, or screwdriver, or when A holds up a board and they examine it together. Most face-to-face conversations depend on a mix of linguistic and perceptual bases for the accumulation of personal common ground. Telephone conversations depend almost entirely on linguistic bases.

Language and Communal Common Ground

Communal common ground is fundamental to accounting for the conventions of language, what are termed the 'rules of language'. These include conventions of semantics, syntax, morphology, phonology, and pragmatics (Lewis, 1969). Speakers ordinarily try to use words that their addressees will understand, and that requires a 'shared lexicon.' The problem is that every community has its own 'communal lexicon' (Clark, 1996). Once A and B jointly establish that they are both speakers of English, they may presuppose common knowledge of a general English-language lexicon. But because other communities are nested and cross-cutting, so are the lexicons associated with them. There is a nesting of communities that speak English, North American English, New England English, and Bostonian. Although words such as dog and in are common to English in general, others are common only to one or another nested community; in Bostonian, for example, a barnie is a Harvard student. Indeed, every community (Californians, lawyers, football fans, ophthalmologists) has a specialized lexicon. The lexicon for lawyers includes tort, mortmain, and ne exeat. The lexicon for ophthalmologists includes tonometry, uveal, and amblyopia. To use barnie or mortmain is to take as common ground a Bostonian or legal lexicon. Communal lexicons are sometimes called jargon, dialect, patois, idiom, parlance, nomenclature, slang, argot, lingo, cant, or vernacular; or they consist of regionalisms, colloquialisms, localisms, or technical terminology (see Jargon).

Speakers also try to use syntactic constructions, or rules, that they share with their addressees. For example, in English generally, it is conventional to mention place before time (George is going to London tomorrow); yet in Dutch, a closely related language, it is conventional to mention place and time in the reverse order (Pim gaat morgen naar London, 'Pim goes tomorrow to London'). The rules of syntax,

however, vary by nested communities. It is conventional to say He gave it me in British English, but not in English generally. It is conventional to say My car needs washed in Western Pennsylvania English, but not in North American English. Many rules of syntax are tied to specific words in a communal lexicon, and these vary from one community to the next.

Speakers also try to use, or adapt to, the phonology of their cultural communities. Indeed, pronunciations vary enormously from one community to the next. The vowel in can't, for example, changes as one goes from British to North American English, from northern to southern dialects of American English, and even from one social group to another within a single school. Also, the same person may pronounce singing as singin' in an informal setting but as singing in a classroom or a court of law.

Discourse and Personal Common Ground

Personal common ground is essential to the processes by which people converse. To communicate is, according to its Latin roots, to make common – to establish something as common ground. To succeed in conversation, people must design what they say (1) against the common ground they believe they already share with their interlocutors and (2) as a way of adding to that common ground (Stalnaker, 1978). Two consequences of trying to make something common are 'information structure' and 'grounding.'

'Information structure' is a property of utterances. When A tells B, What the committee is after is somebody at the White House, A uses the special construction to distinguish two types of information (Prince, 1978). With the Wh-cleft What the committee is after, A provides information that A assumes B is already thinking about. It is one type of 'given information.' In contrast, with the remainder of the utterance 'is somebody at the White House,' A provides information that A assumes B doesn't yet know. It is 'new information.' Given information is assumed to be inferable from A and B's current common ground, whereas new information is not. New information is, instead, what is to be added to common ground. The way people refer to an object in a discourse (e.g., the committee, somebody at the White House) depends on whether they believe that the object is readily evoked, known but unused, inferable, or brand new in their common ground for that discourse (Prince, 1981).

'Grounding' is the process of trying to establish what is said as common ground (Clark and Schaefer, 1989; Clark and Brennan, 1991). When A speaks to B in conversation, it is ordinarily not enough for


A simply to produce an utterance for B. The two of them try to establish as common ground that B has understood what A meant by it well enough for current purposes. In this process, B is expected to give A periodic evidence of the state of his or her understanding, and A is expected to look for and evaluate that evidence. One way B can signal understanding is with back-channel signals such as uh-huh, yeah, a head nod, or a smile. Another way is with the appropriate next contribution, as when B answers a question asked by A. But if B does not manage to attend to, hear, or understand A's utterance completely, the two of them will try to repair the problem. One way is illustrated here:

A (on telephone): Can I speak to Jim Johnstone, please?
B: Senior?
A: Yes.
B: Yes.

In turn 2, B asks A to clear up an ambiguous reference in A’s question, and in turn 3, A does just that. Only then does B go on to answer A’s question. Turns 2 and 3 are called a ‘side sequence’ (Jefferson, 1972). Grounding takes many other forms as well. Common ground is central to accounts of language and language use. It is needed in accounting for the conventions, or rules, of language and to explain how people contribute to conversation and to other forms of discourse.

See also: Context; Conventions in Language; Cooperative Principle; Discourse Domain; Face; Honorifics; Human Reasoning and Language Interpretation; Jargon; Neo-Gricean Pragmatics; Nonmonotonic Inference; Nonstandard Language Use; Politeness; Politeness Strategies as Linguistic Variables; Pragmatic Determinants of What Is Said; Pragmatic Presupposition; Presupposition; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Semantic Change, the Internet and Text Messaging; Speech Acts.

Bibliography

Clark H H (1996). Using language. Cambridge: Cambridge University Press.
Clark H H & Brennan S A (1991). 'Grounding in communication.' In Resnick L B, Levine J M & Teasley S D (eds.) Perspectives on socially shared cognition. Washington, DC: APA Books. 127–149.
Clark H H & Marshall C R (1981). 'Definite reference and mutual knowledge.' In Joshi A K, Webber B L & Sag I A (eds.) Elements of discourse understanding. Cambridge: Cambridge University Press. 10–63.
Clark H H & Schaefer E R (1989). 'Contributing to discourse.' Cognitive Science 13, 259–294.
Cohen P R (1978). On knowing what to say: planning speech acts. Ph.D. diss., University of Toronto.
Jefferson G (1972). 'Side sequences.' In Sudnow D (ed.) Studies in social interaction. New York: Free Press. 294–338.
Lewis D K (1969). Convention: a philosophical study. Cambridge, MA: Harvard University Press.
Prince E F (1978). 'A comparison of Wh-clefts and It-clefts in discourse.' Language 54(4), 883–906.
Prince E F (1981). 'Towards a taxonomy of given-new information.' In Cole P (ed.) Radical pragmatics. New York: Academic Press. 223–256.
Stalnaker R C (1978). 'Assertion.' In Cole P (ed.) Syntax and semantics 9: Pragmatics. New York: Academic Press. 315–332.

Context Principle

R J Stainton, University of Western Ontario, London, Ontario, Canada
© 2006 Elsevier Ltd. All rights reserved.

It is a near truism of the philosophy of language that a word has meaning only in the context of a sentence; this principle is sometimes formulated as the claim that only sentences have meaning in isolation. This is the context principle, first emphasized in Western philosophy by Frege (1884), endorsed early on by Wittgenstein (1922: 51), and sanctioned more recently by Quine (1951: 42), among many others. The Principle and several different ways of understanding

it seem to have been foreshadowed in classical Indian philosophy. (See also Matilal and Sen, 1988.) In this article, I provide some background to the Principle, describe three ways of reading it (a methodological reading, a metasemantic reading, and an interpretational/psychological reading). I offer some reasons for endorsing the Principle, and some reasons for being skeptical. The heated exegetical controversies over Frege’s relationship to the Principle are not presented in this article. Some believe that Frege would have applied it to both sense and reference; others disagree. Some believe that Frege rejected the Principle in his later work, others that he retained it throughout. In addition, different authors


take Frege to endorse different readings of the Principle: nearly everyone would agree that he accepted the methodological reading, but it is less clear whether he endorsed the metasemantic or interpretational/ psychological reading. Such scholarly issues are not my concern in this article. For a thorough discussion, see Dummett (1981: 369ff, 1993a).

Sentence Primacy: Three Interpretations of the Context Principle

The context principle gives primacy to sentences. Specifically, sentences are taken to be semantically prior to the words that make them up. The Principle is, in this regard, a member of a family of theses that take some whole to be somehow 'prior' to its parts. As with all such doctrines, one obtains a holistic primacy thesis by specifying what the whole is, what its parts are, and in what sense the former is prior to the latter. Most important for present purposes, one can mean different things by 'prior.' Of particular interest here, one can take sentences to be methodologically prior, metasemantically prior, or interpretationally prior to the words that compose them.

Let me begin with the methodological reading of the Principle. In his Foundations of Arithmetic, Frege (1884: x) famously promised to keep to the following fundamental constraint: ''never to ask for the meaning of a word in isolation, but only in the context of a sentence.'' Taken as a methodological precept, this principle essentially tells the lexical semanticist only to contemplate the effect that a word can have on sentences in which it may be embedded. For instance, to find out the meaning of the word 'one' (an example of great interest to Frege), the lexical semanticist should reflect upon such questions as the following: What whole sentences containing 'one' have in common (e.g., ''One apple fell'' and ''One dog died''); how sentences that contain words slightly different from 'one' differ systematically in meaning from maximally similar sentences containing 'one' (e.g., ''One dog died'' versus ''No dog died''); and so on. What the lexical semanticist should never do is try to figure out the meaning of 'one' just by thinking about it – that phrase – in isolation (where in isolation means not embedded in any larger syntactic structure).

The second reading of the context principle considered in this article is the metasemantic reading. A metasemantic view is a view about the source of meaning. It poses an ''in virtue of what'' question. Here's an example. Suppose we ask:

(1) In virtue of what is the sound /to:fu/ meaningful? In virtue of what does it mean ''a pale curd of varying consistency made from soybean milk,'' rather than ''sea lion'' or ''watch''?

Notice that we are not asking, in (1), what the sound /to:fu/ means. Rather, we are asking why it means what it does. Nor is this the causal-historical question about the steps whereby /to:fu/ came to have this meaning. It is, instead, the issue of what more primitive present facts make for this less primitive present fact: how do the ‘higher’ facts emerge from ‘lower’ ones? For example, compare these two questions: what makes it the case that things have the monetary value they do, or what makes it the case that certain things are illegal, or rude, or immoral? These too are ‘‘in virtue of what’’ questions. Some philosophers seem to have taken from Frege’s discussion of ‘‘not asking for the meaning of a word in isolation’’ a claim about what makes words meaningful and what makes them have the meaning they do. The claim is that, fundamentally, only sentences have meaning. This is not to say that subsentences are gibberish. Rather, the entities that have meaning in the first instance are sentences. Unlike the first reading of the Principle, this doctrine is not about where one should look to find out about meaning; it is, rather, a doctrine about where meaning comes from, i.e., the basic source of meaning. What the Principle says is that the only things that have meaning non-derivatively are sentences, so it must be in virtue of their role within sentences that subsentential expressions have meaning at all. Here is the same idea put another way: suppose that some expressions obtain their meaning from how they alter the meanings of larger wholes. Suppose, indeed, that this is how words/phrases obtain their meaning; they therefore have meaning only derivatively, not fundamentally. Now, it cannot be the case that all expressions obtain their meaning in this way or there would be an infinite regress. The claim says that the things that have meaning non-derivatively are sentences. Does this mean that one must first grasp the meaning of each of the infinite number of sentences in the language and only then solve for word meanings? No, not least because doing so is not humanly possible. To avoid this problem, proponents of the metasemantic version of the context principle can make several claims. First, they may insist on a sharp difference between (1) a psychological story about how humans grasp word and sentence meanings and (2) a philosophical story about the metaphysical underpinnings of word and sentence meaning. They may then eschew any claims about the first of these, stressing that they only mean to address the second (see Dummett, 1973: 4 for this approach). Second, the proponents of the context principle, read metasemantically, could propose that there is some finite cluster of simple sentences, the meaning of which one grasps from use; one then presumably solves for the meaning of


the words and for the contribution of syntax, using just those sentences. Performing this finite task then gives the person the capacity to understand new sentences, a potential infinity in fact, on the basis of the (familiar) words in the (unfamiliar) sentences and how those words are structured. Either move would save the proponents of the metasemantic thesis from endorsing the absurd view that one first understands all sentences and only then understands any words.

So far we have examined two readings of the context principle. The first was merely methodological, a claim about how to find out what particular words mean: To find word meanings, look at what they contribute to sentences. The second reading was metasemantic, a claim about why words have the meanings they do: words only have meaning because of how they affect sentence meanings. The third reading of the Principle is interpretational/psychological. It is an empirical claim about the psychology underlying comprehension. Dummett (1993b: 97) discusses the view that ''it is possible to grasp the sense of a word only as it occurs in some particular sentence.'' In a way, this reading of the Principle is the most straightforward of the three: the idea underlying it is that the only things we are psychologically able to understand are whole sentences. Put in terms of generative capacity, the claim would amount to this: the only things that our semantic competence generates are meanings for whole sentences; it does not output meanings for words/phrases (though it presumably uses word/phrase meanings in generating meanings for whole sentences, they just are never 'output'). Thus, we can understand words only when they are spoken within whole sentences.

Even this most straightforward of the three readings admits of further subreadings, however. Dummett (1993b: 109), for instance, contrasted two varieties of ''grasping a sense,'' one dispositional and the other occurrent. He granted that one may dispositionally grasp the sense of a subsentence outside the context of any sentence. However, he apparently denied – or anyway, has Frege deny – that one can, in the occurrent sense, grasp the sense of a word/phrase without grasping the sense of a sentence within which that word/phrase occurs. This would mean that one could ''know the meaning'' of a word in isolation, but that whenever one put that knowledge to work, in actual understanding, it would have to be in grasping a sentential content. This last is what the context principle would come to, on this weaker subreading of the interpretational/psychological principle.

Motivating the Context Principle

Having explained three senses in which one could take whole sentences to be prior to the words that

make them up, let us consider reasons for endorsing sentence primacy. Some of these reasons support just one reading of 'priority.' Some support more than one. Given the limited space, I present only three such reasons and for the most part leave for the reader the question of which reason supports which reading of ''Sentences are prior.''

Frege believed that, in failing to obey his methodological constraint, ''one is almost forced to take as the meanings of words mental pictures or acts of the individual mind'' (Frege, 1884: x). Thus in the case of number-words, the failure to respect the principle could easily lead one to suppose that 'one' stands for a mental item, and hence that mathematics is somehow about mental entities, which in Frege's view is an extremely serious error (see Frege, 1884: 116). However, when one obeys the principle, one comes to the right view: the meaning of a word is not some idea that we associate with it, but is instead the thing that the word contributes to the meaning of larger expressions. Frege writes (1884: 71):

That we can form no idea of its content is therefore no reason for denying all meaning to a word, or for excluding it from our vocabulary. We are indeed only imposed on by the opposite view because we will, when asking for the meaning of a word, consider it in isolation, which leads us to accept an idea as the meaning. Accordingly, any word for which we can find no corresponding mental picture appears to have no content. But we ought always to keep before our eyes a complete proposition. Only in a proposition [Satz] have the words really a meaning.

So, one advantage of endorsing the Principle is that it keeps us from making such a mistake. Consider this related motivation: starting from the top – focusing on whole sentence meanings and only then considering what the parts must mean, in order for the observed whole meaning to be generated – opens up the possibility of novel and surprising accounts of what the parts mean. Indeed, it becomes possible to conceive of syntactic parts that, though they have some sort of impact on meaning, do not themselves have a meaning in isolation. Such parts receive only what is called a ‘contextual definition.’ This concept is best explained by appeal to an example. If we start by looking at the phrasal parts of ‘‘The king of France is bald,’’ asking what they mean, it can seem inevitable that the phrase, ‘‘The king of France,’’ must stand for an object. What else could its meaning be, in isolation? This, of course, raises all manner of ontological issues: What is this bizarre object, since there is, in reality, no king of France? How can such an unreal entity be bald or not, so as to render this sentence true or false? And so on. Crucially, however, if we pursue the methodology suggested


here and start with the whole sentence, we may notice, with Russell (1905), that the sentence as a whole means the following: there is exactly one king of France, and every king of France is bald. We may further notice that this whole meaning can be generated without assigning any reference at all to the phrase, ''The king of France.'' This is not to say that this phrase makes no difference to what the whole means; patently it does make a difference. However, in place of a meaning-entity for ''The king of France,'' all we need is a rule, a contextual definition, that says:

(2) A sentence of the form ''The F is G'' is true iff exactly one thing is F and everything that is F is G.
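In the predicate-logic notation now standard for Russell's analysis (our gloss; the article states the rule only in prose), (2) comes to:

∃x(Fx ∧ ∀y(Fy → y = x) ∧ Gx)

that is, something is F, nothing else is F, and it is G – a truth condition in which no single constituent corresponds to ''The F.''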

Taking this contextual definition to be the meaning-determining rule, we simply avoid the issue of what the phrase, ''The king of France,'' stands for, since the phrase itself, upon analysis, does not contribute a constituent to the whole meaning. Another methodological advantage of the context principle, then, is that it is rather easier to arrive at this kind of contextual definition than if we begin with what the parts mean, in isolation.

A second kind of advantage is that, by strictly obeying the context principle, we automatically meet a key constraint of semantic theories: compositionality. Roughly speaking, compositionality says that the meaning of a whole expression is exhausted by (1) what its parts mean, and (2) how those parts are put together (see Compositionality for more details). Compositionality is accepted as a constraint for two related reasons. First, insofar as these are the sole determinants of whole meanings, we can explain why people understand complex expressions that they have never encountered before: they understand them by calculating the whole meaning from precisely these two elements, both of which are familiar. Second, were whole meanings not compositional, it would be an utter mystery how we finite beings could in principle know the meaning of the infinite number of sentences that, though we have never heard them, we would, but for our finite lifetime and memory, be capable of understanding. That is, compositionality accounts for an observed ability in practice and a different though related ability in principle. Notice, however, that compositionality is one side of a coin, the other side of which is the context principle. Compositionality says that whole meaning is entirely a function of part meanings plus structure:

(3) Whole meaning = part meanings + structure

The context principle employs this same equation to solve for a part meaning, i.e., taking part meaning to

be entirely determined by the whole meaning, the meanings of the other parts, and the structure:

(4) Part-meaningᵢ = Whole meaning − (other part meanings + structure)
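As a toy illustration (ours, not the article's), pretend that meanings are numbers and that combination really is addition, as the simplified notation suggests; then (4) is just solving an equation for one unknown:

```python
# Toy illustration (ours, not the article's): meanings as numbers, combination
# as literal addition, per the simplified equations (3) and (4).

def part_meaning(whole, other_parts, structure):
    """Equation (4): recover one part meaning from everything else."""
    return whole - (sum(other_parts) + structure)

m = part_meaning(whole=10, other_parts=[3, 2], structure=1)
print(m)                # -> 4: the remaining part meaning is fixed
print(m + (3 + 2) + 1)  # -> 10: recombining recovers the whole, as (3) demands
```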

So, if we assign part meanings in line with (4), the context principle, we cannot help but get the desired result vis-à-vis (3), i.e., compositionality. (Note: obviously the manner of combination of part meanings and structure is not literally addition. Nevertheless, I use the symbols '+' and '−' to simplify presentation.) Automatically satisfying the compositionality constraint in this way is thus another advantage of endorsing the context principle.

A third kind of motivation for endorsing the Principle is that it seems to be connected with several other holistic primacy theses, each of which is allegedly independently motivated. (Unfortunately, space does not permit me to explain what the independent motivation is for these other theses. See Brandom, 1994, chapter 2, sections II and III, for discussion and an overview of the relations among these various primacy claims.) Kant (1787) famously insisted that judgment is prior to perception of individuals: seeing that María is a female, a person, tall, and the like is prior to seeing María. Put otherwise, whereas classical empiricists started with representations of individual objects and of universals and then built up complex mental representations that could be true/false, Kant turned this on its head: the whole representation (i.e., what is judged) is prior to the object-denoting parts that make it up. The early Wittgenstein (1922: 31) also insisted that facts are prior to the objects and properties that make them up: ''the world is the totality of facts, not of things.'' In a related move, Dummett (1973) has urged, following the later Wittgenstein, that the practice of assertion – and other full-fledged ''moves in the language game'' – is prior to the act of referring. As Wittgenstein (1953: 24) put it:

For naming and describing do not stand on the same level: naming is a preparation for description. Naming is so far not a move in the language-game – any more than putting a piece in its place on the board is a move in chess. We may say: nothing has so far been done, when a thing has been named. It has not even got a name except in the language-game. This was what Frege meant too, when he said that a word had meaning only as part of a sentence.

Adopting these primacy theses can, each in their own way, lead one to expect sentences to be primary as well. The idea goes: what is judged are sentential representations; the linguistic item that corresponds


to a fact is a sentence, and the linguistic item that we assert with is the sentence. Dummett's point about sentence use deserves to be expanded upon, since it underlies several of the points made above. Dummett suggested that the only things that can be used in isolation – that is, used without being embedded in a larger structure – are sentences. He wrote (1973: 194):

A sentence is, as we have said, the smallest unit of language with which a linguistic act can be accomplished, with which a ''move can be made in the language-game'': so you cannot do anything with a word – cannot effect any conventional (linguistic) act by uttering it – save by uttering some sentence containing that word.

Yet, as a famous Wittgensteinian slogan says, meaning comes from use (see Wittgenstein, 1953 and elsewhere). Thus, the things that have meaning fundamentally have it because of their use: an expression has the non-derivative meaning that it does because of the kinds of actions speakers can perform with it. However, as suggested just above, those just are the sentences. So words must get their meaning because they appear in meaningful sentences. Dummett, expanding on this Wittgensteinian theme, put the general lesson as follows:

Indeed, it is certainly part of the content of the dictum [i.e., the context principle] that sentences play a special role in language: that, since it is by means of them alone that anything can be said, that is, any linguistic act (of assertion, question, command, etc.) can be performed, the sense of any expression less than a complete sentence must consist only in the contribution it makes to determining the content of a sentence in which it may occur (1973: 495; see also Dummett, 1993a).

A Possible Objection to the Context Principle

Having noted three kinds of reasons for embracing the context principle, let me end with an objection that may come immediately to mind. First, it seems that adults speak in subsentences all the time. I see a woman wearing a lovely garment and say to my wife, ''Nice dress.'' I receive a letter in the mail, hold it up, and say to my companion, ''From Spain.'' Such talk is absolutely ubiquitous. (For empirical support, see the papers in Elugardo and Stainton, 2004, and the many references cited there; for an overview, see Stainton, 2004.) Second, children learning a language seem to start with subsentences – which makes it equally hard to see how grasping a sentential meaning could be a prerequisite for grasping a subsentential one. Let us consider the problem that such subsentential speech might pose for the Principle.

Start with the methodological reading. It is a bit strong to demand that one never consider the word in isolation if words/phrases can be used unembedded to perform speech acts. More appropriate, and still in the broadly Fregean spirit, would be this claim: never only consider the word in isolation, but instead also consider its behavior when embedded in whole sentences. Non-sentential speech does not conflict with this latter, more inclusive, methodological precept. In addition, the methodological point of the context principle – to cure one of the habit of taking mental images and such as meanings – is met even on this weaker reading. Hence subsentence use actually poses no problems for the Principle, on this first reading. What of the metasemantic doctrine? Notice that a key premise in the argument for the doctrine was that only sentences can be used to perform speech acts. Words and phrases cannot be: that is why they were denied meaning, fundamentally speaking. Yet, this key premise looks false, if words really can be used in isolation. Therefore, without this premise, some other argument must be given for the conclusion that only sentences have meaning fundamentally. Thus subsentence use, if genuine, does not falsify the Principle read in this way, but it does leave one in need of an empirically adequate argument for meaning having to come from sentences alone. It might seem that a better argument for the claim that meaning must still come from sentences is at hand: Surely this doctrine is required to preserve compositionality. As I stressed above, you do not get (3) above unless you also accept (4) and (4) requires that word meanings – the meaning of the parts – not exceed what they contribute to full sentences. In fact, however, compositionality does not, on its own, support the metasemantic doctrine, which makes two claims: first, sentences are a metaphysical source of word meaning, and second, they are the only such source. Neither of these claims, however, can be inferred from compositionality per se. All (4) gives us is a constraint: Whatever story we tell about where a word’s meaning comes from, it must be consistent with sentence meanings being exhausted by what their parts mean. This does not support any claim about sources. Moreover, if words are used in isolation, then, though sentence use might be one source, it surely would not be the only one. To see why compositionality does not, taken alone, support the metasemantic doctrine, consider an analogy. Take this proposal: facts about what art works are beautiful derive from facts about what works are attractive to (most) art experts. That is, it is in virtue of the judgment of (most) experts that art works are beautiful or not. Suppose one tried to


defend this meta-esthetic view by saying: ''Look, it can't be that most genuine experts are wrong about what's beautiful. They wouldn't be experts otherwise.'' This defense would not really succeed as an argument for the meta-esthetic view because, even granting it, one could only infer that it is a constraint on where beauty comes from that most experts are right about what is beautiful. This fact would not, on its own, support the idea that beauty comes from expert judgment. Nor would it support the even stronger idea that beauty comes solely from expert judgment. In the same way, compositionality may well impose a constraint on metasemantic theories: one might well contend that any successful metasemantics must have whole meanings exhaustively determined by part meanings and linguistic structure. Yet, one cannot go from such a constraint immediately to conclusions about where meaning-facts emerge from; still less can one move from such a constraint to a conclusion about the sole thing from which they emerge. In sum, given subsentential speech, we are still in need of a reason for embracing the metasemantic reading of the context principle.

Let me now make a brief detour into a related issue. One reason that it matters whether the metasemantic doctrine is upheld is this: If sentence meaning is the only source of word meaning, then it is arguable that the latter is indeterminate. That is, there might be no fact of the matter about what individual words ''really mean.'' The argument goes like this. We can hold constant the meaning of every sentence in the language while varying the contribution that we assign to the words within those sentences. To give a highly simplified example, one way to assign the right meaning to the Spanish sentence, ''María no fuma'' [''María doesn't smoke''] is to assign the person MARIA to 'María', SMOKES to 'fuma', and DOESN'T to 'no'. Another way, which still gives the right meaning for the whole sentence, is to assign the person MARIA to 'María no' and DOESN'T SMOKE to 'fuma'. Now, with respect to this highly simplified example, we can find reasons for picking the first over the second option: 'fuma', 'no' and 'María' show up in lots of sentences, and their contribution in those other sentences is, surely, SMOKES, DOESN'T, and MARIA, respectively. So that is what they contribute here too. However, suppose we revised our view of the meaning of the other parts in all sentences containing 'fuma', 'María' and 'no'. Surprisingly, it has been suggested that this sort of rearrangement is something we could systematically do. The result would be that the complete set of sentences containing a given word leaves us with various options about what the word means. Further, assuming that the meaning of all sentences in which a

word occurs is the sole thing that metaphysically determines its meaning, there can be no single thing that is ''the meaning of 'fuma.''' This is the thesis of indeterminacy (see Quine, 1960 and Putnam, 1981 for worked-out examples).

I introduce the indeterminacy thesis because it highlights the sense in which the metasemantic version of the context principle says more than ''the meanings one assigns to words must fit with the meanings one assigns to sentences containing those words.'' It also says that the word meanings are exhausted by sentence meanings – in a way that can lead to indeterminacy. In contrast, if word meanings depend also upon how words are used on their own, then even if the complete set of sentence meanings does not fix the meaning of individual words, we cannot yet conclude that word meaning is indeterminate. For word meaning might be more completely fixed by how words in isolation are used (for more on this connection between the context principle and indeterminacy, see Stainton, 2000).

We have seen that subsentence use is consistent with the methodological reading of the context principle. It is also consistent with the metasemantic reading, though it leaves this latter doctrine in need of an empirically adequate supporting argument. Consider finally the interpretational/psychological doctrine. It says that, as a matter of our psychology, we cannot understand a word, when uttered, unless it is embedded in a sentence. This reading of the context principle seems simply false, given the existence of subsentential speech. There is no hope for making it consistent with genuine subsentence use. Apparently, hearers understand subsentential expressions in isolation; hence their semantic competence must generate a meaning for such expressions in isolation. The best hope for the Principle read in this strongest way is thus to deny that the phenomenon of subsentential speech is genuine: adults do not actually speak in subsentences, they merely appear to do so. What is really going on is that adults speak 'elliptically' in some sense – they produce sentences, but those sentences somehow ''sound abbreviated'' (see Stanley, 2000 for this sort of idea). As for children, who seem to grasp word meanings long before they grasp the meanings of any sentences, proponents of the interpretational reading of the context principle must make some fairly implausible suggestions. They may insist that children actually do understand sentence meanings even though they do not speak in sentences; or they may claim that what children mean by their words (e.g., 'doggie') is not what the adult word means. The child's expression, they might insist, is actually a one-word sentence meaning, ''There is a dog,'' and hence is not synonymous with our word.


(That is, on this second disjunct, the idea would be that children actually do not employ/understand our words outside sentences, but rather they employ homophonous sentences – until, that is, they are also competent with our sentences.)

Does this inconsistency with the interpretational/psychological reading mean that the other primacy doctrines – of judgment, facts, and assertion – are also required to make these implausible empirical claims? After all, it was suggested that those doctrines supported sentence primacy. The answer is no, because these other primacy doctrines really do not entail anything about only sentences being used and only sentence meanings being graspable occurrently. At best what they lend credence to is the primacy of a certain sort of content, namely the proposition. For, strictly speaking, it is propositions that are judged, propositions that correspond to facts, and propositions that are exchanged in assertion. Further, subsentential speech does not call the centrality of propositions into question: When I say ''Nice dress'' or ''From Spain,'' I still convey something propositional; that is, a proposition about the salient dress to the effect that it is nice, and a proposition about the letter to the effect that it is from Spain, respectively. I merely do so using linguistic expressions that are not propositional. So, subsentential speech leaves proposition primacy intact. To move immediately and without further argument to any conclusion about the syntactic structures that (purportedly) express propositions, however, is to commit some kind of global use/mention error, running together features of a content (i.e., a proposition) with features of its supposed linguistic 'vehicle' (i.e., a sentence). In short, even if one takes judgments, facts, or assertions to be primary, one need not endorse the context principle vis-à-vis interpretation – since the latter is about the centrality of a certain class of syntactic items.

In summary, I have presented three different ways of reading the context principle: methodological, metasemantic, and interpretational/psychological. I then noted three rationales for embracing the Principle: to avoid the errors of psychologism, to enforce compositionality, and because of links to other independently motivated 'primacy doctrines.' I ended with an objection to the Principle, from non-sentence use. The suggested result, in the face of this objection, was two parts consistency and one part inconsistency: (1) the first reading of the Principle would be untouched, (2) the second would be left unsupported, but (3) the third reading would be outright falsified, so that the proponent of this reading of the Principle must make some

(implausible) empirical claims to the effect that people do not actually speak subsententially.

See also: Coherence: Psycholinguistic Approach; Cohesion and Coherence; Compositionality; Concepts; Context; Context and Common Ground; Conventions in Language; Cooperative Principle; Default Semantics; Dictionaries and Encyclopedias: Relationship; Human Reasoning and Language Interpretation; Indeterminacy; Intention and Semantics; Mood, Clause Types, and Illocutionary Force; Nonstandard Language Use; Propositions; Prosody.

Bibliography

Brandom R (1994). Making it explicit. Cambridge, MA: Harvard University Press.
Dummett M (1973). Frege: philosophy of language. Cambridge, MA: Harvard University Press.
Dummett M (1981). The interpretation of Frege's philosophy. London: Duckworth.
Dummett M (1993a). 'The context principle: centre of Frege's philosophy.' In Max I & Stelzner W (eds.) Logik und mathematik. Berlin: De Gruyter. 3–19.
Dummett M (1993b). Origins of analytical philosophy. Cambridge, MA: Harvard University Press.
Elugardo R & Stainton R (eds.) (2004). Ellipsis and nonsentential speech. Dordrecht: Kluwer.
Frege G (1884). Foundations of arithmetic (2nd rev. edn., 1978). Austin J L (trans.). Oxford: Blackwell.
Kant I (1787). Critique of pure reason. Smith N K (trans., 1929). New York: St. Martin's Press.
Matilal B K & Sen P K (1988). 'The context principle and some Indian controversies over meaning.' Mind 97, 73–97.
Putnam H (1981). Reason, truth and history. Cambridge: Cambridge University Press.
Quine W V O (1951). 'Two dogmas of empiricism.' From a logical point of view. Cambridge, MA: Harvard University Press. 20–46.
Quine W V O (1960). Word and object. Cambridge, MA: MIT Press.
Russell B (1905). 'On denoting.' Mind 14, 479–494. Reprinted in Marsh R C (ed.) (1956). Logic & knowledge. London: Unwin Hyman.
Stainton R J (2000). 'The meaning of ''sentences.''' Nous 34(3), 441–454.
Stainton R J (2004). 'The pragmatics of nonsentences.' In Horn L & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 266–287.
Stanley J (2000). 'Context and logical form.' Linguistics & Philosophy 23, 391–434.
Wittgenstein L (1922). Tractatus logico-philosophicus. Ogden C K (trans.). London: Routledge & Kegan Paul.
Wittgenstein L (1953). Philosophical investigations. Oxford: Blackwell.


Conventions in Language

M Kölbel, University of Birmingham, Birmingham, UK
© 2006 Elsevier Ltd. All rights reserved.

Independently of the question of what exactly linguistic meaning is (see Philosophical Theories of Meaning), a question arises as to the nature of its attachment to linguistic expressions: why does the word 'banana' mean what it does rather than something else? Why doesn't some other word have the meaning that 'banana' actually has? The answer almost everyone agrees upon is that it is a matter of convention that words mean what they do. Had there been different conventions of language, then words would have had different meanings. Views diverge, however, on the significance of the conventionality of language, on the question of what exactly a convention of language is, and on the extent to which meaning is conventional (as opposed to, say, inferential). In what follows the focus will be mainly on the second of these issues, i.e., on the nature of linguistic conventions.

Convention and Analyticity

In the background of current thinking on language conventions is the attempt of the logical empiricists to explain a priori knowledge as knowledge of analytic truths, i.e., propositions that are true in virtue of meaning (Carnap, 1947; Ayer, 1946). The truths of arithmetic are an example: while Kant had thought they were synthetic (not true in virtue of meaning), Ayer and Carnap followed Frege in claiming that they are analytic, i.e., true by definition. Carnap extended this approach to modality: necessary truths are just those that are true in virtue of linguistic rules. Conventionalism was opposed by Quine, who argued against Carnap that there is no coherent way of drawing a distinction between analytic and synthetic truths (Quine, 1951, 1960). According to Quine, it is impossible to separate the conventional from the empirical ingredient of any truth, because every attempt to explicate analyticity will ultimately rely on some other inexplicable semantic notion, such as synonymy or possibility.

The debate between Carnap and Quine forms the historical background for recent efforts to explain in detail how language is conventional. The most influential account is that by David Lewis (1969, 1983), who provided a game-theoretic account of convention in general and then explained the specific nature of conventions of language within this framework. However, Lewis's account built on Grice's earlier analysis of linguistic meaning in terms of speaker intentions.

Grice

Grice claimed that linguistic meaning is ultimately a matter of the communicative intentions of speakers (Grice, 1989). Grice started by defining a notion of speaker meaning ('non-natural meaning') in terms of speaker intentions and then analyzed the meaning of expression types in terms of their use by speakers to speaker-mean something with them (see Expression Meaning vs Utterance/Speaker Meaning). He defined speaker-meaning as follows: a speaker S speaker-means that p by uttering s just if in uttering s S intends his or her audience to think that (S believes that) p on the basis of the audience's recognition of that very intention (Grice, 1989: 213–223, 123). For Grice, the meaning of expression types depended on what speakers in a speech community use these types to speaker-mean on particular occasions of use. A little more precisely, the dependence is as follows: a sentence type s means that p in a community C just if members of C have the habit of speaker-meaning that p by uttering s, and they retain the habit conditionally upon other members doing likewise. In short, words mean what they do because speakers use these words habitually with certain communicative intentions, and this habitual procedure is conditional upon other speakers doing likewise. (For the fine details of the account, see Grice, 1989: 124–128.)

Lewis

Grice's analysis of linguistic meaning in terms of speaker intentions was initially perceived to be in competition with accounts offered by formal semanticists (see e.g., Strawson, 1969). The formal semanticist's central notion in the explanation of meaning is not that of intention but that of a 'truth condition.' However, it now seems that the two approaches can complement each other, and need not be viewed as competitors. Formal semanticists study artificial languages (which often serve as models of fragments of natural languages) with the aim of elucidating phenomena of compositionality (see Formal Semantics). Grice's framework does not address questions of compositionality, but it can in fact accommodate the formal semanticists' approach. David Lewis's theory of linguistic conventions not only showed how the insights of formal semantics can be appropriated within Grice's theory of communicative intentions;


it also offered a detailed explication of the notion of convention itself (Lewis, 1969, 1983). According to Lewis, there is a vast range of possible languages. Restricting himself initially to simple cases (languages with only context-insensitive declarative sentences), Lewis thought of a possible language as a function from a domain of sentences into a range of truth conditions. Many of the languages described by formal semanticists are possible languages in this sense. Most possible languages, however, are not used by anyone. According to Lewis, this is where convention plays a key role. He used his game-theoretic notion of convention to specify under what conditions a possible language is an actual language, i.e., is actually used by a population of language users.
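To make this concrete, here is a minimal sketch (our illustration, not Lewis's own formalism) of two possible languages as functions from context-insensitive declarative sentences to truth conditions, with a truth condition modeled as a function from a 'world' to a truth value:

```python
# Minimal sketch (ours, not Lewis's formalism): a 'possible language' as a
# mapping from declarative sentences to truth conditions, where a truth
# condition is a function from a world (a dict of facts) to a bool.

L1 = {
    "Il pleut": lambda w: w["raining"],
    "Il fait froid": lambda w: w["cold"],
}

# A distinct possible language pairing the same sentences with different
# truth conditions:
L2 = {
    "Il pleut": lambda w: w["cold"],
    "Il fait froid": lambda w: w["raining"],
}

world = {"raining": True, "cold": False}
print(L1["Il pleut"](world))  # True in L1
print(L2["Il pleut"](world))  # False in L2: the same sentence, a different
                              # possible language
```

Most such functions are used by no one; Lewis's question is what makes one of them the language actually used by a given population.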

Lewis's General Notion of Convention

Any word could in principle be used to mean anything. If two language users are to communicate successfully they therefore need to coordinate their use of words and make sure they use the same words with the same meaning. This type of situation, where several agents have a common interest in coordinating their actions, is called a 'coordination problem.' Conventions are a way of solving coordination problems – linguistic conventions are just a special case of this more general phenomenon. According to Lewis, conventions (linguistic or not) are regularities in the behavior of the agents of a population. These regularities arise from the common interest of the agents to coordinate their actions and are sustained because each agent expects the others to conform to the regularity and prefers to conform him- or herself if the others conform. There are potential alternative regularities which could also secure coordination, hence the need for a convention. For example, if our phone conversation is interrupted and we have the common aim of continuing the conversation, then there are two alternatives: either I phone back and you wait, or you phone back and I wait. No other combination of actions will achieve our common aim. Each of us prefers to phone back if the other waits and prefers to wait if the other phones back. But how do we know what the other is doing? If the problem is a recurrent one, then a convention can help. For example, if each of us expects the other to phone back just if the other was the original caller and not to phone back otherwise, then each of us will prefer to phone back if and only if he or she was the original caller. Lewis's definition of convention is roughly as follows (see Lewis, 1983: 165 for full details): a regularity R is a convention in a population P, just if

1. everyone conforms to R;
2. everyone believes that the others conform to R;
3. the belief that the others conform to R gives everyone a decisive reason to conform to R him- or herself;
4. R is not the only regularity meeting (3);
5. (1)–(4) are common knowledge among P: they are known to everyone, it is known to everyone that they are known to everyone, etc.
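Clause (4) – the existence of an alternative regularity – is what makes conformity a matter of convention rather than of unique rational necessity. A minimal game-theoretic sketch (ours, not Lewis's) of the interrupted-phone-call case shows two equally good solutions:

```python
# Minimal sketch (ours, not Lewis's) of the interrupted phone call as a
# coordination game. Each player either calls back or waits; the call resumes
# (payoff 1 each) only if exactly one of them calls back.

PAYOFF = {
    ("call", "wait"): (1, 1),
    ("wait", "call"): (1, 1),
    ("call", "call"): (0, 0),  # both dial and get a busy signal
    ("wait", "wait"): (0, 0),  # both wait and the call never resumes
}
OTHER = {"call": "wait", "wait": "call"}

def is_equilibrium(a, b):
    """Neither player gains by unilaterally switching strategies."""
    pa, pb = PAYOFF[(a, b)]
    return pa >= PAYOFF[(OTHER[a], b)][0] and pb >= PAYOFF[(a, OTHER[b])][1]

print([pair for pair in PAYOFF if is_equilibrium(*pair)])
# -> [('call', 'wait'), ('wait', 'call')]: two solutions, so a convention
#    (e.g., 'the original caller calls back') is needed to select one.
```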

Conventions of Language

Lewis used the above definition of convention in his explication of what it is for any of the many possible languages (as described by a semantic theory) to be the language actually used by a population. According to Lewis, a population P uses a possible language L just if members of P have a convention of uttering sentences of L only if they are true in L, and of coming to believe in the truth in L of sentences that are uttered by others. The relevant coordination problem for a population here is the problem of converging on one possible language. It is in the interest of each member to use the language the other members are using because there is a common interest in communication. Lewis called this a ''convention of truthfulness and trust.''

There are some difficulties of detail that can be resolved by further refinements. For example, the proposal as sketched above does not take into account indexical languages or languages with nondeclarative sentences (e.g., interrogative sentences). Lewis himself discussed how his approach can be suitably extended (Lewis, 1983). Another difficulty is the fact that too few speakers try to utter only sentences that are true in their language, and similarly too few speakers believe everything they are told. There is therefore no convention of truthfulness and trust in English among English speakers. Lewis's account can be modified to deal with this problem. For example, instead of saying that users of a language try to utter only sentences that are true in that language, Lewis could say that they utter sentences only if they accept, or want to commit themselves to, their truth for the purposes of the conversation.

A Basic Difficulty for Grice–Lewis

There are also some more fundamental difficulties, which concern the basic assumptions on which the Grice–Lewis approach is built. It is part of both Grice's and Lewis's accounts to attribute to language users highly complex mental states. On both


accounts, language users are required to have unrealistically complex iterated preferences and beliefs concerning other language users (see definitions above). Typical language users, however, do not report these mental states. Lewis’s response to these doubts concerning the psychological reality of these mental processes was to say that they are merely ‘potential’: users would explicitly have these cognitive states if they bothered to think hard enough (Lewis, 1983: 165) and presumably they would also be able to report these intentions if they thought hard enough. However, it is unclear whether the phrase ‘hard enough’ is substantial enough to render the theory empirically testable. Would Lewis accuse anyone denying the psychological reality of the account of not thinking hard enough? Some psychological findings seem to add weight to this line of objection. The fundamental assumption behind Grice’s and Lewis’s approaches is that linguistic behavior is a product of a special case of instrumental reasoning. This much seems to be implied by Grice’s idea that linguistic meaning is a matter of communicative intentions and linguistic behavior a special case of intentional action. As Laurence (1996) pointed out, however, there are cases which suggest that language processing and instrumental reasoning are independent faculties. A disability in instrumental reasoning can be accompanied by full linguistic abilities. Conversely, lack of linguistic abilities can be accompanied by fully functioning instrumental reasoning.

Chomskyan Accounts of Linguistic Convention

A Chomskyan view of language processing lends itself to a different account of linguistic convention. Any account of linguistic convention needs to preserve the idea that what a given word means is a contingent and largely arbitrary matter; that words could have meant something other than what they actually mean, and that other words could have meant what they actually do. Laurence (1996) argued that a Chomskyan view does preserve this idea. On such a Chomskyan view, language processing is performed by a special language-processing faculty. This faculty processes language at various levels, phonologically, syntactically, and semantically. At each level, the faculty associates certain representations with utterances. On this view, one might say that the various representations the language faculties of a group associate with a given utterance determine that utterance's meaning in the language of that group. The meaning of an expression type would then be a function of the representations the

language faculties would associate with any utterances of that type. On this view of the meaning of expression types, it does indeed turn out to be contingent: each type might have meant something other than it actually means, etc. For the precise working of the language faculty in an adult is partly the result of environmental influences. Within the constraints of universal grammar, children learn the language spoken in their surroundings. Thus, the representations computed by a given language faculty will depend in part on the language-learning environment. Had the environment been different, the representations associated by the language processor would have been different, thus its meaning would have been different. This model works best for the conventions of a language spoken by people who have learnt the language in the natural way. But it would also explain explicit linguistic conventions (e.g., when a new technical term is explicitly introduced in a scientific paper, or when an adult learns a natural language). Presumably, these are cases where instrumental reasoning provides input for, and interacts with, the separate language-processing faculty.

Convention versus Inference

The controversy between Griceans and Chomskyans concerns the role of instrumental reasoning in the determination of what expressions conventionally mean. There is another controversy, again involving Grice at centre stage, concerning the extent to which the meaning of utterances is the product of the conventional meaning of the expression types used as opposed to other, linguistically unanticipated, inferences. Grice distinguished what is literally said by an utterance from what is 'implicated' (see Semantics–Pragmatics Boundary). What is literally said is more or less determined by the conventional meaning of the expressions used. However, language users often aim to convey messages that go beyond what is literally said, such as the polite referee in Grice's famous example: when the referee says ''the candidate has an excellent command of English'' he is relying on the audience's ability to infer that he wished to convey that the candidate is no good at philosophy (see Grice, 1989: 33). The controversy concerns which aspects of communication should be viewed as arising from pragmatic inferences, as in the case of Gricean implicatures, and which aspects should be viewed as pertaining to literal meaning. (Another related question is whether any implicature can be conventional.) Davidson is at one end of the spectrum of possible views here: he practically denies (in good Quinean


fashion) that there is any conventional meaning. It may be helpful in interpreting an utterance to start with a conjecture that the expression types uttered have certain stable meanings, but ultimately such a conjecture is merely a 'crutch' (Davidson, 1984: 279). (For more on these questions see Recanati, 2004; also Semantics–Pragmatics Boundary and Nonstandard Language Use.)

See also: Cooperative Principle; Expression Meaning vs Utterance/Speaker Meaning; Face; Formal Semantics; Game-theoretical Semantics; Gender; Honorifics; Jargon; Memes; Natural versus Nonnatural Meaning; Neo-Gricean Pragmatics; Nonstandard Language Use; Philosophical Theories of Meaning; Politeness Strategies as Linguistic Variables; Pragmatic Determinants of What Is Said; Register; Semantics–Pragmatics Boundary.

Bibliography

Ayer A J (1946). Language, truth and logic (2nd edn.). London: Victor Gollancz.
Carnap R (1947). Meaning and necessity. Chicago: University of Chicago Press.
Davidson D (1984). 'Communication and convention.' In Davidson D (ed.) Inquiries into truth and interpretation. Oxford: Oxford University Press. 265–280.
Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Laurence S (1996). 'A Chomskian alternative to convention-based semantics.' Mind 105, 269–301.
Lewis D K (1969). Convention. Cambridge, MA: Harvard University Press.
Lewis D K ([1975] 1983). 'Languages and language.' In Lewis D (ed.) Philosophical papers (2 vols). Oxford: Oxford University Press. Vol. 1: 163–188.
Quine W V (1951). 'Two dogmas of empiricism.' Philosophical Review 60, 20–43.
Quine W V (1960). 'Carnap and logical truth.' Synthese 12, 350–374.
Recanati F (2004). Literal meaning. Cambridge: Cambridge University Press.
Strawson P F (1969). 'Meaning and truth.' Inaugural Lecture, reprinted in Logico-linguistic papers (1971). London: Methuen. 170–189.

Cooperative Principle

K Lindblom, Stony Brook University, Stony Brook, NY, USA
© 2006 Elsevier Ltd. All rights reserved.

The Principle Itself

In his William James Lectures at Harvard University in 1967, H. Paul Grice posited a general set of rules that contributors to ordinary conversation were generally expected to follow. He named it the Cooperative Principle (CP), and formulated it as follows:

Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged (Grice, 1989: 26).

At first glance, the Cooperative Principle may appear an idealistic representation of actual human communication. After all, as Grice himself has learned from his detractors, many believe ‘‘. . . even in the talk-exchanges of civilized people browbeating disputation and conversational sharp practices are far too common to be offenses against the fundamental dictates of conversational practice.’’ Further, even if one discounts the tone of an exchange, ‘‘much of our talk exchange is too haphazard to be directed toward an end cooperative or otherwise’’ (Grice, 1989: 369).

However, Grice never intended his use of the word ‘cooperation’ to indicate an ideal view of communication. Rather, Grice was trying to describe how it happens that – despite the haphazard or even agonistic nature of much ordinary human communication – most discourse participants are quite capable of making themselves understood and capable of understanding most others in the course of their daily business.

What Counts as Cooperation?

Grice invites us to consider the following, quite unextraordinary exchange:

A: I am out of petrol.
B: There is a garage round the corner (Grice, 1989: 32).

Assuming A immediately proceeds to the garage, secures the petrol, and refills his car, we may describe B’s contribution as having been successful. By what rational process of thought was A so quickly able to come to the conclusion that the garage to which B refers would fulfill his need for petrol? Why did B’s utterance work? Grice’s answer: because A and B adhere to the Cooperative Principle of Discourse. It is not hard to imagine that two friends sharing a ride would want to help each other through a minor crisis; thus, ‘cooperation’ in this scenario seems quite apt.


But imagine the exchange went this way instead:

A: I am out of petrol.
B: (sarcastically) How nice that you pay such close attention to important details.

In this second scenario, not only does B refuse to assist A in solving the problem, he uses the occasion to add to A’s conundrum an assault upon his character. Assuming A feels the sting, again B’s contribution has been successful. So how and why in this case has B’s contribution worked? How can such a sour response as B’s callous retort be considered ‘cooperative’? Again, Grice’s Cooperative Principle proves a useful answer. The explanation requires closer inspection of the strictness with which Grice uses the term.

The Cooperative Principle and the Maxims of Cooperative Discourse

Grice explicates his Cooperative Principle of Discourse in 'Logic and Conversation,' the paper originally presented at Harvard University in 1967, later printed in Cole and Morgan (1975), and reprinted in a slightly revised version in Grice's Studies in the Way of Words (1989). We cite from his final version as we assume this is the one he considered most complete. In the essay, Grice is careful to limit use of the CP for describing only talk exchanges that exhibit the following three specific characteristics:

1. The participants have some common immediate aim.
2. The contributions of the participants [are] dovetailed, mutually dependent.
3. There is some sort of understanding (often tacit) that, other things being equal, the transactions should continue in appropriate style unless both parties are agreeable that it should terminate (Grice, 1989: 29).

Though he is careful to limit the CP's application to talk exchanges that exhibit these particular cooperative characteristics, this list should not be read as an admission of great limitation. For Grice finds that most talk exchanges do follow the CP because most talk exchanges do, in fact, exhibit the cooperative characteristics he outlines:

Our talk exchanges . . . are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction (Grice, 1989: 26).

Grice identified the Cooperative Principle as a 'super principle' or a 'supreme principle' (1989: 368–369) that he generalized from four conversational 'maxims' he claimed discourse participants ordinarily follow. With a nod to Kant, Grice identifies the maxims as:

1. Quantity (give as much information as is required, and no more than is required)
2. Quality (do not say what is false or that for which you lack adequate evidence)
3. Relation (be relevant)
4. Manner (be clear, be orderly, and avoid ambiguity) (1989: 28).

Clear fulfillment of these maxims may be demonstrated in the following exchange:

A: Do you know where I can buy some petrol?
B: You can buy petrol at the garage right around the corner.

Let us assume that B is sincere and knowledgeable, and A finds the garage right away based upon B's advice. It is then the case that B's response to A's question follows the maxims completely, giving exactly the right amount of information (quantity), information for which B has the required evidence (quality), information that is directly connected to A's question (relation), and information given in a fashion effectively and efficiently understood (manner). But Grice knew that people do not always follow these maxims as they communicate. (What dull business conversation analysis would be if they did!) Rather, interlocutors can fail to fulfill the maxims in a variety of ways, some mundane, some inadvertent, and some that lead to what most consider the most powerful aspect of Grice's CP: conversational 'implicature.'

Failures to Fulfill Maxims and Implicature

Grice describes four ways in which maxims may go unfulfilled in ordinary conversation. The first three are fairly straightforward. One might violate or infringe a maxim. Such an infringement is often done with the intention of misleading; for example, one might say, 'Patricia was with a man last night' as a way of making Patricia's routine dinner out with her husband seem clandestine. One might opt out, making it clear that one refuses to cooperate in a conversation for some reason; for example, one may be legally bound not to provide information one has. Or, one might encounter a clash of maxims, facing the choice of violating one maxim or another. For example, one may not be able to give all of the information required (quantity) because one does not have adequate evidence for the information (quality).


Most interesting is the final possibility for the nonfulfillment of a maxim: flouting or exploiting a maxim for the purpose of implicating information (implicature). This case is the one in which even an apparently uncooperative response illustrates discursive or linguistic cooperation. Recall the examples with which this article was introduced.

A: I am out of petrol.
B: There is a garage round the corner.

In this instance, we may claim that B – at first blush – appears to break the maxim of relation. For what does a garage have to do with petrol? But since drivers are aware that garages sell petrol, A realizes – in fact, instantaneously – that B has not broken the maxim of relation at all. B's point is directly relevant. B is being cooperative in both the colloquial sense and the specialized sense Grice applies to the term. Grice's Cooperative Principle makes sense of the speed with which A is able to process the usefulness of B's contribution: A assumes B is following the maxims and would thus not mention the garage unless it had petrol. In the next scenario, however, the exchange, and thus the rational process by which A makes sense of B's contribution, is markedly different:

A: I am out of petrol.
B: (sarcastically) How nice that you pay such close attention to important details.

In this instance, B flouts the maxim of quality by stating as true something he has specific and immediate evidence is untrue. One likely implication of B's remark is that A is an idiot for not paying attention to so important a detail as having enough petrol in the car. If A feels the sting of B's remark, A and B have exhibited discursive cooperation that resulted in an implicature directed from B to A. While one example can hardly illustrate so many cases, Grice works out a number of possible forms of implicature: irony, metaphor, meiosis (understatement), hyperbole, social censure, deliberate ambiguity, and deliberate obscurity (for example, if one is trying to keep a secret from the children). In all of these cases, maxims are broken, and the breaks result in specific information implied to and understood by the receiver of the utterance. The power of the conversational maxims to describe the rational processes by which speakers and hearers make sense of each other's utterances has energized many scholars of language and conversation across many fields. But, as the introduction to this article makes clear, the Cooperative Principle has not been free from serious critique.

Major Critiques of the Cooperative Principle

Problems with the Term 'Cooperation'

Despite the care with which he used the term 'cooperation,' Grice is regularly accused of promulgating a theory that assumes too friendly a spirit of communicative interaction among people. This charge is most commonly made in work outside Grice's own field of linguistic philosophy. In effect, these detractors claim Grice is just too nice. For example, Tannen (1986) claims that Grice's maxims of cooperative discourse cannot apply to "real conversations" because in conversation "we wouldn't want to simply blurt out what we mean, because we're judging the needs for involvement and independence" (1986: 34–45). Tannen assumes that Grice's maxims are prescriptions that conversations must follow strictly in order to be considered cooperative. Cameron (1985) makes a similar case, taking issue with Grice's application of the term 'cooperation' to all discourse. Cameron is quite correct in her claim that – at least in the colloquial sense of the term – assumptions regarding the appropriateness of 'cooperative' behavior have dogged women for centuries. But Cameron demonstrates a reductive view of Grice's use of the term when she describes Grice's CP as an 'inflexible' and 'unproductive' apparatus that provides yet another way for both 'chauvinists and feminists' to believe that "whereas men compete in competition, women use co-operative strategies" (1985: 40–41). Grice's version of cooperation is more flexible and less dogmatic than these critics assume. Others have gone so far as to claim that Grice advocated cooperation among conversational participants, believing that he prescribed cooperation as the most effective way of engaging in meaningful communication with others. Cooper (1982), interested in applying Grice to theories of written composition, claims that Grice advocates cooperation because "what enables conversation to proceed is an underlying assumption that we as conversants have purposes for conversing and that we recognize that these purposes are more likely to be fulfilled if we cooperate" (1982: 112).

The notion that discourse participants cooperate with each other and that they do so out of a mutual benevolence is a misreading of Grice's position on cooperative discourse, but it is one that persists. Grice himself acknowledged the difficulty some have had interpreting his use of 'cooperation.' As a final chapter to his 1989 book, Grice wrote a 'Retrospective Epilogue' in which he considered criticism his theories had engendered. As related above, here Grice acknowledged that his theory suffers from a perceived naïveté. To combat the criticism, Grice adds useful information about what counts as cooperative in discourse. First, he reminds readers of the sort of utterances he seeks to elucidate: voluntary talk exchanges that require some form of "collaboration in achieving exchange of information or the institution of decisions." And he points out that, within exchanges intended to produce information or determine decisions, cooperation "may coexist with a high degree of reserve, hostility, and chicanery and with a high degree of diversity in the motivations underlying quite meager common objectives" (Grice, 1989: 369). Even as adversarial an exchange as a hostile courtroom cross-examination would at least simulate adherence to the CP.

To further explain the sort of cooperation to which Grice refers, it might help to borrow a term from classical rhetoric. The ancient Greeks used the term 'Nomos' to indicate cultural practices that defined a group of people. Two closely related connotations of the term are useful for the present discussion: (1) "the mores" of a given collective (Ostwald, 1969: 33); and (2) customs "which are generally observed by those among whom they prevail" (Ostwald, 1969: 36). Nomos is not necessarily an explicit, prescribed set of conventions, but rather a set of conventions that are brought into existence by the very fact that people ordinarily follow them, perhaps without even realizing they are following a set of conventions. When American youths visit Europe, the locals can spot them in an instant by their footwear; but, in the United States, sneakers are simply what young people ordinarily wear. Nomos applied to conversation, then, is a set of conventions, or rules (or maxims), for talk according to which a group of people ordinarily makes meaning. In the maxims, Grice believes he has found universal conventions that all people may regularly follow in their meaning-making talk exchanges. In order for such a set of conventions to function, a certain degree of at least tacit assent to those conventions is necessary. Thus, the term 'cooperation' is quite apt. The crucial subtlety of Grice's theory is this: interlocutors do not necessarily cooperate with each other; they cooperate with a set of conventions that allows each interlocutor to produce meanings approximate enough for communication to work. This form of cooperation is not necessarily benevolent at all; even the bitterest of verbal fights require linguistic cooperation to work.

The aim of Gricean conversation analysis – and thus of the CP and the maxims – is not to advocate benevolent cooperation, but to prove the rationality of conversation: "...observance [of the maxims] promotes and their violation [except in the case of implicature] dispromotes conversational rationality" (Grice, 1989: 370). Although many have claimed that Grice's writing on the CP is ambiguous and on occasion inconsistent in its terminology, this should not be said of Grice's measured use of the term 'cooperation.' Precise readings of Grice's writing on cooperation demonstrate that he rarely, if ever, describes interlocutors as being cooperative. Rather, he claims that interlocutors' contributions to conversation are cooperative. The contributions are uttered in cooperation with a set of conventions for producing meaning. In this sense, we might think of a pair of interlocutors as each operating according to the dictates of a set of conventions (the maxims) and thus as 'co/operators': two operators of discourse operating at once. Consider also Grice's use of the term 'dovetailed' in describing the state of cooperative contributions to conversation (1989: 29). Dovetailed elements are placed within very close proximity to each other, maintaining the integrity of each separate element but creating a stronger whole. Utterances remain utterances, but conversations take flight, implicating new meaning for hearers and speakers.

Problems with the Maxims: The Haphazardness of Communication and the Specificity of Maxims

The second major critique of the Cooperative Principle has been a topic of spirited discussion among linguistic philosophers since Grice first proposed it. Grice himself identifies the problem as resulting from the thought that communication is simply too 'haphazard' to be described accurately as having a cooperative end. Some forms of communication are not appropriately described by the CP. For example, as Grice puts it, "Chitchat goes nowhere, unless making the time pass is a journey" (1989: 369). Grice suggests the problem is twofold. First, he agrees with critics that the maxims appear less 'coordinate' than he would prefer. The maxim of quality appears in some ways more definitive of information than the other maxims. And the maxims are not independent enough: relevance, as will be shown, has often been regarded as containing the essence of the other maxims. Second, Grice's selection of cooperation as the 'supreme Conversational Principle' underpinning the rationalizing operations of implicature remains, to say the least, not generally accepted (1989: 371).


In his 'Conversational maxims and rationality,' Kasher (1976) claims that cooperation is not a principle that accounts for all information conveyed by implicature, because cooperation may be "contrary to [a speaker's] interest" (1976: 241). Kasher offers the following example: Man A is asked by Man B, 'Who is going to marry your sister?' Man A, who knows the proper name of the intended, replies, 'A peacock dealer.' Man A's reply, Kasher points out, does not satisfy the demands of full cooperation, and the CP, claims Kasher, cannot account for a situation in which there is no cooperation. As an alternative explanation for the operation of conversational implicature, Kasher poses the 'Rationalization Principle,' which stems from the idea that Relevance (one of Grice's maxims) is the only element necessary to explain a talk exchange. In a later work, Kasher renames his principle 'the principle of rational coordination,' which states: "Given a desired basic purpose, the ideal speaker chooses that linguistic action which, he believes, most effectively and at least cost attains that purpose" (Kasher, 1977). Kasher's well-known critique thus began what has become 'Relevance Theory,' which is at its base a refinement of Grice's earlier work. (See below for references to other work on Relevance.) Though in his final work he admitted some misgivings and offered minor refinements of his maxims of cooperative discourse, Grice, up until his death in 1988, defended his selection of the Cooperative Principle as the 'supreme principle.'

Scholarship Influenced by the Cooperative Principle

Though critiques of the CP remain unresolved – and perhaps they always will be – there is nevertheless no denying that Grice's CP has had a dramatic influence on discourse studies across disciplines. The CP can probably not be considered definitive, but it has certainly proven quite generative. Because Grice's Cooperative Principle has such cross-disciplinary appeal, any survey of work influenced by it is almost certainly incomplete. The sketch here is intended to acquaint the reader with some applications of major importance and to give readers a richer understanding of the depth and breadth of Grice's influence. (For more citations and commentary on work influenced by Grice's CP, see Lindblom, 2001.)

Grammar

Grammarians frequently view literal or sentence meaning as more important than any individual's intended meaning in making an utterance. Thus Chomsky, for example, has critiqued Grice's CP for being unprincipled (1975: 112) and has complained that Grice's approach to language study is behaviorist due to his focus on utterer's intention (Suppes, 1986: 121). Other grammarians influenced by Chomsky have used similar logic to critique the CP as too concerned with context. Suppes, whose essay is an excellent synthesis of grammar studies using Grice, argues that these grammarians assume an even more closely rule-bound language governance, making their claims essentialist. Further, he argues that Grice's CP is useful precisely because it is so context dependent. Chomsky's positivism is not an issue in a Gricean analysis because Grice's work "bring[s] out the importance of context" (Suppes, 1986: 124).

Neo-Gricean Pragmatics

Grice's influence is most apparent in a branch of linguistic study that has become known among some as Neo-Gricean pragmatics. Scholars in this field have greatly revised Grice's maxims of cooperative discourse in a variety of interesting ways, but they have maintained the basic direction of Grice's work, especially in regard to the concept of conversational implicature. Huang (1991) usefully surveys a great deal of scholarship from well-known scholars in this area, including Atlas, Levinson, Sperber and Wilson, Leech, and Horn (see Neo-Gricean Pragmatics). As mentioned previously, Kasher developed a specific focus on one of Grice's maxims, thus establishing the field of Relevance Theory. Sperber and Wilson have also generated an important Relevance Theory, theirs influenced by Fodor's theory of cognitive modularity. According to Huang, Sperber and Wilson believe "one is always maximizing the informational value of contextual stimuli to interpret the utterance in a way which is most consistent with the Principle of Relevance" (Huang, 1991: 303). Along with texts by Kasher and by Sperber and Wilson, important developments in Relevance Theory may also be found in Grandy and Warner (1986) and Tsohatzidis (1994). More recently, a special issue of the Journal of Pragmatics has focused exclusively on Gricean themes in pragmatic analysis. Although he resists the notion of a school of 'Neo-Gricean' approaches, the journal editor has nevertheless gathered a collection of papers illustrating that Grice's CP and maxims are ideas that "shook the world of language study in the past century, and continue to move and inspire today's research" (Mey, 2002: 911). The special issue includes essays focused on social roles


in Japan, maxim confluence among multilingual code-switchers, academic writing, and other current approaches to Gricean pragmatics. Not only is the CP applicable across cultures; it is also possible to use Gricean analysis to examine a 'theme' in discourse. For example, much interesting work is underway in the pragmatics of humor (for example, Attardo, 2003).

Politeness Theory

Politeness theorists use Grice's CP specifically to examine the ways in which maxims are exploited to indicate some special status of the hearer. For example, a lawyer would answer a judge, 'Yes, your honor.' The 'your honor' breaks the maxim of quantity – surely the judge is aware of her title – but including the words 'your honor' implies the speaker's understanding that the judge holds a greater position of authority. For a valuable survey of Politeness Theories, see Fraser (1990). In this piece Fraser examines politeness theories posited by Lakoff and by Leech, and he explains that both of these theories rely heavily on Grice's CP, though Lakoff reduces the maxims by two and Leech increases their number by six. The most influential Politeness Theory was developed by Brown and Levinson (1987). Brown and Levinson's work is primarily influenced by Goffman, but they also claim "Grice's theory of conversational implicature and the framework of the maxims that give rise to such implicatures is essentially correct" (1987: 3). Goffman's influence may be seen in Brown and Levinson's concentration on the concept of 'face wants.' Their politeness theory examines the ways in which speakers and hearers use conversational implicature to fulfill the 'face wants' of higher-status participants in conversation. Like the CP itself, Politeness Theory is certainly not free from critique, but it has resulted in fascinating analysis and has generated much spirited debate (see Politeness).

Question Processing

Several works in the area of question processing have developed from Grice's Cooperative Principle. Questions and questioning patterns can result in implicatures regarding politeness, status, and authority, and they operate according to conventions that many have built upon Grice's maxims. Singer provides a useful assessment of the study of question processing in all of its stages: question encoding, question categories, selection of answering strategies, memory search, comparison, and response (1990: 261). He identifies 'response' as the category for which Grice's CP is the most powerful.

Most interesting in 'response' is Lehnert's theory of secondary questions. According to Singer, "If asked 'Are there oil wells in Manitoba?' a simple 'no' would appear rather blunt. Instead, in keeping with Grice's 'maxim of quantity' and Lehnert's theory of secondary questions, it is more appropriate to hypothesize the next logical question and answer, 'There are a few, but there is not much oil east of Saskatchewan'" (Singer, 1990: 273).

Gender Studies

Though above we single out some scholarship in gender studies for applying superficial accounts of the CP, there is excellent scholarship in the field that has used Grice's CP and maxims to examine behavioral and status differences between women and men. Brown, using the Politeness Theory she developed with Levinson, has used Grice to examine the sociopolitical situations of women in non-Western cultures (1990). Rundquist and Michell have looked at men's and women's use of conversational strategies in Western culture. Rundquist uses Grice to confront the "popular belief that women's speech is more indirect than men's" (1992: 431). She finds that men flout maxims to implicate information more frequently than women do. Some of the purposes for which she finds men tend to implicate information include to "give direction to their children," to "put themselves down as well as to tease others," "to be humorous," "to show themselves off to their best advantage in conversation," and, perhaps most significantly for a study of gender, "to avoid direct confrontation" (Rundquist, 1992: 447). Michell (1984) questions whether women often flout maxims to implicate information. She determines that women are far more likely simply to lie to protect themselves from verbal and physical abuse in a misogynist culture. For example, imagine a woman missed a meeting because she had painful menstrual cramps and because she had an important report to finish. This woman would be far more likely to claim she missed the meeting because of the report, leaving out any mention of the cramps, even if the report was not close to being the primary reason for her absence; her omission is an opting out, not an implicature (Michell, 1984: 376).

Teacher Research and Pedagogy

Studies in teacher research have approached Grice's Cooperative Principle for two important purposes: (1) to examine the discourse of the classroom situation; and (2) to establish effective pedagogical strategies. Three valuable works serving the first purpose may be found in Edwards and Mercer (1987), Kleifgen (1990), and McCarthy (1987). The first two works focus closely on the ways in which the educational scenario highlights the need for listeners to fill in propositions implicated by speakers. Edwards and Mercer examine the ways in which children become more and more proficient in these skills through their educational training. Kleifgen suggests that teachers should look for the points in classroom discourse when students begin to predict the outcomes of teachers' questions so quickly that it is clear the students are ready to move on to a higher level of difficulty. McCarthy's essay – probably the finest treatment of the CP from a pedagogy scholar – traces the development of a college student as he writes for his composition, cell biology, and poetry classes. Examining both the student's written assignments and the teachers' detailed responses to them, McCarthy uses Grice's CP to determine what is required for this student to cooperate as a writer in each class and whether or not he was successful. In McCarthy's judgment, the student succeeded because he was able to determine "what counted as 'cooperation'" in each of his classes (1987: 249). Thus, McCarthy uses the CP in a flexible, context-specific manner consistent with Grice's own descriptions of it. Other scholars with an interest in writing instruction have used Grice for productive ends. Though they are perhaps too quick to read Grice's CP as describing a benevolent, cooperative relationship between writer and reader, Cooper (1982, 1984) and Lovejoy (1987) have used the CP to positive effect in college writing classes. Lovejoy's very practical revising template using the maxims is especially useful for college students learning to write more sophisticated texts. Professors of literature have also found Grice's CP useful in articulating abstract themes from literature. Pratt's (1977) work is probably the best known, but for a fascinating reading of Beckett's Waiting for Godot using Gricean analysis, see Gautam and Sharma (1986).

Conclusion

A cross-disciplinary examination of how Grice's Cooperative Principle has been put into practice clearly indicates that the CP has had tremendous appeal and influence. It is precisely the CP's flexibility and context-dependent nature that makes it of such broad value. However, that same flexibility and context-dependence has also generated a fair number of critiques citing a lack of specificity and a too-relativistic application to discourse. Thus, it seems, the CP's strength is also its weakness. Certainly a great diversity of scholars have found the Cooperative Principle of Discourse and its attendant Maxims of Conversational Cooperation useful as analytical tools toward a variety of ends. It is doubtful, however, that the notion of 'cooperation' among discourse participants will ever be universally accepted.

See also: Context and Common Ground; Conventions in Language; Default Semantics; Expression Meaning vs Utterance/Speaker Meaning; Face; Gender; Honorifics; Human Reasoning and Language Interpretation; Implicature; Intention and Semantics; Neo-Gricean Pragmatics; Nonmonotonic Inference; Nonstandard Language Use; Politeness Strategies as Linguistic Variables; Politeness; Pragmatic Determinants of What Is Said; Rhetoric, Classical.

Bibliography

Attardo S (2003). 'Introduction: the pragmatics of humor.' Journal of Pragmatics 35, 1287–1294.
Brown P (1990). 'Gender, politeness, and confrontation in Tenejapa.' Discourse Processes 13, 123–141.
Brown P & Levinson S C (1987). Politeness: some universals in language use. Cambridge: Cambridge University Press.
Cameron D (1985). Feminism and linguistic theory. New York: St. Martin's.
Chomsky N (1975). Reflections on language. New York: Pantheon.
Cooper M M (1982). 'Context as vehicle: implicatures in writing.' In Nystrand M (ed.) What writers know: the language, process, and structure of written discourse. New York: Academic Press. 105–128.
Cooper M M (1984). 'The pragmatics of form: how do writers discover what to do when?' In Beach R & Bridwell L S (eds.) New directions in composition research. New York: The Guilford Press. 109–126.
Edwards D & Mercer N (1987). Common knowledge: the development of understanding in the classroom. London: Routledge.
Fraser B (1990). 'Perspectives on politeness.' Journal of Pragmatics 14, 219–236.
Gautam K & Sharma M (1986). 'Dialogue in Waiting for Godot and Grice's concept of implicature.' Modern Drama 29, 580–586.
Grandy R & Warner R (eds.) (1986). Philosophical grounds of rationality: intentions, categories, ends. Oxford: Clarendon.
Grice H P (1975). 'Logic and conversation.' In Cole P & Morgan J L (eds.) Syntax and semantics 3: speech acts. New York: Academic Press. 41–58.
Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.
Huang Y (1991). 'A neo-Gricean pragmatic theory of anaphora.' Journal of Linguistics 27, 301–335.
Kasher A (1976). 'Conversational maxims and rationality.' In Kasher A (ed.) Language in focus: foundations, methods and systems. Dordrecht: Reidel. 197–216.
Kasher A (1977). 'Foundations of philosophical pragmatics.' In Butts R E & Hintikka J (eds.) Basic problems in methodology and linguistics: part three of the proceedings of the Fifth International Congress of Logic, Methodology and Philosophy of Science, London, Ontario, Canada, 1975. Dordrecht: Reidel. 225–242.
Kleifgen J A (1990). 'Prekindergarten children's second discourse learning.' Discourse Processes 13, 225–242.
Lehnert W (1978). The process of question answering. Hillsdale, NJ: Erlbaum.
Lindblom K (2001). 'Cooperating with Grice: a cross-disciplinary metaperspective on uses of Grice's Cooperative Principle.' Journal of Pragmatics 33, 1601–1623.
Lovejoy K B (1987). 'The Gricean model: a revising rubric.' Journal of Teaching Writing 6, 9–18.
McCarthy L P (1987). 'A stranger in strange lands: a college student writing across the curriculum.' Research in the Teaching of English 21, 233–265.
Mey J (2002). 'To Grice or not to Grice.' Journal of Pragmatics 34, 911.
Michell G (1984). 'Women and lying: a pragmatic and semantic analysis of "telling it slant."' Women's Studies International Forum 7, 375–383.
Ostwald M (1969). Nomos and the beginnings of the Athenian democracy. Oxford: Oxford University Press.
Pratt M L (1977). Toward a speech act theory of literary discourse. Bloomington: Indiana University Press.
Rundquist S (1992). 'Indirectness: a gender study of flouting Grice's maxims.' Journal of Pragmatics 18, 431–449.
Singer M (1990). 'Answering questions about discourse.' Discourse Processes 13, 261–277.
Sperber D & Wilson D (1986). Relevance: communication and cognition (2nd edn., 1995). Oxford: Basil Blackwell.
Suppes P (1986). 'The primacy of utterer's meaning.' In Grandy R & Warner R (eds.). 109–130.
Tannen D (1986). That's not what I meant: how conversational style makes or breaks your relations with others. New York: Morrow.
Tsohatzidis S (ed.) (1994). Foundations of speech act theory: philosophical and linguistic perspectives. London: Routledge.

Coreference: Identity and Similarity

Y Huang, University of Reading, Reading, UK

© 2006 Elsevier Ltd. All rights reserved.

Defining Coreference

Coreference can in general be defined as the phenomenon whereby two or more anaphoric or referential expressions denote, or refer to, the same entity in the external world. This can be illustrated by (1).

(1) John said that he had won two Olympic gold medals.

In (1), if the anaphoric pronoun he refers to what its antecedent John refers to, then there is a relation of coreference obtaining between them; he is thus said to be coreferential with John. In contemporary linguistics, including generative grammar, coreference between two or more anaphoric or referential expressions in a sentence or discourse is usually marked by using identical subscript letters, as in (2), or numbers, as in (3).

(2) John_i said that he_i had won two Olympic gold medals.
(3) John_1 said that he_1 had won two Olympic gold medals.

Another common way of saying that he and John are coreferential is to say that they are coindexed. On the other hand, if in (1) he does not refer to what John refers to, then the two anaphoric/referential expressions are disjoint in reference. Disjointness in reference is typically marked by using different subscript letters, as in (4), or numbers, as in (5).

(4) John_i said that he_j had won two Olympic gold medals.
(5) John_1 said that he_2 had won two Olympic gold medals.
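The indexing convention amounts to a very simple data structure: each referential expression carries an index, and coreference is nothing more than identity of indices. The following minimal sketch (a toy illustration of the convention only, not an implementation from the anaphora literature; the class and function names are invented) makes this explicit in Python:

from dataclasses import dataclass

@dataclass
class NP:
    # A referential expression annotated with a referential index.
    form: str
    index: int

def coreferential(a: NP, b: NP) -> bool:
    # Two expressions corefer just in case they bear the same index.
    return a.index == b.index

# (3): John_1 ... he_1 -- coindexed, hence coreferential
print(coreferential(NP('John', 1), NP('he', 1)))   # True
# (5): John_1 ... he_2 -- disjoint in reference
print(coreferential(NP('John', 1), NP('he', 2)))   # False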

Identity

From a truth-conditional, semantic point of view, the anaphoric relation exhibited in (1) is called referential anaphora (e.g., Huang, 2000: 5). A referential anaphoric expression refers to some entity in the external world either directly, as in (6), or via its coreference with its antecedent in the same sentence or discourse, as in (1). In the latter case, as already mentioned, the referentially anaphoric expression refers to what its antecedent refers to; the two are thus of referential identity.

(6) (Referent in the physical context, and with selecting gesture) He's the robber!

Similarity

We move next to types of anaphoric relations that may be said to indicate some kind of anaphoric or referential dependency other than referential identity.

Bound-Variable Anaphora

Sentence (7) is an example of bound-variable anaphora.

(7) Every child_1 wishes that he_1 could visit the land of Lilliput.


Generally speaking, a bound-variable anaphoric expression does not refer to any fixed entity in the external world, as can be shown by sentences such as (8) below.

(8) Nobody_1 thought that he_1 would wish to live with the giants of Brobdingnag.

But it is interpreted by virtue of its dependency on some quantificational expression in the same sentence or discourse, thus seeming to be the natural language counterpart of a bound variable in first-order logic (a formalization sketch follows the examples in (9) below). As indicated by (7) and (8), the bound-variable anaphoric expression and its quantificational antecedent can be coindexed, but they are not considered to be coreferential. One interesting characteristic of bound-variable anaphora is that different languages afford their speakers different types of anaphoric or referential expressions to encode such a dependency (cf. Huang, 2000: 6). For example, to express a bound-variable anaphoric relation between a matrix subject and an embedded subject, English normally allows neither gaps (or empty categories) nor reflexives, while Serbo-Croatian allows gaps (or empty categories), Marathi allows reflexives, and Chinese allows both.

(9a) Gaps or empty categories (Serbo-Croatian, cited in Huang, 2000: 6)
svaki student misli da će Ø dobiti desetku.
every-M-SG student thinks that will Ø get A
'Every student thinks that he will get an A.'

(9b) Pronouns
Every actress said that her career has been a roller coaster ride.

(9c) Reflexives (Marathi, cited in Huang, 2000: 6)
sarvããnaa_1 vaatta ki aapan_1 libral aahot.
everybody believes that self liberal is
'Everybody believes that he is liberal.'

(9d) Gaps and reflexives (Chinese)
mei ge ren dou shuo Ø/ziji xihuan zhongguocai.
every CL person all say (self) like Chinese-food
'Everybody says that he likes Chinese cuisine.'
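To make the parallel with first-order logic explicit, (7) and (8) can be given representations along the following lines (a standard textbook-style sketch; the predicate names are chosen purely for illustration):

(7') ∀x (child(x) → wish(x, visit(x, Lilliput)))
(8') ¬∃x (person(x) & think(x, wish(x, live-with-giants(x))))

In both, the pronoun corresponds to a bound occurrence of the variable x rather than to any fixed individual, which is why (8) is perfectly felicitous even though nobody picks out no entity at all.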

Note next that, crosslinguistically, bound-variable anaphora can occasionally also be encoded by repeating the same lexical NP.

(10) Of every ritual bronze that was found in the tomb, it was subsequently discovered that the bronze belonged to the Chinese Northern Song élite.

Finally, as noted in Kempson (1988) and Huang (1994: 292), examples of the following kind can also have a bound-variable interpretation.

(11) Every Ph.D. student thinks that the supervisor is intelligent.

On such a reading, the supervisor is interpreted as each Ph.D. student's supervisor. Of particular interest here is that this bound-variable interpretation is obtained only by virtue of the addition of the pragmatic inference that every Ph.D. student characteristically has a supervisor.

E-Type Anaphora

Somewhat related to bound-variable anaphora is E-type anaphora, also known as donkey anaphora, first discussed by Geach (1962: 128). It is called E-type anaphora in memory of the late Oxford philosopher Gareth Evans, after Evans (1977). A classical example of E-type anaphora is given in (12).

(12) Every farmer who owns a donkey beats it.

For technical reasons, an E-type anaphoric relation is neither pure referential anaphora nor pure bound-variable anaphora, but appears to constitute a unified semantic type of its own (Evans, 1977). The main reason why it is neither pure referential anaphora nor pure bound-variable anaphora is this: the antecedent of the anaphoric pronoun, a donkey, is variable-bound by the quantificational expression every farmer, but unlike in the case of pure bound-variable anaphora such as (7), the antecedent does not syntactically bind the anaphoric pronoun, because the antecedent does not c-command the anaphoric pronoun. Put differently, in E-type anaphora, the anaphoric expression falls outside the scope of its binder, as the sketch below makes explicit (see also Heim, 1990; Kamp and Reyle, 1993; de Swart, 1998: 127–130).
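The scope problem in (12) can be made vivid with a first-order sketch (a standard illustration in the donkey-anaphora literature; the notation is for exposition only). A direct translation leaves the variable corresponding to it unbound, because the existential quantifier contributed by a donkey closes off before the pronoun is reached:

∀x ((farmer(x) & ∃y (donkey(y) & own(x, y))) → beat(x, y))    (the final y is unbound)

whereas the reading speakers actually assign to (12) corresponds, unexpectedly, to a wide-scope universal:

∀x ∀y ((farmer(x) & donkey(y) & own(x, y)) → beat(x, y))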

Anaphora of Laziness

The following is a classical example of anaphora, or a pronoun, of 'laziness' (e.g., Karttunen, 1976).

(13) The man who gave his paycheck to his wife was wiser than the man who gave it to his mistress.

Anaphora of laziness is so called because it exhibits neither a referential anaphoric relation nor a bound-variable anaphoric relation. Rather, it functions as a shorthand for a repetition of its antecedent, which supplies the descriptive content for the anaphoric expression. In other words, it is a device for a repeated occurrence of the linguistic form, rather than of the truth-conditional content, of its antecedent. This can be illustrated by a consideration of (13). In (13), the referent of it is the paycheck of the second man rather than the paycheck of the first man. There is thus no coreferential relation between it and its antecedent his paycheck.


Anaphora of laziness is considered a case of the semantically defined type of identity of sense anaphora, that is, anaphora in which the anaphoric expression and its antecedent are related in terms of sense. It is on a par with N-bar-anaphora, as in (14), and arguably with the sloppy reading of the VP-ellipsis in (15), given in (16b) and unpacked in the sketch below (see Huang, 2000: 131–156 for further discussion of VP-ellipsis).

(14) Mary's favorite tenor is Pavarotti, but Jane's Ø is Carreras.
(15) John loves his wife, and Peter does, too.
(16a) Strict reading: John loves John's wife, and Peter loves John's wife.
(16b) Sloppy reading: John loves John's wife, and Peter loves Peter's wife.
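The strict/sloppy contrast in (16) is commonly modeled by letting the elided VP in (15) pick up one of two property abstracts (a textbook-style sketch in lambda notation; the predicate names are illustrative):

Strict: the VP denotes λx. love(x, wife-of(John)); applied to Peter, this yields love(Peter, wife-of(John)).
Sloppy: the VP denotes λx. love(x, wife-of(x)); applied to Peter, this yields love(Peter, wife-of(Peter)).

On the sloppy reading, his behaves like a bound variable inside the copied property, which is why that reading patterns with identity of sense.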

Identity of sense anaphora contrasts with the semantically defined type of identity of reference anaphora, that is, anaphora in which the anaphoric expression and its antecedent have an identical referent, as (2) illustrates.

Bridging Cross-reference Anaphora

Bridging cross-reference anaphora is used to establish an association with some preceding expression or antecedent in the same sentence or discourse via the addition of a background assumption (e.g., Clark, 1977; Huang, 1994, 2000). What is tacitly bridged is typically information that is not structurally retrievable from the sentence or discourse that triggers the inferential process. A typical example of bridging cross-reference is given below.

(17) John walked into a church. The stained glass windows were magnificent.

In (17), the anaphoric expression is the stained glass windows, and its antecedent is a church. The pragmatically inferred background assumption is that the church John walked into has stained glass windows. As pointed out in Huang (2000: 249), bridging cross-reference anaphora has three characteristic properties:

1. The anaphoric expression, which is usually a definite NP, must occur in the appropriate context of its antecedent, which is usually an indefinite NP;
2. There is some semantic and/or pragmatic relation between the anaphoric expression and its 'antecedent';
3. The anaphoric expression and its antecedent do not stand in a strictly coreferential relation. Rather, they are linked to each other via the addition of some pragmatic inference (see Huang, 2000: 249–253 for further discussion).

Other interesting cases of anaphoric or referential dependency other than referential identity may include split antecedence and overlap in reference.

See also: Anaphora, Cataphora, Exophora, Logophoricity;

Discourse Anaphora; Donkey Sentences; Intensifying Reflexives; Meaning, Sense, and Reference; Reference: Philosophical Theories.

Bibliography

Clark H (1977). 'Bridging.' In Wason P & Johnson-Laird P (eds.) Thinking: readings in cognitive science. Cambridge: Cambridge University Press. 411–420.
De Swart H (1998). Introduction to natural language semantics. Stanford: CSLI.
Evans G (1977). 'Pronouns, quantifiers, and relative clauses. Parts I and II.' Canadian Journal of Philosophy 7, 467–536.
Geach P (1962). Reference and generality. Ithaca: Cornell University Press.
Heim I (1990). 'E-type pronouns and donkey anaphora.' Linguistics and Philosophy 13, 137–177.
Huang Y (1994). The syntax and pragmatics of anaphora. Cambridge Studies in Linguistics. Cambridge: Cambridge University Press.
Huang Y (2000). Anaphora: a cross-linguistic study. Oxford Studies in Typology and Linguistic Theory. Oxford: Oxford University Press.
Kamp H & Reyle U (1993). From discourse to logic: introduction to model theoretic semantics of natural languages, formal logic and discourse representation theory. Dordrecht: Kluwer.
Karttunen L (1976). 'Discourse referents.' In McCawley J (ed.) Syntax and semantics 7: notes from the linguistic underground. London: Academic Press. 363–385.
Kempson R (1988). 'Logical form: the grammar cognition interface.' Journal of Linguistics 24, 393–431.


Counterfactuals

S Barker, University of Nottingham, Nottingham, UK

© 2006 Elsevier Ltd. All rights reserved.

Counterfactuals are a class of conditionals, or if-then statements. They are interesting because they are instances of a semantically significant class of sentences, conditionals, and because of their intimate connections to modality, laws of nature, causation, dispositions, knowledge, perception, and other concepts, all of which are of central philosophical concern. Counterfactuals are those if-sentences that have a modal auxiliary as the main consequent verb and have, in their antecedent and consequent clauses, a backward shift of syntactic tense relative to notional tense that renders those clauses incapable of self-standing assertion (see Dudman, 1994). Some instances:

(1) If Osama had not existed, George would/might/could have created him.
(2) If Osama were to strike, George would defeat him.

Paradigmatically, counterfactuals are uttered in the knowledge that they have false antecedents, but this is not a characterizing mark. They may be issued when their antecedents are believed to be merely improbable. Furthermore, indicative conditionals may be asserted with known false antecedents; consider (3).

(3) If Bill Clinton was bald, no one knew about it.

There is some debate about where to draw the line between counterfactuals and, more broadly, subjunctives and indicative conditionals (see Jackson, 1987; Dudman, 1994; Bennett, 2003). This mainly concerns the status of future open conditionals like (4), which, as Dudman (1994) pointed out, has the same syntactic tense shift found in counterfactuals and has a consequent modal auxiliary.

(4) If Clinton goes bald, everyone will know about it.

Another issue is whether counterfactuals have truth conditions. There are strong arguments that indicatives, like (3), do not, but that they instead require a probabilistic assertability condition semantics (see Adams, 1975; Edgington, 1995). Unification of if indicates that counterfactuals should be treated similarly. But just how to provide a probabilistic semantics for counterfactuals is far from obvious (see Edgington, 1995; Barker, 1999). Let us assume from now on that counterfactuals do have truth conditions. The central problem of counterfactuals is devising a noncircular specification of their truth conditions. Philosophers have tended to concentrate on the would-conditionals, conditionals of the form If P had been the case, Q would have been (abbreviated (P > Q)), the assumption being that other counterfactuals, such as might-conditionals (If P had been the case, Q might have been, abbreviated (P ◇→ Q)), can be analyzed in terms of the former (for proposals, see Lewis, 1973, 1986c; Stalnaker, 1981). There are broadly two forms of analysis of counterfactual truth conditions: metalinguistic analyses and possible worlds analyses. Both are said to be inspired by the idea, in Ramsey (1990 [1929]), that we evaluate a conditional by adding the antecedent to our stock of beliefs and adjusting for consistency, to determine if the consequent is in the resultant state. The older approach is the metalinguistic one. Possible worlds are currently the entrenched view. Lewis (1973) argued that the two are equivalent, which is open to dispute (see Barker, 1999).

Metalinguistic Approaches

On the metalinguistic approach, counterfactuals are like condensed arguments – roughly, P and laws of nature L, plus facts cotenable with P (legitimate factual premises), entail, or probabilize, Q:

(P > Q) is true iff {P + cotenable factual premises + L} → (probable) Q

The challenge for metalinguistic theories is (i) to define in a noncircular way the conditions for A's being cotenable with P and (ii) to fashion a conception of law. Goodman (1965) famously found both of these to be insuperable challenges. Goodman argued that the extensionally correct specification for a premise A to be cotenable with P is as follows:

A is cotenable with P iff (P > still A)

The conditional (P > still A) is a semifactual: a counterfactual with a true consequent A, expressing that P is causally benign with respect to A, as in If the match had been struck, it would still have been dry. Goodman argued that the truth conditions of a semifactual are as follows:

(P > still A) iff (P > A) & A.

But this introduces circularity into the analysis, since we have defined conditions of cotenability in terms of counterfactuals.


The second challenge – finding some acceptable analysis of natural laws – is to provide an explanation of why some generalities support counterfactuals and others do not. Goodman despaired of finding that mysterious counterfactual-supporting feature. The current conception is that the problem of law is less acute, since irreducible modalities are now more palatable (for discussion of some of the issues, see Jackson and Pargetter, 1980; Sober, 1988). After Goodman, various metalinguistic theories attempted to deal with the problem of cotenability, e.g., Pollock (1981), but were not successful. Kvart (1986) represented a very sophisticated breakthrough. Central to his analysis was the idea that counterfactuals presuppose indeterminism for their literal truth, since they involve a conception of reality's branching off from a time t_0 prior to the antecedent time t_P and developing through a scenario of transition to P – an orderly, lawful development of reality to P. Kvart showed that Goodman's cotenability condition was extensionally incorrect, since his truth conditions for semifactuals were wrong, and he analyzed cotenability by using the notions of positive causal relevance and causal irrelevance, which in turn were reduced to probabilistic relations. Kvart's theory, however, could not capture certain cases (see Belzer, 1993). Barker (1999) provided an improvement over Kvart with a quasi-algorithmic approach to solving the problem of cotenability; Barker's approach invoked causal or, more broadly, connective relations. The truth conditions of a semifactual relative to a scenario S leading to P are given thus, where p_i is an event in S:

(P > still A) iff ¬(P ◇→ p_i causes ¬A) & A

To determine (P > Q) relative to a scenario S, we need to find an instantiated law-based generality (G1) linking P to Q via a true A. To determine A's cotenability, we need to evaluate (P > still A). To determine that, we need to determine (P ◇→ p_i causes ¬A). This can be evaluated directly if there is no instantiated law-based causal generality (G2) linking P, a factual premise B, and the possibility of p_i causing ¬A. If there is no (G2), (P ◇→ p_i causes ¬A) is false, (P > still A) is true, and so (P > Q) is true. If there is a (G2), then we ask if the might-conditional cotenability condition (P ◇→ still B) is true, and that leads to a line of inquiry similar to that for (P > still A). The recursion is bound to terminate at some stage and provide a determinate answer about the truth values, relative to S, of the counterfactuals in the procedure. We do the same evaluation of (P > Q) for all the scenarios leading to P. A noncircular determination of (P > Q)'s truth value results. Unlike Kvart's account, Barker's

approach can be extended to deal with probabilistic counterfactuals.
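Because the procedure just described is essentially a terminating recursion over cotenability checks, its control flow can be sketched in a few lines of code. The sketch below is a toy illustration only – not Barker's formal system – and every name in it (Generality, find_generality, the match example) is an invented placeholder; it also simply assumes that the chain of generalities is finite and acyclic, which is what guarantees the termination noted above.

from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Generality:
    antecedent: str       # P
    consequent: str       # Q, or 'not-A' for a (G2)
    factual_premise: str  # the premise A (or B) the generality relies on

def find_generality(p: str, target: str,
                    laws: Iterable[Generality]) -> Optional[Generality]:
    # Look up an instantiated law-based generality linking p to target.
    return next((g for g in laws
                 if g.antecedent == p and g.consequent == target), None)

def still_true(p: str, a: str, laws, facts) -> bool:
    # (P > still A): A must be a fact, and no (G2) may make
    # 'P might cause not-A' come out true.
    if a not in facts:
        return False
    g2 = find_generality(p, 'not-' + a, laws)
    if g2 is None:
        return True   # no (G2): the might-causal is false, semifactual true
    # A (G2) exists; it is operative only if its own premise B survives,
    # which is settled by the same kind of cotenability inquiry.
    return not still_true(p, g2.factual_premise, laws, facts)

def would_true(p: str, q: str, laws, facts) -> bool:
    # (P > Q): find a (G1) linking P to Q via a true, cotenable premise A.
    g1 = find_generality(p, q, laws)
    return g1 is not None and still_true(p, g1.factual_premise, laws, facts)

# 'If the match had been struck, it would have lit', with 'dry' cotenable:
laws = [Generality('struck', 'lit', 'dry')]
print(would_true('struck', 'lit', laws, facts={'dry'}))   # True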

Possible Worlds Approach

Lewis (1973) proposed these truth conditions in terms of possible worlds and similarity relations: (P > Q) is true iff some (accessible) world where both P and Q hold is more similar to our actual world @, overall, than is any world where P is true and Q is false.
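Rendered slightly more formally (following Lewis, 1973, modulo notation, with similarity to @ understood as a comparative ordering of worlds centered on @):

(P > Q) is true at @ iff either (i) P holds at no accessible world (the vacuous case), or (ii) some accessible world w at which P & Q holds is more similar to @ than is any accessible world at which P & ¬Q holds.

Clause (i) is what makes counterfactuals with impossible antecedents come out trivially true on this account.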

Stalnaker (1968) offered a variant that assumes that there is a unique closest P-world to the actual world; Lewis's does not. Lewis (1973) gave some evidence that this general approach captures the basic logic of counterfactuals – explaining the failure of transitivity, antecedent strengthening, etc. (Metalinguistic theories can do similarly through relativizing entailments to scenarios of transition.) According to Lewis, would-counterfactuals have two readings: forwardtrackers and backtrackers. Forwardtrackers are meant to be those counterfactuals that are used in analyzing causation and that capture the sense in which the future depends causally on the past. Backtrackers carry causal information in the opposite direction. Lewis (1986c) offered R as the criteria governing the similarity metric that determines relative similarity of possible worlds in the case of forwardtracking counterfactuals.

R:
i. It is of first importance to avoid big, widespread, diverse violations of law.
ii. It is of second importance to maximize the spatiotemporal region throughout which perfect match of particular fact prevails.
iii. It is of third importance to avoid even small, localized, simple violations of law.
iv. It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly.

The result of applying R is meant to be that the closest P-worlds to the actual world @ will be worlds that diverge from @ at a time t_0 not too long before t_P, at which point a small miracle of divergence occurs, and which thereafter develop in a law-governed way. Miracle here has a purely technical meaning: a miracle relative to the laws of @, not with respect to the laws of the P-worlds, which remain unbroken. Laws of nature require nothing stronger than a Humean account: laws are simply statements that feature in the most economical description of the pattern of matters of fact in a world, in which perfectly natural properties are picked out.


Unfortunately, there is strong evidence that Lewis's analysis fails for deterministic cases (see Elga, 2000; Hausman, 1998), because the de facto temporal asymmetries of physical determination, which Lewis thought provide the basis for the counterfactual dependency of future on past, do not in fact deliver that result. There are strong reasons to think it fails for indeterministic cases, and in particular for counterfactuals such as (5), which depends for its truth on a chance outcome: the coin's landing heads.

(5) If I had bet on heads I would have won.

If, following clause (iv) of R, approximate agreement of fact after t_P counts for nothing, Lewis's theory deems (5) false, because (I bet heads)-worlds in which the coin lands heads and I win and (I bet heads)-worlds where it lands tails and I lose are equally similar to the actual world. If we allow approximate agreement of fact to count, it is still not obvious why (5) would come out true: change the situation so that the end of the world will occur if I win, and (5) comes out false, even though my betting has no causal influence on the coin. It seems global similarity has absolutely nothing to do with the evaluation of (5). Indeed, it can be argued that causation is essential (see Barker, 2003). Lewis's analysis fails.

Some Issues

A pure analysis of counterfactuals that does not invoke causation looks dubious. There is real concern that Lewis's (1986b) project of a counterfactual analysis of causation will be circular (see Barker, 2003). A more general issue is the direction of counterfactual dependency. To date, both metalinguistic and possible worlds treatments have failed to provide an adequate account. Why does counterfactual thought involve divergence from past to future rather than the other way around? (For one interesting discussion of these matters, see Price, 1996.)

See also: Aspect and Aktionsart; Causatives; Conditionals; Evidentiality; Formal Semantics; Logic and Language; Logical and Linguistic Notation; Logical Consequence; Possible Worlds; Propositional and Predicate Logic.

Bibliography

Adams E (1975). The logic of conditionals. Dordrecht: Reidel.
Barker S (1999). 'Counterfactuals, probabilistic counterfactuals and causation.' Mind 108, 427–469.
Barker S (2003). 'A dilemma for the counterfactual analysis of causation.' Australian Journal of Philosophy 81, 62–77.
Belzer M (1993). 'Kvart's A theory of counterfactuals.' Nous 27, 113–118.
Bennett J (2003). A philosophical guide to conditionals. Oxford: Oxford University Press.
Dudman V H (1994). 'On conditionals.' Journal of Philosophy 91, 113–128.
Edgington D (1995). 'On conditionals.' Mind 104, 235–329.
Elga A (2000). 'Statistical mechanics and the asymmetry of counterfactual dependence.' Philosophy of Science, suppl. vol. 68, 313–324.
Goodman N (1965). Fact, fiction and forecast. Cambridge, MA: Harvard University Press.
Hausman D M (1998). Causal asymmetries. Cambridge: Cambridge University Press.
Jackson F (1987). Conditionals. Oxford: Basil Blackwell.
Jackson F & Pargetter R (1980). 'Confirmation and the nomological.' Canadian Journal of Philosophy 10, 415–428.
Kvart I (1986). A theory of counterfactuals. Indianapolis: Hackett Publishing.
Lewis D (1973). Counterfactuals. Cambridge, MA: Harvard University Press.
Lewis D (1986a). Philosophical papers (vol. 2). Oxford: Oxford University Press.
Lewis D (1986b). 'Causation.' In Lewis (ed.) (1986a). 159–213.
Lewis D (1986c). 'Counterfactual dependence and time's arrow.' In Lewis (ed.) (1986a). 32–66.
Pollock J (1981). 'A refined theory of counterfactuals.' Philosophical Studies 74, 239–266.
Price H (1996). Time's arrow and Archimedes' point: new directions for the physics of time. New York and Oxford: Oxford University Press.
Ramsey F (1990 [1929]). 'General propositions and causality.' In Mellor D H (ed.) Philosophical papers. Cambridge: Cambridge University Press. 145–163.
Sober E (1988). 'Confirmation and lawlikeness.' Philosophical Review 97, 93–98.
Stalnaker R (1968). 'A theory of conditionals.' In Studies in logical theory (American Philosophical Quarterly Monograph 2). Oxford: Blackwell. 98–112.
Stalnaker R (1981). 'A defence of conditional excluded middle.' In Harper W et al. (eds.) Ifs: conditionals, belief, decision, chance and time. Dordrecht: Reidel. 87–104.

D

Default Semantics

K Jaszczolt, University of Cambridge, Cambridge, UK

© 2006 Elsevier Ltd. All rights reserved.

It is hardly contestable that the interpretation of the speaker's utterance by the addressee is frequently driven by the salience of some of the possible interpretations. This salience can be caused by a greater frequency of a certain meaning or by its simplicity, but ultimately it rests on knowledge of social and cultural conventions or the cognitive principles that govern our thinking. Default Semantics concerns such cognitive defaults. Before laying out the principles of Default Semantics, it is necessary to situate the default-based views in the research on the semantics/pragmatics interface. According to the traditional view, in addition to lexical and syntactic ambiguities, there are also semantic ambiguities such as that between the wide and narrow scope of negation in (1), represented in (1a) and (1b) respectively. 'KoF' stands for 'present king of France'.

(1) The present king of France is not bald.
(1a) ¬∃x (KoF(x) & ∀y (KoF(y) → y = x) & Bald(x))
(1b) ∃x (KoF(x) & ∀y (KoF(y) → y = x) & ¬Bald(x))

In (1a) the negation takes wide scope over the whole Russellian analysis of the definite description; in (1b) it takes narrow scope, negating only the predicate Bald.

The ambiguity position, held by Russell, among others, has been successfully refuted. Instead, it has been proposed that such differences in meaning belong to what is implicated rather than what is said (Grice, 1975), and subsequently that semantics can be underspecified as to some aspects of meaning and require pragmatic intrusion in order to arrive at the full propositional representation of the utterance (see, e.g., Carston, 1988, 2002) (see Pragmatics and Semantics). It is now usual to talk about the underdetermination of sense and the underspecification of the logical form. According to some post-Griceans, such differences in meaning can be explained through default interpretations. The level of defaults has been conceived of in a variety of ways: as belonging (i) to semantics (as in Discourse Representation Theory, Kamp and Reyle, 1993, and its offshoots, such as Segmented Discourse Representation Theory, Asher and Lascarides, 2003) (see Discourse Representation Theory); (ii) to pragmatics (Bach, 1994); or (iii) to fully fledged social and cultural conventions, called presumptive meanings or generalized conversational implicatures (Levinson, 2000).

All of these default-based approaches advocate some degree of semantic underdetermination, understood as conceptual gaps in the output of lexicon and grammar. In other words, the logical form, which is the output of the grammatical processing of a sentence, does not provide the totality of meaning of the proposition expressed by the speaker. While this statement is certainly true, and while it also seems to be true that some pragmatic contribution is often required in order to get the correct truth conditions of the utterance, it does not follow that such an underspecified or underdetermined representation need be distinguished as an epistemologically real level in utterance processing. In Default Semantics, there is no semantic ambiguity, but there is no underspecification either. The logical form as the output of syntactic processing interacts with the information coming from the property of mental states of having an object, of being about something, called their intentionality. So, if we ask where meaning comes from, we can point to two sources: (i) the compositionality of the sentence meaning and (ii) the intentionality of the mental state that underlies the sentence. Both are equally basic and equally important, and hence it would be incorrect to consider any information coming from intentionality as an additional, pragmatic level of utterance processing. They both belong to semantics. In dynamic approaches to meaning, such as Discourse Representation Theory, such a level of representation, called in Default Semantics an intentionality-compositionality merger, has been successfully implemented and seems to be more in the spirit of dynamic meaning than postulating any unnecessary underspecifications or ambiguities (see Jaszczolt, 1999a, 1999b, 2000). Default Semantics is governed by three main principles: the Parsimony of Levels (PoL), Degrees of Intentions (DI), and the Primary Intention (PI):

PoL: Levels of senses are not to be multiplied beyond necessity.
DI: Intentions in communication come in various degrees: they can be stronger or weaker.

182 Default Semantics

PI:

The primary role of intention in communication is to secure the referent of the speaker’s utterance.

In PoL, the principle of parsimony with respect to the proposed levels of meaning is taken further than in other post-Gricean approaches. Instead of discerning an underspecified logical form and pragmatic intrusion, both sources of meaning are treated on an equal footing, and both contribute to a common level of representation (the intentionality-compositionality merger). The DI and PI principles specify how intentionality contributes to the meaning representation.

In agreement with the phenomenological tradition (Husserl, 1900–1901), we have defined intentionality as the property of beliefs, thoughts, doubts, etc., of being about an object. It is compatible with this definition of intentionality that the aboutness can be stronger or weaker. For example, the definite description ‘the best Italian painter’ can correspond to a thought about a particular individual, e.g., Michelangelo (and be used referentially); to a thought about a particular individual who does not correctly match the description, e.g., Picasso (i.e., there is a referential mistake); or, finally, to a thought about whoever happens to fit the description (and be used descriptively). In the first case, intentionality is in its strongest form: as a property of the mental state, it reaches, so to speak, a real object. In the middle case, it is weaker: a real object is intended, but there is no such object corresponding to the description, and hence intentionality reaches a mental construct that is a composite of the real person and an incorrect description. In the final case, the intentionality is dispersed and does not reach an object.

Now, intentional mental states need vehicles of meaning, and language is one such vehicle. As a result, linguistic expressions share the property of intentionality, and hence we can talk about the intentionality of utterances as well as the intentionality of thoughts. On the level of utterances, this intending is realized as intentions in communication. Three types of such intentions are distinguished in Default Semantics: an intention to communicate certain content, to inform about certain content, and to refer to objects, states, events, and processes. In accordance with the DI and PI principles, information from the degree of intentionality of the mental state (the strength of intending, the informativeness of an utterance) merges with the information from compositionality and produces the complete propositional representation that conforms to PoL. So, Default Semantics offers a more economical alternative to the approaches founded on underspecified semantics in that it implements Occam’s razor (the methodological principle of not multiplying beings beyond necessity) ‘one level up.’ Semantic representation structures of Discourse Representation Theory have been implemented as formalizations for such intentionality-compositionality mergers (Jaszczolt, 1999b, 2000, 2006).

The DI and PI principles, in recognizing degrees and strengths of intentions, explain how default interpretations can arise. In the case of definite descriptions such as ‘the best Italian painter,’ the hearer normally assumes that the speaker utters the description with a referential intention and that the description is used correctly. This assumption is further corroborated by the assumed intentionality of the speaker’s belief: the intentionality is strongest when a particular, identifiable individual has been intended. By force of the properties of vehicles of thought discussed in this article, the stronger the intentionality, the stronger the speaker’s intentions; in the case of definite descriptions, the stronger the intentionality, the stronger the referential intention. There are thus three degrees of intentionality, corresponding to the three readings distinguished previously: (i) the strongest, referential; (ii) the intermediate, referential with a referential mistake; and (iii) the weakest, attributive. The strongest intentionality corresponds to the default reading. This default reading arises instantly, as a compositionality-intentionality merger. Only if addressees have evidence from their knowledge base or from the context that this default is not the case does the default interpretation fail to arise. This procedure is an improvement on other default-based approaches, where defaults have to be canceled or overridden. Cancellation of defaults is a costly process and should not be postulated lightly: if there is no evidence of such cancellation, it is better to do without it and assume a more economical model of utterance processing.

Similarly, cognitive defaults can be discerned for belief and other propositional attitude reports (see the article Propositional Attitudes). Sentence (2a) can give rise to a report, as in (2b).

(2a) The best Italian painter painted this picture.
(2b) Mary believes that the best Italian painter painted this picture.
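The selection procedure just described for definite descriptions can be illustrated with a few lines of code. This is purely our illustration, not part of the theory’s formal apparatus; the names and the numeric ‘degrees’ are hypothetical stand-ins for the strength of intentionality.

    # Toy sketch of default selection in Default Semantics (our illustration,
    # not the theory's formal apparatus).
    READINGS = [
        ("referential", 3),          # strongest intentionality: reaches a real object
        ("referential mistake", 2),  # intermediate: real object intended, wrong description
        ("attributive", 1),          # weakest: intentionality dispersed
    ]

    def interpret(context_blocks_default=False):
        """The strongest reading arises instantly as the default; no underspecified
        representation is computed and nothing is 'canceled' afterwards. Only
        counter-evidence from the knowledge base or context prevents the default."""
        ordered = sorted(READINGS, key=lambda r: r[1], reverse=True)
        if not context_blocks_default:
            return ordered[0][0]                    # default merger: 'referential'
        return [name for name, _ in ordered[1:]]    # weaker readings remain live

    print(interpret())      # referential
    print(interpret(True))  # ['referential mistake', 'attributive']

The point of the sketch is the absence of any intermediate underspecified representation: the default is computed directly, and the weaker readings are entertained only under counter-evidence.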

Using the representation of the Discourse Representation Theory (Kamp and Reyle, 1993; Reyle, 1993), we can represent the possible readings of (2) as in Figure 1 (Jaszczolt, 1999b: 287). The discourse referent y is enclosed by a box drawn with a broken line, which signals that y can belong to any of the three remaining boxes. If it belongs to the outermost box, the reading is de re (about a particular individual, say, Michelangelo). Placed in the middle box, it signals that Mary has a de re belief but is referentially mistaken, thinking, for example, of Picasso. Placing y in the innermost box corresponds to a belief about whoever fits the description, i.e., a belief in a proposition (de dicto) rather than about a particular individual.

Figure 1 A combined DRS for the three readings of (2b).

Analogously to the case of definite descriptions, where referential use was the default, the de re reading of a belief report comes out as a default, because it corresponds to the strongest intentions and the strongest intentionality. So, Figure 1 comprises three possible representations (three possible compositionality-intentionality mergers).

In addition to definite descriptions in extensional and in propositional attitude contexts, the mechanism of the principles of Default Semantics has been applied to a variety of language expressions and constructions, including proper names (Jaszczolt, 1999b), presuppositional expressions (Jaszczolt, 2002a, 2002b), expressions of temporality and modality, and tentatively to numerals and sentential connectives (Jaszczolt, 2005a, 2005b). Naturally, the PI principle will not always be relevant. The referential intention will not always be present, and even when it is, it may not pertain to the assessment of the default or nondefault status of various readings. For example, in an assessment of the default meaning of will from among the epistemic necessity will in (3), the dispositional necessity will in (4), and the marker of future tense in (5), it is the intention to inform the addressee about a certain content that is graded from the strongest to the weakest:

(3) Mary will be in the opera now.
(4) Mary will sometimes go to the opera in her tracksuit.
(5) Mary will go to the opera tomorrow night.

The Default-Semantic account of will also demonstrates that the modal and temporal senses of will are traceable to one overarching modal concept (akin to the sentential operator of acceptability in Grice, 2001). And since will is modal, it follows that the assignment of defaults has to be reversed as compared with the examples previously discussed: the weakest intentionality corresponds to the default sense of will, and this, predictably, turns out to be the regular future marker in (5) (for a formal account, see Jaszczolt, 2006).

Not all default interpretations are reducible to cognitive defaults. For example, the interpretation of possessives, as in (6), is dependent on the addressee’s background knowledge and the context, rather than on the properties of mental states.

(6) Peter’s book is about a glass church.

Similarly, inferences to a stereotype (‘female nurse’), such as in (7), are not a matter of the strength of intending but rather stem from acquaintance with social and cultural practices.

(7) They employed a nurse to look after the patient.

Such default interpretations belong to the category of social and cultural defaults and are not always of central interest to semantic theory. The phenomenon of negative-raising, i.e., the tendency for negation on the main clause to be interpreted as negation on the subordinate clause, is not an obvious cognitive default, but here we must be cautious. Neg-raising unpredictably applies to some relevant verbs but not to others, as (8) and (9) demonstrate.

(8) I don’t think he is dishonest. (communicates, defeasibly: ‘I think he is not dishonest.’)
(9) I don’t hope he will win. (does not communicate: ‘I hope he will not win.’)

The important question at this point concerns the theory’s scope of applicability, which can be taken in a narrow and in a wide sense. In the narrow sense, we ask which default interpretations can be regarded as cognitive defaults, traceable to the properties of mental states. Cognitive defaults are rather widespread. In addition to the examples already mentioned, numerals seem to default to the ‘exactly’ meaning, rather than being underdetermined between ‘at least,’ ‘at most,’ and ‘exactly,’ or having an ‘at least’ semantics. The enrichment of some sentential connectives, such as if (to ‘if and only if’) and or (to exclusive or), can possibly also be traced to the strength of the informative intention and intentionality. This proposal concerning connectives and numerals is still highly programmatic and in need of further research; it is signaled here in order to shed some light on possible applications of cognitive defaults. In the wide sense, Default Semantics also comprises social and cultural defaults, simply by assigning them an epistemological status that has nothing to do with the compositionality-intentionality merger.


To sum up: Default Semantics postulates a level of utterance interpretation called a compositionality-intentionality merger and thereby significantly decreases the role of underspecification in semantic theory. It distinguishes cognitive defaults and intention-based degrees of departure from these defaults, triggered by the addressee’s knowledge base and the context. The theory also acknowledges the existence of social and cultural defaults, whose source lies beyond semantics proper.

See also: Compositionality; Definite and Indefinite Descriptions; Discourse Representation Theory; Implicature; Neo-Gricean Pragmatics; Pragmatics and Semantics; Presupposition; Proper Names; Propositional Attitudes; Referential versus Attributive; Semantics–Pragmatics Boundary.

Bibliography

Asher N & Lascarides A (2003). Logics of conversation. Cambridge: Cambridge University Press.
Bach K (1994). ‘Semantic slack: what is said and more.’ In Tsohatzidis S L (ed.) Foundations of speech act theory: philosophical and linguistic perspectives. London: Routledge. 267–291.
Carston R (1988). ‘Implicature, explicature, and truth-theoretic semantics.’ In Kempson R M (ed.) Mental representations: the interface between language and reality. Cambridge: Cambridge University Press. 155–181.
Carston R (2002). Thoughts and utterances: the pragmatics of explicit communication. Oxford: Blackwell.
Grice H P (1975). ‘Logic and conversation.’ In Cole P & Morgan J L (eds.) Syntax and semantics, vol. 3. New York: Academic Press. Reprinted in Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press. 22–40.
Grice P (2001). Warner R (ed.) Aspects of reason. Oxford: Clarendon Press.
Husserl E (1900–1901). Logical investigations (vol. 2). Findlay J N (trans.) (1970; reprinted 2001). London: Routledge and Kegan Paul.
Jaszczolt K M (1999a). ‘Default semantics, pragmatics, and intentions.’ In Turner K (ed.) The semantics/pragmatics interface from different points of view. Oxford: Elsevier Science. 199–232.
Jaszczolt K M (1999b). Discourse, beliefs, and intentions: semantic defaults and propositional attitude ascription. Oxford: Elsevier Science.
Jaszczolt K M (2000). ‘The default-based context-dependence of belief reports.’ In Jaszczolt K M (ed.) The pragmatics of propositional attitude reports. Oxford: Elsevier Science. 169–185.
Jaszczolt K M (2002a). ‘Against ambiguity and underspecification: evidence from presupposition as anaphora.’ Journal of Pragmatics 34, 829–849.
Jaszczolt K M (2002b). Semantics and pragmatics: meaning in language and discourse. London: Longman.
Jaszczolt K M (2005a). ‘Prolegomena to Default Semantics.’ In Marmaridou S, Nikiforidou K & Antonopoulou E (eds.) Reviewing linguistic thought: converging trends for the 21st century. Berlin: Mouton. 107–142.
Jaszczolt K M (2005b). Default semantics: foundations of a compositional theory of acts of communication. Oxford: Oxford University Press.
Jaszczolt K M (2006). ‘Futurity in Default Semantics.’ In von Heusinger K & Turner K (eds.) Where semantics meets pragmatics. Oxford: Elsevier. 471–492.
Kamp H & Reyle U (1993). From discourse to logic: introduction to model-theoretic semantics of natural language, formal logic and Discourse Representation Theory. Dordrecht: Kluwer.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Reyle U (1993). ‘Dealing with ambiguities by underspecification: construction, representation and deduction.’ Journal of Semantics 10, 123–179.

Relevant Website
http://www.mml.cam.ac.uk/ling/research/ds_project.html

Definite and Indefinite
B Abbott, Michigan State University, East Lansing, MI, USA
© 2006 Elsevier Ltd. All rights reserved.

What Does ‘Definite’ Mean?

‘Definite’ and ‘indefinite’ are terms that are usually applied to noun phrases (NPs). In English, the is referred to as ‘the definite article,’ and a/an as ‘the indefinite article.’ NPs that begin with the (e.g., the Queen of England, the book), which are also called (especially in the philosophical literature) ‘definite descriptions,’ are generally taken to be prototypical examples of definite NPs in English. However, it should be noted that not all of them show the same pieces of behavior that have come to be taken as criterial for definiteness. Similarly, NPs that begin with a/an (an elephant, a big lie), ‘indefinite descriptions,’ are prototypical examples of indefinite NPs. (Plural indefinite descriptions use the determiner some.)

Uniqueness?

Exactly what differentiates definite from indefinite NPs has been a matter of some dispute. One tradition comes from the philosophical literature, more specifically Bertrand Russell’s classic work on denoting phrases (Russell, 1905). On this tradition, what distinguishes the from a/an is uniqueness – more specifically, the existence of one and only one entity meeting the descriptive content of the NP. So, while use of an indefinite description in a simple, positive sentence merely asserts existence of an entity meeting the description, use of a definite description asserts in addition its uniqueness in that regard. While (1a), on this view, is paraphrasable as (1b), (2a) is equivalent to (2b).

(1a) I met an owner of El Azteco.
(1b) There is at least one owner of El Azteco whom I met.
(2a) I met the owner of El Azteco.
(2b) There is one and only one owner of El Azteco, and I met that individual.
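Russell’s two paraphrases can be spelled out in the same first-order notation used for (1a) and (1b) in the Default Semantics article. The rendering below is a standard one rather than Russell’s own symbolism; ‘O’ abbreviates ‘owner of El Azteco’ and ‘s’ stands for the speaker (both abbreviations are ours):

(1b′) ∃x (O(x) & Met(s, x))
(2b′) ∃x (O(x) & ∀y (O(y) → y = x) & Met(s, x))

The conjunct ∀y (O(y) → y = x) is what encodes uniqueness: deleting it turns the definite paraphrase (2b′) back into the indefinite one (1b′).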

It should be noted that Russell was concerned to capture the meaning of definite descriptions in a formal language of logic. Also, on his analysis both definite and indefinite descriptions are quantificational expressions (like explicitly quantified NPs, such as every apple, no unwanted visitors). The idea that definite descriptions are quantificational has been questioned by others, who view these NPs instead as referential. Fewer people question the idea that indefinite descriptions are quantificational, although some (primarily linguists, rather than philosophers) assume that they, too, are referential.

The uniqueness theory seems to accord well with our intuitions. It also is supported by the fact that when we stress the definite article contrastively, it brings out the sense of uniqueness. Example (3) seems to be inquiring as to whether there is more than one owner, or only one.

(3) Did you meet an owner of El Azteco or the owner?

It might seem that this approach would necessarily be confined to singular NPs. However, as argued by Hawkins (1978), the notion of uniqueness can be extended to plurals by employing the idea of exhaustiveness – the denotation of a definite consists of everything meeting the descriptive content of the NP. An NP like the owners of El Azteco would thus be held to be very similar to all the owners of El Azteco.

The first challenge to Russell’s analysis of definite descriptions was put forward by P. F. Strawson, who argued that sentences containing definite descriptions are not used to assert the existence and uniqueness of an entity meeting the descriptive content in question. Instead, Strawson argued, definite descriptions are referential NPs, and the existence and uniqueness of a referent is presupposed (cf. Strawson, 1950; in this seminal work, Strawson did not use the term ‘presuppose,’ although it appeared very quickly in reference to the phenomenon in question). Strawson also argued that if the presupposition fails, the sentence as a whole is neither true nor false. Thus, in the case of (2a), should it turn out that no one owns El Azteco (perhaps it is a government installation), an addressee of (2a) would not respond “That’s false!” but would correct the speaker’s mistaken presupposition.

Another, more serious problem for Russell’s analysis has attracted a lot of attention more recently: the fact that in a great number of cases, perhaps the vast majority, the descriptive content of a definite description is not sufficient to pick out a unique referent from the world at large. One example of such an ‘incomplete description’ is in (4).

(4) Please put this on the table.

The sentence in (4) is readily understandable despite the fact that the world contains millions of tables. There are two main kinds of approach to dealing with this problem. A syntactic solution would propose that there is sufficient additional descriptive material tacitly present in the NP – e.g., the table next to the armchair in the living room of the house at 76 Maple Avenue, Eastwood, Kansas, USA. But it would be hard to explain how an addressee would guess which descriptive content had been left tacit. On a more plausible approach, the uniqueness encoded in definite descriptions should be understood relative to a context of utterance, which would include only those items in the surroundings of the discourse participants and those items mentioned in the course of the conversation or understood to be relevant to its topic. However, this runs into a problem with examples like (5), first pointed out by James McCawley (McCawley, 1979).

(5) The dog got into a fight with another dog.

David Lewis proposed that definite descriptions denote the most salient entity meeting the descriptive content (Lewis, 1979).

Familiarity?

The other main tradition concerning the meaning of definiteness generally cites the Danish grammarian Paul Christophersen. In Christophersen’s view, what distinguishes definite from indefinite descriptions is whether or not the addressee of the utterance is presumed to be acquainted with the referent of the NP. In an often cited passage, Christophersen remarked: “Now the speaker must always be supposed to know which individual he is thinking of; the interesting thing is that the the-form supposes that the hearer knows it too” (Christophersen, 1939: 28).


This approach appears to fare better with examples like (4), where, indeed, it seems that the speaker must be assuming that the addressee knows which table the speaker is referring to. Within current linguistic theory, the familiarity approach was revived by the work of Irene Heim (1982, 1983). Like Strawson, Heim argued that definite descriptions are referential rather than quantificational; however, she also argued that indefinite descriptions are referential as well. Heim took the uses of definite and indefinite descriptions as they occur in (6) as typifying their semantics.

(6) Mary saw a movie last week. The movie was not very interesting.

In the mini discourse in (6), the indefinite NP a movie is used to introduce a new entity into the discourse context. Subsequently that entity is referred to with a definite (the movie). Notice that we might as easily have referred to the movie in the second sentence of (6) with a pronoun: . . . It was not very interesting. Heim grouped pronouns and definite descriptions together as being governed by a ‘Familiarity’ condition: use of a definite is permitted only when the existence of the referred-to entity has been established in the particular discourse. Indefinite descriptions, on the other hand, are subject to a ‘Novelty’ condition: they presuppose that their referent is being introduced into the discourse for the first time. It is easy to see that this will solve the problem of incomplete descriptions. An example like (5) would be used only when the first dog referred to was presumed to be known to the addressee.
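Heim’s Novelty and Familiarity conditions lend themselves to a simple ‘file’-keeping illustration. The following is a minimal toy sketch of the idea (ours, not Heim’s file change semantics; the discourse ‘file’ is modeled as a bare set of referents):

    # Minimal toy rendering of Heim's Novelty and Familiarity conditions
    # (our sketch, not Heim's file change semantics).
    def update(file, article, referent):
        """'file' is the set of discourse referents established so far."""
        if article == "indefinite":
            if referent in file:
                raise ValueError("Novelty violated: indefinites introduce new referents")
            file.add(referent)
        else:  # definite description or pronoun
            if referent not in file:
                raise ValueError("Familiarity violated: no established referent")
        return file

    f = update(set(), "indefinite", "movie")  # 'Mary saw a movie last week.'
    f = update(f, "definite", "movie")        # '... The movie was not very interesting.'

On this picture, incomplete descriptions like the table in (4) are unproblematic: it is the file, not the world at large, that must supply the referent.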

Though the familiarity theory is very plausible for a number of uses of definite descriptions, there are some kinds of cases it does not appear to cover very well. One of these is definite descriptions where the descriptive content of the NP is sufficient to determine a unique referent, no matter what the context. Some examples are given in (7).

(7a) Mary asked the oldest student in the class to explain everything.
(7b) Philip rejected the idea that languages are strictly finite.

Here we need not assume that the addressee is familiar with the referents of the underlined NPs or that these referents had been mentioned previously in the conversation. Note, too, that in this kind of case the indefinite article is not allowed, as shown in (8). (The asterisk in front of these examples indicates that they are not well formed.)

(8a) * Mary asked an oldest student in the class to explain everything.
(8b) * Philip rejected an idea that languages are strictly finite.

And even when the descriptive content is not sufficient to determine a unique referent relative to the whole world, there are examples where the content may determine a unique referent in context. In these cases, too, the definite article may be used, even if the addressee is not assumed to know who or what is being talked about. An example is given in (9).

(9) Sue is mad because the realtor who sold her house overcharged her.

Adherents to the familiarity theory often invoke the idea of accommodation (following Lewis, 1979) to explain these uses. The idea is that addressees are willing to accept a definite description if they are able to figure out the intended referent.

Some Puzzling Cases

While most occurrences of definite descriptions are consistent with both the uniqueness theory and the familiarity theory, there are several kinds that don’t match either theory. One group of examples is given in (10).

(10a) Horace took the bus to Phoenix.
(10b) The elevator will take you to the top floor.

It seems that with modes of transportation, a singular definite description can be used despite the fact that there are, e.g., many buses to Phoenix, and the building in (10b) may have many elevators. We also don’t suppose that the addressee will be familiar with the particular bus or elevator in question. A different kind of case is illustrated in (11).

(11a) My uncle wrote something on the wall.
(11b) We camped by the side of a river.
(11c) She shot herself in the foot.

These sentences are well formed even though rooms typically have more than one wall, rivers more than one side, and people more than one foot. It may be relevant that these are locations. In all of these cases, as pointed out by Du Bois (1980), to use an indefinite article puts too much emphasis on the location, as though it were inappropriately being brought into focus.

A third kind of example shows some dialectal variation. The example in (12) is well formed in American English, although in British English the article would be missing from the underlined NP.

(12) My mother is in the hospital.

Compare the examples in (13), which are good in both dialects.

(13a) Bill went to school this morning.
(13b) If you’re very good, you’ll get to heaven some day.


The examples in (14), also good for English in general, indicate a certain amount of idiomaticity.

(14a) I heard it on the radio.
(14b) I saw it on TV.

It seems that some nouns simply require the definite article, while others are fine without it. Finally, some adjectives call for the definite article in English, despite their not restricting the reference of the NPs they occur in to either a unique or a familiar referent.

(15) She gave the wrong answer and had to be disqualified.

It is not clear whether these examples indicate the need for a brand-new theory of the definite article in English or are just idiomatic exceptions to the rule.

Grammatical Phenomena

Sensitivity to the definiteness of an NP is called a definiteness effect, and a number of constructions are believed to have such an effect.

Existential Sentences

One of the earliest constructions showing a definiteness effect to be noticed was existential, or there be, sentences. Examples like those in (16) are quite natural, but the corresponding sentences in (17) sound peculiar, if not downright ungrammatical.

(16a) There is a book in the shop window.
(16b) There were some bachelors on board the ship.
(17a) * There is the book in the shop window.
(17b) * There were the bachelors on board the ship.

One complicating factor is the existence of a construction that is similar to the existential construction but that is used in a restricted set of circumstances. The latter kind of sentence, often called a list existential, typically seems to be used to offer entities to fulfill some role or purpose. However, this kind of existential does not allow a locative prepositional phrase to follow the focus NP. Examples just like those in (17) but where the prepositional phrase is an NP modifier, as in (17′), could be used in reply to the questions in (18).

(17′a) There is the book in the shop window.
(17′b) There were the bachelors on board the ship.
(18a) What can we get for Bill for his birthday?
(18b) Weren’t there any people around to help?

The more common type of existential, like those in (16), can be called locative existentials. In this case, the prepositional phrase that follows the focus NP is a separate constituent that locates the item in question. It is only locative existentials that show a definiteness effect, and these have been used as a test for definiteness, as we see in the section ‘Other Kinds of Definite and Indefinite NPs’ (a toy version of the diagnostic is sketched at the end of this section).

The Have Construction

Another construction, similar to existential sentences, is one involving the verb have when it is used to indicate inalienable possession. Here, too, we see a definiteness effect, in that the examples in (19) are natural, while those in (20) are not.

(19a) She had a full head of hair.
(19b) He had a sister and two brothers.
(20a) * She had the full head of hair.
(20b) * He had the sister and the two brothers.

It is perhaps not too surprising that these two constructions should show a similar definiteness effect, since have and be verbs are often used for similar kinds of propositions in the world’s languages.
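As promised above, here is a toy rendering of the locative-existential diagnostic. It is only a caricature (string matching is no substitute for grammaticality judgments), and the determiner list is abridged from this article’s discussion:

    # Toy caricature of the locative-existential diagnostic (illustration only;
    # string matching is no substitute for grammaticality judgments).
    DEFINITE_MARKERS = {"the", "this", "that", "these", "those"}
    # Pronouns, proper names, and possessives pattern as definite too, and the
    # indefinite 'this' of (29) below is a known exception.

    def predicted_ok_in_existential(np):
        """Predict whether 'There is/are <np> <place>' should sound natural:
        NPs opening with a definite marker are flagged as odd."""
        return np.split()[0].lower() not in DEFINITE_MARKERS

    print(predicted_ok_in_existential("a book"))    # True:  There is a book in the window
    print(predicted_ok_in_existential("the book"))  # False: *There is the book in the window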

Other Kinds of Definite and Indefinite NPs

As shown in the first section, there is no commonly agreed on essence of definiteness or indefiniteness. Hence the need for some kind of diagnostic for these properties. Ability to occur naturally in a locative existential has become the main diagnostic used.

Other Kinds of Definite NPs

As noted in this article, Heim assumed that pronouns are definite, like definite descriptions. Others agree with this categorization. And as we might expect, pronouns do not go naturally in locative existentials. The sentences in (21) are not natural.

(21a) * There was it in the fireplace.
(21b) * There were them all over the floor.

Pronouns seem to fit both the uniqueness and the familiarity conceptions of definiteness. When they are used, it is assumed that there is a unique intended referent within the discourse context, and it is also assumed that the addressee will know who or what the speaker was intending to refer to.

Another subcategory of NP that is typically assumed to be definite consists of proper names. Like pronouns and definite descriptions, these do not occur naturally in locative existentials.

(22a) * There was Joan in the library.
(22b) * There is France in the United Nations.

Although it might not be so obvious as it is with pronouns, proper names also seem definite by both theories of definiteness. Proper names behave as though they have a unique referent; they cannot accept restrictive adjectives or other restrictive modifiers. And in fact in most contexts, each proper name does have a unique referent. On the other hand, we do not usually use a proper name unless we assume that our addressee has already been introduced to the referent.

A third kind of NP that is generally agreed to be definite would be those that have a demonstrative determiner: this, that, these, or those. These cannot occur naturally in a locative existential, as shown in (23) and (24).

(23a) * There was that book over there in Mary’s bag last Tuesday.
(23b) * There are these applicants waiting to see the dean.

In addition, NPs with a possessive determiner are usually classed as definite.

(24) * There was Mary’s car in the driveway.

Indeed, NPs with possessive determiners are typically regarded as belonging to the category of definite descriptions.

Some kinds of quantified NPs cannot occur naturally in existential sentences, and this has led some people to consider them to be definite NPs. Some examples are given in (25).

(25a) * There were all the students at the party.
(25b) * There were most red buttons on the dress.

However, it is possible that these NPs should not be classified as definite and that there is some other reason that they cannot occur in locative existential sentences.

Bare NPs

One interesting kind of NP in English has received a significant amount of attention. So-called bare NPs do not have any determiner, and the head noun must be either plural or a mass noun. These NPs have (at least) two distinct uses. Sometimes they are interpreted generically, as in (26).

(26a) Mary likes sharpened pencils.
(26b) Water with fluoride in it is good for the teeth.

The sentences in (26) concern the whole category referred to by the underlined NP. On the other hand, sometimes these bare NPs have an existential interpretation, in that they are referring to just some members or a subpart of the category.

(27a) Mary bought sharpened pencils.
(27b) There was water with fluoride in it in the test tube.

As can be seen in (27b), when bare NPs occur in a locative existential sentence, they can have only the existential interpretation, and not the generic one.

Other Types of Indefinite NPs

In addition to indefinite descriptions and bare NPs with the existential interpretation, there are other types of NPs that go naturally in locative existentials. Some examples of quantified NPs are shown in (28).

(28a) There are a few pieces of cake left.
(28b) There were few, if any, freshpersons at the school fair.
(28c) There are many problems for that course of action.
(28d) There are some big flecks of paint on the back of your coat.

If we use natural occurrence in a locative existential as a diagnostic, then these other types would also be classified as indefinite NPs. In addition to these, there are some other unexpected cases of NPs that look as though they should be definite, because they have definite determiners, but that can appear naturally in a locative existential. One kind, noticed first by Prince (1981), uses the proximal demonstrative determiner (this, these), but with an indefinite reference, as in (29).

(29a) There was this strange note on the blackboard.
(29b) There are these disgusting globs of stuff in the bowl.

Note that this is definitely a different use of these determiners. The examples in (29) would not be used with any kind of pointing gesture, and indeed, they could be used in a phone conversation, where the addressee is not in the same perceptual situation as the speaker. Also, it is worth noting that this indefinite use of this and these is somewhat marked stylistically. Examples like those in (29) would not appear in a formal context.

Finally, there are some kinds of NPs that look like definite descriptions, but whose sense is indefinite, and that can appear naturally in existentials.

(30a) There was the nicest young man at the picnic!
(30b) There were the same nominees on both ballots.

As Prince (1992) pointed out, an NP like the same nominees can have two interpretations. One is anaphoric, as in (31).

(31) The Executive Committee came up with a list of nominees, and it happened that the Nominating Committee chose the same nominees.

Here the same nominees refers back to the Executive Committee’s list and means that the Nominating Committee’s choices were the same as the Executive Committee’s. On this interpretation, the same nominees would be definite. However, the interpretation in (30b) is different: it means that the two ballots had the same choices. This interpretation is apparently indefinite, given the ability of this NP to occur naturally in the existential sentence in (30b).

Other Kinds of Categorizations

A simple binary distinction like definite vs. indefinite may be too crude, especially if we are trying to classify NPs in general. Furthermore, it may be more useful to look at the role of NP form with respect to discourse function. A number of researchers have turned to the idea of information status – an extension of the familiarity idea, but with greater articulation.

Old and New

Prince (1992) argued that we need to distinguish two ways in which information can be novel or familiar, new or old. One is with respect to (the speaker’s assumption about) the addressee, which Prince called Hearer-old and Hearer-new. The speaker assumes that the addressee is already acquainted with the referent of a Hearer-old NP, whereas Hearer-new NPs are assumed to introduce new entities to the addressee. On the other hand, entities can be new or old with respect to a discourse: Discourse-old or Discourse-new. Discourse-old NPs refer to entities that have already been mentioned in the current discourse, in contrast to Discourse-new NPs. Prince found that it was the category of Hearer-old/Hearer-new that correlated roughly with the definite/indefinite distinction, rather than Discourse-old/Discourse-new. This seems to agree more with Christophersen’s than with Heim’s conception of definiteness and indefiniteness.

The Givenness Hierarchy

Gundel et al. (1993) proposed a hierarchy of givenness corresponding to the degree to which the referent of an NP is assumed to be cognitively salient to the addressee. Each point in the hierarchy corresponds to one or more NP forms. At one end of the hierarchy is the weakest degree of knownness, which Gundel et al. labeled ‘type identifiable.’ This end corresponds to indefinite descriptions, and the criterion for their use is just that the addressee be familiar with the kind of thing denoted. At the other extreme, we find NPs denoting entities that are currently ‘in focus,’ and pronouns require this extreme of cognitive salience. Definite descriptions are about midway in the hierarchy, requiring unique identifiability for their referents. Just to the weaker side are the indefinite this/these NPs. On the more salient side of definite descriptions are NPs with demonstratives (this, that, these, those) as determiners. One special aspect of this treatment is that the criteria for each place on the hierarchy are increasingly stringent and subsume the criteria for all less stringent points; that is, the hierarchy is an implicational one. Hence, indefinite descriptions, which have the weakest requirement, can appear anywhere in principle – even when their referents are in focus. However, general conversational principles militate against using an indefinite description in such a situation, as it would be misleading in suggesting that only the weakest criterion had been satisfied.
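The implicational character of the hierarchy can be made concrete in a short sketch. The six statuses follow Gundel et al. (1993); the form-to-status pairings are simplified from the discussion above:

    # Sketch of the implicational Givenness Hierarchy (Gundel et al., 1993).
    # Statuses run from weakest (index 0) to strongest.
    STATUSES = [
        "type identifiable",      # a movie
        "referential",            # indefinite 'this movie'
        "uniquely identifiable",  # the movie
        "familiar",               # that movie
        "activated",              # this movie (deictic)
        "in focus",               # it
    ]

    MINIMUM = {
        "indefinite description": "type identifiable",
        "indefinite this NP": "referential",
        "definite description": "uniquely identifiable",
        "demonstrative NP": "familiar",
        "pronoun": "in focus",
    }

    def licensed(form, status):
        """A form is licensed whenever the referent's status meets or exceeds
        the form's minimum: each status subsumes all weaker ones."""
        return STATUSES.index(status) >= STATUSES.index(MINIMUM[form])

    # Indefinite descriptions are licensed even for in-focus referents, though
    # conversational principles usually rule them out there in practice:
    print(licensed("indefinite description", "in focus"))  # True
    print(licensed("pronoun", "uniquely identifiable"))    # False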

The Accessibility Hierarchy

A third approach, similar to the one just mentioned but with its own distinct characteristics, was developed by Mira Ariel (1990, 2001). Ariel proposed an even more articulated accessibility hierarchy, reflecting the marking of NPs according to how accessible in human memory their referents should be. Upwards of 15 distinct categories are represented, ranging from full name plus modifier (at the least accessible end) to zero pronouns (represented with ø), which are found in constructions like those in (32).

(32a) Mary wanted ø to build a cabin.
(32b) Open ø in case of fire.

Full name plus modifier (e.g., Senator Hillary Clinton) is distinguished from full name, last name alone, or first name alone, each of which receives a separate spot on the hierarchy. Similarly, long definite descriptions, with a lot of descriptive content, are distinguished from short ones, and stressed pronouns from unstressed pronouns. (The list does not contain quantified and [other] indefinite NPs, which, as noted in this article, are often considered not to be referential expressions.) Ariel’s claim was that the hierarchy of NP forms corresponds to accessibility, where the latter is determined by a number of factors, including topichood, recency and number of previous mentions, and how stereotypic the referent is for the context. The NP forms go generally from fullest and most informative to briefest and least informative, ø being the limiting case. The idea is that an NP form typically encodes an appropriate amount of information for the addressee to achieve easy identification of the referent.
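Ariel’s correlation between NP form and accessibility can be sketched in the same spirit. The ordering below is abridged from the forms mentioned in this article (the full hierarchy has upwards of 15 points), and the numeric score is a hypothetical stand-in for factors such as topichood and recency of mention:

    # Abridged sketch of Ariel's accessibility marking scale (illustration only).
    FORMS = [  # least accessible referent -> most accessible referent
        "full name + modifier",
        "full name",
        "long definite description",
        "short definite description",
        "first name",
        "stressed pronoun",
        "unstressed pronoun",
        "zero (ø)",
    ]

    def choose_form(accessibility):
        """Map an accessibility score in [0, 1] to a correspondingly reduced form:
        the more accessible the referent, the briefer and less informative the NP."""
        index = round(accessibility * (len(FORMS) - 1))
        return FORMS[index]

    print(choose_form(0.0))  # 'full name + modifier'
    print(choose_form(1.0))  # 'zero (ø)'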

Definite and Indefinite in Other Languages

The examples used so far in this article have been taken from English. However, many other languages have definite and/or indefinite articles, though by no means all of them. Lyons (1999) described the explicit marking of definiteness – whether with an article or a nominal inflection – as an areal feature that characterizes the languages of Europe and the Middle East in particular, although it can be found elsewhere in the world as well (Lyons, 1999: 48). Definite articles often seem to develop out of demonstrative determiners, as was the case in English. Indefinite articles, on the other hand, often come from the word for ‘one.’ Some languages, e.g., Irish (Irish Gaelic), have only a definite article, whereas others, e.g., Turkish, mark only indefinites explicitly. The examples in (33) and (34) are taken from Lyons (1999: 52).

(33) Irish
(33a) an bord  ‘the table’
(33b) bord     ‘a table’

(34) Turkish
(34a) ev      ‘house’, ‘the house’
(34b) bir ev  ‘a house’

Even among languages that have both definite and indefinite marking, the usages typically do not match exactly across languages. Thus, bare NPs in English have a generic use (as in the examples in (26)). French also has both definite and indefinite determiners but, unlike English, would use the definite determiner in examples like those in (26).

(35) French
(35a) Marie aime les crayons bien taillés.
(35b) L’eau au fluor est bonne pour les dents.

In languages that do not use articles or some other explicit marking for definiteness or indefiniteness, word order may affect interpretation in that way, as in the examples in (36) and (37), from Chinese and Russian.

(36) Mandarin Chinese
(36a) Zhuo-zi shang you shu.
      table on have book
      ‘There is a book (or books) on the table.’
(36b) Shu zai zhuo-zi shang.
      book is.located table on
      ‘The book is on the table.’

(37) Russian
(37a) Na stolé lezhít karta.
      on table lies map
      ‘There is a map lying on the table.’
(37b) Karta lezhít na stolé.
      map lies on table
      ‘The map is lying on the table.’

However, it should be noted that word order variation also interacts with topicality and the distribution of new and old information in a sentence and that this affects the definiteness or indefiniteness of an NP’s interpretation. For a full discussion of the expression of definiteness and indefiniteness in a variety of the world’s languages, see Lyons (1999).

See also: Accessibility Theory; Definite and Indefinite Articles; Definite and Indefinite Descriptions; Indefinite Pronouns; Pragmatic Presupposition; Presupposition; Proper Names; Proper Names: Philosophical Aspects.

Bibliography

Abbott B (2004). ‘Definiteness and indefiniteness.’ In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 122–149.
Ariel M (1990). Accessing noun phrase antecedents. London: Routledge.
Ariel M (2001). ‘Accessibility theory: an overview.’ In Sanders T, Schliperoord J & Spooren W (eds.) Text representation. Amsterdam and Philadelphia: John Benjamins. 29–87.
Birner B J & Ward G (1998). Information status and noncanonical word order in English. Amsterdam and Philadelphia: John Benjamins.
Carlson G N (1977). ‘A unified analysis of the English bare plural.’ Linguistics and Philosophy 1, 413–456.
Christophersen P (1939). The articles: a study of their theory and use in English. Copenhagen: Munksgaard.
Diesing M (1992). Indefinites. Cambridge, MA: MIT Press.
Du Bois J W (1980). ‘Beyond definiteness: the trace of identity in discourse.’ In Chafe W L (ed.) The pear stories: cognitive, cultural, and linguistic aspects of narrative production. Norwood, NJ: Ablex. 203–274.
Gundel J K, Hedberg N & Zacharski R (1993). ‘Cognitive status and the form of referring expressions in discourse.’ Language 69, 274–307.
Haspelmath M (1997). Indefinite pronouns. Oxford: Oxford University Press.
Hawkins J A (1978). Definiteness and indefiniteness. Atlantic Highlands, NJ: Humanities Press.
Hawkins J A (1991). ‘On (in)definite articles: implicatures and (un)grammaticality prediction.’ Journal of Linguistics 27, 405–442.
Heim I (1982). The semantics of definite and indefinite noun phrases. Ph.D. diss., University of Massachusetts.
Heim I (1983). ‘File change semantics and the familiarity theory of definiteness.’ In Bäuerle R, Schwarze C & von Stechow A (eds.) Meaning, use and the interpretation of language. Berlin: Walter de Gruyter. 164–189.
Lewis D (1979). ‘Scorekeeping in a language game.’ Journal of Philosophical Logic 8, 339–359.
Löbner S (1985). ‘Definites.’ Journal of Semantics 4, 279–326.
Lumsden M (1988). Existential sentences: their structure and meaning. London: Croom Helm.
Lyons C (1999). Definiteness. Cambridge: Cambridge University Press.
McCawley J D (1979). ‘Presupposition and discourse structure.’ In Oh C-K & Dinneen D (eds.) Syntax and semantics, vol. 11: Presupposition. New York: Academic Press. 371–388.
Milsark G (1977). ‘Toward an explanation of certain peculiarities of the existential construction in English.’ Linguistic Analysis 3, 1–29.
Neale S (1990). Descriptions. Cambridge, MA: MIT Press.
Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge, MA: MIT Press.
Prince E F (1981). ‘On the inferencing of indefinite this NPs.’ In Joshi A K, Webber B L & Sag I A (eds.) Elements of discourse understanding. Cambridge: Cambridge University Press. 231–250.
Prince E F (1992). ‘The ZPG letter: subjects, definiteness, and information status.’ In Mann W C & Thompson S A (eds.) Discourse description: diverse linguistic analyses of a fund-raising text. Amsterdam and Philadelphia: John Benjamins. 295–326.
Reimer M & Bezuidenhout A (eds.) (2004). Descriptions and beyond. Oxford: Oxford University Press.
Reuland E J & ter Meulen A G B (eds.) (1987). The representation of (in)definiteness. Cambridge, MA: MIT Press.
Roberts C (2003). ‘Uniqueness in definite noun phrases.’ Linguistics and Philosophy 26, 287–350.
Russell B (1905). ‘On denoting.’ Mind 14, 479–493.
Strawson P F (1950). ‘On referring.’ Mind 59, 320–344.
Woisetschlaeger E (1983). ‘On the question of definiteness in “an old man’s book”.’ Linguistic Inquiry 14, 137–154.

Definite and Indefinite Articles
P Juvonen, Stockholm University, Stockholm, Sweden
© 2006 Elsevier Ltd. All rights reserved.

In a narrow sense, articles can be said to be grammatical elements that in many, though by no means most, languages encode as their core meaning the identifiability of a referent in a noun phrase. Hence, by using the English definite article the, the speaker implies that the recipient should be able to identify the intended referent, and by using the indefinite article a/an, the speaker implies that this is not necessarily the case.

In English, a noun phrase cannot simultaneously take the INDEF a and the DEF the. In this respect, the English articles behave as members of one grammatical category. The use of one member excludes the other, i.e., the members of a category are each other’s paradigmatic alternatives in the same syntactic slot. In many languages, however, the two articles do not occupy the same syntactic slot, even though they may still be each other’s alternatives. In Danish, for example, the indefinite article precedes the noun as a free morpheme, whereas the definite article is suffixed to it as a bound morpheme, as shown in examples (1a) and (1b):

(1a) et barn
     INDEF.NEUT child
     ‘a child’
(1b) barn-et
     child-DEF.NEUT
     ‘the child’

As with most grammatical categories, not all languages have articles – in fact, probably the majority of all languages make do without them. In those languages, a simple noun phrase can be interpreted as either definite or indefinite depending on the context. So, e.g., in Finnish:

(2) Anna lukee kirja-a
    Anna reads book-PART
    ‘Anna is reading the/a book’

Languages without articles are, of course, able to express the identifiability of a referent as well as other related senses. To achieve this, a number of strategies, including word order, stress, indefinite and definite pronouns, and morphological case, can be used. Even though both definite and indefinite articles may belong to a common category of articles in some languages, they are treated separately in the following discussions. The reasons for this have already been hinted at: a language having one type of article does not necessarily have the other. Also, the two articles in languages that have both do not always occupy the same syntactic slot.


Definite Articles

Diachronically, the most common source for definite articles in the languages of the world is a distal demonstrative (Heine and Kuteva, 2002; Greenberg, 1978), with which a definite article shares a number of usage contexts. Less commonly, definite articles may develop from possessive suffixes (Schroeder, 2005) or third-person personal pronouns. Of course, these two, in turn, share a number of features with demonstratives. Following Hawkins (1978), Himmelmann (1998: 323) illustrated the historical and pragmatic overlap between demonstratives and definite and specific articles by listing the major contexts of use for what he calls D-elements (Table 1).

In a situational use, the referent of the noun phrase (NP) is present in the situation: Give me the/that book. Demonstratives and definite articles can both be used in this kind of context, regardless of whether the referent has been mentioned earlier. In discourse-deictic uses, the antecedent of the referring NP consists of a longer stretch of discourse, as when, as it were, summarizing a lengthy discussion about the reasons for global warming by the main problem . . . . Both definite articles and demonstrative determiners also appear in anaphoric mentions of a referent already mentioned within the current discourse, i.e., they are used in a tracking function: I met a man yesterday [. . .] and then two hours later I came across the/that guy again. The development of a deictic demonstrative toward more article-like uses is often thought to begin as an increase in frequency in anaphoric contexts. The use of the demonstrative may further spread into contexts in which the identification of the intended referent requires utilization of knowledge of the situation and/or of the world. In recognitional uses, the referent is supposed to be known to the interlocutor by way of specific knowledge. For example, if you gave your mother a new television set for Christmas, you can bring it into a conversation by Is the TV working as it should?

Table 1 Major contexts of use for D-elements. From Himmelmann (1998: 323)

Context                  Demonstrative pronouns   Definite articles   Specific articles
Situational              +                        +                   +
Discourse-deictic        +                        +                   +
Tracking                 +                        +                   +
Recognitional            +                        +                   +
Larger situation         –                        +                   +
Associative-anaphoric    –                        +                   +
Specific-indefinite      –                        –                   +
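The implicational pattern in Table 1 can be read as a small classification procedure. The following toy sketch (ours) simply encodes the table’s rows; the paragraph below explains why the larger situation and associative-anaphoric columns do the diagnostic work:

    # Toy classifier over the context pattern of Table 1 (our illustration).
    CORE = {"situational", "discourse-deictic", "tracking", "recognitional"}

    def classify(contexts):
        """Classify a D-element by the contexts in which it is attested."""
        if "specific-indefinite" in contexts:
            return "specific article"
        if {"larger situation", "associative-anaphoric"} & contexts:
            return "definite article"  # the articlehood test discussed below
        return "demonstrative"

    print(classify(CORE))                         # demonstrative
    print(classify(CORE | {"larger situation"}))  # definite article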

Himmelmann proposes that demonstratives in general cannot be used in the remaining two functions, the larger situation use and the associative-anaphoric use. Hence, these functions could be used as a test case for articlehood. Larger situation use refers to uses in which an entity considered to be uniquely identifiable in a given speech community is referred to for the first time within the current verbal exchange, as when talking about the sun, or the King. Uttered in Sweden, in a nonscientific context, the former will uniquely identify the star closest to Earth and the latter will uniquely identify King Carl XVI Gustaf. The associative-anaphoric use refers to first mentions of new referents within a discourse such that the referent can be identified via another, already present referent, as in I have a bicycle, but the gears are out of order.

As a demonstrative grammaticalizes toward an expected and automatized part of a nominal construction, it usually gets phonetically reduced (the English the < þat), but also, void of its original deictic meaning, it begins to mean ‘identifiable’ in general. By now, the erstwhile demonstrative is often recognized as a full-blown definite article, which can in turn develop into a marker of specificity, and further into a mere gender marker or a general marker for ‘nouniness’ (Greenberg, 1978).

Languages with definite articles vary as to the functions in which these can be used. Most definite articles can be used in referring expressions, especially with singular count nouns, whereas they are less common in nonreferring, e.g., predicative, constructions. In some languages, they have just reached the stage of anaphoric use, for which the term ‘anaphoric article’ is occasionally used. In yet others, they are used not only in associative-anaphoric and larger situation uses, as in English, but also in generic expressions (French, Swedish) and/or with proper names (Portuguese o João, a Céu).

As already mentioned, definite articles also vary cross-linguistically both in form (free or bound morphemes) and in placement (pre- or postposed). In some languages, the definite article cannot co-occur with demonstratives; in yet others, as illustrated by Swedish in example (3), it can:

(3) Den här gaml-a tidning-en kan du slänga
    DEM.NON-NEUT here old-DEF newspaper-DEF.NON-NEUT can you throw.away
    ‘You can throw away this old newspaper’

Cross-linguistic variation is also found as to the number of articles and their morphological properties. So, for example, in several varieties of Danish, Frisian, and German, there are two definite articles, one of which is primarily anaphoric (Schroeder, 2005, and references therein). As previously stated, definite articles may become void of their original deictic meaning in the course of time, but may also be able to mark other grammatical features such as gender and number. In spoken French, such development has led to a situation whereby articles often are the sole carriers of the number distinction, as exemplified in example (4):

(4)              ‘stone’   ‘the stone’   ‘the stones’
written French   pierre    la pierre     les pierres
spoken French    [pjER]    [la pjER]     [le pjER]

Indefinite Articles

In the preceding section, indefinite articles were characterized as being the grammatical elements that encode the identifiability of a referent in a noun phrase as not necessarily identifiable for the interlocutor. In the same way that definite articles can develop into specific markers, indefinite noun phrases are commonly analyzed in terms of their specificity. In this context, specificity concerns whether a specific individual or a specific group of individuals can be pointed out as the referent of a noun phrase by any participant within the current verbal exchange.

In languages worldwide, the numeral ‘one’ is the most common source of indefinite articles (Heine and Kuteva, 2002), but indefinite pronouns may also develop into articles. According to Heine (1997: 72–76; see also Givón, 1981), the grammaticalization cline from the numeral ‘one’ to an indefinite article generally goes through the following stages, briefly characterized within parentheses:

Stage I: The numeral
Stage II: The presentative marker (Introduction of main participants)
Stage III: The specific marker (Singular nouns, specific to speaker)
Stage IV: The nonspecific marker (Singular nouns, nonspecific to speaker)
Stage V: The generalized article (Basically all nouns, including mass and plural nouns)

Most languages do not have an indefinite article (Dryer, 2005a), but almost all languages have numerals. Hence, many languages position themselves at the first stage in the cline. Once again, the decisive increase in frequencies of occurrence between the different stages may lead to the grammaticalization of an article, with the accompanying reduction in form (English a/an < one) and the weakening of the original numeral meaning of the element.

The cross-linguistic variation found in the use of the indefinite articles goes along the same lines as that for definite articles, i.e., there are examples of articles at different stages of development. Whereas the English indefinite a/an is at stage IV, some languages have weakened the numeral meaning of the article, and it can be used with plural referents. This is the case, for example, in the Ibero-Romance languages, as seen in the following Spanish example:

(5a) un-a mujer
     INDEF-FEM woman
     ‘a woman’
(5b) un-a-s mujer-es
     INDEF-FEM-PL woman-PL
     ‘some women’

As is obvious from example (5), English must use some in this context. Hawkins (1991) has proposed that a phonetically reduced form sm indeed is an indefinite article in English, used with plural and mass nouns (much in the same way, thus, as the French partitive article).

Geographic Distribution

With regard to the absence or presence of grammaticalized articles, languages worldwide can be classified into four major types:

1. Languages without articles.
2. Languages with both DEF and INDEF articles.
3. Languages with only DEF articles.
4. Languages with only INDEF articles.

Languages without articles are found everywhere, but articles occur least frequently in the languages of Asia, South America, and the northern and eastern parts of North America. If a language has articles, it is cross-linguistically more common to have only a definite article than to have both definite and indefinite articles; it is least common to have only an indefinite article (see Dryer, 2006). Languages with both DEF and INDEF articles are particularly common in Western Europe, in Meso-America, in the Pacific, and in a wide coast-to-coast belt across central parts of Africa. No particular tendencies as to geographical distribution can be found for languages of types 3 and 4 (having only one type of article; see Dryer, in press a, b).

Concluding Remarks

As an effect of the gradual process of grammaticalization, the synchronic study of languages reveals great variation in the use of articles both between languages and within a language. An important parameter of this variation is degree of obligatoriness. Articles develop out of optional elements that undergo a dramatic increase in frequency as they gradually become obligatory, and are eventually totally predictable in certain contexts of use. For reasons of comparability, it is important to acknowledge the difference between optional and obligatory elements, or, in other words, between grammaticalizing and grammaticalized constructions.

See also: Definite and Indefinite; Demonstratives; Definite and Indefinite Descriptions; Indefinite Pronouns; Numerals.

Bibliography

Ariel M (1990). Accessing NP antecedents. London: Routledge.
Chafe W L (1976). 'Givenness, contrastiveness, definiteness, subjects, topics, and point of view.' In Li C N (ed.) Subject and topic. New York: Academic Press. 25–55.
Chesterman A (1991). On definiteness: a study with special reference to English and Finnish. Cambridge: Cambridge University Press.
Christophersen P (1939). The articles: a study of their theory and use in English. Copenhagen: Munksgaard.
Dahl Ö (1970). 'Some notes on indefinites.' Language 46, 33–41.
Diessel H (1999). Demonstratives: form, function and grammaticalization. Philadelphia, Amsterdam: John Benjamins.
Donnellan K (1966). 'Reference and definite descriptions.' Philosophical Review 75, 281–304.
Dryer M (2005a). 'Indefinite articles.' In Haspelmath M, Dryer M, Gil D & Comrie B (eds.) World atlas of language structures. In preparation.
Dryer M (2005b). 'Definite articles.' In Haspelmath M, Dryer M, Gil D & Comrie B (eds.) World atlas of language structures. In preparation.
Dryer M (2006). 'Noun phrase structure.' In Shopen T (ed.) Language typology and syntactic description, 2nd edn. Cambridge: Cambridge University Press.
Fox B (ed.) (1996). Studies in anaphora. Amsterdam: John Benjamins.
Fraurud K (1990). 'Definiteness and the processing of noun phrases in natural discourse.' Journal of Semantics 7, 395–433.
Fretheim T & Gundel J K (eds.) (1996). Reference and referent accessibility. Amsterdam: John Benjamins.
Givón T (ed.) (1983). Topic continuity in discourse: a quantitative cross-language study. Amsterdam: John Benjamins.
Greenberg J H (1978). 'How does a language acquire gender markers.' In Greenberg J H, Ferguson C A & Moravcsik E A (eds.) Universals of human language, vol. 3. Stanford, CA: Stanford University Press. 47–82.
Gundel J K, Hedberg N & Zacharski R (1993). 'Cognitive status and the form of referring expressions in discourse.' Language 69, 274–307.
Haspelmath M (1997). Indefinite pronouns. Oxford, New York: Oxford University Press.
Hawkins J A (1978). Definiteness and indefiniteness: a study in reference and grammaticality prediction. London: Croom Helm.
Hawkins J A (1991). 'On (in)definite articles: implicatures and (un)grammaticality prediction.' Journal of Linguistics 27, 405–442.
Heine B & Kuteva T (2002). World lexicon of grammaticalization. Cambridge: Cambridge University Press.
Himmelmann N P (1997). Deiktikon, Artikel, Nominalphrase: zur Emergenz syntaktischer Struktur (Linguistische Arbeiten, vol. 362). Tübingen: Niemeyer.
Krámsky J (1972). The article and the concept of definiteness in language. The Hague: Mouton.
Löbner S (1985). 'Definites.' Journal of Semantics 4, 279–326.
Lyons C (1999). Definiteness. Cambridge: Cambridge University Press.
Prince E (1981). 'Toward a taxonomy of given-new information.' In Cole P (ed.) Radical pragmatics. New York: Academic Press. 223–256.
Schroeder C (2005). 'Articles and article systems in some areas of Europe.' In Bernini G (ed.) Pragmatic organization of discourse in the languages of Europe (EALT/EUROTYP). Berlin: Mouton de Gruyter. In press.
Strawson P F (1950). 'On referring.' Mind 59, 320–344.

Definite and Indefinite Descriptions

G Ostertag, Nassau Community College, Garden City, NY, USA

© 2006 Elsevier Ltd. All rights reserved.

Definite descriptions in English take one of two forms: as the definite article the concatenated with a nominal (e.g., table, husband, game) or as either a possessive adjective (her, my) or noun phrase (everyone's, John's) concatenated with a nominal. Thus, the table, her husband, everyone's favorite game, my cat, and John's bicycle are all definite descriptions. In contrast, indefinite descriptions take a single form: as the indefinite article a (or an) concatenated with a nominal. Examples of indefinite descriptions are a table, an employee, a thing I haven't mentioned, a friend of Mary's. Although this classification is not perfect – the friend of an acquaintance, although intuitively indefinite, comes out as definite – it conforms to usage standard among philosophers, logicians, and linguists.


According to Bertrand Russell, descriptions – both definite and indefinite – are devices of quantification. That is, both the F is G and an F is G can be interpreted as expressing a relation between the properties F and G. Since Russell’s treatment is by far the most influential approach to descriptions in the philosophical literature, this entry will focus on his views. It begins by briefly reviewing the motivations behind Russell’s mature view on descriptions, which stem in part from inadequacies of his earlier approach, and proceeds to a statement of Russell’s mature view. Challenges to this view are then considered, as are alternative proposals.

Russell's Theories of Description

Russell's Early Theory of Denoting

Intuitively, an utterance of the singular sentence I met Tony Blair expresses a proposition that, among other things, is about Tony Blair. In virtue of what does this relation of ‘aboutness’ hold? For Russell, circa The principles of mathematics (1903), the proposition that I met Tony Blair is about Mr Blair by virtue of containing him as a constituent. This in turn suggests an answer to a related question, namely, What is the contribution of the expression ‘Tony Blair’ to the aforementioned proposition? Russell identifies the contribution ‘Tony Blair’ makes to this proposition with the constituent that enables it to be about Tony Blair – namely, the individual Tony Blair himself (see Direct Reference). How are we to understand the parallel contribution the syntactically complex denoting phrase a man makes to the proposition I met a man? Russell’s answer was that it contributes, not an individual, but a certain complex concept – what he called a ‘denoting concept.’ Russell conceived of a denoting concept by analogy with what he called a ‘class concept’ (roughly, a property or condition that determines a class of entities). Whereas the nominal man contributes a class concept to the proposition that I met a man, the complex phrase a man contributes a denoting concept. However, as Russell noticed, denoting concepts possess a puzzling combination of features. For one, the relation between the denoting concept and its denotation is, as he later put it, ‘logical,’ and ‘‘not merely linguistic through the phrase’’ (Russell, 1905: 41). That is to say, the denoting concept denotes what it does because of something intrinsic to the denoting concept itself, not because of any facts attaching to the denoting phrase that expresses it. Second, a denoting concept is an ‘aboutness-shifter’ (Makin, 2000: 18). Although the denoting concept associated with a man is a constituent of the proposition that I met a man, the denoting concept is not what this latter proposition is about.

Third, denoting concepts fail to conform to a principle of compositionality, according to which the meaning of a complex expression is a function of its structure and the meanings of its constituents. Russell was keenly aware of this deficiency. He remarked that all men and all numbers seem to be analyzable into a concept associated with the determiner all and the respective class concepts men and numbers, continuing:

But it is very difficult to isolate any further element of all-ness which both share, unless we take as this element the mere fact that both are concepts of classes. It would seem then, that "all u's" is not validly analyzable into all and u, and that language, in this case as in some others, is a misleading guide. The same remark will apply to every, any, some, a, and the. (Russell, 1903: 72–73)

The inability of the theory of denoting concepts to reflect the compositional nature of denoting phrases is a serious defect of the approach. Not only does surface grammar overwhelmingly suggest that all men and all numbers possess a common semantic feature; in addition, speakers familiar with the nominal curator and with the determiner all will understand sentences containing all curators in subject position (assuming familiarity with the other expressions), even supposing they have never come across that particular combination. An acceptable theory of denoting phrases cannot leave this phenomenon unexplained.

Russell's Mature Theory

Russell (1905) developed an approach to denoting phrases that avoided each of the difficulties noted above. His revision makes use of the doctrine of contextual definition, or 'meaning in use,' dispensing with the idea that denoting phrases can be assigned meanings 'in isolation.' Rather, each of the aforementioned denoting phrases is defined within its sentential context. While the treatment of indefinite descriptions, if not wholly uncontroversial, is straightforward (an F is G is defined as something is both F and G), the treatment accorded definite descriptions is rather less intuitive: the F is G is defined as something is both uniquely F and G. This is equivalent to the conjunction of three claims: something is F; at most one thing is F; every F is G. It can be seen that, in this analysis, the F is G will be false if either nothing is F, more than one thing is F, or some F is not G.
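The three-clause analysis invites a simple model-theoretic illustration. The following Python sketch checks Russell's truth conditions for the F is G over a finite domain; the toy domain and predicate names are invented for the example, not drawn from Russell:

```python
# A minimal sketch of Russell's analysis of 'the F is G' over a finite
# domain; the model below is invented for illustration.

def the_F_is_G(domain, F, G):
    """True iff something is F, at most one thing is F, and every F is G."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])

domain = {"carl_gustaf", "tony_blair"}
king_of_sweden = lambda x: x == "carl_gustaf"
wise = lambda x: x == "carl_gustaf"
king_of_france = lambda x: False  # an empty predicate

print(the_F_is_G(domain, king_of_sweden, wise))  # True
print(the_F_is_G(domain, king_of_france, wise))  # False: nothing is F
```

Note that the check len(Fs) == 1 compresses the first two clauses (existence and uniqueness) into one test; a sentence with an empty or multiply satisfied nominal simply comes out false, as on Russell's account.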

Definite Descriptions in Principia mathematica

Russell's favored expression of the theory of descriptions is in the formal language of Principia mathematica, where the theory is rendered as follows:

(R1) G(ιx)Fx =def ∃x(∀y(Fy ≡ y = x) & Gx)


The definiendum is the formal analogue of the F is G, with the iota phrase corresponding to the definite article; the definiens is the formal analogue of the sentence something is both uniquely F and G. As the definition shows, the surface grammar of G(ιx)Fx is misleading with respect to its logical form. While (ιx)Fx takes singular term position, its logical role is not that of a singular term (that is to say, unlike a singular term, it does not even purport to refer). Indeed, given the logical law t = t, taking descriptions to be singular terms allows the derivation of (ιx)(Fx & ¬Fx) = (ιx)(Fx & ¬Fx); this in turn allows the further derivation of the absurdity ∃x x = (ιx)(Fx & ¬Fx). (Note that logical systems permitting empty singular terms do not license the second inference; see Lambert, 2003.) Russell's theory, as encapsulated in R1, avoided the difficulties that plagued the doctrine of denoting concepts: their mysterious ability to determine their denotations logically; their disruption of the aboutness-as-constituency doctrine; and their failure to conform to a principle of compositionality. (It also enabled Russell to explain how a sentence containing a nondenoting description can nonetheless be meaningful.) Since the denoting phrase (ιx)Fx disappears under analysis, and since the analysans introduces no new denoting phrases, the first two difficulties no longer arise. In addition, R1 shows how every context in which (ιx)Fx occurs can be replaced with a context that is fully compositional (although the fact that R1 puts logical form at variance with surface grammar has led some to question its usefulness in a compositional semantics for English).

Descriptions and Scope

R1 shows how to eliminate sentences containing (ιx)Fx and replace them with sentences containing only the familiar logical vocabulary of variables, predicate constants, connectives, and quantifiers. But the definition fails to provide a unique replacement strategy for sentences such as G(ιx)Fx ⊃ p. Both of the following are consistent with our definition:

(1a) ∃x(∀y(Fy ≡ y = x) & (Gx ⊃ p))
(2a) ∃x(∀y(Fy ≡ y = x) & Gx) ⊃ p

The former corresponds to the reading in which the scope of (ιx)Fx is G(ιx)Fx ⊃ p, and the latter corresponds to the reading in which the scope of (ιx)Fx is G(ιx)Fx (see Scope and Binding). Russell's contextual definition of (ιx)Fx in Principia mathematica employs an awkward but effective device to eliminate such structural ambiguities:

(R2) [(ιx)Fx] C(ιx)Fx =df ∃x(∀y(Fy ≡ y = x) & Cx)

The scope of (ιx)Fx is determined by the placement of [(ιx)Fx]. The general rule is that the scope of an occurrence of (ιx)Fx is the entire context to which the scope operator [(ιx)Fx] is prefixed. Using Russell's notation, the readings corresponding to (1a) and (2a) are represented as (1b) and (2b):

(1b) [(ιx)Fx] (G(ιx)Fx ⊃ p)
(2b) [(ιx)Fx] G(ιx)Fx ⊃ p
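A small extension of the earlier sketch (again over an invented model) shows how the two scope readings come apart when the embedding context is negation: with an empty F, the reading giving the description wide scope is false, while the reading giving negation wide scope is true.

```python
# Sketch of the scope distinction, with negation as the embedding context.
# The model is invented for illustration.

def russell(domain, F, G):
    """Exactly one F, and it is G."""
    Fs = [x for x in domain if F(x)]
    return len(Fs) == 1 and G(Fs[0])

domain = {"a", "b"}
F = lambda x: False   # nothing is F, as with 'present king of France'
G = lambda x: True

# Wide scope (primary occurrence): [the x: Fx](not Gx)
wide = russell(domain, F, lambda x: not G(x))
# Narrow scope (secondary occurrence): not [the x: Fx](Gx)
narrow = not russell(domain, F, G)

print(wide, narrow)  # False True -- the two readings differ in truth value
```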

Recent developments in syntactic theory provide a more natural method of indicating scope (see May, 1985). What Russell referred to as 'denoting phrases' are, in fact, natural language quantifiers. Since such quantifiers are invariably restricted, the most natural way to represent them is not in first-order logic – the language of Principia mathematica – in which quantifiers are unrestricted, but rather in a language permitting restricted quantification. In such a language, a quantifier is the result of concatenating a determiner, subscripted with a variable, with a formula (for example, [some x: x is human]). As with unrestricted quantification, the resulting expression can itself combine with a formula to create a sentence. This allows us to express the F is G as [the x: Fx] (Gx). Instead of the serviceable but unwieldy (1b) and (2b), we get (1c) and (2c), which are far more natural renderings of the respective contexts:

(1c) [the x: Fx] (Gx ⊃ p)
(2c) [the x: Fx] (Gx) ⊃ p

Note that in this interpretation, descriptions, like quantified noun phrases generally, can be assigned meanings in isolation. Just as the quantifier all kings denotes the set of properties possessed by all kings [λP ∀x(King(x) ⊃ Px)] and some kings denotes the set of properties possessed by some kings [λP ∃x(King(x) & Px)], the definite description the king denotes the set of properties possessed by something which is uniquely king [λP ∃x(∀y(King(y) ≡ y = x) & Px)]. In this approach, the king is bald is true just in case being bald is among the properties the king possesses. (For details see Westerståhl, 1986.) This machinery pays off handsomely, disambiguating contexts involving the interaction of descriptions with modal operators and with verbs of propositional attitude (see Modal Logic; Propositional Attitude Ascription). In addition, it clarifies the relation between surface grammar and logical form, something that R1 leaves obscure.
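The generalized-quantifier treatment also lends itself to a short sketch: each determiner maps a restrictor predicate to a function that tests whether the scope predicate holds of the relevant individuals. The determiner functions and toy model below are illustrative assumptions, not a fragment of any published semantics.

```python
# Sketch of determiner denotations in the generalized-quantifier style;
# the model is invented for illustration.

def all_(restrictor):
    return lambda scope, domain: all(scope(x) for x in domain if restrictor(x))

def some(restrictor):
    return lambda scope, domain: any(scope(x) for x in domain if restrictor(x))

def the(restrictor):
    def gq(scope, domain):
        Fs = [x for x in domain if restrictor(x)]
        return len(Fs) == 1 and scope(Fs[0])
    return gq

domain = {"louis", "pierre"}
king = lambda x: x == "louis"
bald = lambda x: x == "louis"

# 'The king is bald' is true just in case being bald is among the
# properties possessed by the unique king.
print(the(king)(bald, domain))  # True
```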

Responses to Russell's Theory of Definite Descriptions

Strawson's Critique of Russell

For almost half a century, Russell's theory maintained the status of orthodoxy. But in an article published in Mind in 1950, the Oxford philosopher P. F. Strawson launched an influential attack. The attack focused on Russell's alleged disregard for ordinary usage, a disregard manifested in Russell's assumption that for every well-formed sentence-type S there exists a unique proposition expressed by S. In fact, Strawson noted, the same sentence can be used on one occasion to assert one proposition, on a different occasion to assert another proposition, and on a third to assert nothing at all. In order to maintain his view, Russell is forced to provide what amounts to a logical guarantee that in even the least propitious situation a sentence would, however implausibly, possess a definite truth-value. To ensure that the F is G expresses a proposition come what may, Russell interpreted it as a complex existential claim, rejecting the intuitive classification of the F as a singular term. Strawson claimed that this analysis is contradicted by common usage. If Russell were correct, then the proper response to The present king of France is bald. Is this true? would be a firm no. In fact, the proper response would address the belief betrayed by the utterance, that is, that France is at present a monarchy.

Strawson claimed that to understand the linguistic meaning of an abstract sentence-type S requires mastery of the 'rules of use' associated with S. In the case of the present King of France is wise, the rules require that it be used only when France is a monarchy. To use this sentence seriously and literally is, among other things, to present oneself as believing that the relevant conditions are fulfilled. If, in fact, the belief is false, then the utterance fails to express a proposition – it is something about which 'the question of truth does not arise.'

Yet Strawson's own theory is open to the following challenges. (1) It is not at all clear that utterances containing vacuous descriptions are devoid of propositional content. A contemporary utterance of yesterday Mick Jagger met the King of France is, contra Strawson, intuitively false. (2) The proposal fails to apply in any obvious way to relational descriptions bound by a higher quantifier. The description each girl's mother in (3a) is properly unpacked as the mother of x, with the variable bound by the quantifier each girl. This is made explicit in (3b):

(3a) Each girl's mother attended.
(3b) ∀x(Gx ⊃ A(ιy)Myx)

How such descriptions can be said to refer is a mystery. While there are formal responses to this worry, they are far from being intuitively satisfying (Evans, 1982: 51–57). Finally, it should be noted that Strawson partly misrepresented Russell's actual view, or at least provided an unnecessarily inflexible interpretation of it. For example, Russell could happily incorporate the distinction between sentence and utterance that Strawson accused him of overlooking. Indeed, Russell himself provided the context-sensitive expression my only son as an example of a description, indicating an awareness that different tokens of the same sentence may express different propositions. This reveals something Strawson failed to recognize: that the distinction has no direct bearing on the question of the logical form of description sentences. Contemporary Russellians, such as Stephen Neale (1990), accept the distinction, seeing in it no fundamental challenge to Russell's theory.

The Ambiguity Thesis

Keith Donnellan (1966) described a phenomenon that neither Russell nor Strawson had considered: that the F can be used to refer to a non-F. Donnellan gave as an example an utterance of Smith's murderer is insane, made in court on observing the bizarre behavior of a man (call him Jones) accused of murdering Smith. Even if the description 'fits' someone else, Donnellan claims, the utterance clearly refers to Jones, and not the actual murderer, whoever that might be. But as Saul Kripke (1977) observed, appeals to usage alone cannot contradict Russell's theory. One can use a sentence to communicate a proposition that departs from its literal meaning, so data about usage are in themselves insufficient to mount a successful challenge to Russell (see Referential versus Attributive). What is needed is a decisive intuition that, in the imagined utterance, the speaker literally states that Jones is insane. Such an intuition would strongly favor the thesis that definite descriptions are ambiguous between a non-Russellian, 'referential' use and a Russellian, 'attributive' use. Yet few theorists are ultimately willing to commit to the view that, in the imagined utterance, the speaker has literally stated that Jones is insane.

A related argument, due initially to Strawson, provides a much stronger challenge. Utterances of description sentences typically exhibit incompleteness. That is, while the nominal is true of a plurality of objects, the speaker intends nonetheless to speak truly. For example, a librarian issues a polite reminder to a patron by uttering the book is overdue. For Russell, this utterance is true just in case there is exactly one book, and any book is overdue. This seems to get things wrong, as the librarian is clearly not intending to say – and is not taken to be saying – something that is, as things are, blatantly false. Though some have argued that, intuitions notwithstanding, the utterance is, strictly speaking, false, few have found this idea appealing (but see Bach, 1988).

Russellians have responded by claiming that the context of utterance can be relied on in such cases to provide the missing completing material. Thus, what the librarian intends to convey is perhaps that the book you borrowed last month is overdue or that the book you just mentioned is overdue, and so on. Of course, the suggestion leads immediately to a worry. Typically, when a speaker utters a sentence containing a contextually incomplete description, there will be a multiplicity of completing properties to choose from. The question, Which completion is the one that both the speaker intended to convey and the hearer took the speaker to have intended? can receive no definite answer. And yet, such utterances are not typically accompanied by any indeterminacy or uncertainty as to what was said. Incomplete descriptions are used almost invariably to refer to a contextually definite object – in our example, to a particular book. In such cases, the speaker succeeds in communicating a proposition containing (or somehow about) the entity in question. Given that such sentences are typically used to communicate just such 'referential' propositions, Howard Wettstein (1981) has suggested that in such contexts, a speaker literally asserts the relevant referential proposition. This hypothesis is, after all, consistent with the fact that the librarian's utterance was perfectly determinate, whereas the Russellian hypothesis, sketched above, is not.

Of course, there remains the case of non-referentially used incomplete descriptions: for example, the murderer is insane, uttered at the scene of a brutal murder and without knowledge of the murderer's identity. Wettstein's suggestion is useless here. After all, such uses cannot be supposed to express referential propositions, since there is no referent. But to suppose, with Wettstein, that context provides completing information in such cases is no more plausible here than in the referential case.

Responses to Russell's Theory of Indefinite Descriptions

Referential Uses of Indefinite Descriptions

As we have seen, Russell defines an F is G in terms of the existential quantification, something is both F and G. As with definite descriptions, indefinite descriptions are often used referentially. Moreover, the relevant data seem open to the same response – that facts about usage cannot, by themselves, allow one to draw conclusions about literal meaning. But, as in the previous cases, matters are not so simple. Consider (4):

(4) There's a man at the door. He's selling linguistics encyclopedias.

If Russell is correct, then the first sentence asserts that at least one man is at the door. But how then are we to understand the second sentence? After all, intuition suggests overwhelmingly that the pronoun refers to the individual introduced by the indefinite 'a man.' Doesn't this force us to conclude that the indefinite is likewise referential? Not necessarily: as Lewis (1979: 243) noted, "What I said was an existential quantification; hence, strictly speaking, it involves no reference to any particular man. Nevertheless it raises the salience of the man that made me say it." And this fact allows a subsequent pronoun to make literal reference to the man. This seems to settle the question in favor of Russell. But an example due to Michael Devitt raises a further difficulty for his theory:

Several of us see a strange man in a red baseball cap lurking about the philosophy office. Later we discover that the Encyclopedia is missing. We suspect that man of stealing it. I go home and report our suspicions to my wife: "A man in a red baseball cap stole the Encyclopedia." Suppose that our suspicions of the man we saw are wrong but, "by chance," another man in a red baseball hat, never spotted by any of us, stole the Encyclopedia. (Devitt, 2004: 286)

In Russell's theory, the utterance comes out as true; Devitt claims that it is false. But Devitt's intuition seems mistaken. If the speaker uses the quoted sentence referentially, then his utterance is successful only if his audience in some way grasps the referential proposition intended. In other words, if the utterance is referential, it cannot be understood unless the audience has cognitive contact of some sort with the referent. Yet it seems clear that the hearer can fully understand the utterance even without any such contact with the speaker's referent. So, it would seem, the case against Russell fails. (See further Ludlow and Neale, 1991.)

An Alternative Nonreferential Account

Recently, theorists have considered a third option: that indefinites lack "quantificational force of their own" (Chierchia, 1995: 11). In this view, an indefinite description is not a quantifier, nor is it a referring expression. Rather, it resembles a free variable in that it can be bound by a quantifier that c-commands it, with the significant difference that, when not c-commanded by a quantifier, it is interpreted as bound by an existential quantifier (on c-command, see Scope and Binding). For example, consider (5a):

(5a) Whenever a man owns a donkey, he beats it.

Here, whenever has universal force, binding any free variables within its scope. This reading is captured in (5b):

(5b) ∀x, y ((man(x) & donkey(y) & owns(x, y)) ⊃ beats(x, y))
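The unselective universal quantification over pairs in (5b) can be checked mechanically over a small finite model; the individuals and relations below are invented for illustration.

```python
# Sketch of the unselective-binding reading (5b): 'whenever' binds the
# man variable and the donkey variable together. Invented toy model.

men = {"pedro", "juan"}
donkeys = {"d1", "d2"}
owns = {("pedro", "d1"), ("juan", "d2")}
beats = {("pedro", "d1"), ("juan", "d2")}

# (5b): for all man-donkey pairs, if owns(x, y) then beats(x, y).
donkey_sentence = all(
    (m, d) in beats
    for m in men
    for d in donkeys
    if (m, d) in owns
)
print(donkey_sentence)  # True: every owning pair is a beating pair
```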


A variant of this example – if a man owns a donkey, he beats it – receives the same analysis since, in the view in question, when an if clause lacks an overt quantifier, an unrealized universal quantifier is assumed to be present. As indicated, when an indefinite is not within the scope of a quantifier, it is interpreted as bound by an existential quantifier:

(6a) Whenever a man owns a donkey, he beats it with a stick.
(6b) ∀x, y ((man(x) & donkey(y) & owns(x, y)) ⊃ ∃z (stick(z) & beat-with(x, y, z)))

The analysis yields some counterintuitive results, however. Consider:

(7a) If Mary finds a copy of Middlemarch at Bookworms, she'll buy it.
(7b) ∀x ((copy-of-Middlemarch(x) & finds-at(Mary, x, Bookworms)) ⊃ buys(Mary, x))

In the suggested analysis, this sentence is true just in case Mary buys every copy of Middlemarch she finds at Bookworms. But, intuitively, it can be true if, finding several copies, Mary, sensibly, buys just one. One recent response to this difficulty is to take indefinites as introducing a choice function – that is, a function from a predicate P to a specific member of P's extension (see Reinhart, 1997). Informally, the choice-function analysis of the previous example is as follows: for some choice function f, Mary buys f(copy-of-Middlemarch(x) & finds-at(Mary, x, Bookworms)). This captures the desired truth conditions, but it seems unintuitive as an account of the literal meaning of the relevant sentence. It might, for example, be objected that speakers assertively uttering (7a) do not take themselves to be referring to, or quantifying over, functions; but this is precisely what the analysis implies. There is, it should be added, a competing Russellian account in the literature, according to which unbound anaphora are concealed definite descriptions. This view has the potential to provide a truth-conditionally adequate approach to the data without the awkward commitments of the current approach. (See Neale, 1990: chapters 5 and 6).
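The divergence between the two readings of (7a) can likewise be made concrete with an invented model in which Mary finds three copies and buys one. The universal-closure reading (7b) comes out false, while a choice-function reading comes out true; note, as a simplifying assumption of the sketch, that over a finite domain the existence of a suitable choice function reduces to an existential check over the restrictor's extension.

```python
# Sketch contrasting the universal-closure reading (7b) with a
# choice-function reading of (7a). Invented toy model.

copies_found = {"copy1", "copy2", "copy3"}  # copies Mary finds at Bookworms
mary_buys = {"copy2"}                       # she sensibly buys just one

# (7b): Mary buys *every* copy she finds -- false here.
universal_reading = all(c in mary_buys for c in copies_found)

# Choice-function reading: some choice function f picks a member of the
# restrictor's extension such that Mary buys f(...); over a finite domain
# this reduces to an existential check over the members.
choice_reading = any(c in mary_buys for c in copies_found)

print(universal_reading, choice_reading)  # False True
```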

See also: Definite and Indefinite; Definite and Indefinite Articles; Demonstratives; Direct Reference; Logical Form; Modal Logic; Pragmatic Determinants of What Is Said; Pragmatics and Semantics; Presupposition; Propositional Attitude Ascription; Quantifiers; Reference: Philosophical Theories; Referential versus Attributive; Scope and Binding.

Bibliography

Bach K (1988). Thought and reference. New York: Oxford University Press.
Chierchia G (1995). Dynamics of meaning. Chicago: University of Chicago Press.
Devitt M (2004). 'The case for referential descriptions.' In Bezuidenhout A & Reimer M (eds.) Descriptions and beyond. Oxford: Oxford University Press. 280–305.
Donnellan K (1966). 'Reference and definite descriptions.' Philosophical Review 75, 281–304. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Evans G (1982). The varieties of reference. Oxford: Oxford University Press.
Graff D (2000). 'Descriptions as predicates.' Philosophical Studies 102, 1–42.
Kripke S (1977). 'Speaker's reference and semantic reference.' In French P A, Uehling T E & Wettstein H (eds.) Contemporary perspectives in the philosophy of language. Minneapolis: University of Minnesota Press. 6–27. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Lambert K (2003). Free logic. Cambridge: Cambridge University Press.
Lewis D (1979). 'Scorekeeping in a language game.' Journal of Philosophical Logic 8, 339–359.
Ludlow P & Neale S (1991). 'Indefinite descriptions: in defense of Russell.' Linguistics and Philosophy 14, 171–202.
Makin G (2000). The metaphysicians of meaning. New York: Routledge.
Mates B (1973). 'Descriptions and reference.' Foundations of Language 10, 409–418.
May R (1985). Logical form: its structure and derivation. Cambridge: The MIT Press.
Neale S (1990). Descriptions. Cambridge: The MIT Press.
Neale S (2004). 'This, that, and the other.' In Bezuidenhout A & Reimer M (eds.) Descriptions and beyond. Oxford: Oxford University Press. 68–182.
Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.
Reinhart T (1997). 'Quantifier scope: how labor is divided between QR and choice functions.' Linguistics and Philosophy 20, 335–397.
Russell B (1903). The principles of mathematics. London: George Allen & Unwin.
Russell B (1905). 'On denoting.' Mind 14, 479–493. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Schiffer S (1994). 'Descriptions, indexicals, and belief reports: some dilemmas (but not the ones you expect).' Mind 104, 107–131. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Strawson P F (1950). 'On referring.' Mind 59, 320–344. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Szabó Z (2000). 'Descriptions and uniqueness.' Philosophical Studies 101, 29–57.
Westerståhl D (1986). 'Quantifiers in formal and natural languages.' In Gabbay D & Guenthner F (eds.) Handbook of philosophical logic: topics in the philosophy of language, vol. 4. Dordrecht: Kluwer. 1–131.
Wettstein H (1981). 'Demonstrative reference and definite descriptions.' Philosophical Studies 40, 241–257. [Reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]
Whitehead A N & Russell B (1927). Principia mathematica (vol. I), 2nd edn. Cambridge: Cambridge University Press. [Relevant portions reprinted in Ostertag G (ed.) (1998). Definite descriptions: a reader. Cambridge: The MIT Press.]

Definition in Lexicology

P Hanks, Brandeis University, Waltham, MA, USA, and Berlin-Brandenburg Academy of Sciences, Berlin, Germany

© 2006 Elsevier Ltd. All rights reserved.

A definition is a statement of the meaning of a word, term, or symbol. This simple statement hides a wealth of philosophical and semantic issues, some of which have been the source of some confusion, in particular the question whether a definition of a word can arise from empirical observation of language in use or whether definitions are necessarily a matter of stipulation.

The term 'definition' is also used in logic to denote an expression in propositional calculus that may be substituted for another expression (the definiendum) without affecting the truth value. Such definitions are transformation rules of a particular kind. In setting out the axiomatic basis (including definitions) of a propositional calculus, logicians are normally careful to avoid all reference to interpretation. The purpose is to identify a basic set of well-formed formulas, not to engage in interpretation. In setting out the definitions in a dictionary, lexicographers have the opposite goal. Their purpose is to offer an interpretation (or a menu of possible interpretations) of a word.

Traditionally, definitions are applied to classes of entities (physical objects or abstract concepts, as the case may be). A traditional definition consists of a genus term and any of a number of differentiae. The genus term answers the question, 'What sort of thing is it?' The differentiae distinguish it from members of related sets. Thus, the genus term in a definition of canary is bird (or, in a more fine-grained hierarchy, an intermediate term, finch, of which the genus term is bird), while the differentiae typically include mention of its size (small), its color (yellow), and its song (melodious). This tradition goes back to the Neo-Platonists, in particular Porphyry (c. 232–303 A.D.), who wrote an introduction in Greek (Eisagōgē) to Aristotelian logic. Porphyry's Eisagōgē was translated into Latin by Boethius, and it is through Boethius that the tradition of defining by genus and differentiae has been transmitted to us. The five concepts of Aristotle summarized in the Eisagōgē are actually somewhat more elaborate than genus and differentia. They are:

Genus: e.g. a particular person (say, Socrates) is a member of the genus animal.
Species: e.g. Socrates is a member of the human species.
Differentia (distinguishing characteristics): e.g. humans are distinguished from other animals by rationality.
Properties: e.g. laughter is a property of humans. All and only humans laugh.
Accidents (particular characteristics): e.g. Paul is pale; Anna is dark.
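The genus-plus-differentiae pattern is naturally modeled as a simple data structure. The sketch below is a hypothetical representation, not drawn from any actual dictionary's data model; it encodes the canary example from the text.

```python
# Sketch of a genus-plus-differentiae definition as a data structure;
# the canary entry follows the example in the text.

from dataclasses import dataclass, field

@dataclass
class Definition:
    headword: str
    genus: str                # answers 'What sort of thing is it?'
    differentiae: list[str] = field(default_factory=list)

canary = Definition(
    headword="canary",
    genus="finch",            # intermediate term whose own genus is 'bird'
    differentiae=["small", "yellow", "melodious in song"],
)

print(canary)
```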

The meaning statements found under each headword in a standard dictionary are commonly referred to as 'definitions' and are often worded according to genus and differentiae. However, in languages with an established literary tradition, lexicographers almost universally assert or imply that their definitions are based on usage, denying any prescriptive or stipulative intention. Thus, they make no claim to be writing stipulative definitions. The desire to avoid ill-founded pontification is laudable. However, unfortunately, for the reasons explained below, the meaning of a word in a natural language cannot be satisfactorily determined by empirical observation of language in use. The large machine-readable natural-language corpora that have become available for all major languages during the 1990s and 2000s provide abundant evidence for words in use, and they show a profusion of different uses of words with different shades of meaning in different contexts that defy unification into a single definition. To a native speaker, the meaning of each use is generally clear from the context. However, an enumeration of a different sense for each context would be quite impractical, firstly because it would lead to infinitely long dictionary entries (insofar as there is an infinite number of possible contexts for each word), secondly because it would miss the very generalizations that a dictionary is supposed to offer, and thirdly because the contexts themselves cannot be finitely enumerated: when one has found all the uses of a particular word in all the corpora that exist, one has not found all possible contexts.

Consider a simple example. What is the definition of the transitive verb file? It may seem obvious that when a person files a document, he or she places it on record in some way. However, an attorney filing a lawsuit is doing something very different from a clerk filing papers in a filing cabinet. An attorney who files a lawsuit by putting the papers in a filing cabinet is not doing what is required, namely presenting the papers to a court of law in order to activate a procedure. In the case of the filing clerk, the papers are merely placed in an appropriate place for possible subsequent retrieval, while in the case of the attorney, a procedure is activated. If these were the only two contexts for this verb, the problem could easily be solved by listing two senses. But they are not. Other normal contexts under the general heading of 'PERSON files DOCUMENT,' each with a slightly different set of implicatures generated by the context, include judges filing opinions, pilots filing flight plans, journalists filing news reports and feature articles, and many other procedures in different domains. When someone files a complaint of some kind, is this the same sense of file as the legal sense, or a different one?

Worse is to follow: after all the normal uses of a word have been enumerated, the definer must also allow for the possibility of abnormal but perfectly sincere and meaningful uses of the word. These range from the metaphorical and facetious ('Do you always file your toothpaste in the fridge beside the milk?') to the serious but unusual ('He filed the seed packets alongside the gardening books'). The word definition implies setting limits, but actual usage seems to be unlimited, or rather the boundary between possible and impossible use of a word is a fuzzy grey area, not a clear-cut dividing line. The notion of setting limits explicitly denies an essential feature of common words, especially verbs, in ordinary language use, namely their availability for use in new and unusual contexts.

There is no problem with definitions of logical symbols or terms of art, as these can be unashamedly stipulative. For example, in the Système International of measuring units (SI units), a second, the basic unit of time, is treated as a term of art, defined as:

the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.

This definition was agreed in 1967 by the CIPM (Comité International des Poids et Mesures), which in 1997 added a rider to the effect that it "refers to a cesium atom in its ground state at a temperature of 0 K." It undoubtedly defines, avoiding any possible ambiguity in the concept, but for ordinary language users it fails to explain. It is, of course, not aimed at ordinary language users. It would be of no use to a manufacturer of egg timers or even ordinary clocks. It is aimed at physicists and engineers who require absolute precision and the absence of ambiguity, rather than a model for practical applications such as clock making. It is a stipulative definition of a concept ('let a second be . . .'), not a definition of a term in any natural language. However, it should also be noted that this very precise definition of the concept 'second' depends not only on great sophistication in particle physics, but also upon the preexistence of a natural language in which the sophisticated concept can be expressed. The miracle is that physicists, logicians, and other high-level theoretical thinkers can, as it were, bootstrap their way to absolute precision using the ordinary words of natural languages, assisted by numbers (which are precise), even though the words themselves are fundamentally imprecise. Some linguists now believe that this imprecision is not a fault but a design feature of natural language, enabling speakers to use existing words to say new things and to use language rapidly. Natural languages are designed to enable speakers to trade precision for flexibility and speed – with unfortunate consequences for the notion of definition. A distinction must be made, therefore, between explanatory definitions and stipulative definitions.

The English word define and its European cognates are derived from Latin definire 'to set a boundary or limit to,' based on the root finis 'end, limit, boundary.' According to Lewis and Short's Latin dictionary, the word is "common in Cicero." Among the examples from Cicero cited in that dictionary are the following:

The first two of these are clearly stipulative definitions rather than empirical observations about the way the words virtus and genus were used by the Latin speakers of the day. The third is a proposal for

202 Definition in Lexicology

the stipulative definition of concepts. Nevertheless, it is easy to see how the two could have become confused and the notion of stipulative definition been overextended. Cicero wrote in a tradition that goes back to Aristotle’s doctrine of essences and beyond that to Plato’s doctrines of ‘forms’ or ‘ideas.’ Plato drew an analogy between knowing and perceptual activities such as seeing and smelling. Just as we see colors and smell odors, so we know knowable objects, which according to Plato are unchanging idealizations, not subject to change over time. The ideal triangle is always a triangle. Necessary conditions for being a triangle are being a geometrical figure, having three and only three straight lines, and having three angles. Moreover, these three conditions add up to a sufficient condition for being a triangle. If you stumble across something that has three straight line joined by three angles, it must be a triangle. The aim of this approach to definition is to ‘shrinkwrap’ the conditions onto the concept so that together the necessary and sufficient conditions define all the triangles that there are or ever could be, and nothing except triangles (‘all and only triangles’). Necessary and sufficient conditions work very well as definitional criteria for ideal concepts such as triangles; they are indeed an essential foundation of Western scientific and technical reasoning. However, they do not work so well for the meaning of everyday words. As we have seen, any attempt to establish necessary and sufficient conditions for filing something runs into insuperable difficulties, and the same applies to other ordinary-language terms such as air and animal. How shall air or animal be defined? Because of this element of vagueness in natural language, a parallel vocabulary has built up consisting of terms used in scientific discourse, defined stipulatively, although in many cases they are influenced by the carefully recorded observations of empirical scientists. Thus, it may not be possible to define air with scientific precision, but terms such as oxygen and hydrogen were coined by scientists as terms that precisely correspond to scientific observation, whose definitions can therefore be stipulated. It is easy to point to an example of a typical animal (a cat or a fox, for example), but not so easy to delineate boundaries. Is a bird an animal? Is a fish? For many people, bird, fish, and animal are contrasting terms. Is a spider an animal? What about a sea anemone? What about an amoeba? Animal in modern English is a typical natural-language term, with a well-understood center but a very fuzzy perimeter. By contrast, mammal is a term of art, based on New Latin mammalia, coined in the 18th century as a genus term to denote all and only those animals that breast-feed their young.

Some 18th-century lexicographers made strenuous efforts to implement the substitution principle, a notion that can be traced back to Leibniz's dictum (1704) that two things are the same if the one can be substituted for the other without affecting the truth value (salva veritate). The idea was – and is – that a definition should be substitutable in any context for the word being defined (the definiendum). A discussion of some traditional philosophical issues involved in the notion of definition, from Plato and Aristotle to Ogden and Richards, is in Robinson (1950).

The Longman dictionary of contemporary English (Procter, 1978) defines words in traditional dictionary style, but using a 'restricted defining vocabulary' of only about 2,000 words. The Polish-Australian philosopher of language Anna Wierzbicka (1985, 1987, 1996) has gone much further, proposing definitions in terms of 'universal semantic primitives' or 'atomic units of meaning' – words (or concepts) that are undefinable and that cannot be decomposed and explained in other words. The number of primitives in Wierzbicka's system rose steadily over the years from 14 in the 1970s to 55 in 1996. This part of her conception, though derived from a long tradition in philosophy and indeed endorsed by exponents of artificial intelligence such as Wilks (1977), is controversial. An example is given in Table 1. The meaning is defined in terms of primitives, but the discussion is in ordinary English.

Table 1. Wierzbicka's definition of urge (extracts)

Urge

Meaning
- I say: you should do X.
- I assume that you may not want to do it.
- I don't want to stop saying this because I want to cause you to feel that you have to do it.
- I say this, in this way, because I want to cause you to do it.
- I don't want you to think about it for a long time.

Discussion
- Urging is an attempt to get the addressee to do something.
- Unlike ask and request, it doesn't imply that the speaker is seeking a benefit for himself.
- Unlike order and command it doesn't imply that the speaker has power over the addressee.
- [It is] pressing and forceful.
- The speaker perceives or anticipates unwillingness on the addressee's part.
- There is usually some sense of urgency. The speaker wants the addressee to respond and to respond now. Unlike the case of command, however, it is not necessarily an external action which the speaker wants. Rather, it is a psychological response.

In the late 20th century, some fundamental issues of definition were reconsidered, following the work of J. L. Austin (1975) on performatives. Some dictionaries discontinued the attempt to define all words by a substitutable phrase. Function words such as the definite and indefinite article are defined by their function in discourse (rather than substitutably), while the pragmatic contribution of other words (i.e. sentence adverbs such as unfortunately and broadly) was also recognized as not being suitable for substitutable definition.

One important question concerns whether a word can (or should) be defined out of context at all. In 1979, Hanks proposed the commonsensical notion that a dictionary entry cannot really hope to define the meaning of a word; instead, it can only hope to typify it. This developed into the idea that dictionaries do not really contain definitions of a word's meaning; instead they show its 'meaning potential' (Hanks, 1988). In the Cobuild dictionaries (Sinclair et al., 1987, 1988), the definiendum is not the word but the word in context. Headwords are still listed in alphabetical order in these dictionaries, but the headword is encoded in a typical context before it is explained. Typical Cobuild definitions (from the Cobuild Essential English dictionary, 1988) are:

Gong: A gong is a flat circular piece of metal that you hit with a hammer to make a sound like a loud bell.
Instill: If you instill an idea or feeling into someone, you make them think it or feel it.
Swindle: If someone swindles you, they deceive you in order to get something valuable from you, especially money.
Syrupy (2): You can describe behavior as syrupy when it is sentimental in an irritating way.
Urge: If you urge someone to do something, you try hard to persuade them to do it.

The thinking behind the definition structure of Cobuild dictionaries is elaborated in Hanks (1987) and analyzed in some detail by Barnbrook (2002). Where Wierzbicka defines word meanings in the first person, aiming at explaining the thought processes that lie behind the use of a word, Cobuild's definitions are in the second person, i.e., Cobuild addresses the reader directly.

See also: Componential Analysis; Definitions; Dictionaries and Encyclopedias: Relationship; Dictionaries; Generative Lexicon; Lexicology; Lexicon: Structure; Lexicon/Dictionary: Computational Approaches; Metalanguage versus Object Language; Selectional Restrictions; Semantic Primitives.

Bibliography

Austin J L (1975). How to do things with words: the William James Lectures delivered at Harvard University in 1955. Oxford: Clarendon Press.
Barnbrook G (2002). Defining language: a local grammar of definition sentences. Amsterdam and Philadelphia: Benjamins.
Fetzer J H, Shatz D & Schlesinger G N (eds.) (1991). Definitions and definability: philosophical perspectives. Kluwer.
Hanks P (1979). 'To what extent does a dictionary definition define?' In Hartmann R R K (ed.) Papers from the 1978 B.A.A.L. Seminar on Lexicography. Exeter Linguistic Studies.
Hanks P (1987). 'Definitions and explanations.' In Sinclair J M (ed.) Looking up. London and Glasgow: Collins.
Hanks P (1988). 'Typicality and meaning potentials.' In Snell-Hornby M (ed.) ZüriLEX '86 proceedings. Francke Verlag.
Harras G, Hass U & Strauss G (1991). Wortbedeutung und ihre Darstellung im Wörterbuch. Berlin and New York: de Gruyter.
Leibniz G W von (1704). 'Table de définitions.' In Couturat L (ed.) (1903). Opuscules et fragments inédits de Leibniz. Paris.
Ogden C K & Richards I A (1923). The meaning of meaning. London: Routledge.
Procter P (ed.) (1978). Longman dictionary of contemporary English. Harlow: Longman.
Robinson R (1950). Definition. Oxford: Clarendon Press.
Sager J C (ed.) (2000). Essays on definition. Amsterdam and Philadelphia: Benjamins.
Sinclair J, Fox G et al. (eds.) (1988). Collins Cobuild Essential English dictionary. London and Glasgow: Harper Collins.
Sinclair J, Hanks P et al. (eds.) (1987). Collins Cobuild English language dictionary. London and Glasgow: Harper Collins.
Wierzbicka A (1985). Lexicography and conceptual analysis. Ann Arbor, MI: Karoma.
Wierzbicka A (1987). English speech act verbs. Sydney: Academic Press.
Wierzbicka A (1996). Semantics: primes and universals. Oxford: Oxford University Press.
Wilks Y (1977). 'Good and bad arguments about semantic primitives.' Communication and Cognition 10(3/4).


Definitions

G Longworth, Birkbeck College, University of London, London, England, UK

© 2006 Elsevier Ltd. All rights reserved.

Uses

'Definition' is the activity of explaining to an audience the meaning of an expression. 'A definition' is a product of that activity: a sentence the understanding of parts of which (the part of the sentence providing explanation, the definiens) can underwrite an audience's understanding of the rest (the part of the sentence being defined, the definiendum). For example, understanding 'is the activity of explaining the meaning of an expression' (definiens) might enable one to understand a meaning of 'definition' (definiendum). Notice that 'definition' needn't proceed via 'definitions.' Perhaps the first explanations of meaning to which a child is exposed don't come via sentences.

Besides the immediate purpose of underwriting explanations of meaning, a definition can serve countless others. One may stipulate that an expression is to enjoy a meaning – deploying a 'stipulative definition.' E.g., for purposes of this entry, let x be a definition if x is a sentence used to explain meaning. (Here and throughout, initial universal quantification is suppressed and use/mention distinctions ignored.) Other purposes of stipulation include abbreviation – hence, 'abbreviative definition' – itself at the service of tractability and comprehensibility – and marking out the definiens as of special interest, perhaps as carving at an important joint (Whitehead and Russell, 1910).

An alternative purpose is to describe the meaning an expression carries with respect to some language or population – a 'descriptive' or 'lexical definition.' Thus, in English, x is a definition iff x is a sentence used to explain meaning. Less immediate purposes here include illuminating a less well understood definiendum by appeal to a better understood definiens; revealing the basis of one's understanding of the definiendum; or establishing dependence of the definiendum on the definiens. But the basic purpose of descriptive definition – explaining the meaning of the definiendum – is independent of the viability of these other purposes. This is good, since it would be surprising if many expressions in use were redundant.

A third purpose is 'explication' or 'explicative definition.' Here one stipulates with the aim of approximating to the meaning of an ordinary expression. The aim is to balance two requirements: first, the new expression should be fit to do duty for the old, at least for some purposes; and second, the new expression should improve upon the ordinary along some valued dimension, perhaps clarity or consistency (Carnap, 1928, 1947). Explication is risky, as it is in general impossible to specify in advance the range of important duties an expression must perform. The definitions recently presented are sufficiently vague and ambiguous to meet the first requirement, if not the second.

Whatever one's purposes, the capacity of a definition to serve them is relative to the context (circumstances) in which the definition is offered. In particular, it is relative to the needs of one's audience and to their capacities and informational situation. The role of audience capacities and collateral information is difficult to articulate in detail, but can be illustrated. Someone who lacks the capacities needed to understand 'explain' will not gain understanding of 'definition' from the definition offered above. Moreover, it's plausible that, since true synonymy (sameness of meaning) is rare, most dictionary definitions rely heavily on audiences' knowledge and abilities, often supplying little more than hints from which the intellectually privileged are able to derive understanding.

Mention of contextual features is often suppressed, especially in logic. Suppression is motivated by aims, such as balancing maximal generality against formal tractability. Aiming for generality induces logicians to articulate assumptions and rely only on capacities widely possessed amongst thinkers. Seeking tractability induces restrictions on the range of uses of an expression that a definition is required to explain. Logicians typically require definitions to convey meaning to audiences competent with the logical apparatus and language of their logical theory, without relying on special capacities or features of the circumstances in which the definition is offered. It doesn't follow that definitions offered by logicians are more than comparatively context-free. Neither does it follow that explanations of meaning outside of logic are required to attain a similar level of context-freedom.

Varieties

Having seen something of the assortment of uses of definitions, we can consider some varieties of definitions. There are as many forms of definition as there are ways of using sentences to enable someone to discover the meaning of an expression. Those listed below are included for their exhibition of variety, and also because they illustrate the context sensitivity of definition. This is addressed more explicitly in the section ‘Uses Again.’

Comparatively Context-Free Forms of Definition

Explicit definition involves assuming an audience to understand the definiens in advance, and presenting the definiendum as something that can replace the definiens for current purposes. For example,

(1) A brother is a male sibling

Here, an audience is informed that brother can be used wherever male sibling is used. An explicit definition accomplishes this by associating with the definiendum an expression that can serve in function-preserving replacements for that expression, perhaps through synonymy, or some weaker equivalence. An interesting form of quasi-explicit definition is ‘recursive’ or ‘inductive definition.’ (Quasi-explicit since it fails to provide an independent expression that can be used wherever the expression to be defined can be used.) A recursive process is one that takes its own outputs as inputs, generating new outputs that can serve as inputs, and so forth. Use of recursive definitions enables us to characterize the meaning – e.g., extension – of expressions when that meaning can, or can only, be generated by a recursive process. For example: (2) x is a direct ancestor of y iff x is a parent of y or x is a parent of a direct ancestor of y

Here, the definiendum appears in the definiens so that the extension of the definiens cannot be determined in advance of partial determination of the extension of the definiendum. This in turn cannot be determined in advance of partial determination of the extension of the definiens. This is apt to seem viciously circular, but it isn’t. Vicious circularity is avoided because the basis clause, x is a parent of y, affords a means to start the recursive process independently of grasp of the meaning of x is a direct ancestor of y. The ‘inductive step’ – or x is a parent of a direct ancestor of y – can then make use of the output of the basis clause to generate a new input, and so forth.
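Because (2) defines the ancestor relation partly in terms of itself, its workings are easy to demonstrate mechanically. The following sketch is purely illustrative (the sample family tree is invented); it mirrors the two clauses directly: the basis clause grounds the process, and the inductive step feeds the function's own output back in as input.

```python
# A minimal computational rendering of the recursive definition in (2).
# The relation and the sample family tree are invented for illustration.
parent_of = {
    ("Anne", "Ben"),    # Anne is a parent of Ben
    ("Ben", "Carla"),   # Ben is a parent of Carla
    ("Carla", "Dan"),   # Carla is a parent of Dan
}
people = {c for (_, c) in parent_of}

def is_direct_ancestor(x: str, y: str) -> bool:
    # Basis clause: x is a parent of y.
    if (x, y) in parent_of:
        return True
    # Inductive step: x is a parent of some direct ancestor of y.
    # With acyclic data the recursion always bottoms out in the basis clause,
    # which is exactly why the definition is not viciously circular.
    return any((x, z) in parent_of and is_direct_ancestor(z, y)
               for z in people)

print(is_direct_ancestor("Anne", "Dan"))  # True, via Ben and Carla
print(is_direct_ancestor("Dan", "Anne"))  # False
```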

Explicit definition is unhelpful when the framework in which it is given deploys the expression to be defined, or when an audience lacks other expressions able to do the same work. In such cases, one might deploy ‘implicit (or contextual) definition.’ An implicit definition explains an expression’s meaning through appeal to other elements in the definition. But unlike an explicit definition, the other elements in an implicit definition need not be equivalent to the definiendum. Implicit definition involves stipulating the truth of sentences involving the definiendum in a way that fixes its meaning as the unique meaning able to sustain the truth of the sentences so stipulated. One example was Bertrand Russell’s account of definite descriptions, sentences of the form The F is G like The King of France is bald. Rather than presenting an explicit definition of The or The F, Russell explicitly defined whole sentences in which they occur, and thereby implicitly defined them, via (3):

(3) The F is G iff (∃x)(Fx & (∀y)(Fy → x = y) & Gx)

The right-hand side reads: there is exactly one F and every F is G (Russell, 1905).
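The truth conditions in (3) can be checked mechanically over a finite domain. The sketch below is illustrative only (the domain and the two predicates are invented), but it implements exactly the clause structure of (3): existence, uniqueness, and predication.

```python
# Truth conditions of "The F is G" per (3), evaluated over a finite domain.
# The domain and the predicates below are hypothetical examples.
def the_F_is_G(domain, F, G) -> bool:
    fs = [x for x in domain if F(x)]    # everything in the domain satisfying F
    return len(fs) == 1 and G(fs[0])    # exactly one F, and that F is G

domain = ["Yul", "Elvis"]
king_of_france = lambda x: False        # nobody is King of France
bald = lambda x: x == "Yul"

# "The King of France is bald" comes out false because the
# existence-and-uniqueness clause fails, just as Russell's analysis predicts.
print(the_F_is_G(domain, king_of_france, bald))           # False
print(the_F_is_G(domain, lambda x: x == "Yul", bald))     # True: unique F, and it is G
```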

Another example involves the use of a definite description to explain a proper name’s reference. The reference of Jack the Ripper might be explained using the following sentence:

(4) Jack the Ripper is the perpetrator of the Whitechapel murders

Although (4) can be used to explain the meaning of Jack the Ripper, it does so without identifying it with the meaning of the descriptive phrase the perpetrator of the Whitechapel murders. Proper names – unlike descriptive phrases – are ‘rigid designators.’ They refer, crudely, to the same object in every possible world. So, while the perpetrator of the Whitechapel murders denotes different individuals in different possible worlds – depending on who in those worlds committed the crimes – Jack the Ripper refers to the same person in each world: whoever committed the crimes in the actual world. It follows that (4) is only ‘contingently’ true (might have been false). But, arguably, since (4) is stipulated, it is knowable a priori (Kripke, 1980; Evans, 1979; Soames, 2003: 397–422). This effect is mediated by the audience’s standing competence with the category of proper names. So this feature of context plays a role in mediating the transition from what is presented in a definition to the understanding conveyed.

Comparatively Context-Dependent Definitions

More obviously context-dependent forms of definition involve appeal to examples, and so to the classificatory abilities of one’s audience. Ordinary explanations of meaning often employ ‘ostension’ – crudely, pointing. Thus, one may point to Hilary Putnam and utter (5):

(5) That is Hilary Putnam

thereby explaining the meaning of Hilary Putnam. This is an ‘ostensive definition.’ An ‘enumerative definition’ serves to explain the meaning of an expression by listing at least some elements in the expression’s extension. So, for example, (6):

(6) A Beatle is Ringo or John or Paul or George
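Because an enumerative definition fixes meaning by listing (part of) an extension, it has a direct computational analogue: membership in a finite set. The following sketch is purely illustrative.

```python
# An enumerative definition fixes a predicate by exhausting its extension.
# (6) "A Beatle is Ringo or John or Paul or George" becomes:
BEATLES = {"Ringo", "John", "Paul", "George"}

def is_beatle(x: str) -> bool:
    return x in BEATLES

print(is_beatle("Ringo"), is_beatle("Elvis"))   # True False

# Note: a partial list with an 'and so forth' clause (see (8) below)
# cannot be modeled this way, since the similarity judgment that
# continues the list is delegated to the audience.
```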

Ostension can be used to facilitate enumeration: (7) This (pointing to Ringo) and that (pointing to Paul) are living Beatles

Often, enumerative definitions give only partial lists and include an ‘and so forth’ clause: (8) A philosopher is Hilary Putnam, or W. V. Quine, or Rudolf Carnap, or anything relevantly like those

In (8), since there are indefinitely many ways of continuing the list, we rely on our audience’s capacities, in particular the continuations they find salient. In order to reduce reliance, we can give additional information concerning the similarities we expect our audience to track: (9) A philosopher is Hilary Putnam, or W. V. Quine, or Rudolf Carnap, or other things similar to those with respect to academic expertise

An important range of cases involves ostensive enumeration and direction. So, for example, (10) A sample is water iff it is the same liquid as that (pointing to a sample)

According to (10), whether a novel sample counts as water depends on the general requirements on sameness of liquid and on the nature of the original sample. As Hilary Putnam argued, both ‘‘ . . . may take an indeterminate amount of scientific investigation to determine’’ (Putnam, 1975c: 225). Arguably, something close is true of definitions of many ordinary expressions, especially those that employ examples. Development of definitions that are less reliant on context for their functioning than ordinary definitions may require detailed investigation of elements in the circumstances of definition.

Uses Again

The utility of definition depends on how widely it is applicable. There are grounds for pessimism. One negative argument is that, in order for a definition to secure uptake, the definiens must be understood. Hence, some basic range of expressions must be understood in advance of any definition, and they will therefore be indefinable. If some expressions can be so understood, it becomes pressing to show that some others cannot.

Another negative line is that the role of context in the explanation of meaning establishes that exposure to definitions is, in general, not necessary or sufficient to secure audience understanding. Exposure is insufficient, not only because of the role of context in enabling an audience to utilize a definition to fix on a meaning, but also because elements in the context can play a role in fixing a meaning incompatible with the explicit dictates of the definition. The role of context makes definitions only defeasible guides to meaning. The use of examples above in explaining the varieties of definition makes possible the development – or defeat – of the proffered general characterizations of that variety. Exposure to definitions is unnecessary for a related reason: just as contextual elements can defeat definitions, so they can enable understanding in the absence of definitions.

From the current perspective, these points do not apply to the activity of definition. Since we acquire knowledge of the meanings of many (if not all) of our expressions on the basis of others’ explanations – understood to include their uses of expressions in contexts – many expressions with which we are competent are thereby definable. What the points suggest is that meaning can fail to supervene on information acquired just through understanding the definiens. (Supervenience of a set of properties Q on a set of properties P requires that no two possible worlds – or portions of a possible world – can differ in the distribution of Q-properties without differing in the distribution of P-properties.) But failure of meaning to supervene on the information carried by definitions is perfectly compatible with that information playing a role in sustaining knowledge of meaning. So the two lines of argument canvassed above indicate, at most, that not every ordinary definition will exhibit the degree of freedom from context shown by definitions in logic. The importance of this (potential) result derives from the extent to which philosophers have aimed to offer definitions of key terms – e.g., knowledge, causation, or truth – in a (comparatively) context-free way. One of the major themes of late 20th century philosophy has been that the aim is inappropriate (Burge, 1993; Putnam, 1975a, 1975b, 1975c; Travis, 1989; Wittgenstein, 1953. Related issues arose from Quine’s critique of the view that definitions have distinctive epistemic status: Quine, 1936, 1951, 1963). See also: Context; Definition in Lexicology; Dictionaries and Encyclopedias: Relationship; Indeterminacy; Meaning Postulates; Metalanguage versus Object Language; Stereotype Semantics; Synonymy; Vagueness; Vagueness: Philosophical Aspects.

Bibliography

Belnap N D (1993). ‘On rigorous definitions.’ Philosophical Studies 72, 115–146.
Burge T (1993). ‘Concepts, definitions, and meaning.’ Metaphilosophy 24, 309–325.
Carnap R (1928). Der logische Aufbau der Welt. Berlin: Weltkreis; 2nd edn., (1961). Berlin: Felix Meiner; trans. George R (1969). The logical structure of the world. Berkeley, CA: University of California Press.
Carnap R (1947). Meaning and necessity: a study in semantics and modal logic. Chicago, IL: University of Chicago Press; 2nd, enlarged edn., (1956).
Coffa J A (1991). The semantic tradition from Kant to Carnap: to the Vienna Station. Cambridge: Cambridge University Press.
Evans G (1979). ‘Reference and contingency.’ The Monist 62. In Evans G (ed.) (1985) Collected papers. Oxford: Clarendon Press. 178–213.

Fetzer J H, Shatz D & Schlesinger G N (eds.) (1991). Definitions and definability: philosophical perspectives. Dordrecht: Kluwer Academic.
Fodor J A (1998). Concepts: where cognitive science went wrong. Oxford: Clarendon Press.
Kripke S (1980). Naming and necessity. Oxford: Blackwell.
Putnam H (ed.) (1975). Mind, language and reality: philosophical papers (vol. II). Cambridge: Cambridge University Press.
Putnam H (1975a). ‘The analytic and the synthetic.’ In Putnam (ed.) 33–69.
Putnam H (1975b). ‘Explanation and reference.’ In Putnam (ed.) 196–214.
Putnam H (1975c). ‘The meaning of ‘‘meaning.’’’ In Putnam (ed.) 215–271.
Quine W V (1936). ‘Truth by convention.’ In Lee O H (ed.) Philosophical essays for A. N. Whitehead. New York: Longmans; repr. in his (1976) The ways of paradox (revised and enlarged edn.). Cambridge, MA: Harvard University Press. 77–106.
Quine W V (1951). ‘Two dogmas of empiricism.’ The Philosophical Review 60, 20–43; repr. in Quine W V (1961) From a logical point of view (2nd edn.). Cambridge, MA: Harvard University Press. 20–47.

Quine W V (1963). ‘Carnap on logical truth.’ In Schilpp P A (ed.) The philosophy of Rudolf Carnap. Lasalle, IL: Open Court; repr. in his The ways of paradox. 107–132.
Robinson R (1950). Definition. Oxford: Oxford University Press.
Russell B (1905). ‘On denoting.’ Mind 14, 479–493; repr. in Marsh R (ed.) (1956) Logic and knowledge. London: Allen & Unwin. 41–56.
Russell B (1927). The analysis of matter. London: Kegan Paul.
Sager J C (2000). Essays on definition. Amsterdam: J. Benjamins.
Soames S (2003). Philosophical analysis in the twentieth century (vol. 2). Princeton, NJ: Princeton University Press.
Suppes P (1957). Introduction to logic. Princeton, NJ: Van Nostrand.
Travis C (1989). The uses of sense. Oxford: Oxford University Press.
Whitehead A N & Russell B (1910). Principia mathematica (vol. 1). Cambridge: Cambridge University Press; 2nd edn., (1925).
Wittgenstein L (1953). Philosophical investigations. New York: Macmillan.

Demonstratives

H Diessel, Friedrich-Schiller-Universität Jena, Jena, Germany

© 2006 Elsevier Ltd. All rights reserved.

The Semantic Properties of Demonstratives

Demonstratives are deictic expressions; examples in English include this and that or here and there. These expressions indicate the relative distance of a referent in the speech situation vis-à-vis the deictic center (cf. Brugmann, 1904; Bühler, 1936; Lyons, 1977; Fillmore, 1997). The deictic center is defined by the speaker’s location at the time of the utterance. For instance, in the following example, the referent of the proximal demonstrative this is closer to the deictic center (i.e., the speaker’s location at the point of the utterance) than is the referent of the distal demonstrative that:

(1) This one (here) is mine, and that one (over there) is yours.

All languages have at least two demonstratives that indicate a deictic contrast (cf. Diessel, 1999a, 2005a), but the use of demonstratives is not generally contrastive. For instance, in example (2), the distal demonstrative that does not indicate a spatial contrast: the

referent of the demonstrative may be an element in close proximity to the speaker’s location or it may be a referent at a great distance:

(2) Can you see that spot (on the back of my hand/on the moon)?

In some languages, certain types of demonstratives do not carry a specific distance feature (cf. Himmelmann, 1997; Diessel, 1999: chap 3). For instance, the German demonstrative das does not indicate the relative distance of the referent to the deictic center. In order to differentiate between a proximal and distal referent, the locational adverbs hier, da, and dort can be added to das, as in example (3), German (Germanic):

(3) Das hier gefällt mir besser als das da(drüber)
    this/that here like me better than this/that over there
    ‘this one I like better than that one over there’

The same strategy is found in many other languages across the world (e.g., French celui-ci ‘this [one] here’ vs. celui-là ‘that [one] there’). In general, all languages may indicate a deictic contrast between a proximal and distal referent, but some languages employ demonstratives that do not indicate a distance contrast unless they are combined with spatial adverbs (cf. Diessel, 1999: chap 3, 2005a). Moreover, many languages employ more than two distance terms. For instance, Hunzib (Daghestanian) distinguishes three deictically marked demonstratives: bed ‘proximal,’ bel ‘medial,’ and eg ‘distal.’ A somewhat different three-term system is found in Japanese, in which the medial term denotes a referent near the hearer: kore ‘near the speaker,’ sore ‘near the hearer,’ and are ‘away from speaker and hearer.’ Languages with more than three distance terms occur, but they are uncommon. Figure 1 shows the percentage of languages that employ two, three, or more distance-marked demonstratives (cf. Diessel, 2005a). As can be seen in this figure, in the vast majority of languages, demonstratives indicate a two- or three-way distance contrast. Systems with more than three distance terms are relatively rare, and all one-term systems have at least one pair of demonstratives that is deictically contrastive.

In addition to pure distance, demonstratives may carry other, more specific deictic features. For instance, they may indicate whether the referent is visible or out of sight, at a higher or lower elevation, uphill or downhill, upriver or downriver, or moving toward or away from the deictic center (cf. Anderson and Keenan, 1985; Diessel, 1999: chap 3). The occurrence of such specific deictic features is largely restricted to adverbial demonstratives (see later).
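Viewed abstractly, the paradigms just surveyed are small lookup tables keyed by deictic anchoring. The sketch below is only an illustration (the tabular representation is my own; the forms are those cited above) of the contrast between distance-oriented and person-oriented systems.

```python
# Deictic paradigms cited above, keyed by their anchoring.
# English: two-term, distance-oriented; Hunzib: three-term,
# distance-oriented; Japanese: three-term, person-oriented.
demonstratives = {
    "English":  {"proximal": "this", "distal": "that"},
    "Hunzib":   {"proximal": "bed", "medial": "bel", "distal": "eg"},
    "Japanese": {"near speaker": "kore", "near hearer": "sore",
                 "away from speaker and hearer": "are"},
}

for language, paradigm in demonstratives.items():
    print(language, "-", len(paradigm), "deictic terms:", paradigm)
```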

The Syntactic Properties of Demonstratives

Demonstratives occur in various syntactic contexts. Most studies distinguish at least between pronominal demonstratives, which substitute for a full noun phrase, and adnominal demonstratives, which accompany a cooccurring noun. In English (examples (4a) and (4b)), pronominal and adnominal demonstratives have the same morphological forms, but in French (examples (5a) and (5b)), they are formally distinguished:

Figure 1 Number of distance contrasts in demonstratives.

(4a) give me that (one)
(4b) that book
(5a) donne-moi celui-là
     give-me that/this-there
     ‘give me that (one)’
(5b) ce livre-là
     that/this book-there
     ‘that book’

In about 70% of the world’s languages, pronominal and adnominal demonstratives have the same form, and in about 30% they are formally distinguished (cf. Diessel, 2005b). If they are formally distinguished, they are commonly assigned to different word classes: pronominal demonstratives are pronouns, whereas adnominal demonstratives are determiners or articles (in traditional grammar, they are often classified as adjectives). However, when pronominal and adnominal demonstratives have the same forms, their categorical status is controversial. Some studies suggest that they are independent pronouns in both positions (cf. Van Valin and LaPolla, 1997) and other studies assume that they function as determiners (cf. Abney, 1987); yet other studies argue that the categorical status of demonstratives is language specific: when adnominal demonstratives are only loosely adjoined to a coreferential noun, they can be seen as independent pronouns in apposition to the noun, but when they are syntactically associated with a particular slot in a fixed noun phrase, they should be seen as determiners or articles (cf. Diessel, 1999: chap 4).

Apart from pronominal and adnominal demonstratives, all languages have locational deictics such as the English here and there, which indicate the relative distance of a location to the deictic center (cf. Himmelmann, 1997; Diessel, 1999a). Since the locational deictics tend to include the same deictic roots as pronominal and adnominal demonstratives, they are often seen as a particular subclass of demonstratives, called adverbial demonstratives (cf. Fillmore, 1982; Himmelmann, 1997; Diessel, 1999a). Across languages, adverbial demonstratives are almost always formally distinguished from demonstratives functioning as pronouns or noun modifiers (cf. Diessel, 1999a: chap 4).

In addition to pronominal, adnominal, and adverbial demonstratives, many languages employ a particular class of demonstratives in copular or nonverbal clauses (cf. Diessel, 1999a, 1999b). For instance, in Ponapean, the demonstratives me(t) ‘near speaker,’ men ‘near hearer,’ and mwo ‘away from speaker and hearer’ are used as independent pronouns, whereas ie(t) ‘near speaker,’ ien ‘near hearer,’ and io ‘away from speaker and hearer’ occur exclusively in nonverbal clauses. Similarly, whereas the pronominal demonstratives in German are inflected for gender, number, and case, the demonstratives in copular clauses are
invariable: the neuter-singular form das is the only form that may occur in copular constructions (see examples (6a) and (6b), German) (cf. Diessel, 1999a, 1999b):

(6a) das / *die sind meine Eltern
     DEM.N.SG / DEM.F.SG are my parents
     ‘these are my parents’
(6b) das / *die ist meine Katze
     DEM.N.SG / DEM.F.SG is my cat.F
     ‘that/this is my cat’

The Pragmatic Functions of Demonstratives

Demonstratives are primarily used to focus the hearer’s attention on elements in the surrounding situation, but they may also refer to elements of the ongoing discourse or elements that are already in the hearer’s knowledge store. Himmelmann (1996) distinguishes four basic pragmatic uses: (1) the exophoric use, (2) the anaphoric use, (3) the discourse-deictic use, and (4) the recognitional use (see also Diessel, 1999a: chap 5). The four uses are exemplified in examples (7)–(10):

(7) Look at that [speaker points to an element in the speech situation].
(8) The Yukon lay a mile wide and hidden under three feet of ice. On top of this ice were as many feet of snow.
(9) Speaker A: So why did you say Peter is a crook? Speaker B: I didn’t say that.
(10) I couldn’t sleep last night. That lady downstairs was playing the piano all night.

Exophoric demonstratives (example (7)) function to orient the hearer in the outside world. In this use, demonstratives establish joint attention on an object, person, or location in the surrounding speech situation. Very often, exophorically used demonstratives are accompanied by a pointing gesture. Anaphoric demonstratives (example (8)) function to keep track of prior discourse referents. They are coreferential with a noun or noun phrase in the previous discourse. In contrast to other tracking devices (e.g., personal pronouns and pronominal affixes on the verb), anaphoric demonstratives are typically used to refocus the hearer’s attention; i.e., very often, they function to shift the focus of attention to a new referent (cf. Comrie, 1998; Diessel, 1999a: chap 5). Discourse-deictic demonstratives (example (9)) refer to propositions. Both anaphoric and discourse-deictic demonstratives are used with text-internal reference, but they serve different functions. Anaphoric demonstratives are used for reference tracking, i.e., they establish links between discourse participants,

whereas discourse-deictic demonstratives function to combine chunks of the ongoing discourse (cf. Diessel, 1999a: chap 5). Recognitional demonstratives (example (10)) signal that the referent is familiar to the interlocutors (cf. Himmelmann, 1997). Speaker and hearer know the referent from their common history, thus recognitional demonstratives often suggest emotional closeness, sympathy, and shared beliefs (cf. Lakoff, 1974).

In the literature, the exophoric use is commonly seen as the basic use from which all other uses are derived (e.g., Brugmann, 1904; Bühler, 1936; Lyons, 1977; but see Himmelmann, 1996 for a different view). The basicness of exophoric demonstratives is reflected in several features (cf. Diessel, 1999a: chap 5): they tend to be structurally unmarked, they appear first in language acquisition, and they play a particular role in the process of grammaticalization: The grammaticalization of demonstratives can be seen as a cline ranging from exophoric demonstratives serving language-external functions (exophoric demonstratives focus the hearer’s attention on elements in the outside world) to grammatical markers serving highly routinized language-internal functions. Anaphoric, discourse-deictic, and recognitional demonstratives are somewhere in between the two ends of the cline, referring to entities within the universe of discourse or in the hearer’s knowledge store (cf. Diessel, 1999a: chap 5).

The Grammaticalization of Demonstratives

Crosslinguistically, demonstratives provide a common historical source for a wide variety of grammatical markers. The grammaticalization pathways are crucially determined by the syntactic functions of demonstratives (cf. Diessel, 1999a: chap 6, 1999b).

Grammatical Markers Derived from Pronominal Demonstratives

Pronominal demonstratives provide a frequent source for third-person pronouns. The development originates from anaphoric demonstratives keeping track of discourse referents that are difficult to access. Demonstratives are reanalyzed as third-person pronouns when their use is extended to discourse referents that are more easily accessible. As part of this process, demonstratives become destressed and may turn into clitics, which may eventually disappear (example (11)):

(11) DEM PRO > third person PRO > clitic PRO > verb agreement > zero

Pronominal demonstratives are also frequently reanalyzed as relative pronouns. In the source construction, the demonstrative functions as an anaphoric pronoun in a simple (main) clause. In the target construction, the anaphoric demonstrative has been reanalyzed as a relative pronoun in a subordinate clause (example (12)):

(12) [... NPi]S [DEMi ...]S > [[... NPi]S [RELi ...]SUB]S

Like relative pronouns, complementizers are frequently derived from pronominal demonstratives. For instance, the English complementizer that evolved in the context of a correlative construction in which the second clause was introduced by a copy of a cataphoric demonstrative in the initial clause (see example (13), Old English). When the cataphoric demonstrative was no longer used to anticipate the second clause, the initial þæt was reanalyzed as a complementizer, i.e., a formal marker of a subordinate clause (Hopper and Traugott, 1993: 186).

(13) þæt gefremede Diulius hiora consul,
     DEM arranged Diulius their consul
     þæt þæt angin wearð tidlice þurhtogen
     COMP DEM beginning was in-time achieved
     ‘their consul Diulius arranged (it) that it was started on time’

Finally, pronominal demonstratives may develop into conjunctions linking clauses (or propositions). Very often, the nature of the link is specified by an adposition or adverb, as in example (14) from Hixkaryana, in which the demonstrative is followed by a causal adposition:

(14) nomokyaknano tuna heno. ɨre ke
     it.was.coming rain QUANT DEM because.of
     romaraɨn hokohra wehxaknano
     my.field not.OCC.with I.was
     ‘It was raining heavily. Therefore I did not work on my field’

Grammatical Markers Derived from Adnominal Demonstratives

Adnominal demonstratives provide a very common source for definite articles. The development has been described in many studies for a wide variety of languages (cf. Krámský, 1972; Ultan, 1978; Greenberg, 1978; Laury, 1997; Himmelmann, 1997). Apart from definite articles, various other grammatical morphemes may evolve from an adnominal demonstrative: nominal number markers (Frajzyngier, 1997), linkers (Himmelmann, 1997), relative markers (Sankoff and Brown, 1976), determinatives (Himmelmann, 1997), and specific indefinite articles (Wright and Givón, 1987).

Grammatical Markers Derived from Adverbial Demonstratives

Adverbial demonstratives are frequently reanalyzed as temporal adverbs. Across languages, spatial deictics such as here and there are often mapped onto the temporal domain, giving rise to temporal deictics such as the English now and then (cf. Anderson and Keenan, 1985). Moreover, adverbial demonstratives may develop into direction markers. Direction markers indicate the direction of an activity expressed by a verb. For instance, German has two direction markers, hin ‘thither’ and her ‘hither,’ that combine as prefixes with a wide variety of motion verbs (e.g., hin-/herkommen ‘to go thither/to come hither’). The two direction markers developed from an old deictic root that survived only in hin and her, and in hier ‘here’ and heute ‘today’ (cf. Diessel, 1999a: chap 6).

Grammatical Markers Derived from Demonstratives in Nonverbal Clauses

Finally, the demonstratives in nonverbal clauses may be reanalyzed as copulas and focus markers. The development of copulas originates from a construction in which a topicalized noun phrase (NP) is resumed by a demonstrative of a nonverbal clause. This construction may develop into a copular construction in which the topicalized NP functions as subject and the demonstrative functions as copula (cf. example (15)). Copulas that evolved from demonstratives in nonverbal clauses occur, for instance, in Mandarin Chinese, Modern Hebrew, Kilba, Panare, and Wappo (cf. Li and Thompson, 1977).

(15) [NP]i [PROi NP] > [NPSUBJ COP NP]

Like nonverbal copulas, focus markers may arise from demonstratives in nonverbal clauses. For instance, in Ambulas (see examples (16a) and (16b)), the distal demonstrative wan (example (16a)) is also used as a focus marker (example (16b)):

(16a) wan kiyadé-na kayékni?
      that who-POSS reflection
      ‘whose reflection is that?’
(16b) véte dé wak a [wan méné] kaapuk yéménén
      see-and he said ah FOCUS you not you-went
      ‘he saw him and said, ‘‘ah, so you did not go’’’

The reanalysis occurred in a cleft construction in which the predicate nominal of a nonverbal clause was modified by a relative clause. In the target construction, the nonverbal clause has been reinterpreted as a noun phrase in which the demonstrative has assumed the function of a focus marker. The mechanism is shown in example (17):

(17) [[DEM NP] [REL-clause]] > [[FOCUS NP] ...]

The Diachronic Origin of Demonstratives and Their Status in Language

Although demonstratives form only a small class of linguistic expressions, the grammaticalization of demonstratives occurs so frequently that probably all languages have at least some grammatical morphemes that originated from a demonstrative. But where do demonstratives come from? What is their historical source?

Demonstratives are commonly seen as grammatical markers, and some linguists assume that all grammatical markers are eventually derived from symbolic expressions, i.e., from nouns, verbs, or adjectives. However, although all languages seem to have demonstratives, there is no evidence from any language that demonstratives developed from a symbolic source. What frequently happens is that demonstratives are reinforced by other lexemes. For instance, the French demonstrative ce can be traced back to an expression in which the weakened demonstrative ille was strengthened by ecce, meaning ‘behold,’ but this development did not create a new type of grammatical marker. In general, the deictic roots of demonstratives are not etymologically analyzable. It seems that demonstratives emerged so early in the evolution of language that we do not have any traces of their development. This suggests that demonstratives are part of the basic vocabulary of every language and should be kept separate from ordinary grammatical markers.

Though (some) demonstratives and certain grammatical markers (e.g., articles and third-person pronouns) serve basically the same syntactic functions, their pragmatic functions are very different. Grammatical markers either indicate relationships between elements of the ongoing discourse (e.g., prepositions) or they qualify the meaning of a content word (e.g., auxiliaries). By contrast, demonstratives serve language-external functions. In their basic use, they draw the hearer’s attention to elements in the surrounding speech situation. In other words, demonstratives establish joint attention, which is one of the most fundamental prerequisites for the use of language (cf. Tomasello, 1999). Of course, anaphoric, discourse-deictic, and recognitional demonstratives

serve more abstract, language-internal functions, but these uses are derived from the exophoric use; in fact, they can be seen as the initial stages of the grammaticalization process whereby demonstratives develop into grammatical markers (see earlier). What is more, the use of (exophoric) demonstratives is often accompanied by a deictic pointing gesture. There is probably no other class of linguistic expressions that is as closely associated with a gesture as demonstratives. Deictic pointing gestures are of central significance to human communication. They establish joint attention and evolve very early in ontogeny. Moreover, the precursors of deictic pointing can be found in primate communication (cf. Tomasello, 1999). All of this suggests that demonstratives have a special status in language and should be kept separate from both symbolic expressions and grammatical markers. See also: Coreference: Identity and Similarity; Definite and Indefinite; Definite and Indefinite Articles; Direct Reference; Dthat; Indexicality; Pronouns.

Bibliography

Abney S P (1987). The English noun phrase in its sentential aspect. Ph.D. diss., MIT.
Anderson S R & Keenan E L (1985). ‘Deixis.’ In Shopen T (ed.) Language typology and syntactic description, vol. 3. Cambridge: Cambridge University Press. 259–308.
Brugmann K (1904). Demonstrativpronomina der indogermanischen Sprachen. Leipzig: Teubner.
Bühler K (1934). Sprachtheorie: Die Darstellungsfunktion der Sprache. Jena: Fischer.
Comrie B (1998). ‘Reference-tracking: description and explanation.’ Sprachtypologie und Universalienforschung 51, 335–346.
Diessel H (1999a). Demonstratives. Form, function, and grammaticalization. Amsterdam: John Benjamins.
Diessel H (1999b). ‘The morphosyntax of demonstratives in synchrony and diachrony.’ Linguistic Typology 3, 1–49.
Diessel H (2003). ‘The relationship between demonstratives and interrogatives.’ Studies in Language 27, 635–655.
Diessel H (2005a). ‘Distance contrasts in demonstratives.’ In Dryer M, Haspelmath M, Gil D & Comrie B (eds.) World atlas of language structures. Oxford: Oxford University Press. 170–173.
Diessel H (2005b). ‘Demonstrative pronouns – demonstrative determiners.’ In Dryer M, Haspelmath M, Gil D & Comrie B (eds.) World atlas of language structures. Oxford: Oxford University Press. 174–177.
Dixon R M W (2003). ‘Demonstratives. A cross-linguistic typology.’ Studies in Language 27, 61–112.
Enfield N J (2003). ‘Demonstratives in space and interaction. Data from Lao speakers and implications for semantic analysis.’ Language 79, 82–117.

Fillmore C J (1982). ‘Towards a descriptive framework for spatial deixis.’ In Jarvella R J & Klein W (eds.) Speech, place, and action. Chichester: John Wiley. 31–59.
Fillmore C J (1997). Lectures on deixis. Stanford: CSLI Publications.
Frajzyngier Z (1997). ‘Grammaticalization of number: From demonstratives to nominal and verbal plural.’ Linguistic Typology 1, 193–242.
Greenberg J H (1978). ‘How does a language acquire gender markers.’ In Greenberg J H, Ferguson C A & Moravcsik E A (eds.) Universals of human language, vol. 3. Stanford: Stanford University Press. 47–82.
Himmelmann N (1996). ‘Demonstratives in narrative discourse: A taxonomy of universal uses.’ In Fox B (ed.) Studies in anaphora. Amsterdam: John Benjamins. 205–254.
Himmelmann N (1997). Deiktikon, Artikel, Nominalphrase: Zur Emergenz syntaktischer Struktur. Tübingen: Niemeyer.
Krámský J (1972). The article and the concept of definiteness in language. The Hague: Mouton.
Lakoff R (1974). ‘Remarks on this and that.’ Chicago Linguistic Society 10, 345–356.
Laury R (1997). Demonstratives in interaction: The emergence of a definite article in Finnish. Amsterdam: John Benjamins.

Li C N & Thompson S A (1977). ‘A mechanism for the development of copula morphemes.’ In Li C N (ed.) Mechanisms of syntactic change. Austin: University of Texas Press. 419–444.
Lyons J (1977). Semantics. Cambridge: Cambridge University Press.
Sankoff G & Brown P (1976). ‘The origins of syntax in discourse: A case study of Tok Pisin relatives.’ Language 52, 631–666.
Tomasello M (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press.
Ultan R (1978). ‘On the development of a definite article.’ In Seiler H (ed.) Language universals. Papers from the conference held at Gummersbach/Cologne, Germany, October 3–8, 1976. Tübingen: Narr. 249–265.
Van Valin R D Jr & LaPolla R J (1997). Syntax: structure, meaning, and function. Cambridge: Cambridge University Press.
Wright S & Givón T (1987). ‘The pragmatics of indefinite reference: quantified text-based studies.’ Studies in Language 11, 1–33.

Dictionaries

H Béjoint, Université Lyon 2, Lyon, France

© 2006 Elsevier Ltd. All rights reserved.

Dictionaries are reference works designed to give information on the lexical units of a language: their spelling, pronunciation, etymology, morphology, meanings, connotations, register restrictions, collocations, syntagmatic behaviour, etc. They have several characteristics that make them useful tools for translators. Like all reference works, they are made up of independent entries, which are short concentrates of information. Those entries are arranged according to a code (traditionally alphabetical order) that is accessible to all without preparation, so that the dictionary can be consulted quickly and easily. In addition, the information contained in each entry is presented in such a way that the users normally have no difficulty finding what they need: every entry contains the same items of information, which are always ordered in the same way. What makes dictionaries special among reference works is the fact that they are linguistic tools: the entries are lexical units, the information in the entries is about those units, not about their referents (see Dictionaries and Encyclopedias: Relationship), and the sum total of the lexical units treated

in the entries is representative of the lexicon, or of a clearly identified part of the lexicon, of a language. Because of these features the dictionary of a language, imperfect as it can be, is the only means of access to the ‘whole’ lexicon of a language. There are several types of dictionaries: large or small, for adults or for children; with or without illustrations; monolingual or bilingual (or trilingual, etc.); general or specialized (e.g., dictionaries of phrasal verbs or etymological dictionaries); linguistic or encyclopedic, alphabetically or semantically ordered; for expression, for comprehension, or for word puzzles; in paper or in electronic form, etc. The type of dictionary that translators use most is the bilingual dictionary, in which the information about each lexical unit is its ‘equivalent’ in the other language. Equivalence is difficult to achieve because languages often differ in the way they create and use their lexical units, so that two equivalents are rarely ‘equal’ in meaning, connotation, usage, etc. Translators also use monolingual dictionaries, where they can find items that are absent from bilingual dictionaries, such as definitions, literary quotations, and encyclopedic information. The dictionaries used by translators must be accurate, reliable, extensive, and user-friendly. In terms of user-friendliness, extensiveness, rapidity, and ease of consultation, dictionaries are often inferior to other sources of information, such as search engines on the Internet. However, nothing can replace dictionaries as thesauruses of the linguistic ‘norm’: translators can use the word list of the dictionary to make sure that a lexical unit is ‘part’ of a language or a variety of language – even though the absence of a unit does not mean much. Also, they can use the contents of the entries to verify that the meaning, the connotation, the collocations, etc. of a lexical unit are what is accepted as normal by the community of the users of that language. Faith in the dictionary as guardian of the norm must not be exaggerated – after all, dictionaries are only human artifacts – but it should not be underestimated either: there is so much work put into a dictionary by a highly skilled team of lexicographers that it necessarily encapsulates more knowledge about the language than any other source. Dictionaries are arbiters of usage, to an extent that often goes even beyond the intent of the lexicographer. One of the main defects of the dictionary, paper or electronic, is a consequence of the amount of work put into it: the dictionary text takes so much time and care to compile that it can never be as up-to-date as the translator would like it to be. It takes a while to accept a neologism, for example, and neologisms are precisely what translators are most likely to need help on. Other imperfections of the dictionary are consequences of our dictionary traditions. Bilingual dictionaries traditionally have no pictures and no definitions, which makes them useful only to people who bring with them, at the time of consultation, enough knowledge of the meaning of the source-language unit or of the target-language unit. Also, the traditional dictionary text, concerned above all to give as much information as possible in as little space as possible, tends to remain allusive and not to comment explicitly on the information it gives.

For example, it does not differentiate clearly between synonyms or between frequent and infrequent words, even hapax legomena. Finally, dictionaries have traditionally been content with representing linguistic facts by means of a discourse that is strictly linear, thus failing to represent the intricacies of language; for example, polysemy. Modern dictionaries have been steadily improving: They now make use of corpora to indicate frequencies and various facts about actual usage that were hitherto unknown or not given, and they tend to do away with the more obscure coding traditions. Electronic dictionaries are incomparably easier to use; they allow consultations that are impossible with paper dictionaries, and they have begun to use multidimensional means, such as charts and video illustrations. Of course, dictionaries still fall short of representing all the complexities of a language: there are many aspects of the syntax and usage that no one is yet in a position to describe. See also: Definition in Lexicology; Dictionaries and Encyclopedias: Relationship; Lexicology; Lexicon/Dictionary: Computational Approaches; Lexicon: Structure; Thesauruses.

Bibliography

Béjoint H & Thoiron P (1996). Les Dictionnaires bilingues. Louvain-la-Neuve: Duculot.
Dubois-Charlier F (1997). ‘Review of Oxford-Hachette French Dictionary.’ International Journal of Lexicography 10/4, 311–329.
Landau S (2001). Dictionaries. The art and craft of lexicography (2nd edn.). Cambridge: Cambridge University Press.
Roberts R (1994). ‘Bilingual dictionaries prepared in terms of translators’ needs.’ In Translation in the global village. Proceedings of the 3rd Conference of the Canadian Translators and Interpreters Council.

Dictionaries and Encyclopedias: Relationship

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

An encyclopedia functions as a structured database containing exhaustive information on many (perhaps all) branches of knowledge. A dictionary (synonymous with lexicon for this article) is a bin for storing listemes. A listeme is a language expression whose meaning is not determinable from the meanings

(if any) of its constituent forms and that, therefore, a language user must memorize as a combination of form and meaning. A listeme’s dictionary entry normally includes (a–c):

a. Formal (phonological and graphological) specifications.
b. Morphosyntactic specifications: properties such as the inherent morphosyntactic (lexical) category of the item; necessary subcategorization such as the conjugations of verbs; regularities and irregularities (e.g. the past tense of English ‘strong’ verbs such as drank and thought); constraints on range (e.g. -ize is suffixed to nouns (atomize) and adjectives (legalize)).
c. Semantic specifications identify the senses of a listeme; a sense describes the salient characteristics of the typical denotatum of that item (see Stereotype semantics, Prototype semantics). The form of the semantic specification depends on the chosen metalanguage (see Metalanguage versus Object Language; Cognitive Semantics; Dynamic Semantics; Frame Semantics; Generative Lexicon; Lexical Conceptual Structure; Natural Semantic Metalanguage).
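The three-part entry structure in (a–c) is, in effect, a record type. A minimal sketch follows; the field names and the sample entry are illustrative assumptions, not a standard representation from the literature.

```python
from dataclasses import dataclass, field

@dataclass
class ListemeEntry:
    phonology: str            # (a) formal: phonological specification
    orthography: str          # (a) formal: graphological specification
    category: str             # (b) morphosyntactic (lexical) category
    morphosyntax: dict = field(default_factory=dict)  # (b) subcategorization, irregularities
    senses: list = field(default_factory=list)        # (c) one description per sense

# A toy entry; the sample sense description is a placeholder.
dog = ListemeEntry(
    phonology="/dɒɡ/",
    orthography="dog",
    category="N",
    morphosyntax={"plural": "dogs"},
    senses=["salient characteristics of the typical canine denotatum"],
)
print(dog.orthography, dog.category, dog.senses[0])
```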

In practice, desktop dictionaries are artifacts serving a wide variety of purposes; most contain a great deal of encyclopedic and pragmatic information about people and places, as well as information about the history and usage of listemes; Pearsall (1998) includes 4500 place names, 4000 biographical entries, and 3000 other proper names. Such dictionaries function not only as an inventory of listemes but also as a cultural index to the language and the collective beliefs of its speakers. Nonetheless, this article focuses on the potential differences between dictionary and encyclopedia. Attempts in the field of artificial intelligence to program a machine to interpret a text so as to answer questions on it or to provide a summary for it reveal that the project requires input from what Schank and Abelson (1977) call ‘scripts,’ Lakoff (1987) ‘idealized cognitive models,’ Minsky (1977), Barsalou (1992), Fillmore (1975, 1982), and Fillmore and Atkins (1992) ‘frames.’ These are by no means identical, but they all call extensively on encyclopedic knowledge. The normal practice before the 1980s was to favor parsimonious dictionary knowledge against elaborated encyclopedic knowledge, but things have changed (see Peeters, 2000, for a useful survey; also Landau, 2001). Haiman (1980: 331) claimed ‘Dictionaries are encyclopedias’ which is certainly true of some existing dictionaries – The New Grove Dictionary of Jazz (Kernfeld, 1994) is more encyclopedia than dictionary; the Collins English Dictionary (Hanks, 1979) and the New Oxford Dictionary of English (Pearsall and Hanks, 1998) list encyclopedic information about bearers of certain proper names (see (3) below). Jackendoff (1983: 139f) suggested that information in the lexical entry ‘‘shades toward ‘encyclopedia’ rather than ‘dictionary’ information, with no sharp line drawn between the two types.’’ Wierzbicka developed semantic descriptions very similar to those in an encyclopedia; compare her proposed entry for tiger (Wierzbicka, 1985: 164) with the entry quoted from the Encyclopaedia Britannica (Wierzbicka, 1985: 194). Langacker (1987: 154) says that the information in a dictionary is encyclopedic:

The distinction between semantics and pragmatics (or between linguistics and extralinguistic knowledge) is largely artifactual, and the only viable conception of linguistic semantics is one that avoids false dichotomies and is consequently encyclopedic in nature. [Sic]

Leech (1981: 84) offers the contrary view:

[T]he oddity of propositions like ‘The dog had eighty legs’ is something that zoology has to explain rather than conceptual semantics.

Leech is surely correct (cf. Allan, 2001: Ch. 8), but there has to be a cognitive path from the listeme to directly access encyclopedic information about dogs (or whatever) because of Hearer’s ability to ‘shadow’ a text very rapidly – that is, to begin understanding it and making appropriate inferences milliseconds after Speaker has presented it (Marslen-Wilson, 1985, 1989). A dictionary entry is one access point to an encyclopedia entry. If the encyclopedia is a database, then the dictionary forms an integral component of the encyclopedia. It would seem incontrovertible that encyclopedic data is called on in

a. metaphors like She’s a gazelle / a tiger;
b. the extension of a proper name like Hoover to denote vacuum cleaners and vacuum cleaning (I assume that, because many proper names are shared by different name-bearers, there must be a stock of proper names located either partially or wholly in the dictionary);
c. explaining the formation of the verb bowdlerize from the proper name Bowdler;
d. making and understanding statements like

(1) Caspar Cazzo is no Pavarotti!
(2) Harry’s boss is a bloody little Hitler!

(1) implies that Caspar is not a great singer; we infer this because Pavarotti’s salient characteristic is that he is a great singer. (2) is abusive because of the encyclopedic entry for the name Hitler. Such comparisons draw on biodata that is appropriate in an encyclopedia entry for the person who is the standard for comparison but not a dictionary, which should identify the characteristics of the typical name-bearer, but not (contra Frege, 1892) any particular name bearer – any more than the dictionary entry for dog should be restricted to a whippet or poodle rather than the genus as a whole. Nonetheless, (3) is quoted from Collins English Dictionary (the name Pavarotti is not to be found, although it is in Pearsall, 1998).

(3) Hitler (/ˈhɪtlə/) n. 1. Adolf (/ˈædɒlf/). 1889–1945, German dictator born in Austria. After becoming president of the National Socialist German Workers’ Party (Nazi Party), he attempted to overthrow the government of Bavaria (1923). While in prison he wrote Mein Kampf, expressing his philosophy of the superiority of the Aryan race, and the inferiority of the Jews. He was appointed chancellor of Germany (1933), transforming it from a democratic republic into the totalitarian Third Reich, of which he became Führer in 1934. He established concentration camps to exterminate the Jews, rearmed the Rhineland (1936), annexed Austria (1938) and Czechoslovakia, and invaded Poland (1939), which precipitated World War II. He committed suicide. 2. a person who displays dictatorial characteristics.

Strictly speaking, the information under 1 is encyclopedic; that under 2 is proper to a dictionary – a fact recognized in Hanks (1979). We see that, for practical purposes, lexicographers do not consistently distinguish the two. Some people would regard (3)1 as prejudiced. Yet the prejudices of language users are just as relevant to a proper account of language understanding as the true facts. Should the dictionary and encyclopedia institutionalize the mainstream stereotype at some index (time and world)? Both dictionary and encyclopedia need an archiving device to facilitate information incrementation and updating. Should the dictionary contain every word in the language and the encyclopedia contain exhaustive information on all branches of knowledge, or should both be modular, rather like a collection of human minds? For instance, a medic’s dictionary is full of medical jargon and a medic’s encyclopedia contains medical knowledge unknown to the average patient; a botanist knows more about plants than I do and has the dictionary to talk about that knowledge; and so forth through the community for different interest groups. Even if the common core of a dictionary and an encyclopedia can be identified, they will have to be connected to specialist modules with jargon dictionaries and specialist encyclopedias. Multiple dictionaries and encyclopedias would model individual human capacities and divide data and processing into manageable chunks. Individuals in a community will have different mental encyclopedias comprehending partially different information (cf. Katz, 1977; Murphy, 2000). Suppose I say: (4) I know four Annas.

There is no requirement that you know, or even know of, any of the Annas referred to. It does, however, call for lexical knowledge about the name Anna, because you will understand ‘Speaker knows four female human beings each of whom is called Anna.’ By contrast, I can only appropriately utter (5) to someone (whom I believe) able to identify further facts about the person spoken of because we have partially coincidental encyclopedia entries for the intended referent that are sufficient to establish common ground. (5) I had an email from Anna yesterday.

The encyclopedia entry for Aristotle would be a refined version of (6) – in which numbers are addresses; f = formal, e = encyclopedic, s = semantic specifications. Entry se703 is the kind of entry that an individual person might have in their mental encyclopedia, but it clearly does not belong in a published encyclopedia.

(6)

f500, fe500: Aristotle /ˈærɪstɒtl/
f500: N, s600
s600: ‘‘bearer of the name f500Aristotle, normally a male’’ se701–3
fe500: Derived from Greek Aristotélēs
se701 – Ancient Greek philosopher, born in Stagira in C4 BCE. Author of The Categories, On Interpretation, On Poetry . . . Pupil of Plato and teacher of Alexander the Great . . . etc.
se702 – Onassis, C20 CE Greek shipping magnate . . . etc.
se703 – Papadopoulos, friend whose phone number is 018 111 . . . etc.
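The addressing scheme in (6), with formal, semantic, and encyclopedic records pointing at one another, can be rendered as a set of cross-referenced records. The sketch below is one hypothetical encoding: the addresses follow the text, while the record layout and the helper function are my own illustration.

```python
# Cross-referenced dictionary-encyclopedia records modeled on entry (6).
network = {
    "f500":  {"form": "Aristotle", "ipa": "/ˈærɪstɒtl/", "category": "N",
              "sense": "s600", "etymology": "fe500"},
    "fe500": {"note": "derived from Greek Aristotélēs"},
    "s600":  {"gloss": "bearer of the name f500, normally a male",
              "referents": ["se701", "se702", "se703"]},
    "se701": {"bio": "Ancient Greek philosopher, born in Stagira, C4 BCE"},
    "se702": {"bio": "Onassis, C20 CE Greek shipping magnate"},
    "se703": {"bio": "Papadopoulos, a friend (private mental entry)"},
}

def referents_of(formal_address: str) -> list:
    """Follow a dictionary entry to the encyclopedic entries it gives access to."""
    sense = network[network[formal_address]["sense"]]
    return [network[addr]["bio"] for addr in sense["referents"]]

# The dictionary entry f500 is one access point into the encyclopedia:
print(referents_of("f500"))
```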

Combined (idealized) dictionary-encyclopedia entries should be networked as shown in Figure 1. The entry for Aristotle is sketched in Figure 2. For simplicity’s sake much requisite cross referencing is omitted from Figure 2: there is no encyclopedic information on N included, none on Ancient Greek philosophers, poetry, Stagira, Plato, etc.

Figure 1 Networked components of the lexicon with the encyclopedia. Formal data, F, in the triangle, morphosyntactic data in the circle, semantic data in the rectangle, and encyclopedic data in the ellipse.

Figure 2 Networked fragment of the combined lexicon–encyclopedia entry for Aristotle.

Further complexity would result from there being more than one encyclopedia or if a single encyclopedia is divided into cross-referenced modules. Strictly speaking, the information at se701–703 in the encyclopedia is not of the kind that anyone should expect to find in a dictionary because it is not lexicographical information about a name in the language. However, as we have seen, practical lexicography does allow encyclopedic information about particular name-bearers into dictionaries. Similarly for information about things, whether natural kinds such as gold and dogs, or unnatural kinds such as polyester and computers. The whole encyclopedia entry can be accessed through part of the information in it, enabling the associated listeme and further information about the referent to be retrieved. If given names are included in the dictionary, presumably the English dictionary lists such common family names as Smith and Jones with an entry such as ‘bearer of the name Smith, normally a family name.’ This recognizes that some names typically occur as family names and are retained in memory as such. Should exotic foreign family names such as Sanchez, Papadopoulos, and Wong have any entry in an English dictionary? Native speakers of English readily recognize some names as Scottish, or Welsh, or Cornish, or Jewish; and immigrants to an English-speaking country who wish to assimilate sometimes Anglicize their names: e.g., Piekarsky becomes Parkes, Klein becomes Clyne. So it seems that even family names have lexical properties. Papadopoulos should be tagged as originally a Greek name, and Wong Chinese. Once again, there exist desk-top dictionaries of first names and of surnames, e.g., Hanks and Hodges (1988, 1990), Hanks (2003), which include etymological data and information about place of origin. Barnhart (1954) has purely encyclopedic knowledge about name-bearers. Here is evidence for the modularity of information in dictionaries and encyclopedias.

Although the family name offers a clue to the bearer’s ancestry, it gives no guarantee of it being a fact. This reflects a general truth about proper names which distinguishes them from common names: whereas one sense of the common name cat necessarily names something animal, the proper name Martha only probably names a female – a fact recorded in the dictionary. Unusual names such as If-Christ-Had-NotDied-For-You-You-Had-Been-Damned or Yahoo Serious would presumably have empty dictionary entries leading directly to encyclopedic entries. However, like topographical names beginning Mount or River, these are interpreted via the dictionary as multiword listemes. Once the component meanings are assembled and an interpretation (or partial interpretation) determined for the name, a matching encyclopedia entry is sought and, if none already exists, a new entry is created. Where Hearer encounters a new proper name, a dictionary entry is established on the basis of its formal and syntactic characteristics, and any sense that is assigned to the dictionary entry derives from encyclopedic information about the namebearer. The meanings of ‘content words’ – nouns, verbs, adjectives, adverbs – are influenced by the things that they denote and the circumstances in which the words are used. That is, semantic information in a large part of the dictionary is distilled from encyclopedic information about the salient characteristics of typical denotata. It is this information from which the senses of isomorphic listemes are abstracted. Such abstraction from particulars is evident in the ontogenetic development of listemes by children. Clark (1973) reports a child’s extension of bird to any moving creature (sparrows, cows, dogs, cats), moon to any round object, bow-wow to things that are bright, reflective and round (based on the dog’s eyes) such as a fur piece with glass eyes, pearl buttons, cuff links, a bath thermometer. The same process operates when adults encounter a new name or a new use for a name. The idealized model of the encyclopedia and dictionary necessitates a heuristic updating facility of this kind. Suppose that Z has only ever encountered female bearers of the name Beryl, so that the semantic specification in Z’s dictionary entry is ‘‘bearer of the name Beryl, normally a female.’’ If Z comes across the name Beryl used of a man, not only is Z’s encyclopedia expanded, but also Z’s mental dictionary entry will be updated to ‘bearer of the name Beryl, normally a female, but attested for a male.’ Strictly speaking, a dictionary is the part of an encyclopedia which stores information about the formal, morphosyntactic, and semantic specifications of listemes; for other views, see Boguraev and Briscoe (1989), Butterfield (2003), Hanks (1998, 2003), Hu¨llen and Schulze (1988), Jackendoff (1975, 1995,


Strictly speaking, a dictionary is the part of an encyclopedia which stores information about the formal, morphosyntactic, and semantic specifications of listemes; for other views, see Boguraev and Briscoe (1989), Butterfield (2003), Hanks (1998, 2003), Hüllen and Schulze (1988), Jackendoff (1975, 1995, 1997), Lyons (1977), Mel'čuk (1992), and Pearsall (1998). Etymological and stylistic information are not strictly a part of the dictionary but encyclopedic data; this hardly matters if we assume that lexical information is just one kind of encyclopedic information, and that the encyclopedia is a general knowledge base of which the dictionary is a proper part. However, in practice, desktop dictionaries often include some encyclopedic information, although desktop encyclopedias do not seem to encroach on lexicography.

See also: Cognitive Semantics; Definition in Lexicology; Dynamic Semantics; Formal Semantics; Frame Semantics; Lexical Conceptual Structure; Generative Lexicon; Lexicon: Structure; Metalanguage versus Object Language; Natural Semantic Metalanguage; Proper Names; Prototype Semantics; Stereotype Semantics.

Bibliography

Allan K (2001). Natural language semantics. Oxford and Malden, MA: Blackwell.
Barnhart C L (ed.) (1954). The new century cyclopedia of names. New York: Appleton-Century-Crofts.
Barsalou L W (1992). 'Frames, concepts, and conceptual fields.' In Lehrer A & Kittay E (eds.) Frames, fields, and contrasts. Norwood, NJ: Lawrence Erlbaum. 21–74.
Boguraev B & Briscoe T (eds.) (1989). Computational lexicography for natural language processing. London: Longman.
Clark E V (1973). 'What's in a word? On the child's acquisition of semantics in his first language.' In Moore T E (ed.) Cognitive development and the acquisition of language. New York: Academic Press. 65–110.
Fillmore C J (1975). 'An alternative to checklist theories of meaning.' In Cogen C et al. (eds.) Proceedings of the first annual meeting of the Berkeley Linguistics Society. Berkeley: Berkeley Linguistics Society. 123–131.
Fillmore C J (1982). 'Frame semantics.' In Linguistic Society of Korea (eds.) Linguistics in the morning calm. Seoul: Hanshin. 111–138.
Fillmore C J & Atkins B T (1992). 'Toward a frame-based lexicon: the semantics of RISK and its neighbors.' In Lehrer A & Kittay E (eds.) Frames, fields, and contrasts. Hillsdale, NJ: Lawrence Erlbaum. 75–102.
Frege G (1892). 'Über Sinn und Bedeutung.' Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. Reprinted as 'On sense and reference.' In Geach P & Black M (eds.) Translations from the philosophical writings of Gottlob Frege. Oxford: Blackwell. 56–78.
Haiman J (1980). 'Dictionaries and encyclopedias.' Lingua 50, 329–357.
Hanks P (ed.) (1979). Collins English dictionary (1st edn.). Bishopbriggs: Collins.
Hanks P (ed.) (2003). Dictionary of American family names. New York: Oxford University Press.

Hanks P & Hodges F (1988). A dictionary of surnames. Oxford: Oxford University Press.
Hanks P & Hodges F (1990). A dictionary of first names. Oxford: Oxford University Press.
Hüllen W & Schulze R (eds.) (1988). Understanding the lexicon: meaning, sense, and world knowledge in lexical semantics. Tübingen: Max Niemeyer Verlag.
Jackendoff R S (1975). 'Morphological and semantic regularities in the lexicon.' Language 51, 639–671.
Jackendoff R S (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R S (1995). 'The boundaries of the lexicon.' In Everaert M, van der Linden E-J, Schenk A & Schreuder R (eds.) Idioms: structural and psychological perspectives. Hillsdale, NJ: Erlbaum. 133–165.
Jackendoff R S (1997). The architecture of the language faculty. Cambridge, MA: MIT Press.
Katz J J (1977). 'A proper theory of names.' Philosophical Studies 31, 1–80.
Lakoff G (1987). Women, fire, and dangerous things. Chicago: University of Chicago Press.
Landau S I (2001). Dictionaries: the art and craft of lexicography (2nd edn.). New York: Cambridge University Press.
Langacker R W (1987). Foundations of cognitive grammar, vol. 1: theoretical prerequisites. Stanford: Stanford University Press.
Leech G N (1981). Semantics: a study of meaning (2nd edn.). Harmondsworth: Penguin.
Lyons J (1977). Semantics (vols 1 & 2). Cambridge: Cambridge University Press.
Marslen-Wilson W (1985). 'Speech shadowing and speech comprehension.' Speech Communication 4, 55–73.
Marslen-Wilson W (1989). 'Access and integration: projecting sound onto meaning.' In Marslen-Wilson W (ed.) Lexical representation and processing. Cambridge, MA: MIT Press. 3–24.
Mel'čuk I A (1992). 'Lexicon: an overview.' In Bright W (ed.) International encyclopedia of linguistics, vol. 2. New York: Oxford University Press.
Minsky M (1977). 'Frame-system theory.' In Johnson-Laird P N & Wason P C (eds.) Thinking: readings in cognitive science. Cambridge: Cambridge University Press. 355–376.
Murphy M L (2000). 'Knowledge of words versus knowledge about words: the conceptual basis of lexical relations.' In Peeters B (ed.) The lexicon–encyclopedia interface. Oxford: Elsevier Science. 317–348.
Pearsall J & Hanks P (eds.) (1998). New Oxford dictionary of English. Oxford: Oxford University Press.
Peeters B (2000). 'Setting the scene: some recent milestones in the lexicon–encyclopedia debate.' In Peeters B (ed.) The lexicon–encyclopedia interface. Amsterdam: Elsevier. 1–52.
Schank R & Abelson R C (1977). Scripts, plans, goals and understanding: an inquiry into human knowledge structures. Hillsdale, NJ: Lawrence Erlbaum.
Wierzbicka A (1985). Lexicography and conceptual analysis. Ann Arbor: Karoma.


Diminutives and Augmentatives
Ö Dahl, Stockholm University, Stockholm, Sweden
© 2006 Elsevier Ltd. All rights reserved.

Diminutives and augmentatives, as these terms are traditionally understood, are words formed by derivational processes that add a semantic element having to do with size to the meaning of the word. Thus, in English, adding the suffix -ette to the noun kitchen yields the diminutive kitchenette 'small kitchen.' In Russian, adding the suffix -išče to the noun dom 'house' yields the augmentative domišče 'big house.' However, diminutives and augmentatives tend to be used in a multitude of ways, involving semantic and pragmatic elements that go far beyond a simple notion of size.

Although diminutives and augmentatives are extremely widespread in human languages, there is considerable crosslinguistic variation in their frequency and degree of elaboration. English is a language that has virtually no augmentatives and relatively few diminutives compared to some other European languages such as Spanish, Italian, and Russian, or even varieties close to English such as Scots. Still, English has more diminutives than Swedish, for example, where there is hardly any productive derivational process for forming diminutives, although a few lexicalized items exist (such as fossing 'footie' from fot 'foot'). As for degree of elaboration, we may compare German, which has, by and large, only two diminutive suffixes, -chen and -lein, the choice between which depends mainly on regional variety, with the highly complex system found in Russian, with dozens of diminutive and augmentative formations with different semantic and pragmatic properties.

Systematic typological data about the areal and genetic distribution of diminutives and augmentatives is still hard to come by. Jurafsky (1996) presents data on diminutives from about seventy languages, representing most larger phyla in the world, but in most cases relies on extant grammatical descriptions, which suggests that the sample is somewhat biased towards those languages whose systems of diminutives are salient enough to attract the attention of grammarians. A relatively safe crosslinguistic generalization is that augmentatives are universally less frequent and less elaborated than diminutives; consequently, there is also less to be said on augmentatives, and most of what follows concerns primarily diminutives.

Diminutives and augmentatives are frequently formed by affixation, but other means also exist, most

notably reduplication and tone. The frequent occurrence in diminutives of "higher tonality, including high tones, high front vowels and fronted consonants" (Jurafsky, 1996: 534) suggests an iconic link between meaning and form. Diachronically, the most well-documented source of diminutives is 'child,' e.g., Ewe ví as in kpé-ví 'small stone' from kpé 'stone' (Heine and Kuteva, 2002: 65). As noted by Jurafsky (1996: 569), there is, somewhat surprisingly, no evidence that lexical morphemes meaning 'little' develop into derivational diminutive affixes, although such words tend to develop functions similar to those of diminutives, English little being a case in point.

An elaborated system may contain several different affixes or other devices for forming diminutives and augmentatives, each of which may be restricted to certain types of stems, or may be connected with particular connotations (positive or negative). In addition, many languages allow for the simultaneous application of more than one diminutive/augmentative formative. As an illustration, consider the root děv- 'virgin, girl,' which shows up in a multitude of forms in the Slavic languages. In Russian, the nonderived version deva still exists with the meaning 'virgin' or, archaically, 'young woman.' The first-degree diminutive dev-k-a is a colloquial word for 'young girl' with possibly slightly negative connotations. The second-degree diminutives dev-oč-k-a and dev-uš-k-a are lexicalized with the meanings 'small girl' and 'young girl/woman,' respectively. The third-degree diminutive dev-č-on-k-a again has a slightly pejorative meaning, and can be further expanded into a fourth-degree diminutive dev-č-on-oč-k-a. Formations of this kind are particularly frequent in more pragmatically motivated uses, and may be explained through the need to renew forms that have undergone conventionalization or bleaching through extensive use.

Diminutives and augmentatives may develop beyond the simple indication of size in several different ways. It is not always easy to distinguish here what is productive in a language from what is rather the frozen result of earlier processes, or what is universally available from what is a language-specific conventionalization. Moreover, morphemes that are used to form diminutives and augmentatives can also have various other functions that at least synchronically seem to have little to do with size, inviting the postulation of homonymy. Thus, in Russian, nouns ending in -ka are often straightforward diminutives, as in knižka 'booklet' (from kniga 'book'), but -ka may also be a feminine ending, as in švedka 'Swedish woman' (from šved 'Swede') or function as a general nominalizer, as in sotka, which may denote anything


related to the number 'one hundred' (sto), e.g., 'bus No. 100.'

There is a strong tendency for diminutives to become lexicalized, in which case they tend to develop special meanings: cigarette is not just a small cigar, and in Russian, lampočka (from lampa 'lamp') is 'light bulb' rather than 'small lamp.'

By a very general, possibly universal, mechanism, diminutives acquire connotations of affection or endearment. In fact, such connotations are not restricted to morphological diminutives but seem available to any expression whose semantics in any way involves smallness, and are plausibly explainable in terms of a general tendency for humans to associate smallness with children, the latter being natural objects of affection and endearment. Jurafsky (1996: 564) goes further in hypothesizing that "diminutives arise from semantic and pragmatic links with children." Indeed, as was mentioned above, 'child' is the best attested historical source for diminutives, and diminutives are typical of what Dressler and Barbaresi (1994: 173) call 'child-centered speech situations,' where a child is either a participant or a topical referent, or extensions of those, e.g., to situations involving pets, lovers, or playful adults. It follows from what has been said that there is a close connection between diminutives and hypocorisms, although the latter notion may be understood as including also truncated forms, such as Ed from Edward, which are not obviously diminutive. Jurafsky (1996: 654) quotes a number of cases where diminutive affixes have been claimed to have originated as affixes on names only.

The 'child' link can hardly provide the whole explanation for the existence of diminutives, however. While the 'affection' connotations are basically positive (meliorative), diminutives sometimes also have clearly pejorative meanings. Thus, in Russian, certain diminutive affixes are conventionally associated with negative connotations, e.g., -iško as in domiško 'small and pitiful house.' It is less clear how such readings arise. In augmentatives, on the other hand, pejorative connotations seem rather to be the unmarked case.

Another possibly universal aspect of diminutives is their pragmatic use for mitigating, downgrading, or softening a speech act. Cf. parallel examples from different languages such as Italian Aspettami un'or-etta '(lit.) Wait for me a little hour!', Spanish Espera un minut-ito '(lit.) Wait a little minute!'

(Dressler and Barbaresi, 1994: 238) and Russian Podoždi minut-oč-k-u! (the same). This function is not strictly limited to morphological diminutives but in principle works for any linguistic element that conveys a notion of 'smallness.' A straightforward example is when a person asks for a favor of some kind: the chance that the request will be granted increases if the favor is presented as insignificant, hence formulations such as 'I have a small request.' It may well be, however, that morphological diminutives are particularly suited to this function, being less obtrusive than other smallness-denoting expressions.

In the case of count nouns denoting concrete objects, the 'size' interpretation can be straightforward. Diminutives (and to a lesser extent augmentatives) can also be formed from other types of words, in which case a reinterpretation is necessary. With mass nouns, a common interpretation is 'small quantity of,' as in Russian Pop'em čaj-k-u 'Let's drink a little tea.' But there are also diminutives, or diminutive-like formations, from adjectives and verbs, in which case the most typical interpretation is 'to a limited degree or extent.' The following example of an Italian diminutive adjective simultaneously illustrates another pragmatic use of diminutives in ironic speech acts: someone may say of a very tall person Un po' altino 'A bit on the tall side' (Dressler and Barbaresi, 1994: 238). For verbs, an example could be German hüst-el-n 'to cough slightly' from hust-en 'to cough' (Dressler and Barbaresi, 1994: 238).

See also: Classifiers and Noun Classes; Connotation; Hyponymy and Hyperonymy; Partitives; Pragmatic Determinants of What Is Said; Proper Names; WordNet(s).

Bibliography

Bratus' B V (1969). The formation and expressive use of diminutives. Cambridge: Cambridge University Press.
Dressler W U & Barbaresi L M (1994). Morphopragmatics: diminutives and intensifiers in Italian, German, and other languages. Berlin: Mouton de Gruyter.
Heine B & Kuteva T (2002). World lexicon of grammaticalization. Cambridge: Cambridge University Press.
Jurafsky D (1996). 'Universal tendencies in the semantics of the diminutive.' Language 72, 533–578.
Volek B (1987). Emotive signs in language and semantic functioning of derived nouns in Russian. Amsterdam: Benjamins.


Direct Reference
A Sullivan, Memorial University of Newfoundland, St. John's NL, Canada
© 2006 Elsevier Ltd. All rights reserved.

What Is Direct Reference?

Let us begin with a brief introduction to some essential terms. A singular term is an expression whose role is to specify a particular individual – proper names (e.g., 'John') and pronouns (e.g., 'she') are paradigm cases. A proposition is the meaning expressed by a sentence. A sentence whose subject-expression is a singular term (e.g., 'John is tall,' 'She is happy') expresses a singular proposition. (In contrast, a sentence whose subject-expression is a general term expresses a general proposition [e.g., 'Tigers are mammals'].)

Direct reference is a contrastive term; the contrasting view is 'mediated' or 'indirect' reference. (For brevity, I'll use the labels 'DR' for the former and 'IR' for the latter.) The classic IR position is developed by Frege (1892). On Frege's view, terms express a sense that determines a referent. All IR views posit some such semantic mediator between words and referents, and the characteristic virtue of the IR approach is that it affords a clear explanation of how co-referential terms can differ in meaning. (For other examples of influential IR views, see Carnap [1947] and Searle [1983].)

One finds a very different approach to the semantics of (at least some) singular terms in the work of Mill and Russell. However, even though both Mill (1843: 20) and Russell (1919: 283) use the expression 'direct' in relevant contexts, the term 'direct reference' was first explicitly coined, and given a precise sense, by Kaplan (1977: 483). On Kaplan's usage, DR expressions do not conform to one particular tenet of the IR approach, according to which the semantic mediator, the sense or the manner in which the term presents its referent, is a constituent of the proposition expressed.

Kaplan's (1977) central thesis is that indexical expressions (such as 'I,' 'yesterday,' or 'that duck') are DR. He also suggests (1977: 558–563) that proper names are DR. A central component of Kaplan's picture is an account of a kind of meaning he calls 'character' (1977: 505–507), which is intended to explain why it seems that coreferential DR terms can make distinct contributions to propositional content.

The heart of Kaplan's notion of DR is Russell's criterion for individuating singular propositions: "A name is merely a means of pointing to the thing, and does not occur in what you are asserting, so that if

one thing has two names you make exactly the same assertion, whichever of the names you use ..." (Russell, 1918: 245). If a term is DR, in Kaplan's sense, then the propositions expressed by sentences in which it figures are individuated solely in terms of the individuals and properties that they are about, as opposed to in terms of more finely grained senses or concepts. Because the contribution a DR term makes to propositional content is its referent, sentences that differ only in the interchange of co-referential DR terms express the same proposition.

The difference between the DR and IR views, then, is most stark concerning such pairs of sentences – e.g., 'That [pointing to the heavens] is Mars' versus 'Mars is Mars.' On the IR view, there is a clear difference between the propositions expressed by such pairs of sentences; for one has as a constituent the meaning or sense of 'that,' whereas the other has as a constituent the meaning or sense of 'Mars.' However, the price paid is that IR views posit mediators between terms and referents, and critics allege that these mediators create more problems than they solve. On the DR view, there are no semantic mediators – no senses or concepts – involved in the content of a singular proposition; proponents of the view argue that this affords a more satisfactory account of the content and truth-conditions of such propositions. (That is, singular propositions are about the relevant referents per se, not about whatever might happen to satisfy a certain sense or concept.) However, the DR view allows no room for the intuition that such pairs of sentences can express distinct propositions: if 'that' and 'Mars' are co-referential, there is no semantic difference between the propositions expressed by sentences which differ only in their interchange. (See later for more on this difference, and its consequences.)

Recanati (1993) is an important subsequent work on DR. (Recanati is heavily influenced by the works of Perry, collected in Perry [1993].) In the interim, Evans (1982) and McDowell (1986) had spurred a neo-Fregean approach to singular terms by arguing that many criticisms of Frege's views can be met if senses are conceived as object-dependent (and so 'rigid,' in Kripke's [1972] terminology). Recanati defines DR terms as those with a semantic feature that indicates that the truth-condition expressed by sentences containing them is singular or object-dependent, and argues that this gets to the core of the difference between referring expressions (such as names and indexicals) and quantified noun phrases (such as 'a man' or 'all Texans'). Recanati's notion of DR is weaker than Kaplan's, in that neo-Fregean singular terms (which express object-dependent


senses) would be classified as DR in Recanati's sense but not in Kaplan's (because object-dependent senses figure as constituents of propositions for Recanati, which is inconsistent with Kaplan's Russellian criterion for individuating propositions). On Recanati's view, as distinct from Kaplan's, sentences that differ only in the interchange of co-referential DR terms, although truth-conditionally equivalent, express distinct propositions.

Neither Kaplan nor Recanati denies that DR expressions are semantically associated with something like a sense or manner of presentation. (This can hardly be denied for indexicals.) What Kaplan explicitly denies, in calling an expression DR, is that the sense or manner of presentation affects propositional content. What Recanati explicitly denies is that the sense or manner of presentation is truth-conditionally relevant. So, strictly speaking, the contemporary authorities use 'direct reference' to label an approach to propositional content, more so than as a label for any specific approach to reference.

Some Closely Related Concepts

Kaplan's DR is a part of an anti-Fregean tide that swept the philosophy of language in the 1970s. In much of the secondary literature, the distinction between DR and other aspects of that movement – such as the causal-historical theory of reference, the notion of rigid designation, and the Millian view of proper names – is lamentably blurred. It is not uncommon to find a bundle of such notions lumped together under the umbrella term 'the new theory of reference.' The aim of this section is to be more discriminating about the relations between DR and these other concepts and views.

The Millian view has it that names are connotationless labels: "A proper name is but an unmeaning mark which we connect in our minds with the idea of the object ..." (Mill, 1843: 22). This is no mere claim about truth-conditions; it is a very strong claim about the semantics of names. Mill even says that: "The name, therefore, is said to signify the subjects directly ..." (1843: 20). Nonetheless, it is clear that Mill's notion of direct reference is quite distinct from Kaplan's. Witness the fact that indexical expressions (e.g., 'I,' 'yesterday') are the very paradigm of DR for Kaplan, but are clearly not unmeaning marks. Kaplan (1977: 520) discusses this "drawback to the terminology 'direct reference'" – that it "suggests falsely that the reference [of an indexical] is not mediated by the meaning, which it is." Explicitly, Kaplan (1989: 568) does not deny that DR terms are semantically associated with something like a sense. What he denies is that the sense is a propositional constituent.

In the case of proper names, though, this conflation of DR with the Millian view is more prevalent, and more difficult to rebut. One reason is that there are strong considerations against the view that names are semantically associated with a particular semantic mediator (see especially Donnellan [1970] and Kripke [1972]). So, in the case of names, as compared with indexicals, it is more difficult to identify anything that plays a semantic role akin to Kaplan's character, singling out the referent without affecting propositional content. Hence, although it is implausible to hold that indexicals are Millian (even if they are DR), it is not uncommon to encounter the idea that names are Millian (i.e., a name's meaning is just its referent) and that names are DR (i.e., a name's contribution to propositional content is just its referent) are two sides of the one coin. However, to the contrary, there is clearly conceptual space for the view that proper names are DR but not Millian (i.e., names are semantically associated with some kind of sense or meaning, but nonetheless that sense or meaning is truth-conditionally irrelevant, no part of propositional content).

Although a term could clearly be DR without being Millian, it is plausible that if a term is Millian then it is DR. That is, if all that there is to the semantics of a term is that it labels a specific referent, then it is hard to see what else but the referent could affect propositional content. A similar relation holds between Kaplan's DR and Kripke's (1972) rigid designation. (A designator is rigid if it designates the same thing in every possible world.) There are clearly rigid designators that are not DR (say, 'the even prime'), but nonetheless any DR term will satisfy the criterion for rigidity (see Kaplan [1977: 492–498, 1989: 569–571]). So, there are rigid terms that are not DR, and DR terms that are not Millian; but any Millian term would be DR, and all DR terms are rigid.

There is a nice fit between the semantics of DR and the causal-historical story of how reference is determined. Donnellan's (1970) and Kripke's (1972) influential arguments that the meaning of a proper name is not some kind of descriptive sense have been taken to demonstrate, or to suggest, general and fundamental problems with any IR approach to names. From here, it looks compelling to conclude that a name's contribution to propositional content is just its referent. Furthermore, it is possible that appeal to distinct causal-historical chains of transmission can explain why it seems that co-referential names differ in meaning. (More on this later.)

In any case, the present point is just that these are all distinct doctrines, addressed to quite different questions. The Millian view is a bold conjecture about the semantics of names, DR is a somewhat


weaker claim about propositional content, rigid designation is a still weaker modal claim about certain terms, and the causal-historical theory is a picture of how reference is determined. There are deep and interesting relations between these concepts and views, but they should not be conflated.

Problems with Direct Reference

The classic problems for DR are, not surprisingly, the very problems that led to the development of the IR view in the first place. The central problem concerns questions of substitutivity – that is, there are reasons to think that interchanging co-referential expressions fails to preserve propositional content. Substitutivity arguments against the DR view of names have been part of the canon since Frege (1892). Competent speakers can fail to recognize that co-referential names do in fact name the same object, and so sentences that differ only in the interchange of co-referential names seem to express distinct propositions.

Clearly, similar things can happen with indexicals. Consider, for example, Kaplan's (1977: 537) case wherein he sees a reflected image of a man whose pants are on fire, then subsequently comes to recognize that he is that man. Even though his initial 'His pants are on fire' and subsequent 'My pants are on fire' are truth-conditionally equivalent, there are significant semantic differences between them. So, the DR theorists need to accommodate the considerable reasons for thinking that sentences that differ only in the interchange of co-referential terms can express distinct propositions. Alternatively put, although the IR view issues in more finely grained propositions, which are better suited to capture the content of a belief, the DR view issues in propositions which are too coarsely grained for this purpose. (For example, believing that 'His pants are on fire' and believing that 'My pants are on fire' would prompt different actions; insofar as the DR view is committed to the claim that these beliefs have the same content, then, something is amiss.)

Kaplan's (1977) notion of 'character' is intended to solve this problem for the case of indexicals – i.e., to explain the differences between 'His pants are on fire' and 'My pants are on fire,' in a way that is consistent with the semantics of DR. The extent to which Kaplan's account is successful is one of the central points of controversy in the subsequent literature on this topic. Note also that there are cases, first raised by Perry (1977), which character cannot accommodate. The cases concern two uses of the same demonstrative – and so character remains constant – wherein, unbeknownst to the speaker, the same individual is referred to twice. (For instance, a speaker

utters ‘This ship (pointing to the stern of a ship through one window) is American but this ship (pointing to the bow of a ship through a different window) is Japanese.’) Kaplan (1977: 514ff) makes some effort toward accommodating this kind of problem, but his remarks are sketchy, and have been contested. (Cf. Braun [1994] for discussion, and King [2001] for an overview of the burgeoning literature.) In the case of names, DR theorists have drawn on the causal-historical theory of reference to explain why it seems that co-referential names differ in meaning. This explanation, developed by Kripke (1972), Kaplan (1989), and Stalnaker (1997), relies on a sharp distinction between the semantic question of what a word refers to and the metasemantic question of why a word refers to what it does. DR is a semantic claim (i.e., sentences containing names express Russellian singular propositions), and the causalhistorical theory suggests a complimentary metasemantic claim (i.e., co-referential names are distinct words with different causal histories, and so there may be all manner of non-semantic differences between them). The thought is that these (nonsemantic) differences can explain why uses of coreferential names can communicate different things, even though the names are semantically equivalent. Whether such an account holds any promise of saving DR from Frege’s problem is also much contested. For attempts to make the case, see Salmon (1986) and Soames (2002); for arguments against its promise, see Schiffer (1987, 2003: Chap. 2). These and other debates surrounding DR continue to be among the most vibrant in contemporary philosophy of language.

See also: Character versus Content; Dthat; Proper Names; Proper Names: Philosophical Aspects; Propositional Attitudes; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Rigid Designation; Sense and Reference.

Bibliography

Braun D (1994). 'Structured characters and complex demonstratives.' Philosophical Studies 74, 193–219.
Carnap R (1947). Meaning and necessity. Chicago: University of Chicago Press.
Donnellan K (1970). 'Proper names and identifying descriptions.' Synthese 21, 256–280.
Evans G (1982). The varieties of reference. Oxford: Oxford University Press.
Frege G (1892). 'On sense and reference.' In Sullivan A (ed.) Logicism and the philosophy of language. Peterborough: Broadview, 2003. 175–192.

Kaplan D (1977). 'Demonstratives.' In Almog J, Perry J & Wettstein H (eds.) Themes from Kaplan. Oxford: Oxford University Press, 1989. 481–564.
Kaplan D (1989). 'Afterthoughts.' In Almog J, Perry J & Wettstein H (eds.) Themes from Kaplan. Oxford: Oxford University Press. 565–614.
King J (2001). Complex demonstratives. Cambridge, MA: MIT Press.
Kripke S (1972). Naming and necessity. Cambridge, MA: Harvard University Press.
McDowell J (1986). 'Singular thought and the extent of inner space.' In McDowell J & Pettit P (eds.) Subject, thought, and context. Oxford: Oxford University Press. 137–168.
Mill J S (1843). A system of logic. London: Longmans.
Perry J (1977). 'Frege on demonstratives.' Philosophical Review 86, 474–497.
Perry J (1993). 'The problem of the essential indexical' and other essays. Oxford: Oxford University Press.
Recanati F (1993). Direct reference. Oxford: Blackwell.

Russell B (1918). The philosophy of logical atomism. In Marsh R C (ed.) Logic and knowledge. London: Unwin Hyman, 1956. 175–282.
Russell B (1919). 'Descriptions.' In Sullivan A (ed.) Logicism and the philosophy of language. Peterborough: Broadview, 2003. 279–287.
Salmon N (1986). Frege's puzzle. Cambridge, MA: MIT Press.
Schiffer S (1987). 'The 'Fido'-Fido theory of belief.' Philosophical Perspectives 1, 455–480.
Schiffer S (2003). The things we mean. Oxford: Oxford University Press.
Searle J (1983). Intentionality. Cambridge: Cambridge University Press.
Soames S (2002). Beyond rigidity. Oxford: Oxford University Press.
Stalnaker R (1997). 'Reference and necessity.' In Hale B & Wright C (eds.) A companion to the philosophy of language. Oxford: Blackwell. 534–553.

Disambiguation
P Edmonds, Sharp Laboratories of Europe, Oxford, UK
© 2006 Elsevier Ltd. All rights reserved.

Introduction

Lexical ambiguity is common to all human languages. Indeed it is a fundamental defining characteristic of a human language: a relatively small and finite set of words is used to denote a potentially infinite space of meaning and so we find that many words are open to different semantic interpretations depending on the context. These interpretations can be called word senses. From very frequent words such as call (28 verb senses in the Princeton WordNet 2.0) to medium-frequency words such as bank (10 noun senses) to infrequent words such as crab (4 verb senses) to very rare words such as quoin (3 noun senses), lexical ambiguity is pervasive and inescapable. Table 1 lists some of the WordNet senses of these words.

Lexical disambiguation in its broadest definition is nothing less than determining the meaning of a word in context. Thus, as a computational problem it is thought to be AI-complete – it is as difficult as any of the hard problems in artificial intelligence including machine translation and commonsense reasoning. Of course, it is not an end in itself but an enabler for other tasks and applications of natural language processing such as parsing, semantic analysis of text, machine translation, information retrieval,

lexicography, and knowledge acquisition. In fact, it was first formulated as a distinct computational task during the early days of machine translation in the late 1940s, making it one of the oldest problems in natural language processing.

Lexical disambiguation is at the intersection of several fields, including linguistics, cognitive science, lexical semantics, lexicography, and, of course, computational linguistics. But it is the last two fields that have had the most influence on the research, the majority of which has focused on more constrained versions of the problem. In the field of computational linguistics, the problem is generally called word sense disambiguation (WSD): to computationally determine which sense of a word is activated by the use of the word in a particular context. For example, the context (in its broadest sense, including both the sentence and text itself and any other knowledge the reader might have of such situations) can disambiguate call in She was called into the director's office. Thus, WSD is essentially a task of classification; word senses are the classes.

This is a traditional and common characterization of WSD that sees it as an explicit process of disambiguation with respect to a fixed inventory of word senses. Words are assumed to have a finite and discrete set of senses – a gross reduction in the complexity of word meaning. This characterization has led to a dream that an accurate generic component for WSD will one day be developed.

Table 1 Examples of lexical ambiguity (senses from Princeton WordNet 2.0)

Word | Number of senses | Examples
call | 28 verb senses, 13 noun senses | 'to assign a name to', 'to get into communication by telephone', 'to utter a sudden loud cry', 'to lure by imitating the characteristic call of an animal', 'order, request, or command to come', 'order or request or give a command for'
bank | 8 verb senses, 10 noun senses | 'financial institution', 'sloping land'
crab | 4 verb senses, 7 noun senses | 'to direct an aircraft into a crosswind', 'to scurry sideways like a crab', 'to fish for crab', 'to complain'
quoin | 3 noun senses | 'expandable metal or wooden wedge used by printers to lock up a form within a chase', 'the keystone of an arch', 'solid exterior angle of a building; especially one formed by a cornerstone'

But we may never see this dream come true because WSD is highly application-dependent and domain-dependent. For one, a task-independent sense inventory is not a coherent concept: each task requires its own division of word meaning into senses relevant to the task. For example, the ambiguity of mouse (animal or device) is not relevant in English-French machine translation, but is relevant in information retrieval. The opposite is true of river, which requires a choice in French (fleuve 'flows into the sea' or rivière 'flows into a river'). Moreover, in any given domain of language use, many words are not ambiguous.

Second, completely different algorithms might be required by different tasks. In machine translation, the problem takes the form of target word selection. Here the senses are words in the target language, which often correspond to significant meaning distinctions in the source language (bank could translate to French banque 'financial bank' or rive 'edge of river'). In information retrieval, a sense inventory is not necessarily required because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is is unimportant.

Third, explicit WSD has not yet been convincingly demonstrated to have a positive effect on any significant application. In many applications, lexical disambiguation occurs implicitly by virtue of other operations such as domain identification or a phenomenon called mutual disambiguation.

Nonetheless, as a scientific endeavor, explicit WSD is very attractive: it is easy to define, experiment with, and evaluate and, as a result, is leading us to a better understanding of word meaning and context. Research has progressed steadily to the point where explicit WSD systems achieve consistent levels of accuracy on a variety of word types and ambiguities. The best performing systems use a supervised corpus-based approach, in which a classifier is trained for each distinct word over a corpus of manually annotated examples of each word in context. Bayesian learning and support vector machines have been the

most successful algorithms to date, probably because they can cope with the very high dimensionality of the feature space. Virtually any feature derivable from the surrounding context of a word has been used. The field is particularly rich in the variety of techniques employed, from dictionary-based methods that use the knowledge encoded in lexical resources to completely unsupervised methods that cluster occurrences of words, thereby inducing word senses.

Current accuracy on the task is difficult to state without a host of caveats. On English, accuracy at the homograph level (a coarse-grained level of sense distinction) is routinely above 90%, with some methods on particular homographs achieving 96.5%. On finer-grained sense distinctions, 73% accuracy was reported at Senseval-3, an open evaluation exercise held in 2004. The baseline accuracy, the performance of the simplest possible algorithm of always choosing the most frequent sense, was 55%. An upper bound on accuracy, a measure of the difficulty of the task based on human performance, was 67% (this figure may seem low, but see the section on Evaluation). Unsupervised systems do not perform as well. At Senseval-3, the best unsupervised systems achieved approximately 58% accuracy (below the baseline). Performance is highly affected by many factors including the granularity of the sense distinctions, the quality of the sense inventory, and the words chosen for evaluation. The rest of this article discusses these issues in greater detail.

Note that although lexical ambiguity is pervasive in all human languages, to a large extent the methods of disambiguation are independent of any particular language. Thus, most of the examples in this article are drawn from the research done on English, the language most employed in research.
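As a rough illustration of the supervised corpus-based approach just described, here is a minimal word-expert classifier in Python (not part of the original article). It uses Naive Bayes over a bag of context words; the training sentences for bank are invented, and real systems use far richer features than this.

```python
# A minimal sketch of supervised WSD: one Naive Bayes classifier per
# ambiguous word, trained on sense-tagged examples, with a bag of context
# words as features. The toy data for 'bank' is invented for illustration.
from collections import Counter, defaultdict
import math

training = [  # (context sentence, sense label)
    ("he deposited cash at the bank on main street", "financial_institution"),
    ("the bank raised its interest rates again", "financial_institution"),
    ("they fished from the grassy bank of the river", "sloping_land"),
    ("the canoe drifted toward the muddy bank", "sloping_land"),
]

def features(sentence, target="bank"):
    return [w for w in sentence.lower().split() if w != target]

# Estimate P(sense) and P(feature | sense) with add-one smoothing.
sense_counts = Counter(sense for _, sense in training)
feat_counts = defaultdict(Counter)
vocab = set()
for sentence, sense in training:
    for f in features(sentence):
        feat_counts[sense][f] += 1
        vocab.add(f)

def classify(sentence):
    best, best_lp = None, -math.inf
    for sense in sense_counts:
        lp = math.log(sense_counts[sense] / len(training))
        total = sum(feat_counts[sense].values())
        for f in features(sentence):
            lp += math.log((feat_counts[sense][f] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(classify("she opened an account at the bank"))        # financial_institution
print(classify("reeds grew along the bank of the stream"))  # sloping_land
```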

Making Sense of Words

Humpty Dumpty said [...]: "There's glory for you."
"I don't know what you mean by 'glory,'" Alice said.
Humpty Dumpty smiled contemptuously. "Of course

you don't – till I tell you. I meant, 'There's a nice knock-down argument for you!'"
"But 'glory' doesn't mean 'a nice knock-down argument,'" Alice objected.
"When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean – neither more nor less."
(Lewis Carroll, Through the Looking Glass)

Polysemy

Lexical semantics (see Lexical Semantics) is the theoretical study of word meaning, one aspect of which is lexical ambiguity, or polysemy. Word meaning is in principle infinitely variable and context sensitive. It does not divide up easily into distinct or discrete submeanings. Lexicographers frequently discover in corpus data loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The result is that most sense distinctions are not as clear as the distinction between bank as 'a money lender' and bank as 'a riverside'. For example, the former bank has several closely related meanings, including:

- the company or institution
- the building itself
- the counter where money is exchanged
- a money box (piggy bank)
- the funds in a gambling house
- the dealer in a gambling house
- a supply of something held in reserve
- a place where the supply is held (blood bank).

Ambiguity of this sort is pervasive in languages and is often difficult to resolve, even for people. A given use of a word will not always clearly fall into one of the available meanings in any particular list of meanings. Nevertheless, lexicographers do manage to group a word's uses into distinct senses, and all practical experience on WSD confirms the need for representations of word senses.

Lexical semantics defines a spectrum of distinctions in word meaning in terms of granularity, as shown in Figure 1. At the coarse-grained end of the spectrum (the top-left end), a word might have a small number of senses that are clearly different, but, as we move to finer-grained distinctions (the bottom-right end), the coarse-grained senses break up into a complex structure of interrelated senses.

At a coarse grain, many words do have clearly distinguishable senses. A word has part-of-speech ambiguity if it can occur in more than one part of speech. For example, sharp is an adjective ('having a thin edge'), a noun ('a musical notation'), a verb ('to

Figure 1 The spectrum of distinctions in word meaning.

raise in pitch'), and an adverb ('exactly'). Part-of-speech ambiguity does not necessarily indicate distinct meanings (e.g., the relation between a verb and its nominalization), but it can be resolved by part-of-speech tagging, a simpler and more accurate class of algorithms than the WSD algorithms given in this article. In the majority of WSD systems, part-of-speech tagging is used as an initial step, leaving the WSD algorithm to focus on within-part-of-speech ambiguity.

A homograph is a word that has two or more distinct meanings, but the definition is somewhat arbitrary. Etymology is a major source of homographs; for example, the bow of a ship derives from the Low German boog, whereas the bow for firing arrows derives from the Old English boga. (Incidentally, bow is a good example of the potential for WSD in a text-to-speech application to point to the right pronunciation.) Resolving homographic ambiguity routinely achieves above 90% accuracy and is generally considered a solved problem.

Hence, polysemy is the real challenge. Most common words have a complex structure of interrelated senses below the homograph level, as exemplified by bank. Even rare and seemingly innocuous words (e.g., quoin, see Table 1) have polysemous senses. Individual senses are often related by a process of extension or modification of meaning – it could be historical, functional, semantic, or metaphorical. For example, the mouth of a bottle, a cave, and a river are defined by analogy to the mouth of a person. Sometimes the relation is so close as to make disambiguation almost impossible without background knowledge on why the distinction was drawn. Consider two WordNet 2.0 (see WordNet(s)) senses of national: (1) 'in the interests of the nation' and (2) 'concerned with an entire nation or country'. When the relation is systematic across a class of words it is called regular polysemy and includes


ambiguities such as physical object/content (book) and institution/building (bank). Regular polysemy is not usually explicitly treated in dictionaries or in WSD, and indeed in some cases both senses can be active at once (book in I'm going to buy John a book for his birthday).

Many other phenomena make word meaning difficult to formalize, including slightly differing word use in context (e.g., ball as a tennis ball or football has different associations in text), fixed expressions (piggy bank), metonymy and metaphor (crown in the lands of the crown), and vagueness in context (national and book). Words can have as many meanings and subtle variations as people give to them.

So, is the very notion of word sense suspect? Some argue that task-independent senses simply cannot be enumerated in a list because they are an emergent (psychological) phenomenon, generated during production or comprehension with respect to a given task. Others go further to argue that the only tenable position is that a word must have a different meaning in every distinct context in which it occurs – words have an infinite number of senses. Notwithstanding the theoretical concerns of the logical or psychological reality of word senses, the field of WSD has successfully established itself by largely ignoring them. As with modern lexicography, which is based on the intuition that word uses do group into coherent semantic units, the field has been defined by a practical problem, which happens to be well suited to empirical and computational techniques. The inherent difficulty of lexical disambiguation proper is, of course, acknowledged – our understanding of lexical semantics is just far from adequate.

Context and Disambiguation

If polysemy is an intrinsic quality of words, then ambiguity is an attribute of text. Whenever there is uncertainty as to the meaning that a speaker or writer intends, there is ambiguity. So, polysemy indicates only potential ambiguity, and context works to remove ambiguity.

Principles of effective communication would have us avoid vagueness and ambiguity. This would mean eliminating all potential lexical ambiguity by creating a context that forces only one possible interpretation of every word. This is difficult to achieve: many a verbal dispute hinges on the confused multiple meanings of key terms. But sometimes ambiguity is desired and explicitly fashioned. Puns, for instance, require not only that two (or more) meanings be active simultaneously but that the reader recognizes the ambiguity: time flies like an

arrow; fruit flies like a banana. Intentional ambiguity is not just for humor. Everyone is familiar with the politician who uses ambiguous or vague terminology in the service of diplomacy, equivocation, or the evasion of difficult questions. And sometimes potential ambiguity just does not matter and is not worth the effort to resolve because either reading is acceptable (e.g., book or national).

Now, in normal well-written text or flowing conversation, potential ambiguity generally goes unnoticed by people. The effect is so strong that some people cannot find the pun that is in front of their nose. Evidence suggests that people use as little as one word of context in lexical disambiguation. This indicates that context works very efficiently behind the scenes in disambiguation by people. But to a WSD system, every polysemous word is ambiguous. It must resolve the ambiguity by using encoded knowledge of word meaning and the evidence it can derive from the context of a word's use. Thus, word meaning and context are core issues in WSD.

Measures of Difficulty

This section introduces several measures of the difficulty of WSD that can be computed from the distribution of word senses in text: average polysemy, the most frequent sense of a word, and the entropy of a sense distribution. A fourth measure, interannotator agreement, is discussed in the section on Evaluation.

How much potential ambiguity is there in text? First, consider dictionaries. In practical terms, there is a limit to the amount of polysemy that a vocabulary can bear; that is, only a finite number of concepts are lexicalized and granted the status of word sense. Longman's Dictionary of Contemporary English (LDOCE), for example, lists 76 060 word senses spread over 35 958 unique words (lexical units, to be precise). Of these unique lexical units, 38% (14 147) are polysemous, so the average polysemy of LDOCE is 3.83 senses per polysemous word. Every dictionary has a different division of meaning. WordNet 2.0 has an average polysemy of 2.96 senses per lexical unit (125 784 unique lexical units, 26 275 ambiguous covering 77 739 senses).
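Average polysemy follows directly from such counts. Here is a minimal sketch (not from the article) assuming the inventory is available as a simple mapping from lexical units to sense counts; the toy entries reuse the total sense counts from Table 1.

```python
# A minimal sketch of computing average polysemy over a sense inventory,
# represented here simply as a mapping from lexical units to sense counts.

def average_polysemy(inventory):
    """Average senses per polysemous word, and share of polysemous words."""
    poly = {w: n for w, n in inventory.items() if n > 1}
    return sum(poly.values()) / len(poly), len(poly) / len(inventory)

# Toy inventory (sense totals for the first four words are from Table 1;
# the monosemous entries are invented). In practice this would be read
# from a resource such as LDOCE or WordNet.
toy = {"call": 41, "bank": 18, "crab": 11, "quoin": 3, "sofa": 1, "oboe": 1}
avg, share = average_polysemy(toy)
print(f"{avg:.2f} senses per polysemous word; {share:.0%} polysemous")
```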


Now consider text. Table 1 provides a clue that the more frequent a word is in actual text, the more senses it is likely to have. This skewed distribution was first observed by George Zipf, who attributed it to his Principle of Least Effort (Zipf, 1949). Zipf argued that to minimize effort a speaker would ideally have there be a single word with all meanings, whereas the hearer would prefer each word to have a single different meaning. These competing pressures led Zipf to the Law of Meaning, a power-law relationship between the number of senses of a word, s, and its rank, r, in a list sorted by word frequency:

s ∝ r^(−k)

He empirically estimated an exponent k = 0.466 using the Thorndike-Century dictionary. Zipf thereby explained the origin of word senses. (Note that this law is different from Zipf's Law about the distribution of word frequencies.)

Figure 2 graphs the skew of words in the British National Corpus (BNC) with respect to WordNet 2.0 senses. BNC words (root forms of nouns, verbs, and adjectives) in rank order by frequency in the BNC are plotted against the number of WordNet 2.0 senses per word. Each point actually corresponds to the mean number of senses in a bin of 100 words in rank order. The distribution is a power law with the exponent k = 0.404, very close to Zipf's estimate. Clearly, a few very frequent words are very polysemous, and most words, on the tail, have only one or two senses.

Thus, the average polysemy of a text, considering word occurrences, will be higher than statistics covering a dictionary suggest. The BNC has an average polysemy of 8.04 WordNet 2.0 senses per polysemous word (84% of word occurrences are potentially ambiguous) and 10.02 LDOCE senses (Table 2). Data sparseness is unavoidable for most ambiguous words in the corpus, which implies there will be a problem in discovering the contextual clues for disambiguation.
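The exponent of such a power law is typically estimated by a straight-line fit in log-log space. A minimal sketch follows (not from the article): the data are synthetic rather than the BNC, and the generating exponent of 0.45 is invented for illustration.

```python
# A minimal sketch of estimating the Law of Meaning exponent k in
# s ∝ r^(-k): a least-squares line fit in log-log space over
# (rank, number-of-senses) pairs. The data below are synthetic.
import math
import random

random.seed(0)
# Synthetic (rank, senses) data following s = 30 * r^(-0.45), with noise.
data = [(r, 30 * r ** -0.45 * random.uniform(0.8, 1.2))
        for r in range(1, 2001)]

xs = [math.log(r) for r, s in data]
ys = [math.log(s) for r, s in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"estimated k = {-slope:.3f}")  # prints a value near 0.45
```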

Average polysemy is unsatisfactory as a measure of difficulty because it might actually be an overestimate – the division of meaning might not match the domain of discourse or the task. A heuristic called one sense per discourse states that words are not ambiguous within a single discourse; a given word will be used in the same sense throughout a given document, or, more strongly, throughout texts in the same domain. For example, in weather reports, wind will always have the obvious sense and none of its other senses (8 noun senses in WordNet 2.0). Average polysemy would drop to 1.0, putting WSD out of a job. However, even domain-specific texts can contain potentially ambiguous words. For example, line in text about electronics can mean at least 'a wire in a circuit', 'a product line', 'a production line', and 'a bottom line'. One study reports that 33% of words in SemCor have multiple senses per document. So, a system has to decide for which words and domains the one-sense-per-discourse heuristic applies. Moreover, many applications are open domain, such as wide-coverage machine translation and web/news search engines, and would benefit from a domain-independent WSD component.

A more accurate way to calculate average polysemy is to use a sense-tagged corpus to count the senses that are actually attested in the corpus. SemCor is a 234 000-word corpus manually tagged with WordNet 1.6 senses. It has been extremely valuable in WSD research. The average polysemy of SemCor is 6.3 senses per word – not all senses are used in the corpus.

Not only is the distribution of words with respect to number of senses skewed but also the distribution of senses of a word. Figure 3 reveals that in SemCor,

Figure 2 Skew of the distribution of words by number of senses. BNC words are plotted on the horizontal axis in rank order by frequency in the BNC. Number of WordNet senses per word is plotted on the vertical axis. Each point represents a bin of 100 words and the average number of senses of the words in the bin.


the most frequent sense of a word accounts for the majority of the word's occurrences. The distributions (power-law relationships again) of 12 word classes in SemCor ranging from 1-sense words to 12-sense words are shown in 12 columns. Senses are ordered bottom to top by the proportion of occurrences of the word that they account for, normalized per word, and averaged over all words in the class. Data sparseness is also a problem for the rarer senses of a word. Choosing the most frequent sense provides a high baseline to measure performance against; in SemCor it achieves 39% accuracy against 18% for random choices.
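The most-frequent-sense baseline is straightforward to compute from a sense-tagged corpus. Here is a minimal sketch (not from the article); the corpus is an invented toy list of (word, sense) tokens rather than SemCor.

```python
# A minimal sketch of the most-frequent-sense baseline: for each word,
# always predict the sense that is most frequent in a sense-tagged
# training corpus. The corpus here is an invented toy example.
from collections import Counter, defaultdict

train = [("bank", "finance"), ("bank", "finance"), ("bank", "river"),
         ("line", "wire"), ("line", "queue"), ("line", "wire")]
test = [("bank", "finance"), ("bank", "river"),
        ("line", "wire"), ("line", "queue")]

# Most frequent sense per word, from the training tokens.
by_word = defaultdict(Counter)
for word, sense in train:
    by_word[word][sense] += 1
mfs = {w: c.most_common(1)[0][0] for w, c in by_word.items()}

correct = sum(mfs.get(w) == s for w, s in test)
print(f"MFS baseline accuracy: {correct / len(test):.0%}")  # 50% here
```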

Table 2 Average polysemy of WordNet 2.0 and LDOCE and the BNC

Resource | WordNet 2.0 | LDOCE
Number of words | 125 784 | 35 958
Number of polysemous words | 26 275 | 14 147
Number of senses | 77 739 | 76 060
Average polysemy (all words) | 0.618 | 2.12
Average polysemy (polysemous words) | 2.96 | 3.83
Average polysemy of BNC (all words) | 7.23 | 8.87
Average polysemy of BNC (polysemous words) | 8.04 | 10.02

Difficulty can also be assessed with respect to an individual word, in terms of its number of senses, the proportion of its most frequent sense, and sense entropy. Sense entropy is a measure of the skew in a word's sense distribution. High entropy represents a less-skewed, and therefore more difficult, problem. Studies show that the accuracy of WSD algorithms (supervised learning methods, in particular, were analyzed) is roughly correlated with task difficulty according to any of these measures. For example, when the proportion of the most frequent sense exceeds 80%, algorithms do not do any better than the baseline of choosing the most frequent sense.
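Sense entropy as used here is just the Shannon entropy of a word's empirical sense distribution. A minimal sketch (not from the article; the sense counts are invented):

```python
# A minimal sketch of sense entropy: the Shannon entropy of a word's
# empirical sense distribution. A skewed distribution (one dominant sense)
# gives low entropy; a near-uniform one gives high entropy.
import math

def sense_entropy(sense_counts):
    total = sum(sense_counts.values())
    probs = [c / total for c in sense_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Invented sense distributions for illustration.
skewed = {"finance": 90, "river": 8, "tilt": 2}      # easy: MFS covers 90%
uniform = {"finance": 34, "river": 33, "tilt": 33}   # hard: no dominant sense

print(f"skewed:  {sense_entropy(skewed):.2f} bits")   # ~0.54
print(f"uniform: {sense_entropy(uniform):.2f} bits")  # ~1.58, near log2(3)
```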

Applications and the Sense Inventory

A long-standing debate is whether WSD should be thought of as a generic component, a kind of black box that can be dropped into any application, much like a part-of-speech tagger, or as a task-specific component designed for a particular application in a specific domain and integrated deeply into a complete system. On the one side, research into explicit WSD has progressed steadily and successfully to a point where some people question if the upper limit in accuracy has already been attained. On the other side, explicit WSD has not yet been convincingly demonstrated to have a positive effect on any significant application. Only the integrated approach, with disambiguation often occurring implicitly by virtue of other operations, has been successful. The one side is clearly easier to define, experiment with, and evaluate; the other has applications and threatens the need for explicit WSD altogether. The majority of researchers who focus on WSD take the former side.

Figure 3 Skew in the distribution of the senses of words. The distributions for 12 word classes in SemCor ranging from 1-sense to 12-sense words. In each class (column), the senses are ordered by frequency, normalized per word, and averaged over all words in the class.


The debate can be explained in terms of the sense inventory. Every application of word sense disambiguation requires a sense inventory, an exhaustive listing of all the senses of every word that an application must be concerned with. The nature of the sense inventory depends on the application, and the nature of the disambiguation task depends on the inventory. The three Cs of sense inventories are clarity, consistency, and complete coverage of the range of meaning distinctions that matter. Sense granularity is a key consideration: too coarse and some critical senses may be missed, too fine and unnecessary disambiguation errors may occur. For example (repeated from the introduction), the ambiguity of mouse (animal or device) is not relevant in English-French machine translation, but is relevant in information retrieval. The opposite is true of river (fleuve 'flows into the sea' or rivière 'flows into a river'). Thus, the source of the sense inventory is the main decision facing all researchers and application developers. Next, the four main sources of sense inventories and three main application areas are described.

Four Sources of Sense Inventories

Dictionary-based inventories have their source in machine-readable dictionaries (MRDs). Because of their early availability, before large textual corpora, some of the seminal work in WSD relied on MRDs, and many current methods extract knowledge from them. LDOCE has seen the most use in WSD. It provides hierarchical sense distinctions from the homograph level down to a fine granularity, and entries include extra information useful in WSD, such as subject codes and example sentences. LDOCE is a commercial product, but another dictionary, HECTOR, was developed primarily as a research tool by Digital Equipment Corporation and Oxford University Press, one of its goals being to support WSD research. HECTOR was used in Senseval-1 (see the Evaluation section) and could have developed into a very widely used resource. It is linked to a sense-annotated corpus, from which the senses were derived. However, it is incomplete, covering approximately 1400 lexical entries.

Dictionary-based inventories have several disadvantages. Dictionaries, whose market is people (not natural language processing (NLP) researchers or application developers), are subject to standard market pressures, which dictate the size of the dictionary, the coverage and depth, and, crucially, the granularity and interpretation of sense distinctions. As a result, the senses may not match those required by the application. Dictionaries also assume the vast knowledge of a human reader and so leave out commonsense information that is very useful in WSD.

A lexical database (or lexical knowledge base) is a step beyond the MRD. The main example, WordNet, has become the de facto standard in WSD research (for English; WordNets in other languages have also been used in WSD). WordNet shares many of the advantages and disadvantages of MRDs because, although it was designed for research, it was not specifically designed for WSD. It has the significant advantage that senses, or synsets, form a semantic network (primarily a hierarchy), which has been very useful in WSD, for example, to compute the relatedness between word senses. Its disadvantages are that it focuses on concept similarity rather than on what makes two senses different, and that it is too fine-grained for applications and even for human annotators to reach high agreement. The latter disadvantage can be overcome by grouping closely related senses, depending on the task and corpus.

A thesaurus, especially Roget's International Thesaurus with its extensive index, can also be used as a sense inventory: each entry of a word under a different category usually indicates a different sense.

A multilingual dictionary can also form a sense inventory. The translations of a word into another language can serve as word sense labels because the different meanings often translate into different words. This phenomenon is most consistent for homographs (e.g., change into French changement ('transformation') or monnaie ('coins')), but even very fine-grained distinctions are sometimes lexicalized differently, especially in distantly related language pairs (e.g., Chinese lexicalizes the building/institution polysemy of church with distinct words for 'building, e.g., temple' and 'institution'). One advantage of translations is that they provide a practical level of sense granularity for many applications, especially machine translation. But the major advantage is the possibility of easily acquiring large amounts of training data from parallel texts. The disadvantage is that, outside of machine translation in the given language pair, the word senses do not always carry over to other language pairs or applications; for example, interest in three of its major senses ('sense of concern', 'legal share', and 'financial accrual') corresponds to a single word in French, intérêt. We also lose the extra information contained in MRDs and lexical databases.

Automatically induced sense inventories are a response to the disadvantages of dictionaries and other hand-built resources. By deriving a sense inventory directly from a corpus, the right level of sense granularity can be achieved, and no external resources are required. A bilingual sense inventory can also be induced from a parallel corpus (i.e., a corpus in two languages) by word-aligning the corpus. The advantage of the approach is also its disadvantage: an inventory that directly characterizes the sense distributions of a corpus cannot be easily used with a different corpus. Also, it can be difficult to obtain a corpus large enough to contain evidence for each important sense (at least 50 instances per sense). Moreover, the induced senses might not have human-readable labels, making it difficult to map the induced inventory to another (such as WordNet), which makes system comparison problematic.

Applications

Machine Translation

Early researchers in machine translation (MT) felt that the inability to automatically resolve sense ambiguity was a key factor in the intractability of general MT. However, explicit WSD has yet to be shown to be useful in real MT applications. Instead, implicit disambiguation, such as target word selection, has been used in MT. Domain plays a strong role in disambiguation (recall the one-sense-per-discourse heuristic previously discussed). Most real MT systems rely on specialized dictionaries for each domain that leave most words unambiguous. Any remaining serious ambiguity can often be handled using hand-crafted rules. In fact, even general-domain MT systems, such as Systran, reportedly use extensive sets of hand-crafted rules to get major sense distinctions right. So WSD is not ineffective; it is just subsumed by a different semantic process, the development of lexical resources.

Statistical MT systems resolve ambiguity in a different manner. Roughly, statistics model how a source word or sequence of words translates into the target language. The model induces a sense inventory with translation probabilities for target word selection. A good model of target language sequences is also required. For example, one early statistical MT system makes the following incorrect translation (from French to English):

Je vais prendre ma propre décision.
'I will take my own decision.'

This is because it chooses the most common translation of prendre 'take'; the model does not realize that take my own decision is improbable, because it knows only about three-word sequences (trigrams). In this case, an explicit WSD component improved the accuracy of the overall system by 22% (Yarowsky, 2000). But this line was never pursued, because it was thought better to improve the translation model itself by using a more structured representation of the context. Lexical disambiguation would then occur implicitly, but would rely on the same type of contextual information as explicit WSD uses.

Lexicography and Information Extraction

A broad range of applications in knowledge acquisition can make use of WSD. In particular, lexicography and WSD have a mutual relationship, in that lexicographers build the sense inventories that WSD disambiguates to. In productive use, WSD and lexicography can work in a loop, with WSD providing rough sense groupings to lexicographers, who provide better sense inventories and sense-annotated corpora to WSD. The HECTOR project was the first attempt to do this, but it was never fully realized. Later efforts have occurred within the Senseval framework – senses that are difficult for human annotators or for systems are fed back to lexicographers for improvement.

Lexical resources and knowledge bases are continuing to grow in many languages. WSD is playing a key role in mapping between resources to create consistent multilingual resources (for example, mapping between WordNets in different languages). WSD has been used to disambiguate the definitions and example sentences in dictionaries, to better link up the dictionary. End-user applications might include an intelligent dictionary that can find the right word sense given the context of a word, making dictionaries easier to use for second-language learners.

In other knowledge acquisition efforts, such as the information extraction and filtering used in the intelligence community, word meaning is crucial. Information extraction has to build a database of, say, world events, by linking textual references to the right concepts in the database or ontology. An information filtering system might be set up to flag references to, say, illegal drugs; false hits involving medical drugs would have to be avoided. Often such systems rely on hand-crafted disambiguators for the words and senses in question. Named-entity classification and co-reference determination are basically WSD for proper names.

Information Retrieval

Information retrieval (IR) has seen the most work on proving the worth of explicit WSD in an application. The intuition is that WSD should help to improve IR systems by removing those hits that match a query word in the wrong sense. Consider querying for banks to invest with and receiving results about the Amazon River. However, the general consensus in the IR community is that explicit WSD makes only marginal improvements in precision and, in some cases, degrades performance. The reasons are the same as for MT: either the IR system is domain-specific, which significantly reduces the problem, or mutual disambiguation occurs. Mutual disambiguation is the phenomenon whereby words that naturally co-occur in queries and in documents tend to disambiguate one another. For example, the query "bank to invest with" would retrieve a document containing bank and invest (because IR systems generally index and retrieve on words), in which bank most likely happens to be used in the financial sense (bank in its river sense would not tend to co-occur with invest). Mutual disambiguation is another form of implicit disambiguation, directly encoding the same type of contextual information as explicit WSD uses.

In IR, explicit WSD is applied by indexing word senses rather than words and then performing WSD on any input query. It has been suggested that 90% accuracy is necessary to improve performance and that a 20–30% error rate is equivalent to no disambiguation at all; a higher error rate will actually degrade performance. Current WSD does not approach this level of accuracy except for homographs; but, then, it is often said that only homograph-level distinctions are relevant in IR, because matches of different polysemous senses could well be desirable to the user. But consider the word ball, which has a fine-grained ambiguity with respect to different sports that could be relevant to a user's information need. This implies that choosing the right sense inventory depends not only on the collection but also on the information needs of the users.

WSD would be potentially effective in two cases. First, it would improve performance on short two- to four-word queries (common on Web search engines), where mutual disambiguation does not work consistently. Unfortunately, short queries are also difficult for WSD techniques for the same reason – lack of context. Second, when query expansion is used (i.e., to add synonyms and other related words to queries), WSD can ensure that only synonyms in the right sense are added. Cross-lingual IR does benefit from explicit WSD to translate and expand the query properly, avoiding the noise added by incorrect translations. In several experiments to automatically induce a sense inventory from an IR collection, a 7–14% improvement in IR precision was observed (Schütze, 2000). The induced inventory can pick out the fine-grained ambiguities (such as ball) when they are present, and disambiguation errors caused by a mismatch between an external sense inventory and the collection are reduced. Finally, WSD has been applied in several IR-based end-user applications, including news recommenders and automatic advertisement placement. For example, the word ticket in a query could trigger ads about airline tickets, traffic tickets, or theater tickets, depending on its sense in the query.

Historical Context

This section acknowledges a few of the visionaries, firsts, and influential works in WSD. It cannot come close to acknowledging all contributors.

WSD as a distinct computational problem has its roots in the first research on machine translation, and early researchers well understood the significance and difficulty of WSD. Warren Weaver, director of the Natural Sciences Division of the Rockefeller Foundation, circulated a now-famous memorandum in 1949, which already formulated the general methodology to be applied in all future work:

  If one examines the words in a book, one at a time through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of words. 'Fast' may mean 'rapid'; or it may mean 'motionless'; and there is no way of telling which. But, if one lengthens the slit in the opaque mask, until one can see not only the central word in question but also say N words on either side, then, if N is large enough one can unambiguously decide the meaning . . . (Weaver, 1955)

Weaver also recognized the basic statistical character of the problem and proposed that statistical semantic studies be undertaken as a first step. In 1949, George Zipf published the Law of Meaning in his book Human Behaviour and the Principle of Least Effort (Zipf, 1949). Abraham Kaplan, in 1950, called ambiguity the "common cold of the pathology of language" (Kaplan, 1955: 39). His study determined that two words of context on either side of an ambiguous word were equivalent to a whole sentence of context in resolving the ambiguity. The 1950s then saw much work on estimating the degree of ambiguity in texts and bilingual dictionaries and on applying simple statistical models (e.g., choosing the most frequent sense) or a Bayesian formula to determine the probability of a sense given the domain.

By the mid-1960s, MT was in decline, the perception that general MT was intractable having reached its height. Yehoshua Bar-Hillel argued that even the relatively simple ambiguity of pen in the following now-famous example could not be resolved by "electronic computer", because of the need to model, in general, all world knowledge (Bar-Hillel, 1960):

  Little John was looking for his toy box. Finally he found it. The box was in the pen. John was very happy.

Arguments such as this led to the 1966 ALPAC report, which in turn caused the end of most MT research, and of WSD research along with it.

In the 1970s, WSD revived within artificial intelligence research on full natural language understanding. Margaret Masterman and Ross Quillian had in the early 1960s pioneered the use of semantic networks (of words and senses) and spreading activation to solve WSD. Yorick Wilks then developed preference semantics, one of the first systems to explicitly account for WSD. The system used selectional restrictions and a frame-based lexical semantics to find a consistent set of word senses for the words in a sentence. The idea of individual word experts evolved over this time (Steven Small and Charles Rieger). Word experts encode, for each word, the constraints and procedural rules necessary to disambiguate it, and interact with each other to disambiguate all the words in a sentence. In the end, such work faced an impractical knowledge-acquisition bottleneck because of the hand-coding required, but the idea of word experts carried on within the statistical paradigm.

A turning point for WSD occurred in the 1980s, when large-scale lexical resources and corpora became available and hand-coding could be replaced with knowledge extracted from the resources. Michael Lesk's short but seminal work (Lesk, 1986) used the overlap of word sense definitions in the Oxford Advanced Learner's Dictionary of Current English to resolve word senses; the technique is often used as a baseline today. Other researchers used LDOCE subject codes (e.g., EC for Economics), which label domain-specific senses of words, and Roget's International Thesaurus.

The 1990s saw two main developments: the statistical revolution in NLP swept through, and Senseval began. Consequently, there was an exponential increase in the research output on WSD, and it became difficult to single out any one researcher. Weaver had recognized the statistical nature of the problem. Early corpus-based work by Stephen Weiss in 1973 on WSD for IR and by Edward Kelley and Philip Stone in 1975 on content analysis demonstrated the potential of empirical evidence and machine-learning approaches, presaging the statistical revolution. Peter Brown and his IBM colleagues demonstrated the first use of corpus-based WSD in statistical MT. By the mid-1990s, a wide variety of supervised and unsupervised machine-learning techniques had been applied to WSD (David Yarowsky and his colleagues were influential), but it remained difficult to compare the various results because of disparities in the words, sense inventories, and corpora chosen for evaluation. Senseval, a forum for the common evaluation of WSD, was first discussed in 1997 (Adam Kilgarriff and Martha Palmer). Senseval has provided a consensus on the appropriate tasks and framework for evaluation, three open competition-based evaluation exercises, and substantial resources (e.g., sense-annotated corpora) for WSD in many languages.

Statistical corpus-based techniques have now been extensively researched, and supervised learning algorithms consistently achieve the best performance on explicit WSD, given sufficient training data.

Methods for Word Sense Disambiguation

This section covers many of the methods for explicit, stand-alone WSD. Implicit disambiguation usually relies on similar contextual evidence and knowledge sources, but the algorithm is entwined with the other processes of an application. The methods are described at a high level of abstraction. Accuracy is given in some cases, but direct comparisons are difficult because the conditions of each experiment were different (see the Evaluation section).

Computational Formulation of the Problem

Explicit WSD is a natural classification problem: given a word and its possible senses, classify each instance of the word in context into one or more of its sense classes. The features of the context provide the evidence for classification. WSD is characterized by a very high-dimensional feature space; that is, the surrounding context of a word has many features that can bear on its classification. These include features of the surrounding words:

• Word strings (or root words or morphological segments)
• Part-of-speech tags (e.g., 'Noun', 'Transitive verb')
• Subject/domain codes (e.g., 'EC' for Economics in LDOCE)
• Sense classes (of disambiguated or partially disambiguated words)
• Semantic classes and selectional restrictions (e.g., 'Person', 'Drinkable')

features of the relational structure that the instance of the word takes part in:

• Syntactic relations (e.g., modification by an adjective)
• Collocational patterns (i.e., recurrent fixed patterns such as river bank)
• General co-occurrence relations (e.g., invest anywhere in the local context of bank)
• Semantic relations (e.g., similarity or hypernymy)

and features of the text as a whole:

• Topical features (e.g., words and concepts commonly found in wider contexts)
• Subject/domain codes or other classification of a text
• Genre (e.g., financial news)
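
As a concrete illustration of the local-context portion of such a feature space, the sketch below (in Python; the feature names and window size are illustrative choices, not a standard API) reduces one instance of a target word to a simple feature dictionary combining bag-of-words context features with positional collocation features:

def extract_features(tokens, i, window=3):
    # Build a feature dictionary for the token at position i:
    # unordered bag-of-words features from a +/-window context, plus
    # positional collocation features for the immediate neighbours.
    features = {}
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            features[f"bow={tokens[j].lower()}"] = 1   # co-occurrence evidence
    if i > 0:
        features[f"w-1={tokens[i-1].lower()}"] = 1     # left collocate (cf. 'pesticide plant')
    if i + 1 < len(tokens):
        features[f"w+1={tokens[i+1].lower()}"] = 1     # right collocate
    return features

tokens = "the pesticide plant was closed down".split()
print(extract_features(tokens, 2))  # features for the instance of 'plant'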


Specific word order or syntactic structure is often crucial (e.g., the word plant immediately to the right of pesticide indicates a factory, but in other positions indicates flora). The features nearest to the target word typically provide the most predictive power.

A separate classifier, or word expert, is constructed for each word based on various knowledge sources. Hand-construction is one possibility, as in the early artificial intelligence paradigm; automatic acquisition is more common, either from the knowledge in lexical databases (including definitions, example sentences, semantic relations, and subject codes), from corpora (sense-annotated or not), or both.

Some systems perform probabilistic classification, in which a word instance is assigned to multiple sense classes with a probability distribution when there is insufficient evidence for any one sense. This can be effective when combining multiple different sense disambiguators, or in applications such as information retrieval in which the later processing is itself probabilistic.

Two less-common formulations of WSD are as a filter and as an inducer. A filter removes unlikely senses; for example, a single piece of evidence, say a selectional restriction, might immediately rule out a sense. A sense inducer discovers sense classes by clustering the contexts of a word's instances.

Finally, a note about the computational processing required by all methods. Generally, the input text (and training corpus) is preprocessed by standard NLP components, including part-of-speech tagging, stemming, morphological analysis and segmentation, and sometimes parsing. Feature vectors are then created in the required formalism. Beyond the basic lexical resources used, the training corpus is sometimes processed to build lexical networks, neighborhoods, and bilingual word alignments. The computational complexity of WSD has not yet been a general concern, except when it makes running hundreds or thousands of experiments infeasible.

Dictionary-Based Methods

In many respects, dictionary-based methods are the easiest to comprehend, because it is obvious why they work when they work. The Lesk Method, as it has come to be known, was the first to use dictionary definitions, the obvious source of knowledge about word meanings. It is based on the hypothesis that words used together in text are related to one another and that the relation can be observed in the definitions of the words and their senses (cf. mutual disambiguation). Thus, the method disambiguates a word by comparing its definition to those of the surrounding words. In the case of two words, it considers all combinations of the senses of the two words, computing the overlap of every pair of definitions; the pair with the largest overlap is selected. For example, in pine cone, the senses 'kinds of evergreen tree with needle-shaped leaves' of pine and 'fruit of certain evergreen trees' of cone have the largest overlap (two words) of all combinations. One implementation achieved 50–70% accuracy on a small test set (Ide and Véronis, 1998).

This basic method suffers from data sparseness and is sensitive to the exact wording of definitions. Simple extensions include additional elements in the overlap calculation: example sentences, definitions of words in the sense definitions, definitions of related word senses (e.g., by hypernymy in WordNet), and sentences from a sense-annotated corpus. The Lesk Method is often inefficient for more than a few words, because there are too many combinations of word senses to consider. (An approximate solution uses simulated annealing.) But, because of its simplicity, it is often used as a baseline to assess the performance of other systems.

The Lesk Method can be generalized to use general word-sense relatedness rather than definition overlap. For instance, a hierarchical lexical database such as Roget's or WordNet can be used to compute the semantic similarity of any two word senses. A very simple method of WSD is then to determine which sense of a target word has the greatest similarity to the words in its surrounding context. However, reported accuracy is slightly worse than Lesk using WordNet glosses and relations.

Roget's International Thesaurus is also a good source of knowledge about semantic relationships; the approximately 1000 heads under which all words are categorized can be thought of as semantic classes or word senses. Masterman's early work (see the section on Historical Context) used Roget's for target word selection in machine translation by examining overlaps in the lists of heads that words fall under. A second approach uses Roget's (or, indeed, any lexical database with semantic categories, including LDOCE's subject codes) as a source of word lists for the different semantic classes of an ambiguous word. A word-class classifier can then be trained on the aggregate context of all the members of each class (see the section on Supervised Corpus–Based Methods). For example, to disambiguate crane, a classifier is built to distinguish between the bird and machine classes using the word lists (heron, grebe, hawk, . . .) and (jackhammer, drill, bulldozer, . . .) and their respective contexts. Even though some of the words will add noise through their own polysemy, enough are monosemous to still build an effective classifier. This unsupervised method has achieved 92% accuracy on homograph distinctions (Yarowsky, 2000).
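
Returning to definition overlap, a minimal two-word version of the Lesk Method can be sketched as follows (the toy definitions are abridged stand-ins for real MRD entries, and, unlike a production system, no stop-word filtering or stemming is done):

def lesk(target_defs, context_defs):
    # Choose the sense of the target word whose definition shares the
    # most words with any definition of the context word (simplified
    # two-word Lesk; both arguments map sense labels to definitions).
    best_sense, best_overlap = None, -1
    for sense, definition in target_defs.items():
        words = set(definition.lower().split())
        for other in context_defs.values():
            overlap = len(words & set(other.lower().split()))
            if overlap > best_overlap:
                best_sense, best_overlap = sense, overlap
    return best_sense, best_overlap

pine = {'tree': 'kinds of evergreen tree with needle-shaped leaves',
        'waste': 'waste away through sorrow or illness'}
cone = {'shape': 'solid body which narrows to a point',
        'fruit': 'fruit of certain evergreen trees'}
# Prints ('tree', 2): without stemming, 'tree'/'trees' do not match, so
# the winning overlap here is {'evergreen', 'of'}.
print(lesk(pine, cone))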

Selectional Restriction–Based Methods

A selectional restriction (see Selectional Restrictions) is a constraint on the semantic type of the argument or modifier of the head of a syntactic constituent. For example, to drink gin is to 'drink an alcoholic beverage', not to 'quaff a card game', because drink selects for an object of type liquid. Common in the artificial intelligence paradigm of semantic analysis, this method can be combined with syntactic analysis to progressively eliminate inappropriate senses and so compose a consistent set of semantic templates into a semantic representation of a sentence.

Selectional restrictions are limited because they can be too general or too strict (e.g., my car drinks gasoline violates the restriction that the subject of drink must be animate). One solution is to treat selectional restrictions as preferences (Wilks's preference semantics) or as selectional associations. A selectional association is a probability distribution over the classes of a concept hierarchy, such as WordNet, that expresses the likelihood of any class occurring as, say, the object of drink (e.g., Prob(BEVERAGE | drink) vs. Prob(GAME | drink)). The distribution is computed analogously to a word-class classifier, by combining corpus statistics on occurrences of drink and its many syntactic objects with the semantic classes of those objects in the concept hierarchy. Still, the improved method does not perform well enough on its own and should be treated as a filter.
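
Under strong simplifying assumptions (a flat set of semantic classes rather than the WordNet hierarchy, and toy counts in place of parsed corpus data), a selectional association of the kind just described can be estimated by relative frequency:

from collections import Counter

def selectional_association(object_classes):
    # Estimate Prob(class | verb) from the semantic classes observed
    # as the verb's direct objects in a (hypothetical) parsed corpus;
    # a flat stand-in for hierarchy-based selectional association.
    counts = Counter(object_classes)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Hypothetical classes of the objects of 'drink'
drink_objects = ['BEVERAGE'] * 18 + ['LIQUID'] * 5 + ['GAME'] * 1
assoc = selectional_association(drink_objects)
print(assoc['BEVERAGE'], assoc['GAME'])  # 0.75 vs. ~0.04: gin as drink, not card game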

Connectionist Methods

Connectionist methods are based on psycholinguistic theories that semantic priming plays a role in human disambiguation. In connectionist disambiguation, spreading activation operates over a network of word and concept nodes and disambiguates all words simultaneously. Successive words in a sentence activate nodes in the network, and activation spreads to related concepts and inhibits competing concepts. For example, drink activates the beverage sense of gin and inhibits the game sense. At the end of a sentence, the concept node with the highest activation for each word is output. Early experiments were not conclusive because building the networks was problematic, requiring manual intervention. However, lexical networks can be built from the definition texts of MRDs in a version of the Lesk Method; the Collins English Dictionary was used in one experiment that achieved 71.7% accuracy (Ide and Véronis, 1998).

Domain-Based Methods

Domain-based methods make explicit use of domain information to filter out senses of a word that are inappropriate in the current domain. A basic approach first determines the domain of a text by finding the LDOCE subject code or similar code (e.g., from WordNet DOMAINS, a domain-annotated WordNet) that has the maximum frequency over all content words. It then selects the sense of a word with the most frequent subject code. Improved versions determine the domain more accurately by, for example, considering only the words in a window around the ambiguous word and then choosing the sense that maximizes the similarity with a relevant domain in the window.

A different approach builds a domain-specific neighborhood of words, or topic signature, for each sense of an ambiguous word. In one such method, inspired by the Lesk Method, a domain-specific neighborhood of a word contains the words that co-occur significantly with it over all sense definitions labeled by a given LDOCE subject code (e.g., word senses labeled with the Economics code that significantly co-occur with bank include account, into, out, and money). To disambiguate the word in context, the neighborhood with the greatest overlap with the context is chosen.

The one-sense-per-discourse heuristic has been used in at least two ways. First, if one instance of a word can be reliably disambiguated in a given text, then all other occurrences of the word can be labeled with that sense. Second, the contexts surrounding all instances of a word in a given text can be aggregated as evidence for a single sense.

A completely separate approach to domain-specific disambiguation is domain-tuning the sense inventory: removing unnecessary senses and words, grouping related senses together, and extending the inventory with specialized senses and terms. Domain-tuning turns WSD on its head, determining which senses in an inventory are relevant to a given domain.

Supervised Corpus–Based Methods

Supervised machine learning has proven to be the most successful approach to WSD, as a result of extensive research since the early 1990s. As a rule, supervised learning of WSD derives its model directly and predominantly from sense-annotated training examples, whereas unsupervised learning uses no such examples, drawing at most on a priori knowledge or a secondary knowledge source. (Unsupervised methods are discussed in the next section.) Supervised learning methods all follow the same basic methodology:

1. A training collection is created by hand-annotating a sufficient number of instances of each target word with their sense classes. Often hundreds of examples are required for each word. A subset of the collection is reserved for testing.


2. Each instance of a word and its context is reduced to a feature vector that contains features of the sort previously described.

3. For each word type, a training procedure builds a classifier using frequency statistics of feature occurrences within each class, gathered from the feature vectors.

4. The set of classifiers is tested on the reserved data, and more iterations are performed, modifying the conditions (e.g., selected features, training/test split, and algorithm parameters), until a conclusion is reached.

This methodology generates a set of classifiers capable of classifying new instances, represented by their feature vectors. Many algorithms for supervised learning have been applied to WSD, including Bayesian networks, boosting, decision lists, decision trees, k-nearest neighbor, maximum entropy, Naïve Bayes, memory-based learning, neural networks, support-vector machines (SVMs), transformation-based learning, and vector similarity models. A binary (two-class) classifier, such as an SVM, can be applied to WSD by building a separate binary classifier for each sense of a word, which classifies the word as a member or not of that sense class.

A major result is that choosing the right feature space is more important than choosing the right algorithm. For example, eliminating a whole feature type (say, collocations) has been shown to degrade performance more than changing the algorithm. That said, the current best-performing algorithm for WSD is the SVM because, in theory, SVMs can handle very high-dimensional feature spaces, make no assumptions about the independence of features, and allow the easy combination of multiple sources of evidence. However, its advantage over, say, Naïve Bayes (which naively assumes feature independence) is quite small. In the Senseval-3 English task, SVMs, a modified Naïve Bayes, and ensembles were all in the top 10 (between 71.8% and 72.9% accuracy), separated by fractional percentages (Mihalcea et al., 2004).

A general distinction can be made between discriminative and aggregative algorithms. The former base their classification on a few pieces (sometimes one piece) of evidence in any given context, whereas the latter accumulate all of the evidence in favor of each class. Experiments show that each method has its strengths and weaknesses depending on the word, its sense granularity, and its sense distribution. A discriminative algorithm will be more capable in contexts where a single feature is decisive – often for verbs and adjectives and for many homograph-level distinctions. Aggregative algorithms perform better when many pieces of weak evidence combine to reach a level of confidence – more often for nouns and fine-grained sense distinctions. Every learning algorithm has its biases, so combinations, or ensembles, of diverse algorithms tend to outperform single algorithms by a modest margin (up to 5%). Various combination strategies have been explored, including voting (by count or confidence), probability mixture models, and metalearning, with voting performing best.

Many common machine-learning issues arise in WSD, such as feature selection, determining the optimum size of the training data, and portability to new domains, but one problem that has defined the field over the past decade is the knowledge-acquisition bottleneck: training data are difficult and expensive to produce. Senseval has alleviated the problem somewhat by organizing a wide-ranging data annotation effort (see the Evaluation section); however, unsupervised methods have the potential to overcome the problem in the long run.
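
The four-step methodology maps directly onto standard machine-learning toolkits. The sketch below (using scikit-learn; the four training sentences are toy stand-ins for a sense-annotated corpus) builds a Naïve Bayes word expert for one target word from bag-of-words feature vectors:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Step 1: hand-annotated training instances for the target word 'bank'
contexts = ["deposit money in the bank account",
            "the bank approved the loan and the interest rate",
            "we fished from the bank of the river",
            "the river overflowed its muddy bank"]
senses = ["FINANCE", "FINANCE", "RIVER", "RIVER"]

# Step 2: reduce each context to a feature vector (bag of words)
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(contexts)

# Step 3: train a classifier for this word type
clf = MultinomialNB().fit(X, senses)

# Step 4: classify a new instance of the word
test = vectorizer.transform(["invest your money with this bank"])
print(clf.predict(test))  # expected: ['FINANCE']

With real data, the feature vectors would also encode the collocational, syntactic, and topical features listed earlier, and a reserved test split would be scored as in step 4.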

Unsupervised Corpus–Based Approaches

The holy grail of WSD is to learn to disambiguate without any training data. In their purest form, unsupervised approaches eschew any a priori knowledge of word meaning. This section describes two types of unsupervised approach. The first is sense induction, used to actually discover word senses in a corpus with no a priori knowledge of them, in effect acting as an automated lexicographer. (The senses induced are often called word uses, because their character is different from that of the word senses elucidated by lexicographers.) The second disambiguates to an existing sense inventory but requires a secondary source of knowledge, such as a parallel corpus or a small amount of seed data, in an approach called bootstrapping. Hence, these approaches are usually considered minimally supervised.

The underlying assumption of sense induction is that similar senses occur in similar contexts. Thus, the problem is characterized as clustering by similarity of context, rather than as classification. Three methods, differing in clustering technique, are described next.

The first method is to apply a clustering algorithm directly to the feature vectors (see previous discussion) of the instances of a word, using a vector similarity function such as cosine similarity. Data sparseness is often a problem in smaller corpora and for the rarer senses of a word, but it can be somewhat alleviated through dimensionality reduction. Nevertheless, rarer senses (i.e., smaller clusters) must still be removed from the model. Because senses are not labeled, merely discriminated one from another, direct comparisons to other methods of WSD are impossible. Applied to information retrieval, one experiment using a model called Context Group Discrimination yielded a 14.4% improvement in retrieval precision (Schütze, 2000).

The second method clusters the list of nearest neighbors of a target word, that is, the list of words that are semantically similar to the target word. Contextual word similarity, the degree to which two words occur in similar contexts, can be computed from the feature vectors. For example, plant has the neighbors factory, facility, refinery, shrub, perennial, and bulb. Clustering these words by their semantic similarity results in two clusters that represent two senses of plant: (factory, facility, refinery) and (shrub, perennial, bulb). No results are available for its application to WSD.

The third method is also based on word similarity. It first builds a graph (i.e., network) of words linked by relations of semantic similarity and/or co-occurrence. The local graph surrounding a target word is then clustered using a graph-clustering algorithm. The intuition is that the senses of the target word will correspond to loosely connected components of the local graph (i.e., the words in each component will be related to one another more than they are to the words in another component). Again, no results are available for its application to WSD.

A word-aligned parallel corpus can be used in minimally supervised WSD. It has been observed that an ambiguous word in a source language is often translated into different words in a target language, depending on the sense of the word. The words in the target language may themselves be ambiguous, either sharing two or more senses with the source word or having other senses. However, the fact that multiple different source words will translate to the same target word can be used in WSD. For example, the three English words disaster, tragedy, and situation all translate to catastrophe in an English-French parallel corpus. Even though the three English words are ambiguous, a single sense for them ('calamity') can be determined using a variant of the Lesk Method. An implementation of this method achieved 53.3% accuracy using an English-French machine-translated corpus (the second highest unsupervised score on Senseval-2 data) (Diab and Resnik, 2002).

The bootstrapping approach starts from a small amount of seed data for each word: either hand-labeled training examples or a small number of sure-fire decision rules (e.g., play in the surrounding context of bass almost always indicates the musical instrument). The seeds are used to train an initial classifier using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed. Seed decision rules can be extracted from dictionaries, lexical databases, or automatically extracted collocations. One system using the last approach achieved 96.5% accuracy on a few homographs (Yarowsky, 2000). A further variant combines a bilingual corpus (not necessarily word-aligned or parallel) with bootstrapping: in each step, classifiers are trained for both languages simultaneously, using previously classified data from both languages. Experiments achieve a 3–8% improvement over monolingual bootstrapping on the same data.

Finally, an unsupervised technique for determining the most frequent sense of a word in a corpus has recently been developed. It is closely related to the second clustering method. If we consider the list of nearest neighbors of a target word, then, following from the generalized Lesk Method and the skew in word sense distributions (Figure 3), a majority of its neighbors will be most similar to one of its senses, the most frequent sense. Although this method cannot disambiguate a word, it can be used as a back-up strategy when another method is not sufficiently confident. Alternatively, if one sense per discourse holds for a given target word, then WSD is replaced by the identification of the domain of the text.
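
The control flow of the bootstrapping loop can be sketched as follows (the train and classify arguments stand in for whatever supervised method is used, and the confidence threshold is an arbitrary illustrative value):

def bootstrap(instances, seed_rules, train, classify, threshold=0.9):
    # instances: feature dicts for untagged occurrences of one word.
    # seed_rules: maps an instance to a sense label or None (e.g., the
    # rule "'play' in context of bass -> MUSIC").
    # train / classify: any supervised method (placeholders here);
    # classify must return a (sense, confidence) pair.
    labeled = {}
    for i, x in enumerate(instances):
        sense = seed_rules(x)
        if sense is not None:
            labeled[i] = sense
    while True:
        model = train([(instances[i], s) for i, s in labeled.items()])
        grew = False
        for i, x in enumerate(instances):
            if i not in labeled:
                sense, confidence = classify(model, x)
                if confidence >= threshold:   # absorb only sure-fire labels
                    labeled[i] = sense
                    grew = True
        if not grew:                          # corpus consumed, or stalled
            return model, labeled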

Evaluation

To progress as a science, WSD needs to be evaluated on a common playing field; this has proven to be a serious challenge. Evaluating WSD is difficult because of the different goals involved in the research and application of WSD algorithms. To illustrate, just about every system in the previous section was evaluated on different words, sense inventories (crucially, of different sense granularities), and types of corpus and application, rendering direct comparison meaningless. Furthermore, a large reference corpus is required, with enough hand-annotated examples of each word to cover all of its senses in a representative mixture of contexts. Sense annotation by hand is labor-intensive, difficult to do reliably, and unlikely to carry over to another application. As a result, most systems had been evaluated on only a few words and often only at the homograph level. However, over the past decade, Senseval has established a common framework for the evaluation of explicit and generic WSD algorithms. And, in a reversal, the task of explicit WSD is now defined by the evaluation rather than the evaluation by the task.

Accuracy against a Reference Corpus

WSD can be evaluated in vitro, independent of any particular application, or in vivo, in terms of its contribution to an application such as information retrieval. In vitro evaluation, by far the most common method, allows for the detailed analysis of explicit WSD algorithms over a range of conditions, whereas in vivo evaluation provides an arguably more realistic assessment of the ultimate utility of WSD and is the only way to evaluate implicit WSD. The rest of this section focuses on in vitro evaluation; evaluation in IR and other applications has already been discussed.

The basic metric for evaluation is simple accuracy: the percentage of correct taggings over all instances of all words to be tagged in a reference corpus. Creating a reference corpus is a process of manual annotation. The accepted practice is to use at least two trained annotators, with a final arbitrator to resolve disagreements, possibly through discussion. Because annotation often uncovers inconsistencies or other problems in a sense inventory (such as missing or unclear senses), annotators can provide feedback to lexicographers. For reasons of objectivity and consistency, trained lexicographers should be used, but this view is challenged by the Open Mind Word Expert project, a large-scale Web-based annotation effort (Chklovski and Mihalcea, 2002). Two types of reference corpus are available: sampled and running. The former annotates a sample of words and often provides only a short surrounding context for each instance; the latter annotates all words in running text. Table 3 lists the main reference corpora for English (Senseval has also provided many corpora in other languages).

The relative performance of a system is generally assessed against the baseline of selecting the most frequent sense, information readily available from many dictionaries (often, the first sense listed), from the manually annotated reference corpus, or indeed from an unsupervised method. The Lesk Method has also been used as a baseline, as noted earlier. An upper bound on WSD is more difficult to come by: perfect disambiguation cannot be expected even from a person, given the nature of word meaning. Thus, as noted earlier, the natural upper bound is interannotator agreement, the percentage of cases in which two or more annotators agree before arbitration. Interannotator agreement also serves as an indication of the difficulty and integrity of the task; a low upper bound implies that the task is ill-defined and that WSD is without foundation. One early study reported the dangerously low value of 68% (Kilgarriff, 1999), and the Senseval-3 English lexical sample task had an equally low value of 67%, based on the agreement of the first two of four annotators on a superset of the data used in the evaluation (Mihalcea et al., 2004). However, interannotator agreement is a misleading upper bound on WSD, because an arbitrator provides a third voice. Replicability, the level of agreement between two replications of the same annotation exercise (including arbitrators), is arguably a more sensible upper bound. A respectable 95% has been reported (Kilgarriff, 1999); however, replicability has not been used in practice because it doubles the annotation workload.
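
Raw interannotator agreement, the upper bound just discussed, is simple to compute; the sketch below uses hypothetical tag sequences for two annotators:

def agreement(tags_a, tags_b):
    # Percentage of instances on which two annotators assigned the
    # same sense tag (raw agreement, with no chance correction and
    # before any arbitration).
    assert len(tags_a) == len(tags_b)
    same = sum(a == b for a, b in zip(tags_a, tags_b))
    return 100.0 * same / len(tags_a)

a = ['s1', 's1', 's2', 's3', 's1', 's2']
b = ['s1', 's2', 's2', 's3', 's1', 's1']
print(f"{agreement(a, b):.1f}%")  # 66.7%, close to the values reported above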

Table 3 Manually annotated reference corpora in English

Corpus                      Number of word types   Size (tagged instances)   Sense inventory
Line, hard, serve           3                      12 000                    WordNet 1.5
Interest                    1                      2369                      LDOCE
HECTOR                      300                    200 000                   HECTOR
SemCor                      23 346                 234 113                   WordNet 1.6
DSO Corpus                  191                    192 800                   WordNet 1.5
Senseval-1                  41                     8448                      WordNet 1.6
Senseval-2 sample           73                     12 939                    WordNet 1.7.1
Senseval-2 running          1082                   2473                      WordNet 1.7.1
Senseval-3 sample           59                     11 804                    WordNet 1.7.1
Senseval-3 running          960                    2041                      WordNet 1.7.1
Open Mind Word Expert 1.0   288                    29 430                    WordNet 1.7.1
Open Mind Word Expert 2.0   60                     21 378                    WordNet 1.7.1

Compiled from Edmonds and Kilgarriff (2002), Senseval workshop proceedings, and personal contacts.


Evaluation is not as simple as this, however. If a system can abstain from tagging a target word instance or can give multiple answers, accuracy must be broken into precision (the percentage of system answers that are correct) and recall (the percentage of all test instances that a system answers correctly). An additional scheme reports fine-grained and coarse-grained scores, the latter subsuming all fine-grained senses under a single coarse-grained sense so that choosing any of them is considered correct. This scheme is possible using a hierarchical inventory in which the coarse-grained level might represent homographs or other groups of related senses. A final scheme provides partial credit for tagging with a similar albeit incorrect sense of a target word.

One problem with averaging over all instances (and all senses) is that performance on particular words and word senses cannot be observed. Because the distribution of senses is so skewed, these metrics can cover up the actual performance of an algorithm that is accurate only on the most frequent senses but completely fails on the rarer senses.
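
For a system that may abstain, precision and recall can be computed as below (hypothetical gold-standard and system tags; None marks an abstention):

def precision_recall(gold, system):
    # gold: correct sense per instance; system: sense or None (abstain).
    # Precision is measured over answered instances only, recall over
    # all instances in the test set.
    answered = [(g, s) for g, s in zip(gold, system) if s is not None]
    correct = sum(g == s for g, s in answered)
    precision = correct / len(answered) if answered else 0.0
    recall = correct / len(gold)
    return precision, recall

gold   = ['s1', 's2', 's1', 's3', 's2']
system = ['s1', None, 's1', 's2', None]  # two abstentions, one wrong answer
print(precision_recall(gold, system))    # ~0.667 and 0.4: 2 of 3 answers, 2 of 5 instances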

Senseval

Senseval has established, through three open evaluation exercises in 1998, 2001, and 2004, a framework for the evaluation of WSD that includes standardized task descriptions and evaluation methodology. It represents a significant advance for the field because it has focused research, produced benchmarks, and generated substantial resources in many languages.

Senseval defines two main tasks: the lexical sample task and the all-words task. The lexical sample task is to tag a small sample of word types. The sample is a stratified random sample that varies part of speech, number of senses, and frequency. Corpus instances covering as many of each word's senses as possible are selected and manually annotated to create a sampled reference corpus. The all-words task is to tag all instances of ambiguous words in running text. Here, the issue is to select complete texts with sufficient variance in terminology and average polysemy. The all-words task is a more natural disambiguation task, because the whole text is provided as evidence for disambiguation, and it could lead ultimately to a generic component for WSD. However, the lexical sample task is arguably better science: it allows us to analyze a wider range of phenomena and to focus on problematic words or on words that will have a significant impact on an application.

Other Ways to Evaluate

When there is no reference corpus to either train from or test on, pseudo-words provide an alternative. A pseudo-word is created by treating all the instances of two or more randomly selected words as the same word. For example, the pseudo-word banana-kalashnikov would replace all occurrences of banana and kalashnikov in a corpus. The artificially ambiguous word has the original words as its senses, and WSD then proceeds normally; accuracy is given in terms of the number of correct replacements of the original words. Pseudo-words seem attractive, but they have been criticized because (1) they do not necessarily have natural skewed word-sense distributions and (2) they do not have senses related to one another the way a polysemous word's senses are related. Thus, it is questionable what we can learn about context and word meaning through pseudo-words.

Unsupervised sense induction cannot be easily evaluated against a reference corpus. In vivo evaluation is one option. A second option is to manually map the induced clusters to word senses, but this is subjective. If the clusters are labeled, as in the nearest-neighbor approach, then automated alignment is possible; however, the alignments are unlikely to be perfect, because of disparities between word uses and word senses. If a parallel corpus is used, then one method is to create the parallel corpus by machine translation of a reference corpus; however, this could be problematic because the MT system could easily make the same errors in target word selection that an explicit WSD algorithm would make.
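
Creating a pseudo-word test set is mechanical; a minimal sketch (with the illustrative word pair from above) follows:

def make_pseudoword(tokens, words):
    # Replace every occurrence of the given words with a single
    # artificial pseudo-word, remembering the original word (the
    # 'sense') so that accuracy can later be scored against it.
    pseudo = '-'.join(sorted(words))
    out, answer_key = [], {}
    for i, tok in enumerate(tokens):
        if tok in words:
            answer_key[i] = tok  # gold sense = the original word
            out.append(pseudo)
        else:
            out.append(tok)
    return out, answer_key

text = "he peeled a banana while cleaning the kalashnikov".split()
print(make_pseudoword(text, {'banana', 'kalashnikov'}))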

Current Research Efforts

Explicit WSD to a fixed sense inventory (as a constrained case of general lexical disambiguation) is a robust task. The three evaluation exercises run by Senseval show that, over a variety of word types, frequencies, and sense distributions, systems are achieving consistent and respectable accuracy levels that approach human performance on the task. The main variation in accuracy is due to the sense inventory not meeting the three Cs (consistency, clarity, and coverage). Disambiguating to the homograph level is essentially solved, if greater than 90% accuracy is enough for the application. At finer levels, support-vector machines are currently the best method, followed closely by Naïve Bayes (both supervised corpus–based methods), achieving an accuracy of 73%. However, effective application-specific WSD is still an open problem. Current research efforts are focused on:

• Error analysis to determine the factors that affect WSD and specific algorithms; that is, what makes some words and senses easier to disambiguate than others?
• Unsupervised approaches to overcoming the data-acquisition bottleneck, especially through bootstrapping and machine-learning techniques such as co-training.
• Exploring the sense distributions of individual words, especially focusing on the rarer senses that are currently difficult for WSD.
• Developing better sense inventories and sense hierarchies.
• Establishing an evaluation framework for application-specific WSD (within Senseval).
• Domain-specific issues, such as domain tuning and domain identification.
• Treating named entities and reference as a WSD problem; that is, when do two occurrences of the same name refer to the same individual?

See also: Connotation; Context and Common Ground; Definition in Lexicology; Hyponymy and Hyperonymy; Lexical Fields; Lexical Semantics; Lexicology; Lexicon/Dictionary: Computational Approaches; Meronymy; Metaphor and Conceptual Blending; Metonymy; Natural Language Understanding, Automatic; Onomasiology and Lexical Variation; Polysemy and Homonymy; Selectional Restrictions; Synonymy; Taboo, Euphemism, and Political Correctness; Thesauruses; Vagueness; WordNet(s).

Bibliography

Agirre E & Edmonds P (2006). Word sense disambiguation: algorithms and applications. Dordrecht: Springer.
Bar-Hillel Y (1960). 'Automatic translation of languages.' In Alt F, Booth A D & Meagher R E (eds.) Advances in computers. New York: Academic Press.
Chklovski T & Mihalcea R (2002). 'Building a sense tagged corpus with Open Mind Word Expert.' In Proceedings of the workshop on word sense disambiguation: recent successes and future directions. Philadelphia. 116–122.
Diab M & Resnik P (2002). 'An unsupervised method for word sense tagging using parallel corpora.' In Proceedings of the 40th annual meeting of the Association for Computational Linguistics. Philadelphia. 255–262.
Edmonds P & Kilgarriff A (eds.) (2002). Natural Language Engineering 8(4). Special issue on evaluating word sense disambiguation systems.
Edmonds P, Mihalcea R & Saint-Dizier P (eds.) (2002). Proceedings of the workshop on word sense disambiguation: recent successes and future directions. Philadelphia.
Gale W A, Church K W & Yarowsky D (1992). 'Estimating upper and lower bounds on the performance of word-sense disambiguation programs.' In Proceedings of the 30th annual meeting of the Association for Computational Linguistics. Newark, Delaware. 249–256.
Hirst G (1987). Semantic interpretation and the resolution of ambiguity. Cambridge, UK: Cambridge University Press.
Ide N & Véronis J (eds.) (1998). Computational Linguistics 24(1). Special issue on word sense disambiguation.
Jurafsky D & Martin J H (2000). Speech and language processing. Upper Saddle River, NJ: Prentice Hall.
Kaplan A (1955). 'An experimental study of ambiguity in context.' Mechanical Translation 2(2), 39–46.
Kilgarriff A (1999). '95% replicability for manual word sense tagging.' In Proceedings of the 9th conference of the European chapter of the Association for Computational Linguistics. Bergen, Norway. 277–278.
Kilgarriff A & Palmer M (eds.) (2000). Computers and the Humanities 34(1–2). Special issue on Senseval.
Lesk M (1986). 'Automated sense disambiguation using machine-readable dictionaries: how to tell a pine cone from an ice cream cone.' In Proceedings of the 1986 SIGDOC conference. Toronto, Canada. 24–26.
Manning C & Schütze H (1999). Foundations of statistical natural language processing. Cambridge, MA: MIT Press.
Mihalcea R & Edmonds P (eds.) (2004). Proceedings of Senseval-3: 3rd international workshop on the evaluation of systems for the semantic analysis of text. Barcelona, Spain.
Mihalcea R, Chklovski T & Kilgarriff A (2004). 'The Senseval-3 English lexical sample task.' In Proceedings of Senseval-3: 3rd international workshop on the evaluation of systems for the semantic analysis of text. Barcelona, Spain.
Preiss J & Stevenson M (eds.) (2004). Computer, Speech, and Language 18(4). Special issue on word sense disambiguation.
Preiss J & Yarowsky D (eds.) (2001). Proceedings of Senseval-2: 2nd international workshop on evaluating word sense disambiguation systems. Toulouse, France.
Ravin Y & Leacock C (eds.) (2000). Polysemy: theoretical and computational approaches. Oxford: Oxford University Press.
Schütze H (2000). 'Disambiguation and connectionism.' In Ravin Y & Leacock C (eds.) Polysemy: theoretical and computational approaches. Oxford: Oxford University Press. 205–219.
Stevenson M (2003). Word sense disambiguation: the case for combining knowledge sources. Stanford, CA: CSLI Publications.
Weaver W (1955). 'Translation.' In Locke W L & Booth A D (eds.) Machine translation of languages. New York: John Wiley & Sons.
Wilks Y A, Slator B M & Guthrie L M (1996). Electric words: dictionaries, computers, and meanings. Cambridge, MA: MIT Press.
Yarowsky D (2000). 'Word sense disambiguation.' In Dale R, Moisl H & Somers H (eds.) Handbook of natural language processing. New York: Marcel Dekker. 629–654.
Yarowsky D & Florian R (2002). 'Evaluating sense disambiguation across diverse parameter spaces.' Natural Language Engineering 8(4), 293–310.
Zipf G K (1949). Human behaviour and the principle of least effort. Cambridge, MA: Addison-Wesley.


Discourse Anaphora
F Cornish, University of Toulouse-Le Mirail, Toulouse, France
© 2006 Elsevier Ltd. All rights reserved.

Introduction

Discourse anaphora is a means of managing the memory representation of the discourse being constructed by the speech participants on the basis of a cotext as well as a relevant context (for further details of this view, see Cornish, 1999, 2003; but see also Anaphora, Cataphora, Exophora, Logophoricity). Where discourse is concerned, it is clear that not all referents will have been introduced via an explicit textual antecedent; it is also possible for them to be evoked 'obliquely', in terms of an association or a (stereotypical) inference of some kind (see especially example (1), which follows). This article takes what might be termed a 'discourse-cognitive' view of anaphoric reference, rather than a textual-syntactic one. The use and interpretation of nonbound anaphors – that is, anaphoric expressions whose interpretation is not determined primarily by features of the clause in which they occur – require not only a relevant cotext and context but also, crucially, a psychologically salient representation of the discourse evoked via what in previous work I have called the antecedent-trigger (an utterance token, gesture, or percept). See the section titled 'The Antecedent-Trigger' for a discussion of this term.

Some Useful Concepts and Distinctions in the Study of Indexical Reference: 'Anaphora,' 'Deixis,' and 'Textual/Discourse Deixis'

Let us start by drawing the more fundamental distinction between the dimensions of text and of discourse. Very briefly, text refers to the ongoing physical, perceptible trace of the discourse partners' communicative or expressive activity. This includes not only the verbal content of an utterance, but also prosody, pausing, semiotically significant gestures, and of course punctuation, layout, and other graphic devices in the written form of language. The addressee or reader exploits these textual features in order to infer the discourse being coconstructed by the participants. Discourse in this sense refers to the hierarchically structured product of the constantly evolving sequence of utterance, illocutionary, propositional, and indexical acts jointly performed by the discourse partners (see, for an illustration, representation (7) of the discourse corresponding to text (5)). This product is, of course, partly determined by the context invoked.

Discourse anaphora, then, constitutes a procedure (realized via the text) for the recall of some item of information previously placed in discourse memory and already bearing a minimal level of attention activation. It is essentially a procedure for orienting the interlocutor's attention, whose essential function is to maintain the high level of activation that characterizes a discourse representation already assumed to be the subject of an attention focus by the interlocutor at the point of utterance. It is not only the anaphoric expression used (typically, a third-person pronoun) that realizes (discourse) anaphora, but also the clause in which it occurs as a whole. This predicational context acts as a kind of 'pointer', orienting the addressee toward the part of the discourse representation that is already cognitively activated and that it will be possible to extend in terms of an appropriate coherence relation (see Kleiber, 1994: Ch. 3; as well as the articles Cohesion and Coherence and Coherence: Psycholinguistic Approach).

(1) [Fragment of dialogue in film:]
Woman: "Why didn't you write to me?"
Man: "I did . . ., started to, but I always tore 'em up."
(Extract from the film Summer Holiday. Figure (5.5) from Cornish F (1999). Anaphora, Discourse and Understanding: Evidence from English and French. Oxford: Clarendon Press. 157. By permission of Oxford University Press (www.oup.com). Also reprinted as example (6d), p. 204 from Cornish (2005), by permission of John Benjamins Publishing Company.)

In (1), an instance of indirect anaphora, it is the illocutionary point of the woman’s initial question, which bears on the nonexistence of a letter or letters that she had expected the man to write to her, together with the lexical–semantic structure of the verbal predicate write (in the sense “engage in correspondence”), that provides an interpretation for the unstressed pronoun ’em in the third conjunct of the man’s reply. The example clearly shows the extent to which inferences based on existing discourse representations and on lexical and general knowledge are mobilized in the operation of discourse anaphora, which clearly does not require the copresence of an explicit textual antecedent, as under the traditional cotextual account of anaphora, in order to exist (see also Blackwell, 2003, in connection with a study of Spanish conversations and spoken narratives, and Ziv, 1996). Here is an example involving different possible continuations of the antecedent-trigger predication in terms of distinct anaphoric predications:

(2) Jasonᵢ witnessed a terrible accidentⱼ yesterdayₖ at the Dunton crossroadsₗ. Heᵢ was very shaken / Itⱼ resulted in two deaths / #Itₖ was dull and overcast / ?#Itₗ / The placeₗ is a known danger-spot.

Note: Subscripted letters indicate identity or otherwise of the intended referents of the expressions so marked. In (2), the first two argument referents introduced (‘Jason’ and ‘the terrible accident Jason witnessed the day before the utterance of (2)’) may be naturally continued via unaccented pronouns – but not the scenic referent ‘the day before utterance time,’ nor (or at least, not as easily as with the first two entity referents evoked) ‘the Dunton crossroads,’ which is expressed by an adjunct and which serves as a locative frame of reference for the situation evoked as a whole (see also the point made in regard to certain natural Spanish conversational data by Blackwell, 2003: 118, 122–3). The slashes here are meant to indicate alternative continuations of the initial sentence. The crosshatch preceding an example is intended to signal that, as a potential utterance, it is unnatural in the context at hand. Example (2) is intended to be discourse-initial, and not part of an earlier, ongoing discourse.

Deixis, on the other hand, is a procedure that relies on the utterance context to redirect the interlocutor’s attention toward something associated with this context (hence potentially familiar to him or her), but to which she or he is assumed not already to be attending. As Kleiber and other pragmasemanticists have observed, deixis causes a break in the continuity of the discourse at the point where the deictic procedure is used, so that in effect the interlocutor is invited to ‘step out’ of this discourse context to grasp a new referent in terms of the current situation of utterance – or, alternatively, another aspect of the same referent, which has already been focused upon. So deixis serves to introduce a new referent into the discourse, on the basis of certain features of the context of utterance. Now, textual as well as discourse deixis provide a transition between the notions of deixis and anaphora, because they consist in using the deictic procedure to point to part of a pre- or postexisting textual or memory representation that is not necessarily highly activated. The interlocutor will therefore need to exert a certain cognitive effort in order to retrieve it. This interpretative effort will involve constructing an ‘entity,’ on the basis of the discourse representation in question, in order for it to be the subject of a predication, an anchor for the introduction of new information. Where there is a difference in topic-worthiness between the representation introduced by a trigger and the intended referent, the discourse-deictic and not the anaphoric procedure must be used, as in (3), an attested utterance:

(3) [End of the words of welcome uttered by the director of the Language Centre, at the start of a conference, University of Edinburgh, 19 September 1991]
“. . . We intend to record the guest speakers, so these will be available to participants at the end of the Conference . . ..”
(Example (20) in Cornish, 2005: 212.) (Permission to reprint granted by John Benjamins Publishing Company.)

In order to access the referent targeted via the proximal demonstrative pronoun these (namely, ‘the recordings of the guest speakers’ papers’), the hearer will have to draw an inference of the type: “If the guest speakers’ papers are recorded at time t₀, then at time tₙ (tₙ > t₀), there will be recordings of these papers.” Unlike the indirect referent in (1) and the first two more ‘direct’ ones in (2), here the implicit referent has not attained the status of a potential topic by the time the initial clause is processed, for it is ‘the guest speakers’ that enjoys this status at this point. So it is predictable that the elaborative so-clause that immediately follows will continue to be about these entities. The demonstrative pronoun these in (3) directs the hearer’s attention toward a referent that she or he must create on the basis of the representation introduced via the initial conjunct, as well as in terms of his or her knowledge of the world. So it is an instance of discourse deixis rather than of anaphora. Indeed, the (anaphoric) personal pronoun they in its place would have maintained the situation evoked via the initial conjunct, resulting in the retrieval of the only salient topic-worthy entity within it, ‘the guest speakers’ – an interpretation leading to quite severe incoherence here.

Three Essential Ingredients of the Operation of Discourse Anaphora: ‘Antecedent-Trigger,’ ‘Antecedent,’ and ‘Anaphor’

The Antecedent-Trigger

This is not necessarily an explicit, textual expression (a phrase of some kind). It may also be a percept or a nonverbal signal (see Cornish, 1996, 1999: ch. 4). In (1) it is the illocutionary point of the woman’s initial question, in conjunction with the use of the verb write, that triggers the discourse representation in terms of which the pronoun ’em refers, whereas in (2) it is the use of the descriptive noun phrases Jason, a terrible accident, and the Dunton crossroads. The broader notion of ‘antecedent-trigger,’ in relation to the traditional, canonical textual ‘antecedent’ of normative written prose, which is required to be morphosyntactically and semantically parallel to the anaphor, is useful in that it enables us to include both exophora and indirect anaphora (see example (1)) within the purview of anaphora per se – of which both these phenomena are instances (see also Cornish, 1999: 41–3).

The Antecedent

This is a psychologically salient discourse representation in terms of which the anaphor refers or denotes. As this characterization suggests, it is a unit of discourse, not of text (see the distinction drawn earlier) and may be constructed via direct interpretation of the cotext in terms of a relevant context, or in terms of the context alone in conjunction with relevant aspects of mutual knowledge, or in terms of inferences from either of these. See, as an example, the informal description of the antecedent of the unstressed pronoun ’em in (1): ‘the set of unfinished, torn and unsent letters which the man had begun writing to the woman.’ See also Dahl and Hellman (1995), Langacker (1996), van Hoek (1997), and Cornish (1999: 44–7). A given antecedent-trigger may give rise to several distinct ‘antecedents’ (in this sense), as a function of the possible drawing of inferences, of what is predicated of the former’s referent, or of the functioning of the type of anaphor chosen to target it. Examples (4a, b) illustrate this: the antecedent-trigger one of the new Toyota models in the first sentence of (4a) gives rise to different ‘antecedents’ targeted by the pronouns they, the it in the second anaphoric continuation, and one; whereas the entire initial sentence acts as antecedent-trigger for the antecedents created via the anaphors that and the it in the final anaphoric continuation in this example:

(4a) John bought one of the new Toyota models yesterday. They are really snazzy cars / It is standing outside his front door / Mary bought one too / At least, that’s what he told me / It took only half an hour to complete.
(4b) “The grouse season begins today, and they’re being shot in large numbers” (Today Programme, BBC Radio 4, 10.12.04)

The Anaphor

This is a referentially dependent indexical expression. The relation is not exclusively between antecedent-trigger and anaphor (except in the case of metalinguistic occurrences, as in this example: A: Psephism was much in vogue in those times. B: What does that mean? – but these in any case, as Lyons (1977) points out, are instances of textual deixis). First, then, the anaphor refers, not to its antecedent(-trigger), but in terms of whatever its antecedent(-trigger) refers to (see Lyons, 1977, vol. 2: 660). Second, the discourse referent evoked via the antecedent-trigger is not necessarily the same at the point of retrieval via the anaphor as it was at the point of introduction: minimally, what will have been predicated of the referent concerned within the antecedent-trigger predication (and potentially within subsequent predications) will have altered that referent’s representation – perhaps even radically. Third, it is not simply the anaphor on its own that retrieves the (updated) discourse referent at the point where it occurs in the cotext, but the anaphoric (or ‘host’) predication as a whole: compare the anaphoric continuations in examples (2) and (4a) in particular in this respect. So what is predicated of the referent of the anaphor acts as a filter, ruling out theoretically possible referents or denotata, and as a pointer, targeting and selecting a salient discourse representation that is compatible with what is predicated of the anaphor’s referent (see also Yule, 1981; Dahl and Hellman, 1995). As we shall see in analyzing text (5) in the next section, there is a variety of types of anaphor – zero forms; ordinary pronouns (see Pronouns); demonstrative pronouns (see Demonstratives); reduced proper names; demonstrative, definite, and possessive full NPs; ellipses of various kinds; and so on – each of which has distinct indexical properties. As such, they each function to establish different kinds of discourse anaphoric structures and are each sensitive to specific types of discourse context and function. See Cornish (1999: 51–68) for some discussion. On the use of demonstratives in narrative discourse, see in particular Himmelmann’s (1996) typological study.

The Text – as Well as Discourse – Sensitivity of Discourse Anaphora

The text we are going to analyze for illustration is taken from a British newspaper, The Guardian (1 July 1998, p. 3), reproduced under (5) (for convenience in the analysis that follows, the paragraphs are each numbered in the left-hand margin).

(5) Monet waterlilies set £20m record
Luke Harding
1. A painting of the most famous garden in the history of art last night sold for £19,801,500, shattering all records for a work by Claude Monet.
2. Two frenzied telephone bidders pushed the price for Monet’s Waterlily Pond and Path by Water to almost £20 million at Sotheby’s, suggesting that good times are back again for the fickle art market.
3. The price, reached after six minutes of bidding, comfortably shatters the previous £13 million record for a painting by the artist. Waterlily Pond is now the most expensive Impressionist work sold by a European auction house since 1990. Sotheby’s had estimated the sale price more modestly at £4–£6 million.
4. The oil painting, executed in 1900, was acquired by a private British collector in 1954 and has not been shown in public since then.
5. The identity of the buyer is a mystery. “We are still totting up the figures for the total auction,” a jubilant Sotheby’s spokeswoman said last night. “It’s been a very very good night.”
6. Monet was passionate about flowers and intrigued by landscape architecture. In 1893 he purchased a plot of land which adjoined the rural house in Giverny, near Paris, where he had moved 10 years earlier. A small stream ran through the plot, and Monet turned the garden into a horticultural paradise.
7. Monet worked tirelessly during the summer months, producing 12 pictures in 1899 and six in 1900. The oil sold last night shows the left section of his water garden, with the Japanese-style footbridge and path gently curving through patches of purple irises and tall grass.
8. “It took me some time to understand my waterlilies,” Monet said in a conversation with the author Marc Elder in 1924. “All of a sudden I had the revelation of how enchanting my pond was. Since then I have had hardly any other subject.” Waterlily Pond and Path by the Water is now the 11th most expensive ever painting sold at auction. Its sale price is easily eclipsed, though, by another work completed just nine years earlier – Portrait du Dr. Gachet – by a then little-known artist, Vincent Van Gogh, which went for $82,500,000 (£55 million) in 1990.
9. Last night’s sale follows a gradual recovery in the art market – unlike the overheated boom of the late 1980s, where it was focused in just one or two areas. Recent sales of Impressionist and Old Master works have been encouraging – despite allegations that many of Van Gogh’s best-known works are fakes.
(Example (8) in Cornish, 1998: 30–31.) (Permission to reprint granted by Guardian Newspapers and Cahiers de Grammaire.)

In this text, there are several ‘topic chains’ (see Cornish, 1998, 2003 for further details). A topic chain is a sequence of mainly anaphoric (referentially dependent) expressions within a text that retrieve the same referent, which is thus the subject of several predications for a segment of the text. This referent may have been introduced explicitly via a referentially autonomous expression, such as a full proper name, an indefinite NP, or a full definite NP. This is the ‘head’ of the chain, the anaphoric expressions retrieving its referent then being the ‘links.’ We will adopt Dik’s (1997: 218) definition of topic chains (what he calls ‘anaphorical chains’) in recognizing three theoretical discourse-functional positions within them: (1) the head of the chain, which introduces the topic referent into the discourse; (2) a second-link position (only exploited in ‘macro’-topic chains), whose function is to ‘reconfirm’ the installation of the topic referent in question – that is, it has an essentially addressee-oriented function; and (3) a third position, which may be multiply filled, consisting of purely anaphoric retrievals of the topic referent whose function is to maintain the high-attention focus now accorded (or assumed to be so accorded) to that referent by the addressor. By the third link, then, the referent retrieved is taken as enjoying full topic status in the discourse.

The four most important topic chains in text (5) are the following: (1) the one dealing with the article’s overall topic, the painting by Monet which had just been sold by auction for a record price; (2) the one bearing on the price fetched by the sale; (3) the one having to do with the artist himself; and finally (4) the one dealing with the plot of land that he had bought at Giverny in 1893, the stream that flowed through it having served as a model for his painting. These chains are made up of the following successions of expressions:

1. A painting of the most famous garden in the history of art . . . Monet’s Waterlily Pond and Path by Water . . . Waterlily Pond . . . The oil painting . . . ø . . . The oil sold last night . . . Waterlily Pond and Path by the (sic) Water . . . Its . . .;
2. £19,801,500 . . . the price for Monet’s Waterlily Pond and Path by Water . . . The price . . . the sale price . . . Its sale price . . .;
3. Claude Monet . . . Monet . . . the artist . . . Monet . . . ø . . . he . . . he . . . Monet . . . Monet . . . ø . . . his . . . Monet;
4. A plot of land which adjoined the rural house in Giverny, near Paris, where he had moved 10 years earlier . . . the plot . . . the garden . . . his water garden . . ..


Let us represent these four topic chains schematically, using the abbreviations ‘R-A’ for ‘referentially autonomous expression’ and ‘R-NA’ for ‘referentially nonautonomous expression,’ as follows (‘H’ = ‘Head of chain,’ ‘L2’ = ‘Link-2,’ and ‘L3’ = ‘Link-3’):

(6) Schematic representation of the four topic chains in (5)
• Topic Chain 1: H: R-A; L2: R-A; L3: R-NA, R-NA, R-NA, R-NA, R-A, R-NA. (‘the painting by Monet’)
• Topic Chain 2: H: R-A; L2: R-A; L3: R-NA, R-NA, R-NA. (‘the sale price reached by the painting’)
• Topic Chain 3: H: R-A; L2: Ø; L3: R-NA, R-NA, R-NA, R-NA, R-NA, R-NA, R-NA, R-NA, R-NA, R-NA, R-NA. (‘Claude Monet’)
• Topic Chain 4: H: R-A; L2: Ø; L3: R-NA, R-NA, R-NA. (‘Monet’s garden’)
(Item (9) in Cornish, 1998: 32, slightly adapted.) (Permission to reprint granted by Cahiers de Grammaire.)
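The content of schema (6) can also be made mechanically checkable. The following toy Python encoding is purely illustrative (the data structure, labels, and function names are assumptions of this sketch, not part of Cornish’s apparatus); it verifies the distributional generalization drawn out in the next paragraph, namely that autonomous (R-A) expressions cluster in the chain-initial positions H and L2:

# Toy encoding of the four topic chains in schema (6).
# 'RA' = referentially autonomous, 'RNA' = referentially nonautonomous.
chains = {
    "the painting by Monet": {"H": ["RA"], "L2": ["RA"],
                              "L3": ["RNA"] * 4 + ["RA"] + ["RNA"]},
    "the sale price":        {"H": ["RA"], "L2": ["RA"], "L3": ["RNA"] * 3},
    "Claude Monet":          {"H": ["RA"], "L2": [],     "L3": ["RNA"] * 11},
    "Monet's garden":        {"H": ["RA"], "L2": [],     "L3": ["RNA"] * 3},
}

def autonomous_positions(chain):
    """Return the chain positions in which autonomous (R-A) forms occur."""
    return [pos for pos, links in chain.items() if "RA" in links]

for topic, chain in chains.items():
    print(topic, "->", autonomous_positions(chain))
# Only chain 1 shows an R-A form inside L3: the 'resumed topic'
# Waterlily Pond and Path by the Water, discussed later in the article.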

Schema (6) points up the fact that referentially autonomous and anaphoric expressions do not occur indiscriminately in any position within a chain. For apart from the autonomous expression that occurs in fifth position within the link L3 in chain 1 (Waterlily Pond and Path by the (sic) Water), autonomous referring expressions always occur in the central positions within chains (positions H and L2), whereas anaphoric expressions appear only within link-position L3. See Ariel (1996) on the distinction between referentially autonomous and non-autonomous indexical expressions (particularly as far as the distinction between full and reduced proper nouns is concerned). Interestingly, it is precisely in the two topic chains that are intuitively the most central to text (5) as a whole (namely, chains 1 and 2) that we find link L2 realized by an autonomous expression. The other two chains (3 and 4), where this same link is by hypothesis unfilled, evoke referents that are subsidiary within this discourse in relation to the referents developed by chains 1 and 2: the article deals, after all, with the particular work by Monet as well as with the record price it fetched in auction, rather than with the artist or his garden as such. Furthermore, the representation in (7) (following) of the discourse structure associated with (5) shows that although chains 1 and 2 are set up within central discourse segments (paragraphs 1–3, 5, 8b, and 9), chains 3 and 4 are restricted to background, subsidiary segments (the segments corresponding to paragraphs 4 and 6–8a). So it is not surprising that the last two topics should not have required an L2 link for their installation within the discourse.

Let us look now at the relationship between the occurrence of an expression realizing a given link in a chain and the discourse function of the unit in which it occurs, in terms of the structure of the discourse as a whole. Schema (7) represents the structure of text (5) as discourse (indentations indicate subsidiary segments):

(7) Discourse structure corresponding to text (5)
1. [Para 1: Introduction of the global discourse topic, the painting by Monet and its record price reached at an auction in London]
2. [Para 2: Continuation of the sequence of events surrounding 1]
3. [Para 3: Development on the record price reached by the sale of the work]
    4. [Para 4: Background segment on the history of the painting, from its inception to the present]
5. [Para 5: Return pop to the central topic. Introduction of two local topics (not developed in the remainder of the text): the buyer’s identity, and the calculation of the total price of the sale]
    6. [Para 6: Flashback to the subject matter of the painting and its origin: the purchase by Monet of a plot of land near his country house in Giverny – the inspiration behind the work. No reference to the painting as such]
    7. [Para 7: Development of this background topic: what the painting shows of the garden]
    8a. [1st half of Para 8: Continuation of the topic of Monet’s inspiration drawn from his garden at Giverny]
8b. [2nd half of Para 8: Return to the central topic of the record price of the painting and comparison with the astronomical price reached by another painting of the same period]
9. [Para 9. Conclusion: Extrapolation to the art market in general – the recovery of the art sale market precipitated by this auction. No reference to Monet’s painting]
(Item (10) in Cornish, 1998: 34.) (Permission to reprint granted by Cahiers de Grammaire.)

The structure of the first two of the four topic chains in (5) in this respect is as follows. The title of the article already sets up the global theme, the sale of Monet’s painting Waterlily Pond and Path by Water and the record price it fetched. In the introductory paragraph, this dual aspect of the global theme is made explicit in a complex sentence. As an introductory sentence-paragraph, it has a ‘thetic’ character, where the information it presents to the reader is entirely new.


The second paragraph is an elaboration of the situation established by the first, dealing more specifically with the price of this sale at auction; but it also serves to identify the painting by Monet, which is the topic of the first paragraph. This cannot be an instance of cataphora, where the antecedent-trigger follows the ‘cataphor,’ for all that, given the referentially autonomous character of both nominal expressions used here (full indefinite NP and full proper name); so the referential dependency of anaphor on ‘antecedent-trigger’ does not obtain – that is, we have coreference without anaphora in the strict sense here. The use of two indexically strong (referentially autonomous) expressions at this point – Monet’s Waterlily Pond and Path by Water and the price for Monet’s Waterlily Pond and Path by Water – is no doubt motivated by the concern to promote their referents to global topics within the article, following their brief introduction in the initial paragraph.

The third paragraph continues this theme of the price fetched by the sale of the painting. Note that the two references to the dual global topics of the article are made via lexically explicit NPs (a reduced proper name for the painting (Waterlily Pond) and a definite, also reduced, NP for its price (the price)), and not via pronouns. (The fact that these are reduced NPs means that they are not referentially autonomous, but are potentially anaphoric, like pronouns.) See Geluykens (1994) on the question of anaphoric ‘repairs’ in spoken interactions, where the speaker mis-assesses their addressee’s current attention state, and uses a reduced indexical form type (a pronoun of some kind), which she or he immediately corrects to a fuller form (a definite or demonstrative NP or a proper name).

There are two reasons behind the use of the price as subject of the initial sentence of this third segment of the discourse: first, this reference is followed immediately by a nonrestrictive relative clause in apposition, a position from which unaccented pronouns (here it) are excluded (this is also the case with the lexical NP the oil painting in paragraph 4); and second, the repetition of the definite article and of the lexical head of the complex NP, which were used in the previous paragraph to ‘topicalize’ the referent at issue, signals at the start of this new segment that it will continue to be about it. In other words, the referent in question, though topical, is nonetheless reevoked at the very beginning of a new discourse segment and no longer within the one in which it was originally topicalized. See Fox (1987) in this respect, who argued that repeated proper nouns in English spoken and written texts may have this function, and also Blackwell (2003) in relation to her spoken Spanish data.

Similarly, the use of a proper noun, albeit reduced (Waterlily Pond), at the point in this segment where this reference occurs, is made necessary by the evident need to distinguish this referent from the other central referent, which has already been evoked in this paragraph (‘the painting’s sale price’), but which enjoys an advantage over it in terms of topic-worthiness at the point where the reference is made. The pronoun it used in its place would certainly have retrieved this latter referent, and not ‘the painting’ as such. This fits in well with what is stated by Levinson’s (1995) “M-principle” (see also Huang, 2000: 208), to the effect that the use by a speaker of a phonologically and lexically more substantial expression where a more attenuated one could have been used in its place is normally intended and interpreted as not meaning the same as if the more unmarked expression had been used.

As for the two references to these two macrotopics throughout paragraph 4, where the focus switches to background considerations relating to the central theme, the first is made via a definite lexical NP (The oil painting, in initial subject position of the segment) and the second by means of a null form, the ellipsed subject of the second conjunct of the clause which realizes this segment. The motivation behind the use of the former expression type at the beginning of this segment is exactly the same as that of its counterpart the price in the same position at the start of the previous paragraph. Because it is followed by a nonrestrictive relative, a pronoun could not have occurred in its place; but even if one could, it would be excluded for reasons of anaphoric ambiguity: for the pronoun it here – leaving aside what is predicated of the referent of this expression in this context – would have retrieved the referent ‘the sale price’ evoked by the immediately preceding clause.

Paragraph 7, which falls together with paragraph 6 and the first half of 8 in a background discourse segment, includes a reference to the painting, a reference that reevokes at the same time the event of the sale on June 30 (the oil sold last night). Once again, we have to do here with a definite (elliptical) NP; and just as in the previous cases, the reason for it is the existence of comparable referents that are in competition in terms of topichood. The first half of paragraph 8 (8a) continues the theme of the subject of the painting (‘Monet’s garden’), a theme that is abruptly interrupted in the middle of this paragraph by the opening of a segment returning to the macrotopic of the record sale price reached by this painting at auction at Sotheby’s. Now, it is precisely by means of a full proper name that this transition to a segment dealing with the circumstances of the sale of the painting is carried out. As such, we can hypothesize that it corresponds to what Dik calls a resumed topic. It is this particular marked discourse function realized via this referentially autonomous expression that motivates its exceptional filling of the ‘anaphoric’ L3 link in this macrotopical chain (recall the structure of Chain 1 given in schema (6) earlier). Once its topic status has been reestablished within this new segment – which does not correspond this time to the start of a new paragraph in terms of textualization (see the text/discourse distinction mentioned earlier in the second section) – the next (anaphoric) reference may be realized by means of an unaccented pronoun: in this case, by the pronoun contained in the possessive determiner Its.

Conclusion

As we have seen in connection with text (5) in particular, the occurrence of different types of nonautonomous, potentially anaphoric expressions in a text is in large part determined by the discourse function of the unit of discourse corresponding to the textual segment in which that expression appears, as well as by its position within that segment. In the case of written newspaper articles of the kind seen in (5) at least, it is clear that initial position within a unit is reserved for lexically based NPs (reduced proper nouns and definite NPs, as well as demonstrative ones), whatever the degree of topicality and accessibility their intended referent may enjoy at that point; unaccented pronouns and of course null pronouns are virtually excluded from such positions, because they serve to mark the continuity of the attention focus established prior to their occurrence. We have also seen how the copresence of competing referents in the immediately prior cotext may favor the use of an indexically stronger form type than a pronoun or a null anaphor in order to avoid unintended anaphoric continuities. Discourse anaphors, in sum, are sensitive to the hierarchical structure of the discourse that may be assigned to a given text, in conjunction with an appropriate context, and their choice by a speaker or writer is clearly a function of his or her ongoing assessment of the conditions under which their addressee or reader will be operating at the point of use. In the case of discourse anaphors, it is clear, in Dahl and Hellman’s (1995: 84) colorful words, that their ‘antecedents’ “aren’t just sitting there, waiting to be referred to, but rather ha[ve] to be created by some kind of operation”: this is a reflex of the fact that anaphora operates within the dynamic, ongoing construction of discourse, rather than exclusively in terms of the more static dimension of text.

See also: Accessibility Theory; Anaphora, Cataphora, Exophora, Logophoricity; Anaphora Resolution: Centering Theory; Coherence: Psycholinguistic Approach; Cohesion and Coherence; Context; Context and Common Ground; Coreference: Identity and Similarity; Definite and Indefinite; Definite and Indefinite Articles; Demonstratives; Discourse Representation Theory; Natural Language Understanding, Automatic; Pronouns.

Bibliography
Ariel M (1996). ‘Referring expressions and the +/− coreference distinction.’ In Fretheim T & Gundel J K (eds.) Reference and referent accessibility. Amsterdam & Philadelphia: John Benjamins. 13–25.
Blackwell S E (2003). Implicatures in discourse: the case of Spanish NP anaphora. Amsterdam & Philadelphia: John Benjamins.
Cornish F (1996). ‘“Antecedentless” anaphors: deixis, anaphora, or what? Some evidence from English and French.’ Journal of Linguistics 32, 19–41.
Cornish F (1998). ‘Les “chaînes topicales”: leur rôle dans la gestion et la structuration du discours.’ Cahiers de Grammaire 23, 19–40.
Cornish F (1999). Anaphora, discourse and understanding: evidence from English and French. Oxford: Clarendon Press.
Cornish F (2003). ‘The roles of (written) text and anaphor-type distribution in the construction of discourse.’ Text 23, 1–26.
Cornish F (2005). ‘Degrees of indirectness: two types of implicit referents and their retrieval via unaccented pronouns.’ In Branco A, McEnery T & Mitkov R (eds.) Anaphora processing: linguistic, cognitive and computational modelling. Amsterdam & Philadelphia: John Benjamins. 199–220.
Dahl Ö & Hellman C (1995). ‘What happens when we use an anaphor?’ In Moen I, Simonsen H G & Lødrup H (eds.) Proceedings of the XVth Scandinavian Conference on Linguistics, Oslo, Norway. Oslo: Department of Linguistics, University of Oslo. 79–86.
Dik S C (1997). ‘Anaphora.’ In The theory of functional grammar, part 2: complex and derived constructions (Ch. 10). Berlin & New York: Mouton de Gruyter. 215–228.
Fox B A (1987). Discourse structure and anaphora. Cambridge: Cambridge University Press.
Fox B A (ed.) (1996). Studies in anaphora. Amsterdam & Philadelphia: John Benjamins.
Geluykens R (1994). The pragmatics of discourse anaphora in English: evidence from conversational repair. Berlin: Mouton de Gruyter.
Himmelmann N P (1996). ‘Demonstratives in narrative discourse: a taxonomy of universal uses.’ In Fox B A (ed.) Studies in anaphora. Amsterdam & Philadelphia: John Benjamins. 206–254.
Huang Y (2000). Anaphora: a cross-linguistic study. Oxford: Oxford University Press.
Kleiber G (1994). Anaphores et pronoms. Louvain-la-Neuve: Duculot.
Langacker R W (1996). ‘Conceptual grouping and pronominal anaphora.’ In Fox B A (ed.) Studies in anaphora. Amsterdam & Philadelphia: John Benjamins. 333–378.
Levinson S C (1995). ‘Three levels of meaning.’ In Palmer F (ed.) Grammar and meaning: essays in honour of Sir John Lyons. Cambridge: Cambridge University Press. 90–115.
Lyons J (1977). ‘Deixis, space and time.’ In Semantics, vol. 2 (Ch. 15). Cambridge: Cambridge University Press. 636–724.
van Hoek K (1997). Anaphora and conceptual structure. Chicago: University of Chicago Press.
Yule G (1981). ‘New, current and displaced entity reference.’ Lingua 55, 41–52.
Ziv Y (1996). ‘Pronominal reference to inferred antecedents.’ Belgian Journal of Linguistics 10, 55–67.

Discourse Domain
P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
© 2006 Elsevier Ltd. All rights reserved.

A discourse domain D is a cognitive space for the middle-term storage of the information conveyed by subsequent utterances. The notion of discourse domain is part of the theory of discourse semantics, which holds that the interpretation of utterances is codetermined by the information stored in the D at hand. For an utterance u to be interpretable, u must be anchored in a given D. The anchoring of u requires, at least, that all referring expressions in u, including anaphoric pronouns, link up with an address in D that represents the object or objects referred to. The information conveyed by the main predicate in the sentence S underlying u is then distributed over the relevant addresses. This process is called the incrementation of S, or i(S). When S presupposes P, P must be incremented before S. When D does not yet contain i(P), i(P) is supplied post hoc by accommodation (see Presupposition), unless blocked. Ds are subject to a condition of consistency and a condition of cognitive support, ensuring the compatibility of accommodated increments with available world knowledge, unless D specifies otherwise.

A D contains object addresses, domain addresses, and instructions. Object addresses represent real or fictitious objects. A (singular or plural) object address is created, in principle, by an existentially quantified sentence, say, There is a king, resulting in a labeled address (where n is an arbitrary natural number):

(1) d-n [a | King(a)]

or ‘there is an a such that a is a king’ (disregarding tense). The sentence He is rich results in:

(2) d-n [a | King(a) // Rich(n)]

or ‘n is rich,’ where n stands for ‘the a such that a is a king.’ In (1), a is an existential quantifier; in (2), a is a definite determiner – a function from the predicate extension [[King]] to an object (the reference value). The change is due to address closure, represented as //, which takes place when an open address is called on by a subsequent definite term. Address closure enables the semantic distinction between the open address in (3a) and the closed address in (3b):

(3a) John has few clients who are dissatisfied. (open address)
(3b) John has few clients. And they are dissatisfied. (closed address)

Some addresses are subdomains, representing what has been specified as someone’s belief (hope, knowledge, etc.), as being possible or necessary, or the alternatives of an or-disjunction. There is an intricate system of interaction between the main D₀ and subdomains Dₛ. Information within D₀ can be called on in any Dₛ, unless blocked by contrary information in Dₛ (downward percolation). Likewise, unless blocked in D₀, presuppositions accommodated in a Dₛ are also accommodated in D₀ (projection). A D may also contain instructions constraining its further development. Negation, as in not-S, is an instruction banning i(S) from D. This explains the incoherence of example (4), where the pronoun it calls on an address that has just been banned:

(4) !John has no car. It is in the garage.
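The bookkeeping this implies can be illustrated with a toy data structure. The following Python sketch is purely illustrative (the class, the method names, and the predicate labels are assumptions of this sketch, not Seuren’s notation); it models address creation, address closure (//), and the negation instruction behind (4):

# Toy model of a discourse domain: addresses, closure, and instructions.
class DiscourseDomain:
    def __init__(self):
        self.addresses = {}  # label -> list of predicates, e.g. 'd-1': ['King']
        self.closed = set()  # addresses closed off ('//') by a definite call
        self.banned = set()  # increments banned by a negation instruction

    def introduce(self, label, predicate):
        """An existentially quantified sentence opens a new object address."""
        self.addresses[label] = [predicate]

    def increment(self, label, predicate):
        """A definite term calls on an address: close it and add information."""
        if label not in self.addresses:
            raise ValueError("anaphor has no address in D to link up with")
        if predicate in self.banned:
            raise ValueError("increment banned by a prior negation")
        self.closed.add(label)            # address closure, written '//'
        self.addresses[label].append(predicate)

    def negate(self, predicate):
        """not-S is an instruction banning i(S) from D."""
        self.banned.add(predicate)

d = DiscourseDomain()
d.introduce("d-1", "King")   # 'There is a king.'  -> d-1 [a | King(a)]
d.increment("d-1", "Rich")   # 'He is rich.'       -> d-1 [a | King(a) // Rich(n)]

d2 = DiscourseDomain()
d2.negate("Car")             # 'John has no car.' bans the car increment
# 'It is in the garage.' now fails: there is no open address for 'it'
# to call on, mirroring the incoherence of example (4).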

The study of discourse domains is still in its infancy. Yet it already provides explanations for phenomena that have so far remained obscure.

See also: Anaphora, Cataphora, Exophora, Logophoricity; Context; Context and Common Ground; Definite and Indefinite; Discourse Representation Theory; Discourse Semantics; Donkey Sentences; Presupposition; Projection Problem.


Bibliography
Seuren P A M (1985). Discourse semantics. Oxford: Blackwell.


Discourse Parsing, Automatic
D Marcu, University of Southern California, Marina del Rey, CA, USA
© 2006 Elsevier Ltd. All rights reserved.

Introduction

Automatic discourse parsing is a young research area that finds itself in a somewhat awkward position today. From a natural language engineering perspective, the need for text-level processing systems is uncontroversial: because sentence-level processing modules (syntactic and semantic parsers, named-entity recognizers, language translators and generators, etc.) operate at sentential level, they are not able to make text-level inferences and/or produce outputs that are text-level coherent/consistent. Despite the clear need, two factors conspire to prevent rapid progress in the field:

1. On one hand, text/discourse linguistics is not yet sufficiently mature as a research field. Humans can, for example, examine two adjacent paragraphs in a text and immediately assert whether one elaborates on, contradicts, or provides supporting evidence for the other. However, these human intuitions and judgments are not yet formalized at a sufficient level of detail to enable computational linguists to standardize the process. Contrast this state of affairs with the state of affairs in syntax, for example, where sufficiently mature linguistic theories have already been adopted and employed by computational linguists to create open-domain syntactic parsers that operate at 90% levels of accuracy.
2. On the other hand, natural language engineers themselves perceive that discourse/text-level parsing is not a low-hanging fruit. On the ‘most-likely-to-have-an-immediate-impact list’ for the accuracy of popular end-to-end language applications, such as machine translation and speech recognition, discourse/text-level models are not a high priority. Solutions to apparently simpler problems, such as dealing with unknown words, names, morphology, and local syntax, are likely to have a larger immediate impact on the performance of these applications.

Given these factors, it is likely that the field of discourse parsing will have a quantifiable impact on many end-to-end applications, not immediately, but in a slightly more distant future. However, as adequate solutions to the apparently simpler problems are found and as progress is made in discourse linguistics, the field of discourse parsing is poised to drive significant advances in natural language processing and enable many new applications that we cannot even conceive of today.

The current state of the art in discourse parsing can be easily summarized along the following dimensions:

• Depending on the linguistic theory one subscribes to, discourse/text phenomena are represented using sequences of segments, trees, or graphs.
• Depending on educational background and personal bias, discourse/text phenomena are inferred by exploiting anything from simplistic observables, such as cue phrases, to sophisticated hidden variables, such as complete semantic representations.
• Reflecting recent algorithmic advances in other computational linguistics areas, the derivation of discourse/text structures employs techniques grounded in human-encoded knowledge as well as supervised and unsupervised learning techniques.

This article reviews the state of the art in the area of automatic discourse parsing in the context of the following structure. It first discusses several competing discourse-level representations that have proven influential in the field and the concept of ‘discourse relation,’ which is fundamental to many of these representations. It then discusses the sources of knowledge that have been exploited in the context of inferring discourse relations and the techniques used for making these inferences. It reviews the main approaches specific to discourse structure derivations and outlines several applications in which discourse parsing has been shown to have a positive or potentially positive impact. It ends with a list of public resources (data and programs) that can further progress in this field.

Discourse Structure Representations

There are three main abstractions that subsume discourse structure representations: sequences, trees, and graphs. The simplest assumes that discourses/texts can be sequenced into labeled or unlabeled segments of topical continuity (see Hearst, 1997 and Burstein et al., 2003 for representative examples). Since the automatic derivation of such representations is discussed extensively in another section of this encyclopedia, this article is not going to be concerned with that subject here. The most ubiquitous abstraction reflects the assumption that discourse representations are trees. For example, a tree-like representation, constructed in the style of Rhetorical Structure Theory (Mann and Thompson, 1988), of text fragment (1) is shown in Figure 1.


Figure 1 A rhetorical structure theory-like representation of text (1).

(1) [Mr. Roman also brushed aside reports about infighting between him and Mr. Phillips, his successor at Ogilvy.₁] [The two executives could hardly be more different.₂] [Mr. Roman comes across as a low-key executive;₃] [Mr. Phillips has a flashier personality.₄] [During time off, Mr. Roman tends to his garden;₅] [Mr. Phillips confesses to a fondness for, among other things, fast cars and planes.₆]

The representation makes explicit the text fragments that represent the minimal units of discourse (elementary discourse units) – the leaves of the tree. The internal nodes of the tree correspond to contiguous text spans. In Figure 1, each text span/discourse unit has a distinct nuclearity – a nucleus indicates a more essential unit of information, while a satellite indicates a supporting or background unit of information; nuclei are represented using straight lines, while satellites correspond to arc origins. Non-overlapping, adjacent text spans are connected via ‘discourse relations’ that make explicit the intentional, semantic, or textual relation that holds between two segments. For example, the text span that consists of units 2 to 6 enables the reader to evaluate the information provided in unit 1; and units 5 and 6 enable the reader to understand the contrast between Mr. Roman’s and Mr. Phillips’ personalities. There is no agreement as to the most appropriate level of granularity for defining elementary discourse units (subclauses, clauses, sentences, paragraphs); the nature and number of discourse relations (intentional, semantic, textual); whether the relations hold between adjacent or embedded segments; etc. But some discourse theories are more influential than others. The theories proposed by Grosz and Sidner (1986), Mann and Thompson (1988), Hobbs (1990), and Asher (1993) are among the most influential in the area of computational linguistics, so far. (See Marcu, 2000 for detailed discussions of these theories.)

The least constrained abstraction favored by discourse theorists is the graph. Although graph-based discourse representations are used in the context of summarization applications, linguistically they are the least well understood and defined: very little can be said, for example, about a discourse graph being well-formed or valid.
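Since Figure 1 itself cannot be reproduced here, the kind of tree it depicts can be approximated as nested data. The sketch below is illustrative only: the Span class is invented for this sketch, the relation labels are assumptions based on the prose description above (not Figure 1 verbatim), and multinuclear relations such as Contrast are simplified into a nucleus/satellite format:

# A minimal nested encoding of an RST-style tree over the six
# elementary discourse units of text (1). Illustrative sketch only.
from dataclasses import dataclass
from typing import Union

@dataclass
class Span:
    relation: str                  # relation holding within this span
    nucleus: Union["Span", int]    # more essential part (int = EDU number)
    satellite: Union["Span", int]  # supporting/background part

# Units 5-6 contrast the two men's pastimes; units 3-4 contrast their
# personalities; 3-6 elaborate on the claim in unit 2; and span 2-6
# lets the reader evaluate the report dismissed in unit 1.
tree = Span("Evaluation",
            nucleus=1,
            satellite=Span("Elaboration",
                           nucleus=2,
                           satellite=Span("List",
                                          nucleus=Span("Contrast", 3, 4),
                                          satellite=Span("Contrast", 5, 6))))
print(tree)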

Observables Used for Inferring Discourse Relations

The goal of a discourse parser is to take as input arbitrary texts and automatically construct their most likely discourse structures. Independent of the discourse theory that underlies it, and of the mechanisms employed for discourse structure derivation and disambiguation, every discourse parser needs to infer whether and what discourse relation holds between two given text spans. There are significant differences with respect to the observables used for making this inference.


• Cue phrases: The simplest discourse parsers (Marcu, 2000) exploit discourse marker occurrences, i.e., phrases such as for example, because, and but, in order to hypothesize discourse relations that correlate with the usage of these markers (a toy sketch of this idea follows the list).
• Cohesion: Cross-segment word-based similarity and other word-based correlations (Marcu and Echihabi, 2002) are also used as discourse relation indicators. Low similarities, for example, are assumed to correlate with topical segment boundaries and high-level text-based relations (Morris and Hirst, 1991; Hearst, 1997; Marcu, 2000).
• Syntax: Several discourse parsers assume that some syntactic constructs correlate with certain discourse relations – for example, interclause elaboration relations are more likely to have an SBAR as satellite than an S or an NP. Soricut and Marcu (2003) and Lapata and Lascarides (2004) describe discourse parsers and relation identifiers, respectively, that exploit such syntactic knowledge.
• Referential expressions: The choice of pronouns and other referential devices has also been shown to correlate with discourse relation usages (see Kehler, 2002 for a comprehensive overview).
• Semantics: The knowledge-richest approaches assume access to fully fleshed-out semantic representations (Hobbs, 1990; Asher, 1993; Webber et al., 2003).

Traditionally, the derivation of these observables, as well as the inference of discourse relations and discourse structures, proceeds incrementally, in a compositional manner.
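For concreteness, here is a minimal Python sketch of cue-phrase-based relation hypothesizing. The marker-relation pairings below are illustrative assumptions, not an attested inventory, and the matching is far cruder than any published system:

# Toy cue-phrase hypothesizer: map discourse markers to relations
# they tend to signal. Pairings are illustrative assumptions only.
CUE_RELATIONS = {
    "because": "Cause",
    "but": "Contrast",
    "for example": "Elaboration",
    "so": "Result",
}

def hypothesize_relations(segment):
    """Return the candidate relations signaled by markers in a segment."""
    # Strip simple punctuation and match cues as whole words/phrases;
    # a real system would also check position and syntactic attachment.
    text = " " + " ".join(segment.lower().replace(",", " ").replace(".", " ").split()) + " "
    return sorted({rel for cue, rel in CUE_RELATIONS.items() if " " + cue + " " in text})

print(hypothesize_relations("But Mr. Phillips, for example, has a flashier personality."))
# -> ['Contrast', 'Elaboration']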

Algorithms

Algorithms for Identifying Discourse Relations

At the foundation of every discourse parser lies an algorithm that identifies whether and what discourse relation holds between two discourse elements. All discourse relation identification algorithms proposed to date are built with the assumption that the discourse relation identification component is necessarily ambiguous: one cannot assert with complete certainty that a given discourse relation holds between two elements. Depending on the context, the relation may span smaller or larger text segments; or the context may provide sufficient evidence that another relation is more likely to hold between two discourse elements than the given one. The design patterns used to implement discourse relation identification algorithms are similar to those employed in many other natural language processing areas:

• Manually created rules: Depending on the discourse framework and observables used for identifying discourse relations, one manually creates rules that exploit constructs as simple as cue phrases (Marcu, 2000) or as sophisticated as logical expressions (Hobbs, 1990; Asher, 1993; Webber et al., 2003) in order to solve the task at hand.
• Supervised and unsupervised learning techniques: By capitalizing on sufficiently large, manually annotated corpora (Soricut and Marcu, 2003) or automatically annotated corpora (Marcu and Echihabi, 2002; Lapata and Lascarides, 2004), some researchers use traditional supervised and unsupervised machine learning techniques to construct discourse relation identifiers. The learning frameworks provide a natural means for expressing the uncertainty specific to this task, as the outputs of these algorithms have a probabilistic flavor (a toy classifier in this style is sketched after this list).
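In the spirit of the word-pair models of Marcu and Echihabi (2002), a learned relation identifier can be sketched as a Naive Bayes classifier over word pairs drawn from the two segments. This is a toy under stated assumptions: the tiny hand-made "corpus" and add-one smoothing are illustrative, not the published system, which trains on millions of automatically harvested examples:

# Toy Naive Bayes relation identifier over cross-segment word pairs.
from collections import defaultdict
from itertools import product
import math

# (left segment, right segment, relation) triples; invented examples.
TRAIN = [
    ("john is generous", "he gives to charity", "EVIDENCE"),
    ("it was raining", "the match was cancelled", "CAUSE"),
    ("mary is rich", "she is unhappy", "CONTRAST"),
    ("the roads were icy", "the bus was late", "CAUSE"),
]

pair_counts = defaultdict(lambda: defaultdict(int))  # relation -> word pair -> count
rel_counts = defaultdict(int)

for left, right, rel in TRAIN:
    rel_counts[rel] += 1
    for pair in product(left.split(), right.split()):
        pair_counts[rel][pair] += 1

def score(left, right, rel):
    """log P(rel) + sum of log P(word pair | rel), add-one smoothed."""
    total = sum(pair_counts[rel].values())
    vocab = len(pair_counts[rel]) + 1
    s = math.log(rel_counts[rel] / len(TRAIN))
    for pair in product(left.split(), right.split()):
        s += math.log((pair_counts[rel][pair] + 1) / (total + vocab))
    return s

def identify(left, right):
    return max(rel_counts, key=lambda rel: score(left, right, rel))

print(identify("the roads were icy", "the match was cancelled"))
# -> 'CAUSE' on this toy data, via shared word pairs with the training examples.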

Algorithms for Discourse Structure Derivation (Discourse Parsing)

The problem of imposing structure on top of sequences is specific not only to discourse parsing but to syntactic parsing as well. Hence, it comes as no surprise that many discourse parsing algorithms replicate, extend, and modify algorithms that have already been applied in the context of syntactic parsing.

• Discourse parsing as compositional semantics: Emulating earlier work in compositional semantics, some discourse parsers derive the discourse representation of a text in an incremental, compositional manner that exploits knowledge-rich constraints and inference mechanisms expressed in formal languages (Asher, 1993; Webber et al., 2003). In an effort to manage efficiently the set of alternatives associated with each incremental step, such approaches often rely on discourse-specific constraints that are not usually formalized in syntactic parsing frameworks. For example, such parsers often assume that a new discourse unit can be attached only to a node on the right frontier of the partial discourse structure constructed up to processing the new unit (Polanyi, 1988).
• Discourse parsing as a bottom-up derivation process: Expanding on earlier work on bottom-up syntactic parsing (nonprobabilistic and probabilistic), bottom-up discourse parsers construct discourse trees from leaves and smaller discourse constituents to larger constituents up to the root, by systematically exploring a large number of choices that can be ranked according to some scoring functions. Marcu (2000) and Soricut and Marcu (2003) provide good examples of such parsers that use both arbitrary and well-formed probabilistic scoring models (a toy version of this strategy is sketched after this list). Parsing in this framework is usually accomplished in two steps: in the first step, the elementary discourse units are identified; in the second, the discourse structure is built up.
• Discourse parsing as tree adjoining: Expanding on the tree adjoining syntactic parsing framework, lexicalized tree adjoining discourse parsers construct discourse structures using tree-level substitution and adjunction operations (Webber, 2004).
• Discourse parsing as a decision-based process: Emulating work on data-driven syntactic parsers, decision-based discourse parsers (Marcu, 2000) decompose the parsing process into a sequence of steps/decisions whose likelihood can be estimated from annotated data. In this framework, discourse parsing amounts to a search for the most probable sequence of decisions that consumes all the discourse units in the input and constructs a discourse tree that incorporates all the discourse units. The elementary discourse units are identified via separate mechanisms.
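The bottom-up strategy can be made concrete with a CKY-style dynamic program over elementary discourse units. The sketch below is a toy under stated assumptions – the random scoring function stands in for a learned model and the relation labels are placeholders; it is not the Marcu (2000) or Soricut and Marcu (2003) parser:

# Toy bottom-up (CKY-style) discourse parser: given n elementary
# discourse units, build the best-scoring binary tree over them.
import random
random.seed(0)

def score_relation(left_span, right_span):
    """Placeholder for a learned model scoring a relation between spans."""
    return random.random(), random.choice(["Elaboration", "Contrast", "Cause"])

def parse(n_units):
    best = {}  # (i, j) -> (score, tree) for the span of units i..j inclusive
    for i in range(n_units):
        best[(i, i)] = (0.0, i)
    for width in range(2, n_units + 1):
        for i in range(n_units - width + 1):
            j = i + width - 1
            candidates = []
            for k in range(i, j):  # split point: [i..k] + [k+1..j]
                s, rel = score_relation((i, k), (k + 1, j))
                total = best[(i, k)][0] + best[(k + 1, j)][0] + s
                candidates.append((total, (rel, best[(i, k)][1], best[(k + 1, j)][1])))
            best[(i, j)] = max(candidates)  # keep the best-scoring combination
    return best[(0, n_units - 1)]

print(parse(4))  # best tree over four elementary discourse units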

Performance

In contrast to work in syntactic parsing, discourse parsing is still in its infancy. Discourse parsers that employ techniques rooted in compositional semantics or tree-adjoining cannot yet be applied to open-domain documents. Also, to my knowledge, the accuracy of these parsers has never been reported, so it is difficult to estimate how good they are even in a limited domain. Evaluations carried out on open-domain parsers show that much remains to be done before these parsers can make a significant impact on a wide variety of applications. Identifying elementary discourse units can be done with an accuracy of approximately 84%, while the labeled recall and precision on discourse structure derivation is around 50–60% (Marcu, 2000; Soricut and Marcu, 2003). The relation between syntax and discourse appears to be sufficiently well understood to yield sentence-level discourse parse trees that are very similar to those built by humans, under the assumption that one has access to a perfect syntactic parser and not a state-of-the-art one, which has an accuracy of only 90% (Soricut and Marcu, 2003). However, at the text level, too little is known about the correlates between discourse and other observable linguistic constructs to enable one to derive high-accuracy discourse parse trees. In spite of this, not all hope is lost. Until we are able to understand better the relation between text structure, reference, and semantics, we may still be able to find niches where crummy discourse parsers can have a positive impact on open-domain applications. For example, Marcu (2000) has shown that automatically produced discourse trees can be used to build high-performance summarization systems in the popular science genre. Exploiting discourse parsers in the context of machine translation and question answering applications also appears to be a promising research direction.

Public Resources

The most comprehensive corpus available to researchers today is the RST Discourse Treebank (Carlson et al., 2003). The corpus, which is made available via the Linguistic Data Consortium, provides access to RST-annotated trees for approximately one-sixth of the Penn Treebank. Another ongoing effort targets the annotation of coherence relations associated with discourse markers, including their argument structures and anaphoric links (Miltsakaki et al., 2004). Soricut and Marcu (2003) have also made their sentence-level discourse parser publicly available. These, and other annotation efforts, are likely to have a positive impact on subsequent developments in the field.

See also: Anaphora Resolution: Centering Theory; Context; Context and Common Ground; Discourse Domain; Discourse Representation Theory.

Bibliography
Asher N (1993). Reference to abstract objects in discourse. Dordrecht: Kluwer Academic Publishers.
Burstein J, Marcu D & Knight K (2003). ‘Finding the WRITE stuff: automatic identification of discourse structure in student essays.’ IEEE Intelligent Systems Jan/Feb, 32–39.
Carlson L, Marcu D & Okurowski M E (2003). ‘Building a discourse-tagged corpus in the framework of rhetorical structure theory.’ In van Kuppevelt J & Smith R (eds.) Current directions in discourse and dialogue. Dordrecht: Kluwer Academic Publishers. 85–112.
Grosz B J & Sidner C L (1986). ‘Attention, intentions, and the structure of discourse.’ Computational Linguistics 12(3), 175–204.
Hearst M (1997). ‘TextTiling: segmenting text into multi-paragraph subtopic passages.’ Computational Linguistics 23(1), 33–64.
Hobbs J R (1990). Literature and cognition. CSLI Lecture Notes 21. Stanford, CA: CSLI Publications.
Kehler A (2002). Coherence, reference, and the theory of grammar. Stanford, CA: CSLI Publications.
Lapata M & Lascarides A (2004). ‘Inferring sentence-internal temporal relations.’ In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004), Boston, MA, May 2–7. 153–160.
Mann W C & Thompson S A (1988). ‘Rhetorical structure theory: towards a theory of text organization.’ Text 8(3), 243–281.
Marcu D (2000). The theory and practice of discourse parsing and summarization. Cambridge, MA: The MIT Press.
Marcu D & Echihabi A (2002). ‘An unsupervised approach to recognizing discourse relations.’ In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), Philadelphia, PA, July 7–12. 368–375.
Miltsakaki E, Prasad R, Joshi A & Webber B (2004). ‘The Penn Discourse Treebank.’ In Proceedings of the Language Resources and Evaluation Conference, Lisbon, Portugal.
Morris J & Hirst G (1991). ‘Lexical cohesion computed by thesaural relations as an indicator of the structure of text.’ Computational Linguistics 17(1), 21–48.
Polanyi L (1988). ‘A formal model of the structure of discourse.’ Journal of Pragmatics 12, 601–638.
Soricut R & Marcu D (2003). ‘Sentence level discourse parsing using syntactic and lexical information.’ In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2003), Edmonton, Canada, May 27–June 1. 228–235.
Webber B (2004). ‘D-LTAG: extending lexicalized TAG to discourse.’ Cognitive Science 28(5), 751–759.
Webber B, Stone M, Joshi A & Knott A (2003). ‘Anaphora and discourse structure.’ Computational Linguistics 29(4), 545–587.

Relevant Websites

http://www.ldc.upenn.edu – Linguistic Data Consortium.
http://www.isi.edu – Soricut and Marcu's sentence-level discourse parser.

Discourse Representation Theory

J van Eijck, Centre for Mathematics and Computer Science, Amsterdam, The Netherlands and Research Institute for Language and Speech, Utrecht, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

The Problem of Unbound Anaphora

The most straightforward way to establish links between anaphoric pronouns and their antecedents is to translate the pronouns as variables bound by their antecedents. This approach does not work when the link crosses a sentence boundary, as in example (1).

(1) A man₁ met an attractive woman₂. He₁ smiled at her₂.

It should be possible to interpret the first sentence of this discourse as soon as it is uttered, and then later on, while processing the second sentence, establish the links between the pronouns and their intended antecedents. One possible solution is translating the indefinites by means of existential quantifiers with scopes extending beyond the sentence level and then allowing the variables for the pronouns to be captured by these quantifiers. But this will not do: at some point the scope of a quantifier must be ‘closed off,’ but further on another pronoun may occur that must be linked to the same antecedent. The bound variable approach to anaphora also fails for cases where a pronoun in the consequent of

a conditional sentence is linked to an indefinite noun phrase in the antecedent of the conditional, as in example (2).

(2) If a man₁ meets an attractive woman₂, he₁ smiles at her₂.

A possible approach here would be to view (2) as a combination of the noun phrases a man and an attractive woman with a structure containing the appropriate gaps for antecedents and pronouns, viz., (3). This is the approach of quantifying-in, taken in traditional Montague grammar (see Montague Semantics).

(3) If PRO₁ man meets PRO₂, PRO₁ smiles at PRO₂.

This approach does not work here, however. Quantifying-in the indefinite noun phrases in (3), i.e., in a structure that has the conditional already in place, would assign the wrong scope to the indefinites with respect to the conditional operator. Note that the meaning of (2) is approximately the same as that of (4).

(4) Every man who meets an attractive woman₁ smiles at her₁.

In this case as well, quantifying-in does not allow one to generate the most likely reading where the subject of the sentence has wide scope over the embedded indefinite. Sentences with the patterns of (2) and (4) have reached the modern semantic literature through Geach (1962). Geach’s discussion revolves


around examples with donkeys, so these sentences became known in the literature as 'donkey sentences.' As has repeatedly been remarked in the literature, there are quite striking structural parallels between nominal and temporal anaphora. The past tense can be viewed as an anaphoric element in all those cases where it is not to be understood as 'sometime in the past' but as referring to some definite past time.

(5) John saw Mary. She crossed the street.

In example (5), presumably the seeing takes place at some specific time in the past, and the crossing takes place immediately after the seeing. Again, we have an anaphoric link across sentence boundaries, and a traditional operator approach to tense does not seem to fit the case. Although tense is not treated in the pioneer papers on discourse representation, it is clear that the problem of temporal anaphora is a very important subproblem of the general anaphora problem that discourse representation theory sets about to solve.

Basic Ideas

Discourse representation theory as it was presented in Kamp (1981) addressed itself specifically to the problem of the previous section, although confined to nominal anaphora. The basic idea of the approach is that a natural language discourse (a sequence of sentences uttered by the same speaker) is interpreted in the context of a representation structure. The result of the processing of a piece of discourse in the context of representation structure R is a new representation structure R′; the new structure R′ can be viewed as an updated version of R. The interpretation of indefinite noun phrases involves the introduction of 'discourse referents' or 'reference markers' for the entities that a piece of discourse is about. In the following, the term 'discourse referent' will be used. Discourse referents are essentially free variables. Thus, indefinite noun phrases are represented without using existential quantifiers. The quantification is taken care of by the larger context. It depends on this larger context whether an indefinite noun phrase gets an existential reading or not. The life span of a discourse referent depends on the way in which it was introduced. All 'alive' referents may serve as antecedents for anaphors in subsequent discourse. Anaphoric pronouns are represented as free variables linked to appropriate antecedent variables. Definite descriptions in their simplest use are treated in a way that is similar to the treatment of anaphoric pronouns: definite noun phrases in their anaphoric use are treated like indefinite noun

phrases; i.e., they are translated as free variables, but give rise to additional anaphoric links. The treatment of other, functional uses of definite noun phrases (as in A car crashed. The driver emerged unhurt.) is more involved. The difference between indefinite noun phrases, on the one hand, and definite noun phrases and pronouns, on the other, is that indefinites introduce new variables, whereas the variables introduced by definites and pronouns are always linked to an already established context. In other words, the difference between definites (including pronouns) and indefinites is that the former refer to entities that have been introduced before, i.e., to familiar entities, whereas the latter do not. Quantifier determiners, i.e., determiners of noun phrases that are neither definite nor indefinite, can bind more than one variable. Specifically, they can bind a block of free variables, some of which may have been introduced by indefinites. Conditional operators (if . . . then . . . constructions) can also bind blocks of free variables. Not all variables introduced by indefinites are in the scope of a quantifier or a conditional operator. Those that are not are existentially quantified over by default. The processing of a piece of discourse is incremental. Each next sentence to be processed is dealt with in the context of a structure that results from processing the previous sentences. The processing rules decompose a sentence, replacing the various parts by conditions to be added to the structure. Assume one is processing discourse (6) in the context of representation structure (7), containing just one discourse referent and one condition.

(6) A man walked down the street. He whistled.
(7) (x) (street(x))

As was mentioned before, indefinite noun phrases give rise to new discourse referents, and definite noun phrases are linked to existing discourse referents. The indefinite in the first sentence of (6) introduces a new discourse referent y and two conditions, man(y) and y walked down the street. The second condition can be decomposed further by introducing a fresh discourse referent in the structure, linking this discourse referent to an existing discourse referent, and replacing the definite noun phrase with the discourse referent in two new conditions. This gives three new conditions all in all: z = x, street(z), and walked-down(y, z). The discourse representation structure now looks like (8).

(8) (x, y, z) (street(x), man(y), z = x, street(z), walked-down(y, z))


Processing the second sentence of (6) gives rise to a new link and a new condition. The final result is (9).

(9) (x, y, z, u) (street(x), man(y), z = x, street(z), walked-down(y, z), u = y, whistled(u))
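The incremental bookkeeping in (6)–(9) can be made concrete. The following Python fragment is a minimal illustrative sketch, not part of Kamp's formal definitions; the class and method names are invented here. Indefinites introduce fresh referents with a descriptive condition, while definites and pronouns introduce referents linked to antecedents.

```python
# Minimal sketch of incremental DRS construction, mirroring steps (7)-(9).
# All names (Drs, indefinite, definite) are illustrative inventions.

class Drs:
    def __init__(self):
        self.referents = []    # discourse referents introduced so far
        self.conditions = []   # atomic and link conditions, as strings

    def fresh(self):
        ref = "xyzuvw"[len(self.referents)]   # enough for this small example
        self.referents.append(ref)
        return ref

    def indefinite(self, predicate):
        """An indefinite NP: fresh referent plus a descriptive condition."""
        ref = self.fresh()
        self.conditions.append(f"{predicate}({ref})")
        return ref

    def definite(self, antecedent, predicate=None):
        """A definite NP or pronoun: fresh referent linked to an antecedent."""
        ref = self.fresh()
        self.conditions.append(f"{ref} = {antecedent}")
        if predicate:                         # definites also carry a description
            self.conditions.append(f"{predicate}({ref})")
        return ref

# 'A man walked down the street', processed against structure (7):
drs = Drs()
x = drs.indefinite("street")                  # the initial structure (7)
y = drs.indefinite("man")
z = drs.definite(x, "street")                 # 'the street' links z to x
drs.conditions.append(f"walked-down({y}, {z})")
u = drs.definite(y)                           # 'He whistled': u links to y
drs.conditions.append(f"whistled({u})")
print(drs.referents, drs.conditions)          # mirrors structure (9)
```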

All representation conditions in the above example are atomic. Quantified noun phrases or logical operators, such as conditionals or negations, give rise to complex conditions. The representation structure for (4) given in (10) provides an example.

(10) ((x, y) (man(x), woman(y), attractive(y), meet(x, y))) ⇒ (( ), (smiles-at(x, y)))

Note the appearance of an arrow ⇒ between components of the structure, glueing two nonatomic pieces of representation together. Note also that the right-hand component starts with an empty list ( ), to indicate that on the right-hand side no new discourse referents are introduced. In the box format that many people are perhaps more familiar with, (10) looks like (11).

(11) [box-format diagram of (10); not reproduced]

Formal definitions and truth conditions for these representation structures are given in the next section. Kamp (1981) and Kamp and Reyle (1990) spell out the rules for processing sentences in the context of a representation structure in all the required formal detail. An important feature of the rules is that they impose formal constraints on availability of discourse referents for anaphoric linking. Roughly, the set of available discourse referents consists of the discourse referents of the current structure, plus the discourse referents of structures that can be reached from the current one by a series of steps in the directions left (i.e., from the consequent of a pair R ⇒ R′ to the antecedent) and up (i.e., from a structure to an encompassing structure). The constraints on discourse referent accessibility are used to explain the awkwardness of anaphoric links, as in (12).

(12) *If every man₁ meets an attractive woman₂, he₁ smiles at her₂.

Such data can be disputed, but space does not permit such indulgence here. Discourse referents for proper names are always available for anaphoric reference; to reflect this fact, such discourse referents are always included in the list of discourse referents of the top-level structure.

To account for deictic uses of pronouns, use is made of anchored structures. An anchored structure is a pair consisting of a representation structure R and a function f, where f is an anchor for a subset of the discourse referents of R; i.e., f assigns appropriate individuals in a model to these discourse referents. For example, structure (7) could be anchored by mapping discourse referent x to an appropriate street. Deictic pronouns are handled by linking them to anchored discourse referents. Essentially the same approach to natural language analysis as was proposed in Kamp (1981) is advocated in Heim (1982). Heim uses the metaphor of a filing cabinet: the established representation structure R is a file, and additions to the discourse effect a new structure R′, which is the result of changing the file in the light of the new information (see Dynamic Semantics). The main program of discourse representation theory (in its generic sense) is an attempt to regard semantic interpretation as a dynamic process mapping representations plus contexts to new representations plus contexts. As Partee (1984) remarked, this shift from static semantics to dynamic semantics cum pragmatics means an enrichment of the enterprise of formal semantics and should therefore make it easier to establish contact with other schools of semantics and/or pragmatics. Partee's prediction was proved correct in subsequent years by the widespread use of discourse representation theory in computational linguistics and by the application of techniques of anaphora resolution from Artificial Intelligence in systems based on discourse representation theory. Discourse representation theory has also provided new inspiration to traditional Montague grammarians, who tend to be less than satisfied with the contextual rules for analyzing discourse on the grounds that the influence of context makes it difficult to work out what contribution individual phrases make to the meaning of the whole. A suitable dynamic perspective on the process of interpretation has shown these compositionality qualms to be unfounded, and discourse representation theory has been instrumental in bringing about this dynamic turn (see Dynamic Semantics for details). Heim (1990) contains a perceptive appraisal of various alternatives to the approach of discourse representation theory (in its generic sense) to the problem of unbound anaphora.

Discourse Representation Structures (DRSs)

Formally, a discourse representation structure R consists of two parts: a finite list of discourse referents


and a finite list of conditions. The discourse referents in the list are called the discourse referents of R. The conditions of a structure R may contain discourse referents that are not included in the list of discourse referents of R. Conditions can be atoms, links, or complex conditions. An atom is a predicate name applied to a number of discourse referents; a link is an expression t = r, where r is a discourse referent and t is either a proper name or a discourse referent. The clause for complex conditions uses recursion; a complex condition is a condition of the form R ⇒ R′, where R and R′ are discourse representation structures. Next, one defines truth for discourse representation structures with respect to a model. Call M = ⟨D, I⟩ an appropriate model for discourse representation structure R if I maps the discourse referents of R to members of D, the n-place predicate names in the atomic conditions of R to n-place relations on D, the names occurring in the link conditions of R to members of D, and (here is the recursive part of the definition) M is also appropriate for the structures in the complex conditions of R. Let M = ⟨D, I⟩ be an appropriate model for structure R. An assignment in M = ⟨D, I⟩ is a mapping of discourse referents to elements of D. Assignment f verifies R in M if there is an extension f′ of f with the following properties:

1. f′ is defined for all discourse referents of R and for all discourse referents occurring in atomic or link conditions of R.
2. If P(r₁, . . . , rₙ) is an atomic condition of R, then ⟨f′(r₁), . . . , f′(rₙ)⟩ ∈ I(P).
3. If t = r is a link condition of R, and t and r are both discourse referents, then f′(t) = f′(r); if t is a proper name and r a discourse referent, then I(t) = f′(r).
4. If R₁ ⇒ R₂ is a complex condition of R, then every assignment for R₁ that verifies R₁ and agrees with f′ on all discourse referents that are not discourse referents of R₁ also verifies R₂.
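Over a finite model, clauses 1–4 can be checked mechanically. The following Python sketch is an illustration only: the encoding of a DRS as a pair (referents, conditions) and the function names are invented here, and verification proceeds by brute-force search over assignment extensions.

```python
from itertools import product

# Brute-force verifier for DRSs over a finite model, following clauses 1-4.
# Conditions are tuples: ('atom', P, args), ('link', t, r), or
# ('imp', drs1, drs2). Encoding and names invented for illustration.

def verifies(f, drs, model):
    refs, conds = drs
    domain, interp = model
    new = [r for r in refs if r not in f]     # referents still needing a value
    for values in product(domain, repeat=len(new)):
        g = {**f, **dict(zip(new, values))}   # an extension f' of f (clause 1)
        if all(holds(c, g, model) for c in conds):
            return True
    return False

def holds(cond, g, model):
    domain, interp = model
    if cond[0] == 'atom':                     # clause 2: tuple must be in I(P)
        _, pred, args = cond
        return tuple(g[a] for a in args) in interp[pred]
    if cond[0] == 'link':                     # clause 3: same entity on both sides
        _, t, r = cond
        left = g[t] if t in g else interp[t]  # t: referent or proper name
        return left == g[r]
    if cond[0] == 'imp':                      # clause 4: every verifier of R1
        _, d1, d2 = cond                      # must also verify R2
        refs1, conds1 = d1
        for values in product(domain, repeat=len(refs1)):
            h = {**g, **dict(zip(refs1, values))}
            if all(holds(c, h, model) for c in conds1) \
                    and not verifies(h, d2, model):
                return False
        return True
    raise ValueError(cond[0])

# 'If a man meets a woman, he smiles at her' in a two-element model:
model = ({'a', 'b'},
         {'man': {('a',)}, 'woman': {('b',)},
          'meet': {('a', 'b')}, 'smile-at': {('a', 'b')}})
drs = ([], [('imp',
             (['x', 'y'], [('atom', 'man', ['x']), ('atom', 'woman', ['y']),
                           ('atom', 'meet', ['x', 'y'])]),
             ([], [('atom', 'smile-at', ['x', 'y'])]))])
print(verifies({}, drs, model))               # True
```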

A structure R is true in M if the empty assignment verifies R in M. These definitions can be modified to take anchors into account in the obvious way, by focusing on assignments extending a given anchor. Clearly, the expressive power of this basic representation language is quite limited. In fact, there is an easy recipe for translating representation structures to formulae of first-order predicate logic. Assuming that discourse referents coincide with predicate logical variables, the atomic and link conditions of a representation structure are atomic formulae of predicate logic. The translation function °, which maps representation structures to formulae of predicate logic, is defined as R° = ⋀ Cᵢ°, where ⋀ indicates a finite conjunction and the Cᵢ° are the translations of the conditions of R. The translation for conditions is in turn given by the following clauses.

• For atomic conditions: C° = C.
• For complex conditions: (R₁ ⇒ R₂)° = ∀x₁ . . . ∀xₙ(R₁° → ∃y₁ . . . ∃yₘ R₂°), where x₁, . . . , xₙ is the list of discourse referents of R₁ and y₁, . . . , yₘ the list of discourse referents of R₂.

It is easy to show that R is true in M under the definition given above if and only if R° is true in M for some assignment, under Tarski's definition of truth for first-order predicate logic. A slight extension of the discourse representation language allows for the treatment of negation. Negated conditions take the form ¬R, where R is a discourse representation structure. Negations of atomic conditions are treated as negations of discourse representation structures containing just one atomic condition. The discourse referents of a negated structure are not available for anaphoric linking outside that structure. The definition of satisfaction must take negated conditions into account. Here is the required extension of the definition. Assignment f verifies R in M if there is an extension f′ of f with the following properties:

1–4. As above.
5. If ¬R′ is a complex condition of R, then no assignment that agrees with f′ on all discourse referents that are not discourse referents of R′ verifies R′.

Translation into predicate logic now must take care of negation as well. The translation clause for negated conditions runs as follows:

(¬R)° = ¬∃x₁ . . . ∃xₙ R°

Here x₁, . . . , xₙ is the list of discourse referents of R. It is easy to see that the given translation is meaning-preserving. It is also not difficult to give a meaning-preserving translation in the other direction. This shows that the discourse representation language extended with negation has precisely the same expressive power as first-order predicate logic.
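The translation recipe, including the negation clause, can be rendered as a short string-building sketch. This reuses the invented DRS encoding from the earlier sketch and is illustrative only; A and E stand in for ∀ and ∃.

```python
# Sketch of the translation function ° into first-order predicate logic,
# covering atomic/link conditions, implication, and negation.

def translate(drs):
    _, conds = drs
    return ' & '.join(translate_cond(c) for c in conds) if conds else 'T'

def translate_cond(cond):
    if cond[0] == 'atom':
        _, pred, args = cond
        return f"{pred}({','.join(args)})"
    if cond[0] == 'link':
        _, t, r = cond
        return f"{t} = {r}"
    if cond[0] == 'imp':      # (R1 => R2)deg = Ax1..Axn (R1deg -> Ey1..Eym R2deg)
        _, d1, d2 = cond
        univ = ''.join(f"A{x} " for x in d1[0])
        exis = ''.join(f"E{y} " for y in d2[0])
        return f"{univ}({translate(d1)} -> {exis}{translate(d2)})"
    if cond[0] == 'neg':      # (~R)deg = ~Ex1..Exn Rdeg
        _, d = cond
        exis = ''.join(f"E{x} " for x in d[0])
        return f"~{exis}({translate(d)})"
    raise ValueError(cond[0])
```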

Extensions: Tense and Plurals

Partee (1984) gave a survey of proposals to extend discourse representation theory with discourse referents for times and events to exploit the parallels between nominal and temporal anaphora. In example (5) from the first section, where first reference is made to a seeing event in the past and then to an event of crossing the street that takes place immediately after the seeing event, an anchoring mechanism can be


used to link the seeing event to the appropriate time, and an anaphoric link between events can constrain the time of the crossing event in the appropriate way. Also, the dynamic effect of shifting the reference time can be incorporated by using a designated discourse referent for the reference time and specifying that this discourse referent be updated as a side effect of the processing of sentences denoting events. Next, there are examples where a reference is picked up to an indefinite time in the past. (13) Mary arrived during the day. She let herself into the house.

In example (13), the arrival takes place at some indefinite time on a specific day (presumably anchored) in the past. The event of Mary’s entering the house is then linked to the time of arrival. Again, all that is needed is the introduction of an event discourse referent for the arrival event and an appropriate linking of this event discourse referent to the reference time discourse referent: the reference time discourse referent starts pointing at a time interval just after the time of arrival. The processing of the next sentence introduces an event that is constrained to be included in the reference time interval and has again as a side effect that the reference time discourse referent is shifted to refer to a time interval just after the house-entering event. Sentence (14) provides an example of quantification over times. (14) When Bill called, Mary was always out.

The example gives rise to a complex representation of the form R ⇒ R′, with an event discourse referent and a reference time discourse referent introduced in the left-hand structure, and a state discourse referent in the right-hand structure, with the state constrained to include the reference time interval. An operator account of tenses and temporal adverbs has the awkwardness that the tense operator is redundant if a temporal adverb is present, as in (15), but not otherwise. Also, assigning the correct scopes to these operators poses problems.

(15) Bill called last Friday around noon.

In the discourse representation approach, where tenses translate into event or state variables linked to an appropriate reference time, temporal operators are simply translated as predications on the event discourse referent, and the awkwardness vanishes. See Kamp and Rohrer (1983) and Partee (1984), plus the references cited therein, for details. As for the incorporation of the singular versus plural distinction, an obvious first move in any

attempt to accommodate plural anaphoric pronouns is to make a distinction between singular and plural discourse referents. Singular pronouns are linked to singular discourse referents, and plural pronouns are linked to plural discourse referents. Plural indefinite noun phrases (some women, three men) introduce plural discourse referents, but it turns out that many other introduction mechanisms must be postulated to obtain a reasonable coverage of plural anaphoric possibilities. Plural discourse referents may result from summation of singular discourse referents. This is to account for uses of they that pick up a reference to a set of individuals that have been introduced one by one. Next, plural individuals may be the result of abstraction from complex conditions. Consider example (16).

(16) John bought every book Mary had mentioned. He started reading them straight away.

Obviously, them refers to the set of all books mentioned by Mary. No plural discourse referent is introduced by the first sentence, so the only way to make one available is by calling it into being through abstraction. So-called dependent plurals should be handled differently again, because here the plurality seems closely linked to syntax. Sentence (17) provides an example.

(17) All my friends have children.

It is clear that (17) is still true if each of my friends has exactly one child. Dependent plurals call for a kind of in-between discourse referent that is neutral between singular and plural. The chapter on plurals in Kamp and Reyle (1990) gives a very detailed account of these and related matters. See Plurality for further information on general issues in the interpretation of plurals.

Incorporating Generalized Quantifiers

Extending discourse representation theory with nonstandard quantifiers, and then getting the truth conditions right, is not completely straightforward.

(18) Most farmers who own a donkey beat it.

Applying a routine strategy for building a representation structure for example (18), one arrives at structure (19), where R ⇒(most) R′ is true if most verifying assignments for R are verifying assignments for R′.

(19) ((x, y)(farmer(x), donkey(y), own(x, y))) ⇒(most) (( )(beat(x, y)))

This analysis gives the wrong truth conditions, because it quantifies over farmer–donkey pairs


instead of individual farmers. In a situation where there are five kind farmers who each own one donkey and treat it well, and one cruel, rich farmer who beats each of his 10 donkeys, the analysis makes sentence (18) true, though intuitively it should be false in this situation. The remedy (proposed in Kamp and Reyle, 1990) involves a complication in the notation. Generalized quantifiers are introduced explicitly in the representation structures. The revised representation for (18) is (20).

(20) ((x, y)(farmer(x), donkey(y), own(x, y))) ⇒(most x) (( )(beat(x, y)))

At the place of most in (20) one could in principle have any generalized quantifier (see Quantifiers). In other words, for every binary generalized quantifier Q and every pair of representation structures R, R′, the following is a complex condition: R ⇒(Q v) R′. The truth conditions are modified to reflect what is expressed by the quantifier Q. Generalized quantifiers express relations between sets, so R ⇒(Q x) R′ is true in case the two sets are in the appropriate quantifier relation. The truth conditions must pick out the two relevant sets. Here is the new part of the definition. Assignment f verifies R in M if there is an extension f′ of f with the following properties:

1–5. As above.
6. If R₁ ⇒(Q v) R₂ is a complex condition of R, then f′ verifies this condition if the sets B and C are in the quantifier relation denoted by Q, where B = {b | f′ has an extension g with g(v) = b, which verifies R₁ in M} and C = {c | f′ has an extension h with h(v) = c, which verifies R₁ and R₂ in M}.

It is left to the reader to check that this gets the truth conditions for (18) correct. The following representation of (18) brings the incorporation of generalized quantifiers more in line with standard logical notation:

(21) [box-format diagram not reproduced]
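Clause 6 is straightforwardly computable over a finite model. The sketch below reuses the invented verifies function and DRS encoding from the earlier sketch; the names are illustrative, and most is read here as 'more than half', one standard choice.

```python
# Sketch of clause 6: compute the sets B and C and test the quantifier
# relation denoted by Q. Illustrative only.

def verifies_gq(f, v, d1, d2, quant, model):
    domain, _ = model
    # B: values b for v such that some extension verifies R1.
    B = {b for b in domain if verifies({**f, v: b}, d1, model)}
    # C: values c for v such that some extension verifies R1 and R2 jointly.
    combined = ([r for r in d1[0] + d2[0] if r != v], d1[1] + d2[1])
    C = {c for c in domain if verifies({**f, v: c}, combined, model)}
    return quant(B, C)

def most(B, C):
    return 2 * len(C) > len(B)    # 'more than half' reading of 'most'
```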

Discourse Structures and Partial Models

There is more than an occasional hint in the original papers of Kamp and Heim that discourse representation structures are closely connected to partial models. If the suggestion is not that these representation

structures are themselves partial models, it is at least that they are intended to be interpreted with respect to partial models. That the structures are themselves partial models cannot be right: complex conditions are constraints on models rather than model components. They specify general properties that a model must satisfy. Interpretation of discourse representation structures in partial models has never really been worked out. The truth definitions for representation structures, e.g., in Heim (1982), Kamp (1981), and Kamp and Reyle (1990), define satisfaction in classical (i.e., 'complete') models. Because the representation structures contain identity links and negated identity links, evaluation in partial models where not only the predicates used to translate the vocabulary of the fragment but also the identity predicate receives a partial interpretation is feasible. Interestingly, this sheds light on some puzzling aspects of identity statements. Current studies of partial model theory interpret identity as a total predicate (see Langholm, 1988). Partializing identity leads to a more radical form of partiality; it has the effect that the objects in the model are not proper individuals but rather proto-individuals that can still fuse into the same individual after some more information acquisition about the identity relation. Technically, this form of radical partiality can be implemented by evaluating discourse representation structures with respect to models where the identity relation is a partial relation. The formal development of a theory of partial identity involves an interpretation of identity as a pair ⟨I⁺, I⁻⟩, with I⁺ an equivalence relation that denotes the positive extension of identity, and I⁻ an anti-equivalence relation, that is to say, a relation that is irreflexive, symmetric, and anti-transitive, i.e., satisfying the requirement that if I⁻xy, then it holds for all z that I⁻xz or I⁻zy. The assumption that proto-individuals rather than regular individuals populate the partial models is attractive from the point of view of knowledge representation: often human beings have only partial information about identities. Famous paradoxes and puzzles are based on this fact. One example is Frege's morning star / evening star paradox; see the article Coreference: Identity and Similarity. Another is Saul Kripke's Pierre puzzle. Pierre is a Frenchman who has read about a famous and wonderful city he knows as Londres, and because of his reading he thinks that Londres is pretty. Later on, he is abducted and forced to work in a slum in a city that, as he learns, is called London, and this new experience leads him to conclude that London is ugly. The important point to note is that as long as all this information is processed with respect to a partial model where London and


Londres name different proto-individuals, Pierre’s beliefs are not incoherent. They only become incoherent once it is discovered that London and Londres are identical, i.e., once Pierre acquires additional information about the extension of the identity relation. From outside, from a situation where London and Londres are anchored to the same individual, the belief may seem incoherent as well, but the point is that Pierre does not have full information about the nature of this anchor. The example is discussed in the context of discourse representation theory in Asher (1986), but the solution proposed there is still phrased in terms of classical models.
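The defining properties of such a partial identity relation can be stated as executable checks. The following sketch is an illustration, not taken from the cited literature; relations are encoded as sets of pairs, and the names are invented.

```python
# Illustrative check on the components of a partial identity pair <I+, I->:
# I_plus should be an equivalence relation, I_minus an 'anti-equivalence'
# relation in the sense defined above.

def is_anti_equivalence(i_minus, domain):
    irreflexive = all((x, x) not in i_minus for x in domain)
    symmetric = all((y, x) in i_minus for (x, y) in i_minus)
    # anti-transitivity: if I_minus(x, y), then for all z,
    # I_minus(x, z) or I_minus(z, y)
    anti_transitive = all((x, z) in i_minus or (z, y) in i_minus
                          for (x, y) in i_minus for z in domain)
    return irreflexive and symmetric and anti_transitive

def is_coherent(i_plus, i_minus):
    # the positive and negative extensions of identity must not overlap
    return not (i_plus & i_minus)
```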

Reasoning with DRSs

The plausibility of using Discourse Representation Structures to model belief and other propositional attitudes is closely connected with the existence of cognitively plausible inference systems for DRSs. Proof theories for DRSs are given in Saurer (1993), Kamp and Reyle (1996), and Van Eijck (1999). The calculus of Van Eijck (1999) is perhaps the simplest of these, and we present it here. We switch to the version of DRT where DRS negation is primitive and D₁ ⇒ D₂ is defined in terms of negation. The precise definition is given below. A slight modification of the DRS definition is to make a distinction between the fixed discourse referents and the introduced discourse referents of a DRS (first proposed in Visser, 1994). This allows for a natural definition of DRT consequence. If a DRS is inferred, its fixed discourse referents are supposed to be supplied by the premises of the inference. They are 'fixed by the context of the inference,' so to speak. Thus, we view a DRS as a triplet consisting of a set of fixed referents F, a set of introduced referents I, and a set of conditions C₁ . . . Cₙ, constrained by the requirement that the free variables in Cᵢ must be among F ∪ I. Concretely, the syntax of DRT looks like this (equality statements left out for simplicity of exposition):

Definition 1 (DRT) [definition display not reproduced]

Conditions on the formation rule for a DRS:
1. {v₁ . . . vₙ} ∩ {vₙ₊₁ . . . vₘ} = Ø.
2. ⋃ᵢ M(Cᵢ) ⊆ {v₁ . . . vₘ}.

We define the condition D₁ ⇒ D₂ as [display not reproduced]. We will use ⊥ as an abbreviation of ¬⊤. The (active) discourse referents of a term t or condition C or DRS D are given by [display not reproduced].

Here is a semantics for DRT in terms of partial assignments, following the original set-up in Kamp (1981).

Definition 2 (Semantics of DRT) [definition display not reproduced]

Here g↾F denotes the restriction of the function g to the set F. The following definition of DRT consequence makes essential use of the distinction between fixed and introduced discourse referents.

Definition 3 (DRT Consequence) [definition display not reproduced]

A DRT calculus is given in Figure 1 (lists C₁ . . . Cₖ are abbreviated as C̄). The calculus uses substitution in constraints and DRSs. This notion is defined by [display not reproduced]. Of course, when a substitution [t/v] is mentioned in a rule, it is assumed that t is free for v in D. It is also assumed that all DRSs mentioned in the rules satisfy the syntactic well-formedness conditions for DRSs.

Figure 1 The calculus for DRT. [figure not reproduced]

Theorem 4 The calculus for DRT is sound.

Proof Induction on the basis that the test axiom is sound and that the rules preserve soundness. Here is one example soundness check, for the rule of marker introduction. Assume M, f, g ⊨ D. Then by the soundness of the premise, there is an h with [display not reproduced]. Thus, M, h ⊨ [t/v]C. Since v ∉ F, h′ given by h′(v) = [t]ᴹₕ and h′(w) = h(w) for all w ≠ v for which h is defined extends g. By (an appropriate DRT version of) the substitution lemma, [display not reproduced]. This proves [display not reproduced].

Theorem 5 The calculus for DRT is complete.

For the proof of this, we refer to Van Eijck (1999), where the proof system for DRT is related to a proof system for dynamic predicate logic (Groenendijk and Stokhof, 1991).
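The triplet view of a DRS introduced above, together with the two formation conditions of Definition 1, translates directly into a well-formedness check. Here is a minimal sketch, with an invented encoding in which a function free_vars plays the role of M(C):

```python
# Sketch of the triplet view of a DRS: fixed referents F, introduced
# referents I, and a list of conditions, with the two formation conditions
# of Definition 1 as executable checks. Encoding invented for illustration.

def well_formed(fixed, introduced, conditions, free_vars):
    disjoint = fixed.isdisjoint(introduced)      # {v1..vn} and {vn+1..vm} disjoint
    bounded = all(free_vars(c) <= fixed | introduced
                  for c in conditions)           # every M(Ci) inside F union I
    return disjoint and bounded

# Example: fixed referent x (supplied by the premises of an inference),
# introduced referent y:
conds = [('atom', 'man', ['x']), ('atom', 'greet', ['x', 'y'])]
fv = lambda c: set(c[2])
print(well_formed({'x'}, {'y'}, conds, fv))      # True
```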


The Treatment of Ambiguities

If an expression of a formal language is viewed as a tree, a partial specification of how the expression is built up from its components can be given by means of a description of constraints on syntax tree construction. This is the approach to the treatment of ambiguities taken in Underspecified DRT (UDRT) (Reyle, 1993). In UDRT, a DRS is viewed as a tree, and an UDRS is viewed as a set of constraints on tree formation.

(22) All students found most solutions.

To represent the scope ambiguity between the two quantifiers in (22), one needs a representation that is 'in between' the following two DRSs: [the two DRS diagrams, (23) and (24), are not reproduced]

The UDRT solution is to take the DRSs apart, to label the parts, and to define an UDRS as a set of labeled DRS parts plus a list of constraints between labels. An UDRS for the example case has a top node ⊤ labeled l₀, nodes [part displays not reproduced] labeled l₁, l₂, l₃, respectively, and constraints l₀ ≥ l₁, l₀ ≥ l₂, l₁ ≥ l₃, l₂ ≥ l₃. Full disambiguation can be achieved by adding a further constraint. Adding the constraint l₁ ≥ l₂ disambiguates the UDRS to (23), while adding the constraint l₂ ≥ l₁ results in disambiguation to (24).
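The idea of an UDRS as labeled parts plus ordering constraints can be made concrete with a small sketch. The encoding below is invented for illustration: a fully specified reading is a linear scope order over the quantifier labels consistent with every constraint (written as pairs with the outscoping label first), and the UDRS is disambiguated once only one such order remains.

```python
from itertools import permutations

# Sketch of UDRS disambiguation as constraint filtering over scope orders.

def readings(labels, constraints):
    result = []
    for order in permutations(labels):
        pos = {l: i for i, l in enumerate(order)}
        if all(pos[hi] < pos[lo] for hi, lo in constraints):
            result.append(order)
    return result

quants = ['l1', 'l2']                   # the two quantifier parts of (22)
print(readings(quants, []))                 # two orders: underspecified
print(readings(quants, [('l1', 'l2')]))     # one order: fully disambiguated
```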

See also: Anaphora, Cataphora, Exophora, Logophoricity; Anaphora Resolution: Centering Theory; Context and Common Ground; Coreference: Identity and Similarity; Default Semantics; Donkey Sentences; Dynamic Semantics; Formal Semantics; Montague Semantics; Plurality; Propositional Attitudes; Quantifiers.

Bibliography

Asher N (1986). 'Belief in discourse representation theory.' Journal of Philosophical Logic 15, 127–189.
Geach P T (1962). Reference and generality: An examination of some medieval and modern theories (3rd revised edn., 1980). Ithaca/London: Cornell University Press.
Groenendijk J & Stokhof M (1991). 'Dynamic predicate logic.' Linguistics and Philosophy 14, 39–100.
Heim I R (1982). The semantics of definite and indefinite noun phrases. Ph.D. diss., University of Massachusetts, Amherst. Published 1987, New York: Garland Press.
Heim I R (1990). 'E-type pronouns and donkey anaphora.' Linguistics and Philosophy 13, 137–177.
Kamp H (1981). 'A theory of truth and semantic representation.' In Groenendijk J, Janssen T & Stokhof M (eds.) Formal methods in the study of language. Amsterdam: Mathematisch Centrum. 277–322.
Kamp H & Reyle U (1990). From discourse to logic: An introduction to the modeltheoretic semantics of natural language, formal logic and discourse representation theory. Dordrecht: Kluwer.
Kamp H & Reyle U (1996). 'A calculus for first order discourse representation structures.' Journal of Logic, Language and Information 5, 297–348.
Kamp H & Rohrer C (1983). 'Tense in texts.' In Bäuerle R, Schwarze C & von Stechow A (eds.) Meaning, use and interpretation of language. Berlin: Walter de Gruyter. 250–269.
Kripke S (1979). 'A puzzle about belief.' In Margalit A (ed.) Meaning and use. Dordrecht: Reidel.
Langholm T (1988). Partiality, truth and persistence. CSLI Lecture Notes 15. Stanford University (distributed by University of Chicago Press).
Partee B H (1984). 'Nominal and temporal anaphora.' Linguistics and Philosophy 7, 243–286.
Reyle U (1993). 'Dealing with ambiguities by underspecification: construction, representation and deduction.' Journal of Semantics 10, 123–179.
Saurer W (1993). 'A natural deduction system of discourse representation theory.' Journal of Philosophical Logic 22, 249–302.
Seuren P A M (1985). Discourse semantics. Oxford: Basil Blackwell.
Van Eijck J (1999). 'Axiomatising dynamic logics for anaphora.' Journal of Language and Computation 1, 103–126.
Van Eijck J & Kamp H (1997). 'Representing discourse in context.' In van Benthem J & ter Meulen A (eds.) Handbook of logic and language. Amsterdam: Elsevier. 179–237.
Visser A (1994). 'Digressions on and desiderata for the design of dynamic discourse denotations.' Lecture notes, Department of Philosophy, Utrecht University.


Discourse Semantics

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Discourse Semantics (DSx) holds that the meaning of a sentence A makes A usable only in a certain class of contexts called discourse domains or Ds, built up on the basis of preceding utterances and available situational and world knowledge. The production of a meaningful utterance by a speaker S starts with S's intent, in terms of a given D, to make public a proposition – that is, a mental structure in which S assigns a property to one or more objects – under some form of socially binding commitment or appeal, the intended force or speech act type. The intent is fed into the language machinery, which categorizes and structures the cognitive elements in such a way that a sentence is formed, then realized as an utterance. Utterance comprehension involves the reconstruction of S's intent (force plus proposition) by a listener L. The intent conveyed by each new utterance is incremented to D, which thus changes with each new increment. The first to propose mechanisms of this kind were Seuren (1972, 1975), Isard (1975), and Gazdar (1979). Since no theory of the force element in Ds is available, the present article is restricted to the incrementation of propositions only, implicitly assuming an overarching assertive (truth-vouching) force. The incrementation of an uttered sentence A, or i(A), is achieved in terms of addresses mentally representing the objects or sets of objects mentioned in A. These objects are of widely divergent kinds, including (sets of (sets of)) individuals, substances, facts, and all kinds of abstractions or reifications constructed by the mind as 'objects.' A special kind of 'object' is represented by subdomains. A subdomain represents a thought-up scenery of its own, to which a property has been assigned in the proposition at hand. In John believes that the earth is flat or It is unlikely that the earth is flat, the information conveyed by that the earth is flat is stored in a subdomain representing what John believes or what is unlikely. Besides addresses, a D may contain instructions, which constrain its further development. Negation of a sentence A, for example, results in an instruction to block i(A) in D. In this perspective, the linguistic meaning of a sentence A is the contribution potential of i(A) to any given D in virtue of A's linguistically defined properties. More formally, the linguistic meaning of a sentence A is a function from given Ds to incremented Ds.

The fact that interpretation is co-determined by a given D is reflected mainly by three kinds of phenomena (a) identification across domains and subdomains, (b) anaphora phenomena and (c) presuppositions. Identification across (sub)domains is illustrated by a sentence like The girl with brown eyes has blue eyes, which seems inconsistent and hence uninterpretable. Yet it makes perfect sense if interpreted as ‘the girl represented in the picture with brown eyes in reality has blue eyes,’ or as ‘the girl who in reality has brown eyes is represented in the picture as having blue eyes.’ Fauconnier (1985) posits ‘mental spaces,’ autonomous but interconnected cognitive domains of interpretation whose elements are open to denotation by definite NPs under certain conditions. Anaphora occurs when an anaphoric expression, usually a pronoun, takes over the (constant or variable) reference function of another expression, its antecedent, whose reference function is independently grounded. In DSx, an anaphoric expression selects the correct D-address via its antecedent. Presupposition, neglected in other semantic theories, is central to DSx. A presupposition A of a sentence B is a semantic property of B restricting its usability to those contexts to which A has already been incremented or which admit the incrementation of A without inconsistency or incompatibility with available knowledge. A sentence must be anchored in its context or D to be interpretable. Moreover, when used seriously (not in play or fiction), it requires a force field in which the social position-taking is valid, and the proposition expressed must be intentionally focused or keyed to a verification domain V in the world. Anchoring accounts for interpretation; the force field accounts for the social position-taking; keying accounts for truth or falsity. The building up of any D, whether serious or in play or fiction, is subject to the condition that it must be possible for D to be socially valid and true. This makes social liability conditions and truth conditions directly relevant to the analysis of meaning. The semantic analysis (SA) of the sentence to be incremented is the input to the incrementation procedure. The SA is a level of semantic representation consisting of a speech-act operator expressing the social position-taking and a semantically regular linguistic expression of the proposition, in terms of a variety of modern predicate calculus. Surface structures (SSs) too often conceal or distort their meanings (see Seuren, 1985: 61–110), which makes them unfit for direct semantic interpretation. A grammar G is required relating SSs to their corresponding SAs and vice versa.


DSx thus posits two intermediate stages between SSs and whatever they be about in any V: the linguistic object SA and the mental proposition anchored in a given D. This D has open access to a knowledge base (KB) of encyclopedic and situational knowledge. A set of incrementation rules (IC) relates SAs to Ds and vice versa. The overall structure of the theory is presented in Figure 1. DSx is divided into two main sections, incrementation and subdomain structures. The former deals with the incrementation procedure of linguistic clauses to the appropriate (sub)domains, the latter with the embedding of subdomains.

Figure 1 Overall structure of the theory. [diagram not reproduced]

Incrementation

The serial incrementation of sentences in a D is subject to semantic conditions, the most basic being the sequentiality condition (SC). SC does not apply to actual texts, which are hardly ever fully sequential (they would be unbearably verbose if they were), but to Ds. SC consists of three subconditions, presuppositional precedence, consistency, and informativity. The first requires that presuppositions are incremented before their carrier sentence. The second requires that a D, at any stage, must have the possibility of being true and thus must be logically and semantically consistent. The third requires every new increment to be informative – that is, the set of situations in which D is true gets further restricted with each new increment (see Presupposition). Thus, sequences of the form 'A and possibly A', or 'A and (A or B)' are, though logically coherent (the first conjunct entails the second), discoursewise unacceptable. Besides SC there are other structuring principles for texts and Ds not discussed here. Foremost among these is the principle of topic-comment modulation, based on the notion that each new sentence in a discourse is meant as an answer to an often implicit question. The emphasis in this section is on definite reference to individuals, quantification, and plurality. Consider, as a possible starting point of D, the deceptively simple sentence (1a), with its (matrix-S) SA (1b), represented as a linguistic tree structure in (1c), and resulting in the D-address (1d):

(1a) There was a cat.
(1b) anₓ[be(x), cat(x)]
(1c) [tree diagram not reproduced]
(1d) d–1[a | Cat(a)]

In (1b) and (1c), an is the existential quantifier (aka ∃), treated as a binary higher-order predicate over pairs of sets, with the subject term be(x) and the object term cat(x). The index x in anₓ binds the variables x in be(x) and Cat(x). An requires for truth that there be at least one element common to the sets denoted by the two terms. In standard logic, the existential quantifier posits actual existence of the common element. But that will not do for language, given sentences like (2a), which do not entail that the cat in question actually existed:

(2a) A cat was worshipped there.
(2b) anₓ[be worshipped(x), #cat(x)]
(2c) d–2[a | #Cat(a), Be worshipped(a)]
(3a) A child laughed.
(3b) anₓ[laugh(x), child(x)]
(3c) d–3[a | Child(a), Laugh(a)]


For the semantics of language, it is stipulated that when one term of the existential quantifier has an intensional predicate denoting a cognitive process and capable of yielding truth also for thought-up entities (Frege's thought predicates), while the other term is extensional, yielding truth only for actually existing objects, the extensional term is intensionalized, so that it applies to thought-up entities as well. Since in (3b) both terms are extensional by nature, as only really existing individuals can truthfully be said to laugh or to be a child, (3a) is rendered as (3b). But in (2a) be worshipped is a cognitive intensional predicate, as it may yield truth also for fictitious objects. Therefore, cat(x) is intensionalized to #cat(x), where # indicates that the set denoted by #cat(x) may also contain virtual (thought-up) cats. This allows true existential quantification over virtual objects (see Virtual Objects). Be(x) in (1) is taken to be intensional by nature (and thus not marked by #), denoting the axiomatically given nonnull set of actual or virtual objects that 'are there.' But since be(x) is not a thought predicate, it does not intensionalize the other term under an but is itself extensionalized when the other term is extensional. This ensures an entailment of actual existence for extensional terms, but also the absence of such an entailment for intensional terms, as in:

(4) There was an imaginary cat.

The SA-structure (1c) is the tree-structure counterpart of (1b). NP1 is the subject term, NP2 the object term, and ^ is a set-denoting operator: 'the set of things x such that ...' The grammatical process transforming (1c) into (1a) is not at issue here. (Roughly, NP2 is incorporated into V[an], forming the complex predicate V[an-^x[cat(x)]], which is then lowered into the position of the subject term x of be. For details, see Seuren, 1996: 300–309.) The present concern is the incrementation procedure IC turning (1c) into the D-address (1d). IC scans the SA-predicate first. An is an instruction to create a new singular address (a first-order address over individuals). An address label d–1 is set up, identifying the address for later reference. The contents of the address is given between large square brackets. Here 'a' is the address head, representing an and binding the variables. It stands for the first-order existential quantifier – a function from sets of individuals to truth values, typed ((e,t),t). The object term is incremented first, giving d–1[a | Cat(a)]. Normally, the subject term is then added, but not in the case of be(x), it being axiomatically understood that there are (actual or virtual) things. Therefore, only NP2 is incremented after the upright bar indicating the scope of a. (One notes the structural – but not the semantic – analogy of d–1 with the Russellian formula ∃x[cat(x)].) The address d–1 has a truth value and is read as 'there is/was a cat'. Existentially quantified addresses are open addresses. An open address is closed when denoted in a subsequent clause by a definite term. Address closure, symbolized by a double slash, changes the address head from being a function to truth-values to being a reference function selecting an object (the referent) from a set of objects. Reference functions over individuals are typed ((e,t),e) (taking a set and delivering an individual), and are thus type-reducing. The reference function has been a source of discomfort to modern semantics because it cannot be defined within the confines of standard compositional model theory: there is no way of selecting one individual from a plural set by mathematical means. For it to work, an external input from cognition is needed – a further indication that meaning is subservient to cognition. Let D contain d–1 as above and also an open address d–4[a | Mouse(a)]. Then a sentence like (5a), with the SA (5b), results in the two parallel increments (5c) and (5d).

(5a) The cat caught the mouse.
(5b) [tree diagram not reproduced]
(5c) d–1[a | Cat(a)] // Catch(1,4)
(5d) d–4[a | Mouse(a)] // Catch(1,4)

NP1 in (5b) reads ‘the x such that x is a cat,’ and analogously for NP2. In (5c) and (5d), the head a has been retyped from ((e,t),t) to ((e,t),e); the propositional function preceding closure – Cat(a) or Mouse(a) – denotes the input set typed (e,t). Catch(1,4) is the proposition saying that the individual selected by the reference function a in d–1 – the cat – caught the individual selected in d–4 – the mouse. IC first scans the predicate of S0. Catch being a binary lexical verb, IC is put to work on the definite NP1 first, to be followed by the definite NP2. The denotation procedure d for definite NPs is as follows: For any NPi under a definite-NP operator the: . the takes the predicate of the S under NPi and selects the matching address d–n. There must, in principle, be only one such address in D. . d–n is closed (if still open), and the SA-tree is added to the closed address, with the number n of d–n for the NPi-constituent.
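The interplay of address creation, closure, and the denotation procedure d can be simulated. The sketch below is an illustration only: the class names and the string encoding of conditions are invented here. It opens addresses for indefinites and closes the unique matching address when a definite NP is processed, mirroring (5a)–(5d).

```python
# Sketch of D-addresses with opening and closure. Closure retypes the head
# from a truth-value function to a reference function, modeled as a flag.

class Address:
    def __init__(self, label, conditions):
        self.label = label            # e.g. 'd-1'
        self.conditions = conditions  # e.g. ['Cat(a)']
        self.closed = False

class Domain:
    def __init__(self):
        self.addresses = []
        self.counter = 0

    def open_address(self, predicate):
        self.counter += 1
        addr = Address(f"d-{self.counter}", [f"{predicate}(a)"])
        self.addresses.append(addr)
        return addr

    def resolve_definite(self, predicate):
        """Procedure d: select the unique matching address and close it."""
        matches = [a for a in self.addresses
                   if f"{predicate}(a)" in a.conditions]
        if len(matches) != 1:
            raise ValueError("definite NP needs exactly one matching address")
        matches[0].closed = True
        return matches[0]

d = Domain()
d.open_address("Cat")
d.open_address("Mouse")
cat = d.resolve_definite("Cat")       # 'the cat' closes d-1
mouse = d.resolve_definite("Mouse")   # 'the mouse' closes d-2
# parallel increments, as in (5c)/(5d):
cat.conditions.append(f"Catch({cat.label},{mouse.label})")
mouse.conditions.append(f"Catch({cat.label},{mouse.label})")
```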


Thus, d(NP1) in (5b) selects d–1[a | Cat(a)], and NP1 is replaced with 1; d–1 is closed and the SA-tree, with 1 in place, is added to the now closed d–1: [display not reproduced]

The procedure is repeated for NP2 in d–1, yielding the two parallel increments: [displays not reproduced]

For practical reasons, trees are written as bracketed strings, giving (5c) and (5d), respectively. A sentence like (6a), with SA (6b), is incremented as follows, with D containing d–1[a | Cat(a)]:

(6a) The cat caught a mouse.
(6b) [tree diagram not reproduced]
(6c) d–4[a | Mouse(a), Catch(1,a)]
(6d) d–1[a | Cat(a)] // [b | Mouse(b), Catch(1,b)]

The new d–4, created in virtue of V[an], is fitted out with S2 and S1, in that order (with a for x): [display not reproduced]

d–1 in (6d) contains a subordinate open address. An open address can be stored under another address d–n provided it contains either the variable bound by d–n (for open addresses) or the definite term n (for addresses after closure). Double existential quantification is treated analogously. (7a), with SA (7b), yields the open address (7c):

(7a) A cat caught a mouse.
(7b) [tree diagram not reproduced]
(7c) d–1[a | Cat(a), [b | Mouse(b), Catch(a,b)]]

IC causes d–1 to be set up in such a way that the cat-address takes scope over the mouse-address: 'there is a cat a such that there is a mouse b such that a caught b'. Since, however, it is possible to refer subsequently to the mouse caught by the cat, an independent open address for the mouse in question is also required. To that end, an address of the form (8) is set up in virtue of a process of inferential bridging:

(8) d–4[a | Mouse(a), [b | Cat(b), Catch(b,a)]]

Here the mouse-address takes scope over the cat-address, but the difference is irrelevant, as the simple existential quantifier is symmetrical. The discourse may now continue with definite terms like the cat that caught a mouse, closing d–1, or the mouse that was caught by a cat, closing d–4. Now to plurality, which requires the notion of plural power set (Ppl). For any set X, Ppl(X) = P(X) minus Ø and all singleton sets. Thus, if X has cardinality n, Ppl(X) has cardinality 2ⁿ − (n + 1). Moreover, to distinguish distributive from group readings, the type-raising distributive operator '::', defined over predicates, is required for the language of SAs. Let [[P(x)]] denote the extension of P(x) – the set of individuals x such that x satisfies P – then [[::P(x)]] is defined as follows (x ranges over sets of individuals):

[[::P(x)]] =Def Ppl([[P(x)]])


The extension of ::P(x) is thus the set of sets of at least two individuals x such that each x satisfies P; '::happy(the children)' reads 'the set of children in question is among the sets of at least two individuals each of whom is happy'. When P is transitive and both of its terms are definite and plural, :: distributes indiscriminately over the subject and the object term referents. In the group reading, The men carried the bags reads as saying that the men as a group carried the bags as a group (while, say, the women carried the pots), leaving it open whether there were subgroups of men carrying subgroups of bags. The distributive reading says that each of the men carried one or more bags and each of the bags was carried by one or more men. The linguistic expression thus underdetermines the actual state of affairs. An open plural address is normally of the form (9c), representing (9a) with SA (9b):

(9a) There were cats.
(9b) [tree diagram not reproduced]
(9c) d–5[a | ::Cat(a)]

Some is the plural existential quantifier, again yielding truth just in case there is a nonnull intersection of at least one plural set of individuals of the two term extensions concerned, which now are sets of sets of individuals. Some is again an instruction to set up a new address of the right type. In (9c), a represents plural SOME and binds the variable. Example (9c) thus requires that there be at least one set of at least two actually existing cats. When Pred is second order by nature, typed ((e,t),t), the distributive operator :: is not needed. Then IC gives Pred(a), with the variable a, as in the singular address (10) with the second-order predicate platoon (army unit consisting of men), representing There is/was a platoon:

(10) d–6[a | Platoon(a)]

The two forms can be combined, as in:

(11a) d–5[a | ::Cat(a), Disperse(a)]
(11b) d–5[a | ::Cat(a), 3(a)]

representing, respectively, Some cats dispersed and There were three cats. Mathematically, an address can be of any order, yet language stops at second-order predicates requiring third-order addresses for their plurals, as in (12), which reads There are/were platoons. Any higher-order nouns are treated as second-order.

(12) d–7[ā | ::Platoon(ā)]

Plurals are difficult, and only some of the problems can be dealt with here. One such problem is the distinction between distributive and group readings. Group readings are incremented analogously to singular increments. Thus, given an open plural cat-address, such as d–5 in (9c), the addition They ran away (group reading) is incremented as in (13a), where closure has turned a into a reference function (((e,t),t),(e,t)), selecting a set from a set of sets:

(13a) d–5[a | ::Cat(a)] // Run away(5)
(13b) d–5[a | ::Cat(a)] // ::Run away(5)

To compute the truth value of 'Run away(5),' it is necessary to type-raise the predicate run away from (e,t) to ((e,t),t), so that it can process an input of type (e,t). Type-raising of a predicate P implies that, despite the transition from individuals to groups, the satisfaction conditions of P remain unchanged. This condition cannot be fulfilled by all predicates. Most nominal predicates, such as cat, dog, tree, house, being reserved for individual predication, disallow type-raising. Example (13a) is thus read intuitively as 'the set of cats referred to by [a | ::Cat(a)] is a set of at least two individuals running away as a group'. The distributive reading of They ran away is incremented as in (13b), which lets the cats in question run away individually. Typical group readings are found in sentences like:

(14a) The mice have been at the cheese.
(14b) The Americans were the first to land on the moon.

In their common reading, these do not imply that all the mice have been at the cheese, or that all Americans were the first to land on the moon, as they are about the mice, or the Americans, as a group. Now consider (15a) with the SAs (15b) and (15c), each of which represents a group reading and a distributive reading.

(15a) The men carried a bag.
(15b) [tree diagram not reproduced]

(15c) [tree diagram not reproduced]
(15d) d–8[a | ::Man(a)] // [b | Bag(b), (::)Carry(8,b)]
(15e) d–8[a | ::Man(a)] // (::)[b | Bag(b), Carry(8,b)]

In (15c), the main predicate is the propositional function S2, denoting the set of sets of at least two individuals that have a bag to carry, either as a group or individually. S2 is the tree-structure version of what is known in logic as a lambda predicate. Like any other semantically appropriate predicate, S2 can be placed under the distributive operator. The four readings of (15a) are as follows ((16b2) and (16c2) are equivalent):

(16b1) 'there was one bag carried by each of the men individually'
(16b2) 'there was one bag collectively carried by the men'
(16c1) 'each of the men carried a bag'
(16c2) 'the group of men collectively carried a bag'

Let D contain an open address d–8[a | ::Man(a)]. Then for (15b) IC creates a new open address d–9[b | Bag(b), (::)Carry(8,b)], saying that there is/was a bag which the men of d–8 carried collectively (without ::) or individually (with ::). d–8 is now closed, analogously to d–1 in (6d), representing the two readings (16b1) and (16b2). This gives (15d). Analogously for (15e). The group readings of (15b) and (15c) result in an identical d–8. Inferential bridging sets up an open singular bag-address in the group reading of (15e) – that is, (16c2) – but the distributive (16c1) requires an open plural bag-address, enabling subsequent reference to the bags that the men carried. The formal treatment of this form of inferential bridging has not been elaborated so far. The closure operation is important for empirical reasons, as appears, for example, from (17a) and (17b), whose semantic distinction is unaccounted for in most semantic theories:

(17a) Nob sent few letters that were rude.
(17b) Nob sent few letters, and they were rude.

The difference corresponds with the closure operation. Let D already contain an address d–10[a | 'Nob'(a)]. Both (17a) and (17b) set up a new address d–11 for the letters Nob sent. For (17a), d–11 is left open, but for (17b), d–11 is closed:

(18a) d–11[a | ::Letter(a), ::Rude(a), ::Send(10,a), Few(a)]
(18b) d–11[a | ::Letter(a), ::Send(10,a), Few(a)] // ::Rude(11)

We now consider the universal quantifier all, treated as a binary higher-order predicate, as in (19a) with SA (19b) and the IC-result (19c). D already contains d–12[a | ::Mouse(a)] ('there are mice'):

(19a) All (the) mice escaped.
(19b) (SA tree structure, not reproduced here)

(19c) d–12[a | ::Mouse(a)] == All([::Escape(x)], (12))

All takes a first-order definite plural object term and a second-order set-denoting subject term, delivering truth just in case the object set is an element of the subject set. This analysis provides a unified solution to the type problem caused by standard analyses for sentences like All (the) mice dispersed. The analysis ∀x(mouse(x) → disperse(x)) or, in generalized quantification, ∀x(disperse(x), mouse(x)) will not do, as 'disperse(x)' is of the wrong type. d–12 is closed, following the definite reference to a particular set of mice. '[::Escape(x)]' denotes the set of sets of at least two individually escaping individuals. 'All([::Escape(x)], (12))' says that the mice in question form a set of at least two individually escaping individuals. This makes All([::Escape(x)], (12)) equivalent to ::Escape(12), and it gives all existential import, as in traditional predicate calculus. The difference between all and plural the seems to be that while they both select the set M of elements defined by the address head after closure (the mice), and let the sentence say that M is a member of the set of sets E defined by ::escape, all specifically requires (redundantly) that no member of M be left out. This makes all unsuitable for higher-order predicates like numerous, but suitable for other higher-order predicates like disperse or sit in a circle. Every distinguishes itself from all mainly in that every excludes group readings. The specifics of each are not touched upon here.
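The set-theoretic truth condition just described lends itself to a small executable sketch. The following Python code is illustrative only (the function names and the finite-domain construction are assumptions of this sketch, not part of the theory); it renders All(E, M) as the test whether the object set M is an element of the set of sets E.

```python
from itertools import combinations

def distributive_sets(satisfiers):
    """A finite stand-in for the denotation of ::Escape: every set of at
    least two individuals, each of whom escapes individually."""
    return [set(c) for r in range(2, len(satisfiers) + 1)
            for c in combinations(sorted(satisfiers), r)]

def All(E, M):
    """True just in case the object set M is an element of the set of
    sets E -- which requires that no member of M be left out."""
    return M in E

M = {"m1", "m2", "m3"}                      # the mice of address d-12

E = distributive_sets({"m1", "m2", "m3"})   # every mouse escaped
print(All(E, M))                            # True

E = distributive_sets({"m1", "m2"})         # m3 did not escape
print(All(E, M))                            # False: M is not in E
```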


Finally, consider (20), which has both a group and a distributive reading. The existential predicates few and many say that the intersecting set required by the existential quantifier is small or large, respectively:

(20a) Few cats caught many mice.
(20b) (SA tree structure, not reproduced here)

(20c) d–7[a | ::Cat(a), Few(a), [b | ::Mouse(b), Many(b), (::)Catch(a,b)]]

The group reading is unproblematic: 'a small group of cats caught a large group of mice'. In this reading subsequent definite reference can be made to the large group of mice, as in These mice had escaped from a laboratory, which requires the inferentially added address:

(21) d–12[a | ::Mouse(a), Many(a), [b | ::Cat(b), Few(b), Catch(b,a)]]

But in the distributive reading, with ::Catch(b,a), subsequent definite reference is not possible, which means that inferential bridging of the kind at issue must be blocked. The passive of (20a):

(22) Many mice were caught by few cats.

is equivalent to (20a) only in the group reading. In the distributive reading, scope differences destroy the equivalence. As with the distributive reading of (15e), the formal criterion for this blocking has as yet not been elaborated.

Negation is treated as an abstract predicate in SA-structure, like the quantifiers. Its subject-S is what is normally called its scope. Thus, (23a) has the SA (23b), incremented as (23c):

(23a) The mouse did not escape.
(23b) (SA tree structure, not reproduced here)
(23c) d–4[a | Mouse(a)] == *Escape(4)

For IC, negation is an instruction preventing the addition of the predication immediately following it to the address in question. The non-negated predication must be normally incrementable ('have the right papers'). IC takes the subject-S of not in the SA-tree and processes it first without negation. Subsequently, not places an asterisk before the predicate. In the case at hand, there must, therefore, be an appropriate mouse-address available in D. * is presupposition-preserving and differs therefore from the negation in standard logic (see Presupposition). Open addresses can be negated, as in (24a), incremented as (24b):

(24a) No mouse was caught.
(24b) d–n[*a | Mouse(a), Be caught(a)]

This blocks the addition of 'a | Mouse(a), Be caught(a)' to any address in D. Therefore, negated open addresses cannot be closed: there is nothing to close (but see [27] below). Example (24a) is not equivalent with (25a) and (25b), but these are equivalent with (26a) and (26b):

(25a) All the mice were not caught.
(25b) d–12[a | ::Mouse(a)] == All([*::Be caught(x)], (12))
(26a) None of the mice were caught.
(26b) d–12[a | ::Mouse(a)] == [*b | ∈12(b), Be caught(b)]

Subdomain Structures

A D may contain subdomains, which are either alternatives or subordinates. Alternative subdomains are created by or (disjunction) and if (implication). i(A or B) is 'A / not(A) and B' – with the alternative disjuncts 'A' and 'not(A) and B'. The truth condition is that at least one disjunct be true. The tacit exclusion (negation) of the first disjunct in the second rests on the True Alternatives Condition for Ds, requiring that the alternative increments under disjunction be truly distinct. This explains the much debated 'exclusive' character of or as resulting from principles of coherent discourse construction. Implication is like disjunction: i(if A then B) consists of two alternative increments: 'A and B / not(A)', with the tacit inference that when 'not(A)' is chosen, 'B' is excluded, since if 'B' were added after 'not(A)', the disjunction of 'A' and 'not(A)' would have been vacuous.
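The alternative-increment treatment of or and if can be rendered schematically in code. In the following Python fragment the data structures are ad hoc stand-ins for increments, invented here purely to mimic the 'A / not(A) and B' pattern described above.

```python
# Increments are represented as bare lists of clauses; ('not', A) stands
# for the negation of A. The representation is a stand-in, nothing more.

def increment_or(A, B):
    """i(A or B) = 'A / not(A) and B': two truly distinct alternatives."""
    return [[A], [("not", A), B]]

def increment_if(A, B):
    """i(if A then B) = 'A and B / not(A)': choosing not(A) excludes B."""
    return [[A, B], [("not", A)]]

# (27a): the second alternative contains not(not(H)), i.e., tacitly
# 'Nancy has a husband', which licenses the pronoun 'he'.
print(increment_or(("not", "Nancy has a husband"), "he is Norwegian"))

# (27b): 'he' is licensed inside the first alternative.
print(increment_if("Nancy has a husband", "he is Norwegian"))
```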


This analysis accounts for definite anaphora after a negative existential disjunct, as in (27a), or a positive existential antecedent clause, as in (27b): (27a) Either Nancy has no husband or he is Norwegian. (27b) If Nancy has a husband, he is Norwegian.

Example (27a) is explained by the tacit incrementation of 'she has a husband' in the second disjunct, and (27b) by the fact that i(he is Norwegian) is made possible by the preceding i(Nancy has a husband) in the same alternative. Subordinate subdomains are a special kind of address. Like ordinary addresses, they can be open or closed. They also have D-properties, in that they may contain their own addresses, increments, and instructions. They are represented both as a special kind of address and as an indexed domain. A sentence like (28a) is incremented as (28b), where the variable D ranges over virtual facts. (28b1) reads 'there is a possible fact D'. Possible carries an instruction to set up a subdomain specifying the virtual fact in question. This subdomain is represented in (28b2), read as 'there is a planet called "Minerva" and it is inhabited':

(28a) There may be a planet Minerva and it may be inhabited.
(28b1) D–1[D | Possible(D)]
(28b2) |→ D–1: d–1[a | "Minerva"(a), Planet(a)] == Inhabited(1)

It in (28a) refers opaquely, as it finds its antecedent within D–1. Transparent reference, with d–13 given in the superordinate D, is shown in (29). In (29c), d–13 is closed and the predication D–1[Inhabited(13)] is added, saying that 13 is represented as being inhabited in subdomain D–1:

(29a) There is a planet Minerva and it may be inhabited.
(29b) D–1[D | Possible(D)]  |→ D–1: Inhabited(13)
(29c) d–13[a | "Minerva"(a), Planet(a)] == D–1[Inhabited(13)]

A few general principles hold for subdomains. First, addresses from the commitment domain 'percolate downward' into subdomains, as shown in (29), where d–13 in D–1 is taken from D. This downward percolation is stopped only if the subdomain in question explicitly blocks the address in question. Then, presuppositions of clauses incremented in

subdomains 'percolate upward' into higher domains, including D, unless blocked either by their explicit negation or by lack of cognitive backing. This process is called projection. Both processes follow from the principles of Maximal Unity (MaU) and Minimal Size (MiS), which serve the functional purpose of ensuring maximal unity and coherence in the overall D-structure. Anaphora may delve into subdomains under intensional predicates. In (30), for example, the brother Marion believes she has is anaphorically referred to by he under the intensional predicate be the talk of the town:

(30) Marion believes that she has a brother, and he is the talk of the town.

The machinery of the incremental construction of discourse domains and subdomains is the main explanatory factor for the lack of substitutivity salva veritate in intensional contexts, which has been the dominant driving force in theoretical semantics.

See also: Discourse Anaphora; Discourse Domain; Discourse Parsing, Automatic; Discourse Representation Theory; Extensionality and Intensionality; Meaning Postulates; Operators in Semantics and Typed Logics; Presupposition; Selectional Restrictions; Semantic Change, the Internet and Text Messaging; Virtual Objects.

Bibliography

Fauconnier G (1985). Mental spaces. Aspects of meaning construction in natural language. Cambridge: MIT Press.
Gazdar G (1979). Pragmatics. Implicature, presupposition and logical form. New York, San Francisco, London: Academic Press.
Geach P (1962). Reference and generality. An examination of some medieval and modern theories. Ithaca, NY: Cornell University Press.
Isard S (1975). 'Changing the context.' In Keenan E (ed.) Formal semantics of natural language. Cambridge: Cambridge University Press. 287–296.
Seuren P A M (1972). 'Taaluniversalia in de transformationele grammatika.' Leuvense Bijdragen 61, 311–370.
Seuren P A M (1975). Tussen taal en denken. Een empirische bijdrage tot de semantiek. Utrecht: Oosthoek, Scheltema, Holkema.
Seuren P A M (1985). Discourse semantics. Oxford: Blackwell.
Seuren P A M (1996). Semantic syntax. Oxford: Blackwell.


Donkey Sentences

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

The problem of ‘donkey sentences’ occupies a prominent place in the logical analysis of natural language sentences. The purpose of logical analysis of sentences is to assign them a structure suitable for logical calculus – that is, the formal derivation of entailments. Some variety of the language of predicate calculus (LPC) is normally used for logical translations. In LPC, a term in a proposition that has a truth value must either be an expression referring to an individual (or a set of individuals) that actually exists or be a bound variable. Modern predicate calculus is essentially extensional: truth values are computed on the presumption that term referents actually exist, so that it allows in all cases for substitution of coextensional constituents salva veritate. Intensional or virtual objects – objects that have merely been thought up but that lack actual existence – have no place in modern logic, just as they have no place in Quine’s ‘desert landscape’ ontology, which has gained currency in large sections of Anglo-Saxon philosophy. That being so, modern logic has no choice but to posit that any argument term of a predicate in a proposition that has a truth value either refers to an actually existing object or is a bound variable ranging over such objects. Since in natural language one often encounters expressions that have the appearance of being referring argument terms but in actual fact fail to refer – such as the famous sentence in Bertrand Russell’s (1905) article The present king of France is bald – Quine (1960) started a ‘‘program of elimination of particulars’’ aimed at reformulating natural language sentences exclusively in terms of the quantificational language of modern predicate calculus, without any referring terms. Thus, for Quine and large sections of the logical community, LPC bans all definite terms and allows only for variables in argument positions. This, however, will not do for natural language, which has sentences that express purely extensional propositions and yet contain terms that neither refer to an actually existing object nor allow for an analysis as bound variable. These are the so-called donkey sentences. The fact that natural language resists analysis in terms of LPC constitutes the problem posed by the donkey sentences.

The currency of the term ‘donkey sentences’ originates with the British philosopher Peter Geach, whose discussion of certain sentences, all about donkeys, awakened the interest of modern logicians (Geach, 1962). Geach did not mention – apart from a token reference (1962: 116) to ‘‘another sort of medieval example’’ – that he took his cue from Walter Burleigh (c.1275–after 1344), who introduced donkey sentences in the context of supposition theory, the medieval equivalent of reference theory. In Burleigh (1988: 92), written around 1328, one finds this example: (1) Omnis homo habens asinum videt illum. (‘Every man owning a donkey sees it.’)

Burleigh’s problem had nothing to do with LPC, which did not yet exist. His problem was of a different nature. Having noticed that there exist what we now call bound variable pronouns, as in (2), and having stated that these may never take as antecedent a constituent of the same clause (‘propositio categorica’), he presented (1) as an apparent counterexample, since the pronoun illum takes as antecedent asinum, which stands under the same verb (videt) and is thus in the same clause. (2) All boys expected that the dog would bite them.

His answer was that the antecedent of illum, i.e., asinum, is not a main constituent of the same clause but a constituent of a subordinate predication, i.e., habens asinum (‘owning a donkey’). Geach (1962) discussed the same problem: how to account for the antecedent relation when the antecedent occurs in a relative clause contained in a complex predicate. It stands to reason, he said (1962: 117), to treat man who owns a donkey in the sentences (3a) and (3b), which he considered contradictories, as a complex predicate ‘‘replaceable by the single word ‘donkey-owner’.’’ But if we did that, (3a) and (3b) ‘‘become unintelligible . . . because ‘it’ is deprived of an antecedent’’: (3a) Any man who owns a donkey beats it. (3b) Some man who owns a donkey does not beat it.

A solution could conceivably be found in rewording these sentences as (4a) and (4b) (1962: 117): (4a) Any man who owns a donkey, owns a donkey and beats it. (4b) Some man who owns a donkey owns a donkey and does not beat it.


Yet, he says, whereas (3a) and (3b) are contradictories, at least according to native speakers’ intuitions, (4a) and (4b) are not (1962: 118): [F]or both would be true if each donkey-owner had two donkeys and beat only one of them. Medieval logicians would apparently have accepted the alleged equivalences; for they argued that a pair such as [(3a)] and [(3b)] could both be true . . . and were therefore not contradictories. But plainly [(3a)] and [(3b)], as they would normally be understood, are in fact contradictories; in the case supposed, [(3b)] would be true and [(3a)] false.

The ‘‘medieval logicians’’ Geach argues against are in fact Walter Burleigh, who added the following comment to his discussion of (1), thereby denying that (3a) and (3b) are contradictories (1988: 92–93; translation mine): It follows that the following are compatible: ‘Every man owning a donkey sees it’ and ‘Some man owning a donkey does not see it’. For assuming that every man owns two donkeys, one of which he sees and one of which he does not see, then it is not only true to say ‘Every man owning a donkey sees it’, but also to say ‘Some man owning a donkey does not see it’. In the same way, suppose that every man who has a son also has two sons, and that he loves the one but hates the other, then both the following are true: ‘Every man who has a son loves him’ and ‘Some man who has a son does not love him’.

Geach’s own solution was to analyze a relative clause within a predicate as an implication under universal, and a conjunction under existential quantification, as in (5). (5a) Any man, if he owns a donkey, beats it. (5b) Some man owns a donkey and he does not beat it.

This ''is quite unforced and does give us a pair of contradictories, as it ought'' (Geach, 1962). Yet Geach apparently failed to realize that (5a) does not translate into modern predicate logic. Its translation would have to be something like (6), which contains the free variable y in Beat(x,y):

(6) ∀x[Man(x) → [∃y[Donkey(y) ∧ Own(x,y)] → Beat(x,y)]]

Had he realized that, he would have hit on the donkey sentences problem as it lives in modern formal semantics. Geach strengthened his putative solution by arguing (1972: 115–127) that a sentence like (7) should not be translated as a conjunction of two propositions – as A ∧ B – but rather as a single quantified

proposition with it translated as a bound variable, as in (8).

(7) Smith owns a donkey and he beats it.
(8) ∃x[Donkey(x) ∧ Own(Smith,x) ∧ Beat(Smith,x)]

His argument amounts to saying that A ∧ B and A ∧ ¬B cannot be true at the same time, whereas (7) and (9) can. All it takes is for Smith to own two donkeys, only one of which he beats.

(9) Smith owns a donkey and he does not beat it.

Therefore, Geach argues, the logical translation (8) is correct, since it is compatible with (10), which simply posits a second ass, owned by Smith but not beaten by him:

(10) ∃x[Donkey(x) ∧ Own(Smith,x) ∧ ¬Beat(Smith,x)]

This analysis, however, cannot be correct, as pointed out in Seuren (2001: 316–318), since it lacks generality in view of cases like (11). (11a) Smith must own a donkey, and he may beat it. (11b) I believe that Smith owns a donkey, and I fear that he beats it. (11c) This made Smith own a donkey and kept him from beating it.

No analysis of the type shown in (8) or (10) is applicable here, since they either require large scope for a donkey, which is contrary to what these sentences mean, or have to place the second operator (may, fear, keep) in the scope of the first (must, believe, make), which again is not what these sentences mean. Geach’s analysis thus comes to nothing. All this, however, is still beating about the bush. The real problem shows up in (12): (12a) Every farmer who owns a donkey feeds it. (12b) If Smith owns a donkey, he feeds it. (12c) Either Smith does not own a donkey or he feeds it.

In the standard logical analysis of if and or, (12b) and (12c) come out as true if Smith owns no donkey. But then the pronoun it cannot be translated as a referring expression (the donkey), as it lacks a referent. It should therefore be translatable as a bound variable. But that, too, turns out to be impossible. Universal quantification, proposed by Quine (1960: 139) and many others as a solution, again falls foul of possible intervening operators, as in (13) (see Seuren, 1998):

(13a) If Smith wants to own a donkey he must promise to feed it.
(13b) Either Smith no longer owns a donkey or he still feeds it.
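The vacuous truth of the standard translation can be checked mechanically. The following Python sketch (model and vocabulary invented for the illustration) evaluates the universal-quantification translation of (12b) in a model where Smith owns no donkey; it comes out true, while the pronoun it has been given nothing to refer to.

```python
# A tiny model in which Smith owns no donkey.
donkeys = set()          # the donkeys in the model: none
owns = set()             # pairs (owner, donkey)
feeds = set()            # pairs (feeder, donkey)

# Ax[(donkey(x) & own(Smith,x)) -> feed(Smith,x)]
translation_12b = all(("Smith", d) in feeds
                      for d in donkeys
                      if ("Smith", d) in owns)

print(translation_12b)   # True -- vacuously, since Smith owns no donkey;
                         # 'it' neither refers nor gets bound to anything.
```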


There thus seems to be a hard core of sentences resisting translation into LPC. They contain definite expressions, preferably pronouns, that are neither referring expressions nor bound variables. Also, these pronouns behave like referring expressions anaphorically linked to an antecedent, and not like bound variable pronouns. The former allow for substitution by a lexical noun phrase ('epithet anaphora'); the latter do not. Thus, it in (14a), (14b), and (14c) can be replaced by, for example, the animal, without much change in meaning, but it in (14d), which represents a bound variable, does not allow for such substitution.

(14a) Smith owns a donkey and he feeds it/the animal.
(14b) If Smith owns a donkey he feeds it/the animal.
(14c) Either Smith does not own a donkey or he feeds it/the animal.
(14d) Every donkey owned by Smith expects that he will feed it/*the animal.

Donkey pronouns, therefore, behave like referring expressions even though they are not allowed to do so under the statutes of current logic. Kamp and Reyle (1993) recognized the fundamental nature of this problem and proposed a radical departure from standard notions and techniques of semantic interpretation. They defended an analysis whereby the donkey pronouns and other definite expressions do not refer directly to entities in the world at hand but instead denote mental representations of possible real-world entities. In this theory, known as Discourse Representation Theory, the mechanism of reference is mediated by a cognitive system of mental representations whose relation to any actual world is a matter of independent concern. This halfway station of mental representations creates some extra room for a semantic account of donkey sentences. Even so, however, standard logical analyses are

inadequate for natural language. What logic will do better justice to the facts of language is still an open question. Groenendijk and Stokhof (1991) was an attempt at answering that question.

See also: Anaphora, Cataphora, Exophora, Logophoricity; Constants and Variables; Coreference: Identity and Similarity; Discourse Representation Theory; Dynamic Semantics; Extensionality and Intensionality; Formal Semantics; Propositional and Predicate Logic.

Bibliography

Burleigh W (1988). Von der Reinheit der Kunst der Logik. Erster Traktat. Von den Eigenschaften der Termini. (De puritate artis logicae. De proprietatibus terminorum.) [Translated and edited by Peter Kunze, with introduction and commentary.] Hamburg: Felix Meiner Verlag.
Geach P T (1962). Reference and generality: an examination of some medieval and modern theories. Ithaca, NY: Cornell University Press.
Geach P T (1972). Logic matters. Oxford: Blackwell.
Groenendijk J & Stokhof M (1991). 'Dynamic predicate logic.' Linguistics and Philosophy 14, 39–100.
Kamp H & Reyle U (1993). From discourse to logic: introduction to model-theoretic semantics of natural language, formal logic and discourse representation theory. Dordrecht: Kluwer.
Quine W V O (1960). Word and object. Cambridge, MA: MIT Press.
Russell B (1905). 'On denoting.' Mind 14, 479–493.
Seuren P A M (1998). 'Towards a discourse-semantic account of donkey anaphora.' In Botley S & McEnery T (eds.) New approaches to discourse anaphora: proceedings of the second colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2). University Centre for Computer Corpus Research on Language, Lancaster University. Technical Papers, Vol. 11, Special Issue. 212–220.
Seuren P A M (2001). A view of language. Oxford: Oxford University Press.

Dthat

D Braun, University of Rochester, Rochester, NY, USA

© 2006 Elsevier Ltd. All rights reserved.

The expression dthat (pronounced as a single syllable) was introduced by David Kaplan (1978, 1989b) as a formal surrogate for the English demonstrative that. On Kaplan’s theory of demonstratives, an utterance of the English demonstrative that is typically accompanied by a demonstration that presents an object, namely the demonstratum of the context.

The demonstratum is the referent of the demonstrative in the context. The structure of dthat-terms reflects this theory of demonstratives. Each dthat-term has the form dthat[α], where α is a singular term. Such a term may be used to represent an English demonstrative cum demonstration. When so used, the expression dthat serves as a formal surrogate for the demonstrative, while α takes the form of a definite description that serves as a formal surrogate for the demonstration and fixes the reference of dthat in every context. In Kaplan's formal possible worlds


semantics, the extension of a definite description α, with respect to a context, world, and time, is an individual. The extension of dthat[α] with respect to a context, world, and time is the extension of α with respect to the world and time of the context. Thus, dthat-terms are rigid designators with respect to a context: Given a context, the extension of a dthat-term is the same with respect to all worlds and times. In this respect, dthat-terms are like the indexical first-person pronoun I, whose extension, given a context, is the same individual (namely the agent of the context) with respect to all worlds and times. Although Kaplan (1989b) presents his formal semantics in a possible worlds format, he prefers a semantics that uses Russellian structured propositions – propositions whose constituents include individuals, properties, and relations. On such a semantics, the Russellian content of a predicate, with respect to a context, is a property or relation; the content of I, with respect to a context, is the agent of the context; and the content of a definite description, with respect to a context, is a complex entity containing the content of the definite description's predicate and the higher-order property or relation expressed by 'the.' Kaplan's (1989b) informal remarks about dthat allow for two quite different interpretations of its Russellian semantics (as pointed out in Kaplan, 1989a). On the first interpretation, the Russellian content of a dthat-term, in a context, is an individual, namely the referent of α in that context. Kaplan (1989a) calls this the ''demonstrative surrogate'' interpretation of dthat-terms and says that, on this interpretation, the expression dthat standing

alone is a singular term. But this semantics does not fit well with the syntax that Kaplan (1989b) ascribes to dthat in his formal system, where dthat appears to be an operator or functional expression. On the second interpretation, the content of a dthat-term is a complex entity that contains the content of the definite description α together with a higher-order content for dthat. On this interpretation, dthat is a functional expression that rigidifies the attached definite description; it is similar in some respects to the expression actually in the x: actually Fx. Most theorists who use dthat-terms specify that a complete dthat-term, dthat[α], is a singular term whose content, in a context, is the referent of α in the context.
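The rigidifying behavior of dthat can be illustrated with a small functional sketch. In the following Python code, characters are functions from contexts to contents and contents are functions from world-time indices to extensions; the example description and the world names are invented for illustration, and the code simplifies Kaplan's system considerably.

```python
def description_character(denotation):
    """Character of a definite description: its content varies with the
    world and time of evaluation (and not with the context)."""
    def content(context):
        return lambda world, time: denotation(world, time)
    return content

def dthat(description_char):
    """dthat[a]: fix the description's extension at the context's own
    world and time; the resulting content is constant across indices."""
    def content(context):
        referent = description_char(context)(context["world"],
                                             context["time"])
        return lambda world, time: referent   # rigid: same at every index
    return content

# 'the inventor of bifocals', with extensions stipulated per world.
inventor = lambda world, time: {"w1": "Franklin", "w2": "Someone else"}[world]
ctx = {"world": "w1", "time": "t0"}

plain = description_character(inventor)(ctx)
rigid = dthat(description_character(inventor))(ctx)
print(plain("w2", "t0"))   # 'Someone else' -- varies with the world
print(rigid("w2", "t0"))   # 'Franklin'     -- fixed by the context
```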

See also: Demonstratives; Definite and Indefinite Descriptions; Indexicality; Possible Worlds; Propositions; Rigid Designation.

Bibliography

Almog J, Perry J & Wettstein H (eds.) (1989). Themes from Kaplan. Oxford: Oxford University Press.
Braun D (2001). 'Indexicals.' In Zalta E (ed.) Stanford encyclopedia of philosophy. Available at: http://plato.stanford.edu.
Kaplan D (1978). 'Dthat.' In Cole P (ed.) Syntax and semantics, vol. 9. New York: Academic Press. 229–253.
Kaplan D (1989a). 'Afterthoughts.' In Almog et al. (eds.). 565–614.
Kaplan D (1989b). 'Demonstratives.' In Almog et al. (eds.). 481–563.

Dynamic Semantics

J Groenendijk and M Stokhof, Universiteit van Amsterdam, Amsterdam, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Information and Information Change

Dynamic semantics is a branch of formal semantics that uses dynamic logic. Dynamic logic was originally developed for the logical analysis of imperative programming languages. The basic idea is that a program can be semantically interpreted as a transition relation between machine states. Two states s and t are related by a program p if, when p is executed in an input state s, t is a possible output state. For example, if the current value of x is 4, the effect of executing the assignment x := x + 1 will be that the

new value of x is 5. A program is interpreted in terms of the change that it brings about. (Note that for a deterministic program, like the example just given, there is just one possible output state, for a given input state. But if the program threw a die, there would be several possible outputs.) In standard logical semantics, (indicative) sentences are interpreted in terms of their truth conditions. And subsentential expressions are interpreted in terms of their contribution to the truth conditional content of the sentences in which they occur. The truth conditional content of a sentence corresponds to the information it provides about the world. Dynamic semantics takes a different view on meaning. The static notion of information content is replaced by a dynamic notion of information change.
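A minimal executable rendering of this transition-relation picture, with states as variable assignments and programs as functions from an input state to the list of possible output states, might look as follows (the function names are illustrative):

```python
def assign_increment(state):
    """The deterministic program 'x := x + 1': exactly one output state."""
    new = dict(state)
    new["x"] = state["x"] + 1
    return [new]

def throw_die(state):
    """A nondeterministic program: several possible output states."""
    return [dict(state, die=n) for n in range(1, 7)]

print(assign_increment({"x": 4}))   # [{'x': 5}] -- as in the example above
print(len(throw_die({"x": 4})))     # 6 possible outputs
```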


The meaning of a sentence is the change in information that it brings about: meaning is ‘information change potential.’ Clearly, for a program, a static interpretation in terms of truth conditions makes little sense (as could be argued for imperative or interrogative sentences in natural language). But it is less obvious what the advantages of a dynamic semantics for indicative sentences would be. At first sight, it seems that a static semantics in terms of information content also already makes it possible to give a general characterization of information change potential: ‘Add the information content of the sentence to the information about the world that is already available.’ One general type of argument in favor of a dynamic approach is that, even when we restrict ourselves to purely informative language use, there is more to meaning than can be captured in the notion of meaning as truth conditional content. A standard example (due to Barbara Partee) is the contrast between the following two sequences of sentences: (1) I dropped ten marbles and found all of them, except for one. It is probably under the sofa. (2) I dropped ten marbles and found only nine of them. ??It is probably under the sofa.

The first sentences in (1) and (2) are truth conditionally equivalent: they provide the same information about the world. Hence, if meaning is identified with truth conditional content, they have the same meaning. However, one may observe that whereas the continuation with the second sentence in (1) is completely unproblematic, the same continuation in (2) is not equally felicitous. The conclusion must be that the two opening sentences differ in meaning; hence, meaning cannot be equated with truth conditional content. From the viewpoint of dynamic semantics, the two opening sentences in (1) and (2) differ in the way they change information, even though the information they provide about the world is the same. Unlike the opening sentence in (2), the opening sentence in (1) also creates an informational context that licenses the pronoun ‘it’ in the second sentence. The notion of context is already important in standard semantics, in the sense that sentences are interpreted relative to a context of utterance, which may include several parameters, such as the time of utterance, etc. So, interpretation depends on context, or on the information available to the speech participants. What dynamic interpretation adds to this is the insight that the process of interpretation also brings about a change of the context, viz., in the information of the speech participants. Both context dependency

and the creation of context in speech situations are taken into account. One of the things this change of perspective brings is that attention shifts from isolated sentences to discourse, or text. It makes it possible to study and describe semantic phenomena that cross the border of a single sentence. As illustrated by the example given above, anaphoric relations across sentences are one such phenomenon.

Discourse Representation Theory and File Change Semantics

Dynamic semantics is not the only logical semantical theory that deals with the incremental interpretation of discourse, and it was not the first one, either. Whereas the logical roots of dynamic semantics lie in dynamic logic and the semantics of programming languages, its linguistic roots are discourse representation theory and file change semantics. In discourse representation theory, the interpretation of a discourse – a sequence of sentences – takes the form of an incremental construction of a discourse representation structure. In file change semantics, it is not a single structure that is built, but a system of so-called file cards. The two main elements of discourse representation structures and file card systems are discourse referents and conditions. The discourse referents behave like logical variables, and the conditions are open formulae containing these variables, thereby putting constraints on the possible values of these variables. Discourse referents are introduced by certain noun phrases, in particular indefinites, and correspond to 'individuals talked about' by the discourse. In file change semantics the introduction of a new discourse referent means adding a file card to the system. The conditions are written on the file cards, and can contain information that is spread over several cards, thereby linking them to each other. In discourse representation theory the conditions can be atomic formulae, or more complex structures that include other discourse representation structures. The discourse referents also function as possible referents for anaphoric expressions. Anaphoric expressions have to be linked to an accessible discourse referent in the discourse representation structure or file card system as it has been constructed at that point. For example, in discourse (1) above, the phrase 'except for one' in the first sentence will have introduced a discourse referent to which the pronoun 'it' in the second sentence can be linked. In discourse (2) the first sentence does not introduce a suitable discourse referent to link 'it' to. So, discourse representation theory and file change

274 Dynamic Semantics

semantics can account for the difference in acceptability of the two discourses. The incremental construction of discourse representation structures is the first step in the interpretation process of a discourse. It clearly has a dynamic nature. Interpreting a sentence may depend on the nature of the structure created by previous sentences, and will result in a change of structure. The second step in the interpretation process of a discourse is more traditional. It consists in a semantic evaluation of the one resulting discourse representation structure relative to a standard logical model. A structure is true in a model if objects from the domain of the model can be assigned to the discourse referents in such a way that all the conditions in the structure are satisfied. So, although the discourse representation structures are constructed dynamically, their semantic interpretation, and hence, indirectly, the semantic interpretation of the discourses they represent, are interpreted in terms of ‘static’ truth conditional content. In our discussion of the discourses (1) and (2), we saw that the opening sentences have the same truth conditions. This will also hold for the discourse representation structures they give rise to. We concluded above that since the two discourses as a whole show different semantic behavior, the opening sentences must differ in meaning, and that hence the meaning of sentences cannot be equated with their truth conditions. Discourse representation theory accounts for the difference in meaning, but not in terms of a difference in information content, but in terms of a difference in the representation of information. The intermediate representation level thus becomes an essential ingredient of the theory of interpretation of natural language. Dynamic semantics, in particular dynamic predicate logic, was invented to show that the linguistic phenomena that were dealt with in discourse representation theory can also be treated in such a way that the need to postulate an essential intermediate level of representation does not arise. Methodologically and philosophically, this is of importance, since postulating a representational level means to assume a language of thought as an intermediary between language and interpretation. It leads to a mentalistic theory of meaning, inheriting all the philosophical problems that come with such a view. All other things being equal, a theory that allows one to remain neutral on this issue is preferred. Concerning interpretation, we concentrated on discourse representation theory. The interpretation procedure in file change semantics is different, and it is not so clear that the critique dynamic semantics puts forward against discourse representation theory applies in precisely the same way to file change semantics.

Dynamic Predicate Logic

The typical type of phenomena that discourse representation theory and file change semantics are designed to deal with are exemplified in the following two examples:

(3) A man is walking in the park. He whistles.
(4) Every farmer who owns a donkey beats it.

In standard predicate logic such (sequences of) sentences are a problem. Appropriate translations are as follows:

(5) ∃x(man(x) ∧ walk(x) ∧ whistle(x))
(6) ∀x∀y((farmer(x) ∧ donkey(y) ∧ own(x,y)) → beat(x,y))

The problem with (5) as a translation of (3) is that the translation of the second sentence in (3) has to be brought under the scope of the existential quantifier in the translation of the indefinite in the first sentence. The problem with (6) as a translation of (4) is that the indefinite, which occurs in (4) inside the relative clause in the subject, turns up in (6) as a universal quantifier that scopes over the implication as a whole. Both facts are problematic from a compositional point of view: the logical formulae are composed at odds with the composition of the sentences. From a compositional point of view, we prefer the following formulae as translations:

(7) ∃x(man(x) ∧ walk(x)) ∧ whistle(x)
(8) ∀x((farmer(x) ∧ ∃y(donkey(y) ∧ own(x,y))) → beat(x,y))

But, under the standard interpretation of predicate logic, these formulae do not express what (3) and (4) express, because the last occurrence of the variable x in (7) and the last occurrence of y in (8) are free occurrences and are outside the scope of the existential quantifiers. In a nutshell, what dynamic predicate logic achieves is that the translations in (7) and (8) are appropriate translations of (3) and (4), and, in fact, are logically equivalent with (5) and (6), respectively. It succeeds in this by giving a different, dynamic interpretation to the logical constants of the language of predicate logic. The basic feature of the dynamic existential quantifier is that it can bind variables outside its syntactic scope. This is done by interpreting formulae as transition relations between assignments (of values to variables) much in the same way that programs can be viewed as transition relations between machine states (as seen above). That two assignments a and b are related means that when a is an input assignment, b is a possible output assignment. So formulae are interpreted as such input–output relations. Conjunction is interpreted as the composition of the transition relations of the left


and the right conjunct. For example, the output of interpreting '∃x(man(x) ∧ walk(x))' will consist of assignments that assign an object to x which is both a man and is walking. In interpreting (7) as a whole, these assignments are passed on as input to 'whistle(x)' and will remain as output for the whole sequence if the value assigned to x is also an object that whistles. For (8) the story is slightly more complicated. Possible outputs for '(farmer(x) ∧ ∃y(donkey(y) ∧ own(x,y)))' are assignments that assign a particular farmer to x and donkey to y so that that farmer owns y. The effect of processing the implication '(farmer(x) ∧ ∃y(donkey(y) ∧ own(x,y))) → beat(x,y)' is that it is checked to see whether every output of the antecedent still has an output after it has served as input for the consequent. In other words, it is checked whether the farmer in question beats all the donkeys that he owns. So, in effect, an existential quantifier inside the antecedent of a conditional amounts to universal quantification over the implication as a whole. Finally, the contribution of the universal quantification over farmers performs the check we just described for every farmer. The empirical results that dynamic predicate logic obtains for these sentences are precisely the same as those of discourse representation theory and file change semantics, but the tools being used are much more orthodox. Dynamic predicate logic uses the familiar language of predicate logic, and the only difference in the interpretation of the language compared with the standard interpretation is that assignments of values to variables – which are the result of existential quantification – are remembered beyond the syntactic scope of such quantifiers. Of course, even small changes can have rather big consequences. The logical properties of the system of dynamic predicate logic differ quite radically from those of standard predicate logic.
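A toy interpreter brings out how the dynamic existential quantifier binds beyond its syntactic scope. In the following Python sketch (domain and predicates invented for the purpose), formulas denote functions from an input assignment to the list of possible output assignments, and conjunction is relational composition:

```python
DOMAIN = {"john", "bill"}
MAN = {"john", "bill"}
WALK = {"john"}
WHISTLE = {"john"}

def exists(x):
    """Ex: relate an input assignment g to every assignment differing
    from g at most in the value of x."""
    return lambda g: [dict(g, **{x: d}) for d in sorted(DOMAIN)]

def atom(pred, x):
    """An atomic test: pass g through unchanged iff g(x) satisfies pred."""
    return lambda g: [g] if g.get(x) in pred else []

def conj(phi, psi):
    """Dynamic conjunction: feed every output of phi into psi."""
    return lambda g: [h for g1 in phi(g) for h in psi(g1)]

# (7): Ex(man(x) & walk(x)) & whistle(x) -- the last conjunct lies outside
# the syntactic scope of Ex, yet its x is still bound dynamically.
formula = conj(conj(conj(exists("x"), atom(MAN, "x")), atom(WALK, "x")),
               atom(WHISTLE, "x"))
print(formula({}))    # [{'x': 'john'}]
```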

Update Semantics

Whereas the dynamic semantics of programs and predicate logic were originally formulated in terms of transition relations between states or assignments, update semantics gives a simpler and more intuitive way of viewing the dynamics of interpretation. Statements are interpreted in terms of update functions on information states (or contexts, as they are sometimes called). Schematically, the interpretation of a sentence φ in an information state s, written as s[φ], results in a new state t, which now incorporates the information provided by φ. In the simplest case, information states are modeled as sets of possible worlds (as they are used in modal logic). Since our information about the world is

usually partial, an information state leaves several possibilities for the way the world could be. If a world w is an element of a state s, this means that w is a way that the actual world could be, according to the information embodied by s. An update s[φ] will in general lead to a state t such that t ⊆ s. Adding information eliminates possibilities. Growth of information makes the information state shrink. Truth and falsity are not central notions in update semantics. Sentences are not statically evaluated with respect to the world but are interpreted in terms of their effects on information states. Two special cases of such effects are s[φ] = ∅ (the empty set) and s[φ] = s. The former means that updating s with φ leads to the absurd state, a state in which there are no possibilities left. This typically happens if φ is inconsistent with the information embodied in s. When s[φ] = s, this means that the update of s with φ has no effect. The sentence φ provides no new information relative to s. In such cases we say that s supports or accepts φ. Now, clearly, update semantics gives us an alternative format to spell out the interpretation of a language, as compared to the standard truth conditional approach, but what are the advantages of this alternative view? One way to motivate update semantics is that there are types of sentences that do not obviously have truth conditional content, but that have meanings that can be described in terms of their effects on information states. Perhaps not the best, but certainly the simplest, case to illustrate this with is provided by epistemic modalities such as 'might.' Consider the following example:

(9) It might be raining outside . . . . It isn't raining outside.

The dots indicate that between the utterance of the two sentences some time passes during which, say, the speaker walks to the window to look outside. The point about the example is that this is a consistent, or coherent, piece of discourse. But at the same time, the following is not: (10) It isn’t raining outside . . . . It might be raining outside.

This would be hard to explain if both sentences were analyzed in terms of truth conditions (for example, using standard modal logic for the analysis of the modality). Since they are composed of the same two sentences, how could the truth conditions of both sequences differ? From an informational dynamic semantic perspective, it is quite clear what makes the difference. Updating the first sentence of (10) leads to an information state in which there are no worlds left where it is raining at this moment. Such an information state would be inconsistent


with the second sentence in (10). (The counterfactual 'It might have been raining' is all right, of course.) If things are presented in the opposite order, as in (9), we do not meet this problem. As long as our information state leaves open the possibility that it is raining outside, we can happily accept the first sentence in (9). Nor is there a problem in subsequently accepting the information that it isn't actually raining. That order matters is a distinctive feature of update semantics, and of dynamic semantics and dynamic logic in general. Also, in the case of anaphoric relations, order is obviously important. The use of an anaphoric expression, such as a pronoun or anaphoric definite description, is only felicitous in case the preceding discourse has provided a suitable discourse referent to link it to.
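The order sensitivity of (9) versus (10) can be reproduced with a few lines of Python. In this sketch, information states are sets of worlds and might is a test that returns the input state if some world survives the update and the absurd (empty) state otherwise; the two-world model is an expository assumption:

```python
def update(state, prop):
    """s[p] for a descriptive sentence: eliminate the worlds where p fails."""
    return {w for w in state if prop(w)}

def update_not(state, prop):
    return {w for w in state if not prop(w)}

def might(state, prop):
    """'might p': keep the state if some world verifies p, otherwise
    return the absurd (empty) state."""
    return state if update(state, prop) else set()

raining = lambda w: w == "rainy"
worlds = {"rainy", "dry"}

# (9): might(rain) ... not(rain) -- coherent
s = might(worlds, raining)        # {'rainy', 'dry'}: the test succeeds
s = update_not(s, raining)        # {'dry'}: a consistent final state
print(s)

# (10): not(rain) ... might(rain) -- incoherent
t = update_not(worlds, raining)   # {'dry'}
t = might(t, raining)             # set(): the absurd state
print(t)
```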

Presuppositions

Dynamic semantics is not only suited to give an account of how the information or the context is changed – by eliminating possibilities and adding discourse referents – but also to formulate the conditions that the current information state or context should meet for a sentence to be acceptable at that point in the discourse. Updates in update semantics are typically partial in that they are only defined in states that meet certain requirements. If a sentence has certain presuppositions, it is required that these are already supported or accepted in the current state, and hence do not provide novel information. Among the different dynamic frameworks, file change semantics most explicitly takes both anaphoric relations and presuppositions into account. Definites, in particular anaphoric definites, are associated with the property of familiarity. In terms of the file card metaphor, a definite requires the presence of a card that already carries the information presented in its descriptive content. In contrast, indefinites carry a novelty constraint, and should add a new file card to the stock. The notion of presupposition has a long and complicated history in logical semantics. Without claiming that all puzzles have already been solved, it seems fair to say that the shift from truth conditions to information change and context change potentials has made a substantial contribution to the study of this phenomenon on the border of semantics and pragmatics.
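Presupposition as a definedness condition on updates can likewise be sketched executably. In the following illustrative Python fragment, an update first checks whether the current state supports the presupposition and is undefined (here: raises an error) otherwise; the worlds and predicates are invented for the example:

```python
def update(state, prop):
    return {w for w in state if prop(w)}

def supports(state, prop):
    """s supports p iff s[p] = s, i.e., p adds no information to s."""
    return update(state, prop) == state

def update_presupposing(state, presup, content):
    """Partial update: defined only in states that already support
    the presupposition."""
    if not supports(state, presup):
        raise ValueError("presupposition failure")
    return update(state, content)

# Worlds record (did she smoke before?, does she smoke now?).
worlds = {("yes", "no"), ("yes", "yes"), ("no", "no")}
used_to_smoke = lambda w: w[0] == "yes"
smokes_now = lambda w: w[1] == "yes"

# 'Mary stopped smoking' presupposes that she used to smoke.
s = update(worlds, used_to_smoke)          # a state supporting the presup
print(update_presupposing(s, used_to_smoke,
                          lambda w: not smokes_now(w)))  # {('yes', 'no')}
# In the full state 'worlds', the same update would be undefined.
```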

Further Reading

A much more extensive introduction and overview of dynamic semantics and dynamic logic is Muskens, Van Benthem and Visser (1997). Chierchia (1995) provides an introduction to dynamic semantics with

emphasis on linguistic motivation. Groenendijk and Stokhof (2000) gives an informal introduction to dynamic semantics, illustrating it with several different kinds of discourse phenomena. Discourse representation theory was first presented in Kamp (1981). For an extensive overview, see Van Eijck and Kamp (1997); for a full textbook, see Kamp and Reyle (1993). The original sources for file change semantics are Heim (1983, 1989). An important source of inspiration, for dynamic semantics in general, and for Heim in particular, is Stalnaker (1974, 1979). Dynamic predicate logic was first presented in Groenendijk and Stokhof (1991). A more detailed study of the dynamics of existential quantification can be found in Dekker (1993). A merge of dynamic predicate logic with Montague grammar is given in Groenendijk and Stokhof (1999). A similar merge with discourse representation theory is proposed in Muskens (1995). The original source for update semantics is Veltman (1996). In Groenendijk, Stokhof and Veltman (1996), the combination of dynamic predicate logic and update semantics is discussed. An extensive overview of the study of presuppositions in dynamic semantics is given in Beaver (1997).

See also: Anaphora, Cataphora, Exophora, Logophoricity; Default Semantics; Definite and Indefinite; Discourse Anaphora; Discourse Representation Theory; Donkey Sentences; Formal Semantics; Montague Semantics; Syntax-Semantics Interface.

Bibliography

Beaver D (1997). 'Presuppositions.' In Van Benthem & Ter Meulen (eds.). 939–1008.
Chierchia G (1995). Dynamics of meaning. Anaphora, presupposition, and the theory of grammar. Chicago: The University of Chicago Press.
Dekker P (1993). 'Existential disclosure.' Linguistics and Philosophy 16, 561–587.
Groenendijk J A G & Stokhof M J B (1991). 'Dynamic predicate logic.' Linguistics and Philosophy 14, 39–100.
Groenendijk J A G & Stokhof M J B (1999). 'Dynamic Montague grammar.' In Kálmán L & Pólos L (eds.) Papers from the second symposium on logic and language. Budapest: Akadémiai Kiadó. 3–48.
Groenendijk J A G & Stokhof M J B (2000). 'Meaning in motion.' In Von Heusinger K & Egli U (eds.) Reference and anaphoric relations. Dordrecht: Kluwer. 47–76.
Groenendijk J A G, Stokhof M J B & Veltman F J M M (1996). 'Coreference and modality.' In Lappin S (ed.) The handbook of contemporary semantic theory. Oxford: Blackwell. 179–214.
Heim I (1983). 'File change semantics and the familiarity theory of definiteness.' In Bäuerle R, Schwarze C &

Von Stechow A (eds.) Meaning, use and interpretation of language. Berlin: De Gruyter. 164–189.
Heim I (1989). The semantics of definite and indefinite noun phrases. New York: Garland.
Kamp H & Reyle U (1993). From discourse to logic. Dordrecht: Reidel.
Kamp H (1981). 'A theory of truth and semantic representation.' In Groenendijk J A G, Janssen T M V & Stokhof M J B (eds.) Truth, interpretation and information. Dordrecht: Foris. 1–41.
Muskens R (1995). 'Combining Montague semantics and discourse representation.' Linguistics and Philosophy 19, 143–186.
Muskens R, Van Benthem J F A K & Visser A (1997). 'Dynamics.' In Van Benthem & Ter Meulen (eds.). 587–648.

Stalnaker R (1974). 'Pragmatic presuppositions.' In Munitz M & Unger P (eds.) Semantics and philosophy. New York: New York University Press. 197–213.
Stalnaker R (1979). 'Assertion.' In Cole P (ed.) Syntax and semantics 9: pragmatics. New York: Academic Press. 315–332.
Van Benthem J F A K & Ter Meulen A G B (eds.) (1997). Handbook of logic and language. Amsterdam: Elsevier.
Van Benthem J F A K (1991). Language in action. Categories, lambdas and dynamic logic. Amsterdam: North Holland.
Van Eijck J & Kamp H (1997). 'Representing discourse in context.' In Van Benthem & Ter Meulen (eds.). 587–648.
Veltman F J M M (1996). 'Defaults in update semantics.' Journal of Philosophical Logic 25, 221–261.
Zeevat H (1992). 'Presupposition and accommodation in update semantics.' Journal of Semantics 9, 379–412.


E

Event-Based Semantics

P Lasersohn, University of Illinois at Urbana-Champaign, Urbana, IL, USA

© 2006 Elsevier Ltd. All rights reserved.


The notion of events may be used in semantic theory in a wide variety of ways, but the term event-based semantics normally refers to semantic analyses that incorporate or adapt the proposal of Davidson (1967) that certain predicates take an implicit variable over events as an argument. In Davidson's original proposal, this event argument is accommodated by analyzing the predicate as having one more argument place than is assumed in more traditional analyses. The event variable is existentially quantified, with the result that Sentence (1)a is assigned a logical structure like (1)b, rather than the more traditional (1)c:

(1) a. Jones buttered the toast.
    b. ∃e butter(Jones, the toast, e)
    c. butter(Jones, the toast)

Thus, butter is analyzed as expressing a three-place relation between an individual who butters, an object that gets buttered, and a buttering event; and the sentence is analyzed as asserting that such an event exists. The initial motivation for this proposal is that it provides a way to analyze adjuncts such as locative, temporal, and instrumental adverbial phrases. These are also treated as predicates of events – or more specifically as predicates of the same event as the verb. Each adjunct gives rise to its own clause in logical structure, and these are combined with the clause corresponding to the verb and its arguments by ordinary propositional conjunction. The existential quantifier binding the event variable takes scope over the whole structure, so that Sentence (2)a is assigned a logical structure like (2)b, for example:

(2) a. Jones buttered the toast with a knife in the bathroom at midnight.
    b. ∃e[butter(Jones, the toast, e) & with(e, a knife) & in(e, the bathroom) & at(e, midnight)]

This approach has an advantage over one in which the adverbials are treated as arguments of the verb, so that butter expresses a five-place relation as in (3):

(3) butter(Jones, the toast, a knife, the bathroom, midnight)

If we adopt a formula like (3), but continue to represent Jones buttered the toast as in (1)c, with a two-place relation, we would seem to deny that butter expresses the same meaning in both sentences and claim instead that it is ambiguous. Nor is this a simple two-way ambiguity; butter will have to express different relations in each of the sentences in (4):

(4) a. Jones buttered the toast with a knife.
    b. Jones buttered the toast in the bathroom.
    c. Jones buttered the toast at midnight.
    d. Jones buttered the toast with a knife at midnight.
    e. Jones buttered the toast with a knife in the bathroom.
    f. Jones buttered the toast at midnight in the bathroom.

This massive ambiguity is undesirable. We might try to avoid the ambiguity by claiming that butter always expresses a five-place predicate, and that in examples in which fewer than five arguments appear overtly, the missing argument places are filled by implicit existentially bound variables. However, this strategy ignores the fact that additional adverbials can always be added, with no limit on the number; as long as adverbials are analyzed as arguments, one cannot specify a fixed number of argument places for the verb, even if one allows for implicit arguments. These problems are avoided completely under Davidson’s proposal; butter is consistently analyzed as a three-place predicate. An unlimited number of adverbials may be added because these combine with the verb by ordinary conjunction, and not by filling argument places. A second advantage to this analysis is that it correctly captures the fact that Sentence (2)a entails all the examples in (4) as well as (1)a, that (4)d entails (4)a, (4)c, and (1)a, and so on. Without some extra


stipulation, these entailment relations do not fall out of an analysis in which adverbials are arguments to the verb. Extra stipulations are also required to capture these entailment relations in other alternative approaches to the semantics of adverbials, such as an approach in which they are treated as higher-order predicates taking verb intensions as arguments, as in (5):

(5) [at-midnight(^in-the-bathroom(^with-a-knife(^butter)))](Jones, the toast)
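The way these entailments fall out of the conjunction analysis is easy to emulate: treat a logical form as a set of conjuncts about a single event variable, so that any formula whose conjuncts are a subset of another's is entailed by it. The following Python sketch (with the formulas encoded simply as strings) is expository only:

```python
sentence_2a = {"butter(Jones, the toast, e)",
               "with(e, a knife)",
               "in(e, the bathroom)",
               "at(e, midnight)"}

sentence_4a = {"butter(Jones, the toast, e)", "with(e, a knife)"}
sentence_1a = {"butter(Jones, the toast, e)"}

def entails(premise, conclusion):
    """For existentially quantified conjunctions over the same event
    variable: a superset of conjuncts entails any subset."""
    return conclusion <= premise

print(entails(sentence_2a, sentence_4a))   # True
print(entails(sentence_2a, sentence_1a))   # True
print(entails(sentence_4a, sentence_2a))   # False
```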

Davidson limited his original proposal to 'action sentences' and was explicit that it should not be applied to sentences such as 2 + 3 = 5. Nonetheless, it is sometimes assumed that a similar analysis should be extended to some or all stative sentences (see especially Parsons, 1987/1988, 1990). In analyses employing both states and events, the term eventuality (introduced by Bach, 1986) is often used for the general category covering both. The issue of which predicates have a hidden argument place for an eventuality, and which do not, is addressed in a well-known proposal by Kratzer (1995); see also Higginbotham (1983) and Fernald (2000). Kratzer suggests that individual-level predicates do not have such an argument place, and that stage-level predicates do. This position is supported by the following pattern of acceptability:

Assuming that indefinites contribute free variables to semantic representation (as in File Change Semantics or Discourse Representation Theory) and that when-clauses serve to restrict the domain of an implicit generic quantifier that can bind these variables, the acceptability of (6)c–d is expected. The unacceptability of (6)b follows from a simple prohibition on vacuous quantification: The when-clause contains no free variables for the generic quantifier to bind. Why, then, is (6)a acceptable, as it does not contain any indefinite noun phrases either? Kratzer suggests it is because the stage-level predicate speak contributes a free Davidsonian event variable, whereas the individual-level predicate know does not. Another area in which event variables have proven useful is in the semantics of perception reports (Higginbotham, 1983; Vlach, 1983; Parsons, 1990). Sentences like (7)a have been cited in support of thoroughgoing revisions to semantic theory of the kind adopted in Situation Semantics; but if we

analyze this sentence as meaning that there is an event e of Mary's leaving, and an event e′ of John's seeing e, we may assign it the logical structure in (7)b and obtain a reasonable analysis without using resources beyond those of ordinary first-order logic:

(7) a. John sees Mary leave.
    b. ∃e[leave(Mary, e) & ∃e′ see(John, e, e′)]

Davidson's technique of representing adjunct phrases as expressing separate logical clauses raises the possibility that major arguments of the verb such as the subject, direct object, and so on, might be treated in the same way. Davidson himself rejected this sort of extension, but a variant of it has been very popular in later work. Often termed the Neo-Davidsonian approach, this style of analysis treats the verb as a one-place predicate of eventualities; the subject, direct object, and so on do not serve directly as arguments to the verb, but instead stand in thematic relations to an event that fills the verb's sole argument place. A sentence such as (8)a thus receives a logical structure like that in (8)b:

(8) a. Brutus stabbed Caesar.
b. ∃e[stab(e) & agent(e, Brutus) & patient(e, Caesar)]

This approach to thematic relations appears first to have been proposed by Parsons (1980, 1985, 1990); see also Carlson (1984) and Krifka (1989, 1992). Note that this style of analysis requires an eventuality argument for all predicates that assign thematic roles, not just action predicates or stage-level predicates – at least on the assumption that thematic roles are represented in a uniform fashion for all predicates that assign them. One advantage of this approach is that it allows a nice analysis of 'semantically optional' arguments: The direct object of stab may be omitted, as in (9)a; to give a logical form, we simply omit the clause for the corresponding thematic relation, as in (9)b:

(9) a. Brutus stabbed.
b. ∃e[stab(e) & agent(e, Brutus)]

In a more conventional analysis, we might represent this sentence as in (10)a:

(10) a. ∃x stab(Brutus, x)
b. stab(Brutus)

But as Parsons points out, this entails that Brutus stabbed something, whereas (9)a does not: Brutus could have stabbed and missed. If we try to avoid this entailment by representing (9)a as (10)b, we treat stab as ambiguous, expressing a different meaning in (9)a than it does in (8)a; but this is avoided in the Neo-Davidsonian analysis.
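
A small added sketch (invented for illustration; neo_davidsonian is not a function from the literature) shows why optional arguments come for free here: each thematic role is a separate conjunct, so an unexpressed direct object simply contributes no conjunct, and no existential commitment to a patient arises:

    # Conjunct set for ∃e[verb(e) & role1(e, x1) & ...]; unexpressed
    # roles contribute nothing at all.
    def neo_davidsonian(verb, **roles):
        return frozenset({(verb,)}) | frozenset(roles.items())

    stabbed_caesar = neo_davidsonian('stab', agent='Brutus', patient='Caesar')  # as in (8)b
    stabbed = neo_davidsonian('stab', agent='Brutus')                           # as in (9)b

    # (8)b entails (9)b, since the conjuncts of (9)b are a subset of those
    # of (8)b; but (9)b contains no patient conjunct, so it does not
    # entail that anything was stabbed.
    print(stabbed <= stabbed_caesar)         # True
    print(('patient', 'Caesar') in stabbed)  # False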


The idea that verbs are predicates of events has also been exploited in the analysis of certain types of nominalization (Higginbotham, 1985; Parsons, 1990). Combining the Neo-Davidsonian analysis with an assumption that nominals may express the same predicate of events as the verbs they derive from makes it possible to account for the validity of the argument in (11)a in a very straightforward fashion. This argument is represented as in (11)b, which is licensed by standard principles of first-order logic:

(11) a. In every burning, oxygen is consumed. Agatha burned the wood. Therefore, oxygen was consumed.
b. ∀e[burn(e) → ∃e′[consume(e′) & theme(e′, oxygen) & in(e, e′)]]
∃e[burn(e) & agent(e, Agatha) & patient(e, the wood)]
∃e′[consume(e′) & theme(e′, oxygen)]
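
To spell out the licensing (this step-by-step verification is added here for explicitness): existential instantiation of the second premise yields an event b with burn(b), agent(b, Agatha), and patient(b, the wood); universal instantiation of the first premise at b, together with burn(b), gives ∃e′[consume(e′) & theme(e′, oxygen) & in(b, e′)]; and dropping the conjunct in(b, e′) by conjunction elimination delivers the conclusion ∃e′[consume(e′) & theme(e′, oxygen)].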

Event arguments have also been used extensively in the analysis of Aktionsart. One line of research in this area, exemplified by Pustejovsky (1991, 1995) and Grimshaw (1990), represents events of certain complex types as structurally composed of events of simpler types. For example, an accomplishment predicate such as build may be associated with events of the structure illustrated in (12):

(12) [tree diagram: a transition event e0 comprising a process subevent e1 and a resultant-state subevent e2]

Here e1 represents the building process itself, whereas e2 represents the resultant state of something having been built. As a telic predicate, build involves reference not just to the building process, but also to its culmination in the transition to a result state, represented by e0. A rather different approach to the event-theoretic representation of Aktionsart was developed by Krifka (1989, 1992). Here, a 'sum' operation is assumed on events, so that for any two events e1, e2, a complex event e1 ⊔ e2 consisting of them is assumed to exist. A part/whole relation is definable in terms of the sum operation: e1 ⊑ e2 ('e1 is a part of e2') iff e2 = e1 ⊔ e2. Predicates are assumed to denote sets of events, allowing aspectual classes to be defined in terms of closure conditions on these sets. Cumulative predicates are those denoting sets that are closed under the sum operation:

(13) CUM(P) ↔ ∀x, y [[P(x) & P(y)] → P(x ⊔ y)]

For example, if x is a walking event, and y is a walking event, their combination is also a walking event. In contrast, quantized predicates denote

sets from which proper parts of their members are excluded (where x ⊏ y, 'x is a proper part of y', holds iff x ⊑ y and x ≠ y):

(14) QUA(P) ↔ ∀x, y [[P(x) & P(y)] → ¬(x ⊏ y)]


For example, if x is an event of drinking a glass of wine, and y is also an event of drinking a glass of wine, x cannot be a proper part of y. Cumulative and quantized predicates of events correspond roughly to the familiar categories of atelic and telic predicates, respectively. However, by assuming a sum operation and corresponding part/whole relation on individuals, and not just events, it is possible to apply these concepts to predicates of individuals as well. For example, if x is wine and y is wine, their sum must also be wine, establishing wine as a cumulative predicate; if x is a glass of wine and y is a glass of wine, x may not be a proper part of y, establishing glass of wine as a quantized predicate. The status of a verb's arguments as cumulative or quantized often affects the aspectual category of the verb phrase or sentence; hence (15)a is cumulative whereas (15)b is quantized:

(15) a. John drank wine.
b. John drank a glass of wine.
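
These closure conditions can be checked mechanically on a toy model (an added illustrative sketch: the three-atom domain and the extensions chosen for wine and glass of wine are invented for the example). Portions are modeled as sets of atoms, the sum operation as union, and proper part as strict subset:

    from itertools import combinations

    def cumulative(P):
        # CUM(P): P is closed under sum (here, set union).
        return all(x | y in P for x, y in combinations(P, 2))

    def quantized(P):
        # QUA(P): no member of P is a proper part (strict subset) of another.
        return all(not (x < y or y < x) for x, y in combinations(P, 2))

    a, b, c = frozenset('a'), frozenset('b'), frozenset('c')

    # 'wine' holds of every portion made up of wine-atoms, so it is closed
    # under sum; 'glass of wine' holds only of fixed-size portions, so no
    # member is a proper part of another.
    wine = {a, b, c, a | b, a | c, b | c, a | b | c}
    glass_of_wine = {a | b, a | c, b | c}

    print(cumulative(wine), quantized(wine))                    # True False
    print(cumulative(glass_of_wine), quantized(glass_of_wine))  # False True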

Assuming a Neo-Davidsonian representation of thematic relations, Krifka explored the mathematical properties such relations must have to give this effect. A related application of part/whole structures for events was developed by Lasersohn (1990, 1995) for the semantics of plurality (see also Schein, 1993). Distributive readings of predicates with plural subjects are analyzed as representing events that divide into smaller events corresponding to the individual members of the group denoted by the subject. For example, an event of John and Mary sitting divides into a smaller event of John sitting and one of Mary sitting. Collective readings correspond to events that cannot be divided in this way; an event of John and Mary being a happy couple does not divide into an event of John being a happy couple and an event of Mary being a happy couple. Representing the collective/distributive distinction in terms of event structure in this way makes possible an extensional analysis of adverbials like together, which imposes a collective reading on the predicates it modifies. Event-based semantics has also been fruitfully applied to a wide variety of other problems, which space limitations prevent us from discussing here: plurality (Lasersohn, 1990, 1995; Schein, 1993; Landman, 2000), temporal anaphora and narrative progression (Hinrichs, 1986; Partee, 1984), cognate objects (Mittwoch, 1998), adjectives (Larson, 1998), and many others. There is also a large philosophical literature on events, much of which relates directly to Davidsonian-style event-based semantics;


see Davidson (1980), LePore and McLaughlin (1985), Higginbotham et al. (2000), and the references cited therein.

See also: Aspect and Aktionsart; Boole and Algebraic

Semantics; Discourse Representation Theory; Generics, Habituals and Iteratives; Perfects, Resultatives, and Experientials; Plurality; Situation Semantics; Thematic Structure.

Bibliography

Bach E (1986). 'The algebra of events.' Linguistics and Philosophy 9, 5–16. Carlson G (1984). 'Thematic roles and their role in semantic interpretation.' Linguistics 22, 259–279. Davidson D (1967). 'The logical form of action sentences.' In Rescher N (ed.) The logic of decision and action. Pittsburgh: University of Pittsburgh Press. 81–94. Reprinted in Davidson (1980), 105–122. Davidson D (1980). Essays on actions and events. Oxford: Oxford University Press. Fernald T (2000). Predicates and temporal arguments. Oxford: Oxford University Press. Grimshaw J (1990). Argument structure. Cambridge, MA: MIT Press. Higginbotham J (1983). 'The logic of perceptual reports: an extensional alternative to Situation Semantics.' Journal of Philosophy 80, 100–127. Higginbotham J (1985). 'On semantics.' Linguistic Inquiry 16, 547–593. Higginbotham J, Pianesi F & Varzi A C (2000). Speaking of events. Oxford: Oxford University Press. Hinrichs E (1986). 'Temporal anaphora in discourses of English.' Linguistics and Philosophy 9, 63–82. Kratzer A (1995). 'Stage-level and individual-level predicates.' In Carlson G N & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 125–175. Krifka M (1989). 'Nominal reference, temporal constitution, and quantification in event semantics.' In Bartsch R, van Benthem J & van Emde Boas P (eds.) Semantics and contextual expression. Dordrecht: Foris Publications. 75–115. Krifka M (1992). 'Thematic relations as links between nominal reference and temporal constitution.' In Sag I A &

Szabolcsi A (eds.) Lexical matters. Stanford, CA: Center for the Study of Language and Information. 29–53. Landman F (2000). Events and plurality: the Jerusalem lectures. Dordrecht: Kluwer Academic Press. Larson R (1998). ‘Events and modification in nominals.’ In Strolovitch D & Lawson A (eds.) Proceedings from semantics and linguistic theory VIII. Ithaca, NY: CLC Publications. 145–168. Lasersohn P (1990). ‘Group action and spatio-temporal proximity.’ Linguistics and Philosophy 13, 179–206. Lasersohn P (1995). Plurality, conjunction and events. Dordrecht: Kluwer Academic Publishers. LePore E & McLaughlin B (eds.) (1985). Actions and events: perspectives on the philosophy of Donald Davidson. Oxford: Basil Blackwell. Mittwoch A (1998). ‘Cognate objects as reflections of Davidsonian event arguments.’ In Rothstein S (ed.) Events and grammar. Dordrecht: Kluwer Academic Publishers. 309–348. Parsons T (1980). ‘Modifiers and quantifiers in natural language.’ Canadian Journal of Philosophy 6(suppl.), 29–60. Parsons T (1985). ‘Underlying events in the analysis of English.’ In LePore E & McLaughlin B (eds.) Actions and events: perspectives on the philosophy of Donald Davidson. Oxford: Basil Blackwell. 235–267. Parsons T (1987/1988). ‘Underlying states in the semantical analysis of English.’ Proceedings of the Aristotelian Society 88, 13–30. Parsons T (1990). Events in the semantics of English: a study in subatomic semantics. Cambridge, MA: MIT Press. Partee B (1984). ‘Nominal and temporal anaphora.’ Linguistics and Philosophy 7, 243–286. Pustejovsky J (1991). ‘The syntax of event structure.’ Cognition 41, 47–81. Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press. Rothstein S (ed.) (1998). Events and grammar. Dordrecht: Kluwer Academic Publishers. Schein B (1993). Plurals and events. Cambridge, MA: MIT Press. Tenny C & Pustejovsky J (eds.) (2000). Events as grammatical objects. Stanford, CA: CSLI Publications. Vlach F (1983). ‘On situation semantics for perception.’ Synthese 54, 129–152.

Evidentiality

A Y Aikhenvald, La Trobe University, Bundoora, Australia
© 2006 Elsevier Ltd. All rights reserved.

Evidentiality is a grammatical category that has source of information as its primary meaning – whether the narrator actually saw what is being described,

or made inferences about it based on some evidence, or was told about it, etc. Languages vary in how many information sources have to be marked. Many just mark information reported by someone else; others distinguish firsthand and nonfirsthand information sources. In rarer instances, visually obtained data are contrasted with data obtained through hearing and smelling, and through various kinds of inference.


As Boas (1938: 133) put it, "while for us definiteness, number, and time are obligatory aspects, we find in another language location near the speaker or somewhere else, source of information – whether seen, heard, or inferred – as obligatory aspects." The terms 'verificational' and 'validational' are sometimes used in place of 'evidential.' French linguists employ the term 'mediative' (Guentchéva, 1996). The term 'evidential' was first introduced by Jakobson (1957). A summary of work on recognizing this category, and naming it, is in Jacobsen (1986) and Aikhenvald (2004). Evidentiality is a verbal grammatical category in its own right, and it does not bear any straightforward relationship to truth, the validity of a statement, or the speaker's responsibility. Neither is evidentiality a subcategory of epistemic or any other modality (pace Palmer 1986: 51): in numerous languages irrealis and other modalities can occur together with evidentials (also see discussion in De Haan, 1999; Lazard, 1999, 2001; and DeLancey, 2001). In Tariana, an Arawak language spoken in the multilingual area of the Vaupés in northwest Amazonia, speakers have to specify whether they saw the event happen, or heard it, or know about it because somebody else told them, etc. Omitting an evidential typically results in an ungrammatical and highly unnatural sentence (see details in Aikhenvald, 2004). If one saw José play football, (1) would be appropriate:

(1) Juse irida di-manika-ka
José football 3person.masculine.singular-play-RECENT.PAST.VISUAL
'José played football (we saw it)'

If one just heard the noise of a football game but could not see what was happening, (2) is the thing to say:

(2) Juse irida di-manika-mahka
José football 3person.masculine.singular-play-RECENT.PAST.NONVISUAL
'José played football (we heard it)'

If one sees that the football is not in its normal place in the house, José and his football boots are gone, with crowds of people coming from the football ground, these details are enough for us to infer that José is playing football:

(3) Juse irida di-manika-nihka
José football 3person.masculine.singular-play-RECENT.PAST.INFERRED
'José played football (we infer it from visual evidence)'

If José is not at home on a Sunday afternoon, we can safely say (4). Our inference is based on general knowledge about José's habits: he usually plays football on Sunday afternoon.

(4) Juse irida di-manika-sika
José football 3person.masculine.singular-play-RECENT.PAST.ASSUMED
'José played football (we infer it from general knowledge)'

If one learnt the information from someone else, then (5) – with a reported evidential – is the only correct option:

(5) Juse irida di-manika-pidaka
José football 3person.masculine.singular-play-RECENT.PAST.REPORTED
'José played football (we were told)'
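
The obligatory five-way choice in (1)–(5) behaves like a small closed lookup table, which the following sketch makes explicit. This is a deliberately simplified illustration added here, not a description of Tariana grammar as a whole: real Tariana verb morphology involves far more than attaching one suffix to an invariant stem.

    # Recent-past evidential suffixes from examples (1)-(5).
    RECENT_PAST = {
        'visual':    '-ka',      # (1): the speaker saw it
        'nonvisual': '-mahka',   # (2): the speaker heard it
        'inferred':  '-nihka',   # (3): inferred from visible evidence
        'assumed':   '-sika',    # (4): inferred from general knowledge
        'reported':  '-pidaka',  # (5): told by someone else
    }

    def recent_past_verb(stem, source):
        # The evidential is obligatory: a bare stem is ungrammatical, so
        # the information source must always be supplied.
        return stem + RECENT_PAST[source]

    for source in RECENT_PAST:
        print('Juse irida', recent_past_verb('di-manika', source))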

Languages that have 'evidentiality' as a grammatical category vary in how many types of evidence they mark. Some distinguish just two terms (eyewitness and noneyewitness, or reported and everything else), while others distinguish six or more terms. Every language has some lexical way of referring to information source, e.g., English reportedly or allegedly. Such lexical expressions may become grammaticalized as evidential markers. Nonevidential categories may acquire a secondary meaning relating to information source. Conditionals and other nondeclarative moods may acquire overtones of uncertain information obtained from some other source, for which the speaker does not take any responsibility; the best known example is the French conditional (Dendale, 1993). Past tenses and perfects acquire overtones of nonfirsthand information in many Iranian and Turkic languages, and resultative nominalizations and passives (also with a resultative meaning) can express similar meanings. In other languages, the choice of a complementizer or a type of complement clause may serve to express meanings related to how one knows a particular fact. In English, different complement clauses distinguish an auditory and a hearsay meaning of the verb hear: saying I heard Brazil beating France implies actual hearing, while I heard that Brazil beat France implies a verbal report of the result. These evidential-like extensions are known as 'evidentiality strategies' (Aikhenvald, 2003a). Historically, they may give rise to grammatical evidentials. Languages with evidentials fall into a number of subtypes, depending on how many information sources acquire distinct grammatical marking. Small systems with just two choices cover: A1. Firsthand vs. nonfirsthand. The firsthand term typically refers to information acquired through


vision (or hearing, or other senses), and the nonfirsthand covers all other sources, including information acquired by senses other than seeing, by inference and by verbal report. A useful overview of such systems, especially in Turkic and Iranian languages, is in Johanson and Utas (2000). In (6), from Jarawara, an Arawá language from Brazil (Dixon, 2003), a firsthand evidential marks what the speaker could see, and the nonfirsthand refers to what he could not see:

(6) Wero kisa-me-no, ka-me-hiri-ka
name get.down-BACK-IMMEDIATE.PAST.NONFIRSTHAND.masculine be.in.motion-BACK-RECENT.PAST.FIRSTHAND.masculine-DECLARATIVE.masculine
'Wero got down from his hammock (which I didn't see), and went out (which I did see)'

A2. Nonfirsthand and everything else. The nonfirsthand evidential covers a large domain of information acquired through senses other than seeing, and through hearsay and inference of all sorts, as in many Caucasian languages, and also in Turkic and Finno-Ugric languages (Johanson, 2003; Aikhenvald, 2003a). Forms unmarked for evidentiality are evidentially neutral (they do not have any reference to information source). The nonfirsthand evidential in Abkhaz, a Northwest Caucasian language (Chirikba, 2003), can describe inference, as in (7): that the woman was crying is inferred from the fact that her eyes are red. The same form is used for reported information.

(7) je-q"a-n d"oewa-zaaren
it-be-PAST (s)he+cry-NONFIRSTHAND
'(when she came up to the light, to the fire, her eyes were very red) Apparently, she had been crying' (speaker's inference)

A3. Reported (or 'hearsay') and everything else. Systems of this sort with one, reported, evidential, which covers information acquired through someone else's narration, are widespread all over the world (see, for instance, Silver and Miller (1997: 38) on North American Indian languages). (8) comes from Estonian:

(8) Ta olevat arsti-teaduskonna lõpeta-nud
he be.REPORTED doctor-faculty.GENITIVE.SINGULAR finish-PAST.PARTICIPLE
'He is said to have completed his studies of medicine (but I wouldn't vouch for it)'

A4. Sensory evidence and reported. The sensory evidential refers to something one has seen, heard, smelt, or felt (by touch); the other evidential refers to verbal report, as in the Australian languages Ngiyambaa (Wangaaybuwan-Ngiyambaa) and Diyari (Dieri). Of these, A1 and A4 are clear-cut two-term systems, while A2 and A3 include an 'everything else,' or evidentially neutral, term. Systems with three choices are: B1. Direct (or visual), Inferred, and Reported. Depending on the system, the first term can refer to visually acquired information, as in Qiang, a Tibeto-Burman language from China; or to information based on sensory evidence, which covers seeing, hearing, smelling and touching something, as in Mosetén, an isolate from Bolivia, the Jaqi, or Aymara languages from Bolivia (Hardman, 1986), and Shilluk (a Western Nilotic language from Sudan). Quechua languages have three evidentiality specifications: direct evidence (-mi), conjectural (-chi, -chr(a)) and reported (-shi) (see Floyd, 1999, on Wanka Quechua):

(9) trabaja-aña-m li-ku-n
work-PURPOSE.MOTION-now-DIRECT.EVIDENTIAL go-REFLEXIVE-3person
'He's gone to work' (I saw him go)

(10) chay lika-a-nii juk-ta-chra-a lika-la
that see-NOMINALISER-1person other-ACCUSATIVE-CONJECTURAL.EVIDENTIAL-TOPIC see-PAST
'The witness ('my seer') must have seen someone else' (her house was robbed; she saw someone next to her house, it was not me, I infer it was (-chr) someone else)

(11) Ancha-p-shi wa"a-chi-nki wamla-a-ta
too.much-GENITIVE-REPORTED.EVIDENTIAL cry-CAUSATIVE-2person girl-1person-ACCUSATIVE
'You make my daughter cry too much' (they tell me)

B2. Visual, Nonvisual sensory, Inferred are found in Washo, from the California-Nevada border and in Siona, a West Tucanoan language from Ecuador. B3. Visual, Nonvisual sensory, Reported are found in Oksapmin (isolate from Papua New Guinea),


Maricopa, a Yuman language from Arizona, and Dulong, a Tibeto-Burman language from Burma. B4. Nonvisual sensory, Inferred, Reported are found in the Samoyedic languages Nganasan and Enets. B5. Reported, Quotative and everything else. Only reported and quoted information requires a special marker in a few North American Indian languages, e.g., Comanche, a Uto-Aztecan language from Oklahoma. These systems include at least one sensory specification. The nonvisual sensory evidential in B2, B3, and B4 systems typically covers information acquired by hearing, smelling, touching, and feeling. The inferred evidential refers to inference based on visible traces and assumption, while the reported evidential describes any kind of verbal report. If a language has an evidentially unmarked form, its evidentiality value is typically recoverable from the context. Four-term systems cover: C1. Visual, Nonvisual sensory, Inferred, Reported, as in numerous East Tucanoan languages spoken in northwest Amazonia, and in Eastern Pomo, a Pomoan language from California. C2. Visual or direct evidence, Inferred, Assumed, Reported, as in Shipibo-Konibo, a Panoan language from Peru. The sensory evidential in a C2 system can refer to firsthand knowledge acquired through any physical sense, be it vision, hearing, smell, taste, or touch, as in Shipibo-Konibo; or it may refer just to information acquired by seeing, as in Tsafiki, a Barbacoan language from Ecuador (Dickinson, 2000). The visual evidential is formally unmarked; there is one suffix marking information inferred from direct physical evidence, another for inference from general knowledge, and an additional one for reported, or hearsay.

(12) Manuel ano fi-e
Manuel food eat-DECLARATIVE
'Manuel ate' (the speaker saw him)

(13) Manuel ano fi-nu-e
Manuel food eat-INFERRED-DECLARATIVE
'Manuel ate' (the speaker sees the dirty dishes)

(14) Manuel ano fi-n-ki-e
Manuel food eat-NOMINALISER-VERB.CLASS:do-DECLARATIVE
'Manuel ate' (He always eats at 8:00 and it's now 9:00)

C3. Direct (or visual), Inferred, Reported, Quotative are found in Cora, a Uto-Aztecan language from Mexico.

Four-term systems involve at least one sensory specification. If there is just one sensory evidential, additional complexity arises within inferred evidentials (as in C2: one evidential then refers to inference based on visible results, and the other one to inference based on reasoning and assumption). Additional choices between reported evidentials involve distinguishing reported and quoted information (C3). The only type of multiterm system found in more than one language involves: D1. Visual, Nonvisual sensory, Inferred, Assumed, and Reported. This system was exemplified in (1)–(5) above and is also found in Tucanoan languages in Brazil and Colombia, such as Tuyuca and Desano (Barnes, 1984). Systems with more than five terms have just two sensory evidentials, and a number of evidentials based on inference and assumption of different kinds, as in the Nambiquara languages from Brazil, and Foe and Fasu, of the Kutubuan family spoken in the Southern Highlands of Papua New Guinea. In some languages, a wide variety of evidential meanings may be expressed in different slots of the verbal word or within a clause. Different evidentiality specifications are ‘scattered’ throughout the grammar, and by no means form a unitary category, as in Makah, a Wakashan language from Washington State (Jacobsen, 1986), in Eskimo languages and in Western Apache, an Athabaskan language from Arizona (de Reuse, 2003: 97). Evidentials can be expressed with a wide array of morphological mechanisms and processes. There is no correlation between the existence of evidentials and language type. Even pidgins and creoles are known to have had evidentials (as did Chinese Pidgin Russian). Examples of a truly functionally unmarked form in an evidentiality system are rare. The firsthand, visual, or a combined visual and sensory, evidential tends to be less formally marked than any other term. This term is formally unmarked in some languages, as we saw in (12), from Tsafiki. Evidentiality neutral terms are a property of a few systems where an evidential is opposed to ‘everything else’ (these are A2, A3, A5, and B5). This is quite different from omitting an evidential, which can happen either if the information source is clear from the context, or if evidentials are mutually exclusive with some other morpheme, e.g., mood, as in Samoyedic languages. Cooccurrence of different evidentials in one clause – and the different morphological statuses of evidentials – provides a tool for distinguishing evidentiality subsystems within one language. If a language has several distinct evidentiality subsystems,


the reported specification is most likely to be set apart from others. Evidentials differ from other grammatical categories in a number of ways. The information source can be marked more than once in a clause. Two sources can be different, but somehow linked together, as in Tsafiki (Dickinson, 2000: 408):

(15) Manuel ano fi-nu-ti-e
Manuel food eat-INFERENCE.PHYSICAL.EVIDENCE-REPORTED-DECLARATIVE
'He said/they say Manuel has eaten' (they didn't see him, but they have direct physical evidence)

In Eastern Pomo, the two sources can be fully distinct: describing the information source of a blind man, one uses a nonvisual evidential, while the story is told in the reported evidential because the narrator heard it from someone else. These features make evidentiality similar to a predication in its own right. Further arguments to the same effect include:
. An evidential may be within the scope of negation, as in Akha, a Tibeto-Burman language (Hansson, 2003). In (16), the visual experience and not the verb itself is being negated:

(16) àjOq áN dì O àshú Wà mà Ná
he NOUN.PARTICLE beat VERBAL.PARTICLE who not VISUAL
'I do not know/can't see who is beating him'

. An evidential can be questioned, as in Wanka Quechua (Floyd, 1999: 132).
. The 'truth value' of an evidential may be different from that of the verb in its clause. Evidentials can be manipulated to tell a lie. One can give a correct information source and wrong information, as in saying 'He is dead-REPORTED,' when you were told that he is alive, or correct information and a wrong information source, as in saying 'He is alive-VISUAL,' when in fact you were told that he is alive and did not see him yourself.
. And finally, an evidential can have its own time reference, distinct from the time reference of the event talked about (see Aikhenvald, 2003b, for Tariana).

Evidentials vary in their semantic extensions, depending on the system and its structure. The firsthand term in two-term systems typically refers to visual and often other sensory information, and can be extended to denote the direct participation, control, and volitionality of the speaker. The sensory evidential in A4 systems refers to sensory perception of any kind, without any epistemic or other overtones. The nonfirsthand term in A1 and A2 systems

means the opposite of firsthand. The nonfirsthand often implies lack of participation, lack of control, nonspecific evidence (or no evidence at all), inference, and hearsay. An extension to hearsay is sometimes found but is not universal. There are hardly any epistemic extensions in A1 evidentiality systems with two choices. Languages tend to have other ways of expressing probability and possibility. In systems with three or more terms, the visual or the direct evidential usually covers information acquired through seeing, and also generally known and observable facts. It may be extended to indicate certainty. The nonvisual sensory evidential in B2, B3, and B4 systems refers to information acquired by hearing, smell, touch, or feeling (such as an injection), and has no epistemic extensions. No language has a special evidential to cover smell but not auditory information. The inferred evidential typically covers inference based on visual evidence, on nonvisual sensory evidence, on reasoning or on assumption. It is also used to refer to someone else’s ‘internal states’ – feelings, knowledge, and the like. It may acquire an epistemic extension of ‘conjecture,’ uncertainty, and lack of control. The reported evidential is semantically uniform in systems of all types. Its core meaning is to mark that information comes from someone else’s report. A reported evidential can be used as a quotative, to indicate the exact authorship of the information, or to introduce a direct quote. It can be used for a secondhand or thirdhand report. A reported evidential may develop an epistemic extension of unreliable information, as a means of ‘shifting’ responsibility for the information to some other source one does not vouch for, as in Estonian: example (8) has overtones of ‘I don’t vouch for this information.’ Such extensions are not universal. As Valenzuela (2003: 57) remarks for Shipibo-Konibo, the selection of reported evidential over the direct evidential ‘‘does not indicate uncertainty or a lesser degree of reliability but simply reported information.’’ Languages with multiterm evidentials generally tend to have a multiplicity of other verbal categories, especially ones that relate to modalities. The larger the evidential system, the less likely it is that the evidential terms will develop epistemic extensions. A nonfirsthand term in a two-term system, or an inferred term in a three-term system, tend to subsume all sorts of information acquired indirectly. These evidentials may then evolve mirative extensions (to do with unexpected information, the ‘unprepared mind’ of the speaker, and speaker’s surprise: DeLancey, 1997, 2001). When used with a first person subject, the nonfirsthand evidentials in A1 and A2 systems, nonvisual


evidentials in larger systems, and reported evidentials in systems of varied types may acquire additional meanings of lack of intention, control, awareness, and volition on the part of the speaker. Verbs covering internal states may require obligatory evidential choice depending on person. As a result of these correlations, evidentials acquire the implicit value of person markers. Evidentials interrelate with clause types and other grammatical categories in the following ways:
1. The maximum number of evidential specifications tends to be distinguished in declarative main clauses.
2. The most frequent evidential in commands is reported ('do what someone else told you to'). The choice of an evidential in questions may contain reference to the source of information available to the speaker, to the addressee, or to both.
3. Fewer evidentials may be used in negative clauses than in positive ones.
4. Nonindicative modalities (conditional, dubitative, and so on) may allow fewer evidential specifications than the indicative. In many languages, evidentials may not be used in the future, which is, by its nature, a kind of modality.
5. The maximum number of evidential specifications is expected in past tenses. In some languages, as in Jarawara (Dixon, 2003), firsthand and nonfirsthand evidentials are distinguished only in the past. The source of information for an event is often based on its result, hence the link between firsthand/nonfirsthand, on the one hand, and past, perfect, perfective, and resultative on the other.

Evidentials often come from grammaticalized verbs. The verb of 'saying' is a frequent source for reported and quotative evidentials, and the verb 'feel, think, hear' can give rise to a nonvisual evidential in large systems. Closed word classes – deictics and locatives – may give rise to evidentials, both in small and in large systems. Evidentiality strategies involving past tenses and perfects, and nominalizations, can develop into small evidentiality systems (A1 and A2). The creation of a reported evidential may involve reanalysis of subordinate clauses (typically, complement clauses of verbs of speech) as main clauses (as in Estonian). Nonindicative moods and modalities may give rise to a term in a large evidentiality system; however, there are no examples of a modal system developing into a system of evidentials. This lack of evidence confirms the separate status of evidentiality and modality. Large evidential systems tend to be heterogeneous in origin. Evidentiality is a property of a significant number of linguistic areas, including the Balkans, the Baltic

area, India, and a variety of locations in Amazonia (Aikhenvald and Dixon, 1998). Evidentials may make their way into contact languages, such as Andean Spanish (see papers in Hardman, 1981). If several information sources are available – for instance, I both saw and heard a dog barking and later someone told me about it – any one of three evidentials can potentially be used: visual, nonvisual, and reported. In this situation, the visual evidential tends to be preferred. The genre of a text may determine the choice of an evidential. Traditional stories are typically cast in reported evidential. Evidentials can be manipulated in discourse as a stylistic device. Switching from a reported to a direct (or visual) evidential creates the effect of the speaker’s participation and confidence. Switching to a nonfirsthand evidential often implies a backgrounded aside. Evidentiality is interlinked with conventionalized attitudes to information and precision in stating the source of information (Hardman, 1981, 1986).

See also: Future Tense and Future Time Reference; Inference: Abduction, Induction, Deduction; Mood and Modality; Perfects, Resultatives, and Experientials.

Bibliography

Aikhenvald A Y (2003a). 'Evidentiality in typological perspective.' In Aikhenvald & Dixon (eds.). 1–31. Aikhenvald A Y (2003b). 'Evidentiality in Tariana.' In Aikhenvald & Dixon (eds.). 131–164. Aikhenvald A Y (2004). Evidentiality. Oxford: Oxford University Press. Aikhenvald A Y & Dixon R M W (1998). 'Evidentials and areal typology: a case-study from Amazonia.' Language Sciences 20, 241–257. Aikhenvald A Y & Dixon R M W (eds.) (2003). Studies in evidentiality. Amsterdam: John Benjamins. Barnes J (1984). 'Evidentials in the Tuyuca verb.' International Journal of American Linguistics 50, 255–271. Boas F (1938). 'Language.' In Boas F (ed.) General Anthropology. Boston / New York: D. C. Heath and Company. 124–145. Chafe W L & Nichols J (eds.) (1986). Evidentiality: the linguistic coding of epistemology. Norwood, NJ: Ablex. Chirikba V (2003). 'Evidential category and evidential strategy in Abkhaz.' In Aikhenvald & Dixon (eds.). 243–272. De Haan F (1999). 'Evidentiality and epistemic modality: setting boundaries.' Southwest Journal of Linguistics 18, 83–102. DeLancey S (1997). 'Mirativity: the grammatical marking of unexpected information.' Linguistic Typology 1, 33–52. DeLancey S (2001). 'The mirative and evidentiality.' Journal of Pragmatics 33, 369–382.

Dendale P (1993). 'Le conditionnel de l'information incertaine: marqueur modal ou marqueur évidentiel?' In Hilty G (ed.) Proceedings of the XXe Congrès International de Linguistique et Philologie Romanes, Tome I, Section I. La phrase. Tübingen: Francke. 165–176. Dickinson C (2000). 'Mirativity in Tsafiki.' Studies in Language 24, 379–421. Dixon R M W (2003). 'Evidentiality in Jarawara.' In Aikhenvald & Dixon (eds.). 165–188. Floyd R (1999). The structure of evidential categories in Wanka Quechua. Arlington: Summer Institute of Linguistics / University of Texas. Guentchéva Z (ed.) (1996). L'Énonciation médiatisée. Louvain-Paris: Éditions Peeters. Hansson I-L (2003). 'Akha.' In Thurgood G & LaPolla R J (eds.) The Sino-Tibetan languages. London: Routledge. 236–252. Hardman M J (ed.) (1981). The Aymara language in its social and cultural context: a collection of essays on aspects of Aymara language and culture. Gainesville: University Presses of Florida. Hardman M J (1986). 'Data-source marking in the Jaqi languages.' In Chafe & Nichols (eds.). 113–136.

Jacobsen W H Jr (1986). 'The heterogeneity of evidentials in Makah.' In Chafe & Nichols (eds.). 3–28. Jakobson R O (1957). Shifters, verbal categories, and the Russian verb. Cambridge: Harvard University. Johanson L (2003). 'Evidentiality in Turkic.' In Aikhenvald & Dixon (eds.). 273–291. Johanson L & Utas B (eds.) (2000). Evidentials: Turkic, Iranian and neighbouring languages. Berlin: Mouton de Gruyter. Lazard G (1999). 'Mirativity, evidentiality, mediativity, or other?' Linguistic Typology 3, 91–110. Lazard G (2001). 'On the grammaticalization of evidentiality.' Journal of Pragmatics 33, 358–368. Palmer F R (1986). Mood and modality. Cambridge: Cambridge University Press. de Reuse W J (2003). 'Evidentiality in Western Apache.' In Aikhenvald & Dixon (eds.). 79–100. Silver S & Miller W (1997). American Indian languages: cultural and social contexts. Tucson: The University of Arizona Press. Valenzuela P (2003). 'Evidentiality in Shipibo-Konibo, with a comparative overview of the category in Panoan.' In Aikhenvald & Dixon (eds.). 33–62.

Evolution of Semantics

V Evans, University of Sussex, Brighton, UK
© 2006 Elsevier Ltd. All rights reserved.

One of the most important functions of language is to facilitate the 'transmission' of thought from one language user to another. A number of scholars, including Sperber and Wilson (1995), and Tomasello (1999, 2003), have observed that verbal communication requires both a code – which is to say a language-system involving conventional symbols, pairings of form and meaning – and intentional mechanisms such as inference-reading abilities. While both these aspects are essential for verbal communication, communication can, in principle, occur in the absence of a code. Indeed, as we shall see, intentionality and the ability to recognize communicative intentions are likely to have been necessary prerequisites for the evolution of symbolic representation in language. To function as a means of communication, an important prerequisite of a code, which is to say a language-system, is to be able to encode and externalize humanly-relevant concepts and combinations of concepts. Semantic knowledge, therefore, concerns the range and nature of humanly relevant concepts that can be expressed in language, and the way language serves to combine concepts in order to convey

complex ideas. In this article, we explore (i) possible cognitive preadaptations for the development of semantic knowledge, and (ii) the range and nature of conceptual structure as encoded in language, and suggestions as to the way that this structure may have evolved. Unlike some other aspects of language, there is scant evidence we can draw on in attempting to reconstruct the evolution of semantic knowledge. After all, we are, in essence, attempting to reconstruct the evolution of human cognition. To do this, we are relying on indirect evidence drawn from primatology and comparative psychology, paleontology, evolutionary anthropology, and evolutionary psychology. Nevertheless, in view of some recent developments in linguistics, both in terms of uncovering and better understanding semantic phenomena, and recent theory-construction, we can now construct some plausible paths of semantic evolution that will at least facilitate further inquiry.

Cognitive Preadaptations for Semantic Knowledge

Language is characterized by being representational or 'symbolic.' That is, a language consists of a structured set of 'symbolic units' consisting of form and


meaning components. While this definition represents the received view for lexical items, a growing body of scholarship argues that grammatical patterns can also be thought of as being inherently symbolic in nature (Langacker, 1987). Symbolic units consist of two further units: a phonological unit and a semantic or conceptual unit. The semantic unit, which is what we are concerned with here, has been variously termed a 'lemma' (Levelt, 1989) or a 'lexical concept' (Evans, 2004). In this section, we approach the evolution of semantic knowledge in a general way by considering the cognitive preadaptations that may have paved the way for the emergence of semantic knowledge.

The Importance of Motor Evolution

Donald (1991, 1999) has argued that there were two essential prerequisites for the evolution of symbolic units. One defining characteristic of language is that it can represent a particular idea or entity in the absence of a concrete cue: the design feature of language known as 'displacement.' For this representation to occur, hominids had to gain conscious access to their own memories (Donald, 1999). A second and crucial preadaptation for the emergence of language was the development of voluntary motor control. That is, hominids must have developed the ability to attend to their own action patterns, and to select, trigger, and 'edit' action pattern sequences. According to Donald, this development gave rise to 'mimesis,' a form of nonlinguistic representation. Mimetic action is representational in that it relies on perceptual resemblance to represent itself. For instance, hominid tool use, which can be traced back 1.5 million years, may have employed mimetic representation not only for showing and learning how to employ a tool, but also, through 'editing' motor routines in rehearsal, for improving the way in which the tool was used. Forms of representation such as mime, dance, ritual acts, and some kinds of music are also mimetic, serving as a form of communication that is nonlinguistic in nature. According to Donald, mimetic action was the earliest form of communication, upon which the later development of language may have been built. While voluntary control of the musculature must have been important in the rise of this early and basic form of communication, and presumably also facilitated the later development of phonetic abilities and phonological systems, for Donald, linguistic representation is of a different kind from mimetic representation. While mimetic representation is holistic, a key characteristic of semantic knowledge, as represented by the inventory of lexical concepts available in the languages of the world, is that symbolic units

serve to 'parse' sensory or perceptual experience into component parts, e.g., tree versus rock versus mountain, and even to encode a particular perspective with respect to which a component is viewed. For instance, 'shore' and 'coast' both encode the same strip of land at the edge of the sea, but do so from different perspectives. Thus, for Donald, the importance of mimetic representation was that it created an appropriate cultural context, what he terms 'mimetic culture,' in which communication took place, and more precise disambiguation could occur with the advent of linguistic representation.

The Importance of Intention-Reading Skills

Another important preadaptation for the development of semantic knowledge is likely to have been the emergence of the ability to read intentions. According to Tomasello (1999), this sort of ability was the crucial preadaptation required for the evolution of symbolic abilities such as language more generally. Research in developmental psychology reveals that during early ontogeny, shortly before a year old, human infants begin to experience themselves as ‘intentional agents.’ That is, they perceive themselves as beings whose attentional and behavioral strategies are goal-directed. Accordingly, human infants also come to see others with whom they identify, conspecifics, as intentional agents. Crucially, it is shortly after this ontogenetic ‘breakthrough’ that language begins to emerge (Tomasello, 2003). Later, from around the age of three, human infants begin to develop the notion of themselves and conspecifics as ‘mental agents.’ This development constitutes the emergence of the ‘theory-of-mind,’ in which children develop the ability to conceive that others can hold different views from their own. The importance of viewing oneself and conspecifics as intentional agents is far-reaching. From this view, it follows that others are intentional agents who possess mental states that can be directly influenced and manipulated. For instance, pointing at an object can cause one intentional agent – who recognizes the person doing the pointing as an intentional agent attempting to direct attention – to follow the direction of pointing and thus share a ‘joint attentional frame’ (Tomasello, 1999, 2003). Thus, from this perspective, the importance of a lexical concept being associated with a particular linguistic form is in the utility of the symbolic unit in affecting the mental state of another in some way, such as by coordinating behavior. In other words, language, and the lexical concepts encoded by language, require intention-reading skills, which derive from the awareness that conspecifics represent intentional agents whose mental states can be influenced and manipulated by language.


A number of scholars view intention-reading abilities as an outcome of earlier evolutionary developments. For instance, Whiten (1999) argued that intention-reading skills constitute the outcome of the emergence of what he termed 'deep social mind.' This result can be characterized by cooperative behaviors including the sharing of food, monogamous reproduction – which has been claimed to be the ancestral pattern for humans – and behavior such as communal hunting. Indeed, Whiten argued that intention-reading abilities would have been essential for coordinating activities such as hunting, success at which requires being able to read the intentions of cohunters, and possibly also the prey. Intention-reading skills most likely evolved by reading observables, such as direction of gaze, direction of motion, and so on. Thus, intention-reading skills are likely to have emerged from behavior-reading skills. On some accounts, chimpanzees are capable of rudimentary intention-reading abilities. Thus, intention-reading might be more than 6 million years old (Byrne, 1999), the time when hominids and chimpanzees separated. Some scholars have argued that intention-reading in hominids can be viewed as a consequence of a long chain of evolutionary development. For instance, Savage-Rumbaugh (1994) suggested that bipedalism may have set in chain a series of evolutionary developments that gave rise to the cognitive ability to take the perspective of others (intention-reading). For instance, while chimpanzees and gorillas are distinguished from orangutans by a kind of quadrupedal locomotion termed 'knuckle-walking,' early hominids, the australopithecines, who emerged sometime between 4 and 5 million years ago, were distinguished by bipedalism. According to Savage-Rumbaugh, knuckle-walking and bipedalism were distinct and independent solutions to traversing open terrain and transporting infants. However, a consequence of bipedalism, but not knuckle-walking, is that the parent would have had to pay more attention to the infant, which is carried in the arms. In particular, the parent must remember to pick the child up after it has been put down. This consequence may have led to the later evolution of being able to take the perspective of others. Similarly, Byrne (1999) argued that there may be more remote evolutionary antecedents for intention-reading abilities. One hypothesis is that our relatively large brachiating ancestors, for whom a fall would have been deadly, may have accomplished arboreal locomotion by advance planning. The mental representation of self as an entity moving through space would have prefigured representational abilities in general, and would have facilitated planning a

trajectory of motion. Self-representation, and the ability to consciously plan one's movements, are cognitive achievements that imply intentionality, and the later evolution of intention-reading skills. The suite of intention-reading skills evident in modern humans is summarized in Table 1.

Table 1. Human intention-reading abilities include:
. The ability to coordinate or share attention, as when an infant and adult both attend to the same object
. The ability to follow attention and gesturing, as when an infant follows an adult's pointing or gaze, in order to attend to an object
. The ability to actively direct the attention of others, such as drawing attention to a particular object or event, for instance, through pointing
. The ability to culturally (imitatively) learn the intentional actions of others, such as imitating verbal cues in order to perform intentional actions such as declarative, interrogative, or imperative speech functions

The Importance of Personality Types

This issue concerns the idea that the earliest lexical concepts may have related to personality traits (King et al., 1999). Recent research suggests that personality traits are stable across time and between contexts, correlate with verbal and nonverbal behaviors, and can be reliably judged by human observers. Moreover, King et al. (1999) argued that such behaviorally-signaled personality traits as reliability, dominance, and trustworthiness are directly relevant to complex social interactions involving competition, cooperation, sharing, sexual selection, and so on. King et al. (1999) suggested that it is the context-independent nature of such complex personality traits, and their importance for hominids, that makes them plausible candidates for the earliest lexical concepts. For instance, studies that have sought to teach chimpanzees to manipulate symbolic units have found that for symbol use to succeed, meaning must be decontextualized. Consider the example of an apple. If a symbol is applied to this referent, it is not clear which properties of the scene the symbolic form relates to. For instance, it could refer to the apple's color, shape, or that it is an item of food. Until the referent has been experienced in a number of contexts, it is not clear which aspect of the referent is being indexed, and thus what the lexical concept is that is being associated with the form. Since personality traits are context-independent and readily identifiable by observers, an early linguistic form that indexed a particular personality trait might have served as an early lexical concept. That is, personality traits achieve the displacement aspect of lexical concepts by virtue of being inherently context-independent.



For this reason, symbolic representation in language may have taken personality traits as the first lexical concepts.

The Nature and Evolution of Semantic Knowledge

In this section, we examine the nature of semantic knowledge in more detail. That is, we examine how humans organize the world and their experience of the world into concepts. We also speculate on possible evolutionary bases of semantic knowledge of this kind and the cognitive mechanisms underlying this knowledge.

Concept Formation

‘Semantic structure’ constitutes the meaning system directly expressed by and encoded in language. In other words, semantic structure is the form that conceptual structure takes for expression in language. Thus, in order to get a sense of the nature of semantic knowledge, for instance, the nature and range of lexical concepts, we must begin by examining the nature of conceptual structure. In this section, then, we consider the basic units of concept structure, ‘concepts.’ We consider the following question: Where do concepts come from? For psychologists, concepts are the basic units of knowledge and are essential both for ‘categorization’ – the ability to identify individuals, entities, and instances – and ‘conceptualization’ – the ability to construct alternative perspectives (Barsalou, 1992). To illustrate the notion of conceptualization, consider the sentences in (1) and (2). Each provides a different conceptualization of the concept Book: (1) That book is heavy. (2) That book is boring.

While the example in (1) relates to the book 'as tome,' the example in (2) relates to book 'as text.' Since the work of the French philosopher René Descartes in the 17th century, who developed the principle of Mind/Body dualism, there has been a common assumption within philosophy and, more recently, the other cognitive sciences, that conceptual structure can be studied without recourse to the body, and hence without recourse to 'embodiment.' In modern linguistics, this 'objectivist approach' has been most evident in the approach to meaning known as 'Formal Semantics.' Proponents of this approach assume that it is possible to study meaning as a formal or computational system without taking into account the nature of human bodies or human experience. This position is problematic from an evolutionary

perspective as it entails that a new discontinuous cognitive adaptation was required for conceptual structure. Conceptual structure, on this account, is assumed to employ what has been termed an ‘amodal’ (nonperceptual) form of representation. Amodal representation is distinct from the ‘modal’ or perceptual forms of representation that presumably had to exist prior to the emergence of conceptual structure, in order to represent ‘percepts’ (Barsalou, 1999). The last two decades or so have seen a shift from modeling conceptual representation in terms of amodal systems, towards a more perceptual-based or ‘embodied perspective.’ An embodied perspective takes the view that concepts derive from percepts, and thus, conceptual structure is fundamentally perceptual in nature. Within linguistics, this general perspective has been advocated most notably by Lakoff and Johnson (1980, 1999; Lakoff, 1987), and also by Jackendoff (1983, 1992, 2002). In general terms, the idea is that concepts have an embodied character. This idea constitutes the thesis of embodied cognition (see Ziemke, 2003 for discussion). The idea that concepts are embodied assumes that we have a species-specific view of the world, due to the nature of our physical bodies. One obvious way in which our embodiment affects the nature of experience is in the realm of color. While the human visual system has three kinds of photoreceptors or color channels, other organisms often have a different number. For instance, the visual system of squirrels, rabbits, and possibly cats, makes use of two color channels, while other organisms, for instance, goldfish and pigeons, have four color channels (Varela et al., 1991). Having a different range of color channels radically alters how the world of color is perceived. This difference affects our experience of color in terms of the range of colors accessible to us along the color spectrum. Moreover, while some organisms can see in the infrared range, humans are unable to see in this range (Jackendoff, 1992). It’s clear, then, that the nature of our visual apparatus – an aspect of our physical embodiment – determines the nature and range of our visual experience. The position that different organisms have different kinds of experiences due to the nature of their embodiment is known as ‘variable embodiment.’ The position that our experience is embodied – that is, structured in part by the nature of the kinds of bodies/neuro-anatomical structure we have – has consequences for conceptual structure. This corollary follows because the concepts we have access to, and the nature of the ‘reality’ we think and talk about, is a function of our embodiment. In other words, we can only talk about what we can perceive and

292 Evolution of Semantics

think about, and the things that we can perceive and think about derive from embodied experience. Hence, the human mind must bear the imprint of embodied experience. Some psychologists have made specific proposals as to how embodied experience gives rise to concepts. For instance, the developmental psychologist Jean Mandler (2004) suggested that through a process of 'perceptual meaning analysis,' percepts come to be recoded as concepts. Mandler argued that this process occurs alongside percept formation and begins virtually from birth. However, she viewed percepts and concepts as wholly distinct forms of representation. Another view has been proposed by Barsalou (1999). He argued that a concept is akin to a remembered perceptual state, which he termed a 'perceptual symbol.' From an evolutionary perspective, if it is correct that concepts are fundamentally perceptual in nature, then by virtue of early hominids gaining conscious access to the contents of their own memories, little additional complexity in terms of cognitive development is required for a rudimentary conceptual system to have emerged. This corollary follows as concepts, on this account, are something akin to 'remembered percepts.'

The Nature of Lexical Concepts: The Natural Partitions Hypothesis

Having examined conceptual structure, we now turn to semantic structure. Linguists have traditionally classified lexical concepts into those that are encoded by ‘open’ versus ‘closed class forms.’ Open class forms include, for English, nouns, verbs and adjectives, while closed class forms include determiners, prepositions, conjunctions, and so on. The basic insight is that it is much harder to add new members to the closed class set than to the open class set. A related insight is that open class forms tend to have much richer denotational meaning, while closed class forms are associated with lexical concepts that have more schematic or relational meaning. That is, they provide connections to other lexical concepts that have a more referential meaning.

However, since at least the early 1980s, the strict separation between closed and open class concepts has been called into question. This questioning stems from the observation that the division between open and closed class concepts constitutes more of a continuum than a strict bifurcation. For instance, Gentner (1981) pointed out that verbs, which are normally thought of as being open class, are highly relational in nature, a feature associated with closed class elements. More recently, Gentner and Boroditsky (2001) have elaborated on this view, suggesting that open class lexical concepts exhibit ‘cognitive dominance.’ This contrasts with closed class concepts, which exhibit ‘linguistic dominance.’ These notions relate to the similar idea expressed by Langacker (1987), who used the terms ‘conceptually autonomous’ versus ‘conceptually dependent.’ The basic idea is that lexical concepts associated with prototypical open class (autonomous) forms obtain their reference independently of language, which is to say from the world, while prototypical lexical concepts associated with closed class or relational forms obtain their reference from language. Moreover, whether a form is cognitively dominant (or autonomous) or linguistically dominant (or dependent) is a matter of degree. A proposed continuum is given in Figure 1.

In order to account for the cognitive dominance of prototypical open class lexical concepts (i.e., nouns), Gentner (1981) proposed the Natural Partitions Hypothesis. This idea holds that concepts that are encoded as prototypical open class elements, such as individuals and objects, are ‘individuated.’ That is, entities of this kind constitute densely bundled collections of percepts. Thus, an entity such as a rock or a tree ‘stands out.’ In Gestalt psychology terms, a rock constitutes the figure in the figure-ground organization of a given scene. The Natural Partitions Hypothesis states that certain aspects of the world are given by the world. These entities are typically encoded crosslinguistically by nouns, and are acquired first by children. On this account, then, bundles of percepts are ‘given’ by the world, and are simply labeled by language.

Figure 1 Division of dominance among form classes of lexical concepts. (Adapted from Gentner and Boroditsky, 2001: 216.)


The Natural Partitions Hypothesis offers an intriguing insight into a possible order of evolution among lexical concepts – which is to say, concepts encoded by language. That is, we might speculate, based on this, that the very first lexical concepts were those for individuals, including animals (and possibly classes of animals) and objects. Concepts of this kind have the most cognitive dominance; that is, they have the highest conceptual autonomy. Other lexical concepts may have evolved later. Further, there is a correlation between the position of a lexical concept on the continuum of dominance (see Figure 1) and the form class associated with the lexical concept. Although this correlation is not exact – for instance, ‘destruction’ and ‘destroy’ encode a similar concept employing different lexical classes (noun versus verb) – it is plausible that lexical classes (or ‘parts of speech’) emerged as distinctions in relative dominance or autonomy came to be encoded by the morphosyntactic properties of language.

Lexical Concepts and Concept-Combination

From an evolutionary perspective, being able to form concepts and express them via language, while a remarkable achievement, doesn’t begin to approach the range and complexity of the semantic structure available to modern Homo sapiens. Lexical concepts are only a subset of our semantic knowledge. Another important aspect of semantic knowledge concerns our ability to combine lexical concepts in order to give rise to new and different kinds of conceptual structure. Moreover, it is a striking fact that concept combination produces complex concepts that are not simply the sum of the individual parts that comprise the derived concept.

Figure 2 Conceptual integration for the composite concept goldfish.

For instance, the complex concept Petfish is not simply the intersection of the concepts Pet and Fish. Rather, the concept Petfish has its own concept-internal structure, known as ‘category structure.’ For instance, while most people would rank mackerel, which is silver in color, as a good example of the Fish category, a cat or a dog would be rated as a good example of the Pet category. Yet, a good example of a Petfish is a goldfish. Not only is a goldfish not silver, it is not soft and cuddly either. An important task in developing an evolutionary perspective on semantic knowledge is to account not only for the way in which lexical concepts are formed, but also for the mechanisms responsible for concept combination. A recent approach argues that complex concepts of this kind result from a process of ‘conceptual integration’ (Fauconnier and Turner, 2002; Turner and Fauconnier, 1995). This process involves what is termed ‘selective projection’ of content from each of the concepts that give rise to the complex concept, as well as additional material derived from background knowledge, such as the knowledge that the kinds of fish we keep in fishbowls are typically goldfish; this latter process is termed ‘completion.’ Thus, the complex concept, known as a ‘conceptual blend,’ has structure associated with it that is found in neither of the ‘input’ concepts that give rise to it. This structure is diagrammed in Figure 2. Clearly, some form of conceptual integration allows humans to combine and manipulate concepts in order to produce more complex ideas.
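As a toy illustration only (a sketch added here, not Fauconnier and Turner’s own formalism; all feature names are invented), selective projection and completion can be pictured as assembling a blend whose structure comes partly from each input concept and partly from background knowledge:

```python
# Toy sketch of conceptual integration for Petfish (illustrative only).
pet = {'role': 'companion', 'lives': 'in the home', 'cuddly': True}
fish = {'kind': 'fish', 'color': 'silver', 'cuddly': False}
background = {'typical_home_fish': 'goldfish', 'color': 'orange',
              'container': 'fishbowl'}

# Selective projection: each input contributes only some of its structure,
# and 'completion' adds material from background knowledge.
pet_fish = {
    'role': pet['role'],               # projected from Pet
    'kind': fish['kind'],              # projected from Fish
    'cuddly': fish['cuddly'],          # Fish wins: a pet fish is not cuddly
    'color': background['color'],      # completion: goldfish orange, not silver
    'lives': background['container'],  # completion: fishbowl, not open water
}

# The blend is not the intersection of the inputs:
shared = {k: v for k, v in pet.items() if fish.get(k) == v}
assert pet_fish != shared              # shared is empty; the blend is not
```

The only point of the sketch is the one made in the text: the blend is not computed as an intersection of the inputs, since some features are inherited selectively and others come from neither input.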

Fauconnier and Turner argued that the emergence of cognitively modern human beings, during the upper paleolithic era, somewhere in the region of 50 000 years ago, points to the development of a new cognitive ability: our ability to perform conceptual integration.

While anatomically modern humans appear to have existed from at least 100 000 years ago, the upper paleolithic stands out. This period witnessed new social and technological breakthroughs, including the development of projectile points made from bony material for use in hunting, the manufacture of personal adornments, the development of sophisticated art, and evidence of belief systems such as religion and magic; moreover, man-made shelters were built for the first time, sewn clothing was worn, and sculptures were produced. Fauconnier and Turner argued that what made advances such as these possible was that humans had evolved the ability to perform complex conceptual integrations. This process, then, may have facilitated composing and elaborating concepts to produce new and more elaborate conceptual structures.

Polysemy

Another striking aspect of semantic knowledge is the phenomenon of ‘polysemy’: the way in which a range of related lexical concepts can be expressed using a single form. For instance, the English preposition ‘over’ has a number of distinct but related lexical concepts associated with it. Consider some of the distinct lexical concepts proposed by Tyler and Evans (2003):

(3a) The picture is over the sofa [‘above’]
(3b) The picture is over the hole [‘covering’]
(3c) The ball is over the wall [‘on-the-other-side-of’]
(3d) She has a strange power over him [‘control’]
(3e) The government handed over power [‘transfer’]
(3f) She prefers wine over beer [‘preference’]
(3g) The relationship is over [‘completion’]
(3h) The relationship evolved over the years [‘temporal’]
(3i) The fence fell over [‘reflexive’]
(3j) They started the race over [‘repetition’]

Recent research has argued that polysemy, far from being merely a ‘surface’ phenomenon, is in fact conceptually real. That is, polysemy patterns reflect distinct lexical concepts, stored as different senses in the mental lexicon (Evans, 2004; Lakoff, 1987; Tyler and Evans, 2003). Accordingly, from an evolutionary perspective, the challenge is to explain how the proliferation of lexical concepts, i.e., polysemy, arises. A recent perspective is that polysemy emerges from the interaction between language use and contexts of use, due to the conventionalization of situated (or invited) inferences (Traugott and Dasher, 2002; Tyler and Evans, 2003; Evans, 2004). For instance, the ‘covering’ meaning associated with ‘over’ may have derived from contexts of use in which, in a given spatial scene, an element placed above another entity thereby covered it. Through a process of decontextualization, the ‘covering’ meaning was reanalyzed as being a distinct meaning component. Once this reanalysis occurred, it could be used in novel ways unsupported by the original spatial scene that gave rise to the inference in the first place (Tyler and Evans, 2003). From an evolutionary perspective, the importance of polysemy and meaning-extension is that it illustrates how language, in conjunction with human experience, can give rise to new lexical concepts. Moreover, this particular phenomenon of meaning-extension illustrates how language can flexibly increase its repertoire of lexical concepts without increasing the number of linguistic forms.

Abstract Concepts

Another important aspect of semantic structure relates to so-called abstract concepts. These include lexical concepts such as Truth, Justice, or Theory. Concepts of these kinds are abstract in the sense that they cannot be straightforwardly accounted for in terms of perceptual recording, precisely because it’s not clear what their perceptual basis is, or even whether they have one. Indeed, abstract concepts provide a significant challenge if we are to attempt to provide an evolutionary account maintaining the thesis of embodied cognition. An influential framework that provides an account based in perceptual or embodied experience is the ‘conceptual metaphor theory’ of Lakoff and Johnson (1980, 1999). Lakoff and Johnson argued that abstract concepts are grounded in embodied experience, and thus in our perception of the world, even if the grounding is not direct. This grounding is achieved by virtue of ‘conceptual metaphors,’ long-term conceptual mappings that serve to project structure from a ‘source concept,’ which relates to perceptual experience, onto an abstract ‘target concept.’ For instance, we commonly understand the abstract concept of Quantity in terms of the more perceptually concrete concept of Verticality, as evidenced by examples such as the following:

(4a) The price of stocks has gone up.
(4b) Her score is higher than mine.

In both these examples, an abstract notion of Quantity is understood in terms of physical position or motion on the vertical axis. This understanding is licensed by the conceptual metaphor Quantity Is Vertical Elevation. The most recent version of conceptual metaphor theory recognizes two distinct kinds of conceptual metaphors: ‘primary metaphors,’ which are directly grounded in experience and constitute ‘primitive’ conceptual mappings, and more complex ‘compound metaphors,’ which are constructed out of the more experientially basic primary metaphors (Grady, 1997; Lakoff and Johnson, 1999). Consider, for instance, how we understand Theories in terms of Physical Structures, as evidenced by the following examples:

(5a) Is that the foundation for your theory?
(5b) The argument is shaky.

Grady argues that the motivation for linguistic examples such as these is in fact two primary metaphors, Persisting Is Remaining Erect and Organization Is Physical Structure. These unify to give the compound metaphor An Abstract Organized Entity [such as a theory] Is An Erect Physical Object (Grady, 1997). Thus, it is only primary metaphors that are grounded directly in perceptual experience. The conceptual associations captured by primary metaphors are motivated by a tight and ubiquitous correlation in experience. For instance, there is a tight and recurring correlation in experience between quantity and height: when we fill a glass with water, an increase in quantity correlates with an increase in height. Thus, primary metaphors are motivated by ‘experiential correlation.’ From an evolutionary perspective, the phenomenon of ‘metaphoric’ mappings holding between concepts from different parts of ‘conceptual space,’ known as ‘domains,’ allows us to account for how perceptual information can be recruited in order to construct more abstract concepts, such as Quantity and Theories. This phenomenon suggests that, in addition to being able to recode percepts as concepts and to combine concepts, the conceptual system must additionally have developed a mechanism for projecting structure from one conceptual domain to another in order to create more abstract concepts.

Cultural Evolution

The final issue we examine is that of cultural evolution. Lexical concepts are culturally embedded, and thus we must briefly look at the role of cultural evolution in providing the conceptual backdrop for the emergence of semantic knowledge. Consider the evolution of the concept of Money, one that has been evolving for over 3000 years. Weatherford (1998) identified a number of key mutations in the development of how we conceptualize Money. The first was the invention of coins in Anatolia over 3000 years ago. This development gave rise to the monetary economies that underpinned the classical Greek and Roman civilizations. The second was the development of family-owned credit banks in Renaissance Italy. This development gave rise to the capitalist market economies that replaced earlier feudal societies throughout Europe, and to the period in which European countries expanded to become global economic powers. The process whereby cultural artifacts or cultural practices undergo cumulative evolution, resulting in modification or improvement, has been dubbed the ‘ratchet effect’ (Tomasello, 1999). Thus, an important aspect of the evolution of semantic knowledge involves the development and evolution of cultural knowledge.

See also: Acquisition of Meaning by Children; Aristotle and Linguistics; Cognitive Semantics; Concepts; Connotation; Conventions in Language; Default Semantics; Disambiguation; Existence; Extensionality and Intensionality; Factivity; Folk Etymology; Frame Semantics; Generic Reference; Ideational Theories of Meaning; Ideophones; Idioms; Indexicality; Inference: Abduction, Induction, Deduction; Intention and Semantics; Lexical Acquisition; Meaning, Sense, and Reference; Mentalese; Metalanguage versus Object Language; Metaphor and Conceptual Blending; Metonymy; Natural versus Nonnatural Meaning; Negation; Neo-Gricean Pragmatics; Nominalism; Onomasiology and Lexical Variation; Philosophical Theories of Meaning; Pragmatics and Semantics; Propositional Attitude Ascription; Prosody; Psychology, Semantics in; Semantic Maps; Semantics–Pragmatics Boundary; Thought and Language; Type versus Token; Use Theories of Meaning; Vagueness.

Bibliography

Barsalou L (1992). Cognitive psychology. Hillsdale: Lawrence Erlbaum.
Barsalou L (1999). ‘Perceptual symbol systems.’ Behavioral and Brain Sciences 22, 577–660.
Byrne R (1999). ‘Human cognitive evolution.’ In Corballis M & Lea S (eds.) The descent of mind. Oxford: Oxford University Press. 71–87.
Deacon T (1997). The symbolic species. New York: Norton.
Donald M (1991). Origins of the modern mind. Cambridge, MA: Harvard University Press.
Donald M (1999). ‘Human cognitive evolution.’ In Corballis M & Lea S (eds.) The descent of mind. Oxford: Oxford University Press. 138–154.
Evans V (2004). The structure of time. Amsterdam: John Benjamins.
Fauconnier G & Turner M (2002). The way we think. New York: Basic Books.
Gentner D (1981). ‘Some interesting differences between verbs and nouns.’ Cognition and Brain Theory 4(2), 161–178.
Gentner D & Boroditsky L (2001). ‘Individuation, relativity and early word learning.’ In Bowerman M & Levinson S (eds.) Language acquisition and conceptual development. Cambridge: Cambridge University Press. 215–256.
Grady J (1997). ‘Theories are buildings revisited.’ Cognitive Linguistics 8(4), 267–290.
Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R (1992). Languages of the mind. Cambridge, MA: MIT Press.
Jackendoff R (2002). Foundations of language. Oxford: Oxford University Press.
King J, Rumbaugh D & Savage-Rumbaugh S (1999). ‘Perception as personality traits and semantic learning in evolving hominids.’ In Corballis M & Lea S (eds.) The descent of mind. Oxford: Oxford University Press. 98–115.
Lakoff G (1987). Women, fire and dangerous things. Chicago: Chicago University Press.
Lakoff G & Johnson M (1980). Metaphors we live by. Chicago: Chicago University Press.
Lakoff G & Johnson M (1999). Philosophy in the flesh. New York: Basic Books.
Langacker R (1987). Foundations of cognitive grammar. Stanford: Stanford University Press.
Levelt W (1989). Speaking. Cambridge, MA: MIT Press.
Mandler J (2004). The foundations of mind. Oxford: Oxford University Press.
Savage-Rumbaugh S (1994). ‘Hominid evolution: looking to modern apes for clues.’ In Quiatt D & Itani J (eds.) Hominid culture in primate perspective. Boulder: University of Colorado Press. 7–49.
Sperber D & Wilson D (1995). Relevance (2nd edn.). Oxford: Blackwell.
Tomasello M (1999). Cultural origins of human cognition. Cambridge, MA: Harvard University Press.
Tomasello M (2003). Constructing a language. Cambridge, MA: Harvard University Press.
Traugott E-C & Dasher R (2002). Regularity in semantic change. Cambridge: Cambridge University Press.
Turner M & Fauconnier G (1995). ‘Conceptual integration and formal expression.’ Metaphor and Symbolic Activity 10(3), 183–203.
Tyler A & Evans V (2003). The semantics of English prepositions. Cambridge: Cambridge University Press.
Varela F, Thompson E & Rosch E (1991). The embodied mind. Cambridge, MA: MIT Press.
Weatherford J (1998). The history of money. New York: Three Rivers Press.
Whiten A (1999). ‘Human cognitive evolution.’ In Corballis M & Lea S (eds.) The descent of mind. Oxford: Oxford University Press. 173–193.
Ziemke T (2003). ‘What’s this thing called embodiment?’ Proceedings of the 25th Annual Meeting of the Cognitive Science Society. Hillsdale: Lawrence Erlbaum.

Existence
B Caplan, University of Manitoba, Winnipeg, Canada
© 2006 Elsevier Ltd. All rights reserved.

What Existence Is

Existence is the property that is attributed to Uma Thurman in

(1) Uma Thurman exists.

Perhaps existence is also attributed to some object in

(2) There is an even prime.

There is a connection between existence and (objectual) quantification: what exists is exactly what our quantifiers quantify over, when our quantifiers are unrestricted. Sometimes our quantifiers are restricted so that they quantify over only some of the things that exist. For example, in

(3) All the bottles of beer are in the fridge.

the quantifier “all the bottles of beer” is naturally interpreted so that it doesn’t quantify over all of the bottles of beer in existence.
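As a rough illustration (a gloss added here, not Caplan’s own formalization), the unrestricted and restricted readings of (3) can be rendered in first-order notation, with C a hypothetical predicate standing in for the contextual restriction:

```latex
% Unrestricted reading: every bottle of beer in existence is in the fridge.
\forall x\,\bigl(\mathrm{BottleOfBeer}(x)\rightarrow\mathrm{InFridge}(x)\bigr)

% Restricted reading: quantify only over bottles satisfying a contextual
% condition C (say, the bottles bought for the party).
\forall x\,\bigl((\mathrm{BottleOfBeer}(x)\land C(x))\rightarrow\mathrm{InFridge}(x)\bigr)
```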

But what exists is not limited to what our quantifiers quantify over when they are restricted in one way or another. (In various free logics, variables need not be interpreted so as to have as values objects that exist. Sometimes a special predicate is introduced for ‘exists’ in these logics. Existence is not tied to quantification in these logics, although it might be tied to the special predicate.)

It seems that existence is a property that everything has: namely, the property existing or being existent. But various philosophers deny this for various reasons: some deny that existence is a property; others accept that existence is a property but deny that any objects have it (because only properties do); and still others accept that existence is a property but deny that all objects have it (because only some do).

The Hume-Kant View

The Scottish philosopher David Hume (1711–1776) and the German philosopher Immanuel Kant (1724–1804) denied that existence is a property.


(It is often said that existence is not a predicate. This is at best a confused way of denying that existence is a property.) Let us call the view that existence is not a property the Hume-Kant view. One reason for holding the Hume-Kant view is that existence is supposedly not a property but rather a precondition for having properties. After all, how could something have any properties if it did not exist? But it is hard to see what a precondition is if it is not a property. For example, being human might be a precondition for being a movie star; and being human is a property. Another reason for holding the Hume-Kant view is that to say that something has a property F and exists is supposedly not to say anything more than that something has F. For example,

(4) Uma is a movie star and exists.

supposedly doesn’t say anything more than

(5) Uma is a movie star.

But if this is a good reason to deny that existence is a property, then it is also a good reason to deny that being self-identical or being either round or not round is a property. For if (4) doesn’t say anything more than (5), then

(6) Uma is a movie star and is self-identical.

and

(7) Uma is a movie star and is either round or not round.

don’t say anything more than (5) either. But it seems that being self-identical and being either round or not round are perfectly respectable properties. For example, being round is a perfectly respectable property. And if negations and disjunctions of perfectly respectable properties are themselves perfectly respectable properties, then being either round or not round is also a perfectly respectable property.

The Frege-Russell View

Some philosophers who accept that existence is a property deny that everything has it, because they think that no objects have it; rather, they think that only properties have it. On this view, existence is not a (first-level) property of objects; rather, it is a (higher-level) property of properties. In particular, it is the property being instantiated. This is a view that was held by the German mathematician and philosopher Gottlob Frege (1848–1925) and, at least at one time, by the British philosopher Bertrand Russell (1872–1970). Let’s call this view the Frege-Russell view.

One reason for holding the Frege-Russell view is that if existence were a property of objects, then it would not be possible to be mistaken in ascribing that property to an object. (By contrast, one can attribute the property being instantiated to the property being a golden mountain, say, even if that property is not instantiated.) But if this is a good reason to deny that existence is a property of objects, then it is also a good reason to deny that being self-identical or being either round or not round is a property of objects. For it is not possible to be mistaken in ascribing those properties to an object either. And yet they are perfectly respectable properties of objects. Another reason for holding the Frege-Russell view comes from the problem of negative existentials. A negative existential is a sentence like

(8) The golden mountain doesn’t exist.

which seems to say of some object that it doesn’t exist. For example, (8) seems to say, of the object that “the golden mountain” refers to, that it doesn’t exist. Either “the golden mountain” refers to something or it doesn’t. On the one hand, if “the golden mountain” doesn’t refer to anything, then it seems that (8) doesn’t say anything about anything. On the other hand, if “the golden mountain” does refer to something, then it seems that it must refer to something that exists, in which case (8) says, of something that does exist, that it doesn’t exist. Either way, it seems that (8) can’t be true. But (8) seems true; hence the problem. The Frege-Russell view offers a straightforward solution to the problem of negative existentials. On the Frege-Russell view, (8) says, of the property being the golden mountain, that it does not have the property being instantiated. And it is true that the property being the golden mountain does not have the property being instantiated. So, on the Frege-Russell view, (8) is true, as desired. (Russell’s treatment of definite descriptions like “the golden mountain” is actually more complicated; see Definite and Indefinite Descriptions. One might worry that even if Russell’s treatment solved the problem of negative existentials for sentences like (8), it wouldn’t solve the problem of negative existentials for sentences like

(9) Santa Claus doesn’t exist.

which contain names rather than definite descriptions; see Proper Names: Philosophical Aspects.) One problem with the Frege-Russell view is that (8) doesn’t seem to say the same thing as

(10) The property being the golden mountain doesn’t have the property being instantiated.


Similarly,

(11) If the golden mountain were to exist and if cows were to fly, then just as they would have the property being able to fly, it would have the property being golden.

seems true, and it doesn’t seem to say the same thing as

(12) If the property being the golden mountain were to have the property being instantiated and if cows were to fly, then just as they would have the property being able to fly, it would be instantiated by something that has the property being golden.

Another problem with the Frege-Russell view is that the property being instantiated doesn’t seem to be fundamental in the right sort of way. It seems that facts about which properties have the property being instantiated depend on quantificational facts. For example, it seems that the property being a movie star has the property being instantiated only because some object (Uma, say) instantiates the property being a movie star. But it seems that objects (Uma, say) can instantiate properties (being a movie star, say) only if they exist. So if it is to be instantiated, then the property being instantiated seems to require that some objects exist and hence that, contrary to the Frege-Russell view, existence be a property that at least some objects have.

The Meinong-Russell View

Some philosophers who accept that existence is a property deny that everything has it, because they think that some, but not all, objects have it. At one time, Russell thought that there is a broad ontological property that everything has; but he thought that this property is being (or subsisting), not existing. On this view, the golden mountain, the round square, numbers, sets, tables, and chairs have being; but only tables and chairs (and other objects that are located in space and time) exist. The Austrian philosopher Alexius Meinong (1853–1920) held a similar view. He thought that there is a broad ontological property that everything has; but he thought that this property is being an object, not being or existing. On this view, the golden mountain, the round square, numbers, sets, tables, and chairs are objects; but of these, only numbers, sets, tables, and chairs have being. (And only tables, chairs, and other objects that are located in space and time exist.) Let’s call this – the view that although there is a broad ontological property that everything has, only some objects exist – the Meinong-Russell view.

One reason for holding the Meinong-Russell view is that it offers a straightforward solution to the problem of negative existentials. On the Meinong-Russell view, (8) says, of the object “the golden mountain” refers to, that it doesn’t exist, and “the golden mountain” refers to an object that doesn’t exist. So, on the Meinong-Russell view, (8) is true, as desired. But the Meinong-Russell view doesn’t solve parallel problems. A negative subsistential is a sentence like

(13) The golden mountain has no being.

that seems to say of some object that it has no being. Those who distinguish being and existence sometimes say that “there is” has to do with being, not existence. On this view,

(14) There is no golden mountain.

is also a negative subsistential. And a negative objectual is a sentence like

(15) The golden mountain isn’t an object.

or

(16) No object is the golden mountain.

that seems to say of some object that it isn’t an object. Speakers who have the intuition that (8) is true might also have the intuition that (13)–(16) are true. And if a solution to the problem of negative existentials should respect speakers’ intuition that (8) is true, then one might think that a solution to the problem of negative subsistentials or negative objectuals should similarly respect speakers’ intuition about (13)–(16). But on the Meinong-Russell view, (13) and (14) or at least (15) and (16) are false, because “the golden mountain” refers to an object that has being or at least is an object. (This argument might work best against those who say that (8) is false but (14) is true.) Solving the problem of negative existentials only at the cost of not solving the problem of negative subsistentials or the problem of negative objectuals doesn’t seem like much of a benefit.

In addition, many dislike the Meinong-Russell view because, by saying that existence is what Russell (1903) once called “the prerogative of some only amongst beings,” the view offends what Russell (1919) later described as “a robust sense of reality.” If one rejects the Hume-Kant view, the Frege-Russell view, and the Meinong-Russell view, one is left with the view that existence is a property that everything has. Although there is much to commend this view, those who hold it still have to solve the problem of negative existentials. This suggests that a solution to that problem will not come from views about existence. And once one had a solution to the problem of negative existentials (whatever that solution is and wherever it comes from), it seems that there would be little to prevent one from holding the view that existence is a property that everything has.

See also: Definite and Indefinite Descriptions; Evidentiality; Extensionality and Intensionality; Factivity; Generic Reference; Negation; Nominalism; Possible Worlds; Proper Names: Philosophical Aspects; Propositional Attitude Ascription; Propositional Attitudes; Quantifiers; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Specificity; Virtual Objects.

Bibliography

Frege G (1884). Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: Koebner. Austin J L (trans.) (1950). The foundations of arithmetic: a logico-mathematical enquiry into the concept of number (2nd edn., 1980). Evanston, IL: Northwestern University Press.
Frege G (1892). ‘Über Begriff und Gegenstand.’ Vierteljahrsschrift für wissenschaftliche Philosophie 16, 192–205. Black M (trans.) (1952). ‘On concept and object.’ In Geach P T & Black M (eds.) (1952). Translations from the philosophical writings of Gottlob Frege (3rd edn., 1980). Oxford: Blackwell. 42–55. Reprinted in Frege G (1997). Beaney M (ed.). The Frege reader. Oxford: Blackwell. 181–193.
Hume D (1740). A treatise of human nature: being an attempt to introduce the experimental method of reasoning into moral subjects. London: Noon. Reprinted in Norton D F & Norton M J (eds.) (2000). A treatise of human nature. Oxford philosophical texts. Oxford: Oxford University Press.
Kant I (1781). Kritik der reinen Vernunft (2nd edn., 1787). Riga: Hartknoch. Guyer P & Wood A W (trans.) (1998). Critique of pure reason. Cambridge Edition of the Works of Immanuel Kant. Cambridge: Cambridge University Press.
Meinong A (1904). ‘Über Gegenstandstheorie.’ In Meinong A (ed.) Untersuchungen zur Gegenstandstheorie und Psychologie. Leipzig: Barth. Levi I, Terrell D B & Chisholm R M (trans.) (1960). ‘The theory of objects.’ In Chisholm R M (ed.) Realism and the background of phenomenology. Glencoe, IL: Free Press. 76–117.
Quine W V O (1948). ‘On what there is.’ Review of Metaphysics 2(5) (Sept.), 21–38. Reprinted in Quine W V O (1961). From a logical point of view: nine logico-philosophical essays (2nd edn., 1953). Cambridge, MA: Harvard University Press. 1–19.
Russell B (1903). The principles of mathematics. Cambridge: Cambridge University Press.
Russell B (1905). ‘On denoting.’ Mind 14(56), 479–493. Reprinted in Urquhart A (ed.) (1994). The collected papers of Bertrand Russell, vol. 4: Foundations of logic, 1903–05. New York, NY: Routledge. 414–427.
Russell B (1918–1919). ‘The philosophy of logical atomism.’ Monist 28(4) (Oct. 1918), 495–527; 29(1) (Jan. 1919), 32–63; 29(2) (April 1919), 190–222; 29(3) (July 1919), 345–380. Reprinted in Slater J (ed.) (1986). The collected papers of Bertrand Russell, vol. 8: The philosophy of logical atomism and other essays, 1914–19. London: Allen & Unwin. 157–244.
Russell B (1919). Introduction to mathematical philosophy. Muirhead Library of Philosophy. London: Allen & Unwin.
Salmon N (1987). ‘Existence.’ In Tomberlin J E (ed.) Philosophical perspectives, vol. 1: Metaphysics. Atascadero, CA: Ridgeview. 49–108.

Expression Meaning vs Utterance/Speaker Meaning
A Bezuidenhout, University of South Carolina, Columbia, SC, USA
© 2006 Elsevier Ltd. All rights reserved.

When Mrs. Malaprop in Richard Sheridan’s play The Rivals says to her niece Lydia Languish “don’t attempt to extirpate yourself from the matter,” she means to say that her niece should not attempt to extricate herself from the matter. But that is not what ‘extirpate’ means in English (at least, it is not a meaning one would find listed under ‘extirpate’ in a good dictionary of English usage). Malapropisms of this sort are one way in which expression meaning (i.e., word or sentence meaning) can come apart from speaker meaning. Mrs. Malaprop has a mistaken belief about what the words she is using mean in the language she is using. Slips of the tongue (e.g., saying ‘pig vat’ instead of ‘big fat’) represent another way in which expression and speaker meaning can come apart.

Gricean conversational implicatures represent another, much larger, class of cases in which these two kinds of meaning come apart. These are cases in which the speaker engages in some form of indirection, where, typically, the main conversational point is something implicitly communicated rather than explicitly expressed. In such cases, the speaker’s words mean one thing, but the speaker is trying to convey another meaning, either in addition to the literal expression meaning or in place of it. An example of the former sort is when Mary replies to Peter’s offer to take her to the movies that evening that she will be studying for an exam then. Mary has explicitly said that she will be studying, but has implicitly communicated that she is refusing Peter’s invitation. Here both the explicit statement and the implicit refusal are intentionally communicated. The statement is intended to give Mary’s reason for her refusal. An example of the latter sort is when Mary responds to Peter’s refusal to help her when she is in need by saying ‘You’re a fine friend!’ Here she is implicitly communicating that Peter is not a good friend. Her words ‘fine friend’ are being used sarcastically, and she does not intend to communicate what her words literally mean.

It should be mentioned that there are philosophers who think that even what is explicitly said (as opposed to implicitly communicated) can come apart from literal sentence meaning. These are cases where literal expression meaning must be pragmatically narrowed or broadened in order to arrive at what is explicitly communicated. Thus, when Mary says to the waiter at the restaurant that he should take her steak back because it is raw, she doesn’t mean to say the steak is literally uncooked, but that it is too undercooked for her taste – a case of pragmatic broadening. Or when Mary tells her son that he is not going to die when he comes crying to her with a cut on his finger, she means to say that he is not going to die from that cut, not that he is never going to die – a case of pragmatic narrowing (see Pragmatic Determinants of What Is Said).

For some, utterance meaning is just a variety of speaker meaning. It is the meaning an expression has as used by a speaker in some conversational context. The hearer arrives at an understanding of utterance meaning by combining literal expression meaning with other contextually available information, including information about the speaker’s communicative intentions. However, at least some philosophers of language and linguists wish to draw a contrast between utterance and speaker meaning. Levinson (1987, 1995, 2000) has argued for three levels of meaning: expression meaning, utterance meaning, and speaker meaning. Utterance meanings belong to a system of default meanings associated with certain expression types. These default meanings are distinct from literally encoded expression meanings. However, when a speaker utters an expression of this type in a normal context, she will have conveyed the default meaning, unless she either explicitly or implicitly cancels this meaning. Levinson identifies these default meanings with the class of conversational implicatures that Grice called generalized conversational implicatures. For instance, when Peter accuses Mary of having eaten all the cookies and Mary replies that she has eaten some of the cookies, she explicitly says that she has eaten some and possibly all of the cookies, she implicates in a generalized way that she has not eaten all of the cookies, and she implicates in a particularized way that she is not the guilty party. These three meanings correspond to Levinson’s three levels of sentence, utterance, and speaker meaning, respectively.

The distinction between expression and speaker meaning has been invoked in many philosophical debates as a way of avoiding the postulation of multiple meanings for a single expression type. One well-known instance is Kripke’s (1977) appeal to a distinction between speaker’s reference and semantic reference of definite descriptions. Kripke appealed to this distinction in order to deny the semantic significance of what Donnellan (1966) called the referential use of such descriptions. Suppose Mary uses the description ‘the man in the corner drinking a Martini,’ intending to refer to Peter, but in fact Peter is drinking water, not a Martini. Kripke argues that the so-called referential use of the description can be accounted for by appeal to what Mary meant to convey by the use of that expression, whereas what she actually said is determined by giving a Russellian analysis of the description. Since there is no unique Martini drinker in the corner (since, let us suppose, there is no Martini drinker there), what Mary has said is false, although what she meant to convey (her speaker meaning) may very well have been true.

There are differing views as to the relative priority of expression and speaker meaning. Some philosophers, such as Strawson (1950), have argued that it is not words and sentences by themselves that refer or express propositions. Rather, it is speakers who refer or express propositions by their uses of words and sentences, respectively. Salmon (2004) calls this the speech-act-centered conception of semantics and contrasts it with the view he favors, namely the expression-centered conception. According to the latter conception, words and sentences have their semantic properties intrinsically, in the sense that one can talk about the referential and truth-conditional content of expressions without any knowledge of or appeal to the communicative intentions of users of those expressions. Although defenders of the speech-act-centered conception are committed to denying that expressions have referential or truth-conditional content independently of speakers’ communicative intentions, their view is compatible with the claim that expression types have aspects of meaning that are context invariant. These would correspond to Fregean ‘senses’ or ‘modes-of-presentation’ or (for demonstratives and indexicals) to Kaplanian ‘characters.’ Such nonreferential or nontruth-conditional aspects of meaning may be intrinsic in Salmon’s sense. In other words, such meaning would be a property of expression types, independently of the intentions of the users of those expression types.


Some philosophers of language have denied the idea of intrinsic expression meaning independent of speaker meaning. For instance, Grice (1957) argued that expression meaning is reducible to speaker meaning. Grice was interested in nonnatural meaning (MeaningNN), as opposed to the sort of natural meaning that a sign may have in virtue of naturally signaling or indicating some state of affairs. He argued that an utterance’s nonnaturally meaning that p is simply a matter of a speaker’s uttering an expression with a certain communicative intention. This would be a sort of ‘one-off’ meaning for that expression. However, that speaker may be disposed to utter an expression of this type whenever he wishes to convey a certain meaning. Thus, he might develop a habit of using that expression type that way. If this usage were then to spread to other members of his community, it would become a standardized usage, and that expression type would come to have a stable meaning independent of the intentions of any one speaker. But such a meaning would not be independent of the linguistic activities of the users of the expression type in general. Another way that defenders of a speech-act-centered conception have challenged the idea of intrinsic expression meaning is to argue with Searle (1983) that all meaning is relative to a nonintentional Background. A sentence only has truth-conditions relative to some assumed Background. This Background can never be made fully explicit, because at bottom it consists in a set of abilities, practices, and ways of acting that are nonintentional. Although Searle, unlike Grice, is not suggesting that expression meaning depends ultimately on the communicative intentions of speakers, he is arguing that expression meaning depends on a certain sort of human activity, and so this conception is antithetical to the idea of intrinsic expression meaning.

See also: Character versus Content; Context Principle; Conventions in Language; Definite and Indefinite Descriptions; Intention and Semantics; Natural versus Nonnatural Meaning; Pragmatic Determinants of What Is Said; Referential versus Attributive; Semantics–Pragmatics Boundary; Sense and Reference; Speech Acts; Truth Conditional Semantics and Meaning.

Bibliography

Donnellan K (1966). ‘Reference and definite descriptions.’ Philosophical Review 75, 281–304.
Grice P (1957). ‘Meaning.’ Philosophical Review 66, 377–388.
Kripke S (1977). ‘Speaker’s reference and semantic reference.’ Midwest Studies in Philosophy 2, 255–276.
Levinson S (1987). ‘Minimization and conversational inference.’ In Verschueren J & Bertuccelli-Papi M (eds.) The pragmatic perspective. Amsterdam: John Benjamins. 61–129.
Levinson S (1995). ‘Three levels of meaning.’ In Palmer F R (ed.) Grammar and meaning. Cambridge: Cambridge University Press. 90–115.
Levinson S (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Salmon N (2004). ‘The good, the bad and the ugly.’ In Reimer M & Bezuidenhout A (eds.) Descriptions and beyond. Oxford: Oxford University Press. 230–260.
Searle J (1983). Intentionality. Cambridge: Cambridge University Press.
Strawson P (1950). ‘On referring.’ Mind 59, 320–344.

Extensionality and Intensionality
N Oldager, Technical University of Denmark, Lyngby, Denmark
© 2006 Elsevier Ltd. All rights reserved.

A sentence is extensional if its expressions can be substituted with expressions that have the same denotation (reference) without altering the truth value of the sentence. A sentence that is not extensional is intensional. A language is extensional if every sentence of it is extensional; otherwise, the language is intensional. The following sentence is then intensional:

George IV wished to know whether Scott was the author of Waverley.

As Scott was in fact the author of Waverley, ‘Scott’ and ‘the author of Waverley’ are co-denotational. However, if we substitute one for the other, we get

George IV wished to know whether Scott was Scott,

which, unlike the former, can hardly be taken as true. Because natural language contains intensional sentences, natural language is intensional. A context in which co-denotational expressions cannot be substituted is known as an indirect context (or oblique, opaque, or intensional context), and a context of extensional expressions is a direct context. Sentences involving propositional attitudes, intentions, quotations, temporal designation, and modalities give rise to indirect contexts. Another example of intensionality:

Nine necessarily exceeds seven.
Nine is the number of the planets.
The number of the planets necessarily exceeds seven.

Although the first two sentences are true, the third is not, because it is only a contingent astronomical fact and not a necessary truth that the number of planets exceeds seven – there could have been only seven planets.

Semantical Aspects of Extensionality and Intensionality

Issues concerned with extensionality and intensionality have been cardinal motivations behind the development of important semantical theories. Reducing extensionality and intensionality to technical conditions regarding substitutivity of expressions is accordingly a crude simplification. Though extensionality and intensionality can be traced as far back as ancient Greek philosophy, the first major contribution to the subject was Gottlob Frege’s Über Sinn und Bedeutung (Frege, 1892). Note that there are different translations of the title words of this work: Sinn is translated as ‘sense’, but Bedeutung is translated as either ‘denotation’, ‘reference’, or ‘nominatum’ (‘meaning’ has actually been used for both Sinn and Bedeutung). Following Bertrand Russell and Alonzo Church, Bedeutung will be identified with ‘denotation’. Although Frege does not spell it out in detail, he maintains that semantics is compositional, such that the semantics of a sentence is determined by the semantics of its parts. To illustrate his theory, assume, to begin with, that semantics is purely referential; that is, assume that the semantics of an expression is what the expression denotes. This seems plausible; for example,

Paris is beautiful

asserts that what ‘Paris’ denotes, i.e., the actual capital of France, has the property of being beautiful. It is not the string of symbols ‘Paris’ or one’s idea of Paris, whatever that may be, which is beautiful. However, things are more complicated than this. There are aspects of natural language semantics that cannot be explained by resorting to the notion of denotation, Frege argues. He illustrates this by the following puzzle. Suppose a and b are names for some objects and that

a = b

is true, hence the expressions ‘a’ and ‘b’ have the same denotation.

Frege then recognizes a difference between this identity and an identity such as a = a. The latter is trivially true (analytically true), whereas the former may contain useful information. For instance, in a criminal investigation a discovery such as “the burglar is the suspect” could be decisive, whereas “the burglar is the burglar” is useless. The important question is then: What is the source of the difference between a = a and a = b? As semantics is compositional, the difference must be due to a difference between the semantics of the expressions ‘a’ and ‘b’. But by assumption, ‘a’ and ‘b’ have the same semantics because they have the same denotation. In other words, referential semantics must be rejected because it cannot explain the difference between the identities. Frege’s famous solution is to acknowledge that ‘a’ and ‘b’ refer to their denotation in different ways. Consider, for example, the expressions ‘morning star’ and ‘evening star.’ Both denote the planet Venus, so morning star = evening star is true, but they refer to Venus in different ways: one refers to a heavenly object seen in the morning, the other to a heavenly object seen in the evening. Frege says ‘morning star’ and ‘evening star’ have different senses. The puzzle about identity can now be solved by noting that a = a and a = b express different senses.

Frege was inspired by mathematics when he developed this theory. Consider the two expressions ‘1 + 3’ and ‘2 * 2’. Both are equal to, i.e., denote, the number 4; however, their ways of referring to four differ because the calculations for obtaining the result differ. Hence, the expressions have different senses. To solve the puzzle, Frege accordingly introduces two semantical concepts, denotation and sense. Each expression – including proper names and sentences – is then assumed to have both a denotation as well as a sense, although he recognizes that exceptions may occur. Frege never precisely described what senses are, but he explained that the sense of an expression contains its mode of presentation (its way of referring), that expressions express their senses, and that senses are something we grasp. Moreover, senses are distinguished from ideas (subjective thinking), meaning they are objective. The notion of sense may appear unfamiliar, and it may not be clear why it is seminal. However, expressions such as ‘morning star’ or ‘Paris’ are signs, and signs are characterized by their ability to refer to something. Sense addresses this fundamental feature – the referential capacity of expressions. So, it is natural to discern between expressions that refer to the same thing.

There is a close relationship between Frege’s theory and the earlier definition of extensionality and intensionality. Frege can now provide an explanation for failure of substitutivity in indirect contexts. In indirect contexts we are talking about the senses of the expressions occurring therein. When we say

John believes that the morning star is the evening star

we are not talking about Venus, Frege argues, but about different senses that determine Venus. In indirect contexts the semantics become the sense. Thus, when we substitute two co-denotational expressions that have different senses in indirect contexts, we obtain different propositions that may have different truth values. Frege’s theory has been challenged by, among others, Bertrand Russell (Russell, 1905). Russell notes that in Frege’s theory, a phrase such as “the present Queen of England” denotes an actual woman. It would seem, by parity of form, that a phrase like “the present King of France” is also about, i.e., denotes, an actual individual. However, as an actual King of France does not exist, this phrase does not denote – it merely expresses a sense. Only when sentences are false or nonsense do we talk about senses, it seems. Russell then presents a rival theory of denotation which, by means of a clever paraphrasing technique, does not resort to Fregean senses. However, it faces other difficulties. For instance, Russell would have to accept that

Ponce de Leon sought the fountain of youth

is either false or nonsense because there did not exist an actual fountain of youth. There are other semantical theories and notions similar to Frege’s. The notion of connotation is similar to sense. Rudolf Carnap (Carnap, 1947) has presented a semantical method in which he distinguishes the notions extension and intension. These are closely related to Frege’s notions, in fact, in direct contexts extension and denotation are the same, and so are intension and sense; only in indirect contexts does Carnap distinguish his notions from Frege’s. Common to these theories is the distinction between two semantical notions, one more general than the other.

Extensionality and Intensionality in Formal Settings

So far, the investigations have been restricted to natural language; in the following they will be generalized to logic. The underlying idea is to formalize the condition for extensionality (substitutivity of co-denotational expressions). This will allow us to determine whether a logic is extensional or intensional. In propositional logic, the formula

(1) (P ↔ Q) → (R ↔ R[Q/P])

is valid (logically true) for all formulas P, Q, and R, where R[Q/P] is the result of substituting zero or more occurrences of P with Q in R. This result says that equivalent (co-denotational) formulas can be substituted with preservation of truth – hence, that propositional logic is extensional. In contrast, modal logic is intensional, because (1) is not valid in modal logic. We have the following counterexample involving the necessity operator:

(p ↔ q) → (□p ↔ □p[q/p]),

that is,

(p ↔ q) → (□p ↔ □q)

is not valid, where p and q are atomic propositions. Thus, in modal logic we cannot substitute equivalent formulas – but this is precisely what we want, because modal logic formalizes the intensional notion of modality. The example shows that □ creates an indirect context (a concrete countermodel is sketched below). Presenting a general, formal definition of when a logic is extensional is no trivial task. One reason is that the notion of a logic is very general indeed, meaning there are several non-equivalent formalizations of the condition for extensionality. Consider first-order predicate logic, which is commonly said to be extensional. If we accept open formulas (formulas in which variable occurrences are not bound, such as F(x)), formula (1) is not valid. But this means that we would have to say that predicate logic is intensional. However, (1) is not the only formalization of the condition for extensionality for predicate logic. As an alternative formalization, we have:

(2) If P ↔ Q is valid then R ↔ R[Q/P] is valid.
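The countermodel promised above can be made concrete in a minimal executable sketch (an illustration added here, not Oldager’s; the world names and valuation are invented). It builds a two-world Kripke model in which p ↔ q holds at the evaluation world while □p ↔ □q fails there, so the relevant instance of (1) is falsified:

```python
# A two-world Kripke model falsifying (p <-> q) -> ([]p <-> []q) at world w1.
# Formulas are nested tuples; 'box' is the necessity operator.

W = ['w1', 'w2']
R = {'w1': ['w1', 'w2'], 'w2': ['w2']}   # accessibility relation
V = {'p': {'w1'}, 'q': {'w1', 'w2'}}     # p holds only at w1; q holds at both

def holds(f, w):
    op = f[0]
    if op == 'atom':
        return w in V[f[1]]
    if op == 'iff':
        return holds(f[1], w) == holds(f[2], w)
    if op == 'implies':
        return (not holds(f[1], w)) or holds(f[2], w)
    if op == 'box':                       # true iff true at every accessible world
        return all(holds(f[1], v) for v in R[w])

p, q = ('atom', 'p'), ('atom', 'q')
instance = ('implies', ('iff', p, q), ('iff', ('box', p), ('box', q)))
print(holds(instance, 'w1'))   # False: p <-> q holds at w1, yet []q holds there while []p does not
```

Dropping the ‘box’ clause leaves an ordinary truth-functional evaluator, under which every instance of schema (1) comes out true – the extensionality of propositional logic noted earlier.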

The difference between this formalization and (1) is that (2) is formulated in the metalanguage. As (2) holds, it says that predicate logic is extensional in terms of metalogical formalization. Unfortunately, we cannot adopt (2) as a general formalization of extensionality, because modal logic also satisfies (2), meaning it would become extensional too. A possible solution is to discard open formulas in predicate logic and accept (1) as a formalization of extensionality. However, other solutions might be preferred. It has been suggested, e.g., by Ruth Barcan Marcus (Marcus, 1960), that there are several principles (definitions) of extensionality, and hence also several principles of intensionality. This reveals subtleties in the distinction between extensionality and intensionality. Because of their imprecise nature, intensional notions such as sense have been deemed opaque. However, the last 50 years of developments in nonclassical logic, in particular the development of possible-world semantics by such people as Saul Kripke and Richard Montague (1970), have shown that a significant part of intensional notions can be formulated in precise (mathematical) settings. See Gamut (Gamut, 1991) for an introduction and Fitting and Mendelsohn (Fitting and Mendelsohn, 1998) for newer developments in possible-world semantics.

See also: Connotation; Evidentiality; Existence; Factivity; Generic Reference; Modal Logic; Montague Semantics; Negation; Possible Worlds; Propositional Attitude Ascription; Propositional Attitudes; Quantifiers; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Sense and Reference; Specificity; Virtual Objects.

Bibliography

Carnap R (1947). Meaning and necessity. Chicago: University of Chicago Press.
Fitting M & Mendelsohn R L (1998). First-order modal logic. Dordrecht: Kluwer Academic Publishers.
Frege G (1892). ‘Über Sinn und Bedeutung.’ Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. Reprinted as ‘On sense and reference.’ In Geach P & Black M (eds.) (1984). Translations from the philosophical writings of Gottlob Frege. Oxford: Blackwell. 56–78.
Gamut L T F (1991). Logic, language and meaning (vol. 2): Intensional logic and logical grammar. Chicago: Chicago University Press.
Marcus R B (1960). ‘Extensionality.’ Mind, New Series 69, 55–62.
Montague R (1970). ‘Universal grammar.’ In Thomason R H (ed.) (1974). Formal philosophy – selected papers of Richard Montague. New Haven and London: Yale University Press. 222–246.
Russell B (1905). ‘On denoting.’ Mind, New Series 14, 479–493.

F

Face
F Bargiela-Chiappini, Nottingham Trent University, Nottingham, UK
© 2006 Elsevier Ltd. All rights reserved.

Background

A search for 'face' in the online Oxford English Dictionary (OED) yields pages on this polysemic entry. It also illuminates the cultural and psychological dimensions of the construct that lies at the heart of a well-known model of linguistic politeness expounded by Brown and Levinson in their seminal work Politeness: some universals in language usage (1978, 1987) (see Politeness). The Western character of their (positive and negative) face derives from an Anglo-Saxon understanding of the rational individual who seeks to protect himself or herself, and others, from Face-Threatening Acts (FTAs). In spite of its claim to universality, the politeness model that stems from this characterization of face is also, inevitably, culture-biased. In reaction to this, a critical reappraisal of Brown and Levinson's notion of face has engaged scholars beyond the Anglophone world, who have brought to their analyses insights from psychology, philosophy, and anthropology. In particular, critique from Asian linguists has now grown into a consistent contrastive body of research, which has expanded to include perspectives from Southern Europe (e.g., Turkey, Greece, Spain, Italy), South America (e.g., Ecuador, Argentina), and South Africa, thus widening considerably the cultural spread of the debate. If it is thanks to two British anthropologists (Brown and Levinson) that linguistic politeness rose to become the subject of ongoing scholarly debate, it is too easily forgotten that it was an American sociologist who gave us the powerful account of 'face' that stands at the heart of it. Not a universal account, however; in fact, Erving Goffman's 'face' is American in many of its facets, and yet an original contribution to the modern study of the social and psychological relevance of this arguably universal construct to politeness research and human interaction.

Face as a philosophical construct boasts a history that dates to ancient civilizations. For example, the Náhuatl people who inhabited Central America would use the expression 'face-heart' (Spanish rostro-corazón) to define personhood. In their understanding, "face seems to refer to the physiognomy of the self while the heart is the dynamic aspect of self" (Jiménez Cataño, 1993: 72) (my translation). In modern times, among the more defining influences on Goffman's early work and his conceptualization of face are Chinese and American Indian sources and the work of the French sociologist Émile Durkheim.

‘Face’ According to Goffman Even though Goffman was primarily concerned with uncovering the rules governing social interaction, his treatment of ‘face’ leads the reader to believe that he saw it as the hub of interpersonal dynamics; in fact, for Goffman ‘face-saving’ was shorthand for ‘the traffic rules of social interactions’ (1967:12). His concept of face, unlike Brown and Levinson’s later understanding of it, seeks to accommodate both strategic and social indexing behaviors and is best apprehended in the context of social order as ritual. Equilibrium is maintained by interactants making choices informed by moral rules. In turn, the social morality underpinning Goffman’s order consists of values such as pride, honor, dignity, consideration, tact and poise, perceptiveness and feelings, all of which the self expresses through face. Within the wider social order, face maintenance is a condition rather than the objective of interaction. E´mile Durkheim, the French thinker, saw social action as symbolic ritual enacted through positive and negative rites, hence Goffman’s definition of the person as a ‘‘ritual object,’’ a ‘‘deity,’’ and ‘‘his own priest’’ (1967: 55). However, Durkheim also envisaged solidarity as the glue of the social order. Solidarity between actors would aim for the fulfillment of obligations toward others as a condition for the maintenance of equilibrium. Goffman’s interactant is certainly more individualistic, but avoids the egocentrism of Brown and Levinson’s idealized


agent. Ultimately, for Goffman as for Durkheim, organizational order comes before the safeguard of the individual self, which can be asked to sacrifice his or her face (with the ensuing embarrassment) for the gain of society (1967: 112). In 1944, the American Anthropologist published Hsien Chin Hu's seminal account of the Chinese concept of 'face.' A decade later, Goffman acknowledged his debt to Hu in his discussion of a Western version of 'face.' Goffman's social psychological definition of face is that of 'the positive social value a person effectively claims for himself by the line others assume he has taken during a particular contact,' where a "line" is the interactants' self and others' evaluation (1967: 23). Self-respect and considerateness are safeguards for one's own and others' face in social encounters. These social values underline the interdependent character of Goffman's actor, whose face he considers sacred (1967: 19). Face maintenance requires a ritual order, that is, 'acts through whose symbolic component the actor shows how worthy he is of respect or how worthy he feels others are of it' (Goffman, 1967: 19). For Goffman, the contemporary and still enduring Anglo-Saxon values of independence and privacy have swung the balance from a Durkheimian collective self to a self-aware, more individualistic self. Goffman's 'face-work' consists of defensive (saving one's own face) and protective (saving others' face) practices exercised simultaneously, another indication of the social value that he attached to face. "Avoidance" rituals, "corrective" processes, and the "aggressive use of face-work" are also discussed by Goffman in some detail, alongside the tacit cooperation that makes face-work possible. Brown and Levinson's model seems to have given preeminence to the first three practices, thus emphasizing the self-defensive, negative posture of their ideal agent. For Goffman, "tact," "reciprocal self-denial," and "negative bargaining" (favoring one's counterpart) signal the degree of social awareness and concern for others, possibly motivated by self-preservation, which interactants would display as a "ritually delicate object" (1967: 31). In spite of later developments in politeness research that somewhat obscured the originality of Goffman's face, his notion remains a primary object of interest for scholars engaged in the situated study of interpersonal behavior in general and of 'polite behavior' in particular.

Brown and Levinson’s ‘Face’ and Its Critics Brown and Levinson’s positive and negative face is a cognitive, abstract, culture-dependent construct

attributable to a rational agent; it makes extensive use of Durkheim's negative (avoidance) rituals. The dualistic notion of face and the emphasis on negative face and the notion of 'imposition' have attracted extensive criticism in subsequent Chinese and Japanese studies of politeness, as well as among other non-Anglo-American scholars. Although Brown and Levinson acknowledge Durkheim as their source for negative and positive rites, their understanding of the sociologist's original concepts is substantively altered in their model; their ideal rational actor is busy protecting one's own and others' face, rather than giving, enhancing, or maintaining face. Emotions, when accounted for, are also means of face-protection. Such a reductionist perspective has progressively laid bare the need for a sociopsychological and affective construct of face that resonates again with Goffman's original motivation for the study of interaction as being "not about the individual and his psychology, but rather the syntactical relations among the acts of different persons mutually present to one another" (1967: 2) (italics added). It is Brown and Levinson's negative face (and negative politeness) that have been singled out for intensive scrutiny on grounds of cultural relativity. Further, their outline of positive and negative face as mutually exclusive in interaction has been challenged by empirical research that indicates, instead, the coexistence of many face wants calling on simultaneous positive and negative face-work. Similarly, the assumptions that only one type of face can be threatened at any given time and that Face Threatening Acts (FTAs) can be analyzed out of context are highly problematic. The crosscultural validity of Brown and Levinson's model has buckled under the criticism of suffering from British 'cultural bias' (Baxter, 1984). Research in linguistic politeness in the 1990s indicated a split between cultures where face is a key explanatory construct in interpersonal behavior and those where other values such as discernment, deference, and respect play a more important role. Face and face-work will operate differently in socially stratified cultures (e.g., Japan and Mexico, where normative politeness is dominant) compared to status-based cultures (such as China and Korea), where normative and strategic (volitional) politeness coexist. Where status is allegedly less marked in interpersonal interaction and normative politeness is less in evidence (e.g., Northern Europe and North America), the concern for face remains nevertheless relevant, even if expressed in a more understated way. The consolidation of psychology research on the social self and social identity has opened up new contexts of relevance for the dynamics of face and


face-work, calling on a modified model of face and politeness that breaks free from the straitjacket of cognitivist psychology. Another exciting development is the emergence of indigenous psychologies from Asia and Africa which, instead of testing Western formulations, may develop their own analytical models and constructs to make true crosscultural comparison possible.

Future of Face Research

For all the criticism of ethnocentrism that their model has generated, Brown and Levinson's work can boast an unmatched and continuing level of interest within the field of linguistic politeness research. Twenty years on, new insights from social and cultural psychology point in the direction of a culture-situated, dynamic understanding of face that gives consideration to other factors such as personal values, one's own self-concept, self-identity in various groupings, role expectations, and normative constraints (Earley, 1997: 95–97). After years of established disciplinary research in politeness, especially in anthropology and linguistics, face is becoming a privileged topic for interdisciplinary research. In addition, its cultural sensitivity calls for more international collaborative research if its sociocultural and philosophical roots are to be uncovered for further comparative analysis. On a more abstract level, one could imagine face as a bridging concept between interpersonal interaction and social order in the sense that face, at the micro-level of verbal and nonverbal behavior, encapsulates and dynamically displays the manifestations of (macro-level) cultural values. Situated discourse would thus become the epistemological locus

where observation and interpretation of face and face-work take place within their most expressive environment. Alongside psychologists, anthropologists, and sociologists, social theorists may in the future be called upon to explore a powerful construct that thus far has occupied but not yet exhausted the endeavors of most politeness scholars.

See also: Context and Common Ground; Cooperative Principle; Honorifics; Memes; Politeness; Politeness Strategies as Linguistic Variables; Pragmatic Presupposition; Taboo, Euphemism, and Political Correctness; Taboo Words.

Bibliography

Baxter L A (1984). 'An investigation of compliance-gaining as politeness.' Human Communication Research 10(3), 427–456.
Brown P & Levinson S (1978). 'Universals in language usage: politeness phenomena.' In Goody E (ed.) Questions and politeness: strategies in social interaction. Cambridge: Cambridge University Press. 56–311.
Brown P & Levinson S (1987). Politeness: some universals in language usage. Cambridge: Cambridge University Press.
Durkheim É (1915). The elementary forms of the religious life. London: George Allen & Unwin Ltd.
Earley P C (1997). Face, harmony and social structure: an analysis of organizational behavior across cultures. New York: Oxford University Press.
Goffman E (1967). Interaction ritual: essays on face-to-face behavior. London: The Penguin Press.
Hu H C (1944). 'The Chinese concepts of "face."' American Anthropologist 46, 45–64.
Jiménez Cataño R (1993). 'La concepción Náhuatl del hombre.' Istmo 204, 69–75.

Factivity
P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
© 2006 Elsevier Ltd. All rights reserved.

Factivity is a semantic property of certain predicates, factive predicates, which take an embedded S-structure, preferably a that clause, as subject or object. The that clause of a factive predicate P is presupposed to be true when P is the main lexical predicate of a main clause (directly under a speech act of assertion, question, wish, command, etc.). Examples of factive predicates with factive object clauses are know, realize, and have forgotten. Usually the

so-called affective factives are included, such as regret, deplore, and be delighted. (Un)forgivable, pity, and regrettable are predicates with factive subject clauses. Thus, What a pity that she has left presupposes that she has left. And He hasn't forgotten that Jack played a trick on him (with presupposition-preserving not) presupposes that Jack played a trick on him. Sometimes a predicate may take a sentential subject as well as a sentential object clause. Such double-complementation predicates are invariably factive with respect to their subject clause. For example, a sentence such as That the butler had blood on his shirt suggested that he was the murderer presupposes that


the butler had blood on his shirt. This is a general, and so far unexplained, property of verbs that take double complementation. Factive verbs are intensional in that they block substitution salva veritate of coreferential terms: Luke realizes that the Morning Star is uninhabited does not have the same truth conditions as Luke realizes that the Evening Star is uninhabited. Some predicates are 'antifactive,' in that they induce a presupposition of the falsity of the embedded that clause. For example, be under the illusion is an antifactive predicate (likewise for the German wähnen, used by Frege (1892: 47) in the first modern observation of factivity). In modern times, factivity was brought to the attention of the linguistic world by Kiparsky and Kiparsky (1971), who pointed out that factivity is not only a semantic but also a syntactic property, as factive predicates share a number of syntactic properties, in particular the impossibility of Subject Raising from the embedded clause, and the possibility

of replacing that with the complex NP the fact that. The only exception is the prototypical factive verb know, which behaves syntactically as a nonfactive verb. This problem may be solved by assuming that know has lexically incorporated the NP the fact as part of its object clause, reducing the complex NP the fact that to the simple complementizer that.
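Schematically, the projection behavior just described can be summarized as follows (a sketch added for illustration, not part of the original text; '≫' abbreviates 'presupposes,' and p stands for, e.g., Jack played a trick on him):

\[
\begin{aligned}
\text{He has forgotten that } p \;&\gg\; p\\
\text{He hasn't forgotten that } p \;&\gg\; p\\
\text{He is under the illusion that } p \;&\gg\; \neg p
\end{aligned}
\]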

See also: Evidentiality; Existence; Extensionality and Intensionality; Lexical Conditions; Presupposition; Specificity; Virtual Objects.

Bibliography

Frege G (1892). 'Über Sinn und Bedeutung.' Zeitschrift für Philosophie und philosophische Kritik 100, 25–50.
Kiparsky P & Kiparsky C (1971). 'Fact.' In Steinberg D & Jakobovits L (eds.) Semantics: an interdisciplinary reader in philosophy, linguistics, and psychology. Cambridge: Cambridge University Press. 345–369.

False Friends
P Chamizo-Domínguez, Universidad de Málaga, Málaga, Spain
© 2006 Elsevier Ltd. All rights reserved.

The term ‘false friends’ (faux amis, in French) was coined by M. Koessler and J. Derocquigny (1928) in their classical, seminal work on this topic. From a synchronic point of view, the linguistic phenomenon of false friends can be defined as the fact that two given words are similar or equivalent graphically or phonetically in two or more given languages but have different meanings. In other words, false friends share their signifiers, but they do not share their meanings. For that reason, false friends are extremely insidious traps for translators. To illustrate just how problematic false friends can become, take the Spanish term for them, falsos amigos – now widely used in linguistics and translation studies; it results from inadequate translation of French faux amis. In Spanish, a treacherous, disloyal, or unfaithful friend is not properly called un falso amigo, but un mal amigo, literally ‘a bad friend.’ Although ‘false friends’ is the most common term applied to the phenomenon under discussion, other names,

such as false equivalents and false cognates, also have been used (Buncic, 2000). Nevertheless, it should be stressed that the concepts of false friends and false cognates (from Latin cognatus, ‘relative’) are quite different because all false cognates are false friends but not all false friends are false cognates. For instance, Italian burro ‘butter’ and Spanish burro ‘ass, donkey’ are false cognates and false friends. By contrast, French personne ‘nobody’ and English person are false friends but, given that they are etymologically related, they aren’t false cognates. In other words, the set of false friends includes the set of false cognates but not vice versa. False friends can be classified in two groups: chance false friends and semantic false friends. Chance false friends are those words that are similar or equivalent in two or more given languages but without any semantic or etymological reason for this overlap. The English noun coin and the French noun coin ‘corner’ have exactly the same graphic shape because of a fortuitous diachronic process. Similarly, the Portuguese word chumbo ‘lead’ and the Spanish noun chumbo ‘Indian fig, prickly pear’ are not etymologically related at all. In fact, chance false friends could be considered as the equivalents, in two or more given


languages, of homonymic words in a single natural language. Such false friends also are false cognates. Semantic false friends are those words that are similar or equivalent in two or more languages because they are etymologically related. That is, semantic false friends have the same etymological origin but have developed different meanings in each language. For that reason, semantic false friends could be considered the equivalents, in two or more given languages, of polysemous words in a single natural language. Semantic false friends can in turn be divided into two groups: (1) full false friends, which are those words that have completely different meanings; and (2) partial false friends, which are those words that have several senses, some of which coincide in both languages whereas others do not. The Spanish word bigote 'moustache,' English bigot, and French bigot/bigote are cases of full false friends because there is no possible context in which the Spanish word can be (correctly) translated into the English or French words. By contrast, Spanish actual, French actuel/actuelle, and German aktuell are only partial false friends with regard to English actual because, although all three usually mean 'up-to-date' or 'current' in ordinary language, they can also mean 'real, existing in fact' (as English actual does) in philosophical jargon. The English noun injury is another perfect example of a partial false friend with regard to Spanish injuria, Portuguese injúria, and French injure. Injury, injuria, injúria, and injure are rooted in Latin injuria 'offence, injustice.' Spanish, Portuguese, and French have more or less kept the original Latin meaning of that word. As a consequence, the Spanish, Portuguese, and French terms are not false friends with regard to the English one insofar as injury is a synonym for 'insult,' 'offence,' or 'injustice.'

make sense, but what the hearer/reader understands may be quite different from what the speaker or writer was trying to say (Chamizo-Domínguez and Nerlich, 2002). As words change their meanings inside a single language, false friends develop by one or more well-known mechanisms; namely, metaphor, amplification or restriction of meaning, amelioration or pejoration, metonymy, borrowings, euphemism or dysphemism, irony, and so on (Chamizo-Domínguez and Nerlich, 2002). Let us consider some examples.

1. Metaphor. In addition to its literal meaning, the Spanish noun canguro 'kangaroo' has developed the metaphorical meaning (now lexicalized and widely used) of 'babysitter,' whereas the English word has not developed any such metaphorical meaning, except perhaps in the collocation 'kangaroo court,' i.e., 'illegal court.'
2. Amplification of meaning. The Spanish adjective inexcusable shares the meaning of 'indefensible, unjustifiable, unpardonable' with its English cognate, but the Spanish word, in addition, has amplified this shared meaning and also means 'unavoidable.'
3. Amelioration/pejoration. The English noun topic and the Spanish counterpart tópico 'commonplace' are perfect examples of semantic false friends. It should be stressed, however, that both words are related to the Greek word topos 'place, site' and have emerged from an allusion to Aristotle's work Topics. Topic has become a synonym for 'subject of a text or discourse' in English because of a process of amelioration through which the standard meaning of the Greek term shifted. By contrast, the standard meaning of the Spanish word tópico is the outcome of a process of pejoration, a process which sent the word in the opposite direction semantically. So when an English lecturer teaches a topic she/he can be regarded as an excellent teacher, whereas a Spanish one who teaches un tópico can be regarded as a terrible teacher. By contrast, the French cognate topique (mainly used in medical jargon) has kept a meaning very close to the usual meaning of Greek topos.
4. Metonymy. Take the case of the Spanish word baño and the French word bagne. Both derive from Latin balneum 'bath, bath house,' but baño means 'bath' or 'bath house,' whereas bagne has the standard meanings of 'prison,' 'dungeon,' and 'hard labor,' as well as the slang or familiar meaning of 'work' or 'the site where a person works.' But in the past the Spanish word baño also meant 'prison' or 'dungeon,' as illustrated by the title of the well-known


work by Miguel de Cervantes, Los baños de Argel 'The dungeons of Algiers.' It has to be said, however, that few Spaniards (and that includes most scholars) know this old-fashioned meaning of baño. The meaning of 'prison' for baño and bagne originated in a metonymy based on the fact that the Turks used to imprison their captives inside Constantinople bath houses. From these initial conditions the following metonymic chain of semantic changes arose: (1) French has lexicalized the meaning of 'prison' for bagne; (2) by means of a second metonymy the meaning 'hard work' has become lexicalized in bagne; and (3) by means of a third (and perhaps humorous/ironic) metonymy the meaning of 'work' has been added. As a result of this diverging chain of metonymies the words baño and bagne have now become false friends because, whereas French bagne is synonymous with pénitencier, galères, enfer, préside, or travaux forcés, current Spanish baño is not. Both words are false friends with regard to the English word bagnio, which was borrowed from Italian bagno 'bath, bathing house' with the euphemistic meaning of 'brothel' and which was in use in English from the 17th to the 19th century and probably beyond.
5. Borrowings. Given that most words are polysemous, when a concrete term is borrowed it usually becomes a (partial) false friend by means of a restriction in meaning. English check has been borrowed by lots of languages but is restricted to the field of banking. Similarly, German has borrowed the English adjective soft in the collocation soft Eis, but what Germans call soft Eis is what English speakers call ice cream.
6. Euphemism/dysphemism. Although the Latin word latrina (related to lavare 'to wash') was coined as a euphemism for cloaca, it has acquired a dysphemistic flavor in most European languages (e.g., English latrine, Spanish letrina, French latrine, or German Latrine), because it is used in barrack-room contexts. As a result of this, English latrine and Latin latrina have become false friends because, if they are used in the same context, a different register is evoked and different implicatures arise.
7. Irony. Spanish bárbaro/bárbara has developed, in addition to the meanings of 'brute,' 'cruel,' 'inhuman,' or 'ferocious,' which are shared with English barbaric/barbarous, the ironic meaning (now widely lexicalized) of 'fantastic,' 'terrific,' 'great,' or 'super' (cf. colloquial English wicked).

When one of these kinds of semantic change takes place in a given language, false friends are likely to

arise. And this happens not only with common nouns, adjectives, and verbs but also with proper names. For instance, the Spanish toponym Cabo de Hornos 'Cape Furnaces' originated in the erroneous translation of English Cape Horn and/or Dutch Kaap Hoorn, because of the graphic similarity between Spanish horno 'furnace' and the English or Dutch word. And, as a result of this erroneous translation, Cabo de Hornos and Cape Horn/Kaap Hoorn have become false friends. Although false friends are usually analyzed as words in isolation, many terms only become false friends in the context in which they are used. In fact, there are numerous words that are not false friends at all when they are considered in isolation but that become false friends when they are part of an idiom. It should be stressed that most idioms can be understood both according to the literal meaning of their component words and according to any figurative interpretation. This figurative meaning becomes the most salient one when idioms are fully lexicalized. The literal translation of English French letter (a euphemism for 'condom') into French lettre française has only the literal meaning, 'letter in French.'

See also: Idioms; Irony; Metaphor and Conceptual Blending; Metonymy; Polysemy and Homonymy; Taboo, Euphemism, and Political Correctness.

Bibliography

Buncic D (2000). Das sprachwissenschaftliche Problem der innerslavischen 'falschen Freunde' im Russischen. Master's thesis, Köln.
Chamizo-Domínguez P J (2008). Semantics and pragmatics of false friends. New York: Routledge.
Chamizo-Domínguez P J & Nerlich B (2002). 'False friends: their origin and semantics in some selected languages.' Journal of Pragmatics 34, 1833–1849.
Koessler M & Derocquigny J (1928). Les faux amis, ou, Les trahisons du vocabulaire anglais: conseils aux traducteurs. Avec un avant-propos de M. Louis Cazamian et une lettre de M. Émile Borel. Paris: Vuibert.

Relevant Websites

http://www.lipczuk.buncic.de – An excellent bibliography, continuously enlarged and updated by Ryszard Lipczuk and Daniel Buncic.


Field Work Methods in Semantics
B Hellwig, School of Oriental and African Studies, London, UK
© 2006 Elsevier Ltd. All rights reserved.

The linguistic literature contains a wealth of information about semantics, on the one hand, and fieldwork methodology, on the other, yet it devotes comparatively little attention to the combination of semantics and fieldwork. Although the semantics literature acknowledges potential difficulties when researchers collect and semantically analyze data in less-described languages, its focus is clearly on the well-studied European languages (and often on English alone). Conversely, although the fieldwork literature considers many different aspects of data collection and language description, the investigation of semantic phenomena is rarely discussed (with the notable exception of specific semantic fields, e.g., kinship terminology, as found in the literature on linguistic anthropology). As a result, many fieldwork-based studies contain little semantic information, and our knowledge of non-European semantics remains limited. This lack of convergence is not accidental: Semantic analysis relies to a great extent on native-speaker intuitions about the acceptability and equivalence of expressions, but field linguists are generally not native speakers of the languages they analyze. That is, they cannot resort to their own intuitions in order to determine the meaning of an expression, to distinguish between ambiguity and vagueness, to establish sense relations, or to set apart semantic entailments from pragmatic implicatures. Instead, they have to infer the meaning of an expression from observing its use in context or from eliciting grammaticality and acceptability judgments. In the first case, they need access to a text corpus, which has to be as large and diversified as possible (because many expressions occur only infrequently or are restricted to specific genres). In the second case, they need access to a thorough grammatical description (to be able to construct appropriate contexts and test sentences). However, neither corpora nor grammatical descriptions are usually available for the less-described languages. In most cases, such materials have to be created in a painstaking and time-consuming way by the field linguists themselves before they can embark on a semantic investigation. Field linguists thus face two major challenges: the lack of suitable data and the problems of conducting an intensional semantic analysis on the basis of

observed and elicited extensional data. In recent years, a number of studies have appeared that (explicitly or implicitly) address these methodological issues. These impulses come especially from the fields of semantic typology (aiming at a semantic comparison of languages and, therefore, needing comparable semantic data), documentary linguistics and linguistic anthropology (aiming at an adequate description and documentation of all areas of language, including semantics), cognitive linguistics (aiming at investigating possible cognitive correlates of linguistic phenomena), and language acquisition (aiming at understanding the learner's developing knowledge of the semantic structure of the target language). Depending on the research question and the requirements of a particular field site, these studies exemplify quite a variety of fieldwork techniques: They ask speakers to describe objects, pictures, or video clips; to retell picture book or video stories; to play games; to respond to reenacted scenes from questionnaires or to (nonlinguistic) triads and categorization tasks; or – as in the probably best-known studies – they use color charts to discuss color terminology. Despite this attested variety, all studies propose to address the methodological issues in similar ways: They create controlled contexts that can generate a large number of desired target expressions, and, in particular, they use (visual) stimuli to create these contexts. By way of an example, two such stimuli are illustrated here: picture elicitation and matching games (see the bibliography at the end of this article for other examples). In picture elicitation, speakers are presented with pictures and are then asked a question that would conceivably generate the target expression. Figure 1 is an example of such a stimuli picture (designed to elicit locative expressions) and example (1) illustrates the prompt by the field linguist and the subsequent discussion between a linguist and speaker (conducted in the West Chadic language Goemai).

(1) Linguist: Kwalba hok d'e nnang? 'Where is the bottle?'
    Speaker: T'ong k'a tebul. 'It sits on the table.'
    Linguist: Ko d'yem k'a tebul a? 'Or does it stand on the table?'
    Speaker: A'a, d'yem ba, goe sh'e ba. 'No, it doesn't stand, (because) it doesn't have a leg.'

In matching games, two speakers interact with one another, one speaker assuming the role of 'director' and the other the role of 'matcher.' In many studies,


Figure 1 Picture elicitation: Stimuli picture discussed in example (1).

the two players are screened off from one another and are given a set of identical pictures. The director is asked to pick one of the pictures and to describe it to the matcher, who then picks the corresponding picture from his or her set. Because the two players cannot see one another, they have to rely on verbal descriptions for this purpose. After having identified a picture, both players put their chosen picture aside on a pile, and the director picks another one from the set. In the end, the screen is removed, and the players compare their piles to check if they match. Figure 2A and B present two of the stimuli pictures (out of a set of 16 pictures), and example (2) illustrates such a matching game and the resulting discussion between the two players (who, again, speak the West Chadic language Goemai).

(2) Director: Ndoe kwalba na nd'e. D'yem k'a tebul. 'See a bottle here. It stands on the table.'
    [The director describes the upright bottle (Figure 2A), but the matcher picks the upside-down one (Figure 2B). At the end of the game, they compare their piles (pointing at the different pictures).]
    Director: A'a, d'yem k'a tebul muk mu? 'No (= not Figure 2B), it (= Figure 2A) stands on its table, right?'
    Matcher: Ai, t'ong, moe yi: t'ong k'a tebul, ai. Nd'yem ba ai. 'Hey, it (= Figure 2A) sits, we say: it sits on the table, hey. Not standing, hey.'
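Viewed abstractly, the director–matcher setup is a simple data-collection protocol: each trial pairs a description with a choice, and mismatches such as the one in (2) are recorded for later discussion. The sketch below is illustrative Python only; the function names, data structure, and logging choices are assumptions for exposition, not part of any original study design.

import random

def run_matching_game(pictures, director_describe, matcher_choose):
    """Simulate one director-matcher session over a set of stimulus pictures.

    pictures: list of picture identifiers (e.g., 16 locative scenes)
    director_describe: maps a target picture to a verbal description
    matcher_choose: maps (description, pictures) to the picture picked
    """
    trials = []
    targets = list(pictures)
    random.shuffle(targets)  # the director works through the set in random order
    for target in targets:
        description = director_describe(target)         # director's turn
        choice = matcher_choose(description, pictures)  # matcher's turn
        trials.append({
            "target": target,
            "description": description,
            "choice": choice,
            "match": choice == target,  # mismatches yield negotiation data
        })
    return trials

Trials where match is False correspond to the negotiation phase at the end of the game, in which players explicitly judge one another's expressions; this is precisely the negative evidence discussed below.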

In this way, stimuli-based setups generate a large amount of data about the use of target expressions in reference to real-world contexts; that is, they generate extensional data. At the same time, they also generate relevant information that makes the collected data accessible to an intensional analysis. Such an analysis is facilitated by the following characteristics of stimuli-based methods:

Figure 2 Matching games: Stimuli pictures discussed in example (2).

• The researcher exercises some control over the setup, allowing him or her either to probe for parameters that potentially determine the use of an expression or to systematically test hypotheses.
• The context and tested parameters are held constant; that is, the same task can be run with different speakers for purposes of comparison. Such data lend themselves to a quantification of preferred expressions or of dialectal and crosslinguistic variation.
• The setups provide speakers with a clear reference context for their answers, thereby reducing the risks of generating translation equivalents and misunderstandings. It is known that speakers tend to judge the acceptability of an expression against a specific context. If the researcher can provide such a context, the speakers can use it – otherwise, they have to create it themselves outside the control of the researcher, introducing potential misunderstandings. Furthermore, knowledge about the reference context makes it possible for the researcher to take it into account in the process of analysis, that is, to relate expressions to real-world contexts,


to recognize contextual factors that trigger particular usage patterns, or to discover stimuli effects.
• Many setups generate negative evidence (e.g., speakers and players explicitly judge certain expressions as inappropriate) as well as information about the equivalence of expressions (e.g., expressions by fellow players cause misunderstandings, and players negotiate and discuss alternative possibilities and contexts). And because many setups are not overtly concerned with the speech of the speakers (but, e.g., with finding the matching picture), they reduce the risk of prescriptive language use.

Such information is needed for any kind of semantic analysis, independent of the theoretical framework adopted and independent of whether the analyst is a native speaker of the language analyzed. In fact, there are more and more studies in which native-speaker linguists do not rely on their intuitions alone but exploit the advantages offered by employing stimuli-based methods. However, it is also acknowledged that such methods have potential drawbacks: a cultural bias (because such setups are not always appropriate in all cultures and all circumstances) and a creation of stimuli effects and 'nonnaturalistic' language use (because speakers are asked to make fine-grained distinctions that they would not necessarily make in everyday life). Although such problems cannot be avoided, the literature agrees that they can be counterbalanced by diversifying the database. That is, two opposing criteria need to be balanced: the amount of control researchers can exercise over the context and the degree to which speakers can talk naturally. Because no set of data can satisfy both criteria equally well, it is advocated that different stimuli-based techniques be combined with one another and augmented with data from natural settings, from staged communicative events (i.e., linguistic events that are prompted by the researchers but in which speakers structure their texts freely; e.g., a speaker is asked to talk about the manufacture of baskets), and from elicitation sessions in which researchers elicit expressions, grammatical information, and so on. Taken together, these methods generate a broad spectrum of relevant data that can serve as the basis for a detailed semantic analysis.

See also: Categorizing Percepts: Vantage Theory; Cognitive Semantics; Coherence: Psycholinguistic Approach; Color Terms; Connotation; Context and Common Ground; Extensionality and Intensionality; Human Reasoning and Language Interpretation; Inference: Abduction, Induction, Deduction; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Semantic Value; Spatial Expressions; Specificity.

Bibliography

Berlin B (1968). Tzeltal numeral classifiers: a study in ethnographic semantics. The Hague and Paris: Mouton.
Berlin B & Kay P (1969). Basic color terms: their universality and evolution. Berkeley, CA: University of California Press.
Berman R A & Slobin D I (eds.) (1994). Relating events in narrative: a crosslinguistic developmental study. Hillsdale, NJ: Lawrence Erlbaum.
Bowerman M & Levinson S C (eds.) (2001). Language acquisition and conceptual development. Cambridge, UK: Cambridge University Press.
Chafe W L (ed.) (1980). The pear stories: cognitive, cultural, and linguistic aspects of narrative production. Norwood, NJ: ABLEX.
Eisenbeiss S, Bartke S, Weyerts H et al. (1994). Elizitationsverfahren in der Spracherwerbsforschung: Nominalphrasen, Kasus, Plural, Partizipien. Düsseldorf: Seminar für Allgemeine Sprachwissenschaft.
Frawley W (1992). Linguistic semantics. Hillsdale, NJ: Lawrence Erlbaum.
Givón T (1991). 'Serial verbs and the mental reality of "event": grammatical vs. cognitive packaging.' In Traugott E C & Heine B (eds.) Approaches to grammaticalization 1: focus on theoretical and methodological issues. Amsterdam and Philadelphia: John Benjamins. 81–127.
Goddard C (1998). Semantic analysis: a practical introduction. Oxford: Oxford University Press.
Hardin C L & Maffi L (eds.) (1997). Color categories in thought and language. Cambridge, UK: Cambridge University Press.
Hellwig B (2006). 'Field semantics and grammar-writing: stimuli-based techniques and the study of locative verbs.' In Ameka F, Dench A & Evans N (eds.) Catching language: issues in grammar-writing. Berlin and New York: Mouton de Gruyter. 321–358.
Himmelmann N P (1998). 'Documentary and descriptive linguistics.' Linguistics 36, 161–195.
Kay P & McDaniel C K (1978). 'The linguistic significance of the meaning of basic color terms.' Language 54(3), 610–646.
Language and Cognition Group (1993–2003). Field manuals. Nijmegen: Max Planck Institute for Psycholinguistics.
Lehrer A (1983). Wine and conversation. Bloomington, IN: Indiana University Press.
Levinson S C (1992). 'Primer for the field investigation of spatial description and conception.' Pragmatics 2(1), 5–47.
Levinson S C (2000a). 'H. P. Grice on location on Rossel Island.' Berkeley Linguistics Society 25, 210–224.
Levinson S C (2000b). 'Yélî Dnye and the theory of basic color terms.' Journal of Linguistic Anthropology 10(1), 3–55.
Levinson S C & Meira S (2003). '"Natural concepts" in the spatial topological domain – adpositional meanings in crosslinguistic perspective: an exercise in semantic typology.' Language 79(3), 485–516.
Lucy J A (1992). Grammatical categories and cognition: a case study of the linguistic relativity hypothesis. Cambridge, UK: Cambridge University Press.

MacLaury R E (1997). Color and cognition in Mesoamerica: constructing categories as vantages. Austin, TX: University of Texas Press.
Pederson E, Danziger E, Wilkins D P et al. (1998). 'Semantic typology and spatial conceptualization.' Language 74(3), 557–589.

Strömquist S & Verhoeven L (eds.) (2004). Relating events in narrative: typological and contextual perspectives. Mahwah, NJ: Lawrence Erlbaum.
Turnbull W (2001). 'An appraisal of pragmatic elicitation techniques for the social psychological study of talk: the case of request refusals.' Pragmatics 11(1), 31–61.

Folk Etymology
L Bauer, Victoria University of Wellington, Wellington, New Zealand
© 2006 Elsevier Ltd. All rights reserved.

‘Folk etymology’ or ‘popular etymology’ is the name given to a process of reanalysis. Speakers of a language, expecting their words to be partly motivated, find in them elements which they perceive as motivating the word, even where these elements have no historical presence. It is called ‘folk’ or ‘popular’ because it pays no attention to the knowledge of the erudite. It is called ‘etymology’ because it appears to invent a new origin for a word, an origin which is contrary to fact. A better label would be ‘morphological reanalysis.’ A set of words particularly susceptible to folk etymology is loan words. Very often these obtain forms in the borrowing language which show something of the way they have been perceived, but owe little to the structure of the language from which the word is borrowed, except some superficial phonetic resemblance. Examples include English woodchuck from Ojibwa otchig, with nothing to do with either wood or throwing; Danish dialect kamelte (literally ‘camel tea’) from kamillete (‘chamomile tea’); German (Standard German) Ha¨ngematte (literally ‘hanging mat’) originally from Taino hamaca ‘hammock.’ The source word need not, however, be a foreign word; it can be native. Consider English titmouse, a kind of bird, from an earlier titma¯se, with no link to mice, or German (Standard German) Maulwurf ‘mole’

(literally ‘mouth throw’) from an earlier moltwerf ‘earth thrower’ (compare English dialect mouldywarp). The same word turns up in one attested (but not established) instance in Danish as muldhvalp (literally ‘earth puppy’). Note also that folk etymology frequently leaves part of the original word unanalyzed: belfry, from an earlier berfry, analyzes the first element as related to bell, but leaves the -fry element meaningless. The line between an instance of folk etymology and a malapropism is sometimes a thin one, but in principle a malapropism confuses two similar-sounding words (e.g., in an example from Sheridan’s character Mrs. Malaprop, contiguous and contagious) while folk etymology is an error based on the supposed meaning of elements within the word (even if those elements are clearly absurd, like the mouse in titmouse) which spreads beyond the individual to a whole community. Both are often occasioned by the attempt to use exotic or difficult vocabulary whose form is not entirely accurately perceived. See also: Category-Specific Knowledge; False Friends; Human Reasoning and Language Interpretation.

Bibliography

Coates R (1987). 'Pragmatic sources of analogical reformation.' Journal of Linguistics 23, 319–340.
Gundersen H (1995). Linjedansere og Pantomine på Sirkhus: folkeetymologi som morfologisk omtolkning. Oslo: Novus.
Palmer A S (1883). Folk-etymology. New York: Holt.

Formal Semantics
G Chierchia, Università degli Studi di Milano-Bicocca, Milan, Italy
© 2006 Elsevier Ltd. All rights reserved.

Introduction

Semantics, in its most general form, is the study of how a system of signs or symbols (i.e., a language of

some sort) carries information about the world. One can think of a language as constituted by a lexicon (an inventory of morphemes or words) and a combinatorial apparatus according to which complex expressions, including, in particular, sentences, can be built up. Semantics deals with the procedures that enable users of a language to attach an interpretation to its arrays of symbols. Formal semantics studies such


procedures through formally explicit mathematical means. The history of semantics is nearly as long and complex as the history of human thought; witness, e.g., the early debates on the natural vs. conventional character of language among the pre-Socratic philosophers. The history of formal semantics is nearly as daunting as it is intertwined with the development of logic. In its modern incarnation, it is customary to locate its inception in the work of logicians such as Frege, Russell, and Tarski. A particularly important and relatively recent turning point is constituted by the encounter of this logico-philosophical tradition with structural and generative approaches to the study of human languages, especially (though by no means exclusively) those influenced by N. Chomsky. The merger of these two lines of research (one brewing within logic, the other within linguistics) has led formal semantics to become a central protagonist in the empirical study of natural language. The research paradigm that has emerged has proven to be quite fruitful, both in terms of breadth and depth of results and in terms of the role it is playing in the investigation of human cognition. The present work reviews some of the basic assumptions of modern formal semantics of natural language and illustrates its workings through a couple of examples, with no pretence of completeness.

Semantics vs. Lexicography

One of the traditional ideas about semantics is that it deals with the meaning of words. The main task of semantics is perceived as the compilation of dictionaries (semantics as lexicography). To this, people often add the task of investigating the history of words. Such a history can teach us about cultural development. One might even hope to arrive at the true meaning of a word through its history. Compiling dictionaries or reconstructing how particular words have changed over time are worthy tasks; but they are not what formal semantics is about. Lexicography, philology, and related disciplines, on the one hand, and semantics as conceived here, on the other, constitute complementary enterprises. They all, of course, deal with language. But the main goal of semantics is to investigate how we can effortlessly understand a potential infinity of expressions (words, phrases, sentences). To do that, we have to go beyond the level of single words. It may be of use to point to the kind of considerations that have led semantics to move the main focus of investigation away from single word meanings and their development. For one thing, it can be doubted that word histories shed light on how words are synchronically (i.e., at a given point in time) understood and used. People use words effectively in total

ignorance of their history (a point forcefully made by one of the founding fathers of modern linguistics, namely F. de Saussure). To make this point more vividly, take the word money. An important word indeed; where does it come from? What does its history reveal about the true meaning of money? It comes from Latin moneta, the feminine past participle of the verb moneo 'to warn/to advise.' Moneta was one of the canonical attributes of the Roman goddess Juno; Juno moneta is 'the one who advises.' What has Juno to do with money? Is it perhaps that her capacity to advise extends to finances? No. It so happens that in ancient Rome, the mint was right next to the temple of Juno. So people metonymically transferred Juno's attribute to what was coming out of the mint. A fascinating historical fact that tells us something as to how word meanings may evolve; but it reveals no deep link between money and the capacity to advise. This example is not meant to downplay the interest of historical investigations on word meanings; it is just an illustration of how linguistic history affects only marginally the way in which a community actually understands its lexicon. There is a second kind of consideration suggesting that the scope of semantics cannot be confined to the study of word meanings. Do words in isolation have clearly identifiable meanings? Take any simple word, say the concrete, singular, common noun dog. What does it mean? Some possible candidates are: the dog-kind, the concept of dog, the class of individual dogs. . . . And the list can go on. How do we choose among these possibilities? Note, moreover, that all these hypotheses attempt to analyze the meaning of the word dog by tacking onto it notions (kind, concept, class . . .) that are in and of themselves in need of explication. If we left it at that, we wouldn't go far. Looking at dictionary definitions is no big help either. If we look up the entry for dog, typically we will find something like:

(1) A highly variable carnivorous domesticated mammal (Canis familiaris) prob. descended from the common wolf.

Indeed, if someone doesn’t know the meaning of the word dog and knows what carnivorous and mammal mean, then (1) may be of some practical help. But clearly to understand (1), we must rely on our understanding of whole phrases and the words occurring in them. Words which, in turn, need a definition to be understood. And so on, in a loop. This problem is sometimes called the problem of the circularity of the lexicon. To put it differently, (1) is of help only if the capacity to use and interpret language is already taken for granted. But it is precisely such capacity that we want to study.


The limitation of a purely word-based perspective on the investigation of meaning is now widely recognized. Frege summarized it in a nice motto: "only in the context of a sentence do words have meaning." His insight is that complete sentences are linguistic units that can sort of stand on their own (more so than any other linguistic units). They can, as it were, express self-contained thoughts. We are more likely, therefore, to arrive at the meaning of single words (and of phrases in between words and complete sentences) via a process of abstraction from the contribution that words make to sentence meaning, rather than the other way around. This is so because sentence meaning is somehow more readily accessible (being, as it were, more complete) than the meaning of words in isolation. These are some reasons, then, why the perspective of modern semantics is so different from and complementary to lexicography and philology; such a perspective is much more directly tied to the investigation of the universal laws of language (language universals) and of the psychological mechanisms underlying such laws. Understanding the function, use, etc., of a single word presupposes a whole, complex cognitive apparatus. It is, therefore, an arrival point more than a starting point. It seems thus reasonable to start by asking what it is to understand a sentence. The main thesis we wish to put forth is that to understand a sentence involves understanding its relations to the other sentences of the language. Each sentence carries information. Such information will be related to that of other sentences while being unrelated to that of yet others. In communicating, we rely on our spontaneous (and unconscious) knowledge of these relations.

The Notion of Synonymy and Its Problems

Imagine watching a Batman movie in which the caped hero fights the Riddler, one of his eternal foes. The Riddler has scattered around five riddles with clues to his evil plans. Batman has managed to find and solve four of them. We could report this situation in any of the following ways:

(2a) Batman has found all of the five clues but one.
(2b) Batman has found four out of the five clues.
(2c) Four of the five clues have been found by Batman.

These sentences are good paraphrases of each other. One might say that they have roughly the same information content; or that they describe the same state of affairs; or that they are (nearly) synonymous. (I will be using these modes of speaking interchangeably.) To put it differently, English speakers know that there is a tight connection between what the sentences in (2a), (2b), and (2c) mean. This is a kind of knowledge

they have a priori, i.e., regardless of what actually goes on. Just by looking at (2a) vs., say, (2b) and grasping what they convey, we immediately see that they have roughly the same informational content. This is what we mean when we say that understanding a sentence involves understanding which other sentences count as good paraphrases and which don't. Thus, knowing a language is to know which sentences in that language count as synonymous. Semantics is (among other things) the study of synonymy. Two synonymous sentences (and, more generally, two synonymous expressions) can always be used interchangeably. This last informal characterization can be turned into a precise definition along the following lines.

(3a) Suppose one utters any complex expression A containing a subexpression α. If one can replace α in A with a different expression β, without changing the overall communicative import of A, then α and β are synonymous.
(3b) α is synonymous with β = in the utterance of any expression A containing α, α can be replaced with β without changing the communicative import of the utterance (salva significatione).
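Stated schematically (a compact rendering added here, not part of the original formulation; ⟦·⟧ stands for whatever notion of communicative import is at issue, and A[β/α] for the result of replacing α by β in A):

\[
\alpha \text{ is synonymous with } \beta \iff \text{for every expression } A \text{ containing } \alpha : \ [\![A]\!] = [\![A[\beta/\alpha]]\!]
\]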

For example, in uttering (2a) (our A), we can replace the subcomponent that comes after Batman has found, namely all of the five clues but one (our α), with four out of the five clues (our β) and convey exactly the same information. Hence, these two expressions must be synonymous (and, in fact, so are the whole sentences). This looks promising. It paves the way for the following setup for semantics. Speakers have intuitions of whether two expressions can be replaced with each other while keeping information content unchanged. For any two sentences α and β, they spontaneously know whether they can be substituted for each other (i.e., whether β can be used to paraphrase α). Because the sentences of a language are potentially infinite, it is impossible for speakers to memorize synonymous sentences one by one (for that clearly exceeds what our memory can do). Hence, they must recognize synonymy by rule, by following an algorithm of some sort. The task of semantics, then, becomes characterizing such an algorithm. There is a problem, however. Sameness of communicative import is a more or less thing, much like translation. In many contexts, even sentences as close as those in (2a) and (2b) could not be replaced felicitously with each other. Here is a simple example. The discourse in (4a) is natural and coherent. The one in (4b) much less so:

(4a) Batman has found all of the five clues but one, which is pinned on his back.


Clearly in (4a) we cannot replace Batman has found all of the five clues but one with Batman has found four out of the five clues while keeping unaltered the overall communicative effect. This means that if we define synonymy as in (3a) and (3b), then (2a) and (2b) cannot be regarded as synonymous after all. Yet they clearly share a significant part of their informational content. What is it that they share? In fact, it has been argued that if (3a) and (3b) are how we define synonymy, then there simply are no two sentences that qualify as such. Here is a classical argument that purports to show this (based on Mates (1950)). Take the following two sentences: (5a) Billy has a dog. (5b) Billy has a highly variable carnivorous domesticated mammal prob. descended from the common wolf.

Are these two sentences synonymous? Hardly. They are clearly semantically related. But they surely do not have the same communicative import. Nor can one replace the other in every context. For example, when embedded under Molly believes that, the first could yield a true sentence while the second does not:

(6a) Molly believes that Billy has a dog.
(6b) Molly believes that Billy has a highly variable carnivorous domesticated mammal prob. descended from the common wolf.

This shows that in contexts like Molly believes that __ we cannot simply replace a word with its dictionary definition. And if dictionary definitions don’t license synonymy, then what does? The problem can be couched in the following terms. Any normal speaker of English perceives a strong semantic connection among the sentences in (2a), (2b), and (2c), or (4a) and (4b). So strong that one might feel tempted to talk about synonymy. Yet when we try to make the notion of synonymy precise, we run into serious problems. Such a notion appears to be elusive and graded (a more or less thing); so much so that people have been skeptical about the possibility of investigating synonymy through precise, formal means. A fundamental breakthrough has been identifying relatively precise criteria for assessing semantic relations. The point is that perfect synonymy simply does not exist. No two sentences can be always replaced with each other. The notion of synonymy has to be deconstructed into a series of more basic semantic relations. We need to find a reliable source for classifying such relations, and, we will argue, such a source

lies in the notions of truth and reference. Consider the sentences in (2a), (2b), and (2c) again. Assume that the noun phrase the five clues in (2a) and (2b) refers to the same clues (i.e., we are talking about a particular episode in a particular story). Then, could it possibly happen that, say, (2a) is true and (2b) false? Evidently not: no one in his right mind could assert (2a) while simultaneously contending that (2b) is false. If (2a) is true, (2b) also must be true. And, in fact, vice versa: if (2b) is true, then (2a) also must be. When this happens, i.e., when two sentences are true in the same set of circumstances, we say that they have the same truth conditions. Notice that sameness of truth conditions does not coincide with or somehow require sameness of communicative import (too elusive a notion), nor substitutivity in any context whatsoever (a condition too difficult to attain). Our proposal is to replace such exceedingly demanding notions with a series of truth-based notions, while keeping the same general setup we sketched in connection with synonymy: for any pair of sentences, speakers have intuitions about whether they are true under the same conditions or not. They can judge whether they are true in the same (real or hypothetical) circumstances or not. Because the sentences of our language are infinite, this capacity must be somehow based on a computational resource. Speakers must be able to compare the truth conditions associated with sentences via an algorithm of some sort. The task of semantics is to characterize such an algorithm. The basic notion changes (synonymy is replaced with sameness of truth conditions), but the setup of the problem stays the same.

Truth and Semantic Competence

Let us elaborate on the proposal sketched at the end of the previous section. Information is transmitted from one agent to another (the 'illocutionary agents') in concrete communicative situations ('speech acts'). No two such situations are alike. And consequently, no two pieces of information that are transmitted through them are alike. In Groundhog Day, a movie with the actor Bill Murray, the protagonist gets trapped into going through the same day over and over. He wakes up and his day starts out in the same way (with the alarm clock ringing at 7 a.m. on Groundhog Day); as he walks outside, he meets the same waitress, who greets him in the same way ("weather so-so, today"). Yet this sentence, though uttered day after day in circumstances as identical as they can conceivably be, clearly conveys a different sense or information unit on each occasion of its use (the hearer going from


noticing that something is fishy about this verbatim repetition, to the painful discovery of the condemnation to live through Groundhog Day for eternity). Ultimately, we want to understand how communication takes place. But we cannot nail down every aspect of a speech act, just as we cannot know (not even in principle, I believe) every aspect of the physical or mental life of a particular human being. At the same time, while speech acts are unique events, there is much that is regular and invariant about them; that is what can be fruitfully investigated. One family of such invariants concerns form: similar sound patterns may be used in different speech acts. Another family of invariants concerns content: similar states of affairs may be described through a variety of expressions. The notion of truth is useful in describing the latter phenomenon. A pair of sentences may be judged as being necessarily true in the same circumstances. This is so, for example, for (5a) vs. (5b). Yet, such sentences clearly differ in many other respects. One is much more long-winded than the other; it uses rarer words, which are typical of high, formal registers. So in spite of having the same truth conditions, such sentences may well be used in different ways. Having the same truth conditions is generally regarded as a semantic fact; being able to be used in different ways is often regarded as a pragmatic fact. While this gives us a clue as to the role of these two disciplines (both of which deal with meaning broadly construed), the exact division of labor between semantics and pragmatics remains the object of controversy. Truth conditions are a tool for describing semantic invariants, structural regularities across communicative situations. Whenever I utter a declarative sentence, I typically do so with the intention to communicate that its truth conditions are satisfied (which of course raises the question of nondeclaratives, emotive expressions, and the like; see for example textbooks such as Chierchia and McConnell-Ginet (2000) or Heim and Kratzer (1998); cf. also Kratzer (1999) for a discussion of relevant issues). Truth conditions depend on the reference (or denotation) of words and the way they are put together, i.e., they are compositionally projected via the reference of words (or morphemes). If I say to you, as we are watching a movie, "Batman has found all of the five clues but one," you understand me because you sort of know (or guess) who Batman is, what sort of things clues are, what finding something is, what number the word five refers to; you also understand the "all . . . but . . ." construction. The reference/denotation of words is set (and is modified, as words may change their denotation in time) through use, in complex ways we cannot get into within the limits of the present work. The denotation of complex expressions

(e.g., of a verb phrase such as [VP found five clues] and truth conditions of sentences) are set by rule (the semantic component of grammar). Semantic rules presumably work like syntactic rules: they display variation as well as a common core, constitutive of universal grammar. Insofar as semantics is concerned, what is important for our purposes is that truth conditions can be compositionally specified. This paves the way for an algorithmic approach to meaning. We already remarked that sentences are formed by composing morphemes together via a limited number of syntactic operations. So to arrive at the truth condition of an arbitrary sentence, we can start from the contribution of the words (their reference). Then, for each way of putting words together, there will be a way of forming the reference of complex expressions, and so on until we arrive at the truth condition of the target sentence. So far, we have discussed sentences that have the same truth conditions (such as those in (2a), (2b), and (2c)); but this is not the only semantic relation that can be characterized in terms of the notion of truth. Consider the following examples.

(7a) Every Italian voted for B.
(7b) Leo voted for B.

(7a') Most Italians voted for B.
(7b') Leo voted for B.

Sentence (7a) is related to (7b) in a way that differs from the relation between (7a') vs. (7b'). Here is the difference. If (7a) is true, and Leo is Italian, then (7b) has to be true, too; this is clearly not so for (7a') vs. (7b'): (7a') may be true without (7b') being true. If whenever A is true, B also must be, we say that A entails B (B's meaning is part of A's meaning). Two sentences with the same truth conditions entail each other (i.e., they hold a symmetric relation); when entailment goes only one way (as from (7a) to (7b)), we have an asymmetric relation. Entailment is pervasive. Virtually all semantic intuitions are related to it. As an illustration, consider the pair of sentences in (8a) and (8b).

(8a) John promised Bill to take him to the station.
(8b) John ordered Bill to take him to the station.

Pronouns, like him in (8a) and (8b), take their denotation from the context; they can take it from the extralinguistic context (a person salient in the visual environment, a person the speaker points at, etc.) or from the linguistic context (e.g., from NPs that occur in the same discourse; John or Bill in (8a) and (8b)); one widespread terminology is to speak of indexical uses in the first case and of anaphoric uses in the second. We can conceptualize this state of affairs by viewing pronouns as context-dependent items, incomplete without pointers of some sort. Now we


shall focus on the anaphoric interpretation of (8a) vs. (8b), paraphrased in (9a) and (9b), respectively:

(9a) John promised Bill that John would take Bill to the station.
(9b) John ordered Bill that Bill should take John to the station.

These appear to be the only options. That is to say, sentence (8a) cannot convey something like 'John promised Bill that Bill should take John to the station.' The point of this example is that we have intuitions that govern how the denotation of a pronoun is to be reconstructed out of contextual clues; such intuitions tell us that (8a) and (8b), though structurally so similar, allow for a distinct range of interpretive options. At the basis of intuitions of this sort, we again see entailment at work: on its anaphoric construal, (8a) entails (9a). Another important set of truth-based semantic relations is presuppositions. Consider the contrast between the sentences in (10a) and (10b).

(10a) Fred stole the cookies.
(10b) It was Fred who stole the cookies.

There is a noticeable semantic contrast between (10a) and (10b). How can we characterize it? Clearly the two sentences are true in the same circumstances (they entail each other). Yet they differ semantically. Such a difference can perhaps be captured by looking at what happens when we embed (10a) and (10b) in a negative context.

(11) So, what happened this morning?
(11a) Everything went well. Fred didn't steal the cookies; he played with his toys.
(11b) ?? Everything went well. It wasn't Fred who stole the cookies.

The answer in (11a) is natural. The one in (11b) would sound more natural as an answer to (12a):

(12a) Who stole the cookies?
(12b) It wasn't Fred.

The difference between the question in (11) and the one in (12a) is that the latter (but not the former) tends to presuppose that cookies were stolen. In other terms, the situation seems to be the following. Both sentences in (10a) and (10b) entail:

(13) Someone stole the cookies.

If either (10a) or (10b) is true, then (13) must also be true. Furthermore, sentence (13) must be true for (10b) to be denied felicitously. The illocutionary agents must take for granted the truth of (13) to assert, deny, or otherwise use sentence (10b), as the naturalness of the following continuations for (13) illustrates:

(14) Someone stole the cookies . . .
(14a) It was Fred.
(14b) It wasn't Fred.
(14c) Was it Fred?
(14d) If it was Fred, he is going to get it . . .

This brings us to the identification of presupposing as a distinctive semantic relation: a sentence A presupposes B if the truth of B must be taken for granted in order to felicitously assert, deny, etc., A. Presuppositions are quite important in language. So much so that there are distinctive syntactic constructions (such as the one in (10b), known as cleft sentences) specifically keyed to them. Let me illustrate the wealth of semantic relations and their systematic character by means of another example, which will bring us to the interface between semantics and pragmatics. Consider:

(15a) Who stole the cookies?
(15b) Fred looks mischievous.
(15c) Fred stole the cookies.

If to a question such as (15a) I reply with (15b), I do suggest/convey something like (15c). Sentence (15c) clearly is not part of the literal meaning of (15b) (however hard defining such a notion might be). Yet, in the context of the dialogue in (15a), (15b), and (15c), speakers will converge in seeing that (15c) is strongly suggested by (15b). Here, too, we have a systematic semantic intuition. The suggestion in (15c) can be retracted; that is, one can continue (15b) with '... but I know he didn't do it'. However, in the absence of such an explicit correction, illocutionary agents upon hearing (15b) will tend to infer (15c). This phenomenon has been studied by H. P. Grice (1989), who dubbed it implicature. His proposal is that it arises through the interaction of the core meaning assigned to sentences by rule with principles that govern conversational exchanges. The basic idea is that for conversational exchanges to be successful they have to be basically cooperative acts; cooperating means that one sticks to relevant topics, one only gives information believed to be truthful, one gives no more and no less than what is relevant, etc. Applying this to the case at hand, in a situation in which question (15a) is topical, answering with (15b) would seem to be blatantly irrelevant; the hearer, however, tends to interpret it as relevant and sets in motion an inferential process that tends to link it to some piece of information that does address the topical question; such a link is to be found with the help of the information available in the context to the illocutionary agents (e.g., in the common knowledge that if people commit mischief, such as stealing cookies, they may well look mischievous, etc.). Thus, this type of semantic judgment (the


implicature) appears to be best accounted for in terms of the interaction between grammar and general conditions on reasonable language use (which fall under the scope of pragmatics). Sometimes it is not immediately clear whether something is a matter of conventionalized meaning or of pragmatics. To illustrate, consider the oscillation in the meaning of a word like or, shown in the following examples. Consider first (16a):

(16a) If I got it right, either John or Mary will be hired.
(16b) If I got it right, either John or Mary but not both will be hired.

Normally, one tends to interpret (16a) as truth-conditionally equivalent to (16b); i.e., the disjunction in (16a) is interpreted exclusively (as incompatible with the simultaneous truth of both disjuncts). However, this is not always so. Contrast (16a) with (17a).

(17a) If either John or Mary are hired, we'll celebrate.
(17b) (?) If John or Mary (but not both) are hired, we'll celebrate.
(17c) If John or Mary or possibly both are hired, we'll celebrate.

The most natural interpretation of (17a) is not the exclusive one (namely (17b), which is somewhat odd pragmatically); rather, it is the inclusive one, made explicit in (17c). (Notice that the emphatic word either is present both in (16a) and (17a); in spite of this, the interpretation of or shifts.) We might see in these phenomena a lexical ambiguity of disjunction. Words expressing disjunction, we may feel inclined to conclude, have a varying interpretation, as happens with words such as bank or lap ('sit on my lap' vs. 'he swam three laps'). We may assume that such interpretations are always in principle available, but then we select the one most suitable to the context of the speech act. While this seems prima facie possible, there are reasons to doubt it. In particular, true lexical ambiguities are resolved across languages (in Italian, there are two different words for the two senses of lap). Ambiguities are never universal. The meaning shift of or, per contra, seems to be universal: in every language disjunction appears to have a similar oscillation in meaning. A convincing case for two lexically distinct disjunctions, one inclusive, the other exclusive, has not been made (sometimes it has been proposed that Latin vel vs. aut is just that; for arguments against this, cf., e.g., Jennings (1994)). Moreover, other areas of the lexicon have been found that display a similar behavior (e.g., the number words). This strongly suggests that a different explanation for such behavior should be found.

Grice himself has proposed that the phenomenon under discussion is to be accounted for in terms of the interaction between semantics and pragmatics. The idea is that the basic meaning of or is the inclusive one, as it is the most liberal interpretation; the exclusive construal arises as an implicature, i.e., a pragmatic enrichment, albeit a generalized one. The advantage of this move is that it would explain the oscillation in meaning of disjunction without positing a covert ambiguity. We will come back to how the generalized implicature associated with or might come about in the later section 'The Semantics/Pragmatics Interface.' Wrapping up, the picture that emerges is roughly the following. In using language, speakers display complex forms of spontaneous knowledge. They put together words in certain ways and not others. This is how knowledge of syntax manifests itself. They also accept certain paraphrases and not others, draw certain inferences and not others, etc. It turns out to be possible/useful to categorize the latter in three major families of semantic relations.

(18a) Entailment-based (entailment, mutual entailment, contradictoriness, analyticity, etc.)
(18b) Presupposition-based (presupposition, question/answer pairs, etc.)
(18c) Implicature-based (generalized implicature, particularized implicature, etc.)

All of them can be readily defined in terms of the notion of truth:

(19a) A entails B = for any conceivable situation s, if A is true in s, B is also true in s.
(19b) A presupposes B = to use A appropriately in a situation s, the truth of B must be taken for granted by the illocutionary agents in s.
(19c) A implicates B = use of A in a situation s suggests, everything else being equal, that B is true in s.

The definitions in (19a), (19b), and (19c) can be readily associated with 'operational' tests that enable speakers to assess whether a given relation obtains or not. For example, to check whether (20a) entails (20b), you might check whether you could sincerely assert (20a) while denying (20b), viz. whether you could sincerely and felicitously utter something like (20c):

(20a) It is indeed odd that Mary is home.
(20b) Mary is home.
(20c) It is indeed odd that Mary is home, even if she in fact isn't.

To the extent that you can’t really say something like (20c), you are entitled to conclude that (20a) entails


(20b). It is useful, in these cases, to use contrast sets such as (21a) and (21b).

(21a) It is indeed conceivable that Mary is at home.
(21b) It is indeed conceivable that Mary is home, even if she in fact isn't.

The semantic relations in (18a), (18b), and (18c) can be viewed as intuitions of semantic relatedness speakers have about sentences of their own language, as judgments that may be elicited, and the like. By analogy with well-formedness judgments, there are some cases in which things are not so clear and we may not be sure whether, say, a certain entailment holds or not. In such a case, more complex arguments, indirect evidence of various sorts, or psycholinguistic experimentation may be called for (see, e.g., Crain and Thornton (1998) on experimental methodologies for truth-based semantic judgments). But in indefinitely many cases, simple introspection yields relatively straightforward judgments. The capacity for making such judgments is constitutive of our semantic competence. Such a competence cannot be simply a thesaurus, a store of pairs of sentences with the relevant judgment tacked on, for the number of judgments speakers can make on the fly is potentially infinite. Semantic competence must be a computational device of some sort. Such a device, given an arbitrary pair of sentences A and B, must be able to determine in principle whether A entails B, presupposes it, etc. The task of semantics is to characterize the general architecture of such a computational device. While there are many foundational controversies that permeate the field, there is a broad convergence that this is roughly the form that the problem of meaning takes within modern formal semantics.

Semantic Modeling

In the present section I will sketch how a (necessarily much simplified) calculus of semantic relations may look. Suppose you have a lexicon of the following form:

(22a) N: John, Bill, dog, cat, table, ...
(22b) V: runs, smokes, drinks, ...
(22c) DET: the, a, some, every, no ...

Think of syntax as a device that combines lexical entries by merging them into complex phrases and assigning them a syntactic analysis that can be represented by tree diagrams or labeled bracketings of the following form:

(23a) [VP John smokes]
(23b) [DP every boy]
(23c) [VP [DP every boy] smokes]
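To fix ideas, labeled bracketings of this sort are easy to mirror in a programming language. The following minimal Python sketch is purely illustrative (the encoding and the names are mine, not part of the formalism): each phrase is a tuple whose first element is its category label.

    # A toy encoding (illustrative only) of the labeled bracketings in (23):
    # each phrase is a tuple whose first element is the category label.

    VP_23a = ("VP", "John", "smokes")    # (23a) [VP John smokes]
    DP_23b = ("DP", "every", "boy")      # (23b) [DP every boy]
    VP_23c = ("VP", DP_23b, "smokes")    # (23c) [VP [DP every boy] smokes]

    def label(phrase):
        """Return the syntactic category labeling a phrase."""
        return phrase[0]

    print(label(VP_23c))  # -> VP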

I assume, without being able to justify it, that lexical items have phrasal projections. In particular, VP is the phrasal projection of V and constitutes a clausal nucleus composed of the verb and its arguments linked in a predicative structure. Such a nucleus forms the innermost skeleton of the sentence (I will have to ignore matters pertaining to inflection, agreement, tense, and the like). The lexical features of verbs are crucial in determining the characteristics of clausal nuclei. DP is the phrasal projection of D, and it is constituted by a determiner and a (common) noun. Clausal nuclei can be formed by merging a verb with a (proper) name or a DP, as indicated. In the spirit of the discussion in the section on Truth and Semantic Competence, semantics assigns recursive truth conditions to sentences in terms of the reference assigned to lexical entries. There are several ways to do this. Ultimately, the choice one makes on the exact format of interpretive rules has far-reaching consequences for our understanding of grammar. However, our choices here are only in small part dictated by our current understanding of semantics in universal grammar; for the major part, they result from considerations such as ease of exposition, keeping prerequisites at a minimum, and the like. To get started, we should assign a reference (or denotation, terms we will use interchangeably) to lexical entries. To do so, we assume we have a certain domain Ds = {a, b, c, ...} at each given discourse situation s that constitutes our universe of discourse. A discourse situation can be thought of as the time at which the utterance takes place. A domain is just a set of individuals, pragmatically selected (e.g., those salient to the illocutionary agents). Interpretations are relative to an utterance situation s and the corresponding domain of discourse Ds. Reference of proper nouns, for example, is suitably chosen from the domain of discourse. Suppose, for example, that a and b are salient humans in our universe of discourse; then we might have:

(24) For any conceivably relevant utterance situation s, the name John denotes a in s; the name Bill denotes b in s . . .

It doesn't matter how a or b are characterized (via a description, an act of indication, etc.) to the extent that one succeeds in linking the name to its bearer. Also, it is useful to have a uniform category-neutral notation for semantic values; we will use for this the double bar notation || ||; accordingly, for any expression a, ||a||s will be the semantic value of a in situation s. Thus, (24) can be abbreviated as:

(25) ||John||s = a (where a ∈ Ds, the domain of discourse at s)


(Technically, || || can be viewed as a function from expressions and situations into denotations; so sometimes we will speak of the interpretation function.) The denotation of a simple (intransitive) verb such as those in (22b) can be thought of as a function that for each (appropriate) individual in the domain of discourse tells us whether that individual performs a certain action or not. Here is an example:

(26) smokes in a situation s denotes a function smokes that applies to animate individuals and returns truth values. If a is such an individual, then smokes(a) returns 'true' (which we represent as the number 1) if that individual performs the action of smoking in s (where smoking involves . . .); otherwise smokes(a) returns 0 (i.e., 'false').

If a is not animate (e.g., if a is a stone and s is a 'normal' situation), then smokes(a) is not defined (lacks a value). The final part of definition (26) reflects the fact that sentences like (27a) and (27b), out of the blue, are (equally) strange: smoking normally requires its subject argument to be animate.

(27a) That stone smokes.
(27b) That stone doesn't smoke.

The deviance of sentences like (27a) and (27b) has been variously characterized as a violation of selectional restrictions or as sortal deviance. Here we are couching the relevant phenomenon in presuppositional terms (to illustrate a further application of such a concept). The fact that sentences of this sort remain deviant across negation may be taken as evidence that the verb smoke imposes an animacy presupposition on its arguments (see, e.g., Chierchia and McConnell-Ginet (2000) for more discussion). A definition like (26) can be stated more compactly:

(28) ||smokes||s = smokes, where for each a in Ds, smokes(a) is defined iff a is animate in s; if defined, smokes(a) = 1 if a smokes in s (where smoking involves . . .); smokes(a) = 0, otherwise.

The definition of (or constraints on) smoking (i.e., the dots in (28)) can be elaborated further in several ways by refining our lexical analysis of the verb smoke. Although much progress has been made on this score, many important issues remain open (including, e.g., whether a presuppositional treatment of selectional restrictions is ultimately viable). What is important, from the point of view of compositional semantics, is the logical type or semantic category of the denotation of a verb like smoke. Such verbs are treated here as functions from individuals into truth

values. These are called characteristic functions; they divide the (relevant portion of the) domain of discourse of the utterance situation in two: the things that satisfy the verb and those that don't. Characteristic functions correspond to sets (which might be called the extension of the function), as the following example illustrates:

(29) Let the universe of discourse be {a, b, c, d}; let a, b, and c be people. Of these, let a and b smoke in s, while in a different situation s' a but not b smokes. We can represent all this as follows:

smokes:  a → 1, b → 1, c → 0    corresponding extension: {a, b}
smokes': a → 1, b → 0, c → 0    corresponding extension: {a}

As is evident from the example, sets and characteristic functions are structurally isomorphic (encode the same information). In what follows it will be useful on occasion to switch back and forth between these two concepts. Use of characteristic functions as a formal rendering of verb meanings is useful in giving truth conditions for simple subject-predicate sentences:

(30a) A sentence of the form [VP N V] is true in s iff ||V||s(||N||s) = 1

Example:

(30b) [VP Bill drinks] is true in s iff ||drinks||s(||Bill||s) = 1

The truth conditions of any sentence with the syntactic structure specified in (30a) boil down to applying a characteristic function to an individual (and thereby ascertaining whether that individual belongs to the set that constitutes the extension). To find out whether Bill in fact smokes in s, we need factual information about the situation obtaining in s. To understand the sentence, we don't. We merely need to know its truth conditions, which in the case of simple subject-predicate sentences are an instruction to check the value of a characteristic function for the argument specified by the subject. The rules in (30a) and (30b) can be reformulated more compactly as in (31):

(31) || [VP N V] ||s = ||V||s(||N||s)
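The rules in (24) through (31) are concrete enough to run. Here is a minimal Python sketch under invented assumptions (the particular domain, the facts of the situation s, and names such as ANIMATE_s, SMOKERS_s, and predicate are all mine): proper names denote individuals, smokes denotes a partial characteristic function enforcing the animacy presupposition of (26)/(28), and rule (31) is just function application.

    # A sketch of (24)-(31), under stipulated facts about a situation s.

    D_s = {"a", "b", "c", "stone1"}    # domain of discourse at s
    ANIMATE_s = {"a", "b", "c"}        # a, b, c are people; stone1 is not
    SMOKERS_s = {"a", "b"}             # the facts at s: a and b smoke

    john, bill = "a", "b"              # (25): names denote individuals

    def smokes(x):
        """(26)/(28): a partial characteristic function, defined only
        for animate individuals (the animacy presupposition)."""
        if x not in ANIMATE_s:
            return None                # presupposition failure: no value
        return 1 if x in SMOKERS_s else 0

    def predicate(N, V):
        """(31): apply the verb's denotation to the subject's denotation."""
        return V(N)

    print(predicate(john, smokes))     # [VP John smokes] -> 1
    print(predicate("stone1", smokes)) # (27a) That stone smokes -> None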

This can be viewed as the kernel of a predication rule (one that tells us how subjects and predicates combine semantically). Everything so far looks like a formally explicit (and perhaps somewhat pedantic) way of sketching a denotational, information-oriented semantics, and the reader may get the feeling of not yet finding striking insights into what meaning is. In order to grasp the potential of this method, one needs to look at a little


more of its computational apparatus. So let us turn now to DPs. Things here are definitely more challenging. DPs are constituents formed by a determiner plus a common noun. Common nouns can be given, at least to a first approximation, the same analysis as (intransitive) verbs; i.e., the meaning of, say, cat can be thought of as a characteristic function that selects those entities that are cats out of the universe of discourse (or, equivalently, we can say that cat identifies a class/set across situations). But what about things like no cat or every cat, which are the typical constituents one finds in, e.g., subject position and the like? What does no cat denote? And, even worse, what do no or every or some denote? Our program is to assign a denotation to lexical entries and then to define in terms of it truth conditions for sentences. So we must find suitable denotations for Ds and DPs. To address questions of this sort, we apply a heuristic that goes naturally with our general setup: whenever the denotation of an expression is not directly accessible to your intuition, look at what that expression contributes to the truth conditions of the sentences it occurs in (the epistemological primacy of sentences, again). So, consider for example:

(32) No boy smokes.

We know/assume/conjecture that boy and smoke denote characteristic functions and that sentences contribute truth values (i.e., they are true or false, as the case may be, in different situations). We may think of no as a function, too. As is evident from (32), such a function combines first with a characteristic function/set (corresponding to the noun); then the result combines with a second characteristic function (corresponding to the verb) to yield a truth value. Schematically, here is what we have:

(33) no(boys)(smokes) = 1 or 0

Now we can look at our intuitions. When is (32) true? The answer is pretty clear. When among the boys, nobody smokes. Or, equivalently, when the class of boys (i.e., the extension of boys) has no member in common with the smokers (i.e., the extension of smokes), (32) is true. In set talk, the intersection between the boys and the smokers must be empty:

(34) no(boys)(smokes) = 1 iff BOYs ∩ SMOKEs = ∅

(where BOYs and SMOKEs are the extensions corresponding to boys and smokes, respectively). This is perfectly general. Replace boy/smokes with any other noun/verb. The contribution of no stays constant: no(N)(V) is true just in case no member of the extension of N is in the extension of V. We thus discover that no has a perfectly sensible

(if abstract) denotation: a function that encodes a relation between sets. Our contention here is that speakers behave as if they had such a function in mind (or something similar to it) in using no. The next step is to see that all determiners express relations among sets (characteristic functions), just as no does. Here are a few examples, along with some comments.

(35a) Some
(35a.i) Example: some boy smokes
(35a.ii) Truth conditions: some(boys)(smokes) = 1 iff BOYs ∩ SMOKEs ≠ ∅
(35a.iii) Comment: some is the contradictory of no; some boy smokes is true just in case you can find someone among the boys who is also among the smokers; i.e., the intersection between the class of boys and the class of smokers must be nonempty. The indefinite article a can be analyzed along similar lines.

(35b) Every
(35b.i) Example: every boy smokes
(35b.ii) Truth conditions: every(boys)(smokes) = 1 iff BOYs ⊆ SMOKEs
(35b.iii) Comment: every expresses the subset relation: every boy smokes is true just in case all the members of the class of boys also belong to the class of smokers.

(35c) Most
(35c.i) Example: most boys smoke
(35c.ii) Truth conditions: most(boys)(smokes) = 1 iff the number of members of BOYs ∩ SMOKEs is bigger than half the number of members of BOYs.
(35c.iii) Comment: most involves actual counting. Most boys smoke is true just in case the number of boys who smoke (i.e., the intersection of the boys with the smokers) is greater than half the number of boys (i.e., more than half of the boys are smokers).

(35d) The
(35d.i) Example: the blond boy smokes
(35d.ii) Truth conditions: the(blond boys)(smokes) is defined only if there is exactly one blond boy in s. Whenever defined, the(blond boys)(smokes) = every(blond boys)(smokes).
(35d.iii) Comment: this reflects the fact that the blond boy smokes is only interpretable in situations in which the universe of discourse contains just one blond boy. If there is more than one blond boy or if there is no blond boy, we wouldn't really know what to make of the sentence. So the is a presuppositional determiner; it presupposes the existence and uniqueness of the common noun extension. (This analysis of the goes back to Frege.)
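The determiner meanings in (34) through (35d) can likewise be written down directly as relations between sets. The following Python sketch is again only illustrative (the sample extensions are invented); None models the presupposition failure of the.

    # A sketch of (34)-(35d): determiners as relations between a noun
    # extension N and a verb-phrase extension V.

    def no(N, V):
        return 1 if N & V == set() else 0           # (34): empty intersection

    def some(N, V):
        return 1 if N & V != set() else 0           # (35a): non-empty intersection

    def every(N, V):
        return 1 if N <= V else 0                   # (35b): subset relation

    def most(N, V):
        return 1 if len(N & V) > len(N) / 2 else 0  # (35c): counting

    def the(N, V):
        if len(N) != 1:                             # (35d): existence and uniqueness
            return None                             # presupposition failure
        return every(N, V)

    BOYS, SMOKERS = {"a", "b"}, {"b", "c"}          # invented extensions at s
    print(no(BOYS, SMOKERS))      # 'No boy smokes'    -> 0 (b is a smoking boy)
    print(some(BOYS, SMOKERS))    # 'Some boy smokes'  -> 1
    print(every(BOYS, SMOKERS))   # 'Every boy smokes' -> 0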


In spite of the sketchiness of these remarks (which neglect important details of particular determiners), it should be evident that the present line of analysis is potentially quite effective. A class of words and phrases that is important and largely stable across many languages falls into place: determiners ultimately express natural relations between sets (the set associated with the common noun and the set associated with the verb phrase). Our denotational perspective seems to meet rather well the challenge that seemingly denotationless items pose. It is useful to see what becomes of our rule of predication (viz. (31) above). Evidently, such a rule needs to be split into two (main) subcases, depending on whether the subject is a simple N (a proper name) or a complex DP. Here is an exemplification of the two cases:

(36a) Mary smokes.
(36b) No boy smokes.

In case (36a), we have semantically two pieces: an individual (whomever Mary denotes) and a characteristic function (smokes); so the latter applies to the former. In case (36b) the two pieces are: a complex function (namely no(boys)) that looks for a characteristic function to yield a truth value, and, as before, the characteristic function smokes; in this case the former applies to the latter. In either case, the end result is a truth value. So our predication rule becomes:

(37a) || [VP N V] ||s = ||V||s(||N||s)
(37b) || [VP DP V] ||s = ||DP||s(||V||s)

This suggests that the core rule of semantic composition is functional application. Consider for example an ungrammatical sentence of the form:

(38) * [VP boy smokes]

Such a sentence, as things stand, would be generated by our (rudimentary) syntax. However, when we try to interpret it, we find two characteristic functions of individuals, neither of which can apply to the other. Hence, the sentence is uninterpretable, which explains its ungrammaticality. There are languages, like, for example, Russian or Hindi, where singular common nouns without a determiner can occur in subject position:

(39a) Russian: mal'cik kurit
      boy smokes
      'the boy smokes'
(39b) Hindi: kamre meN cuuha ghuum rahaa hai (from Dayal 2004)
      room in mouse moving is
      'a mouse is moving in the room'

Notice that (39a) is the verbatim translation of (38) and is grammatical in Russian. The line we are taking

suggests that in such languages it must be possible to covertly turn bare common nouns into argumental DPs, i.e., things that can semantically combine with predicates; for example, it is conceivable that in a language without articles, like Russian, the semantic functions associated with the articles can be applied covertly (as part of the interpretive procedure), so as to rescue the semantic mismatch that would otherwise ensue. This may, in turn, involve the presence of a phonologically null determiner (for alternative developments of this line of analysis, as well as details concerning the available interpretations, see, e.g., Chierchia (1998), Longobardi (2001), and Dayal (2004)). The picture that emerges is the following. The basic mode of syntactic composition is merge, or some analogously simple operation that puts together two constituents (subject to parametrization pertaining to, e.g., word order, case, etc.). The basic mode of semantic composition is apply: constituents are compositionally analyzed as functions (of more or less complex semantic type) and arguments (individuals or other functions); so whenever we find a function and an argument of the appropriate sort, we simply apply the former to the latter. If things go wrong at any level, the derivation crashes and the result is ungrammatical. The semantic side of this process has come to be known as 'type-driven interpretation,' the main idea being that the semantic categories of functions and arguments drive the interpretation process. The present approach directly yields a computationally tractable theory of entailment and presupposition. We have defined entailment roughly as follows: a sentence S entails a sentence S' iff whenever S is true, S' is also true. The apparatus we have developed allows us to prove whether a certain entailment holds or not. Let me show, as an illustration, that (40a) entails (40b) but not vice versa.

(40a) Every scientist smokes.
(40b) Every mathematician smokes.

To show this we need to assume that if one is a mathematician, one is a scientist; i.e.,

(41) For every individual a,
(41a) if mathematicians(a) = 1, then scientists(a) = 1
or, equivalently:
(41b) MATHEMATICIANs ⊆ SCIENTISTs

Consider now the semantics of (40a), according to our analysis. It is the following:

(42) every(scientists)(smokes)

In virtue of (35b), this is tantamount to

(43) SCIENTISTs ⊆ SMOKEs


This being so, every subset of the set of scientists must also be included among the smokers (by elementary set-theoretic considerations). Since, in particular, mathematicians are scientists, it follows that

(44) MATHEMATICIANs ⊆ SMOKEs

But this is just the semantics of (40b). So, if (40a) is true in s, then (40b) must also be true in s. Evidently, this reasoning goes through no matter which situation we are in. Hence, (40a) does entail (40b). On the other hand, it is easy to conceive of a situation in which (44), and hence (40b), holds, but some economist (a scientist, though not a mathematician) doesn't smoke; in such a situation, (43) would fail to obtain. Hence, (40b) does not entail (40a). A fully parallel way of reasoning can be put forth for presuppositions. We said that S presupposes S' iff S' must be taken for granted in every situation in which S is asserted, denied, etc. This can be cashed out as follows. We can say that for S to be true or false (i.e., to have a semantic value that makes it suitable for assertion or denial), S' must be known to be true in the utterance situation by the illocutionary agents; i.e., S can be true or false in s iff S' is true in s. Using this definition (known as the 'semantic' definition of presupposition), we can formally prove (though we will not do so here) that, for example, (45a) presupposes (45b):

(45a) The blond boy smokes.
(45b) There is exactly one blond boy around.
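The entailment from (40a) to (40b), and the failure of its converse, can even be verified mechanically. The sketch below is mine and deliberately simplified: 'situations' are reduced to choices of extensions over a three-element domain, with the meaning postulate (41b) built in, so that definition (19a) can be checked by brute force.

    # Checking (19a) by brute force: enumerate every 'situation', i.e., every
    # choice of extensions over a small domain respecting (41b).
    from itertools import chain, combinations

    DOMAIN = {"a", "b", "c"}

    def subsets(s):
        s = list(s)
        return [set(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    def every(N, V):
        return N <= V                       # (35b)

    situations = [(M, S, K)                 # M: mathematicians, S: scientists,
                  for S in subsets(DOMAIN)  # K: smokers
                  for M in subsets(S)       # (41b): M is a subset of S
                  for K in subsets(DOMAIN)]

    # (40a) entails (40b): no situation makes (40a) true and (40b) false.
    assert all(every(M, K) for (M, S, K) in situations if every(S, K))

    # The converse fails: there is a countermodel.
    print(next((M, S, K) for (M, S, K) in situations
               if every(M, K) and not every(S, K)))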

The general point of these examples is the following. Intuitions about entailment and the like are a priori; speakers have them just by inspecting the meaning of the relevant sentences. In the present setup, this central fact is captured as follows. Semantics can be viewed as a set of axioms that (a) determines the interpretation of lexical entries and (b) assigns truth conditions to sentences. Such an apparatus yields a calculus in which entailment (and other semantic relations) reemerge as theorems of semantics. We have not formalized each single step of the derivation (relying on the readers' patience and understanding of elementary set theory); but such a formalization is, evidently, feasible. We not only thereby gain in clarity. We also obtain a device that constitutes a reasonable (and falsifiable) model of speakers' linguistic abilities. The claim is not that the specific rules we have given are actually implemented in the speakers' minds. The claim is that speakers, to the extent that they can be said to compute entailments, must be endowed with computational facilities that bear a structural resemblance to the ones sketched here. This, in turn, paves the way for inspecting the architecture of our linguistic abilities

ever more closely. Without excessive optimism and in full awareness of the controversies that permeate the field, this seems to constitute a step in the right direction. One further remark on the general picture that emerges from the sketch above cannot be avoided. Our approach to meaning is denotational: we assign a denotation to words and morphemes and (in terms of such denotations) truth conditions to sentences. This can be understood in several ways, of which I will present two much simplified extremes. We can take truth condition assignment as a way of exposing the link between language and the world, which is, arguably, the ultimate goal of semantics. Words/morphemes are actually mapped into aspects of the world (e.g., names are mapped into actual individuals); sentences are symbolic structures that code through their fine structure how things may be arranged in the world. However, it is also possible to view things somewhat differently. What really matters, it can be argued, is not the actual mapping between words and aspects of reality and between sentences and the conditions under which they, in fact, are true. What we do is give a form or recipe or potential for actual truth conditions; we merely constrain the form that truth conditions may take. What we get out of this is what really matters: a calculus of semantic relations (entailment, presupposition, etc.). Unlike what happens in, say, pure logic, such a calculus is not a normative characterization of sound reasoning; it is an empirically falsifiable characterization of semantic competence (i.e., of what speakers take to follow from what, when). Under the latter view, truth conditions (or truth condition potentials, or whatever it is that we map sentences onto) are a ladder we climb on to understand the working of semantic relations, i.e., relations that concern the information content of linguistic expressions. It is evident that we are not going to settle these issues here. As a small consolation (but also, if you wish, as evidence of the maturity of the field), I hope to have given the reader reasons to believe that progress is possible even if such foundational issues remain open. We haven't discussed implicatures and other pragmatically driven intuitions about meaning. To understand the full scope of the present proposal, it is important to do so. This requires extending a bit what we have done so far.

The Semantics/Pragmatics Interface

In the section Truth and Semantic Competence, we mentioned implicatures, a broad and varied type of meaning relations. We will elaborate by looking more closely at the oscillation in the meaning of or. The


purpose is to illustrate how generalized implicatures come about and how this bears on the view of semantics sketched in the preceding section on Semantic Modeling. The first step is to attempt a semantic analysis of or. To this we now turn. Imagine we extend our grammar by introducing coordination and negation along the following lines:

(46a.i) [VP John doesn't smoke]
(46a.ii) [VP NEG VP]
(46b.i) [[VP John smokes] and/or [VP Bill smokes]]
(46b.ii) [VP and/or VP]

The syntax of negation and coordination poses many thorny questions we simply cannot address here. Although for our purposes any number of assumptions concerning syntax might do, let us maintain, again without much justification, that a negative sentence like (46a.i) has the structure in (46a.ii), out of which the observed word order is derived by moving the subject left from the inner VP. Furthermore, we will assume that coordinated sentences, whether disjunctive or conjunctive, such as (46b.i), are obtained through schemas such as (46b.ii). Insofar as semantics is concerned, the introduction of negation, conjunction, disjunction, etc., poses problems similar to that of determiners. The relevant expressions are function words, and it is not obvious how to analyze them in denotational terms. This question, however, can be addressed in much the same way as we have done with the determiners: by looking at what the relevant elements contribute to the truth conditions of the sentences they occur in. For sentential operators, we can draw on a rich logical tradition. In the attempt to characterize the notion of valid inference, logicians have discussed extensively propositional connectives (like not, and, or), and the outcome is an analysis of such elements as truth functions or, equivalently, in terms of 'truth tables.' For example, the contribution of negation to meaning can be spelled out in terms of conditions of the following sort:

(47a) John doesn't smoke is true in s iff John smokes is false in s
(47b) || NEG VP ||s = 1 iff || VP ||s = 0
(47c)   VP   [NEG VP]
         1       0
         0       1

In (47c) we display in the form of a truth table the semantics given in (47b). Essentially, this says that in uttering a negation like (47a), the speaker intends to convey the falsity of the corresponding positive sentence. By the same token, conjunctions can be analyzed as in (48a), (48b), and (48c), and disjunction as in (49a), (49b), and (49c):

(48a) John smokes and Bill smokes is true iff both John smokes and Bill smokes are.
(48b) || [VP1 and VP2] ||s = 1 iff || VP1 ||s = || VP2 ||s = 1
(48c)      VP1  VP2  [VP1 and VP2]
(48c.i)     1    1        1
(48c.ii)    1    0        0
(48c.iii)   0    1        0
(48c.iv)    0    0        0

(49a) John smokes or Bill smokes is true iff either John smokes or Bill smokes or both are.
(49b) || [VP1 or VP2] ||s = 1 iff either || VP1 ||s = 1 or || VP2 ||s = 1 or both
(49c)      VP1  VP2  [VP1 or VP2]
(49c.i)     1    1        1
(49c.ii)    1    0        1
(49c.iii)   0    1        1
(49c.iv)    0    0        0

This is the way in which such connectives are analyzed in classical (Boolean) logic. Such an analysis has proven extremely fruitful for many purposes. Moreover, there is little doubt that the analysis in question is ultimately rooted in the way in which negation, etc., works in natural language; such an analysis indeed captures at least certain natural uses of the relevant words. What is unclear and much debated is whether such an analysis stands a chance as a full-fledged (or nearly so) analysis of the semantics of the corresponding English words. There are plenty of cases where this seems prima facie unlikely. This is so much so that many people have concluded that while Boolean operators may be distilled out of language via a process of abstraction, they actually reflect normative principles of good reasoning more than the actual semantics of the corresponding natural language constructions. Of the many ways in which this problem might be illustrated, I will choose the debate on the interpretation of or. The interpretation of or provided in (49a), (49b), and (49c) is the inclusive one: in case both disjuncts turn out to be true, the disjunction as a whole is considered true. As we saw, this seems adequate for certain uses but not for others. The exclusive or can be analyzed along the following lines:

(50) Exclusive or
           VP1  VP2  [VP1 or VP2]
(50.i)      1    1        0
(50.ii)     1    0        1
(50.iii)    0    1        1
(50.iv)     0    0        0
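The truth functions in (47c) through (50) can be stated in a few lines of Python, and one can then check mechanically both that the two tables for or differ only in the first row and that inclusive or, once conjoined with the negation of 'both' (the strengthening discussed below), coincides with exclusive or. The sketch is, again, only illustrative:

    # A sketch of the truth functions in (47c), (48c), (49c), and (50).

    def neg(p):        return 1 - p
    def conj(p, q):    return p * q
    def incl_or(p, q): return max(p, q)
    def excl_or(p, q): return 1 if p + q == 1 else 0

    rows = [(1, 1), (1, 0), (0, 1), (0, 0)]
    for p, q in rows:
        print(p, q, conj(p, q), incl_or(p, q), excl_or(p, q))
    # The two 'or's differ only in the first row, where both disjuncts are true.

    # Inclusive or, strengthened with 'not both', yields the exclusive table:
    assert all(excl_or(p, q) == conj(incl_or(p, q), neg(conj(p, q)))
               for p, q in rows)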

As the readers can verify by comparing (49a), (49b), and (49c) with (50), the two interpretations of


or differ only in case (i); if both disjuncts are true, the whole disjunction is true on the inclusive interpretation and false on the exclusive one. So, the thesis that or is ambiguous can be given a precise form. There are two homophonous ors in English. One is interpreted as in (49a), (49b), and (49c), the other as in (50). Illocutionary agents choose among these options on pragmatic grounds. They go for the interpretation that is best suited to the context. Determining which one that is will involve knowing things like the topic of the conversation (e.g., are we talking about a single job or more than one), the purpose of the conversational exchange, the intentions of the speaker, etc. We mentioned that Grice proposed an alternative view, however. We are now in a position to spell it out more clearly. If you look closely at the two truth tables in (49a), (49b), and (49c) vs. (50), you'll notice that in all the cases in which the exclusive or comes out true (namely, case (ii) and case (iii)), the inclusive one does, too; i.e., in our terms, [p or-exclusive q] entails [p or-inclusive q]. The former is, thus, stronger, more informative than the latter in the following precise sense: it rules out more cases. If you get the information that [p or-exclusive q] holds, you know that case (ii) or case (iii) may obtain, but case (i) and case (iv) are ruled out. If you know instead that [p or-inclusive q] obtains, you know that you might be in case (i), (ii), or (iii); only case (iv) is ruled out. Your degree of uncertainty is higher. So exclusive or is more restrictive; inclusive or is more general (more liberal, we said). Things being so, suppose for a moment that or in English is unambiguously inclusive (i.e., its interpretation is the most general, least restrictive of the two); this does not rule out at all the possibility that we are in case (ii) or case (iii). The exclusive construal, in other words, might arise as a special case of pragmatic strengthening. It is as if we silently add to, say, (51a) something like (51b).

(51a) John or Mary will be hired.
(51b) (... but not both)

The silent addition of (51b) to (51a) might be justified through reasoning of the following sort:

(52) The speaker said (51a); let us assume she is being cooperative and not purposely hiding any relevant information. This entails that she has no evidence that both John and Mary have been hired, for otherwise she would have said so. Assuming, moreover, that she is well-informed about the facts, this furthermore entails that she thinks that in fact (51b) holds.

So in this view, the base interpretation (viz. (51a)) is enriched through an inferential process that draws on

principles of rational conversational exchanges and on factual knowledge about the context. The relation between (51a) and (51b) can thus be analyzed as a case of implicature (cf. on this, e.g., Horn (1989), Levinson (2000), and references therein). The debate on how the two interpretations of or come about is important and shows different ways in which semantics is taken to interact with broader considerations pertaining to communication. Whether the two interpretations of or are a matter of ambiguity or arise as an implicature, I want to point out a generalization concerning their distribution, which I think shows something important concerning how language works. I will argue that the cases in which or is construed preferentially inclusively are (1) predictable, and (2) determined by structure. Then, I will put forth a hypothesis as to why this is so. We have seen that a sentence like (16a), repeated here as (53a), is interpreted as in (53b), namely exclusively:

(53a) If I got it right, either John or Mary will be hired.
(53b) If I got it right, either John or Mary but not both will be hired.

Now take the consequent (i.e., the main clause) in the conditional in (53a) and move it to the antecedent, and the interpretation tends to shift:

(54a) If either John or Mary are hired, we'll celebrate.
(54b) If John or Mary or both are hired, we'll celebrate.

So, moving a disjunction from the consequent to the antecedent seems to have a systematic effect on the interpretation of or. The same holds for the pair in (55a) and (55b):

(55a) Every student will either take an exam or write a paper.
(55b) Every student who either takes an exam or writes a paper will satisfy the requirements.

In (55a), or is within the VP, which corresponds to the second argument of every, according to the analysis sketched in the section Semantic Modeling. Its preferred interpretation is exclusive. In (55b), or is in a relative clause that is part of the subject NP (namely, the first argument of every according to the analysis in Semantic Modeling). Its preferred interpretation is clearly inclusive. A further class of contexts that displays a similar effect comprises negation and negative verbs. Compare (56a) and (56b):

(56a) I believe that either John or Mary will be hired.
(56b) I really doubt that either John or Mary will be hired.


Sentence (56a) is likely to get the interpretation 'I believe that either John or Mary but not both will be hired.' Sentence (56b), on the other hand, does not have a parallel reading. It rather means 'I really disbelieve that John and Mary stand a chance.' The list could go on. But these examples should suffice to instill in the reader the idea that there is a systematic effect of structure on the interpretation of or. A doubt might linger, though, as to whether it is really in the nature of structure to have this impact. Take, for example, the pair in (55a) and (55b). Is it the position of disjunction that makes a difference? Or is it rather our knowledge of how classes normally work? This is a legitimate question. Noveck et al. (2002) address it experimentally. They designed a reasoning task in which logically naïve subjects are asked to judge whether a certain inference is sound or not. For example, subjects were asked to judge whether one can infer (57c) from (57a) and (57b):

(57a) If there is an A, then there is a B or a C.
(57b) There is an A.
therefore:
(57c) There aren't both a B and a C.

Subjects were told that this was about inferences that could be drawn (on the basis of the given premises) concerning letters written on the back of a certain blackboard. What would your answer be? The experimental subjects overwhelmingly accepted the inference from (57a) and (57b) to (57c). What is interesting is that in terms of classical Boolean logic (which takes or to be inclusive) this inference is invalid. It is only valid if or in (57a) is interpreted exclusively. At the same time, subjects rejected inferences of the following form:

(58a) If there is an A, then there is a B and a C.
(58b) There is an A.
therefore:
(58c) There is a B or a C.

Again, this seems to make sense only if or in (58c) is interpreted exclusively. Things change dramatically if or is embedded in the antecedent of a conditional:

(59a) If there is an A or a B, then there is a C.
(59b) There is an A; there is also a B.
therefore:
(59c) There is a C.
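Which of these inferences are valid under which construal of or can itself be checked mechanically, by running through all eight assignments of truth values to A, B, and C. The following sketch is mine (it assumes, simplifying, that the experiment's if...then can be modeled as the material conditional):

    # Validity of (57) and (59) under the two construals of 'or': check all
    # eight truth-value assignments to A, B, C.
    from itertools import product

    def impl(p, q): return 1 if p <= q else 0       # material conditional
    def incl(p, q): return max(p, q)
    def excl(p, q): return 1 if p + q == 1 else 0

    def valid(premises, conclusion):
        return all(conclusion(A, B, C)
                   for A, B, C in product([0, 1], repeat=3)
                   if all(p(A, B, C) for p in premises))

    for name, OR in [("inclusive", incl), ("exclusive", excl)]:
        v57 = valid([lambda A, B, C: impl(A, OR(B, C)),   # (57a)
                     lambda A, B, C: A],                  # (57b)
                    lambda A, B, C: 1 - B * C)            # (57c): not both
        v59 = valid([lambda A, B, C: impl(OR(A, B), C),   # (59a)
                     lambda A, B, C: A * B],              # (59b)
                    lambda A, B, C: C)                    # (59c)
        print(name, "or: (57)", "valid" if v57 else "invalid",
              "| (59)", "valid" if v59 else "invalid")
    # inclusive or: (57) invalid | (59) valid
    # exclusive or: (57) valid   | (59) invalid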

Subjects overwhelmingly accepted this inference as valid. But this is only possible if or in (59a) is construed inclusively. Our raw intuition thus finds experimental confirmation, one that passes all due controls (the inferences were mixed with others containing other connectives and quantifiers, so that subjects were not conditioned to devise an answering

strategy, and the order of presentation was duly varied, etc.). What is interesting is that these experiments only involved meaningless letters A, B, C, so scripts, contextual clues, and knowledge of the world can hardly be imputed any role in the outcome. If there is a systematic effect on the interpretation of or, this must be due to the meaning of conditionals, of disjunction, and to the positioning of the latter. Nothing else is at play. The reader may wonder how one manages to find out which structures affect the interpretation of or. The answer is that such structures were familiar from another phenomenon: the licensing of Negative Polarity Items (NPIs). NPIs are lexical items like any or ever that seem to require the presence of a negative element:

(60a) * There is any cake left.
(60b) There isn't any cake left.

NPIs are acceptable in the contexts that favor the inclusive interpretation of or over the exclusive one:

(61a) *If we are in luck, there are any cookies left.
(61b) If there are any cookies left, we are in luck.
(62a) *Everyone had any cookies left.
(62b) Everyone who had any cookies left shared them.

This correlation is striking, for the two phenomena (the distribution of any and of inclusive vs. exclusive or) seem to have little in common. The next question is whether the relevant contexts have some common property. The answer seems to be positive and, surprisingly, points in the direction of a rather abstract, entailment-based property. Positive contexts typically license inferences that go from sets to supersets. For example, (63a) entails (63b) and not vice versa.

(63a) There are Marlboros.
(63b) There are cigarettes.

The set of cigarettes is a superset of the set of Marlboros; so the entailment goes from a set to its supersets. Negation reverses this pattern: (64b) entails (64a) and not vice versa:

(64a) There aren't any Marlboros.
(64b) There aren't any cigarettes.

Now the VP portion of a sentence with every (i.e., its second argument) patterns with (63a) and (63b):

(65a) Everyone had Marlboros.
(65b) Everyone had cigarettes.

Sentence (65a) entails sentence (65b) and not vice versa. So does the consequent of a conditional:

(66a) If you open the drawer, you'll find Marlboros.
(66b) If you open the drawer, you'll find cigarettes.

But the NP argument of every (its first argument) inverts this pattern just like negation, as we saw in the Semantic Modeling section:

(67a) Everyone who had Marlboros shared them.
(67b) Everyone who had cigarettes shared them.

Here it is (67b) that entails (67a) and not vice versa. The same applies to the antecedent of conditionals:

(68a) If you smoke Marlboros, you'll be fined.
(68b) If you smoke cigarettes, you'll be fined.

Sentence (68b) entails (68a); on the other hand, (68a) could be true without (68b) necessarily being true (in a town in which Marlboros but no other brand is banned). In conclusion, the contexts that favor the inclusive interpretation of or share a semantic property that has to do with entailment patterns: they all license entailments from sets to their subsets. Such a property has come to be seen as the property of being downward entailing (where down refers to the directionality of the entailment from sets to smaller ones). If this characterization is correct, this means that speakers, to the extent that they interpret or as shown, must differentiate such contexts, and hence must be able to compute the entailments associated with the relevant structure.

The next question is why or tends to be interpreted inclusively in downward entailing structures. I will only hint at what strikes me as a highly plausible answer. As we saw above, in plain unembedded contexts, exclusive or is stronger than inclusive or (i.e., asymmetrically entails it). The set of cases in which exclusive or is true is a subset of the set of cases in which the inclusive one is true. We evidently prefer, everything else being equal, to go for the stronger of two available interpretations. Now, negation and, in fact, all downward entailing structures, as we just saw, reverse this pattern. Under negation, first becomes last; i.e., strongest becomes weakest. In the case of disjunction, the negation of inclusive or is stronger than (i.e., entails) the negation of exclusive or. I'll leave it to the readers to persuade themselves that this is so. Now why is this observation relevant? Suppose we go for the strongest of the available alternatives (i.e., we maximize informativeness, everything else being equal); for disjunction, in downward-entailing contexts inclusive or yields the strongest interpretation; in nondownward-entailing contexts exclusive or is the strongest. This explains the observed behavior in terms of a rather simple principle that optimizes information content on the basis of the available expressive resources.
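Readers who would rather check than persuade themselves can do so mechanically. The following small Python sketch (an illustration added for this discussion, not part of the original article) enumerates all four truth-value combinations and verifies both claims: unembedded, exclusive or asymmetrically entails inclusive or, while negation (a downward-entailing context) reverses the direction.

from itertools import product

def incl_or(p, q):      # inclusive disjunction: false only when both are false
    return p or q

def excl_or(p, q):      # exclusive disjunction: true iff exactly one is true
    return p != q

def entails(f, g):
    # f entails g iff g is true in every valuation that makes f true
    return all(g(p, q) for p, q in product([True, False], repeat=2) if f(p, q))

# Unembedded: the exclusive reading is the stronger, asymmetrically entailing one.
assert entails(excl_or, incl_or) and not entails(incl_or, excl_or)

# Under negation the pattern reverses: the negation of inclusive or
# entails the negation of exclusive or, and not vice versa.
neg_incl = lambda p, q: not incl_or(p, q)
neg_excl = lambda p, q: not excl_or(p, q)
assert entails(neg_incl, neg_excl) and not entails(neg_excl, neg_incl)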

So pragmatic strengthening (via a generalized implicature) correlates harmoniously with the entailment properties of various elements.

Conclusions

We have sketched a view of semantic competence as the implicit knowledge a speaker has of how the information content of various expressions is related. We have proposed to classify the host of semantic relations into three major families: entailment-based, presupposition-based, and implicature-based. Given two sentences, speakers can judge whether they entail each other or not, whether they presuppose each other or not, and so on; and they can do so with finite cognitive resources. We have sketched a denotational semantics that accounts for such a competence (i.e., provides a model for it). Our semantics takes the form of a calculus in which entailments, presuppositions, and even (certain) implicatures reemerge as theorems. Such a model is formal in the sense of being explicit (building on the tradition of logic and model theory). It is, however, also substantive, in that it models a human cognitive capacity (i.e., the ability to semantically relate sentences to each other). We have seen two simple applications of this approach, to the analysis of determiners and connectives. We have also discussed a case of pragmatic enrichment. What we found is that the interpretation of or as exclusive or inclusive follows a pattern sensitive to downward entailingness (much like what happens with negative polarity items). If this is so, then entailment patterns are not simply an invention of logicians or linguists. They must be constitutive, in an unconscious form, of the spontaneous knowledge that endows speakers with their linguistic abilities.

See also: Boole and Algebraic Semantics; Categorial Grammar, Semantics in; Compositionality; Default Semantics; Discourse Representation Theory; Dynamic Semantics; Event-Based Semantics; Game-theoretical Semantics; Generative Lexicon; Implicature; Interpreted Logical Forms; Logic and Language; Logical and Linguistic Notation; Logical Form; Meaning Postulates; Metalanguage versus Object Language; Modal Logic; Monotonicity and Generalized Quantifiers; Montague Semantics; Natural Language Understanding, Automatic; Nonmonotonic Inference; Operators in Semantics and Typed Logics; Possible Worlds; Presupposition; Propositional and Predicate Logic; Propositional Attitude Ascription; Propositional Attitudes; Propositions; Quantifiers; Scope and Binding; Semantic Value; Situation Semantics; Specificity; Truth Conditional Semantics and Meaning; WordNet(s).


Bibliography

Chierchia G (1998). 'Reference to kinds across languages.' Natural Language Semantics 6, 339–405.
Chierchia G & McConnell-Ginet S (2000). Meaning and grammar (2nd edn.). Cambridge, Mass: MIT Press.
Crain S & Thornton R (1998). Investigations in Universal Grammar. Cambridge, Mass: MIT Press.
Dayal V (2004). 'Number marking and (in)definiteness in kind-terms.' Linguistics and Philosophy 27, 393–450.
Grice H P (1989). Studies in the way of words. Cambridge, Mass: Harvard University Press.
Heim I & Kratzer A (1998). Semantics in generative grammar. Oxford: Blackwell.
Horn L (1989). A natural history of negation. Chicago: University of Chicago Press.
Jennings R E (1994). The genealogy of disjunction. Oxford: Oxford University Press.
Kratzer A (1999). 'Beyond ouch and oops: how expressive and descriptive meaning interact.' Paper presented at the Cornell Conference on Context Dependency; unpublished manuscript, Amherst, Mass. Available from the Semantics Archive (http://www.semanticsarchive.net/).
Levinson S (2000). Presumptive meanings. Cambridge, Mass: MIT Press.
Longobardi G (2001). 'How comparative is semantics? A unified parametric theory of bare nouns and proper names.' Natural Language Semantics 9, 335–369.
Mates B (1950). 'Synonymity.' In Meaning and interpretation. University of California Publications in Philosophy 25.
Noveck I, Chierchia G, Chevaux F, Guelminger R & Sylvestre E (2002). 'Linguistic–pragmatic factors in interpreting disjunctions.' Thinking and Reasoning 8, 297–326.

Frame Semantics

C J Fillmore, University of California at Berkeley, Berkeley, CA, USA

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Frame semantics is first of all an approach to describing the meanings of independent linguistic entities (words, lexicalized phrases, and a number of special grammatical constructions) by appealing to the kinds of conceptual structures that underlie their meanings and that motivate their use. These conceptual structures, called frames, can be schematizations of particular situation types and their components such as the events or states expressed by simple verbs or adjectives, e.g., lift or similar; large-scale institutional scenarios such as commercial transactions or judicial process; patterns of contrast such as that between winning and losing; networks of relationships such as what is found in kinship terminology; and a great many others. The words or other linguistic entities in a text or discourse evoke or project their frames in the minds of the language user and figure in the cognitive process of language interpretation. In 1983, Victor Raskin convened a Congress Workshop at the 13th International Congress of Linguists in Tokyo under the title 'Round Table Discussion on Frame/Script Semantics.' A 1985 volume of papers (Raskin, 1985), including many from researchers who did not participate in the Tokyo Round Table, became the first general collection on this broad topic, characterized mostly as work that accepted cognitive structures with names such as frame,

schema, script, and scenario, that led one away from prevailing views of semantics within the Katz and Fodor (1963) tradition in generative grammar. In 2003, at the 17th Congress of the same organization, in Prague, a second Frame Semantics workshop was held, and the range of approaches was again quite broad. Collections of papers from that workshop are said to be forthcoming. The kinds of phenomena dealt with in frame-semantic analysis can be illustrated with a few examples.

Oppositions

If we read of kestrels that they 'sometimes nest on the ground,' our picture of kestrel nesting behavior has us imagining that these birds otherwise build their nests in trees, or perhaps under the eaves of buildings or in bird houses. But if we read of auks that they 'build their nests on land,' this matches or perhaps induces our understanding that these birds otherwise spend most or all of their time in or above water. This is because there are conventional pairings between the words ground and air, as between land and sea, matched by phrasal contrasts between on the ground and in the air on the one hand, or on land and at sea on the other hand. Each member of such contrast sets evokes an understanding of the contrast as a whole. We hear that someone named Jim "spent two hours on land today," and we conclude easily that this is an interruption of a sea voyage; or we hear that he "spent two hours on the ground today," and we see it as an interruption of a period of air travel. There is yet


another such contrast, using the word earth, as can be recognized in the report that Jim "spent only a few years on this earth": we assume this time that we are being told that he is now in heaven – or possibly that he has returned to outer space. It is important to realize that these conclusions are based, not on information about where these nests or people are located, but on the words used to express that information, words that evoke the frames against which the fuller understanding is structured. We could, in fact, have a single snapshot image of Jim doing something somewhere on the earth's surface that fits any of the three framings.

Word Choice

Sometimes we are surprised by the use of a word in a particular context, and inquiry leads to an explanation involving the 'framing' role of words. If we were to buy some herbal medicine at an American health-food store and wished to find out how much we should ingest in order to treat our problem, we might be surprised to read on the product label that one serving is two capsules. We expect the language on the package to match our framing of the product as a medicine, but the word serving surprises us. We would have expected something like dose or dosage. Officially, the product we bought is neither a food nor a medicine: it is a dietary supplement, more or less limited to botanicals, vitamins, and minerals. The U.S. Food and Drug Administration has premarketing regulatory authority over food and medicine, but it has only postmarketing monitoring authority over dietary supplements. This means that if any such product is described on labels or packaging as effective for specific medical conditions, or is directly or indirectly presented to customers as a medicine, it can be taken off the market. Since the words dose and dosage would surely evoke the medicine frame, marketers of dietary supplements refrain from using them. It is not, strictly speaking, a food either, but the term serving, while seeming out of place, is presumably safer.

Beyond Literal Meaning

It is common to believe that a sentence can have a literal meaning, namely that part of the meaning that determines its truth conditions, and that the more elaborated in-context meaning is attained by fitting that literal meaning into an awareness of the context of communication and assumed answers to questions about why it was said, when it was said, and the like. Presumably in the literal meaning of

Reviewers praised the cellist.

any truthful utterance of the sentence has to refer to a group of people who happen to be reviewers, and a person who happens to be the in-context designated cellist. If the people identified as reviewers said something positive about the person called the cellist, then the sentence is literally true, even if they met in a bar and what they said was something like Nice haircut! That same literal meaning would have been achieved (on this narrow view) if the reviewers had been identified by a list of names and if the cellist had been identified by a pronoun. We need more than context and literal meaning to explain the most natural out-of-context interpretation of that sentence, however, since the words themselves evoke situations whose details need to get filled in in the cognitively most straightforward way. The meaning of reviewer that makes most sense in this context is a person who writes reviews of artistic performances; the praising in this case would be understood as expressed in the text of such reviews; and the cellist would have been praised for his or her performance on a cello as part of the event about which the reviews were written. In all of the above examples, it seems clear that what is needed for understanding the meanings of particular linguistic signs is a two-fold structure, separating an institutional or experiential background from what is profiled or foregrounded within that background. That is, we need both the words and the frames.

Frames and Framing

Two separate traditions, one inside linguistics and one outside, have come together in motivating both the activity referred to as frame semantics (FS) and, coincidentally, the use of the word frame within that activity. In the examples just examined, the word 'frame' has been used very broadly, but we see in each case that the frame that participates in the interpretation is associated with a particular lexical or grammatical fact. Since frame semanticists are frequently asked about the relationships between different uses of the same term of art, some clarification seems appropriate.


Cognitive Frames

There is a common nonlinguistic use of this word where it refers not to what a piece of language evokes in the mind of an interpreter, but to what kinds of conceptual structures an interpreter invokes to make sense of some experience. This nonlinguistic use of the concept has a long history (Bartlett, 1932; Piaget, 1971; Mill, 1846; see Fillmore, 1985), but it flourished in the cognitive and social sciences in the 1970s. The idea behind this work is that people understand what they are observing by calling on memories of past experiences, or by


construing what is observed as instances of or variations from structures of belief and experience that could be used to make sense of what they have heard or observed. These structures in the mind are called frames, and these are not necessarily connected in specific ways with language.

Minsky Frames

To Marvin Minsky (1981, 1985), the mind is equipped with a vast repertory of such ready-made (i.e., from past experiences) frames, and people are capable of bringing these to bear, with remarkable ease and speed, to make sense of their perceptual and other everyday experiences, such as recognizing familiar objects on limited visual evidence, or assigning one or another interpretation to ambiguous line figures. A well-known example of Minsky's that involves a response to language centers around the Birthday Party frame. Consider what's involved in making sense of the following two-sentence text:

Mary was invited to Jack's party. She wondered if he would like a kite.

There is nothing in the text that directly evokes the idea of a birthday party, but a typical interpreter quickly and easily invokes just that frame in order to make the text cohere. The kind of event that fits a child's birthday party frame in this culture contains 'slots' for certain expected participants, props, and subevents. We expect such an event to have a celebrant, a host, a number of partygoers, and gifts, and other items such as balloons, cakes, candles, and the birthday song. The interpreter makes the parts of the text cohere by the cognitive process of filling in as many of the slots as possible on the basis of what he has just learned – here with Mary as a partygoer, Jack as the celebrant, and the kite as Mary's potential gift for Jack – and populates the other slots with default values that characterize the typical child's birthday party. (World knowledge invited us to assume that this is a child's birthday party, but if the potential gift had been identified as a box of cigars rather than a kite, that would have been different.) Again, there is nothing in the text itself that invites the Birthday Party frame: that is the interpreter's contribution. The language conveys certain information, without directly providing the framework, and the interpreter makes it cohere.

Goffman Frames

Important in the work of Erving Goffman (1974) is the notion of mistaken framings and their possible consequences. Take a variant of a Goffman example: an instance of roughhousing on the street between young boys could be construed by an outside observer as either Combat or Play. An adult

observer might 'misframe' a playing event as combat, and believing that the person who has the upper hand is likely to do serious harm to his coparticipant, might inappropriately intervene – and might discover his mistake when both boys attack him. Frame semantics, in contrast, is concerned with how situation types are evoked specifically by linguistic entities: if Minsky's example had included either of the phrases birthday party, birthday cake or birthday present, that could have evoked the frame; if the boys in the Goffman example had declared, "We're just playing, Mister," that would have provided a linguistic encoding of the frame, i.e., would have instructed the observer on the proper frame to invoke.

The Word 'Frame' in Linguistics

The nonlinguistic strand, which has figured in sociology, cognitive science (including artificial intelligence and cognitive psychology), and theories of narrative, and has used the words 'frame' and 'schema' in essentially the same way, has to do with the structures of knowledge that enabled people to interpret (or misinterpret) the things they experienced. Frame semantics in its current form was influenced by this nonlinguistic strand, but also arose as part of a continued exploration of what was called case grammar (Fillmore, 1968, 1977). Within this linguistic strand, in its earliest history, the word 'frame' had nothing to do with cognitive acts of construal, but instead organized fairly mundane matters of grammar and semantics. 'Syntactic frames' were proposed as slot-defining contexts that provided criteria for membership in particular word classes. For example, the frame [the very ___ thing] could represent a context for prenominal adjectives. In generative grammar (Chomsky, 1965), 'subcategorization frames' defined slots for the various kinds of English verbs, i.e., according to whether they required direct or prepositional objects, directional or manner adverbs, etc. This narrowly syntactic sense of 'frame' was expanded in case grammar.

Case Frames

In the early work of Charles Fillmore and his colleagues (Fillmore, 1968, 1977), the term 'case frame' was introduced to describe the semantic 'valence' of verbs (mostly). Early treatments of valence, tabulating the combinatory properties of predicating words, were more or less limited to (1) specifying the number of arguments a verb could take (intransitive verbs take one argument, transitive verbs take two arguments, a smaller class of verbs take three or more arguments), and (2) specifying information about the grammatical requirements on the dependents that a predicating word could 'take' – in traditional Latin grammars, this could include the grammatical cases (nominative, accusative, dative,


etc.) of accompanying nominals and the ways in which prepositional phrases could introduce less central meanings. Discussion of the semantic aspects of verb valence was more or less limited to information about very general semantic properties of argument nominals (human, inanimate, abstract, etc.), summary descriptions of the uses of the cases and prepositions (as in traditional Latin grammars), and classifications of the kinds of verbs that were likely to occur in one or another of these patterns. Fillmore's (1968) proposal was that the 'semantic valence' of a verb could be identified independently of the manner of grammatical realization of the verb's arguments, in terms of semantic role notions (called deep cases) that classified such participant types as the perpetrator of an action (Agent), an entity that moves or is capable of moving (Theme), the destination of a movement (Goal), the starting point of a movement (Source), and several others. A combination of deep cases that together expressed the semantic part of a verb's combinatorial requirements was referred to as its case frame. The sets of deep cases that could occur with verbs of one sort of meaning (e.g., Placing) could frequently be seen as characterizing all of the verbs with similar meanings (set, place, put, stand, insert). The case frames therefore naturally came to be thought of as characterizing particular event types, and this classification could be importantly different from classifications based on grammatical context differences. Thus the combination {Agent, Theme, Goal} could serve as role names for the participants in Placing events, independently of how these participants were actually expressed in the grammar of a sentence. Case grammar was designed to show both how the various cases combined to classify verbs and other predicates, and how the grammar of the language allowed these to be matched with surface grammatical structures. However, it eventually became possible to think of these classifications – these frames – as the primary concepts, identifying situation types on their own, and the semantic role names as identifying the participants in such situations – somewhat analogously to the gift, cake, and celebrant in a birthday party, with the difference that case frames were tied to the grammar of particular lexical items. The case frames started out as clusters of participant roles using, initially, names from an assumed universally valid finite inventory of such roles and it was thought that any verbal meaning could be seen as using some collection of these. The frames of current frame semantics, in contrast, are described in terms of characteristics of the situation types themselves, including whatever could be said about the background and other associations of such situations. Moreover,

the goal of limiting the number of semantic roles to a roster of reusable universal role names lost its importance, and the effort to force any frame-specific semantic role into the members of such a list could be abandoned in good conscience.
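As a toy illustration of the case-frame idea (a sketch added here, not from the original article; the Python representation and the entry for arrive are this sketch's own assumptions), a case frame can be modeled as the set of deep cases a verb takes, so that the Placing verbs mentioned above come out as sharing one event type:

# A case frame modeled as the set of deep cases a verb takes.
CASE_FRAMES = {
    "put":    {"Agent", "Theme", "Goal"},
    "place":  {"Agent", "Theme", "Goal"},
    "insert": {"Agent", "Theme", "Goal"},
    "arrive": {"Theme", "Goal"},          # assumed entry, added for contrast only
}

def same_event_type(verb1, verb2):
    # Verbs with identical case frames characterize the same event type,
    # independently of how their participants surface in the grammar.
    return CASE_FRAMES[verb1] == CASE_FRAMES[verb2]

assert same_event_type("put", "place")       # both are Placing verbs
assert not same_event_type("put", "arrive")  # different participant structure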

Merging the Two Research Strands

In merging these two strands of talk about frames, a frame semantics that goes beyond verb valence now includes within its scope fairly large and multilayered structures, like baseball or commerce or movie production, which incorporate complex connected patterns of knowledge and terminology. Frame semantics continues to be closely tied to linguistic form, but in recent work (i.e., in FrameNet; see 'Frames and the Lexicon' below) it has proved useful to distinguish 'small frames' associated with individual predicates from 'big frames,' i.e., the institutional concepts that were themselves identifiable by names (birthday, but also commerce, computation, criminal process, etc.). In the current framework, the differently motivated needs for a frame concept unite. This joining supports semantic analysis that accommodates the richness of the cognitive work that goes into linguistic interpretation and expression. It also allows us to investigate more precisely and powerfully the linkages between grammar, lexicon, and the contents of experience.

Frames and the Lexicon

Sponsored by the National Science Foundation and housed at the International Computer Science Institute in Berkeley, California, the FrameNet project is building a frame-based lexicon for English; in collaboration with researchers in Europe and Asia, the extent to which semantically valid lexica can be built for other languages using the same principles is being investigated. The project takes a lexicon to be a vast inventory of the words, multiword phrases, and grammatical constructions that speakers of a language have to know outright, as contrasted with those abilities speakers have for generating or interpreting language by building on the things they already know. A frame-based lexicon associates with each lexical unit (LU) the frame that it evokes, together with the patterns by which the grammar of the language provides information about the frame in words and phrases which are in construction with the LU. The ongoing work of the FrameNet project (Fontenelle, 2003) differs from that of ordinary lexicography in an important way: instead of working with a single word and exploring all of its meanings, it takes a single frame and examines all of the LUs that evoke that frame. In practice, of course, such


work begins by considering a word in one of its senses and exploring the frame that goes with just that sense. To illustrate, the verb cure in its medical care sense (as in to cure a disease) is one LU; the same verb in its food preservation sense (as in to cure ham) is another LU and calls for a separate frame. Instead of depending on dictionaries or native-speaker intuition to summarize what can be known about the LU, FrameNet, taking advantage of the power of computers, the easy availability of vast text resources, and tools for managing and manipulating these, bases its findings on corpus evidence. Currently our corpus base includes one hundred million running words in the British National Corpus plus two hundred million words of U.S. newswire text. FrameNet entries are built up in the following way. A group of LUs is chosen as representative of a single frame; taking one LU at a time, example sentences containing it are extracted from the corpus and sorted by syntactic context; representative samples are selected that clearly illustrate the sense in question; and the selected sentences are annotated according to the frame the LU evokes. Before the annotation begins, labels are chosen to represent the semantic roles or frame elements (FEs) that the LU has in respect to the given frame, for example, Buyer, Seller, Goods, Money, etc., for the various frames connected with Commerce. Sentence constituents that are grammatically linked to the LU in question are assigned appropriate FE labels, and the constituents thus labeled are also provided with information about their grammatical function (GF) – Subject, Object, etc. – and their phrase type (PT), e.g., finite that-clause, marked infinitive VP, NP, etc. Important steps in these processes are carried out automatically and several are computer-assisted, but an indispensable core is handled manually by trained native-speaker annotators who have to read and understand the sentences and make judgments about the fit of each usage with the frame currently being studied.

An Example: The Revenge Frame

The analysis process can be exemplified by considering the Revenge frame. Any situation that can be thought of in terms of a revenge concept has an associated scenario or history. Within that history, one person has done something to offend another person (the FE names for these individuals are the Offender and the Injured Party, respectively, and the offending act is referred to as the Injury), and one person, called the Avenger, who may be but need not be identical to the Injured Party, undertakes to harm the Offender, in response to the Injury, with an avenging act called the Punishment. The roster of core FEs, those FEs that are criterial for the frame,

then includes Avenger, Offender, Injured Party, Injury, and Punishment. The project in its current work also annotates peripheral FEs, those that can accompany expressions within the frame, indicating such matters as Time, Place, Manner for events, Purpose, and a few others for events that are also acts, and so on. In general practice, the core FEs are given frame-specific FE names, whereas the peripheral FEs are given 'reusable' names that do not require frame-by-frame definitions. The words that evoke the Revenge frame include simple verbs such as avenge, revenge, retaliate; phrasal verbs such as get back, get even; nouns such as vengeance, revenge, retaliation, retribution; adjectives that characterize the punishment such as retaliatory, retributive, vindictive; adjectives that describe the intensity of feeling of the person carrying out the revenge such as vengeful, vindictive; and support verb constructions such as take revenge, wreak vengeance, and exact retribution. Examples of annotations in the Revenge frame are offered in Figure 1. As can be seen in Figure 1, the linking of FEs to their grammatical realization is variable, though the Avenger is uniformly linked to the subject position of an active verb. The Offender is introduced by a variety of prepositions: on, with, at, and against; the Injury can be a direct object or an object of the preposition for, and so on. Several features not discussed so far are indicated in the annotations. At the end of each sentence there are representations of null instantiation: these notations have symbols such as DNI and INI in place of lexical material, and they indicate that the missing FE is either understood in the context (DNI = definite null instantiation) or simply unspecified (INI = indefinite null instantiation). The last few sentences are examples of support constructions, where the verbal meaning is expressed by a combination of a support verb and a Revenge noun: have vengeance (11), exact vengeance (12), take vengeance (13), wreak vengeance (14), and visit retribution (15). The subject arguments are sometimes attributed to a verbal frame if the phrases that express them are arguments of control verbs, as with the subjects of wish to avenge (1), determined to avenge (2), proceed to wreak vengeance (14).
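To make the layering concrete, the following Python sketch (added for illustration; the class layout, field names, and the example sentence are this sketch's own inventions, not FrameNet's actual data format) shows the kind of record an annotation produces, including a DNI-style null instantiation:

from dataclasses import dataclass, field

@dataclass
class Constituent:
    fe: str         # frame element label, e.g., 'Avenger'
    gf: str = ""    # grammatical function, e.g., 'Subject'; '' if unrealized
    pt: str = ""    # phrase type, e.g., 'NP'; '' if unrealized
    text: str = ""  # the annotated string; '' signals null instantiation

@dataclass
class Annotation:
    frame: str
    lu: str         # the frame-evoking lexical unit
    sentence: str
    constituents: list = field(default_factory=list)

# A hypothetical annotation: Avenger and Injury are overt, while the
# Offender is understood from context, so it would carry a DNI tag.
example = Annotation(
    frame="Revenge",
    lu="avenge.v",
    sentence="We avenged the insult.",
    constituents=[
        Constituent(fe="Avenger", gf="Subject", pt="NP", text="We"),
        Constituent(fe="Injury", gf="Object", pt="NP", text="the insult"),
        Constituent(fe="Offender", text=""),  # DNI: definite null instantiation
    ],
)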

Polysemy

The words in the Revenge frame have mostly a single meaning, but many of the words examined in FrameNet have multiple senses, which almost always means that they belong to different frames. We decide that a word belongs to different frames (i.e., that it formally supports different LUs) on the basis of such criteria


Figure 1 Annotations from the Revenge frame.

as its relationship with other frames, the permitted inferences associated with given FEs, synonyms and antonyms, and the grammatical patterns in which they are expressed. For example, the word remember needs to be assigned to different frames as shown in the examples in Table 1. (The use found in Remember me to your parents has not been included.)

Table 1 Polysemy of remember

Frame name: Remembering_experience
Interpretation: recall of personal experiences, episodic memory
Examples: I remember as a child falling down the steps in my grandmother's house. I remember something touching me in the dark. I remember you as much taller.
Synonyms and paraphrases: recall, recollect, have a memory of; associated with count-noun memory
Complement types: VP-gerunds, S-gerunds, NP + Predicate
Relationships: not paired with forget or know

Frame name: Remembering_information
Interpretation: keeping knowledge, continuing to know
Examples: I didn't remember that the meeting was today. Do you remember what she said? I hope they remember my phone number.
Complement types: S-that, S-wh, VP-wh, 'concealed question' NP
Relationships: paired with frames led by forget and know

Frame name: Remembering_to_do
Interpretation: thinking of some preknown task or intention at the appropriate time
Examples: Did you remember to feed the cat? I hope she remembered her umbrella.
Complement types: VP-inf, NP with coerced verb
Relationships: paired with forget, not with know

The FrameNet Database

The FrameNet database has four main components: descriptions of frames and frame elements; the body of annotated sentences; lexical entries that show for each LU the variety of ways in which the FEs are grammatically signaled; and information about how the frames are related to each other.

The Basic Data

Proceeding by frames rather than words means that members of a single frame can include words from all major parts of speech. As seen in the annotations in Figure 1, the data collected involves more than verbs and their expressed arguments. Verb-headed and adjectival expressions that make use of the frame-bearing functions of various kinds of nouns occur with the help of support constructions of various kinds. We saw expressions such as take revenge, wreak vengeance, and exact retribution in the Revenge annotations, where the syntactic head and the semantic head are discrepant: to the extent that the verbs make their own semantic contribution, they are made to fit into the frame structure evoked by the object noun. And nouns can be


'supported' to create adjectival expressions with the help of prepositions, as in at risk, in danger, on fire, and scores of others. The FrameNet database allows the recovery of such information through associated browsing software. A commitment to account for all frame elements that are conceptually necessary for a given frame-bearing word has required the development of annotations for missing but understood elements, through a variety of indications of null instantiation. The concept definite null instantiation (DNI) is associated with a given FE in a given sentence as a signal of zero anaphora: some value for the FE thus tagged must be available in the shared context of speaker and hearer, and this is a property associated with particular LUs or the words in particular frames. The destination of verbs of Arriving can be omitted under anaphoric conditions if it would otherwise be expressed prepositionally; the speaker who says She's arrived! assumes that the addressee already knows the intended destination. The destination of reach cannot be left unexpressed: She's reached *(home).

The Frame-to-Frame Relations

FrameNet captures linguistic generalizations in a number of ways. The 'Net' part of the name refers to the frame-to-frame relations in addition to LU membership in individual frames, both of which are potential sources of generalization. The fact that the frame-specific FE Avenger has a subject-linking preference should be represented in some way that recognizes the 'agency' associated with this FE; although there are good


reasons for not wishing to be limited to the familiar thematic roles from the linguistic literature, it is nevertheless possible to see a great many verbal frames as instances of more abstract frames of action, change, motion, experience, causation, etc. Providing a systematic display of the full network of inheritance relations between frames is an ongoing area of development. Some frames represent complex events that are made up of component events, and these contained events can have frame structure as well. A structure of ‘subframe’ relations is created to show the role of Money_transfer in Commerce, the role of Ratifying_treaty in a larger Diplomacy scenario, and so on. In many cases, a subordinate frame presents a perspectivization on the larger frame. For example, a broadly conceived frame of Employment lays out a large set of situations and changes, within which there are perspective-taking frames of Hiring, Employing, Firing, Taking_a_job, Working, Quitting, and several others. A major frame of Criminal_justice_procedures contains components of Arresting, Arraigning, Trying, Adjudicating, and Sentencing, as well as a number of alternate routes through that chain. Relations such as those of inheritance and subframe are needed for deriving inferences from frame-based assertions, and this must include FE-to-FE binding between lower and higher frames, and transitively between the elements of lower frames where appropriate. The individual who undergoes Arrest and then Arraignment in two successive phases of a complex criminal_procedure scenario is the same as


the Defendant in the (later) Trial and Adjudication scenarios, and so on. For a well-designed system to carry out inference on frame-annotated texts, such bindings have to be made formally explicit, and the work to make this possible is under way.
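A minimal sketch of what such explicit FE-to-FE bindings might look like follows (all names and the representation here are hypothetical illustrations, not FrameNet's actual formalism):

# Subframes of a complex frame, in their temporal order.
SUBFRAMES = {
    "Criminal_process": ["Arresting", "Arraigning", "Trying", "Sentencing"],
}

# Each binding identifies a subframe FE with an FE of the containing frame.
FE_BINDINGS = {
    ("Arresting", "Suspect"):  ("Criminal_process", "Defendant"),
    ("Arraigning", "Accused"): ("Criminal_process", "Defendant"),
    ("Trying", "Defendant"):   ("Criminal_process", "Defendant"),
}

def same_individual(fe_a, fe_b):
    # Two subframe FEs denote the same participant iff both are bound
    # to the same FE of the containing frame.
    a, b = FE_BINDINGS.get(fe_a), FE_BINDINGS.get(fe_b)
    return a is not None and a == b

assert same_individual(("Arresting", "Suspect"), ("Trying", "Defendant"))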

Applications and Extensions

Frame semantics has in a sense two subfields, and these can be thought of as the drudgery part and the fun part. The drudgery, or painstaking foundation part, is the work of actually building a frame-based lexicon and integrating it into a particular kind of grammar of the language. The fun part is that of analyzing how the actual choice of a lexical item evokes a frame, and what follows from that evocation. Here are just two classes of such consequences. One type of consequence concerns aesthetic and persuasive effects: frame semantics may be able to provide a precise basis for the analysis of the effects stemming from word choice in poetry and prose as well as texts of a more propagandistic sort. Current political discourse in the United States makes a great deal of use of 'reframing' (Lakoff, 2004) to associate personal values with the propagandist's political positions: when talk of tax reductions is rephrased as tax relief, the image of someone who has been suffering is highlighted; when estate tax is spoken of as death tax, the unwelcome image of being punished for dying comes to the fore; if in the debates about the morality of abortion, a week-old fetus is referred to as the little baby, then the association is created of a small and helpless human being needing adult protection. Second, there are potentially powerful uses of frame-semantic notions within the fields of Natural Language Processing. As of this writing, several projects are under way to show the relevance of frame-semantic representation in such tasks as automatic sense resolution, semantic tagging, question answering, and machine translation. More far-reaching uses for work in NLP lie ahead.

See also: Cognitive Semantics; Collocations; Context Principle; Context; Dictionaries; Disambiguation; Generative Lexicon; Hyponymy and Hyperonymy; Lexical Conceptual Structure; Lexical Conditions; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Lexicology; Lexicon/ Dictionary: Computational Approaches; Polysemy and Homonymy; Representation in Language and Mind; Selectional Restrictions; Semantic Primitives; Speech Act Verbs; Synonymy; Syntax-Semantics Interface.

Bibliography

Atkins B T S, Fillmore C J & Johnson C R (2003a). 'Lexicographic relevance: selecting information from corpus evidence.' International Journal of Lexicography 16(3), 251–280.
Atkins B T S, Rundell M & Sato H (2003b). 'The contribution of FrameNet to practical lexicography.' International Journal of Lexicography 16(3), 333–357.
Fillmore C J (1968). 'The case for case.' In Bach E & Harms R (eds.) Universals in linguistic theory. New York: Holt, Rinehart, and Winston. 1–88.
Fillmore C J (1977). 'The case for case reopened.' In Cole P & Sadock M (eds.) Syntax and semantics, vol. 8: Grammatical relations. New York: Academic Press. 59–81.
Fillmore C J (1985). 'Frames and the semantics of understanding.' Quaderni di Semantica 6(2), 222–254.
Fillmore C J (1987). 'A private history of the concept of frame.' In Dirven R & Radden G (eds.) Concepts of case. Tübingen: Gunter Narr Verlag.
Fillmore C J (2003). 'Valence and semantic roles: the concept of deep structure case.' In Ágel V et al. (eds.) Dependenz und Valenz / Dependency and valency. Berlin: Walter de Gruyter. 457–475.
Fillmore C J & Atkins B T S (1992). 'Towards a frame-based organization of the lexicon: the semantics of RISK and its neighbors.' In Lehrer A & Kittay E (eds.) Frames, fields, and contrasts: new essays in semantics and lexical organization. Hillsdale: Lawrence Erlbaum Associates. 75–102.
Fillmore C J & Atkins B T S (1994). 'Starting where the dictionaries stop: the challenge for computational lexicography.' In Atkins B T S & Zampolli A (eds.) Computational approaches to the lexicon. Oxford: Oxford University Press. 349–393.
Fontenelle T (2003). 'FrameNet.' International Journal of Lexicography 16(3), 231.
Gildea D & Jurafsky D (2002). 'Automatic labeling of semantic roles.' Computational Linguistics 28(3), 245–288.
Goffman E (1974). Frame analysis: an essay on the organization of experience. New York: Harper and Row.
Katz J & Fodor J (1963). 'The structure of a semantic theory.' Language 39, 170–210.
Lakoff G (1987). Women, fire and dangerous things: what categories reveal about the mind. Chicago: University of Chicago Press.
Lakoff G (2004). Don't think of an elephant: know your values and reframe the debate. White River Junction, VT: Chelsea Green Publishing.
Minsky M (1981). 'A framework for representing knowledge.' In Winston P (ed.) The psychology of computer vision. New York: McGraw-Hill. 211–217.
Minsky M (1985). The society of mind. New York: Touchstone.
Petruck M R L (1996). 'Frame semantics.' In Verschueren J et al. (eds.) Handbook of pragmatics 1996. Philadelphia: John Benjamins. 1–13.
Piaget J (1971). Biology and knowledge: an essay on the relations between organic regulations and cognitive processes. Edinburgh: Edinburgh University Press.


Future Tense and Future Time Reference

Ö Dahl, Stockholm University, Stockholm, Sweden

© 2006 Elsevier Ltd. All rights reserved.

It is tempting to think of time simply as a line extending in both directions from the point at which we happen to be located. However, in constructing a theory of temporal semantics, we have to acknowledge that what is ahead of us – the future – is epistemologically radically different from both what is behind us – the past – and what is taking place at this moment – the present. Future states of affairs cannot be perceived or remembered, although they can be the subject of our hopes, plans, conjectures, and predictions. Philosophers such as Aristotle have claimed that the future has a special nature not only epistemologically but also ontologically: statements about the future do not yet have a determinate truth value. In a possible worlds framework, the 'branching futures model' can be seen as an expression of a similar idea: time looks like a tree rather than a line, and at any point in the tree there is only one way back into the past, but many branches lead into the future. Against this background, it is perhaps not so strange that there tend to be asymmetries in the ways in which temporal reference is structured in languages, and that in particular the grammatical category of tense often blurs into modality and evidentiality in the area of the future. Whether, for instance, the English auxiliaries shall and will should be seen as markers of future tense or rather as ordinary modal verbs is a much-debated issue, the importance of which depends on the stand one takes on another, equally contentious, issue: how essential it is to uphold the discreteness of grammatical categories. If it is acknowledged that it is normal for the semantics of grammatical items to combine temporal elements with components of a modal, evidential, or aspectual character, it may become more important to study how the weight of these different factors shifts over time, in the process of grammaticalization. From this perspective, it is notable that the diachronic sources of what grammars refer to as future tenses typically have exclusively nontemporal meanings, and the temporal meaning elements tend to grow stronger during the course of grammaticalization ('temporalization,' in the term of Fleischman, 1983), as future markers gradually obtain an obligatory status. English is a language that has advanced relatively far along the road towards obligatory marking of future time reference. In this regard, it is instructive to compare English to a language such as Finnish, in which there is hardly any grammaticalization of future time reference. In English, the sentence It is cold tomorrow, with the present tense of the copula is, sounds rather

strange: it is more natural to say it will (it'll) be cold tomorrow or it is going to be cold tomorrow. In Finnish, on the other hand, we may replace the adverb tänään 'today' in the sentence Tänään on kylmää 'Today is cold' with huomenna 'tomorrow,' yielding Huomenna on kylmää '(lit.) Tomorrow is cold,' without any further changes in the sentence. Thus, Finnish weather forecasts are typically formulated in the present tense, which is hardly possible in English. The English examples, however, also illustrate two other important points. First, the obligatoriness of future marking in English is not independent of the epistemological status of the statement. If it concerns an event that is fixed by some kind of schedule, English tends to use the present tense, although the time referred to is in the future, as in The train leaves at noon. Second, future marking is an area where we often find competition between two or more grammatical devices. For English, will and be going to have already been mentioned, but there are in fact several additional ways of referring to the future, such as by the present progressive (We are leaving at four) or by a combination of will and the progressive (The shop will be closing in five minutes), neither of which has a progressive meaning in the examples cited. Other languages are similar: Bybee et al. (1994) found at least two futures in 70% of the languages in their sample and at least three in close to 25%. As the word 'competition' suggests, the choice between the alternative ways of marking future is usually not reducible to any simple semantic or pragmatic distinction: rather, a number of different factors are at play: thus, will and be going to differ both semantically and stylistically. In many cases, differences between future-marking devices are attributable to what point they have reached in the grammaticalization process; in others, the differences reflect the original meanings of the diachronic sources of the items in question. Future-marking devices derive historically from a number of sources. Among the most common are auxiliary constructions expressing obligation ('must'), e.g., English shall; volition/intention ('want'), e.g., English will; and motion ('go' and 'come'), e.g., English be going to. However, a future tense may develop out of an earlier nonpast or imperfective as an indirect effect, for example of the expansion of an earlier progressive – the future uses are what is left of the old category after that expansion (possible examples mentioned by Bybee et al. (1994) are Margi, Tigre, Pangasinan, and Kui). In the development of futures, several things happen. To start with, there is normally an initial change of meaning that may involve both what has been called 'pragmatic strengthening' and 'semantic bleaching.'

Future Tense and Future Time Reference 339

Thus, a verb of volition, such as want, does not normally imply that the willed action is performed; to interpret something like She wants to leave as ‘She will leave,’ the meaning of the volitional verb has to be strengthened. But when extended to cases of ‘pure prediction’ such as It will rain, the volitional element has to be bleached altogether. Furthermore, the item gradually comes to be used in contexts where it is communicatively redundant, which leads to it being reduced phonetically (e.g., is going to > is gonna). Eventually, the grammaticalizing item may fuse with the main verb and become affixed to it. A famous example is the French inflectional future as in il chantera ‘he will sing,’ which derives from a construction involving the Latin verb habere ‘have’ in an obligative meaning. English shall/will have not become inflections, although they are usually cliticized to the preceding word in the reduced form ’ll. Being usually more advanced on the grammaticalization path, inflectional futures tend to have a wider range of uses than periphrastic ones. Normally, future-marking devices start out in main clauses, which are bearers of the illocutionary force of an utterance. English is an example of a language in which no future time marking is normally found in some types of subordinate clauses, such as conditionals and temporal clauses – for example, If it rains, you’ll get wet, where only the main clause is marked for future time reference. As it turns out, future marking in such clauses tends to be restricted to languages with inflectional futures (Bybee and Dahl, 1989). It was said above that future states of affairs are the subjects of our hopes, plans, conjectures, and predictions. The latter notions also represent what we can call different epistemological bases for statements about the future, and, as was also mentioned above, the way in which such statements are expressed in a language can depend on which kind of epistemological base it has. A major distinction may be drawn between intention-based and prediction-based future time reference. Particularly in everyday conversation, a large part of what is said about the future refers to plans and intentions of the participants. I announce what I intend to do, or ask you what you intend to do. This is clearly different from discussing what the weather will be like tomorrow. A straightforward grammatical opposition based on the distinction between intentionbased and prediction-based future time reference is less common than one would perhaps think, in view of the apparent cognitive salience of that distinction. Its importance lies rather in the observation that markers that are originally restricted to intentionbased future time reference tend to develop into

general future markers, which include prediction-based future time reference as central cases but can in the normal case still be used for intention-based future time reference. Another major parameter is the temporal distance between the speech act and the time point in the future that is being referred to. Immediacy is often cited as a condition on the use of certain future-marking devices, such as the English be going to or the French aller + infinitive construction. At a closer look, it often turns out that immediacy is a contributing factor but hardly the only one, as we shall illustrate below. It does happen, however, that more precise restrictions on temporal distance develop, although this is much less common for the future than for the past. Bybee et al. (1994) cite Mwera (original source Harries, 1950) as an example of a language that has three different future auxiliaries, ci for reference to the same day (hodiernal future), cika for the following day (crastinal future), and jiya, which they interpret as a general future. Typological surveys (Dahl, 1985; Bybee et al., 1994) have shown approximately equal numbers of inflectional and periphrastically expressed futures. In the sample in Dahl and Velupillai (2005), North America, Australia, central New Guinea, the Caucasus, and South Asia come out as areas where languages with inflectional futures are in a clear majority. Among areas where inflectional futures tend to be absent are Southeast Asia (where most languages are isolating) and northern Europe.

See also: Evidentiality; Modal Logic; Mood and Modality; Mood, Clause Types, and Illocutionary Force; Temporal Logic; Tense.

Bibliography

Bybee J & Dahl Ö (1989). 'The creation of tense and aspect systems in the languages of the world.' Studies in Language 13, 51–103.
Bybee J, Perkins R & Pagliuca W (1994). The evolution of grammar: tense, aspect, and modality in the languages of the world. Chicago: University of Chicago Press.
Dahl Ö (1985). Tense and aspect systems. Oxford: Blackwell.
Dahl Ö & Velupillai V (2005). 'Grammatical marking of future tense.' In Comrie B, Dryer M, Gil D & Haspelmath M (eds.) World atlas of linguistic structures. To be published by Oxford University Press.
Fleischman S (1983). 'From pragmatics to grammar: diachronic reflections on complex pasts and futures in Romance.' Lingua 60, 183–214.
Harries L (1950). A grammar of Mwera. Johannesburg: Witwatersrand University Press.


G

Game-Theoretical Semantics

J Hintikka and G Sandu, University of Helsinki, Helsinki, Finland

© 2006 Elsevier Ltd. All rights reserved.

The leading ideas of game-theoretical semantics (GTS) can be seen by considering the truth condition of a sentence S of an interpreted first-order language. Now, S is true in an obvious pretheoretical sense iff there exist suitable witness individuals testifying to its truth. Thus, (∃x)F[x] is true iff there exists an individual b such that F[b], (∀x)(∃y)F[x,y] is true iff for each a there exists an individual b such that F[a,b], and so on. As such examples show, one witness individual can depend on others. Hence, the existence of suitable witness individuals for S means the existence of a full array of Skolem functions for S, such as the function f in the sentence (∃f)(∀x)F[x, f(x)], which is equivalent to (∀x)(∃y)F[x,y]. Such arrays of Skolem functions can be seen to codify a winning strategy in a simple game (semantical game) between a 'verifier' V and a 'falsifier' F. The game G(S) associated with S is played on some domain in which the nonlogical constants of S have been interpreted. G(S) begins with S, but at each move the sentence that the players are considering changes. When a move is made, V chooses disjuncts and values of existential quantifiers, whereas F chooses conjuncts and values of universal quantifiers, proceeding from outside in. For G(∼S) a game rule tells V and F to exchange roles and continue as for G(S). If a play of G(S) ends with a true negated or unnegated atomic sentence or identity, V wins and F loses; if it ends with a false one, vice versa. S is true iff there exists a winning strategy for V and false iff there exists one for F. GTS amounts to the systematic use of such truth conditions. Game-theoretical ideas thus enter into GTS as highlighting the role of Skolem functions in logical theory. For this purpose, the whole generality of game theory is usually not needed. For instance, the only failure of perfect information that needs to be considered is a player's ignorance of earlier moves because only the individuals chosen at such moves can be arguments of Skolem functions.
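For concreteness, here is a minimal Python sketch of a semantical game over a finite domain (an illustration added here, not from the original article; the encoding of formulas as nested tuples is this sketch's own assumption). It computes whether the player currently in the verifying role has a winning strategy, with V choosing at disjunctions and existential quantifiers, F at conjunctions and universal quantifiers, and negation exchanging the players' roles:

# Formulas as nested tuples:
#   ('atom', pred, var, ...)  ('not', f)  ('and', f, g)  ('or', f, g)
#   ('all', var, f)           ('some', var, f)

def verifier_wins(formula, domain, env, interp):
    # True iff the player currently in the verifying role has a winning strategy.
    op = formula[0]
    if op == 'atom':
        _, pred, *args = formula
        return interp[pred](*(env[a] for a in args))
    if op == 'not':
        # Role exchange: the new verifier wins G(~S) exactly when the
        # old verifier loses G(S).
        return not verifier_wins(formula[1], domain, env, interp)
    if op == 'or':   # V chooses a disjunct: one winning choice suffices
        return any(verifier_wins(f, domain, env, interp) for f in formula[1:])
    if op == 'and':  # F chooses a conjunct: V must survive every choice
        return all(verifier_wins(f, domain, env, interp) for f in formula[1:])
    var, body = formula[1], formula[2]
    if op == 'some':  # V picks a witness individual
        return any(verifier_wins(body, domain, {**env, var: d}, interp)
                   for d in domain)
    if op == 'all':   # F picks a challenge individual
        return all(verifier_wins(body, domain, {**env, var: d}, interp)
                   for d in domain)
    raise ValueError(op)

# (for all x)(there is y) Succ(x, y) over {0, 1, 2}, with Succ = successor mod 3:
s = ('all', 'x', ('some', 'y', ('atom', 'succ', 'x', 'y')))
assert verifier_wins(s, {0, 1, 2}, {}, {'succ': lambda x, y: y == (x + 1) % 3})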

So far, GTS for quantified sentences merely spells out our ordinary concept of truth. In traditional first-order languages, truth in the sense of GTS coincides with truth according to a Tarski-type definition (assuming the Axiom of Choice). But the GTS treatment can be extended and varied in several ways not available otherwise.

1. By allowing informational independence in the sense of game theory, we obtain a more expressive logic called independence-friendly (IF) logic (cf. Hintikka and Sandu, 1989). The greater expressive power is due to the semantic job of quantifiers as expressing dependence relations between actual variables by means of their formal dependence on one another. In the received first-order logic, only some patterns of such dependence could be formulated because the dependence-indicating scope relation is nested and hence incapable of expressing other dependence patterns. This defect is eliminated in IF logic. The role of Skolem functions in GTS is illustrated by the fact that each IF first-order sentence has a sigma-one-one second-order translation, namely, the sentence that asserts the existence of its Skolem functions. Conversely, each sigma-one-one sentence can be translated into the corresponding IF first-order language.

2. The independence of a quantifier (Q2y) from another quantifier (Q1x) within whose syntactical scope it occurs can be expressed by writing it (Q2y/Q1x). Often it is convenient to restrict this notation to quantifiers of the form (∃y/∀x) and to assume that existential quantifiers are independent of one another, as are universal ones. (This simplification is used in this article.) As this slash notation suggests, the semantics of unextended first-order IF languages is not compositional. The limits of compositional methods in the semantics of IF logics have been investigated intensively (Hodges, 1997), and the impossibility of compositional semantics for IF logic normally interpreted has been shown (Cameron and Hodges, 2001; Sandu and Hintikka, 2001). The strength and naturalness of IF logic thus throws serious doubts on compositionality as a desideratum in semantic

342 Game-Theoretical Semantics

theorizing in general. In any case, many noncompositional languages can be treated semantically by means of GTS. 3. GTS can be extended to natural languages by allowing the substitution of names for entire quantifier phrases, as in Hintikka and Kulas (1983, 1985). Then the meaning of this phrase has to be taken into account by suitable additions to the output sentence. For instance, a game step (amounting to existential instantiation) might take the players from a singular sentence of the form (1) X — some Y who Z — W

to a sentence of the form (2) X — b — W, b is a Y and b Zs

Perhaps the greatest difference between natural language games and formal ones is that in the former the order of the application of game rules can be indicated by means other than scopes (labeled tree structures), for instance by the lexical items in question. This shows the limitations of the notion of scope, as argued in Hintikka (1997). At the very least, the two functions of scope (parentheses) have to be distinguished from one another as indicating the limits of binding (binding scope) and as indicating logical priority (priority scope). This distinction helps, among other things, to solve the problem of donkey sentences.

4. GTS can accommodate any concept, logical or nonlogical, whose meaning can be captured by a rule or rules in a semantical game. Cases in point are anaphoric pronouns, epistemic notions, genitives, only, and so on (cf. Hintikka and Sandu, 1991). For instance, the semantics of anaphoric pronouns can be captured by construing them, in effect, as existential quantifiers ranging over the individuals hitherto selected by V and F in a play of a semantical game. This extendability of GTS to nonlogical concepts throws into doubt the possibility of any sharp distinction between logical and nonlogical concepts. By means of IF logic, several mathematical concepts can be expressed on the first-order level that could not be captured in ordinary first-order logic, including equicardinality, infinity, (topological) continuity, and König's lemma. In general, IF logic extends greatly the scope of what can be done in mathematics on the first-order level (cf. Hintikka, 1996).

5. The notion of informational independence plays an especially important role in epistemic logic, including the logic of questions and answers. Their logic depends essentially on the logical properties of the desideratum of a question. Such a desideratum is of the form K_I S, where K_I means 'I know that.' It expresses the epistemic state that an answer to the question is calculated to bring about. The question ingredient is now of the form (∃x/K_I) for wh-questions and (∨/K_I) for propositional questions. All the most important notions related to questions and answers can be defined by means of the slash notation.

6. The law of the excluded middle amounts to the requirement of determinacy in semantical games and, hence, is not always satisfied. Determinacy fails, in fact, in IF logic, where the negation ~ is used but not the contradictory negation ¬. The latter can be introduced by fiat, but within the GTS framework this can be done only when it occurs sentence-initially. When both negations are present, we obtain a logic whose algebraic structure is that of a Boolean algebra with an operator in Tarski's sense. In that logic, we can define generalizations of such notions as orthogonality and dimension.

7. The failure of tertium non datur shows that IF logic is closely related to intuitionistic logic. In a sense, IF logic is in fact more elementary than ordinary first-order logic. For instance, an application of the GTS truth definition to a sentence does not involve infinite closed totalities of individuals even when the domain is infinite. If such totalities are relied on, we can give truth conditions also for sentences in which ¬ occurs internally. The resulting logic is as strong as the entire second-order logic (with the standard interpretation), even though it is first-order logic in the sense that all quantifiers range over individuals. In general, GTS shows that our concept of negation is irreducibly ambiguous between the strong (dual) negation ~ and the contradictory negation ¬. This fact explains a number of features of the behavior of negation in natural language (cf. Hintikka, 2002a).

8. The simplest type of IF sentence that is not translatable into ordinary quantificational notation is of the following form, known as a Henkin quantifier sentence:

(3) (∀x)(∀y)(∃z/∀y)(∃u/∀x) F[x, y, z, u]

The negation ~S of an IF sentence S can be formed in the same way as in ordinary logic. For instance, the negation of (3) is

(4) (∃x)(∃y)(∀z/∃y)(∀u/∃x) ~F[x, y, z, u]

The 'independence-free' meaning of (4) is brought out by its equivalence with

(5) (∃f)(∃g)(∃x)(∃y)(∀z)(∀u)((z = f(x) & u = g(y)) ⊃ ~F[x, y, z, u])

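On a finite domain, the claim that an IF sentence is true iff its Skolem functions exist can be checked by brute force. A minimal Python sketch (our illustration; the encoding of Skolem functions as tuples over range(n) and all names are assumptions, not from the article) tests the Henkin sentence (3) this way:

```python
from itertools import product

def henkin_true(F, n):
    """(3) holds on the domain range(n) iff there are Skolem functions f, g,
    with z = f(x) and u = g(y), such that F(x, y, f(x), g(y)) for all x, y."""
    domain = range(n)
    # every map from range(n) to range(n), encoded as a tuple indexed by argument
    candidates = list(product(domain, repeat=n))
    return any(
        all(F(x, y, f[x], g[y]) for x in domain for y in domain)
        for f in candidates for g in candidates
    )

# z may depend on x, and u on y, so the identity functions witness truth here:
print(henkin_true(lambda x, y, z, u: z == x and u == y, 3))  # True
# here z would have to depend on y, which the slash in (3) forbids:
print(henkin_true(lambda x, y, z, u: z == y and u == x, 3))  # False
```

The point of the example is only to make the Skolem-function reading of (3) concrete; nothing in GTS depends on such enumeration.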

9. IF logic is misnamed in that it allows for the representation of more dependencies (not just more independencies) than ordinary logic. A case in point is constituted by irreducibly mutually dependent variables. In order to express them, we have to generalize the notion of function and to take seriously the idea that a functional identity y = f(x) expresses a dependence relation (cf. Hintikka, 2002b). This can be taken to mean that if such an identity fails to have a truth value for some x₀, f(x₀) has to be taken to represent a probability distribution. Such distributions must then be able to be reified so as to act as argument values, too. With this understanding, the mutual dependence of x and y satisfying the condition F[x,y] can be expressed by

(6) (∃f)(∃g)(∀x)(∀y)(x = f(y) & y = g(x) & F[x,y])

which is obviously equivalent to

(7) (∀x)(∀y)(∃z/∀x)(∃u/∀y)(x = z & y = u & F[x,y])

since the Skolem functions for the two slashed existential quantifiers in (7) are precisely z = f(y) and u = g(x), functions of the variables on which those quantifiers are allowed to depend.

10. The GTS approach can be varied in other ways, for instance, by restricting V's strategies to recursive or otherwise constructive ones as discussed in Hintikka (1996). In a different direction, a semantic game can be divided into subgames with specific rules for the transfer of information from one subgame to another one. Such subgames can be used for the interpretation of conditional sentences (cf. Hintikka and Sandu, 1991).

11. The notion of truth is put into a new light by IF logic. Tarski's well-known result shows that truth cannot be defined in an ordinary first-order language L for L itself, even when the syntax of L can be represented in L, for instance by means of Gödel numbering. The reason is that quantifiers ranging over numbers as numbers and quantifiers ranging over numbers as codifying numerical expressions must be informationally independent of one another. This requirement cannot be implemented in the received first-order logic. Because such independencies can be expressed in IF logic, a first-order IF language allows the formulation of a truth predicate for itself. Because Tarski's theorem is thus due to the expressive poverty of traditional logic (rather than its excessive deductive strength, as usually thought), definability problems are no obstacles to the explicit use of the notion of truth also in natural languages (cf. Hintikka, 1996; Sandu, 1998). Because much of the recent philosophical discussion of the notion of truth has in effect been prompted by Tarski's undefinability theorem, most of this discussion has to be reconsidered.

12. GTS, in the sense used here, is characterized by the definition of truth of S as the existence of a winning strategy for V in a semantical game G(S) associated with S. This meaning of GTS has to be distinguished from other uses of games and game theory in logic and linguistics, such as dialogical games (including questioning games), games of formal proof, games of communication, and so on. There are interesting connections between these different kinds of games. For instance, games of formal proof can be considered as mapping the different courses that a play of a semantical game can take. Again, the strategies of deductive games are closely related to the strategies of the corresponding questioning games when all answers are known a priori to be true (i.e., games of pure discovery).

13. Historically, GTS was inspired by Ludwig Wittgenstein's notion of language game. Philosophically, these two share the idea that language-world relations are constituted by certain rule-governed human activities. This implies that intentional relations are dispensable in semantics. In neither kind of game are the moves normally made by speech acts or other language acts. Both games are objective in that their theory depends only on their rules.

In sum, GTS is not merely one branch of formal semantics among others. It is an approach to all semantics, based on the possibility of considering the language-world links as being mediated by games in the abstract sense of the mathematical theory of games.

See also: Boole and Algebraic Semantics; Categorial Grammar, Semantics in; Compositionality; Discourse Representation Theory; Donkey Sentences; Dynamic Semantics; Event-Based Semantics; Formal Semantics; Logic and Language; Multivalued Logics; Negation; Nonmonotonic Inference; Operators in Semantics and Typed Logics; Semantic Value; Situation Semantics; Truth Conditional Semantics and Meaning.

Bibliography
Cameron P & Hodges W (2001). 'Some combinatorics of imperfect information.' Journal of Symbolic Logic 66, 673–684.
Hintikka J (1987). 'Game-theoretical semantics as a synthesis of truth-conditional and verificationist theories of meaning.' In Lepore E (ed.) New directions in semantics. London: Academic Press. 235–258.
Hintikka J (1996). The principles of mathematics revisited. Cambridge, UK: Cambridge University Press.
Hintikka J (1997). 'No scope for scope.' Linguistics and Philosophy 20, 515–544.
Hintikka J (2002a). 'Negation in logic and natural language.' Linguistics and Philosophy 25, 585–600.
Hintikka J (2002b). 'Quantum logic as a fragment of independence-friendly logic.' Journal of Philosophical Logic 31, 197–209.
Hintikka J & Halonen I (1995). 'Semantics and pragmatics for why-questions.' Journal of Philosophy 92, 636–657.
Hintikka J & Kulas J (1983). The game of language. Dordrecht: D. Reidel.
Hintikka J & Kulas J (1985). Anaphora and definite descriptions: Two applications of game-theoretical semantics. Dordrecht: D. Reidel.
Hintikka J & Sandu G (1989). 'Informational independence as a semantical phenomenon.' In Fenstad J E, Frolov I T & Hilpinen R (eds.) Logic, methodology and philosophy of science VIII. Amsterdam: North-Holland. 571–589.
Hintikka J & Sandu G (1991). On the methodology of linguistics: A case study. Oxford: Basil Blackwell.
Hintikka J & Sandu G (1996). 'Game-theoretical semantics.' In van Benthem J & ter Meulen A (eds.) Handbook of logic and language. Amsterdam: Elsevier. 361–410.
Hodges W (1997). 'Compositional semantics for a language of imperfect information.' Logic Journal of the IGPL 5, 539–563.
Sandu G (1998). 'IF logic and truth-definition.' Journal of Philosophical Logic 27, 143–164.
Sandu G & Hintikka J (2001). 'Aspects of compositionality.' Journal of Logic, Language, and Information 10, 49–61.

Gender
D Cameron, Worcester College, Oxford, UK
© 2006 Elsevier Ltd. All rights reserved.

What Is Gender and Why Do Linguists Study It?

The term gender will be used in this article to refer to the social condition of being a woman or a man. Gender in this sense is distinguished from sex, meaning biological femaleness or maleness. Sex is potentially relevant for areas of linguistic inquiry where biological mechanisms are at issue (e.g., neurolinguistics), but in most research on language what is relevant is the social differentiation of men and women in particular communities. Generally speaking, gender-linked patterns of language-use arise not because men and women are naturally different, but because of the way that difference is made significant in the organization of social life and social relations. The forms and precise social significance of gender can vary considerably across cultures and through time.

Gender in its 'men and women' sense is also distinct from the use of the term in linguistics to denote a grammatical category. The relationship between linguistic and social gender has been of interest to researchers who study the linguistic representation of men and women, but it will not be considered here. Rather, I will concentrate on empirical research investigating patterns of language-use linked to the gender of the language-user. This kind of research may be done within a number of academic disciplines and paradigms, but here I will be mainly concerned with research that adopts a broadly sociolinguistic approach.

Why is gender an important concern in sociolinguistics? First, because the social fact of gender differentiation is consequential for processes of general interest to sociolinguists, such as language variation, change, and shift. Gender must therefore be considered in any satisfactory account of those linguistic processes. But the influence does not run in one direction only. Just as gender influences the workings of certain linguistic processes, so language-using is part of the process whereby gender is produced and reproduced as a salient feature of the social landscape. Language can be seen as one resource used by social actors to construct various kinds of masculinity and femininity, aligning themselves with some gender positions and differentiating themselves from others. The study of language and gender, then, aims to illuminate both linguistic and social processes. For many researchers its goals are also political: gender is seen from a feminist perspective not as a neutral difference but as a socially constructed inequality.

Theorizing Gender: From Difference to Diversity

Both expert and popular discussions of gender and language-use have traditionally revolved around the question of what differentiates women as a group from men as a group. It should be said immediately that virtually none of the putative differences are categorical: statements about men's and women's linguistic behavior are almost invariably probabilistic, asserting that the analysis of a particular data sample has shown a statistically significant tendency for women to use feature X more than men, or vice versa. Well-known generalizations of this form include "women use more tag questions than men," "women's speech is closer to the prestige standard than men's," and "women are more polite than men." In each of these cases, the generalization has been contested because the evidence is not clear-cut: different studies have yielded conflicting results, and even researchers who accept the generalization would acknowledge that exceptions to it exist.

It is not coincidental that all the claims just cited covertly treat men as the norm and women as the gender whose behavior requires explanation. Explanations have had a tendency to psychologize, treating women's linguistic behavior as an expression of the personality traits they are thought to develop as a consequence of their distinctive social roles. For instance, the "women are more standard" generalization has been explained as a consequence of women's "status-consciousness," a psychological disposition which has been related in turn to women's own alleged social insecurity (since their status depends on that of the male head of household) and/or their desire to secure social advantages for their children, among other things by providing them with models of 'correct' speech.

Even in the 1970s there was dissatisfaction with accounts like these, which relied on stereotypical and increasingly outdated understandings of women's social position. But more radically, since the early 1990s, the idea of gender as a simple binary difference – however conceptualized – has been subjected to intense critical scrutiny. A particular problem critics have pointed to in the binary framework is its tendency to flatten out the internal diversity that exists within each gender category: the differences among women, or men, may be as great as or greater than the differences between the two genders. People are after all never just men and women, but are always men and women of particular ages, classes, ethnic and geographical origins, occupations, social roles and statuses, and religious and political beliefs. The form gendered behavior takes is inflected by these other dimensions of identity and experience. Older and younger women, or working-class and middle-class women, may be as different from one another as they are from their male peers; each of these groups may define its femininity more by contrast with the femininity of some other group of women than in opposition to masculinity. That is still a question of gender, but it is not simply about differences between men and women.

With broader definitions in mind, many researchers today have rejected the old assumption that an inquiry into gendered linguistic behavior is essentially a search for large-scale generalizations about the language used by women and men. The traditional preoccupation with gender differences has increasingly given way to a concern with gender diversity – in other words, with the use of linguistic variability as a resource for producing a range of masculine and feminine styles in different communities or contexts. Researchers follow the injunction to 'look locally' at the form gender identities and relations take in specific communities, on the grounds that gender-linked patterns of language-use will emerge from the localized social practices in which women and men are engaged. How our understanding of language and gender has been changed by this move from difference to diversity, and from a global to a more localized perspective, will be explored in the following sections.

Language and Gender in the Variationist Paradigm

The 'variationist paradigm' in sociolinguistics uses quantitative methods to investigate linguistic variation and change. In its classic form, pioneered by William Labov and others in the 1960s, variationist research focused on two issues particularly: the distribution of linguistic variables (in most cases, phonological ones) across major demographic categories in a speech community (e.g., social class, ethnicity, sex), and the motivations and mechanisms of linguistic change. These issues are interrelated, since linguistic change does not usually involve a sudden break, but rather a gradual shift in the balance between already-existing variant realizations of a particular variable. The motivation for change is typically related to the social meaning of the variants involved, which is linked in turn to their demographic distribution.

Early in his career Labov studied the variable (r) – that is, the pronunciation or non-pronunciation of /r/ after vowels, as in car, farm – in the English of New York City. New York was historically non-rhotic (not /r/-pronouncing), but Labov found evidence that the incidence of rhoticity was rising. This change reflected speakers' desire to avoid the stigma attached to non-rhotic pronunciations, used most frequently by lower-class speakers as part of a generally low-status New York urban dialect. Women were more advanced than men in the shift. In an earlier study conducted on the island of Martha's Vineyard off the coast of Massachusetts, Labov had found that a centralized pronunciation of the diphthongs (ay) – as in white – and (aw) – as in out – was gaining ground from the non-centralized pronunciation. Centralized diphthongs were associated with local island tradition rather than with external measures of status (e.g., wealth, power, education); the motivation for adopting them was a desire to assert a 'real islander' identity in opposition to the incomers and tourists who increasingly dominated the island economy. This change was led by men.

These findings, and analogous ones from other speech communities, suggested that women and men were instrumental in different kinds of changes. Women led in "change from above" (the adoption of prestige pronunciations for variables whose social meaning people are consciously aware of, like (r) in New York City), and men in "change from below" (the adoption of particular non-prestige pronunciations as an expression of identity, which takes place below the level of conscious awareness, as with (ay) and (aw) on Martha's Vineyard).

The idea that men and women play different roles in language change was not new. Eighteenth-century enthusiasts for 'fixing' language, who saw change as undesirable, frequently attributed innovations to women's fickleness; but later commentators who saw language change as 'natural' or progressive more often held some variant of the view cited by Otto Jespersen (1922): "Women do nothing more than keep to the traditional language which they have learnt from their parents and hand on to their children, while innovations are due to the initiative of men." The advent of variationist studies, which offered solid statistical evidence for the involvement of both genders in change, should arguably have put an end to the myth of female conservatism, but in fact that myth seems to have influenced the way at least some early variationists interpreted their findings. Two senses of conservative were sometimes confusingly conflated: the sense in which it means "resistant to any change," and the political sense in which it connotes allegiance to the values of a dominant class. The women who were leading the shift towards /r/-pronouncing in New York City were clearly being innovative rather than conservative in the first sense; but the fact that women typically led in changes towards a prestige norm was often taken to evidence conservatism in the second sense – women's alleged status-consciousness made them susceptible to the influence of middle-class prescriptions for 'correct' usage. Meanwhile, 'real' innovation occurred in local vernacular speech by way of change from below, which was then thought to be initiated most often by men.

Probably the most familiar of all variationist claims about gender – that the speech of women in all classes tends to be closer than men's to the prestige standard – avoids the issue of conservatism/innovation, and is to that extent less confusing, but essentially it makes the same point and reflects the same understanding of femininity and masculinity. It should be noted, however, that this familiar claim no longer accurately represents the orthodox variationist position. In 1990, after reviewing the accumulated evidence, Labov formulated some general principles that can be summarized as follows:

• Where sociolinguistic variables are stable, men use higher frequencies of nonstandard variants than women.
• Where there is change from above, women lead in the adoption of the incoming prestige variant.
• Where there is change from below, women are more advanced than men in their use of the innovative variant.

The big change here is that women are now considered prototypical leaders in both the major types of linguistic change. There is, then, no overall tendency for women to favor more standard and men more vernacular pronunciations, or for women to be (socially or linguistically) conservative. The neat gender-complementarity of the older model has broken down, producing what Labov dubs a "gender paradox": women "conform more closely than men to sociolinguistic norms that are overtly prescribed, but conform less than men when they are not" (Labov, 2001: 293). And this paradox is not amenable to explanations in terms of women's distinctive social roles or psychological dispositions, since a major part of what now needs to be explained is, precisely, the non-uniform behavior of women as a group.

At this point it becomes relevant to return to the theoretical development mentioned above, namely the shift from gender difference to gender diversity. In a diversity framework it is not assumed that all women's or all men's linguistic behavior will be uniform. If we accept that women and men are internally diverse groups, the fact that some women do one thing while others do the opposite need not be considered a paradox at all. In research conducted in the 1970s in coastal South Carolina, Patricia Nichols (1998) observed younger black island women shifting more markedly than men of the same generations from the Creole variety Gullah towards Standard English (SE). Older black women, on the other hand, were more conservative than their male peers. Nichols's account relates the women's behavior to local labor market conditions. Historically, islanders had lived by farming, but economic change had propelled them into waged labor on the mainland. Their opportunities, however, were circumscribed by gender. Whereas black men could find well-paid jobs in construction, the best-paid positions available to black women were mainly clerical, demanding standard language and literacy skills. Hence black families had increasingly invested in educating girls, with the result that younger women made more use of SE than either men or older women who had not had the same economic opportunities. White mainland women (who did not speak Gullah but rather Appalachian English) were a different case again: where their husbands, more advantaged in the labor market than either women or black men, earned enough to support dependents, they did not have the same need as black women to work outside the home and had neither opportunity nor incentive to adopt a more standard speech.

This study provides an early illustration of the benefits of 'looking locally.' Nichols showed that people's linguistic behavior was not simply a consequence of their membership of particular social categories – the fact that they were men or women, black or white – but rather emerged from their attempts to negotiate local arrangements involving those categories (e.g., a labor market segregated by gender and race). Looking locally suggests that conflicting tendencies in women's sociolinguistic behavior may reflect the different choices, opportunities, and risks that different groups of women are faced with. Even if it is found (as in some studies it has been) that the same women's behavior instantiates both elements of Labov's paradox (each in relation to a different set of linguistic variables), we may be able to explain this conflict by looking carefully at the social conditions these women inhabit.

Why, though, is it specifically women whose behavior exhibits the paradox? Merely invoking the diversity of women cannot provide an answer, since men are no less diverse. If Labov's principles have any validity or value, do they not point to some global, or at least supra-local, phenomenon requiring an explanation? Penelope Eckert is among those variationist researchers who believe that looking locally can and should be combined with 'thinking globally' about gender. Making use of the concept of the community of practice (CoP), a group defined by the engagement of its members in some joint endeavor (examples might include a workplace team or sports team, a church congregation or a teenage gang), Eckert points out that language-using is shaped by the activities and practices in which it is embedded; and that in most societies, gender exerts an influence on the activities people habitually engage in. If men and women differ in the range of CoPs they belong to and/or the terms on which they participate in joint endeavors, those differences can be expected to have linguistic consequences – and perhaps not only local ones, though since CoPs are local entities, investigation must begin at the local level.

Eckert carried out research in a suburban high school near Detroit, where identity and social practice were organized around the contrast between 'jocks' (who embrace the official definition of school success, e.g., participating actively in both academic and extra-curricular pursuits) and 'burnouts' (who reject the school's values and resist active participation in its official culture). Affiliation to these groups was marked linguistically as well as in other ways: several phonological variables that are currently undergoing a change known as the Northern Cities Vowel Shift were appropriated differentially to signify either jock or burnout identity. In both groups, however, young women were more advanced than young men in their use of the variants that marked group membership: linguistically, women were the 'jockiest' jocks and the most 'burned-out' burnouts. So although the jock and burnout women were less similar to one another in terms of actual linguistic behavior than any other pair of subgroups in Eckert's sample, at a more abstract level they had something in common – a tendency to be extreme in their linguistic marking of group membership.

Eckert's account of this tendency could be seen as a reworking of the old idea about women's status-consciousness – except that status is reinterpreted in terms of participants' local values. She suggests that women use language to claim status as 'good' jocks or 'good' burnouts, and gives reasons why this strategy is especially favored by women. The reasons have to do with the different terms on which women and men participate in the jock and burnout CoPs. Men can gain status by displaying ability (e.g., in sports or fighting), but even if women possess the same skills, they are not rewarded in the same way. A woman's status has more to do with her appearance and personal style: her capital as a jock or a burnout is, in other words, largely symbolic. Consequently, women work harder to assert in-group status through symbolic details like the styling of their jeans and the pronunciation of their vowels.

Eckert has suggested that this pattern is not confined to adolescent subcultures. Women, as the subordinate gender, may perceive their status and legitimacy to be in question in all kinds of CoPs; making a symbolic display of in-group credentials – pointedly presenting oneself as, say, a 'real' lawyer/athlete/truck driver, using resources such as language that are accessible to women as well as men – is one way to deal with this marginal social positioning. If so, it is evident that inequality, rather than just difference, shapes the relationship of language to gender; and there are patterns of gender inequality which are not just operative at local level. These observations may bear on Labov's gender paradox: the combination of conformist and nonconformist behavior identified with women in his principles could be produced by women asserting their status in both global terms (by adopting variants that carry supra-local prestige) and local settings (by adopting variants that can be used to assert one's credentials as a member of a particular community). If both kinds of status are relevant to an individual, as will commonly be the case in societies where people have some range of identities and roles, then there is no contradiction in seeking to maximize both; and as Eckert points out, there are reasons why women might be particularly active in this regard.

Eckert has described her approach as part of a third wave in variationist studies. In the first wave, researchers used survey techniques to reveal large-scale patterns of variation and change: Labov's research in New York City is an example. A second wave employed ethnographic methods to study the workings of variation and change in particular local communities, relating linguistic variables to the social categories and divisions that were meaningful for community-members themselves. The third wave is similar to the second in that research is ethnographic and locally-based, but the focus of interest shifts from simply correlating linguistic and demographic variables to investigating the social meaning of linguistic variables and their clustering into styles. It is these styles rather than individual variables that are linked to membership of local social categories (for instance, jock and burnout). Eckert comments: "Since [the third wave] takes social meaning as primary, it examines not just variables that are of prior interest to linguists (e.g., changes in progress) but any linguistic material that serves a social/stylistic purpose. And in shifting the focus from dialects to styles, it shifts the focus from speaker categories to the construction of personae." These preoccupations give the new wave of variationist research a good deal in common with another major current of language and gender research, which is the subject of the next section.

Gendered Discourse Styles

Variationist investigations of gendered linguistic behavior were motivated in the first instance by a desire to understand the workings of variation and change as general linguistic processes. Other researchers, however, were motivated to investigate gendered linguistic behavior primarily by an interest in what language might contribute to the social processes whereby gender is reproduced. These researchers often sought to identify what I am calling 'gendered discourse styles,' in other words, ways of speaking that signal masculinity or (more commonly) femininity by deploying characteristic combinations of linguistic features. Whereas variationists took the link between gender and particular realizations of phonological variables to be arbitrary in linguistic terms, researchers of discourse style often supposed that the linguistic markers of men's style and women's style would be functionally linked to the traits and roles of men and women.

One early attempt to delineate a gendered discourse style in English (though on the basis of the analyst's own intuitions rather than empirical investigation) was Robin Lakoff's account of women's language (WL), first proposed in 1973. The characteristic features of WL included a preference for milder over more strongly tabooed expletives, exaggerated politeness, an elaborate color vocabulary, use of empty adjectives ('lovely,' 'divine') and intensifiers ('so nice'), hedging to reduce the force of an utterance and/or the speaker's degree of commitment to it, and phrasing statements as questions, using rising intonation and/or end-of-sentence question tags. This formally rather ill-assorted cluster of features is made coherent by attending to their functions in discourse. Many of them, Lakoff argued, communicate insecurity: a lack of confidence in one's own opinion, a desire to avoid giving offence and a need to seek approval from others. Lakoff saw these as traits women were socialized to develop in order to conform to mainstream notions of femininity, and she linked this insecurity to women's subordinate status. WL, from her perspective, was a display of women's culturally-imposed powerlessness in a male-dominated and sexist society.

This line of argument places Lakoff among the many 1970s feminists who adopted what has been called the 'dominance' approach to language and gender – in other words, they sought to explain gendered linguistic behavior in terms of men's power and women's subordination. But since Lakoff clearly regarded WL as inferior, for many purposes, to the alternative (which she labeled 'neutral language' rather than 'men's language' – Lakoff understood femininity to be not parallel to masculinity but marked with respect to it), her work has frequently been cited as an example of the 'deficit' approach, in which women's behavior is judged negatively against an explicitly or implicitly male norm. (In my own view Lakoff's work has elements of both deficit and dominance, and if anything, the dominance view is stronger.)

In the 1980s both the abovementioned approaches were challenged by the advent of a '[cultural] difference' approach to the question of gendered discourse styles. Difference researchers (the best-known of whom is the interactional sociolinguist Deborah Tannen) argued that women and men, like members of different ethnic or national groups, behaved differently because they had unconsciously internalized contrasting norms for the performance of communicative acts. These norms, in turn, could be related to the gender-differentiated organization of children's peer groups, which both reflected and reproduced quasi-cultural differences in men's and women's general orientation to the world. Proponents of this argument have frequently suggested, for instance, that women typically orient to people and relationships while men are more oriented to objects and information. Such a difference in orientation might be expected to affect many aspects of discourse performance: what topics men and women prefer to talk about, how they address others, how they manage the floor, how much and what kind of politeness they use, the frequency of questions and minimal responses, the form of directives, the use of deixis, even the structure of nominals referring to objects.

Research along the lines suggested by the difference approach continues. One notable development is the use of corpus methods to search for statistical evidence of the predicted gender-linked patterns in multi-million word collections of texts. The computing scientist Shlomo Argamon and his associates, for instance, have recently developed an algorithm for which they claim 80% success in deciding whether a formal written text was composed by a man or a woman. The criteria used, which were originally selected using purely statistical methods, include the frequency of determiners and quantifiers (high frequency is a good predictor of male authorship) and that of personal pronouns, especially I, you, she and their variants (high frequencies are associated with female authorship). The researchers suggest that these correlations arise because of men's greater concern with specifying the properties of objects and women's greater concern to specify relations between persons addressed or mentioned in the text. A simplified version of their algorithm, known as the Gender Genie, has been made available on a website, inviting visitors to measure their own or others' writing against the Genie's templates for male and female style.

But this quest for gendered discourse styles, whether couched in terms of deficit, dominance, or difference, is vulnerable to the same criticism as early variationist work: that it is a global approach, predicated on a binary model of gender, and taking insufficient account of intra-gender diversity. Many researchers on gender and discourse have taken Penelope Eckert's point that new understandings of gender not only require us to look locally (the community of practice framework is influential in discourse studies, too), more radically they "shift the focus from speaker categories to the construction of personae." In discourse studies as in third wave variationist research, the emphasis is now on mapping the way social actors use what Eckert calls "linguistic material that serves a social/stylistic purpose" (which in this current of work may be anything from a single vowel segment to an entire code) to construct gendered personae. Some researchers are influenced by Judith Butler's notion of gender as 'performative,' that is, not given in advance but brought into being when people act (or in this case, speak) in ways that are culturally coded as masculine or feminine. It follows that social actors may construct a range of gendered personae (not just one each), and that their performances may include unusual, artificial, or deviant ones. There has been considerable interest recently in researching the linguistic performance of gender by non-mainstream sexual subjects in a variety of cultures and contexts – gay men, lesbians, transgendered individuals, sex workers, drag artists, and teenage nerds who remain marginal to peer-cultures organized around heterosexual activity. In more mainstream cases, too, the diversity of gendered performances and personae is emphasized, with many studies setting out to show that particular CoPs perform gender linguistically in particular ways, and that even a single small group of women, or men, will often make use of stylistic resources to shift between different kinds of femininity or masculinity.

Conclusion

This article has focused on the nature, results, and implications of the recent shift from difference to diversity as a framework for studying the relationship of language and gender. It should be noted that this shift, though unquestionably important, is not by any means total. Third wave variationist research coexists with older approaches in which the mechanisms of variation and change and their relationship to demographic categories continue to be explored. The quest for general accounts of men's and women's discourse styles continues to flourish alongside studies that 'look locally.' But to the extent that language and gender researchers' approaches have shifted, the resulting research is now beginning to prompt its own questions, and in some cases criticisms. Perhaps the most significant of these is whether the new emphasis on looking locally, stressing the diversity of gendered behavior, is resulting in a reluctance to 'think globally' about gender. For some commentators, this caution about treating gender as an overarching social category risks losing sight of the political dimension of language and gender research. It may also distort the overall picture by over-emphasizing the agency of subjects in constructing gendered personae, while downplaying the structural and institutional factors that in reality constrain their performances. The anthropologist Susan Phillips, for instance, has observed that the way gender is enacted locally by the members of an adolescent peer-group or the inhabitants of a village is not unaffected by what happens above the local level, in state and other public institutions (and in contemporary conditions, one might add, in transnational institutions like global communications media). The challenge, then, is to articulate the analysis of local and diverse social practices with an analysis of larger social and political structures.

See also: Classifiers and Noun Classes; Concepts; Connotation; Context and Common Ground; Conventions in Language; Cooperative Principle; Face; Generic Reference; Honorifics; Politeness Strategies as Linguistic Variables; Taboo, Euphemism, and Political Correctness.

Bibliography
Argamon S, Koppel M & Fine J (2003). 'Gender, genre and writing style in formal written texts.' Text 24, 321–346.
Benor S, Rose M, Sharma D & Sweetland J (eds.) (2002). Gendered practices in language. Stanford, California: CSLI Publications.
Bergvall V, Bing J & Freed A (eds.) (1997). Rethinking language and gender research: theory and practice. London: Longman.
Butler J (1990). Gender trouble: feminism and the subversion of identity. New York: Routledge.
Cameron D (ed.) (1998). The feminist critique of language: a reader. London: Routledge.
Cameron D & Kulick D (2003). Language and sexuality. Cambridge: Cambridge University Press.
Coates J (ed.) (1998). Language and gender: a reader. Oxford: Blackwell.
Eckert P (1997). 'Gender and sociolinguistic variation.' In Coates (ed.). Oxford: Blackwell. 64–75.
Eckert P & McConnell-Ginet S (1992). 'Think practically and look locally: language and gender as community-based practice.' Annual Review of Anthropology 12, 461–490.
Eckert P & McConnell-Ginet S (1999). 'New generalizations and explanations in language and gender research.' Language in Society 28, 185–201.
Eckert P & McConnell-Ginet S (2003). Language and gender. Cambridge: Cambridge University Press.
Gal S (1990). 'Between speech and silence: the problematics of research on language and gender.' In di Leonardo M (ed.) Gender at the crossroads of knowledge. Berkeley: University of California Press. 175–203.
Hall K & Bucholtz M (eds.) (1995). Gender articulated: language and the socially constructed self. London: Routledge.
Holmes J (1995). Women, men and politeness. London: Longman.
Holmes J & Meyerhoff M (eds.) (2003). The handbook of language and gender. Malden, MA: Blackwell.
Johnson S & Meinhof U (eds.) (1997). Language and masculinity. Oxford: Blackwell.
Kulick D (1999). 'Transgender and language: a review of the literature and suggestions for the future.' GLQ 5, 605–622.
Labov W (1990). 'The intersection of sex and social class in the course of linguistic change.' Language Variation and Change 2, 205–254.
Labov W (2001). Principles of linguistic change 2: social factors. Oxford: Blackwell.
Lakoff R (1975). Language and woman's place. New York: Harper & Row.
Leap W & Boellstorff T (eds.) (2003). Speaking in queer tongues: globalization and gay language. Urbana: University of Illinois Press.
Livia A & Hall K (eds.) (1997). Queerly phrased: language, gender and sexuality. New York: Oxford University Press.
Nichols P (1997). 'Black women in the rural South: conservative and innovative.' In Coates (ed.). 55–63.
Ochs E (1992). 'Indexing gender.' In Duranti A & Goodwin C (eds.) Rethinking context: language as an interactive phenomenon. Cambridge: Cambridge University Press. 335–358.
Phillips S U (2003). 'The power of gender ideologies in discourse.' In Holmes & Meyerhoff (eds.). 455–476.
Sherzer J (1987). 'A diversity of voices: women's and men's speech in ethnographic perspective.' In Phillips S, Steele S & Tanz C (eds.) Language, gender and sex in comparative perspective. Cambridge: Cambridge University Press. 95–120.
Tannen D (1990). You just don't understand: men and women in conversation. New York: Morrow.
Tannen D (ed.) (1993). Gender and conversational interaction. New York: Oxford University Press.

General Semantics
K Allan, Monash University, Victoria, Australia
© 2006 Elsevier Ltd. All rights reserved.

General semantics was initiated by Korzybski (Korzybski, 1958 [1933, 1938, 1948]) and propagated through the journal ETC., by Chase (Chase, 1938, 1954) and by Hayakawa (Hayakawa, 1972 [1949, 1964]). Its aims are "The study and improvement of human evaluative processes with special emphasis on the relation to signs and symbols, including language" (Chase, 1954: 128). Korzybski wrote:

The present-day theories of 'meaning' are extremely confused and difficult, ultimately hopeless, and probably harmful to the sanity of the human race. ... There is a fundamental confusion between the notion of the older term semantics as connected with a theory of verbal 'meaning' and words defined by words, and the present theoretical term general semantics, which deals only with neurosemantic and neurolinguistic living reactions of Smith₁, Smith₂, etc., as their reactions to neurosemantic and neurolinguistic environments as environment (Korzybski, 1958: xxx).

General semantics was (and is) supposed to have therapeutic value: "In general semantics we utilize what I call 'neuro-semantic relaxation,' which, as attested by physicians, usually brings about 'normal' blood pressure" (Korzybski, 1958: xlvii) – but no attestations are in fact supplied. The heir to semantics-as-therapy is neuro-linguistic programming (Bandler and Grinder, 1975, 1979, 1982; Grinder and Bandler, 1976; O'Connor and Seymour, 1990).

General semantics has a mission to educate people against the dangers of being hoodwinked by propaganda, euphemism, gobbledygook, and even ordinary, everyday language. In part, the movement was a response to the affective and all too effective jargon of 20th century European totalitarianism (both fascism and communism) and of McCarthyism in the United States. So a constant theme is "Don't be bamboozled by what is said, search for the meaning and substance in all that you hear." Bolinger blames general semantics for giving rise to the jibe That's just semantics, in which the word semantics has the sense 'pettifogging' (Bolinger, 1980: vii).

General semantics "tells you what to do and what to observe in order to bring the thing defined or its effects within the range of one's experience" (Hayakawa, 1972: 157). More precisely, the literal meaning of a statement expressed by sentence S is given by defining the method for observationally verifying the conditions under which S is properly used. There are several problems with this method. First, as Ayer (Ayer, 1946: 12) admits, there is no upper limit on the number of conditions on S's use. Second, verificationism interprets "conditions under which S is properly used" as "conditions under which the statement expressed by S is true"; consequently, values other than truth must be found for types of illocutionary acts such as requestives, directives, expressives, permissives, and declarations. Third, Hayakawa (Hayakawa, 1972: 54) contrasts the simplicity of using a tape-measure to verify the truth of This room is fifteen feet long with the impossibility of operationally verifying to everyone's satisfaction Angels watch over my bed at night or Ed thinks he dreamt he was in bed with Marilyn Monroe. Such sentences are judged meaningless and therefore synonymous with one another – which they are not. Fourth, operational semantics affords no account of the compositionality of meaning. Fifth, general semantics has little or nothing to say about semantic relationships within a language.

In sum, general semantics has little to offer the 21st century linguist; but for what it does offer, check out the Institute of General Semantics.

See also: Taboo, Euphemism, and Political Correctness; Use Theories of Meaning.

Bibliography
Ayer A J (1946). Language, truth and logic (2nd edn.). London: Gollancz.
Bandler R & Grinder J (1975). The structure of magic I: a book about therapy and language. Palo Alto: Science and Behavior Books.
Bandler R & Grinder J (1979). Frogs into princes: neuro-linguistic programming. Moab, UT: Real People Press.
Bandler R & Grinder J (1982). Reframing: neuro-linguistic programming and the transformation of meaning. Moab, UT: Real People Press.
Bolinger D L (1980). Language: the loaded weapon. London: Longman.
Chase S (1938). The tyranny of words. New York: Harcourt, Brace.
Chase S (1954). The power of words. New York: Harcourt, Brace.
Grinder J & Bandler R (1976). The structure of magic II. Palo Alto: Science and Behavior Books.
Hayakawa S I (1972 [1949, 1964]). Language in thought and action (3rd edn.). New York: Harcourt, Brace, Jovanovich.
Korzybski A (1958 [1933, 1938, 1948]). Science and sanity: an introduction to non-Aristotelian systems and general semantics (4th edn.). Lakeville, CT: International Non-Aristotelian Publishing Corporation.
O'Connor J & Seymour J (1990). Introducing neuro-linguistic programming: the new psychology of personal excellence. Wellingborough: Crucible Press.
Paulson R E (1987). Language, science, and action: Korzybski's general semantics, a study in comparative intellectual history. Westport, CT: Greenwood Press.

Relevant Website
http://www.generalsemantics.org – Website of the Institute of General Semantics.


Generating Referring Expressions
R Dale, Macquarie University, Sydney, NSW, Australia
© 2006 Elsevier Ltd. All rights reserved.

Introduction: An Informal Characterization of the Problem

The generation of referring expressions is a key component task in many studies of natural language generation, and one that has attracted a significant amount of attention over the past 15–20 years. Every natural language generation system has to talk about things, and in order to do so it has to determine how to refer to the things it wants to talk about. This may seem relatively straightforward when the entities to be talked about have names or possess other unique identifiers – so-called rigid designators (Kripke, 1980) – as is the case for people and certain other special types of entities, such as cities, newspapers, or ships. In such cases, we can usually assume that the use of the name or identifier will be enough to pick out the entity in question. However, most entities in the world do not have unique names; in the absence of a unique name, in order to refer to such an entity, the generator is faced with the task of constructing a description that allows the hearer to identify the intended referent. The process is necessarily a constructive one, or at least one of choice among a number of possibilities, since what counts as an appropriate form of reference will depend on contextual factors. Determining what description to use is the problem of referring expression generation.

As human users of a natural language, we are faced with this task countless times every day. Suppose, for example, we are in a meeting, and I am drawing a diagram on a whiteboard. While carrying out this exercise, I discover that I need to use a different color to highlight some aspect of my diagram, and so I decide to ask you to pass me a particular whiteboard pen that is out of my reach on the other side of the table. If there are a number of other whiteboard pens on the table, and other ones scattered around the room in other locations, a bare request of the form Could you pass me the whiteboard pen? is unlikely to do the job. I need to identify for you the particular pen I want you to pass to me, and to do so, in the absence of an associated pointing gesture – suppose both my hands are occupied – I need to formulate a verbal description of this pen that allows you to determine which pen I have in mind. The pen has many properties that I could use in such a description, including some that are intrinsic to the pen (the fact that it is red, that it is of a particular brand, and so on), and some that it possesses by virtue of its relationship to other entities (the fact that it is nearer to you than it is to another colleague, the fact that it is on the table rather than on the floor, and so on). Which of these properties it is most appropriate to use in my description will depend on a range of contextual factors, including what other entities are in the environment, and what knowledge I might believe you to have of the entity in question. For example, I might say Could you pass me the pen that Bill brought yesterday? but this is appropriate only if I have reason to believe that you are aware of the provenance of the pen. Of course, as humans, we are normally able to carry out the task of producing an appropriate and effective identifying description with apparent effortlessness. The goal of researchers interested in natural language generation is to design algorithms that can achieve the same end results.

A More Formal Characterization of the Problem

From the point of view of computational implementation, natural language generation (NLG) is generally conceived of as a process that involves many subtasks, including content determination, text planning, sentence planning, lexicalization, and grammatical realization. Yorick Wilks is credited with the observation that, if natural language understanding is like counting from one to infinity, then natural language generation is like counting from infinity to one: in every one of the subtasks that make up NLG, a key problem that presents itself is the question of what kind of input to take as a starting point. For each subtask, the lack of an agreed consensus on the answer to this question is a major stumbling block to the consolidation of research results from different perspectives. For example, it is difficult to compare the merits of two components that carry out grammatical realization – the process of producing a syntactic structure, given some input semantic structure that contains the content to be conveyed – if the two components embody different views as to what a semantic structure should contain.

The task of referring expression generation stands out from among these other subtasks in that there is a widely agreed starting point. It is probably this fact more than any other that has contributed to the level of interest in addressing this particular research problem. The bulk of research in the generation of referring expressions takes as given the following assumptions:

• We have a domain that consists of a collection of entities, identified by symbolic identifiers, such as e₁ through eₙ.
• The entities in the domain are characterized in terms of a set of attributes and the values that the entities have for these attributes; so, for example, our knowledge base might represent the fact that entity e₁ has the value pen for the attribute type, and the value red for the attribute color, where the symbols used to designate attribute values correspond to conceptual or semantic elements rather than to surface lexical items. We will use the notation ⟨attribute, value⟩ for attribute–value pairs; for example, ⟨color, red⟩ indicates the attribute of color with the value red.
• In a typical context, when we want to refer to some e₁, which we call the intended referent, there will be other entities from which the intended referent must be distinguished; these are generally referred to as potential distractors.

The goal of referring expression generation is therefore to find some collection of attributes and their values that distinguish the intended referent from all of the potential distractors in the context. If such a collection of attributes and their values can be found, then it serves as a distinguishing description. Formally, we can characterize this as follows: Let r be the intended referent, and C be the set of distractors; then, a set L of attribute–value pairs will represent a distinguishing description if the following two conditions hold:

• C1: Every attribute–value pair in L applies to r: that is, every element of L specifies an attribute value that r possesses.
• C2: For every member c of C, there is at least one element l of L that does not apply to c: that is, there is an l in L that specifies an attribute value that c does not possess; l is said to rule out c.

We can think of the generation of referring expressions as being governed by three principles, referred to by Dale (1992) as the principles of adequacy, efficiency, and sensitivity; these are Gricean-like conversational maxims (Grice, 1975) framed from the point of view of the specific task of generating referring expressions. The first two of these principles are primarily concerned with saying neither too much nor too little: the principle of adequacy requires that a referring expression should contain enough information to allow the hearer to identify the referent, and the principle of efficiency requires that the referring expression should not contain unnecessary information. The principle of sensitivity, however, has a different concern: it specifies that the referring expression constructed should be sensitive to the needs and abilities of the hearer or reader. Accordingly, the definition of a distinguishing description specified here should really include a third component:


. The entities in the domain are characterized in terms of a set of attributes and the values that the entities have for these attributes; so, for example, our knowledge base might represent the fact that entity e1 has the value pen for the attribute type, and the value red for the attribute color, where the symbols used to designate attribute values correspond to conceptual or semantic elements rather than to surface lexical items. We will use the notation hattribute, valuei for attribute–value pairs; for example, hcolor, redi indicates the attribute of color with the value red. . In a typical context, when we want to refer to some e1, which we call the intended referent, there will be other entities from which the intended referent must be distinguished; these are generally referred to as potential distractors. The goal of referring expression generation is therefore to find some collection of attributes and their values that distinguish the intended referent from all of the potential distractors in the context. If such a collection of attributes and their values can be found, then it serves as a distinguishing description. Formally, we can characterize this as follows: Let r be the intended referent, and C be the set of distractors; then, a set L of attribute–value pairs will represent a distinguishing description if the following two conditions hold:  C1: Every attribute–value pair in L applies to r: that is, every element of L specifies an attribute–value that r possesses.  C2: For every member c of C, there is at least one element l of L that does not apply to c: that is, there is an l in L that specifies an attribute value that c does not possess; l is said to rule out c.

We can think of the generation of referring expressions as being governed by three principles, referred to by Dale (1992) as the principles of adequacy, efficiency, and sensitivity; these are Gricean-like conversational maxims (Grice, 1975) framed from the point of view of the specific task of generating referring expressions. The first two of these principles are primarily concerned with saying neither too much nor too little: the principle of adequacy requires that a referring expression should contain enough information to allow the hearer to identify the referent, and the principle of efficiency requires that the referring expression should not contain unnecessary information. The principle of sensitivity, however, has a different concern: it specifies that the referring expression constructed should be sensitive to the needs and abilities of the hearer or reader. Accordingly, the definition of a distinguishing description specified here should really include a third component:

C3: The hearer knows or can easily perceive that conditions C1 and C2 hold.

In other words, the hearer must realize that the distinguishing description matches the intended referent and none of the potential distractors, and ideally this realization should not require a large perceptual or cognitive effort on the hearer's part. In broad terms, the task of referring expression generation is assumed to fall within the overall architecture of a natural language generation system in the following manner. We generally assume that some prior stage of processing has determined what speech acts and embedded propositions should be expressed in the surface form output, whether written or spoken. For example, the referring expression generator might be invoked when we have an input structure corresponding to something like request(h, pass(h, s, e1)), where h is the hearer, s is the speaker, and e1 is some entity: in other words, the speaker has determined that it wants to produce an utterance that requests the hearer to pass the speaker the entity in question. An appropriate utterance corresponding to this structure might begin Please pass me . . . . The provenance of the input structure is not considered to be of relevance to the task of referring expression generation; producing it is typically the responsibility of the sentence-planning component, and linguistic realization will be responsible for fleshing out the embedded proposition in an appropriate syntactic form, such as the request described here. The task of referring expression generation is to decide how to refer to the entities in the proposition. Note that, strictly speaking, this includes determining how to refer to both the hearer and the speaker. The chosen syntactic form in this case rules out the need for an explicit reference to the hearer, and we will assume that the choice of the first-person singular personal pronoun to refer to the speaker is straightforward; in reality, however, things may not always be so simple. If we leave aside those cases in which the intended referent has a unique name, the task can be thought of, in Fregean terms, as mapping from a referent to something like a sense (Frege, 1892). Generally speaking, the output of the process is seen as being a fragment of semantic content, rather than as a collection of lexical elements; the lexicalization of the chosen semantic material is considered to be a separate and subsequent task. This allows the later process to make further and more surface-oriented choices; for example, once the content of a description has been chosen, surface realization might then still have the option of realizing that description as either the red pen or the pen that is red. This view makes the referring expression generation task one of


content determination, and raises some questions as to where this process is best located within the natural language generation task as a whole: generally, we think of content determination as having taken place before, for example, sentence planning. This question and its ramifications merit further exploration, but this is beyond the scope of this article.

Approaches to the Problem

Early Work

As noted previously, the generation of referring expressions is a component task in almost all NLG systems. Discussion of issues that arise in the broader context of NLG system construction can be found in the seminal early works of Davey (1979), McDonald (1980), and McKeown (1985). However, in each of these cases, the generation of referring expressions was only one of many tasks the authors tried to address, and so the problem was necessarily only touched on, and not given an extensive treatment. The first major work to look specifically at the problem of referring expression generation was that of Appelt (1985). This work contained many insights into the range of problems that a thoroughgoing solution needs to consider, placing the task firmly within the framework of speech acts (Searle, 1969). Following other work that has attempted to model speech acts by means of artificial intelligence (AI)-style planning operators, Appelt developed a plan-based approach to constructing references, whereby the speaker reasons about the effects on the hearer of using particular elements of content in a referring expression.

Producing Minimal Distinguishing Descriptions

The early works of Davey, McDonald, McKeown, and Appelt clearly established the need to view the generation of referring expressions as a specific task to be addressed within a generation system. However, none of these works provided a formally well-specified algorithm for carrying out the task. The principle of efficiency outlined previously suggests that, other things being equal, we desire an algorithm that will deliver a referring expression that mentions as few attributes as possible: a minimal distinguishing description. From the point of view of computational complexity, the problem of constructing a minimal distinguishing description is equivalent to finding the minimal size set cover. In computational complexity theory, this problem is known to be NP-hard (Garey and Johnson, 1979), and any such algorithm is probably computationally impractical when entities have any more than a small number of properties that might be used in descriptions. Much of the work in

providing algorithms for referring expression generation has therefore been concerned with finding heuristic workarounds that avoid these computational complexity issues. There is insufficient space here to consider each of the algorithms in the literature in detail; however, to provide a flavor of what such algorithms contain, we show here the first of the fully specified algorithms to appear, as published in Dale (1992); this has served as a starting point for many of the subsequently proposed algorithms. The algorithm, which is essentially a variant of Johnson's greedy heuristic for minimal set cover (Johnson, 1974), is as follows: Let L be the set of properties to be realized in our description; let P be the set of properties known to be true of our intended referent r, where a property is a combination of an attribute and a value for that attribute; and let C be the set of potential distractors as defined earlier. The initial conditions are thus as follows:

C = {all distractors}; P = {all properties true of r}; L = {}.

In order to describe the intended referent r with respect to the set C, we do the following:

1. Check Success:
   if |C| = 0 then return L as a distinguishing description
   elseif P = ∅ then fail
   else goto Step 2.

2. Choose Property:
   for each pi ∈ P do: Ci ← C ∩ {x | pi(x)}
   Chosen property is pj, where Cj is the smallest set.
   goto Step 3.

3. Extend Description (with respect to the chosen pj):
   L ← L ∪ {pj}; C ← Cj; P ← P − {pj}
   goto Step 1.
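The algorithm transcribes almost line for line into executable form. The following Python rendering is ours (the names greedy_describe and survivors, and the dictionary encoding of entities, are illustrative choices rather than anything from the original):

```python
def greedy_describe(referent, distractors):
    """Greedy heuristic (after Dale 1992) for a distinguishing description.

    referent:    dict of attribute -> value for the intended referent
    distractors: dict of entity id -> attribute dict for the other entities
    Returns a list of (attribute, value) pairs, or None if none exists.
    """
    C = dict(distractors)          # C = {all distractors}
    P = set(referent.items())      # P = {all properties true of r}
    L = []                         # L = {}
    while C:                       # Check Success: stop when |C| = 0
        if not P:
            return None            # properties exhausted but distractors remain: fail
        # Choose Property: for each p, the survivors are C intersected with
        # {x | p(x)}; pick the p whose surviving distractor set is smallest.
        def survivors(p):
            attr, val = p
            return {e: d for e, d in C.items() if d.get(attr) == val}
        p_j = min(P, key=lambda p: len(survivors(p)))
        # Extend Description: add p_j to L, shrink C, remove p_j from P.
        C = survivors(p_j)
        P.discard(p_j)
        L.append(p_j)
    return L
```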

To see how this works, suppose we have a context that contains the following entities with the properties indicated, and that the intended referent is e1:

e1: ⟨type, pen⟩, ⟨color, red⟩, ⟨brand, Staedtler⟩, ⟨size, large⟩
e2: ⟨type, pen⟩, ⟨color, blue⟩, ⟨brand, Staedtler⟩, ⟨size, small⟩
e3: ⟨type, pen⟩, ⟨color, red⟩, ⟨brand, Stabilo⟩, ⟨size, large⟩
e4: ⟨type, pen⟩, ⟨color, green⟩, ⟨brand, Staedtler⟩, ⟨size, large⟩
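Encoding this domain for the sketch above (again our illustration) reproduces the behavior described in the next paragraph; note that ties between equally discriminating properties are broken arbitrarily:

```python
entities = {
    "e1": {"type": "pen", "color": "red",   "brand": "Staedtler", "size": "large"},
    "e2": {"type": "pen", "color": "blue",  "brand": "Staedtler", "size": "small"},
    "e3": {"type": "pen", "color": "red",   "brand": "Stabilo",   "size": "large"},
    "e4": {"type": "pen", "color": "green", "brand": "Staedtler", "size": "large"},
}

def describe(target):
    distractors = {e: d for e, d in entities.items() if e != target}
    return greedy_describe(entities[target], distractors)

print(describe("e1"))  # e.g. [('color', 'red'), ('brand', 'Staedtler')]
print(describe("e2"))  # a single pair suffices, e.g. [('color', 'blue')]
print(describe("e3"))  # [('brand', 'Stabilo')]
```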

Here, the property of being red rules out more distractors than does any other property possessed by e1: it rules out both e2 and e4, whereas the brand and size attributes only rule out one distractor each,


and the type attribute has no impact at all. We still then have to add additional information to rule out e3: here, size does not help, but adding the brand does what is needed. On the other hand, if the intended referent was e2, the color or size alone would be sufficient, and if the intended referent was e3, the brand alone would be sufficient. Verifying that the algorithm does indeed provide these results is left as an exercise for the reader. In reality, since noun phrases always (or almost always) contain head nouns, the head noun indicating type is also incorporated into the resulting description, and for completeness the algorithm given here would have to be augmented to ensure this. The algorithm here is intended to be quite general and domain independent; wherever we can characterize the problem in terms of the basic assumptions laid out earlier, this algorithm can be used to determine the content of context-dependent referring expressions. However, the algorithm also suffers from some limitations:

• As stated, it may satisfy the principles of adequacy and efficiency, but it does not pay heed to the principle of sensitivity: there is no guarantee that the properties selected will even be perceptible to the hearer. In the preceding scenario, for example, if the labels that indicate the brands of the pens are face down on the table, and the pens have no other brand-distinguishing characteristics, then the brand information is of no real value for the hearer.

• The algorithm only makes use of one-place predicates (such as the fact that our whiteboard pen is red), but as already noted, we can also make use of relational properties (such as the fact that the pen is on the table rather than on the floor) when describing entities.

Since this early algorithm was developed, a wide range of alternatives has been proposed, either to address these particular problems or to extend the coverage of the algorithms in other ways.

More Efficient Algorithms

Although the algorithm described here avoids the computational complexity of a literal interpretation of the principle of efficiency, it is still a relatively expensive way of computing a description. A second algorithm for referring expression generation that has had widespread influence is Dale and Reiter’s (1995) incremental algorithm. This algorithm has three distinctive properties. First, the algorithm sacrifices the goal of finding a minimal distinguishing description in the interests of efficiency and tractability. It does this by considering the attributes that might be used in a description in a predefined order. This means that on occasion it may include a property

with a discriminatory power that is made redundant by some subsequently incorporated property; however, Dale and Reiter argued that this is not necessarily a bad thing, since human-generated referring expressions also often contain informational redundancy (see, for example, Pechmann (1989)). It remains to be seen whether the algorithm provided by Dale and Reiter provides the same kinds of redundancies as those produced by humans. Second, although the incremental algorithm contains a domain-independent, and therefore quite generally applicable, algorithm for building up the content of a referring expression, it does so by reference to a predefined list of properties, the content of which is dependent on the domain. This provides a convenient balance between generality and domain specificity. Third, when the algorithm considers whether a given property would be useful to include in a description, it carries out a check to determine whether the property in question is one that the hearer would be able to make use of in identifying the referent. In so doing, it also explicitly aims to address the principle of sensitivity. The incremental algorithm has served as the basis for a number of other algorithms in the literature.
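A simplified sketch of the incremental strategy (our reconstruction, which omits details of Dale and Reiter's full algorithm such as type subsumption and the obligatory inclusion of the head noun) makes the contrast with the greedy search visible: attributes are tried in a fixed preference order, and a value is kept whenever it rules out at least one remaining distractor, without ever being retracted:

```python
PREFERRED = ["type", "color", "size", "brand"]   # domain-dependent preference order

def incremental_describe(referent, distractors):
    """Sketch of the incremental strategy of Dale and Reiter (1995)."""
    C = dict(distractors)
    L = []
    for attr in PREFERRED:
        if not C:
            break                      # success: all distractors already ruled out
        if attr not in referent:
            continue
        value = referent[attr]
        survivors = {e: d for e, d in C.items() if d.get(attr) == value}
        if len(survivors) < len(C):    # the value does some work: keep it
            L.append((attr, value))
            C = survivors
    return L if not C else None        # None: this attribute set cannot distinguish
```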

Referring to Entities Using Relations

Another deficiency (noted previously) with respect to the greedy heuristic algorithm was that it could only make use of one-place predicates in determining which attributes of an entity to use in its description. The reason for this limitation is to avoid adding significant complexity to the algorithm: if we want to build referring expressions of the form the red pen next to the coffee cup, note that the entity to which we are relating the intended referent (in this example the coffee cup) is itself an entity for which we have to construct a referring expression. Once the generator decides that some other entity has to be introduced in order to identify the intended referent, a recursive call to the process of referring expression generation is required. There is clearly scope here for an infinite regress, as in the red pen next to the coffee cup next to the red pen next to the coffee cup next to . . . . In informal terms, it would appear to be straightforward to deal with such cases appropriately, but capturing the required behavior algorithmically is a little more difficult. As a solution, Dale and Haddock (1991) proposed an algorithm based on constraint satisfaction. This effectively builds up collections of properties for each of the entities to be described in parallel, and determines when the properties (or more precisely, the relations) used to describe one entity also support the identification of another entity. Krahmer et al. (2003) have proposed a solution based on viewing


the domain of entities, properties, and relations as a labeled graph, with the problem of referring expression generation then being one of constructing an appropriate subgraph. This provides a very elegant solution to the problem, with well-understood mathematical properties.

Logical Extensions: Sets, Booleans, and Quantifiers

So far, we have pointed to algorithms that permit the use of both one-place predicates and relations in the construction of referring expressions. Although this provides for a wide range of possibilities, it still does not encompass all of the devices that humans use in constructing referring expressions. Notably, the algorithms discussed so far assume that the intended referent is a singular individual, and that it is to be picked out by the logical conjunction of the properties in the description. There are, of course, other possibilities here. We may sometimes need to refer to sets of entities (Please pass me all the pens), and it may sometimes be convenient to refer to entities not just in terms of conjunctions of properties, but also via other Boolean possibilities, such as the use of negation and disjunction. To return to our earlier example, it might in some circumstances be most appropriate for me to say Please pass me the red pen that hasn’t run out of ink; and in cases in which we want to refer to a set of entities, there may be no property that is shared by all the intended referents, requiring an expression such as the red pen and the blue pen. These ideas are explored in detail in van Deemter (2002), who presented generalizations and extensions of the incremental algorithm that cater to these possibilities. Creaney (2002) offered an algorithm that allows the incorporation of logical quantifiers, such as all and some, into descriptions.
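To give a feel for the simplest of these extensions, the success conditions C1 and C2 generalize straightforwardly to sets of referents, as in the following sketch (ours, handling only conjunctions of properties and so well short of van Deemter's full Boolean treatment; applies_to is the helper defined earlier): every property must hold of every intended referent, while every non-referent must still be ruled out.

```python
def describes_set(description, referents, distractors):
    """C1/C2 generalized to a set of intended referents (conjunction only)."""
    # C1': every attribute-value pair applies to every intended referent.
    if not all(applies_to(p, r) for p in description for r in referents):
        return False
    # C2': every distractor is still ruled out by at least one pair.
    return all(
        any(not applies_to(p, c) for p in description)
        for c in distractors
    )
```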

Broader Issues and Outstanding Problems

In the foregoing discussions we have surveyed, albeit very briefly, the major aspects of referring expression generation that have been the focus of attention in the development of algorithms over the past 15 years. There are many other algorithms in the literature beyond those presented here, but the problems addressed are essentially the same. In this final section, we turn to aspects of the problem that have not yet received the same level of detailed exploration.

Other Forms of Anaphoric Reference

We have focused here on the construction of definite descriptions: references to entities that are in the common ground, either by virtue of their presence in the environment or because they have already been introduced into the discourse. Definite descriptions are

not the only form of anaphoric reference: we can, of course, also use pronouns such as it and one-anaphoric expressions such as the red one to identify intended referents. Although the literature does contain some work on generating these forms of reference, they have not received the degree of attention that has been lavished on definite descriptions, and in particular there has been little work that has seriously attempted to integrate the various forms of reference in a unified algorithmic framework. Other forms of definite reference have also been neglected: in particular, there is no substantive work that explores the generation of associative anaphoric expressions, such as the use of a reference like the cap in a context in which the hearer can infer that this must be a first mention of the cap of a particular pen that is already salient in the discourse.

Initial Reference

An even more striking gap in the literature is any serious treatment of initial reference, as opposed to subsequent or anaphoric reference. Subsequent references to entities have a tendency to reuse information used in earlier references; so, for example, I might introduce an entity as that Stabilo pen on the table and subsequently refer to it as the Stabilo pen. The generation literature not only tends to ignore this influence of the initial form of reference on subsequent referring expressions, but is in general quite silent on the problem of how initial references can be constructed. There are a great many ways in which I might refer to a specific entity when introducing it into the discourse, and when we move beyond simple examples of collections of similar objects, as are often used in the literature, it becomes clear that the form of initial reference used has a lot to do with the purpose of the discourse and other aspects of the discourse context. Thus, my decision as to whether to introduce someone into the discourse as a man I met at the bus stop, or a mountain climber I met last night, or an interesting guy I met, will depend on what I have planned in terms of content for the rest of the discourse. The complexities of planning and reasoning required to explain how this selection process works are far beyond our current understanding.

The Pragmatics of Reference

Following on from the previous point, relatively little work has looked at the broader context of reference, and in particular the fact that, when we construct a referring expression, we may be doing many things simultaneously and attempting to achieve a variety of purposes. Building on the earlier explorations of Appelt (1985), the work of Kronfeld (1990) is a notable exception here, but we lack a worked-out computational theory of how reference functions within


the wider context of language use, and, in particular, we have no formally well-specified explanations of the variety of purposes of reference. The bulk of the work in the field has focused on referring expressions as linguistic devices to distinguish intended referents from other entities in a given context, but this is a rather narrow and low-level view. We also use reference to maintain topic, to shift topic, to introduce contrasting entities, and to navigate a hearer’s focus of attention in various ways, among a range of other higher level purposes. These are all relatively unanalyzed notions from a formal perspective, and the relationships between them need to be worked out more clearly before we can say that we have a proper computational theory of the use of reference and the part it plays in language. As is often the case, approaching this problem from the perspective of natural language generation may provide insights that can also enrich our understanding of natural language analysis. See also: Context and Common Ground; Context Principle;

Coreference: Identity and Similarity; Default Semantics; Definite and Indefinite; Definite and Indefinite Articles; Definite and Indefinite Descriptions; Direct Reference; Generative Lexicon; Indefinite Pronouns; Pronouns; Proper Names; Proper Names: Philosophical Aspects; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Rigid Designation.

Bibliography

Appelt D E (1985). Planning English sentences. Cambridge: Cambridge University Press.
Creaney N (2002). 'Generating descriptions containing quantifiers: aggregation and search.' In Kibble R & van Deemter K (eds.) Information sharing: reference and presupposition in language generation and interpretation. Stanford, CA: CSLI.
Dale R (1992). Generating referring expressions. Cambridge, MA: MIT Press.

Dale R & Haddock N (1991). 'Content determination in the generation of referring expressions.' Computational Intelligence 7(4), 252–265.
Dale R & Reiter E (1995). 'Computational interpretations of the Gricean maxims in the generation of referring expressions.' Cognitive Science 19, 233–263.
Davey A C (1979). Discourse production. Edinburgh: Edinburgh University Press.
Frege G (1892). On sense and reference. [Reprinted in Geach P & Black M (eds.) (1960). Translations from the philosophical writings of Gottlob Frege. Oxford: Basil Blackwell.]
Garey M R & Johnson D S (1979). Computers and intractability: a guide to the theory of NP-completeness. San Francisco: W. H. Freeman.
Grice H P (1975). 'Logic and conversation.' In Cole P & Morgan J (eds.) Syntax and semantics, volume 3: speech acts. New York: Academic Press. 43–58.
Johnson D (1974). 'Approximation algorithms for combinatorial problems.' Journal of Computer and System Sciences 9, 256–278.
Krahmer E, van Erk S & Verleg A (2003). 'Graph-based generation of referring expressions.' Computational Linguistics 29(1), 53–72.
Kripke S (1980). Naming and necessity. Cambridge, MA: Harvard University Press.
Kronfeld A (1990). Reference and computation: an essay in applied philosophy of language. Cambridge: Cambridge University Press.
McDonald D D (1980). Natural language production as a process of decision making under constraints. Ph.D. diss., MIT, Cambridge, MA.
McKeown K R (1985). Text generation: using discourse strategies and focus constraints to generate natural language text. Cambridge: Cambridge University Press.
Pechmann T (1989). 'Incremental speech production and referential overspecification.' Linguistics 27, 89–110.
Searle J R (1969). Speech acts: an essay in the philosophy of language. Cambridge: Cambridge University Press.
van Deemter K (2002). 'Generating referring expressions: Boolean extensions of the incremental algorithm.' Computational Linguistics 28(1), 37–52.

Generative Lexicon

J Pustejovsky, Brandeis University, Waltham, MA, USA

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Generative Lexicon (GL) introduces a knowledge representation framework that offers a rich and expressive vocabulary for lexical information. The motivations for this are twofold. Overall, GL is

concerned with explaining the creative use of language; we consider the lexicon to be the key repository holding much of the information underlying this phenomenon. More specifically, however, it is the notion of a constantly evolving lexicon that GL attempts to emulate; this is in contrast to currently prevalent views of static lexicon design, where the set of contexts licensing the use of words is determined in advance, and there are no formal mechanisms offered for expanding this set.


One of the most difficult problems facing theoretical and computational semantics is defining the representational interface between linguistic and nonlinguistic knowledge. GL was initially developed as a theoretical framework for encoding selectional knowledge in natural language. This in turn required making some changes in the formal rules of representation and composition. Perhaps the most controversial aspect of GL has been the manner in which lexically encoded knowledge is exploited in the construction of interpretations for linguistic utterances. Following standard assumptions in GL, the computational resources available to a lexical item consist of the following four levels:

(1a) Lexical typing structure: giving an explicit type for a word positioned within a type system for the language;
(1b) Argument structure: specifying the number and nature of the arguments to a predicate;
(1c) Event structure: defining the event type of the expression and any subeventual structure it may have;
(1d) Qualia structure: a structural differentiation of the predicative force for a lexical item.

The qualia structure, inspired by Moravcsik's (1975) interpretation of the aitia of Aristotle, is defined as the modes of explanation associated with a word or phrase in the language; the individual qualia are as follows (Pustejovsky, 1991):

(2a) Formal: the basic category that distinguishes the meaning of a word within a larger domain;
(2b) Constitutive: the relation between an object and its constituent parts;
(2c) Telic: the purpose or function of the object, if there is one;
(2d) Agentive: the factors involved in the object's origins or 'coming into being.'

Conventional interpretations of the GL semantic representation have been as feature structures (cf. Bouillon, 1993; Pustejovsky, 1995). The feature representation shown below gives the basic template of argument and event variables, and the specification of the qualia structure.
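As a rough indication of the shape of that template (our own rendering in code, with illustrative field names rather than GL's feature-structure notation), the entry pairs argument and event variables with the four qualia roles; the values given for the noun novel follow the example standardly cited from Pustejovsky (1995):

```python
from dataclasses import dataclass, field

@dataclass
class Qualia:
    formal: str = ""        # the basic category: what kind of thing it is
    constitutive: str = ""  # relation to constituent parts
    telic: str = ""         # purpose or function
    agentive: str = ""      # factors in its coming into being

@dataclass
class LexicalEntry:
    lemma: str
    lextype: str                                   # position in the type system
    argstr: list = field(default_factory=list)     # argument structure
    eventstr: list = field(default_factory=list)   # event structure
    qualia: Qualia = field(default_factory=Qualia)

# The noun 'novel', with the qualia values given in Pustejovsky (1995)
novel = LexicalEntry(
    lemma="novel",
    lextype="phys_obj",                  # simplified placement in the type system
    argstr=["x: phys_obj"],
    qualia=Qualia(
        formal="book(x)",
        constitutive="narrative(x)",
        telic="read(e, y, x)",           # novels are for reading
        agentive="write(e', z, x)",      # they come into being by being written
    ),
)
```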

Traditional Lexical Representations

The traditional organization of lexicons in both theoretical linguistics and natural language processing systems assumes that word meaning can be exhaustively defined by an enumerable set of senses per word. Lexicons, to date, generally tend to follow this organization. As a result, whenever natural language interpretation tasks face the problem of lexical ambiguity, a particular approach to disambiguation is warranted. The system attempts to select the most appropriate 'definition' available under the lexical entry for any given word; the selection process is driven by matching sense characterizations against contextual factors. One disadvantage of such a design follows from the need to specify, ahead of time, the contexts in which a word might appear; failure to do so results in incomplete coverage. Furthermore, dictionaries and lexicons currently are of a distinctly static nature: the division into separate word senses not only precludes permeability; it also fails to account for the creative use of words in novel contexts. GL attempts to overcome these problems, both in terms of the expressiveness of notation and the kinds of interpretive operations the theory is capable of supporting. Rather than taking a 'snapshot' of language at any moment of time and freezing it into lists of word sense specifications, the model of the lexicon proposed here does not preclude extensibility: it is open-ended in nature and accounts for the novel, creative uses of words in a variety of contexts by positing procedures for generating semantic expressions for words on the basis of particular contexts. Adopting such a model presents a number of benefits. From the point of view of a language user, a rich and expressive lexicon can explain aspects of learnability. From the point of view of linguistic theory, it can offer improvements in robustness of coverage. Such benefits stem from the fact that the model offers a scheme for explicitly encoding lexical knowledge at several levels of generalization. In particular, by making lexical ambiguity resolution an integral part of a uniform semantic analysis procedure, the problem is rephrased in terms of dynamic interpretation of a word in context; this is in contrast to current frameworks which select among a static, predetermined set of word senses, and do so separately from constructing semantic representations for larger text units. There are several methodological motivations for importing tools developed for the computational representation and manipulation of knowledge into the study of word meaning, or lexical semantics. Generic knowledge representation (KR) mechanisms, such as


inheritance structures or rule bases, can be – and have been – used for encoding linguistic information. However, not much attention has been paid to the notion of what exactly constitutes such linguistic information. Traditionally, the application area of knowledge representation formalisms has been the domain of general world knowledge. By shifting the focus to a level below that of words (or lexical concepts) one is able to abstract the notion of lexical meaning away from world knowledge, as well as from other semantic influences such as discourse and pragmatic factors. Such a process of abstraction is an essential prerequisite for the principled creation of lexical entries. Although GL makes judicious use of knowledge representation (KR) tools to enrich the semantics of lexical expressions, it preserves a felicitous partitioning of the information space. Keeping lexical meaning separate from other linguistic factors, as well as from general world knowledge, is a methodologically sound principle; nonetheless, GL maintains that all of these should be referenced by a lexical entry. In essence, such capabilities are the base components of a generative language whose domain is that of lexical knowledge. The interpretive aspect of this language embodies a set of principles for richer composition of components of word meaning. As illustrated later in this entry, semantic expressions for word meaning in context are constructed by a fixed number of generative devices (cf. Pustejovsky, 1991). Such devices operate on a core set of senses (with greater internal structure than hitherto assumed); through composition, an extended set of word senses is obtained when individual lexical items are considered jointly with others in larger phrases. The language presented below thus becomes an expressive tool for capturing lexical knowledge, without presupposing finite sense enumeration.

The Nature of Polysemy

One of the most pervasive phenomena in natural language is that of systematic ambiguity, or polysemy. This problem confronts language learners and natural language processing systems alike. The notion of context enforcing a certain reading of a word, traditionally viewed as selecting for a particular word sense, is central both to global lexical design (the issue of breaking a word into word senses) and local composition of individual sense definitions. However, current lexicons reflect a particularly 'static' approach to dealing with this problem: the numbers of and distinctions between senses within an entry are 'frozen' into a fixed grammar's lexicon. Furthermore, definitions hardly make any provisions for the notion that

boundaries between word senses may shift with context – not to mention that no lexicon really accounts for any of a range of lexical transfer phenomena. There are serious problems with positing a fixed number of 'bounded' word senses for lexical items. In a framework that assumes a partitioning of the space of possible uses of a word into word senses, the problem becomes that of selecting, on the basis of various contextual factors (typically subsumed by, but not necessarily limited to, the notion of selectional restrictions), the word sense closest to the use of the word in the given text. As far as a language user is concerned, the question is that of 'fuzzy matching' of contexts; as far as a text analysis system is concerned, this reduces to a search within a finite space of possibilities. This approach fails on several accounts, both in terms of what information is made available in a lexicon for driving the disambiguation process, and how a sense selection procedure makes use of this information. Typically, external contextual factors alone are not sufficient for precise selection of a word sense; additionally, often the lexical entry does not provide enough reliable pointers to critically discriminate between word senses. In the case of automated sense selection, the search process becomes computationally undesirable, particularly when it has to account for longer phrases made up of individually ambiguous words. Finally, and most importantly, the assumption that an exhaustive listing can be assigned to the different uses of a word lacks the explanatory power necessary for making generalizations and/or predictions about how words used in a novel way can be reconciled with their currently existing lexical definitions. To illustrate this last point, consider the ambiguity and context-dependence of adjectives such as fast and slow, where the meaning of the predicate varies depending on the noun being modified. Sentences (3a)–(3e) show the range of meanings associated with the adjective fast. Typically, a lexicon requires an enumeration of different senses for such words, to account for this ambiguity:

(3a) The island authorities sent out a fast little government boat to welcome us: ambiguous between a boat driven quickly and one that is inherently fast.
(3b) a fast typist:

a person who performs the act of typing quickly.
(3c) Rackets is a fast game: the motions involved in the game are rapid and swift.
(3d) a fast book: one that can be read in a short time.

(3e) My friend is a fast driver and a constant worry to her cautious husband: one who drives quickly.

These examples involve at least four distinct word senses for the word fast (WordNet 2.0 has ten senses for the adjectival reading of fast):

fast (1): moving quickly;
fast (2): performing some act quickly;
fast (3): doing something requiring a short space of time;
fast (4): involving rapid motion.

In an operational lexicon, word senses would be further annotated with selectional restrictions: for instance, fast (1) may be predicated of objects belonging to a class of movable entities, and fast (3) may relate the action 'that takes a little time' – e.g., reading, in the case of (3d) above – to the object being modified. Upon closer analysis, each occurrence of fast above predicates in a slightly different way. In fact, any finite enumeration of word senses will not account for creative applications of this adjective in the language. For example, consider the two phrases fast freeway and fast garage. The adjective fast in the phrase a fast freeway refers to the ability of vehicles on the freeway to sustain high speed, while in fast garage it refers to the length of time needed for a repair. As novel uses of fast, we are clearly looking at new senses that are not covered by the enumeration given above. Part of GL's argument for a different organization of the lexicon is based on a claim that the boundaries between the word senses in the analysis of fast above are too rigid. Still, even if we assume that enumeration is adequate as a descriptive mechanism, it is not always obvious how to select the correct word sense in any given context: consider the systematic ambiguity of verbs like bake (discussed by Atkins et al., 1988), which require discrimination with respect to change-of-state versus create readings, depending on the context (see sentences (4a) and (4b), respectively).

(4a) John baked the potatoes.
(4b) Mary baked a cake.

The problem here is that there is too much overlap in the 'core' semantic components of the different readings (Jackendoff (1985) correctly points out, however, that deriving one core meaning for all homographs of a word form may not be possible, a view not inconsistent with that proposed here); hence, it is not possible to guarantee correct word sense selection on the basis of selectional restrictions alone. Another problem with this approach is that it lacks any appropriate or natural level of abstraction. As these examples clearly demonstrate, partial overlaps of core and peripheral components of different

word meanings make the traditional notion of word sense, as implemented in current dictionaries, inadequate. Within this approach, the only feasible solution would be to employ a richer set of semantic distinctions for the selection of complements than is conventionally provided by the mechanism of selectional restrictions. It is equally arbitrary to create separate word senses for a lexical item just because it can participate in several subcategorization forms; yet this has been the only approach open to computational lexicons that are based on a fixed number of features and senses. A striking example of this is provided by verbs such as believe and forget. The sentences in (5a)–(5e) show that the syntactic realization of the verb's object complement determines how the phrase is interpreted semantically. The that-complement, for example, in (5a) exhibits a property called factivity (Kiparsky and Kiparsky, 1971), where the object proposition is assumed to be a fact regardless of what modality the whole sentence carries. Sentence (5d) contains a 'concealed question' complement (Grimshaw, 1979), so called because the phrase can be paraphrased as a question. These different interpretations are usually encoded as separate senses of the verb, with distinct lexical entries.

(5a) Mary forgot that she left the light on at home. (a factive reading)
(5b) Mary forgot to leave the light on for the delivery man. (a nonfactive reading)
(5c) I almost forgot where we're going. (an embedded question)
(5d) She always forgets the password to her account. (a concealed question)
(5e) He leaves, forgets his umbrella, comes back to get it . . . (ellipsed nonfactive)

These distinctions could be easily accounted for by simply positing separate word senses for each syntactic type, but this misses the obvious relatedness between the different syntactic contexts of forget. Moreover, the general ‘core’ sense of the verb forget, which deontically relates a mental attitude with a proposition or event, is lost between the separate senses of the verb. GL, on the other hand, posits one definition for forget which can, by suitable composition with the different complement types, generate all the allowable readings (cf. Pustejovsky, 1995).

Levels of Lexical Meaning

The richer structure for the lexical entry proposed in GL takes to an extreme the established notions of


predicate-argument structure, primitive decomposition and conceptual organization; these can be seen as determining the space of possible interpretations that a word may have. That is, rather than committing to an enumeration of a predetermined number of different word senses, a lexical entry for a word now encodes a range of representative aspects of lexical meaning. For an isolated word, these meaning components simply define the semantic boundaries appropriate to its use. When embedded in the context of other words, however, mutually compatible roles in the lexical decompositions of each word become more prominent, thus forcing a specific interpretation of individual words within a specific phrase. It is important to realize that this is a generative process, which goes well beyond the simple matching of features. In fact, this approach requires, in addition to a flexible notation for expressing semantic generalizations at the lexical level, a mechanism for composing these individual entries on the phrasal level. The emphasis of our analysis of the distinctions in lexical meaning is on studying and defining the role that all lexical types play in contributing to the overall meaning of a phrase. Crucial to the processes of semantic interpretation that the lexicon is targeted for is the notion of compositionality, necessarily different from the more conventional pairing of verbs as functions and nouns as arguments. If the semantic load in the lexicon is entirely spread among the verb entries, as many existing lexicons assume, differences like those exemplified above can only be accounted for by treating bake, forget, and so forth as polysemous verbs. If, on the other hand, elaborate lexical meanings of verbs and adjectives could be made sensitive to components of equally elaborate decompositions of nouns, the notion of spreading the semantic load evenly across the lexicon becomes the key organizing principle in expressing the knowledge necessary for disambiguation. To be able to express the lexical distinctions required for analyzing the examples in the last section, it is necessary to go beyond viewing lexical decomposition as based only on a predetermined set of primitives; rather, what is needed is to be able to specify, by means of sets of predicates, different levels or perspectives of lexical representation, and to be able to compose these predicates via a fixed number of generative devices. The ‘static’ definition of a word provides its literal meaning; it is only through the suitable composition of appropriately highlighted projections of words that we generate new meanings in context. In order to address these phenomena and inadequacies mentioned above, Generative Lexicon argues that a theory of computational lexical semantics must

make reference to the four levels of representations mentioned above:

1. Lexical Typing Structure. This determines the ways in which a word is related to other words in a structured type system (i.e., inheritance). In addition to providing information about the organization of a lexical knowledge base, this level of word meaning provides an explicit link to general world (commonsense) knowledge.
2. Argument Structure. This encodes the conventional mapping from a word to a function, and relates the syntactic realization of a word to the number and type of arguments that are identified at the level of syntax and made use of at the level of semantics (Grimshaw, 1990).
3. Event Structure. This identifies the particular event type for a verb or a phrase. There are essentially three components to this structure: the primitive event type – state (S), process (P), or transition (T); the focus of the event; and the rules for event composition (cf. Moens and Steedman, 1988; Pustejovsky, 1991b).
4. Qualia Structure. This defines the essential attributes of objects, events, and relations associated with a lexical item. By positing separate components (see below) in what is, in essence, an argument structure for nominals, nouns are elevated from the status of being passive arguments to active functions (cf. Moravcsik, 1975; Pustejovsky, 1991a). We can view the fillers in qualia structure as prototypical predicates and relations associated with this word.

A set of generative devices connects the four levels, providing for the compositional interpretation of words in context. These devices include subselection, type coercion, and cocomposition. In this article, we will focus on the qualia structure and type coercion, an operation that captures the semantic relatedness between syntactically distinct expressions. As an operation on types within a λ-calculus, type coercion can be seen as transforming a fixed semantic language into one with changeable (polymorphic) types. Argument, event, and qualia types must conform to the well-formedness conditions defined by the type system and the lexical inheritance structure when undergoing operations of semantic composition. Lexical items are strongly typed yet are provided with mechanisms for fitting to novel typed environments by means of type coercion over a richer notion of types.

Qualia Structure

Qualia structure is a system of relations that characterizes the semantics of a lexical item, very much like


the argument structure of a verb (Pustejovsky, 1995). To illustrate the descriptive power of qualia structure, the semantics of nominals will be the focus here. In effect, the qualia structure of a noun determines its meaning in much the same way as the typing of arguments to a verb determines its meaning. The elements that make up a qualia structure include familiar notions such as container, space, surface, figure, or artifact. One way to model the qualia structure is as a set of constraints on types (cf. Copestake and Briscoe, 1992; Pustejovsky and Boguraev, 1993). The operations in the compositional semantics make reference to the types within this system. The qualia structure, along with the other representational devices (event structure and argument structure), can be seen as providing the building blocks for possible object types. Figure 1 illustrates a type hierarchy fragment for knowledge about objects, encoding qualia structure information. In Figure 1, the term nomrqs refers to a 'relativized qualia structure', a type of generic information structure for entities (cf. Calzolari, 1992 for discussion). Further, ind.obj represents 'individuated object.' The tangled type hierarchy shows how qualia can be unified to create more complex concepts out of simple ones. Following Pustejovsky (2001, 2005), we can distinguish three ranks or levels of type in the domain of individuals:

(6a) Natural Types: natural kind concepts consisting of reference only to Formal and Const qualia roles;
(6b) Functional Types: concepts integrating reference to purpose or function;
(6c) Complex Types: concepts integrating reference to a relation between types.

For example, a simple natural physical object (7) can be given a function (i.e., a Telic role) and transformed into a functional type, as in (8).

(7) [feature structure for the natural type]

(8) [feature structure for the derived functional type]

Functional types in language behave differently from naturals, as they carry more information with them regarding their use and purpose. For example, the noun sandwich contains information of the 'eating activity' as a constraint on its Telic value, due to its position in the type structure; that is, eat(P,w,x) denotes a process, P, between an individual w and the physical object x.

(9) [qualia structure for sandwich, with eat(P,w,x) as its Telic value]

From qualia structures such as these, it now becomes clear how a sentence such as Mary finished her sandwich receives the default interpretation it does; namely, that of Mary eating the sandwich. This is an example of type coercion, and the semantic compositional rules in the grammar must make reference to values such as qualia structure, if such interpretations are to be constructed on-line and dynamically.

Coercion and Compositionality

Type coercion is an operation in the grammar ensuring that the selectional requirements on an argument to a predicate are in fact satisfied by the argument in the compositional process. The rules of coercion presuppose a typed ontology such as that outlined above. By allowing lexical items to coerce their arguments, we obviate the enumeration of multiple entries for different senses of a word. We define coercion as follows (Pustejovsky, 1995):

(10) Type Coercion: a semantic operation that converts an argument to the type that is expected by a function, where it would otherwise result in a type error.

The notion that a predicate can specify a particular target type for its argument is a very useful one, and intuitively explains the different syntactic argument forms for the verbs below. In sentences (11) and (12), noun phrases and verb phrases appear in the same argument position, somehow satisfying the type required by the verbs enjoy and begin. In sentences (13) and (14), noun phrases of very different semantic classes appear as subject of the verbs kill and wake.

(11a) Mary enjoyed the movie.
(11b) Mary enjoyed watching the movie.


(12a) Mary began a book.
(12b) Mary began reading a book.
(12c) Mary began to read a book.

(13a) John killed Mary.
(13b) The gun killed Mary.
(13c) The bullet killed Mary.

(14a) The cup of coffee woke John up.
(14b) Mary woke John up.
(14c) John's drinking the cup of coffee woke him up.

If we analyze the different syntactic occurrences of the above verbs as separate lexical entries, following the sense enumeration theory outlined in previous sections, we are unable to capture the underlying relatedness between these entries; namely, that no matter what the syntactic form of their arguments, the verbs seem to be interpreting all the phrases as events of some sort. It is exactly this type of complement selection that type coercion allows in the compositional process.
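As a toy illustration of how such a coercion rule might consult qualia structure (our sketch, not GL's formal type calculus; it reuses the novel entry defined earlier), a verb such as begin that selects an event can wrap a non-event argument in the event supplied by that argument's Telic or, failing that, Agentive quale:

```python
def coerce_to_event(verb, arg):
    """Toy coercion rule: a verb selecting an event wraps a non-event
    argument in the event recovered from its Telic (or Agentive) quale."""
    if arg.lextype == "event":
        return f"{verb}({arg.lemma})"           # pure selection: no coercion needed
    for quale in (arg.qualia.telic, arg.qualia.agentive):
        if quale:                               # exploit the first available event
            return f"{verb}({quale})"
    raise TypeError(f"no event reading available for '{arg.lemma}'")

# 'Mary began a novel': begin is applied to the novel's Telic event, reading
print(coerce_to_event("begin", novel))          # -> begin(read(e, y, x))
```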

Complex Types in Language

One of the more distinctive aspects of the representational mechanisms of GL is the data structure known as a complex type (or dot object), introduced to explain several phenomena involving the selection of conflicting types in syntax. There are well-known cases of container-containee and figure-ground ambiguities, where a single word may refer to two aspects of an object's meaning (cf. Apresjan, 1973; Wilks, 1975; Lakoff, 1987; Pustejovsky and Anick, 1988). The words window, door, fireplace, and room can be used to refer to the physical object itself or the space associated with it:

(15a) They walked through the door.
(15b) She will paint the door red.

(16a) Black smoke filled the fireplace.
(16b) The fireplace is covered with soot.

In addition to figure-ground and container-containee alternations, there are many other cases in natural language where two or more aspects of a concept are denoted by a single lexicalization. As with nouns such as door, the nouns book and exam denote two contradictory types; books are both physical form and informational in nature; exams are both events and informational.

(17a) Mary doesn't believe the book.
(17b) John bought his book from Mary.
(17c) The police burnt a controversial book.

(18a) John thought the exam was confusing.
(18b) The exam lasted more than two hours this morning.

What is interesting about the above pairs is that the two senses of these nouns are related to one another

in a specific way. The apparently contradictory nature of the two senses for each pair actually reveals a deeper structure relating these senses, something that is called a dot object. For each pair, there is a relation that connects the senses, represented as a Cartesian product of the two semantic types. There must exist a relation R that relates the elements of the pairing, and this relation must be part of the definition of the semantics for the dot object to be well-formed. For nouns such as book, disk, and record, the relation R is a species of 'containment,' and shares grammatical behavior with other container-like concepts. For example, we speak of information in a book, articles in the newspaper, as well as songs on a disc. This containment relation is encoded directly into the semantics of a concept such as book – i.e., hold(x, y) – as the formal quale value. For other dot object nominals such as prize, sonata, and lunch, different relations will structure the types in the Cartesian product, as we see below. The lexical structure for book as a dot object can be represented as in (19).

(19) [dot-object lexical structure for book; see the sketch below]
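The content of (19) can be approximated in the illustrative notation introduced earlier (our encoding, reusing LexicalEntry and Qualia; the qualia values follow the book entry in Pustejovsky, 1995):

```python
book = LexicalEntry(
    lemma="book",
    lextype="phys_obj • info",           # the complex (dot) type
    argstr=["x: phys_obj", "y: info"],
    qualia=Qualia(
        formal="hold(x, y)",             # the relation R: the object holds the information
        telic="read(e, w, x•y)",
        agentive="write(e', z, x•y)",
    ),
)
```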

Nouns such as sonata, lunch, and appointment, on the other hand, are structured by entirely different relations.

Recent Developments in Generative Lexicon

As the theory has matured, many of the analytic devices and the linguistic methodology of Generative Lexicon have been extended and applied to languages and phenomena well beyond the original scope of the theory. Cocomposition has been applied to a number of phenomena, particularly light verb constructions, with a fair amount of success, in Korean and Japanese (Lee et al., 2003). Qualia structure has proved to be an expressive representational device and has been adopted by adherents of many other grammatical frameworks. For example, Jensen and Vikner (1994) and Borschev and Partee (2001) both appeal to qualia structure in the interpretation of the genitive relation in NPs, while many working on the interpretation of noun compounds have developed qualia-based strategies for interpretation of noun–noun relations (Johnston and Busa, 1996, 1997; Lehner, 2003; Jackendoff, 2003). Van Valin (2005) has adopted qualia roles within several aspects of RRG analyses,


where nominal semantics have required finer-grained representations. Perhaps one of the biggest developments within the theory in recent years has been the integration of type coercion into a general theory of the mechanisms of selection in grammar (Pustejovsky, 2002, 2005). On this view, there are three mechanisms that account for all local syntagmatic and paradigmatic behavior in the grammar: pure selection, type exploitation, and type coercion. The challenges posed by Generative Lexicon to linguistic theory are quite direct and simple: semantic interpretation is as creative and generative as syntax, if not more so. But the process operates under serious constraints and inherently restrictive mechanisms. It is GL's goal to uncover these mechanisms in order to model the expressive semantic power of language. See also: Compositionality; Lexical Conceptual Structure;

Lexical Semantics; Lexicon: Structure; Lexicon/Dictionary: Computational Approaches; Selectional Restrictions; Syntax-Semantics Interface; Thematic Structure.

Bibliography

Alsina A (1992). 'On the argument structure of causatives.' Linguistic Inquiry 23(4), 517–555.
Apresjan J D (1973). 'Regular polysemy.' Linguistics 143, 5–32.
Apresjan J D (1973). 'Synonymy and synonyms.' In Kiefer F (ed.). 173–199.
Asher N & Morreau M (1991). 'Common sense entailment: a modal theory of nonmonotonic reasoning.' In Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, Australia.
Asher N & Pustejovsky J (in press). 'The metaphysics of words.' Ms., Brandeis University and University of Texas.
Atkins B T, Kegl J & Levin B (1988). 'Anatomy of a verb entry: from linguistic theory to lexicographic practice.' International Journal of Lexicography 1, 84–126.
Baker M (1988). Incorporation: a theory of grammatical function changing. Chicago: University of Chicago Press.
Bierwisch M (1983). 'Semantische und konzeptuelle Repräsentationen lexikalischer Einheiten.' In Ruzicka R & Motsch W (eds.) Untersuchungen zur Semantik. Berlin: Akademie-Verlag.
Boguraev B & Briscoe E (1989). Computational lexicography for natural language processing. Harlow/London: Longman.
Boguraev B & Pustejovsky J (1996). Corpus processing for lexical acquisition. Cambridge: Bradford Books/MIT Press.
Borschev V & Partee B H (in press). 'Genitives, types, and sorts.' In Kim J-Y, Lander Y & Partee B H (eds.) Possessives and beyond: semantics and syntax. Amherst: GLSA.
Borschev V & Partee B H (2001a). 'Genitive modifiers, sorts, and metonymy.' Nordic Journal of Linguistics 24, 140–160.

Borschev V & Partee B H (2001b). 'Ontology and metonymy.' In Jensen P A & Skadhauge P (eds.) Ontology-based interpretation of noun phrases: Proceedings of the First International OntoQuery Workshop. Kolding: Department of Business Communication and Information Science, University of Southern Denmark. 121–138.
Bouillon P (1997). Polymorphie et sémantique lexicale: le cas des adjectifs. Ph.D. diss., Paris VII, Paris.
Bresnan J (1994). 'Locative inversion and the architecture of universal grammar.' Language 70(1), 2–31.
Briscoe T, de Paiva V & Copestake A (eds.) (1993). Inheritance, defaults, and the lexicon. Cambridge: Cambridge University Press.
Busa F (1996). Compositionality and the semantics of nominals. Ph.D. diss., Brandeis University.
Calzolari N (1992). 'Acquiring and representing semantic information in a lexical knowledge base.' In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. New York: Springer Verlag.
Carpenter B (1992). 'Typed feature structures.' Computational Linguistics 18(2).
Chomsky N (1955). The logical structure of linguistic theory. Chicago: University of Chicago Press.
Chomsky N (1965). Aspects of the theory of syntax. Cambridge: MIT Press.
Choueka Y (1988). 'Looking for needles in a haystack, or locating interesting collocational expressions in large textual databases.' Proceedings of the RIAO. 609–623.
Copestake A (1992). The representation of lexical semantic information. CSRP 280, University of Sussex.
Copestake A (1993). 'Defaults in the LKB.' In Briscoe T & Copestake A (eds.) Default inheritance in the lexicon. Cambridge: Cambridge University Press.
Copestake A & Briscoe E (1992). 'Lexical operations in a unification-based framework.' In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. New York: Springer Verlag.
Davis A (1996). Lexical semantics and linking and the hierarchical lexicon. Ph.D. diss., Stanford University.
Davis A & Koenig J-P (2000). 'Linking as constraints on word classes in a hierarchical lexicon.' Language 76(1).
de Miguel E (2000). 'Relazioni tra il lessico e la sintassi: classi aspettuali di verbi ed il passivo in spagnolo.' In Simone R (ed.) Classi di parole e conoscenza lessicale. Studi Italiani di Linguistica Teorica e Applicata (SILTA), 2.
de Miguel E & Fernandez Lagunilla M (2001). 'El operador aspectual se.' Revista Española de Lingüística 39(1), 13–43.
Dölling J (1992). 'Flexible Interpretationen durch Sortenverschiebung.' In Zimmermann I & Strigen A (eds.) Fügungspotenzen. Berlin: Akademie Verlag.
Dowty D R (1979). Word meaning and Montague Grammar. Dordrecht: D. Reidel.
Dowty D R (1985). 'On some recent analyses of control.' Linguistics and Philosophy 8, 1–41.
Dowty D (1991). 'Thematic proto-roles and argument selection.' Language 67, 547–619.

Generative Lexicon 365 Egg M & Lebeth K (1995). ‘Semantic underspecification and modifier attachment ambiguities.’ In Kilbury J & Wiese R (eds.) Integrative Ansa¨tze in der Computerlinguistik. Du¨sseldorf: Seminar fu¨r Allgemeine Sprachwissenschaft. Fauconnier G (1985). Mental spaces. Cambridge: MIT Press. Goldberg A E (1995). Constructions: a Construction Grammar approach to Argument Structure. Chicago: University of Chicago Press. Grimshaw J (1979). ‘Complement selection and the lexicon’ Linguistic Inquiry 10, 279–326. Grimshaw J (1990). Argument structure. Cambridge: MIT Press. Grimshaw J & Mester A (1988). ‘Light verbs and y-marking.’ Linguistic Inquiry 10, 205–232. Gruber J S (1976). Lexical structures in syntax and semantics. Amsterdam: North-Holland. Gunter C (1992). Semantics of programming languages. Cambridge: MIT Press. Guthrie L, Pustejovsky J, Wilks Y & Slator B (1996). ‘The role of lexicons in natural language processing.’ Communications of the ACM 39(1). Hale K & Keyser J (1993). ‘On argument structure and the lexical expression of syntactic relations.’ In Hale K & Keyser J (eds.) The view from building 20. Cambridge: MIT Press. Halle M, Bresnan J & Miller G (eds.) (1978). Linguistic theory and psychological reality. Cambridge: MIT Press. Higginbotham J (1985). ‘On semantics.’ Linguistic Inquiry 16, 547–593. Higginbotham J (1989). ‘Elucidations of meaning.’ Linguistics and Philosophy 12, 465–517. Hirst G (1987). Semantic interpretation and the resolution of ambiguity. Cambridge: Cambridge University Press. Hjelmslev L (1961). Prolegomena to a theory of language. Whitfield F (trans.). Madison: University of Wisconsin Press, first published in 1943. Ingria R (1986). ‘Lexical information for parsing systems: points of convergence and divergence.’ Automating the Lexicon. Italy: Marina di Grosseto. Ingria R, Boguraev B & Pustejovsky J (1992). ‘Dictionary/ lexicon.’ In Shapiro S (ed.) Encyclopedia of artificial intelligence. 2nd ed. New York: Wiley. Jackendoff R (1972). Semantic interpretation in Generative Grammar. Cambridge: MIT Press. Jackendoff R (1985). ‘Multiple subcategorization and the theta-criterion: the case of climb.’ Natural Language and Linguistic Theory 3, 271–295. Jackendoff R (1990). Semantic structures. Cambridge: MIT Press. Jensen P A & Vikner C (1996). ‘The double nature of the verb have.’ In LAMBDA 21, OMNIS Workshop 23–24 Nov. 1995. Handelshjskolen i Kbenhavn: Institut for Datalingvistik. 25–37. Jensen P A & Vikner C (1994). ‘Lexical knowledge and the semantic analysis of Danish genitive constructions.’ In Hansen S L & Wegener H (eds.) Topics in knowledgebased NLP systems. Copenhagen: Samfundslitteratur. 37–55. Johnston M (1995). ‘Semantic underspecification and lexical types: capturing polysemy without lexical rules.’ In

Proceedings of ACQUILEX Workshop on Lexical Rules, August 9–11, 1995, Cambridgeshire. Kayser D (1988). ‘What kind of thing is a concept?’ Computational Intelligence 4, 158–165. Kiparsky P & Kiparsky C (1971). ‘Fact.’ In Steinberg D & Jakobovitz L (eds.) Semantics. Cambridge: Cambridge University Press. 345–369. Lee C & Kim Y (2003). ‘The lexico-semantic structure of Korean inchoative verbs: With reference to -e-ci-ta class.’ In Bouillon P (ed.) Proceedings of International Workshop on Generative Lexicon. Geneva: University of Geneva. Lee C (2000). ‘Numeral classifiers, (In-)Definites and Incremental Theme in Korean.’ In Lee C & Whitman J (eds.) Korean syntax and semantics: LSA Institute Workshop, Santa Cruz, ’91. Seoul: Thaehaksa. Lee C (2003). ‘Change of location and change of state.’ In Boullion P (ed.) Proceedings of the 2nd International Workshop on Generative Lexicon. Geneva: University of Geneva. Lee C (2004). ‘Motion and state: verbs of tul-/na-(K) and hairu/ deru (J) enter/exit.’ In Hudson E et al. (eds.) Japanese/Korean Linguistics 13. Standford: CSLI. Lee C & Im S (2003). ‘How to combine the verb ha-‘do’ with an entity type noun in Korean – Its cross-linguistic implications.’ In Bouillon P (ed.) Proceedings of International Workshop on Generative Lexicon. Geneva: University of Geneva. Levin B & Rappaport Hovav M (1995). Unaccusativity: at the syntax–semantics interface. Cambridge: MIT Press. Levin B (1993). Towards a lexical organization of English verbs. Chicago: University of Chicago Press. Lyons J (1968). Introduction to theoretical linguistics. Cambridge: Cambridge University Press. Maier D (1983). The theory of relational databases. Computer Science Press. McCawley J D (1968). ‘The role of semantics in a grammar.’ In Bach E & Harms R T (eds.) Universals in linguistic theory. New York, NY: Holt, Rinehart, and Winston. McCawley J (1968). Lexical insertion in a Transformational Grammar without Deep Structure. Proceedings of the Chicago Linguistic Society 4. Mel’cˇuk I A (1988). ‘Semantic description of lexical units in an explanatory combinatorial dictionary: basic principles and heuristic criteria.’ International Journal of Lexicography 1, 165–188. Miller G (1990). ‘WordNet: an on-line lexical database.’ International Journal of Lexicography 3, 235–312. Miller G (1991). The science of words. Scientific American Library. Moravcsik J M (1975). ‘Aitia as generative factor in Aristotle’s philosophy.’ Dialogue 14, 622–636. Nunberg G (1979). ‘The non-uniqueness of semantic solutions: polysemy.’ Linguistics and Philosophy 3, 143–184. Ostler N & Atkins B T (1992). ‘Predictable meaning shift: some linguistic properties of lexical implication rules.’ In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. Berlin: Springer Verlag.

366 Generative Lexicon Partee B H & Borschev V B (2000). ‘Possessives, favorite, and coercion.’ In Riehl A & Daly R (eds.) Proceedings of ESCOL 99. Ithaca: CLC Publications, Cornell University. 173–190. Partee B H & Borschev V (2003). ‘Genitives, relational nouns, and argument-modifier ambiguity.’ In Lang E, Maienborn C & Fabricius-Hansen C (eds.) Modifying adjuncts. Berlin: Mouton de Gruyter. 67–112. Partee B & Rooth M (1983). ‘Generalized conjunction and type ambiguity.’ In Ba¨uerle, Schwarze & von Stechow (eds.) Meaning, use, and interpretation of language. Walter de Gruyter. Pinkal M (1995). ‘Radical underspecification.’ In Proceedings of the Tenth Amsterdam Colloquium. Pinker S (1989). Learnability and cognition: the acquisition of Argument Structure. Cambridge: MIT Press. Poesio M (1994). ‘Ambiguity, underspecification, and discourse interpretation.’ In Bunt H, Muskens R & Rentier G (eds.) International Workshop on Computational Semantics. University of Tilburg. Pollard C & Sag I (1994). Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press/Stanford CSLI. Pustejovsky J & Boguraev P (1993). ‘Lexical knowledge representation and natural language processing.’ Artificial Intelligence 63, 193–223. Pustejovsky J (1991). ‘The generative lexicon.’ Computational Linguistics 17(4). Pustejovsky J (1991). ‘The syntax of event structure.’ Cognition 41, 47–81. Pustejovsky J (1992). ‘Lexical semantics.’ In Shapiro S (ed.) Encyclopedia of artificial intelligence, 2nd ed. New York: Wiley. Pustejovsky J (1994). ‘Semantic typing and degrees of polymorphism.’ In Martin-Vide (ed.) Current Issues in Mathematical Linguistics. Holland: Elsevier. Pustejovsky J (1995). The Generative Lexicon. Cambridge: MIT Press. Pustejovsky J (1995a). ‘Linguistic constraints on type coercion.’ In Saint-Dizier P & Viegas E (eds.) Computational lexical semantics. Cambridge: Cambridge University Press. Pustejovsky J (1995b). The generative lexicon. Cambridge: MIT Press. Pustejovsky J (1998). ‘Generativity and explanation in semantics: a reply to Fodor and Lepore.’ Linguistic Inquiry 29(2). Pustejovsky J (1998). ‘The Semantics of lexical underspecification.’ Folia Linguistica. Pustejovsky J (2001). Type Construction and the logic of concepts’. In Bouillon P and Busa F (eds.) The Syntax of Word Meaning. Cambridge: Cambridge University Press. 91–123. Pustejovsky J (2005). Meaning in Context: mechanisms of selection in language. Cambridge, MA: MIT Press. Pustejovsky J & Boguraev B (1993). ‘Lexical knowledge representation and natural language processing.’ In Artificial Intelligence 63, 193–223.

Reyle U (1993). ‘Dealing with ambiguities by underspecification: construction, representation, and deduction.’ Journal of Semantics 10, 123–179. Rosen S (1989). Argument structure and complex predicates. Ph.D. diss., Brandeis University. Sanfilippo A (1990). Grammatical relations, thematic roles, and verb semantics. Ph.D. diss., University of Edinburgh. Sanfilippo A (1993). ‘LKB encoding of lexical knowledge.’ In Briscoe T, de Paiva V & Copestake A (eds.) Inheritance, defaults, and the lexicon. Cambridge: Cambridge University Press. Schabes Y Abeille A & Joshi A. ‘Parsing strategies with leixcalized grammars.’ In Proceedings of the 12th International Conference on Computational linguistics, Budapest. Sowa J (1992). ‘Logical structures in the lexicon.’ In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. New York: Springer Verlag. Steedman M (1997). Surface structure interpretation. Cambridge: MIT Press. Talmy L (1975). ‘Semantics and syntax of motion.’ In Kimball J P (ed.) Syntax and semantics 4. New York: Academic Press. Talmy L (1985). ‘Lexicalization patterns: semantic structure in lexical forms.’ In Shopen T (ed.) Language typology and syntactic description 3: grammatical categories and the lexicon. Cambridge: Cambridge University Press. 57–149. Talmy L (1985). ‘Lexicalization patterns.’ In Shopen T (ed.) Language typology and syntactic description. Cambridge. Tenny C & Pustejovsky J (2000). Events as grammatical objects. Stanford: CSLI Publications University of Chicago Press. van Deemter K & Peters S (eds.) (1996). Ambiguity and underspecification. Stanford: CSLI, Chicago University Press. Vendler Z (1967). Linguistics and philosophy. Ithaca: Cornell University Press. Vikner C & Jensen P (2002). ‘A semantic analysis of the English genitive. Interaction of lexical and formal semantics.’ Studia Linguistica 56, 191–226. Weinreich U (1959). ‘Travels through semantic space.’ Word 14, 346–366. Weinreich U (1963). ‘On the semantic structure of language.’ In Greenberg J (ed.) Universal of language. Cambridge: MIT Press. Weinreich U (1964). ‘Webster’s third: a critique of its semantics.’ International Journal of American Linguistics 30, 405–409. Weinreich U (1972). Explorations in semantic theory. The Hague: Mouton. Wilks Y (1975). ‘A preferential pattern seeking semantics for natural language inference.’ Artificial Intelligence 6, 53–74. Williams E (1981). ‘Argument structure and morphology.’ Linguistic Review 1, 81–114.

Generative Semantics 367

Generative Semantics
J D McCawley† and R A Harris
© 2006 Elsevier Ltd. All rights reserved.
This article is reproduced from the previous edition article by James D McCawley, volume 3, pp. 1398–1403, © 1994, Elsevier Ltd., with a foreword by Randy Harris.

Foreword (by Randy Harris)
There is little that can or should be added to the definitive epitome of generative semantics you are about to read, by James D. McCawley (1938–1999), except (1) a few words about the importance of McCawley to the movement, which is, perhaps, less prominent in an article of his own authorship than it might have been from anyone else's pen, and (2) a few additional citations. Each of the four main figures McCawley associates with generative semantics – George Lakoff (b. 1941), John Robert (Háj) Ross (b. 1938), Paul Postal (b. 1936), and himself – contributed very substantial elements to its identity, but McCawley embodied the approach, from his feet to his very lively eyebrows, and especially above. He was, in all senses of the phrase, its presiding genius. He helped bring it to life in lengthy, rollicking, mid-1960s telephone calls with Lakoff between Cambridge, Massachusetts, and Chicago. He supplied what many regarded as its strongest arguments and its most compelling analyses, some of which brought Postal into the program. He spent his entire career in the movement's epicenter, the University of Chicago. He continued to publish comprehensive works in the generative semantics spirit long after the label had fallen into disrepute, especially The Syntactic Phenomena of English (1993b) and Everything That Linguists Have Always Wanted to Know about Logic (1993a). He believed in generative semantics to the very end, not in all of its specific proposals (relentlessly honest, he cheerfully and publicly dropped analyses that no longer fit his evolving views and cheerfully welcomed views that did, no matter what their origin), and certainly not in the label itself (indeed, he renounced all theoretical labels), but in its substance. Further reading in generative semantics includes Lakoff (1971), McCawley (1976, 1979), Postal (1972), Ross (1972, 1973), Newmeyer (1980; McCawley cites the 1986 second edition, but the 1980 first edition has more on generative semantics), Lakoff (1989), Harris (1993a, 1993b), Huck and Goldsmith (1996), and McCawley (1981). Also of note are the two festschrifts for McCawley: Brentari et al. (1992) and Zwicky et al. (1970/1992).

† Deceased

Generative Semantics (by James D McCawley)
The term ‘generative semantics’ (GS) is an informal designation for the school of syntactic and semantic research that was prominent from the late 1960s through the mid-1970s and whose best-known practitioners were George Lakoff, James D. McCawley, Paul M. Postal, and John Robert Ross.

GS Positions on Controversial Issues
The name GS gives undue prominence to one of many issues on which GS-ists took positions that conflicted with those of more orthodox generative grammarians, an issue that in hindsight seems arcane because it is intelligible only against the background of the once widely accepted assumption (shared then by GS-ists and their adversaries) that there must be a single level of linguistic structure for which it is appropriate to give a system of ‘generative rules’ (i.e., rules giving a complete specification of what structures are well-formed on that level) and to which all other levels of structure are related by ‘interpretive rules.’ The issue commemorated in the name GS was that of whether the privileged level was semantic structure (the GS position) or was a level of syntactic structure as distinct from semantic structure (the position of Chomsky and other ‘interpretive semanticists’). The prominence that has been given to that arcane issue should not obscure the fact that GS-ists disagreed with other generative grammarians on many far more substantial issues, such as the following:
a. Whether sentences were ‘grammatical’ or ‘ungrammatical’ in themselves rather than relative to (linguistic and extralinguistic) contexts and to possible interpretations. GS-ists rejected the then popular idea that a language can be identified with a set of sentences and took syntactic derivations as implying that the surface form in question was grammatical not absolutely but only relative to the meaning represented in its deep structure and to any contextual factors to which steps in the derivation are sensitive.
b. The nature of semantic structure. GS-ists held that semantic structures have the same formal nature as syntactic structures, except for having semantic rather than morphological entities as their ultimate constituents, while interpretive semanticists either were reluctant to make any concrete claims about the nature of semantic structure (e.g., Chomsky, 1972: 137) or adopted a conception of semantic structure that differed considerably


in formal nature from syntactic structure (e.g., Jackendoff, 1972).
c. The nature of syntactic categories. Much work in GS attempted to reduce syntactic category distinctions to distinctions of logical category, supplemented by lexical ‘exception features’ (e.g., verbs and adjectives would both belong to the category ‘predicate,’ usually confusingly called ‘V’ by GS-ists, with adjectives differing from verbs in bearing a feature licensing the application of a transformation that inserts a copula), while other generative grammarians took syntactic categories to have at most a tangential relation to semantic categories.
d. The linguistic level or levels relevant to the choice of the lexical material of a sentence. One who holds that there is no level of syntactic deep structure as distinct from semantic structure is forced to recognize syntactic structures whose ultimate units are semantic rather than morphological in nature, such as a syntactic structure [Brutus DO SOMETHINGx (x CAUSE (BECOME (NOT (Caesar ALIVE))))] underlying Brutus killed Caesar. (Here and below, capitalization is used as an informal way of representing semantic units corresponding roughly to the words in question.) GS-ists accordingly proposed transformations that combined semantic units into complexes that could potentially underlie lexical items, e.g., ‘predicate-raising’ (proposed in McCawley, 1968) adjoined a predicate to the immediately superordinate predicate, thus allowing the derivation of such complexes as NOT-ALIVE, BECOME-NOT-ALIVE (= die), BECOME-NOT (= cease), and CAUSE-BECOME-NOT-ALIVE or CAUSE-die (= kill). Intermediate derivational stages involving both lexical and semantic units (such as CAUSE-die) needed to be recognized in order to account for, e.g., the parallelism between idiomatic combinations with come (come about, around, to . . .) and their counterparts with bring: as Binnick (1971) noted, bring corresponded not to CAUSE plus some determinate complex of semantic material but to CAUSE plus come, irrespective of whether come was used as an independent lexical unit or as part of such combinations as come about. Consequently, lexical insertion could not be restricted to a single linguistic level: applications of certain transformations had to be interspersed between lexical insertions. The combinations that could be derived through the application of predicate-raising and other ‘prelexical’ transformations were supposed to underlie ‘possible lexical items.’ Since there are infinitely many such combinations but only finitely many actual lexical items in any given language, most

correspond to no actual lexical item of the language and were supposed to reflect accidental gaps in the lexicon of the language. Lexical decomposition analyses were criticized in such works as Fodor (1970), where it was argued that the simple and complex surface forms that supposedly corresponded to the same deep structure (e.g., Brutus killed Caesar and Brutus caused Caesar to die) did not in fact have the same meanings. It was noted subsequently (McCawley, 1978) that such discrepancies in interpretation can be explained by a version of Grice’s (1967/1989) maxim of manner according to which a simple surface form is preferred to a more complex alternative except when the referent is a peripheral instance of the category defined by the given semantic structure, e.g., using indirect means to cause someone to die would be a peripheral instance of the category defined by ‘cause to cease to be alive’ and thus would not be in the part of that category where kill would preempt the use of cause to die. (Syntactic analyses involving lexical decomposition also figured prominently in Gruber, 1965, a work that early GS-ists found congenial despite some important differences between its framework and theirs.)
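The derivational steps just described can be displayed as a worked example. The sketch below follows the article's capitalization convention, but the step-by-step layout and the step labels are mine, and the DO SOMETHING layer is omitted for readability:

$[\mathrm{CAUSE}\ [\mathrm{BECOME}\ [\mathrm{NOT}\ [\mathrm{ALIVE}(\mathrm{Caesar})]]]]$
$\Rightarrow [\mathrm{CAUSE}\ [\mathrm{BECOME}\ [\mathrm{NOT\text{-}ALIVE}(\mathrm{Caesar})]]]$ (predicate-raising)
$\Rightarrow [\mathrm{CAUSE}\ [\mathrm{BECOME\text{-}NOT\text{-}ALIVE}(\mathrm{Caesar})]]$ (predicate-raising; = die)
$\Rightarrow [\mathrm{CAUSE\text{-}BECOME\text{-}NOT\text{-}ALIVE}(\mathrm{Brutus}, \mathrm{Caesar})]$ (predicate-raising; = kill)

Lexical insertion may target any stage of such a derivation, which is what licenses mixed stages such as CAUSE-die and so accounts for the come/bring parallelism noted above.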

GS Policies on the Conduct of Research
Of equal importance to these points of theory in their influence on the directions that GS research took and the reception that GS work received were several policies about the conduct of linguistic research, of which the following deserve mention here:
a. A lack of concern about the compartmentalization of the parts of a linguistic analysis or of a linguistic theory, as contrasted with the concern among Chomskyan generative grammarians with the drawing of boundaries among, e.g., syntax, semantics, and pragmatics. One important facet of this lack of concern was an egalitarian position regarding the different kinds of data that had a bearing on a linguistic analysis: whereas most generative grammarians held that syntactic analyses needed to be supported by arguments in which only syntactic facts figured, GS-ists held that facts about truth conditions, possible denotations, etc., were as relevant as any other kind of facts to evaluating analyses that purported to specify how meanings corresponded to surface forms in the given language, and that supposed syntactic facts were usually at least partly semantic in nature, in that what a speaker of a language judges acceptable is not a sentence in itself but that sentence relative to an assumed understanding of it. Another facet of this policy was GS-ists’ insistence

that all parts of a linguistic analysis were subject to the same standards of explicitness, simplicity, and factual accuracy, irrespective of how one might wish to demarcate syntax and semantics; by contrast, interpretive semantics has come only gradually and often grudgingly to subject the semantic parts of analyses to the same standards of appraisal as the syntactic parts.
b. Rejection of the dogma of generative grammar that a fixed notational system is essential to a linguistic theory and that precision can be achieved only by formulating one's analyses in the privileged notational system.
c. Adoption of a ‘static’ conception of linguistic rules: rules were thought of not in terms of the popular metaphor of assembling linguistic structures and converting structures on one level into corresponding structures on another, but as ‘derivational constraints,’ that is, as specifications of what a structure may or may not contain and of how a structure on one level may or must differ from the corresponding structure on another level (see the schematic sketch after this list). This difference in the conception of rules resulted in differences with regard to what theoretical notions posed ‘conceptual problems’ (Laudan, 1976) for each approach; thus GS-ists readily accepted rules that specified relations among nonadjacent levels of structure (what Lakoff, 1970b dubbed ‘global rules’), a notion that was unproblematic from their conceptual vantage point but outlandish from the vantage point of the ‘operation’ metaphor for rules, while rejecting the idea of ordering among rules, a notion that was unproblematic for those who accepted the ‘operation’ metaphor but was difficult to make coherent with the GS conception of rules as derivational constraints.
d. Disdain for those concerns of Chomskyan generative grammarians that had little connection with linguistic facts or with detailed linguistic description, such as mathematical models and speculation about the extent to which linguistic structure is biologically determined. While GS-ists were receptive to the idea that linguistic structure is profoundly influenced by neuroanatomy, they demanded (e.g., Lakoff, 1974: 171) that claims to that effect be backed up with solid linguistics and solid biology rather than with what they dismissed as arguments from ignorance (i.e., hasty leaps from one's failure to see how some characteristic of languages could be learned to the conclusion that it must be innate).
e. Eagerness to put in practice in their professional lives many of the ideas of the 1960s counterculture, such as policies of antiauthoritarianism, antielitism, and demystification of science and scholarship, and a belief that one's work should be pleasurable. One of many facets of the GS ethos that these policies helped to shape is what Newmeyer (1986: 133) has disparaged as ‘data-fetishism’: joy in the unearthing of novel and intriguing facts for which one is not yet in a position to provide a satisfactory analysis; GS-ists, by contrast, regarded Chomskyan generative grammarians as ‘scientific Calvinists’ (McCawley, 1980: 918).
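The schematic sketch promised in (c) can be put as follows; the formulation and notation here are a gloss of the article's prose, not anything drawn from the GS literature itself. On the ‘operation’ view, a grammar supplies rules applied in a fixed order, so a derivation is a sequence of structures each produced from the last:

$\langle T_1, \ldots, T_n \rangle \quad \text{with} \quad T_{i+1} = f_i(T_i)$

On the constraint view, a derivation $\langle T_1, \ldots, T_n \rangle$ is well formed iff it satisfies every derivational constraint of the grammar: a local constraint is a condition $C(T_i, T_{i+1})$ on adjacent levels, while a global rule in Lakoff's sense is a condition $C(T_i, T_j)$ with $j > i + 1$. On the second view, statements of rule ordering have no natural place, which is the conceptual difficulty the article describes.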

Prominent and Influential Analyses Proposed within the GS Approach
Kuhn (1970) notes that one major component of the paradigm of a scientific community is a set of ‘exemplars’: prestigious solutions to problems, presented to neophytes in the field as paragons of good science, and serving as models for solutions to new problems. (For discussion of the history of generative grammarians’ analyses of English auxiliary verbs in terms of Kuhnian notions such as ‘paradigm’ and ‘exemplar,’ see McCawley, 1985.) The exemplars for the GS community included a number of analyses that bore an intimate relation to central tenets of GS, for example, lexical decomposition analyses such as were discussed in the first section of this article, and analyses of quantified expressions as being external to their host sentences in deep structure (e.g., John has read many books = many books + John has read x) and as being moved into their surface positions by a transformation of Quantifier-Lowering (QL). (The term QL was in fact applied indiscriminately to a variety of transformations that differed according to the specific deep structure that was assumed; proposals differed with regard to whether just a quantifier or a whole quantified expression was external to the host S, what filled the deep structure position into which the quantified expression was to be moved, and where the quantifier or quantified expression was in relation to the host S.) The best-known arguments given for a QL analysis consisted in demonstrations that the many syntactic rules that were problematic when applied to structures that contained quantified elements became unproblematic if the deep structure position of a quantified expression was external to its host sentence and consequently (in virtue of the principle of the cycle) the rule had as its domain of application a structure that does not contain the quantified expression; for example, this view of the interaction between QL and the transformation of ‘Reflexivization’ explained why such pairs of sentences as Every philosopher admires himself and Every philosopher admires every philosopher differed in meaning in the way in which they did, and why reflexivization was


applicable only in the derivation of the former. A thorough survey of arguments for a QL analysis is given in McCawley (1988: Ch. 18). Several other GS exemplars were in fact as consistent with the substantive claims of interpretivist transformational grammar as with those of GS, but were embraced by GS-ists and rejected by interpretive semanticists as much because of policies on the conduct of research (see previous section) as because of any points of linguistic theory, or simply because of the historical quirk that a particular idea occurred to a member of the one camp before it occurred to any of his counterparts in the other camp. One such exemplar is the analysis of English auxiliary verbs as being verbs that take nonfinite sentential complements in the manner of such verbs as seem (Ross, 1969; McCawley, 1971), which Pullum and Wilson (1977) subsequently argued for from within an interpretive semantic framework. (A similar treatment of auxiliary verbs is found in Jespersen, 1937: 92). A second was the proposal (McCawley, 1970, subsequently disavowed by the author) that English had underlying verb–subject–object (VSO) word order, a hypothesis that is, if anything, harder to reconcile with the assumptions of GS than with those of interpretive semantics in view of the dubious nature of the assumption that the order of elements is significant in semantic structure; by contrast, there is no general policy in interpretive semantic versions of generative grammar against discrepancies between deep and surface constituent order, and indeed languages with surface VSO word order are commonly analyzed by interpretive semanticists as having deep VSO word order. Another such exemplar is the ‘performative analysis’ (Ross, 1970), in which sentences are assigned underlying structures in which a ‘hypersentence’ (Sadock, 1969, 1974) specifies the illocutionary force of the sentence, e.g., Birds fly would have an underlying structure of the form [I tell you [birds fly]].
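The reflexivization argument above turns on a meaning difference that can be rendered in first-order notation; this is a standard rendering offered here for concreteness, not notation from the original article:

Every philosopher admires himself:
$\forall x\,[\mathrm{philosopher}(x) \rightarrow \mathrm{admire}(x, x)]$

Every philosopher admires every philosopher:
$\forall x\,[\mathrm{philosopher}(x) \rightarrow \forall y\,[\mathrm{philosopher}(y) \rightarrow \mathrm{admire}(x, y)]]$

On the QL analysis, the cyclic domain for Reflexivization in the first case is the open sentence ‘x admires x,’ which contains two occurrences of the same bound variable; only there is the reflexive derivable, matching the observed contrast.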

The History of GS
The term ‘generative semantics’ first appears in Lakoff (1963/1976), a work that antedates the development of the Katz–Postal–Aspects approach to syntax and prefigures some of Lakoff’s subsequent GS work. GS originated in attempts by Postal and Lakoff to exploit novel possibilities that were opened up by the revisions of the transformational syntactic framework proposed in Katz and Postal (1964) and Chomsky (1965) and to fill gaps in the evolving framework. For example, Lakoff’s Ph.D. thesis (Lakoff, 1965/1970) originated as an attempt to develop a precise and coherent account of the way in which a lexical item could affect the applicability

of transformations to structures containing the given item, and thereby to put on a solider footing those analyses in Chomsky (1965) in which the choice of lexical items affected the possibilities for derivations. In the course of providing such a theory of ‘rule model theory.) Coincidentally, the radical revisions that interpretive semanticists were making in their versions of generative syntactic theory included the adoption of the ‘X-bar’ conception of syntactic categories (see X-Bar Theory), which identified two of the factors that affect the syntactic behavior of a linguistic unit, namely, the difference between a word unit and a phrasal unit, and the part of speech of the unit or of its head. Once a descriptive framework was available that allowed linguistic generalizations to be stated in terms of those factors, considerable progress was made in the analysis of the many syntactic phenomena in which those factors play a role. No important tenets of GS rule out the adoption of a conception of syntactic categories as defined by these factors in addition to logical categories, and indeed a conception of syntactic categories as reducible to those and other factors (with logical category being merely one of several factors that influence a unit’s syntactic behavior) is adopted in McCawley, 1977/1982 and subsequent works. However, in the 1960s and early 1970s, an assumption shared by GS-ists and interpretive semanticists impeded GS-ists from adopting such a conception of categories, namely the assumption that syntactic categories must remain constant throughout derivations: a word (with a determinate part of speech) that replaced a complex of semantic material (thus, a unit not having a part of speech) could not differ in category from the replaced unit and thus parts of speech could not be part of the category system. (The widespread misconception that GS analyses allowed linguistic units to change category in the course of derivations in their analysis of, for example, nominalizations overlooks the fact that, according to GS-ists’ assumptions, verbs and their nominalizations belonged to the same category. Anyway, analyses of any kind in which the verb invent is a constituent of the noun invention are not committed to any assumption that the former changes its category in the derivation of the latter: it is whatever it is, regardless of what it is contained in.) Since interpretive semanticists did not require that deep structures match semantic structures (and indeed took delight in arguing that they did not match), there was no obstacle to their having the same categories in deep as in surface structures while drawing the full range of category distinctions provided for by X-bar syntax. The interpretive semantic research program was thus able to become ‘progressive’ in the sense of Lakatos (1978) because of something extraneous to the issues


that were the loci of the substantive disputes between GS and interpretive semantics.

See also: Compositionality; Dynamic Semantics; Evolution of Semantics; Grammatical Meaning; Logic and Language; Logical and Linguistic Notation; Logical Form; Pre-20th Century Theories of Meaning; Propositional and Predicate Logic; Selectional Restrictions; Syntax-Semantics Interface.

Bibliography
Bach E & Harms R T (eds.) (1968). Universals in linguistic theory. New York: Holt, Rinehart and Winston.
Binnick R I (1971). ‘Come’ and ‘bring.’ Lin 2, 260–267.
Brentari D, Larson G N & MacLeod L A (eds.) (1992). The joy of grammar: a festschrift in honor of James D. McCawley. Philadelphia: Benjamins.
Chomsky N A (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky N A (1972). Studies on semantics in generative grammar. The Hague: Mouton.
Fodor J A (1970). ‘Three reasons for not deriving ‘kill’ from ‘cause to die’.’ Lin 1, 429–438.
Grice H P (1967). ‘Logic and conversation.’ In Grice H P (ed.) (1989) Studies in the way of words. Cambridge, MA: Harvard University Press.
Gruber J S (1965). ‘Studies in lexical relations.’ In Gruber J S (ed.) (1976) Lexical structures in syntax and semantics. Amsterdam: North-Holland.
Harris R (1990). The generative heresy. Troy, NY: Rensselaer Polytechnic Institute, unpublished Ph.D. diss.
Harris R A (1993a). ‘Generative semantics: secret handshakes, anarchy notes, and the implosion of ethos.’ Rhetoric Review 23, 125–160.
Harris R A (1993b). The linguistics wars. New York: Oxford University Press.
Huck G J & Goldsmith J A (1996). Ideology and linguistic theory: Noam Chomsky and the deep structure debates. London: Routledge.
Jackendoff R S (1972). Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.
Jespersen O (1937). Analytic syntax. London: Allen and Unwin.
Katz J J & Postal P M (1964). An integrated theory of linguistic description. Cambridge, MA: MIT Press.
Kuhn T (1970). ‘Postscript.’ The structure of scientific revolutions (2nd edn.). Chicago, IL: University of Chicago Press.
Lakatos I (1978). ‘Falsification and the methodology of research programmes.’ In Lakatos I & Musgrave A (eds.) Criticism and the growth of knowledge. Cambridge: Cambridge University Press.
Lakoff G (1963). ‘Toward generative semantics.’ In McCawley J (ed.) (1976) Notes from the linguistic underground (Syntax and Semantics 7). New York: Academic Press.
Lakoff G (1970a). Irregularity in syntax. New York: Holt, Rinehart and Winston.
Lakoff G (1970b). ‘Global rules.’ Lg 46, 627–639.

Lakoff G (1971). ‘On generative semantics.’ In Steinberg D D & Jakobovits L A (eds.) Semantics: an interdisciplinary reader in philosophy, linguistics and psychology. Cambridge, UK: Cambridge University Press. 232–296.
Lakoff G (1974). ‘Interview conducted by H Parret.’ In Parret H (ed.) Discussing language. The Hague: Mouton.
Lakoff G (1987). Women, fire, and dangerous things. Chicago, IL: University of Chicago Press.
Lakoff R T (1989). ‘The way we were; or, the real actual truth about generative semantics: a memoir.’ Journal of Pragmatics 13, 939–988.
Laudan L (1976). Progress and its problems. Berkeley, CA: University of California Press.
Levi J N (1978). The syntax and semantics of complex nominals. New York: Academic Press.
McCawley J D (1968a). ‘The role of semantics in a grammar.’ In Bach E & Harms R T (eds.).
McCawley J D (1968b). ‘Lexical insertion in a transformational grammar without deep structure.’ Chicago Linguistic Society Papers 4, 71–80.
McCawley J D (1971). ‘Tense and time reference in English.’ In Fillmore C & Langendoen D T (eds.) Studies in linguistic semantics. New York: Holt, Rinehart and Winston.
McCawley J D (1975). ‘Review of Chomsky 1972.’ Studies in English Linguistics 5, 209–311.
McCawley J D (1976). Grammar and meaning. New York: Academic Press.
McCawley J D (1977). ‘The nonexistence of syntactic categories.’ In Second Annual Metatheory Conference Proceedings. East Lansing, MI: Michigan State University.
McCawley J D (1978). ‘Conversational implicature and the lexicon.’ In Cole P (ed.) Pragmatics (Syntax and Semantics 9). New York: Academic Press.
McCawley J D (1979). Adverbs, vowels, and other objects of wonder. Chicago: University of Chicago Press.
McCawley J D (1980). ‘Review of 1st edn. of Newmeyer 1986.’ Linguistics 18, 911–930.
McCawley J D (1981). ‘Review of F. Newmeyer, Linguistic theory in America.’ Linguistics 18, 911–930.
McCawley J D (1982). Thirty million theories of grammar. London: Croom Helm.
McCawley J D (1985). ‘Kuhnian paradigms as systems of markedness conventions.’ In Makkai A & Melby A (eds.) Linguistics and philosophy: studies in honor of Rulon S. Wells. Amsterdam: Benjamins.
McCawley J D (1988). The syntactic phenomena of English (2 vols). Chicago, IL: University of Chicago Press.
McCawley J D (1993a). Everything that linguists have always wanted to know about logic (but were ashamed to ask) (2nd edn.). Chicago: University of Chicago Press.
McCawley J D (1993b). The syntactic phenomena of English (2nd edn., 2 vols). Chicago: University of Chicago Press.
Musgrave A (1976). ‘Why did oxygen supplant phlogiston? Research programmes in the Chemical Revolution.’ In Howson C (ed.) Method and appraisal in the physical sciences. Cambridge: Cambridge University Press.
Newmeyer F J (1980). Linguistic theory in America. New York: Academic Press.
Newmeyer F J (1986). Linguistic theory in America (2nd edn.). Orlando, FL: Academic Press.

Postal P M (1972). ‘The best theory.’ In Peters P S (ed.) The goals of linguistic theory. Englewood Cliffs, NJ: Prentice Hall.
Postal P M (1974). On raising. Cambridge, MA: MIT Press.
Pullum G K & Wilson D (1977). ‘Autonomous syntax and the analysis of auxiliaries.’ Lg 53, 741–788.
Ross J R (1969). ‘Auxiliaries as main verbs.’ Studies in Philosophical Linguistics 1, 77–102.
Ross J R (1970). ‘On declarative sentences.’ In Jacobs R & Rosenbaum P S (eds.) Readings in English transformational grammar. Waltham, MA: Ginn.
Ross J R (1972). ‘Doubl-ing.’ In Kimball J (ed.) Syntax and semantics, vol. 1. New York: Seminar Press. 157–186.

Ross J R (1973). ‘Slifting.’ In Gross M, Halle M & Schützenberger M P (eds.) The formal analysis of natural languages. The Hague, The Netherlands: Mouton. 65–121.
Sadock J (1969). ‘Hypersentences.’ Papers in Linguistics 1, 283–371.
Sadock J (1974). Toward a linguistic theory of speech acts. New York: Academic Press.
Zwicky A M, Salus P H, Binnick R I & Vanek A L (eds.) (1992). Studies out in left field: defamatory essays presented to James D. McCawley. Current inquiry into language and linguistics (vol. 4). Amsterdam: Benjamins.

Generic Reference
G Carlson, University of Rochester, Rochester, NY, USA
© 2006 Elsevier Ltd. All rights reserved.

Forms of Generic Reference
Generic reference is the term commonly used to describe noun-phrase reference in sentences that express generalizations (see Generics, Habituals and Iteratives). Some common examples are found in (1)–(3): (1) Potatoes are native to South America. (2) The lion is a fearsome beast. (3) A pencil is used for writing.

Generic reference is usually understood as making reference to kinds of things. When we speak of ‘kinds,’ we intend a classification system that is based on the denotations of nominal expressions, or sortals, of the language (for one view, see Gupta, 1980). It is now commonly accepted that reference is not limited to individuals or pluralities of individuals, but extends to kinds or types of things as well. This is most evident in noun phrases of the type ‘‘this kind of animal,’’ which evidence an overt postdeterminer of the class of ‘kind,’ ‘type,’ ‘sort,’ ‘species,’ and so on. (4) This kind of animal hibernates in the winter.

These kind-referring phrases can appear in quantified contexts as well. The analysis then is that the quantifier ranges over kinds of things, just as it ranges over individuals in the more usual instances. (5) Three kinds of swallows are found in the northeastern United States. (6) Every type of tree exchanges carbon dioxide for oxygen.

When the postdeterminer element is removed, there remains the possibility of interpreting the noun phrase as referring to or quantifying over kinds. This normally results in a type/token ambiguity. For instance, in (7) one could be talking about individual flowers in a given context or kinds of flowers; similarly for (8). This reading is called a taxonomic reading in Krifka et al. (1995). (7) Sharon photographed every flower. (8) Several penguins inhabit this frozen wilderness.

Examples such as (7) and (8) are ambiguous between a ‘kind’ reading and the more common individual reading. On the taxonomic reading, the denotation of the head noun is partitioned into subkinds, though this is done contextually since there are multiple ways to naturally partition any domain. For instance, automobiles can be partitioned according to body style (sedan, sports car, station wagon, etc.) or by manufacturer (BMW, Mazda, Volvo, etc.), among other ways. It is commonly noted that if one takes a mass term and syntactically treats it as a count term, by pluralizing it or pairing it with a determiner that selects for singular count nouns only, a taxonomic reading may emerge. Thus, in (9) we are speaking of kinds of wine, and in (10) of kinds of metal: (9) Three wines are stored in the cellar. (10) Every metal conducts electricity to some degree.
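The role of context in the taxonomic reading can be made explicit with a small amount of notation (mine, not Krifka et al.'s): let $\mathrm{Part}_C(N)$ be the set of subkinds into which the context $C$ partitions the kind named by the noun $N$. The taxonomic reading of (7) is then approximately

$\forall k\,[k \in \mathrm{Part}_C(\mathrm{flower}) \rightarrow \mathrm{photographed}(\mathrm{sharon}, k)]$

and (9) says that (at least) three members of $\mathrm{Part}_C(\mathrm{wine})$ are stored in the cellar. Choosing a different $C$ (body style rather than manufacturer, in the automobile case) changes the partition and with it the truth conditions.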

Another means by which kinds are referred to in natural language is by definite singular noun phrases. In English, this has a stylistically technical tone, but this is not a general feature of other languages. Three possible examples are: (11) The computer has changed society in many ways.

(12) Scientists have now documented the entire life cycle of the three-toed sloth. (13) The self-cleaning plow was invented by John Deere.

These exemplify the definite generic on the most natural readings of the noun phrases in these examples. This reading appears in addition to the much more frequent individual-denoting use of the definite article, and often results in ambiguity. Generally unambiguous is the use of the definite article with only an adjective (e.g., ‘‘The rich are often oppressors of the poor’’). Other types of definite kind reference include uses of the proximal and distal demonstratives (this, that) in the plural. The demonstrative is not, on one interpretation, an actual indexical; instead, it colloquially conveys an emotional attitude toward the kind (Bowdle and Ward, 1995). It appears to be the same use of the demonstrative as when it accompanies a proper name (e.g., ‘‘That Roberto has done it again’’). (14) Those spotted owls (i.e., the kind spotted owl) are constantly being talked about by environmentalists. (15) Who invented these computers, anyway?

In addition, there are noun phrases that employ adjectives like ‘typical,’ ‘average,’ or ‘normal,’ which have a kind-reference reading, as in ‘‘Your typical businessperson takes eight plane trips per year.’’ Supplementing definite generics are the consciously introduced Latinate natural kind names, lacking a definite article, which always have an elevated scientific tone no matter the language. This includes examples like ‘felis domesticus’ (cat) or ‘acer saccharum’ (sugar maple tree). These are unambiguous and always denote kinds. Though not of consciously Latinate origin, the use of ‘man’ in English as a generic functions in much the same way. Beyond these are additional means of kind reference in natural language. Bare plurals – that is, plural noun phrases lacking a determiner or quantifier element, at least on one reading – may refer to kinds. The following are three examples: (16) Airplanes have made intercontinental travel a common event. (17) Lions once ranged from the tip of Africa to eastern Siberia. (18) Hunting dogs are most closely related to wolves.

Functioning much the same as bare plurals are undetermined mass expressions (in French, the definite article must be employed), which allow for generic reference to unindividuated domains.

(19) Water is a liquid. (cf. Fr. ‘‘L’eau est un liquide.’’) (20) Hydrogen is the most common element in the universe.

Finally, the singular indefinite article allows for a generic usage, as in the following: (21) A triangle has three sides. (22) A potato contains vitamin C.

The bare plural and the indefinite singular are commonly distinguished from the definite singular in English in that the former two usually allow for additional descriptive material in the form of modification, whereas the noun phrases in the definite generic instance are much more limited. (23) A cake without baking powder/Cakes without baking powder/??The cake without baking powder fails to rise properly in the oven.

Unlike the bare plurals or indefinite singulars, the definite singular is basically limited to expression of well-established kinds, those taken to be already familiar from one’s background knowledge. Furthermore, as Vendler (1971) notes, it does not appear they can be ‘too general.’ Thus, alongside ‘the parabola’ and ‘the circle,’ one does not find generic reference to ‘the curve.’ Currently, a full account of these facts is lacking. Cross-linguistically, generic reference is carried out by noun phrases with definite and indefinite articles and with determinerless expressions quite generally. In languages without articles, the determinerless form typically has a generic interpretation in at least some sentence positions. While in English the plural form of the definite has generic reference only marginally at best, in German, which is closely related to English, the plural definite may take generic reference quite easily (Delfitto, 1998). If there are languages with articles or other determiners specific to generic reference, they have yet to be brought to general attention, or they may not exist at all. It is important to distinguish generic reference from the type of sentence in which the expression appears. While generic reference takes place most commonly within the context of a generic or habitual sentence, not all generic or habitual sentences have a noun phrase with generic reference, and generic reference may take place within sentences that are episodic or that make reference to specific events. The clearest examples of this are sentences with the definite singular generic exhibiting the avant-garde reading (Krifka et al., 1995). Consider the following example: (24) The horse arrived in the New World around 1500.


This means that some horses were introduced about that time, but implies that the event was the first time any modern horses had been in that area. To observe a shipment of horses arriving in the western hemisphere in 1980 and use (24) modified by ‘‘in 1980’’ to describe that event would be inappropriate. Other instances where there is kind-reference in episodic sentences, on at least one reading, include the following three examples: (25) Today, Professor James lectured to us on the history of dinosaurs. (26) Marconi invented the radio. (27) Monkeys evolved from lemurs.

Theory of Generic Reference
While most semanticists agree that at least certain noun phrases refer to (or quantify over) kinds of things, there is a tradition in which apparent kind reference is treated in terms of quantification over individuals. Stebbings (1930), for instance, suggests that the sentence ‘‘The whale is a mammal’’ expresses a universal proposition (similar to ‘‘All whales are mammals’’), as does ‘‘Frenchmen are Europeans.’’ Russell comments that the sentence ‘‘Trespassers will be prosecuted’’ ‘‘means merely that, if one trespasses, he will be prosecuted’’ (1959: 102), which reduces the analysis of the apparent kind reference (trespassers) to an indefinite in a conditional. However, Moore (1944), in response to Russell’s theory of descriptions, cites examples like ‘‘The triangle is a figure to which Euclid devoted a great deal of attention’’ and ‘‘The lion is the king of beasts,’’ both of which convincingly resist implicit quantificational or conditional analyses. The most convincing evidence for kind reference in the semantics stems from predicate positions that select for something other than individuals and pluralities of individuals and that readily accept the types of noun phrases reviewed earlier. These are called kind-level predicates. Examples (26) and (27) contain kind-level predicates. While an individual might invent something, the direct object must express a kind of thing and not a particular individual or set of individuals. The verb ‘evolve’ relates species and other levels of biological classes to other such classes, but not individuals to individuals. The following are other examples of kind-level predicates: (28) Dogs are common/widespread/rare. (29) Insects are numerous. (30) The elm is a type/kind of tree.

(31) The gorilla is indigenous to Africa. (32) The Chevrolet Impala comes in 19 different colors.
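The selectional argument can be stated in model-theoretic terms. Suppose, as a working assumption made explicit here rather than in the text above, that the domain contains kind-level individuals (e.g., $k_{\mathrm{radio}}$) alongside ordinary ones; a kind-level predicate is then one whose relevant argument position is restricted to kinds:

$\mathrm{invent}(\mathrm{marconi}, k_{\mathrm{radio}})$ is well formed, since $k_{\mathrm{radio}}$ is a kind;
$\mathrm{invent}(\mathrm{marconi}, r)$, for an ordinary individual radio $r$, is anomalous on the relevant reading.

No quantificational paraphrase over individual radios (‘Marconi invented every radio,’ ‘Marconi invented some radios’) yields the truth conditions of (26), which is why such predicates are the strongest evidence for genuine kind reference.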

Kind-level predicates are relatively infrequent in any language. Most predicates fall into the classes of either individual level or stage level. Roughly speaking, stage-level predicates speak of highly temporary events and states, such as running across a lawn, eating a sandwich, or being asleep. Individual-level predicates, on the other hand, speak of more permanent states of affairs, such as knowing French, liking the opera, or being intelligent. Typically, the predicates of a habitual or generic sentence (‘‘x cooks wonderfully’’) are individual level. These are discussed in more detail in Carlson (1980), Kratzer (1995), Fernald (2000), and by others. Both stage-level and individual-level predicates select for noun phrases that denote individuals and pluralities of individuals. However, kind-denoting expressions appear with these predicates naturally as well. With both stage-level and individual-level predicates, a quantificational analysis of kind-denoting phrases (quantifying over individuals of that kind) becomes easily possible. The kind-level predicates do not typically allow for the use of the indefinite singular. An example like (33) is generally deemed not very acceptable: (33) ?A lion is a species of animal. (cf. the lion, lions)
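One influential way of making the stage-level/individual-level contrast formally explicit, associated with the works just cited (in particular Kratzer, 1995), gives stage-level predicates an extra spatiotemporal (or event) argument. The sketch below is a simplified version of that idea, not the only formulation in the literature:

stage-level: $\mathrm{asleep}(x, l)$, true of an individual $x$ at a spatiotemporal location $l$;
individual-level: $\mathrm{knows\text{-}French}(x)$, a property of $x$ simpliciter, with no location argument.

The temporary/permanent intuition then corresponds to whether the predicate's truth can vary across the locations $l$ occupied by one and the same individual.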

A continuing controversy centers on the analysis of the English bare plural construction, which has an unlimited distribution in comparison to bare plurals in other languages with articles, such as Spanish or Italian (e.g., Laca, 1990). English bare plurals appear to have different interpretations in different contexts. With individual-level predicates, they have a general interpretation, one that is quantificationally similar to ‘all’ or ‘most.’ (34) Cats (roughly, all or most cats) sleep a lot. (35) Hammers (roughly, all or most hammers) are used for driving nails.

On the other hand, bare plurals also have an existential interpretation in other contexts that is similar to ‘‘some’’ in force. (36) Priceless works of art (i.e., some works) were delivered to the museum yesterday. (37) The rioters threw stones through shop windows, shattering them.
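The two interpretations just illustrated can be contrasted in first-order terms; this is an informal rendering of the paraphrases in the text, not a proposal from the works cited:

(34) generic: $\mathrm{Most}\,x\,[\mathrm{cat}(x)]\ [\mathrm{sleeps\text{-}a\text{-}lot}(x)]$ (or with $\forall$)
(36) existential: $\exists x\,[\mathrm{priceless\text{-}work\text{-}of\text{-}art}(x) \wedge \mathrm{delivered}(x)]$

The question taken up next is whether both readings should instead be derived from a single kind-denoting term, with the apparent quantificational force contributed by the predicate or the context rather than by the bare plural itself.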

The primary question is whether in these instances, as well, the bare plural construction is kind-denoting, as most believe it is with kind-level predicates. Carlson (1980) and Chierchia (1998) argue that such a unified


analysis is not only possible but also desirable, and both present analyses showing how it can be accomplished. However, others argue that more adequate insight can be gained through an analysis that differentiates true instances of kind reference from those instances where bare plurals appear with individual-level and stage-level predicates and that a quantification over individuals approach is better taken (see Wilkinson, 1991; Diesing, 1992; Krifka et al., 1995).

See also: Aspect and Aktionsart; Category-Specific Knowledge; Existence; Generics, Habituals and Iteratives; Indefinite Pronouns; Mass Expressions; Sense and Reference.

Bibliography
Bacon J (1971). ‘Do generic descriptions denote?’ Mind 82, 331–347.
Bowdle B & Ward G (1995). ‘Generic demonstratives.’ In Proceedings of the Twenty-First Annual Meeting of the Berkeley Linguistics Society. 32–43.
Burton-Roberts N (1976). ‘On the generic indefinite article.’ Language 52, 427–448.
Carlson G (1980). Reference to kinds in English. New York: Garland.
Carlson G & Pelletier F J (eds.) (1995). The generic book. Chicago: University of Chicago Press.
Chierchia G (1995). ‘Individual level predicates as inherent generics.’ In Carlson G & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 176–223.
Chierchia G (1998). ‘Reference to kinds across languages.’ Natural Language Semantics 6, 339–405.
Cohen A (2001). ‘On the generic use of indefinite singulars.’ Journal of Semantics 18, 183–209.
Dahl Ö (1975). ‘On generics.’ In Keenan E (ed.) Formal semantics of natural language. Cambridge: Cambridge University Press. 99–111.
Delfitto D (1998). ‘Bare plurals.’ In Everaert M & van Riemsdijk H (eds.) Encyclopedia of syntactic case studies. Wassenaar: SynCom Group (electronic version, 61 pp.).

Delfitto D (2002). Genericity in language. Alessandria: Edizioni dell’Orso.
Diesing M (1992). Indefinites. Cambridge, MA: MIT Press.
Fernald T (2000). Predicates and arguments. Oxford: Oxford University Press.
Greenberg Y (2003). Manifestations of genericity. New York: Routledge.
Gupta A (1980). The logic of common nouns. New Haven, CT: Yale University Press.
Kratzer A (1995). ‘Individual level predicates vs. stage level predicates.’ In Carlson G & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 125–175.
Krifka M, Pelletier F J, Carlson G, ter Meulen A, Chierchia G & Link G (1995). ‘Genericity: an introduction.’ In Carlson G & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 1–124.
Laca B (1990). ‘Generic objects: some more pieces of the puzzle.’ Lingua 81, 25–46.
Landman F (1991). Structures for semantics. Dordrecht: Kluwer.
Moore G E (1944). ‘Russell’s ‘‘Theory of descriptions.’’’ In Schilpp P (ed.) The philosophy of Bertrand Russell. New York: Tudor Publishing. 175–226.
Ojeda A (1993). Linguistic individuals. Stanford, CA: CSLI.
Russell B (1959). The philosophy of logical atomism. Minneapolis: University of Minnesota.
Schubert L K & Pelletier F J (1989). ‘Generically speaking, or, using discourse representation theory to interpret generics.’ In Chierchia G, Partee B & Turner R (eds.) Property theory, type theory, and semantics 2: Semantic issues. Dordrecht: Kluwer. 193–268.
Stebbings S (1930). A modern introduction to logic. London: Methuen.
Vendler Z (1971). ‘Singular terms.’ In Steinberg D & Jakobovits L (eds.) Semantics: an interdisciplinary reader in philosophy, linguistics, and psychology. Cambridge: Cambridge University Press. 115–133.
Wilkinson K (1991). ‘Studies in the semantics of generic NP’s.’ Ph.D. diss., University of Massachusetts, Amherst.

Generics, Habituals and Iteratives
G Carlson, University of Rochester, Rochester, NY, USA
© 2006 Elsevier Ltd. All rights reserved.

Sentences may express information about particular events, such as: (1) Mary ate oatmeal for breakfast this morning

But sentences can also express regularities about the world that constitute generalizations over events and activities:

(2) Mary eats oatmeal for breakfast

Unlike (1), the truth of (2) does not depend on Mary eating oatmeal for breakfast at any particular time and place, but instead it is the regularity of occurrence that is asserted, and the truth conditions of the sentence are tied to that regularity. Sentences of the sort exemplified in (1) are sometimes called ‘episodic’ sentences. The class of episodic sentences also includes examples where a plurality of individuals or events occurs. The examples (3–5)


exemplify sentences that are episodic but whose truth values depend on multiple occurrences of particular events: (3) Mary and George ate oatmeal for breakfast (4) Each student in the class handed in a completed assignment (5) Every day last week, Mary ate lunch at a restaurant
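The contrast can be sharpened with an explicit rendering of (5), which, though it involves many events, quantifies over a bounded, particular set of occasions; the event notation, with $\tau(e)$ the running time of the event $e$, is mine:

$\forall d\,[\mathrm{day\text{-}of\text{-}last\text{-}week}(d) \rightarrow \exists e\,[\mathrm{ate\text{-}lunch\text{-}at\text{-}a\text{-}restaurant}(\mathrm{mary}, e) \wedge \tau(e) \subseteq d]]$

The truth of (5), like that of (1), is settled by particular, datable events; no comparable finite inventory of events settles (2).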

Such examples are episodic. In contrast, examples such as (1) are often called ‘habitual’ or ‘generic’ sentences. In some instances, habituals are termed ‘iteratives,’ but insofar as the terminology indicates that iteratives and habituals are the same thing, it can be misleading (see Comrie (1976: 27) for extended comments on the use of the term in Slavonic linguistics). Iteratives are a subclass of aspectual operators and do not produce generic or habitual sentences but rather are episodic in nature. Payne describes iteratives in the following way: ‘‘Iterative aspect is when a punctual event takes place several times in succession’’ (1997: 39). That is, what is produced is a series of events of the same type, which occur in a sequence (i.e., not simultaneously) and are intuitively connected with one another in time (i.e., not spaced ‘too far’ apart). Such iterative interpretations are especially common for semelfactive verbs such as cough or flap (a wing). In English, John coughed can be understood as saying that he coughed once, or in a series, repetitively. In some languages, iteratives are marked morphologically, typically by an inflectional affix on the verb though commonly in other ways such as by reduplication, as in Quileute (Greenberg et al., 1978). Iteratives, when specifically marked, also lend themselves to additional implications, especially those of intensity and/or prolongation. In English, John coughed and coughed is iterative in interpretation, like one understanding of the simple John coughed, but in addition implies that he coughed each time with intensity and/or that he coughed for a prolonged period. Often there are implications that the intensity or prolongation is inappropriate or a sign that something is wrong. These implications, however, are not a part of iterativity per se, but an additional, associated meaning above and beyond. It is also commonly noted that progressive or continuous aspectual constructions often imply iteration. In The bird is flapping its wings the most natural interpretation is that there is a series of wing flappings (though an extended single flap might also be described in this way). But again iteratives are not the same as progressive and continuous constructions, having different and distinguishable semantic contents. Unlike iteratives, habituals and generics do not denote a connected series of events, even though
there is the root intuition that repetitiveness is involved. Terminology is not entirely standardized; one also finds the terms 'customary,' 'usitative,' 'gnomic,' and 'frequentative' applied to generics and habituals, though occasionally with more specialized meanings. The term generic predominates in the formal semantics literature and habitual appears most dominant in the more descriptive literature. Some reserve the term generic for habitual sentences with subject noun phrases that have generic rather than specific reference (see Generic Reference), though this is not standard practice. The term habitual itself is potentially misleading. Lyons notes that, "The term 'habitual' is hallowed by usage; but it is something of a misnomer in that much of what linguists bring within its scope would not generally be thought of as being a matter of habit" (1977: 716). The following examples would also qualify as habituals according to the general pattern of usage:

(6) Glass breaks easily (a disposition)
(7) Bishops move diagonally (a rule of a game)
(8) Robert works for the government (an occupation)
(9) Soap is used to remove dirt (a function)
(10) A wise man listens more than he speaks (a moral injunction)

Like iteratives, generics and habituals may be morphologically marked, normally by an inflectional affix or a free form in the verb's 'auxiliary' complex, though also through a wide variety of other formal means. Habitual markers are typically classified as part of the aspectual system, though this morphological marking is in addition to the variety of means lexically available (e.g., 'tends to,' 'has a habit of') and is a component of meaning of most frequency adverbs such as usually, often, or always. Payne (1997) cites the example of Ewe:

(11) É-du-a mɔli
3SG-eat.HAB rice
'S/he eats rice'

Dahl (1985) in a cross-linguistic survey notes similar marking in Guarani (Paraguayan Guaraní), Georgian, Kammu (Khmu), Czech, Akan, Wolof, and other languages. Similar markers can be found in a wide variety of other languages, noted in specific studies (e.g., Swahili, Guyanese English, Tamazight, Awa, Zapotec (Istmo Zapateco), Navajo). These co-occur with predicates classified as events and processes, but not, in general, with stative predicates. Most commonly, though, in languages that have specific morphological expression of habituality, one can also express habituality via a regular (usually tensed) form, often in the imperfective if the language makes an imperfective/perfective distinction, though also
very commonly in the maximally unmarked tensed forms of the language (Dahl, 1995). Semantic differences are occasionally noted in languages that have a marked and an unmarked expression of habituality, but little research has been conducted on this question. One particular form appears with considerable regularity. This is a specialized remote past tense form, functioning like English used to. Further, formal distinctions not associated with the auxiliary and inflectional system of the verb also may be reflective of a habitual/episodic distinction, as in the wa/ga distinction in Japanese, the når/da 'when' distinction in Norwegian, or the ser/estar distinction in Spanish. Though on occasion iterative forms and habitual forms are identical, this is not indicative of any special semantic connection, as more commonly languages use syncretic future forms, progressives, and imperfectives to express habituality, among a wide variety of other possibilities. Whereas generics and habituals appear to make reference to a multiplicity of events, reminiscent of the episodic examples in (3–5) or iteratives, generics and habituals are quite different in character. For one, the resultant sentence is aspectually stative (though derived from a nonstative) or at least shares major properties with other statives. For instance, in a narrative discourse a generic sentence does not 'move' the time forward, as do events and processes, but rather, like other instances of statives, appears to provide background or setting information. Like statives, generics and habituals also observe the subinterval property (Dowty, 1979). That is, if a habitual is true for a period of time, it is also true for any smaller interval within that same period of time, no matter how short. Generics and habituals also have, as pointed out by Dahl (1975), an intensional component of meaning lacking in episodics. This intensionality may be observed, in part, in the 'nonaccidental' understanding of generics and habituals. This is the notion that the varying events generalized over are a part of a larger generalization, and not some happenstance (Schubert and Pelletier, 1987). For example, imagine you encounter some very, very small town in which all residents, entirely unbeknownst to one another, chew (only) sugarless gum. It could be sheer happenstance, but if one accepts the following as true:

(12) Residents of this town chew sugarless gum

one commits to the notion that this is not sheer happenstance, but that there is some underlying cause or causes of this particular behavior (e.g., the town dentist instructs people to avoid sugared gum; it is the only brand the local store carries). The particular cause or causes need not be specifically identified,
but it does give rise to the counterfactual implication that if a person were to become a town resident, he or she too would likely chew sugarless gum, as a result of becoming a town resident, even if they had not done so before. Being generalizations, generics and habituals also have the property of tolerating exceptions. The initial instinct is to treat generics and habituals as universally quantified sentences. However, if you learn that Elena eats oatmeal for breakfast, she need not eat oatmeal at every breakfast. Or, the commonly found example Birds fly is tolerant of exceptional penguins, ostriches, and other flightless birds. The limits of this exceptionality have proven extremely difficult to quantify—how long must Elena go without eating oatmeal for breakfast before the generalization no longer holds? How many flightless birds need there be in order for Birds fly no longer to be thought true? While some quantitative understanding of exceptionality plays a role, most researchers agree that generics and habituals require an additional component of meaning, or a different arrangement of meaning altogether, to give an account of exceptionality. The most commonly assumed semantic analysis of habituals and generics is outlined in Krifka et al. (1995). This is a fundamentally quantificational analysis of habitual sentences. It posits an operator, often implicit in the linguistic form, which is a dyadic relation between the interpretations of two constituents partitioned from the sentence it is operating on, a 'restrictor' and a 'matrix' or 'nuclear scope,' in keeping with the most commonly accepted semantic analysis of quantification. As this dyadic operator is focus-sensitive, generic sentences can be ambiguous, according to which constituent meaning is assigned to the restrictor and matrix. For example, Milsark (1974) notes the ambiguity of the sentence Typhoons arise in this part of the Pacific. As discussed in Carlson (1989), if the subject noun phrase typhoons is understood as the restrictor, and the predicate of the sentence is understood as the matrix, then the interpretation assigned is akin to asserting that generally speaking, if something is a typhoon, it then arises in this part of the Pacific Ocean (and not elsewhere). If, on the other hand, in this part of the Pacific is assigned to the restrictor, then the resulting interpretation is that, in this part of the Pacific, there arise typhoons (from time to time), and perhaps elsewhere as well. Word order in English and other languages can affect how the sentence is partitioned by this and other focus-sensitive operators (Diesing (1992) discusses German (Standard German) at some length). For instance, the English sentence In this part of the Pacific arise typhoons has only the latter of the two readings.
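
The two readings can be made explicit in the restrictor/matrix format. The following schematic formulas are a simplified rendering in the spirit of Krifka et al.'s dyadic notation; the situation variable s and the exact wording of the restrictors are illustrative, not quoted from that work:

\[
\text{Reading 1 (typhoons as restrictor):}\quad
\mathrm{GEN}[x;s]\,\bigl(\,x \text{ is a typhoon} \wedge s \text{ is a normal situation involving } x\;;\; x \text{ arises in this part of the Pacific in } s\,\bigr)
\]
\[
\text{Reading 2 (the locative as restrictor):}\quad
\mathrm{GEN}[s;x]\,\bigl(\,s \text{ is a normal situation in this part of the Pacific}\;;\; x \text{ is a typhoon} \wedge x \text{ arises in } s\,\bigr)
\]

On Reading 1, generic quantification runs over typhoons; on Reading 2, it runs over situations in the relevant location, with typhoons introduced only in the matrix.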


Krifka et al. (1995) describe the generic operator as a "default quantifier" in order to account for exceptionality and intensionality. Other researchers take a different approach, such as modifying possible worlds to enrich the interpretive structure with a notion of 'normality' or 'prototypicality' (e.g., Eckardt, 2000; Heyer, 1987). The basic idea here is that one can reduce the generic operator to a universal statement relativized only to the most typical or normal individuals of the domain, or to "normal worlds." Cohen (1999) suggests that the generic operator is a quantificational operator similar in contents to 'most,' though relativized to a partition of individuals and situations that is pragmatically driven, and not determined by the focus structure of the sentence. For instance, in asserting that mammals bear live young, one is partitioning the set of mammals by gender and age, as only mature (fertile) females have such capability. Meanings of habituals and generics are often expressed in Artificial Intelligence and Computer Science by way of default reasoning systems and non-monotonic logics. Such systems are designed to draw logical conclusions in the absence of complete information. According to this understanding, then, a generic or habitual is information assumed to hold for any given relevant instance, unless specific information is given otherwise.
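
One standard way of cashing this out is a Reiter-style default rule; the formalization of Birds fly below is offered as illustration only, since the article does not commit to a particular non-monotonic system:

\[
\frac{\mathrm{Bird}(x) \;:\; \mathrm{Fly}(x)}{\mathrm{Fly}(x)}
\]

Read: if x is known to be a bird, and assuming that x flies is consistent with everything else known, conclude that x flies. Adding Penguin(tweety) together with \(\forall x\,(\mathrm{Penguin}(x) \to \neg\mathrm{Fly}(x))\) then blocks the conclusion for tweety without falsifying the generic itself; this is the non-monotonic counterpart of the exception tolerance discussed above.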

See also: Aspect and Aktionsart; Event-Based Semantics; Extensionality and Intensionality; Generic Reference; Quantifiers.

Bibliography

Carlson G (1989). 'The semantic composition of generic sentences.' In Chierchia G, Partee B & Turner R (eds.) Property theory, type theory, and semantics, vol. 2: Semantic issues. Dordrecht: Kluwer. 167–192.
Cohen A (1999). Think generic: The meaning and use of generic sentences. Stanford, CA: CSLI.
Comrie B (1976). Aspect. Cambridge: Cambridge University Press.
Dahl Ö (1975). 'On generics.' In Keenan E (ed.) Formal semantics of natural language. Cambridge: Cambridge University Press. 99–111.
Dahl Ö (1985). Tense and aspect systems. London: Blackwell.
Dahl Ö (1995). 'The marking of the episodic/generic distinction in tense-aspect systems.' In Carlson G & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 412–425.
Declerck R (1986). 'The manifold interpretation of generic sentences.' Lingua 68, 149–188.
Diesing M (1992). Indefinites. Cambridge, MA: MIT Press.
Dowty D (1979). Word meaning and Montague grammar. Dordrecht: Kluwer.
Eckardt R (2000). 'Normal objects, normal worlds, and the meaning of generic sentences.' Journal of Semantics 16, 237–278.
Greenberg J H, Ferguson C A & Moravcsik E A (eds.) (1978). Universals of human language 3: Word structure. Stanford, CA: Stanford University Press.
Heyer G (1987). Generische Kennzeichnungen: Zur Logik und Ontologie generischer Bedeutung. Munich: Philosophia Verlag.
Krifka M, Pelletier F J, Carlson G, ter Meulen A, Chierchia G & Link G (1995). 'Genericity: An introduction.' In Carlson G & Pelletier F J (eds.) The generic book. Chicago: University of Chicago Press. 1–124.
Lyons J (1977). Semantics (vol. 2). Cambridge: Cambridge University Press.
Milsark G (1974). Existential sentences in English. Ph.D. diss., Massachusetts Institute of Technology. Distributed by the Indiana University Linguistics Club.
Payne T E (1997). Describing morphosyntax: A guide for field linguists. Cambridge: Cambridge University Press.
Schubert L K & Pelletier F J (1987). 'Problems in representing the logical form of generics, bare plurals, and mass terms.' In Lepore E (ed.) New directions in semantics. New York: Academic Press. 387–453.

Grammatical Meaning
Ö Dahl, Stockholm University, Stockholm, Sweden
ß 2006 Elsevier Ltd. All rights reserved.

Grammatical meaning is usually seen as opposed to lexical meaning. Grammatical meaning thus ought to include any aspect of linguistic meaning that is due to the grammatical structure of an expression rather than to the choice of lexical items. The variation in the definitions found in the literature suggests,
however, that the notion is not wholly well understood. Consider, for illustration, the following sample formulations: ‘the part of meaning that varies from one inflectional form to another’; ‘the meaning of a word that depends on its role in a sentence’; ‘the meaning of an inflectional morpheme or of some other syntactic device, as word order.’ A suitable point of departure for a discussion of the notion of grammatical meaning is the classic treatment by Edward Sapir (1921, Chapter V). Although
the term ‘grammatical meaning’ itself is not used there, the topic, denoted in the chapter heading as ‘grammatical concepts,’ is the same. According to Sapir’s initial taxonomy, concepts used in language are either ‘concrete’ or ‘relational.’ This coincides, more or less, with the lexical:grammatical distinction. The use of the terms ‘concrete’ and ‘relational’ is thus somewhat different from what is usual. Sapir actually never gives an explicit definition of the terms but comments on his example ‘The farmer killed the duckling’ as follows: ‘‘A rough and ready analysis discloses here the presence of three distinct and fundamental concepts’’ (p. 82) – these are expressed by the three lexical words farmer, killed, and duckling – ‘‘that are brought into connection with each other in a number of ways.’’ Apparently, then, the relational concepts, which are in this particular case, definiteness of reference, singularity, declarative modality, and ‘subjectivity’ and ‘objectivity’ (meaning the roles as subject and object), and which are expressed through grammatical morphemes or word order, are responsible for the connections between the concrete elements. However, it is really only the last two, which correspond to the syntactic functions of the two nouns in the sentence, that are relational in the sense of specifying relations between the lexical elements. It is more difficult to understand why, for instance, singularity, ‘expressed by lack of plural suffix -s . . . and by suffix -s in the following verb’ has to be regarded as relational. Moreover, as Sapir notes, the cardinality of referents would not be systematically indicated in many languages, and can be expressed also by lexical means, e.g., by a numeral. The necessary conclusions are that not all relational (i.e., grammatical) concepts are equally essential in language. Sapir is thus led to postulate a category of ‘concrete relational concepts,’ which can vary from language to language, not only in their expression, but also as to their concreteness. Relational concepts such as subjectivity and objectivity on the other hand are not subject to this variation: ‘‘The fundamental syntactic relations must be unambiguously expressed’’ (p. 94). It does seem that Sapir gets a bit entangled in his own reasoning here. On the one hand, he no is longer able to use the distinction between concrete and relational as an explanation for what is grammatical and what is lexical in languages; rather, he takes the grammatical status of a concept as an indicator of its concreteness or relationality, thus opening himself to allegations of circularity. On the other hand, the unequivocally relational concepts, which have to be expressed in languages, no longer display themselves as clearly semantic: he himself speaks of ‘fundamental syntactic relations.’ Sapir’s predicament does reflect the

complexity of what Crystal (1997: 107) aptly calls "an area of study that falls uneasily between semantics and grammar." Much of the uneasiness is related to the fact that distinguishing the contributions of grammar and lexicon to meaning is no less problematic than separating the roles of genes and environment in an individual. A neat division between grammatical and lexical meaning presupposes that linguistic expressions are like buildings where the building blocks (the lexical items) are independent of the cement (the grammar). But there is always a restricted number of ways in which a lexical item can be combined with others, and the meaning of the lexical item can often be expressed only in terms of what the resulting combination means. For instance, it is rather difficult to explain the meaning of a verb such as 'borrow' without placing it in a construction such as 'NP borrows NP from NP,' or to explain the meaning of 'father' without mentioning the persons involved in the 'x is the father of y' relation. In formal semantic frameworks such as Montague Grammar, the meanings of relational terms such as 'borrow' and 'father' are specified in terms of functions (in the mathematical sense of 'mapping') from the meanings of the argument expressions to the meaning of the whole. By abstracting away from the content of the specific lexical items, we may arrive at more general constructions, such as 'NP Verb NP Prep NP,' which, in formal semantic terms, have to be interpreted as second-order functions (that is, functions that take other functions as their arguments). It is at the level of constructions that the interface between grammar and meaning has to be sought, as has been argued by proponents of Construction Grammar, and the grammatical meaning of linguistic elements in general can only be understood in terms of their roles in larger constructions. Grammatical markers and features such as word order and prosody undoubtedly serve the function of making it easier to grasp the structure of a complex expression, in particular the hierarchical relationships between elements. However, the choice of grammatical elements such as inflectional morphemes may also depend on factors that have little to do with the structure of a sentence and sometimes directly reflect extralinguistic features of the situation or of referents of the discourse. This makes it difficult to uphold a thesis that the meaning of inflectional markers (and of grammatical morphemes in general) is different in kind from that of lexical morphemes. Rather, inflectional markers differ from lexical morphemes in the role their meanings play in the speech act. A past tense marker in a language such as English normally does not have the function of
informing the listener that the event or state referred to took place in the past; rather, it is an obligatory feature (except in some specific styles or genres) of any verb that refers to a state-of-affairs in the past. A related property of inflectional markers is that it is typically difficult or impossible to focus on them or, what can be seen as a special case of focusing, to negate them separately.
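
To make the earlier function talk concrete: in a standard type-driven setting, a relational term like 'borrow' and the construction that hosts it can be typed as follows. This is a simplified sketch; the types and lambda terms are illustrative, not drawn from the article:

\[
\llbracket \text{borrow} \rrbracket : e \to (e \to (e \to t))
\]
\[
\llbracket \text{NP Verb NP Prep NP} \rrbracket \;=\; \lambda R\,\lambda x\,\lambda y\,\lambda z\,.\,R(x)(y)(z), \qquad R : e \to (e \to (e \to t))
\]

Because its argument R is itself a function (the verb meaning), the construction's denotation is second-order in the sense used above.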

See also: Categorial Grammar, Semantics in; Causatives; Compositionality; Diminutives and Augmentatives; Future Tense and Future Time Reference; Generative Semantics; Ingressives; Interrogatives; Phrastic, Neustic, Tropic: Hare's Trichotomy; Plurality; Polarity Items; Scope and Binding; Serial Verb Constructions; Specificity; Speech Acts and Grammar; Syntax-Semantics Interface; Tense; Thematic Structure.

Bibliography

Contini-Morava E & Tobin Y (eds.) (2000). Between grammar and lexicon. Amsterdam: John Benjamins.
Crystal D (1997). The Cambridge encyclopedia of language (2nd edn.). Cambridge: Cambridge University Press.
Sapir E (1921). Language. New York: Harcourt, Brace & World.

H

Honorifics
M Shibatani, Rice University, Houston, TX, USA
ß 2006 Elsevier Ltd. All rights reserved.

The term 'honorifics' refers to special linguistic forms that are used to signify deference toward the nominal referent or the addressee. The system of honorifics constitutes an integral component of the politeness dimension of language use, but whereas every language appears to have ways of expressing politeness, only certain languages have well-developed honorifics. Generally speaking, languages with highly developed honorifics systems are concentrated in Asia – Japanese, Korean, Tibetan, Javanese, and Thai are among the most familiar languages of this group. A thorough study of honorifics requires a two-pronged approach. The description of honorifics as grammatical forms is one approach and is relatively easy to accomplish, whereas the description of their actual use requires wider pragmatic as well as sociolinguistic perspectives, taking into consideration the elements of conversational situations, such as relationships between the speakers and the addressees, and the functional roles that honorifics play in communicative interaction. Though the pragmatic aspects of honorifics usage are gaining increasing attention in the field, descriptions are by and large purely grammatical, with little information on usage. This article endeavors to correct this imbalance by examining both the grammatical aspects of honorifics and the pragmatic and sociolinguistic issues; aspects of Japanese honorifics usage are particularly examined in some detail.

Referent Honorifics

'Referent honorifics', the most widely used grammatical form of honorifics, are used to show deference toward the nominal referents. Studies of the historical development of the honorifics system in the Japanese language indicate that referent honorifics are the most basic form of honorifics (see also Politeness).

Titles

The commonest forms of referent honorifics are honorary titles used in conjunction with a name. Many languages have honorary titles; the English form Mr and the German form Herr derive from nouns designating higher social roles or divine beings. Also common are titles deriving from the names of kin terms or of occupations that are considered high in social standing (e.g., uncle, aunt, doctor, professor) or from ranks in specific social groups, such as military units (general) and business groups. Honorific endings are similar to honorary titles; this grammatical form is used in Korean, Japanese, and other languages. The Korean suffix -s'i attaches either to the full name of a respected person or just to the family name of a person engaged in menial labor, whereas the higher level ending -nim attaches to the combination of a family name and a professional title – e.g., Kim kyoswu-nim (Kim professor-SUFFIX) 'Professor Kim'. The Japanese ending -san and its higher level counterpart -sama attach to family names as well as to given names, yielding honorified forms – e.g., Yamada-sama 'Mr Yamada', Masao-san 'Masao (honorified first name)'. These are, in fact, part of the general amelioratory system of reference, which includes other endings such as -kun (used for the names of male equals or inferiors) and the diminutive -tyan (used typically for children's first names).

Pronouns

Pronominal forms, especially those referring to the addressee, namely, the second-person pronouns, are often the targets of honorific elaboration. The most well-known example of this is the use of plural pronouns, such as the forms for you (PL), they, and we, in reference to a singular addressee (or a third-person referent) as a sign of respect – French vous, German Sie, Russian vy, Tagalog kayo ‘you.PL’, sila ‘they’, Turkish siz ‘you.PL’, Ainu aoko ‘we.INCL’. Javanese presents one of the most complex pronominal systems elaborated along the honorific dimension.


One notable aspect of honorifics having to do with the second-person pronominal forms is that many languages of Asia, e.g., Japanese, Korean, and Dzongkha (Bhutan), out of deference to a superior, simply avoid using these forms (see later, sections on Avoidance Languages and Form of Honorifics). In these languages, the second-person reference, if the need arises, is made by the use of a professional title (e.g., Japanese sensei 'teacher', Korean sacangnim 'company president'), kin terms (e.g., Korean emeni 'mother'), or a combination of a kin term and an honorific ending (e.g., Japanese ozi-san 'uncle').

Nouns

Honorified nouns express deference either directly toward the referent or indirectly toward the owner, the creator, or the recipient of the referred object. As opposed to the first two categories of referent honorifics (titles and pronouns), there are far fewer languages with a system of honorified nouns. In languages in which honorified nouns are used, there are both suppletive and regular morphological processes as well as combinations of both. Korean nominal honorific forms cinji, yonse, and songham supplete pap ‘meal’, nai ‘age’, and irum ‘name’, respectively. Often suppletive honorific forms are loanwords borrowed from cultures of higher status. For example, the honorific forms bida ‘father’ and manda ‘mother’ for the native Thai words po and me, respectively, are from Pali. Likewise, many honorific forms in Javanese are loanwords from Sanskrit and Arabic – e.g., arta ‘money’ (Sanskrit) and kuwatir ‘to fear’ (Arabic). Kin terms in Japanese are made honorific by the suffixation of honorific endings -sama or -san. There are degrees of respect: in comparing o-kaa-sama ‘mother’ and kaa-san ‘mother’, for example, the first form, with the honorific prefix o- and the -sama ending, is the more elevated form. Nominal honorific forms typically refer to objects possessed or created by respected persons. The favorite nominal honorific derivation in Japanese is constructed by means of the prefix o- or go- (e.g., o-kaban ‘bag’, go-hon ‘book’, o-hanasi ‘talk’, and go-koogi ‘lecture’). A combination of suppletion and prefixation is observed in the Japanese form o-mesimono for kimono ‘Japanese-style clothes’ or huku ‘Western-style clothes’. The all-purpose Japanese nominal honorific prefixes o- (for native words) and go- (for Sino-Japanese loanwords) have their etymologies in words with meanings associated with greatness (for the native o-) and ruling (for the Chinese go-). Tibetan nominal honorific prefixes, on the other hand, have a classificatory scheme as their basis (Kitamura, 1974). Thus, the honorific form bdu for mgo ‘head’ functions as an
honorific prefix for objects having to do with the head or upper/superior body part (e.g., skra 'hair' → bdu-skra, zhwa-mo 'hat' → bdu-zhwa). Likewise, the honorific forms gsol for bzas 'eat' and btung 'drink' go with objects having to do with foods and eating (e.g., ito 'food' → gsol-chas, thab-tshang 'kitchen' → gsol-thab).

Subject Honorifics

Just as the possession of something can be the target of honorification, a being or action can be honorified as a way of showing deference toward the referent of the subject nominal, namely, the actor. Of course, a subject nominal can be honorified by, for example, the choice of the second- or third-person pronominals (as previously noted), but the case under consideration involves alterations in verbal form. A case can be clarified by comparing a plain Japanese sentence, (1) Tanaka ga ki-ta (Tanaka NOM come-PAST) 'Tanaka came', and the honorific counterparts, (2) Tanaka-kyoozyu ga ki-ta (Tanaka-professor NOM come-PAST) and (3) Tanaka-kyoozyu ga ko-rare-ta (Tanaka-professor NOM come-HON-PAST) 'Professor Tanaka came'. Sentence (2), with a professional title for the subject nominal, expresses a certain degree of deference toward the teacher. But a more appropriate form would be sentence (3), in which the verb also changes its form in accordance with the speaker's deference toward the referent of the subject. Japanese, Korean, and Tibetan have a highly developed system of subject honorification. As shown in the preceding example, Japanese uses the passive/potential/spontaneous suffix -(ra)re attached to a verbal stem and derives the subject honorific verbal form: ik-u (go-PRES) 'go' → ika-re-ru (go-HON-PRES). Korean has the subject honorific suffix -si that attaches to a verbal stem: o-ta (come-IND) 'come' → o-si-ta (come-HON-IND). Tibetan uses the honorific form gnang for the verbs of giving and receiving as a productive subject honorific verbal suffix: thugs 'meet' → thugs-gnang (meet-HON), bzos 'make' → bzos-gnang (make-HON). In addition to these productive honorific forms, a fair number of suppletive subject honorific forms are also seen in all of these languages. The Japanese form irassyaru suppletes iru 'be', iku 'go', and kuru 'come'. Khesi-ta is the Korean subject honorific form of iss-ta 'be'. The Tibetan form gnang suppletes ster 'give', btang 'send', and sprad 'hand over'. In addition to these examples, Japanese has a circumlocution subject honorific form. This involves the following processes: (a) the conversion of the verbal complex into a nominalized form, (b) the attachment of the honorific prefix o-/go- to the nominalized form of the verbal complex, and (c) the
predication of the subject by the verb naru 'become' together with the adverbial complement form of the nominalized verbal complex. This converts the sentence Tanaka-kyoozyu ga aruk-u (Tanaka-professor NOM walk-PRES) 'Professor Tanaka walks' to Tanaka-kyoozyu ga o-aruk-i ni naru (Tanaka-professor NOM HON-walk-NOMINALIZER DAT become), whereby the o-aruk-i ni (HON-walk DAT) portion is the adverbial complement form of the nominalized verbal complex. Notice that this type of circumlocution is fully grammaticized in the sense that it is associated only with the honorific value, with no literal reading available. In fact, this grammaticization aspect is a basic defining characteristic of honorifics, distinguishing them from ordinary polite expressions such as English Will you open the door for me? and Do you mind opening the door for me?, which are still associated with literal meanings. The Japanese honorific prefixes o-/go- also attach to adjectival predicates, yielding the third type of subject honorific form in the language – e.g., Hanako wa utukusii (Hanako TOP beautiful) 'Hanako is beautiful' → Hanako-san wa o-utukusii (Hanako-HON TOP HON-beautiful).

Humbling Forms

Deference toward a superior may be shown by humbling oneself or one's speech. A fair number of languages have humbling first-person pronominals. The ordinary Thai first-person pronoun chan is replaced by phom or by the even more humbling form kha '(lit.) servant' or kha cau '(lit.) master's servant'. Korean na 'I' can be replaced by the humbling form co. In letter writing, a Japanese male may humble himself by referring to himself by the Chinese derivative syoo-sei '(lit.) small person'. In fact, the Japanese epistolary style contains a whole series of honorific/humbling noun pairs adopted from Chinese – e.g., rei-situ '(your) honorable wife': gu-sai '(my) stupid wife', gyoku-koo '(your) splendid piece of writing': sek-koo '(my) humble piece of writing'. Less common are humbling verbal forms, which are sometimes called 'object honorifics', because they express deference toward the referents of nonsubject nominals by humbling the actor's action directed toward them. The Japanese humbling forms include both suppletive verbal forms (e.g., sasiageru and o-me ni kakaru for yaru 'give' and au 'meet', respectively), and a circumlocution, the latter being quite pervasive. The humbling circumlocution involves (a) the prefixation of the nominalized verbal form by the prefix o- and (b) the verb suru 'do'. This converts the plain form Watasi wa Tanaka-kyoozyu o tazune-ta (I TOP Tanaka-professor ACC visit-PAST) 'I visited
Professor Tanaka’ to Watasi wa Tanaka-kyoozyu o o-tazune si-ta. Tibetan also has a number of suppletive humbling forms of verbs – e.g., thugs ‘meet’ ! mjal, ster ‘give’ ! phul: Nga pa-pha dang thugs byung (I father.PLAIN with meet PAST) ‘I met (my) father (plain)’ ! Nga pa-lags dang mjal byung (I father.HON with meet.HUM PAST) ‘I met (my) father (humbling)’. Subject honorification and the humbling processes are controlled by the referents of different nominals – the former by the subject nominal and the latter by the nonsubject nominal – and thus they can be, in theory, combined. Indeed, such a combination appears possible in Tibetan (Kitamura, 1974). As in the Tibetan example, the word mjal ‘meet’ is the humbling form, expressing the speaker’s deference toward the person to be met. Recalling that gnang is the productive subject honorific form, the combination of mjal and gnang, mjal gnang, thus expresses the speaker’s deference to both the person who is meeting someone and the one who is met – Pa-lags sku-mdun dang mjal gnang song (father.HON Dalai Lama with meet.HUM HON PAST) ‘(My) father met Dalai Lama (humbling-subject honorific)’. In the case of modern Japanese, however, such a combination of a subject honorific form and a humbling form is generally avoided, and, when the occasion arises, simply a subject honorific form alone is used: Tanaka-kyoozyo ga gakubu-tyoo ni o-ai ni natta (Tanaka-professor NOM dean DAT HON-meet DAT became) ‘Professor Tanaka met with the dean (subject honorific)’.

Addressee Honorifics

Addressee honorifics are those forms that show the speaker's deference toward the addressee. In the case of honorific second-person pronouns, the reference honorific function and the addressee honorific function converge, but certain languages have special addressee-oriented honorific forms. Perhaps the most familiar is the use of sir, ma'am, etc. in English, as in, Yes, sir or Thank you, ma'am. Many languages mark addressee honorifics by the use of particles: Tagalog po, Thai kha (for female speakers), khrap (for male speakers), Tamil nka, lii. More elaborate systems are found in Korean and Japanese, both of which have special verbal endings. Korean and Japanese attach the suffixes -sumni and -mas, respectively. Notice that subject honorifics and addressee honorifics are again two independently parameterized systems, and therefore that one can, in principle, occur independently of the other. Thus, the Japanese subject honorific form Tanaka-kyoozyu ga ika-re-ru (Tanaka-professor NOM go-S.HON-PRES) 'Professor Tanaka goes' can be used by itself between two students. The subject honorific ending -(ra)re
(S.HON) can then be combined with the addressee honorific ending -mas (A.HON), when the same sentence is uttered toward a respected person: Tanaka-kyoozyu ga ika-re-mas-u (Tanaka-professor NOM go-S.HON-A.HON-PRES). When the subject referent is not appropriate for showing the speaker's respect, only the addressee honorific form can occur, as in Watasi ga iki-mas-u (I NOM go-A.HON-PRES) 'I go'. In Japanese, even further respect toward the addressee can be shown by the use of humbling verbal forms. The sentence Watasi ga iki-mas-u can be made even politer by suppleting the verb iku 'go' with the humbling equivalent mairu, as in Watasi ga mairi-mas-u.
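
Since subject honorification, humbling, and addressee honorification are independent parameters, their composition can be summed up procedurally. The following Python sketch composes only the forms of iku 'go' cited above; the function name, the feature flags, and the simplified romanized segmentation are illustrative assumptions, not part of the article:

def go_form(subject_hon=False, humble=False, addressee_hon=False):
    """Compose the cited forms of iku 'go' from three independent features."""
    if subject_hon and humble:
        # deference to the subject's referent and self-humbling exclude
        # each other for the same verb in modern Japanese
        raise ValueError("subject honorific and humbling cannot combine")
    if humble:
        stems = ("mair", "mairi")      # suppletive humble verb mairu
    elif subject_hon:
        stems = ("ika-re", "ika-re")   # irrealis stem + honorific -(ra)re
    else:
        stems = ("ik", "iki")          # plain stems of iku
    plain_stem, continuative_stem = stems
    if addressee_hon:
        return continuative_stem + "-mas-u"   # addressee honorific -mas
    # present tense: -ru after the vowel-final derived stem, -u otherwise
    return plain_stem + ("-ru" if plain_stem.endswith("re") else "-u")

# go_form()                                     -> 'ik-u'
# go_form(subject_hon=True)                     -> 'ika-re-ru'
# go_form(addressee_hon=True)                   -> 'iki-mas-u'
# go_form(subject_hon=True, addressee_hon=True) -> 'ika-re-mas-u'
# go_form(humble=True, addressee_hon=True)      -> 'mairi-mas-u'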

Avoidance Languages

The use of honorific forms may be conditioned by persons who are in the vicinity of a speaker, especially when a respected person being talked about is within earshot. Thus, a Japanese student who normally utters a plain form such as Sensei ga ki-ta (teacher NOM come-PAST) 'The teacher came' might use the subject honorific form Sensei ga ko-rare-ta when he notices the presence of the teacher in question. Much more regulated forms of bystander honorifics are seen among the Australian aboriginal languages. A variation of the language, known as the 'mother-in-law' or 'brother-in-law' language, is used specifically in the presence of certain 'taboo' kin. In the case of the mother-in-law language of Dyirbal, a speaker must switch from the 'everyday' language, Guwal, to the mother-in-law language, Dyalnguy, when a taboo relative, e.g., a parent-in-law of the opposite sex, appears within earshot. Dyalnguy is in fact part of the avoidance behavior between certain taboo relatives that was once strictly observed in Australian aboriginal societies. A son-in-law avoids speaking directly to his mother-in-law, and the mother-in-law must use Dyalnguy in speaking to her son-in-law (Dixon, 1972). In Guugu Yimidhirr society, neither a male nor his mother-in-law can speak to each other directly, and, accordingly, the avoidance language is more like a 'brother-in-law' language, the form used typically when a male speaks to his brother-in-law or father-in-law (Haviland, 1979). Avoidance language has an honorific function as well. In Guugu Yimidhirr, the brother-in-law language is to be used with kin who must be treated with respect. It is spoken slowly in a subdued voice, accompanied by a behavioral pattern of avoidance, not facing the addressee directly, e.g., by sitting sideways with some distance maintained. The Guugu Yimidhirr brother-in-law language also has linguistic features of honorific languages. For example, the everyday second-person plural pronoun yurra is
employed as a second-person singular pronoun, as in the case of Russian vy and French vous. Honorifics often neutralize semantic distinctions made in ordinary words, and so does the Guugu Yimidhirr brother-in-law language. The polite second-person pronoun yurra 'you' neutralizes the distinctions among the singular, the dual, and the plural forms. With various suppletive forms, certain brother-in-law expressions show a marked difference from the everyday language: e.g., Nyundu buurraay waami? (you water found) 'Did you find water?' (everyday) → Yurra wabirr yudurrin (you water found) 'Did you find water?' (brother-in-law). Speech behavior involving avoidance languages is akin to that of honorifics languages such as Javanese and Japanese in three other respects. Voice modulation is also seen in Javanese honorific speech, wherein "the higher the [honorific] level one is using, the more slowly and softly one speaks" (Geertz, 1968: 289). In the feudal society of Japan, inferiors were not permitted to speak directly to their superiors of the highest rank in close proximity to them. Emperors received courtiers behind a bamboo screen, and warlords were addressed by vassals of lower rank only from some distance away. Avoidance of superiors is also manifested in the use of second-person pronouns and names. In languages such as Japanese, Korean, and Dzongkha, the use of second-person pronouns is avoided in addressing superiors. Japanese, for example, has a polite second-person pronominal form anata or the even more honorified form anata-sama, but these can never be used in addressing a superior; the form anata-sama is marginally usable in a highly impersonal situation. Likewise, in Japanese and many other societies (e.g., traditional China, Igbo), inferiors cannot use the given names of certain superiors in addressing them. One of the parameters that determines the use of honorifics is the social and psychological distance between the interlocutors (see later, Use of Honorifics). The avoidance languages of Australia and similar speech behaviors in honorifics languages clearly point out this correlation, which is often reinforced by physical distance.

Beautification

Japanese has extended the honorific prefixes o-/go- to other uses wherein no respect for the referent, e.g., the possessor of the designated object, or for the addressee is intended. Thus, the prefixes may be attached to the nouns designating what belongs to the speaker or to no particular person; e.g., watakusi no o-heya (I GEN HON-room) 'my room', o-biiru 'beer', o-nabe 'cooking-pot'. This particular use of honorific
prefixes is simply motivated by the speaker’s demeanor and is called bika-go ‘beautification language’ in Japanese. The form is typically used by women, and accordingly those nouns that take beautification prefixes typically designate domestic matters such as household goods and foods. Though beautification prefixes are not strictly honorifics, because they beautify even those nouns designating objects belonging to the speaker, their honorific origins are not entirely obliterated. The prefixes cannot be used for those nouns designating highly personal objects belonging to oneself, such as body parts. Even those who are prone to use beautification prefixes excessively would not use words such as o-yubi ‘finger’ or o-asi ‘leg’ in reference to their own fingers or legs, but they are perfectly appropriate as honorific forms used in reference to a respected possessor.

Form of Honorifics

Though the specific forms of honorifics vary considerably – some involving suppletion, others involving affixes or particles – it is possible to detect a certain general tendency in the formation of honorifics. The most basic characteristic is that honorific expressions avoid direct attribution of an action to the respected person, or sometimes to the speaker (out of humbleness in the latter case). Avoidance of speaking or avoidance of using second-person pronouns is an extreme case of this, but many languages use a special device for blurring the identity of an actor as a means of avoiding directness of expression. The most widely adopted method of blurring or defocusing an actor is in terms of oblique referencing, which takes a number of forms. Less common than pluralization but an occasionally observed way of avoiding direct reference is the use of locational nouns and deictic expressions. The Japanese second-person pronominal anata 'you' has its etymology in the archaic expression anata 'yonder'. However, since this form has been fully grammaticized as a second-person pronominal, and since Japanese tends to avoid using second-person pronominals, the form o-taku 'HON-house' is often used as an honorific replacement for anata. Shifting person is another method of oblique referencing, as observed in Italian: Lei va? (she go) 'You (honorific) go?'. The most popular method of oblique referencing is in terms of shifting number from the singular to the plural. Polite plural pronominal forms are a case in point. Regarding the evolution of the honorific second-person plural forms in European languages, Brown and Gilman (1960) referred to a theory that attributes the polite use of the Latin plural form vos to the existence of the two
emperors in the 4th century – one in Constantinople and the other in Rome. However, the widespread use of plural forms as second-person honorific pronouns in diverse languages such as Ainu, Tagalog, Turkish, Guarijío (North America), and Guugu Yimidhirr indicates that the plurality in form has no direct connection to the concept of two emperors or to royalty. Pluralization is simply a favorite way of defocusing the identity of an actor across languages. Of course, only those languages that make a singular/plural distinction can exploit this method of agent defocusing. Indeed, if a language has plural verb forms, they may also be utilized as a means of subject honorification. For example, Ainu has plural verb forms that express the plurality of the affected object or the actor, and these plural forms may be used to express deference toward the referent of the subject nominal. The plural verb form kor-pa (have-PL) may mean either 'He/they have a lot of things' or 'He has something (honorific)'. Turkish also exploits the plural verb forms for honorific purposes: e.g., Eşiniz daha gelmedi-ler mi? (wife.your(HON) still arrive-PL Q) 'Has your (honorific) wife not yet arrived (honorific)?' and Ali Bey orada-lar mi? (Ali-HON there-PL Q) 'Is Ali (honorific) there (honorific)?'. In European languages, the role of plural agreement in the verb is not entirely clear. In the German and Russian languages, the choice of second-person plural subject forms for an honorific purpose automatically triggers verbal agreement – Russian Vy (PL) videli (PL) 'You saw'. The question is whether the plural verb form, independently of the plural pronouns, can be used as a mark of subject honorification. There are indications that in some European languages, plurality in the verb marks subject honorification, just as in the Turkish case; in Polish dialects (Comrie, 1975) an example is Dziadek (SG) widza (PL) 'Grandfather sees (honorific)', and in a variety of German (Wolfgang Dressler, personal communication) there is Sind (PL) der Herr (SG) schon bedient? '(lit.) Are the Sir already served?'. Some languages create a sense of oblique referencing by shifting case marking from the ordinary forms to special oblique markers. Korean particles k'e and k'eso are special honorific dative and ablative (archaic) particles, respectively. Japanese epistolary style replaces the normal nominative case particle ga for the subject nominal by the dative particle ni as a way of showing respect to its referent. The passive construction effects agent defocusing, and an honorific expression is often derived in the development of the passive from the middle/spontaneous, as in the case of the Japanese subject honorific suffix -(ra)re, or in the development of the passive from the indefinite person construction, as in the
386 Honorifics

case of Indonesian, wherein the passive prefix di-, which goes back to the third-person plural marker, also derives an honorific verbal form: e.g., Silakan di-minum (please HON-drink) ‘Please drink’. Circumlocution-type honorifics observed in Japanese subject honorific and humbling forms are another way of avoiding direct attribution of the action to the respected agent or to the speaker. Shifting tense from the present to the past in polite speech, as observed in English (e.g., Can you . . .? ! Could you . . .?, Will you . . .? ! Would you . . .?) is analogous in method to oblique referencing and the circumlocution honorifics discussed here. Honorific forms are marked as secondary forms, because they tend to be signaled by additional affixes or particles. The secondary nature of honorifics is seen even in suppletive forms from the fact they often do not make as fine semantic distinctions as do ordinary words. For example, the suppletive form irassyaru in Japanese obliterates the distinctions between iru ‘be’, iku ‘go’, and kuru come’. The Tibetan subject honorific form gnang and the humbling form phul neutralize the distinctions made between the ordinary words sprad ‘hand over’, ster ‘give’, and btang ‘send’. Finally, to a considerable extent, honorific expressions are iconic to the relevant social and psychological distances: the longer the form, the politer the expression. In the case of avoidance languages and some honorifics languages, physical distance accompanies honorific speech.

Distribution and Development of Honorifics

Among the various honorific forms in languages, certain types are more widely observed than others, and historically, certain forms change their category from one type to another. Synchronic distribution and historical development seem to be correlated to a great extent. More languages include referent honorifics, compared to addressee honorifics, and among referent honorifics, subject honorifics are more widely distributed than are humbling, nonsubject honorifics. The fact that use of referent honorifics is more widespread than use of addressee honorifics may seem strange in view of the fact that in conversation the addressee is more directly involved than the nominal referent is, and is accordingly expected to be a likelier target of the speaker's deference. However, the referent honorific system serves the function of the addressee honorific system when the subject referent and the addressee converge, as in the expression of
Are you leaving now? Thus, a referent honorific system is a useful system that allows the speaker to express deference not only to a third-person referent but also to the addressee. Whereas Japanese, Korean, Thai, and Javanese have both referent and addressee verbal honorifics, Tibetan and Dravidian languages appear to have only well-developed referent verbal honorifics. Unlike Japanese, Korean does not have a systematic humbling, or nonsubject, honorific system, and even among the Japanese dialects, e.g., Miyazaki, there are many that lack a systematic humbling verbal mechanism. In the diachronic dimension, the history of Japanese reveals that referent honorifics were more prevalent than addressee honorifics in 8th-century Japanese. Subject honorifics often give rise to humbling verbal forms, which in turn supply new addressee honorific forms.

Use of Honorifics

A more challenging aspect of the study of honorifics is the description of the actual use of honorifics in speech situations – e.g., who uses honorifics, whom they are used in reference to, and what their functions are in communicative interaction. Since addressing these issues requires knowledge of actual speech situations in some detail, the focus here is on a single language, namely, Japanese, though preliminary remarks on some general concepts are in order. In the case of the avoidance languages of Australia, use of honorifics seems to be controlled rigidly according to the tribal membership and genealogical relationships that determine for each speaker which kinsmen (e.g., mother-in-law, brother-in-law) are taboo, and which are to be avoided and treated deferentially. Notice again that the notions of avoidance and of deference converge in the use of a special language. Indeed, the use of honorific speech in general is controlled by the social and psychological distance among the interactants.

Power and Solidarity

Brown and Gilman (1960), in a seminal work, identified two factors – 'power' and 'solidarity' – that determine social and psychological distance relevant to the use of honorific speech. Power, as determined in each culture according to social class, age, sex, profession, etc., establishes the superior–inferior relationship that characterizes a vertical social distance – the greater the power difference is, the greater the social distance is. Solidarity, on the other hand, determines a horizontal psychological distance. Those having the same or similar attributes (e.g., power equals;
members of the same family, profession, or political persuasion) are solidaristic or intimate and short in psychological distance, whereas those that do not share attributes (e.g., power unequals; strangers) are distant. Brown and Gilman showed that these two differently defined parameters of distance are correlated with the use of pronouns in European languages (e.g., French tu vs. vous, German du vs. Sie). Owing to the work of Brown and Gilman (1960), the former, plain forms of pronouns are now customarily referred to in the literature as 'T-forms' and the latter honorific forms are referred to as 'V-forms.' Power-based honorifics and solidarity-based honorifics reveal a difference in the reciprocity of the forms used among interactants. As in the case of German du used among intimates and Sie used among strangers, the solidarity-based honorifics are formally symmetric or reciprocal, whereas a strictly power-based system is nonsymmetric or nonreciprocal in that the inferior is obliged to use the honorific forms (the V-forms) toward the superior, who returns the plain forms (the T-forms). Brown and Gilman showed that in the case of T/V variation in Europe, it was largely controlled by the power relationship until the middle of the 19th century, whereas in the 20th century solidarity has become a dominant factor.

Power-Based Honorific Pattern

Japanese honorific speech shows both power-based and solidarity-based aspects. When superiority is fairly rigidly determined, as in a business organization, and when the rank and age difference are major determinants of superiority, the power-based, nonreciprocal pattern is observed; an inferior consistently uses honorifics toward a superior. A superior might thus invite his subordinate for a drink by using a plain form, (1) Konban nomi ni ikoo ka (tonight drink to go Q) 'Shall we go drink tonight?'. The subordinate must reply in the addressee honorific form, (2) Ee, iki-masyoo (yes go-A.HON) 'Yes, let's go', and must never reply in the plain expression, (3) Un, ikoo 'Yeah, let's go', which is appropriate only to his inferior or equal. When the subordinate asks his superior out, the reverse pattern obtains; the subordinate cannot use form (1) and must use its addressee honorific version, (4) Konban nomi ni iki-masyoo ka 'Shall we go drink tonight?', and his superior is most likely to reply with the plain form (3). As far as the inferior is concerned, this pattern of exchange must be maintained even if he and his superior are quite intimate and can converse quite informally. The mutual use of plain speech between non-equals is permitted only in an unusual circumstance, such as during the late
hours of a drinking party, when all the formalities might be done away with.

Solidarity-Based Honorific Pattern

Whether a superior uses honorifics toward an inferior depends on a number of factors. Among these, a major factor is psychological distance or degree of intimacy. Though the use of plain or rough speech motivated by power on the part of a superior is occasionally encountered, it is becoming increasingly rare in contemporary Japanese society to see the power-based use of plain form – a major exception being a scene of conflict or dispute between power unequals, e.g., between an angry customer and a sales clerk. This trend is due to several factors. When a superior uses plain forms toward an intimate subordinate, it is a mark of intimacy, whereas use of honorific speech creates a distance and is a sign of formality. In other words, a superior's use of honorifics is solidarity based; honorifics are used (as in form (4) in the preceding section) when a superior and a subordinate are not very intimate, whereas plain speech (e.g., form (1)) is directed toward the subordinate as a sign of solidarity, whereby the sense of camaraderie is engendered. Notice, however, that between power unequals, only a superior has the option of using plain forms – inferiors must always use honorifics – unlike the mutual T-form exchanges in Europe. (Even in Europe, it is power superiors who can initiate the reciprocal T.) Between power equals, the plain/honorific variation is by and large controlled by the solidarity factor; plain forms are used among intimates to confirm camaraderie, whereas honorifics are used between less familiar persons and strangers as a means of keeping psychological distance, by way of which the addressee's personal integrity is honored. In the majority of contemporary Japanese households, the solidarity factor has primacy over the power factor, and thus parents and children, and elder siblings and younger ones, also exchange plain forms, much like the use of du within the contemporary German family. This linguistic manifestation of the Western egalitarian ideology was introduced to Japan in the middle of the 19th century and it spread throughout the country after World War II. China has witnessed perhaps the most dramatic effect of the egalitarian ideology on honorifics. The socialist revolution in 1949 and the cultural revolution of the 1960s wiped out the traditional social classes, and with the demise of the aristocratic and the elite classes, once-flourishing honorifics too were all but obliterated. The other manifestation of the egalitarian ideology is the use of honorifics on the part of power superiors, as described previously. Even
Emperor Akihito of Japan uses honorifics when he addresses an ordinary citizen. Thus, the egalitarian ideology has facilitated the growth of reciprocal solidarity-based use of honorifics in Japanese as well. However, the reciprocal speech pattern can in principle go in either direction: toward the symmetrical honorific pattern or toward the symmetrical nonhonorific, plain pattern. As noted previously, the Chinese language has taken the path to the latter. Besides the solidarity factor, there seems to be another prevailing factor at work in languages that display the reciprocal honorific speech pattern.

Demeanor

Nonreciprocal plain/honorific exchanges can be observed in some Japanese families, especially between husband and wife and/or between parents and adult children (daughters in particular). The motivation for such an exchange seems hardly to be power based: instead, what underlies speech patterns observed in those families is the idea of proper language usage, which prescribes that superiors be treated deferentially through honorific speech. After all, honorifics are consciously taught and learned by the Japanese with this kind of prescriptive idea. This conscious teaching and learning of honorifics and their historical connection to the nobility has produced a situation in which appropriate honorific usage is regarded as a mark of good breeding. The use and nonuse of honorifics as indications of class membership have the effect of making the use of honorifics part of the speaker's self-presentation effort, an aspect of what sociologist Erving Goffman (1956) calls 'demeanor'. That is, though correct honorifics usage has the effect of paying respect to the addressee, it is at the same time a way of presenting the speaker as a cultivated person of good demeanor. The demeanor aspect of honorifics usage also has the effect of producing the reciprocal honorific speech pattern; speakers who have the prerogative of using either plain or honorific speech – as in the case of the Emperor, or power superiors in the business world, or professors in academic life – may constantly use honorifics as a way of self-presentation. An extreme manifestation of this in Japanese is the excessive use of the beautification prefixes o-/go- by women, whereby the level of politeness is at times consciously elevated to an absurd level.

Formality

Though the goal here is to isolate and delineate those factors that determine the use and nonuse of honorifics, these grammatical forms of language do not in reality occur in isolation. In an actual speech situation, all of the factors that control speech form are typically compounded. Take the notions of solidarity

and of demeanor previously discussed: when a woman shops in an elegant boutique in Ginza in downtown Tokyo, she is likely to use honorifics, as in Motto ookii no o-ari ka sira? (more large one HON-have Q wonder) ‘(I) wonder if (you) have a larger one? (honorific)’, as opposed to a plain form such as Motto ookii no aru? (more large one have Q) ‘(Do you) have a larger one? (plain)’, which might be used when she is shopping in a neighborhood grocery shop. Here, it is likely that both solidarity and demeanor factors are at work. In addition, the formality factor involved in the former situation cannot be ignored. Formality overrides all of the considerations previously discussed and requires the use of honorifics on the part of all of the concerned parties. Thus, power-equal colleagues, who normally exchange plain forms, would exchange honorifics in a formal meeting or on ceremonial occasions. Other factors contributing to the formality of the speech setting include the nature of the topic of conversation and the turns of conversation that occasion formality, such as the opening and closing of a new topic. One clear instance whereby the formality factor alone dictates the use of honorifics is letter writing. Letter writing is a formal activity, being associated with a long history of a variety of epistolary styles, and it triggers the use of honorifics even if the letter is addressed to an intimate person. For example, a son who usually uses plain forms to his mother would write to her in the honorific style: on the telephone, he would say to his mother Raisyuu kaeru yo (next week return PART) ‘(I’ll) come home next week, all right? (plain)’, but he would write Raisyuu kaeri-masu in the addressee honorific style.

Relativity of Social Distance

One final characteristic of Japanese honorific usage to be discussed here has to do with the relativity of the social distance of a nominal referent. The use of both subject honorifics and humbling forms is essentially determined by the social and psychological distance between the speaker and the nominal referent. A subject-honorific form would be used when the subject referent is a superior, and a humbling form would be used when an action is directed toward a superior. When this kind of pattern is absolutely maintained, regardless of the nature of the addressee, as in the case of the Korean honorific system, a so-called absolute honorifics system exists. The Japanese honorific system differs from this in that the group identity of the addressee enters into the picture of proper honorific usage. Japanese society makes a rather clear division between those who belong to a social group and those who are outside it. With respect to outsiders, insiders are treated as an extension of


oneself. One consequence of this in regard to honorifics usage is that, with respect to outsiders, status differences among the members of a given social group are obliterated and members are identified with the status of the speaker. Accordingly, when speaking to an outsider, honorifics would not be used with respect to a member of the social group, e.g., family members or colleagues in a business firm; humbling forms would be used instead, even with respect to the superior’s action. For example, a secretary would normally use honorifics with respect to her boss when she speaks to her colleagues, as in (1) Syatyoo wa soo ossyatte imasu (company-president TOP so say.s.HON be.A.HON) ‘The company president says so (subject honorific, addressee honorific)’. However, when she is speaking to someone from outside her company, she would treat her boss as if the boss were herself and would use humbling forms, as in (2) Syatyoo wa soo moosite orimasu (company-president TOP so say.HUM be.HUM.A.HON) ‘The company president says so (humbling, addressee honorific)’. Thus, in Japanese, with its relative honorifics system, the interpretation of honorific value is not as straightforward as in absolute honorifics languages such as Korean and Tibetan, in which the type (1) honorific form is consistently used regardless of the addressee. That is, in Japanese, the interpretation of the honorific index must take into account not only the usual relationships among the speaker, the nominal referent, and the addressee, but also whether the addressee is an insider or outsider with respect to the speaker’s social group.
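The in-group rule just illustrated is regular enough to be stated as a small decision procedure. The sketch below is a deliberately simplified model, not a grammar of Japanese: the function name, the boolean inputs, and the three-way output are illustrative assumptions, and real usage is further modulated by the solidarity, demeanor, and formality factors discussed above.

```python
# A minimal sketch of the relative-honorifics rule described above.
# Simplifying assumption: the choice reduces to three referent forms;
# addressee honorifics, lexical choice, etc. are ignored.

def referent_form(referent_is_superior: bool,
                  referent_is_insider: bool,
                  addressee_is_insider: bool) -> str:
    """Choose the verb form used when talking about the referent."""
    if referent_is_insider and not addressee_is_insider:
        # In-group referents are presented as an extension of the speaker,
        # so group-internal status differences are suppressed.
        return "humbling"              # cf. (2) moosite orimasu
    if referent_is_superior:
        return "subject-honorific"     # cf. (1) ossyatte imasu
    return "plain"

# The secretary talking about her boss (a superior, in her group):
print(referent_form(True, True, addressee_is_insider=True))    # subject-honorific
print(referent_form(True, True, addressee_is_insider=False))   # humbling
```

In an absolute system of the Korean or Tibetan type, the first branch would simply be absent: the form would depend on the speaker–referent relation alone, regardless of the addressee.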

Conclusion

Human interaction is facilitated by minimizing conflict between interactants. One of the ways in which conflict arises is by mistreating people such that they feel that their personal integrity is threatened, that the expected camaraderie is not confirmed, or that the deference that their social standing has earned them has not been paid. Potential conflict is therefore removed when the relationships of the involved parties are clarified, and when those relationships are maintained and reinforced with attendant protocols throughout an interaction. Human behavior, verbal or otherwise, is polite to the extent that it satisfies these requirements for facile interaction. Honorifics permit the speaker to express his relationship to the addressee and to the nominal referent in a highly codified manner. They indicate a speaker’s recognition of the power and personal integrity of the person spoken to or talked about, and conversely they indicate a speaker’s social and psychological position in relation to the involved parties. Thus honorifics

remove potential conflict and facilitate communication. It is precisely because of this function of honorifics in smooth communicative interaction that they instantiate a prototypical case of politeness phenomena in language use, and this in turn inspires the study of honorifics within the framework of general theories of linguistic politeness, a field gaining increasing attention thanks to works such as Lakoff (1975) and Brown and Levinson (1987). Apparently, all languages, regardless of the degree of elaboration of their honorifics system, have ways of making speech behavior polite. An interesting issue therefore is what difference, if any, possession of a highly elaborate honorifics system entails in the communicative function of languages. The relevant research, such as that of Hill et al. (1986), indicates that honorifics languages are more strongly indexical than are those languages (e.g., English) that lack an elaborate honorifics system. That is, different honorific forms are used according to different types of addressee, and thus honorific forms indicate the types of interactants. Hill et al. (1986) showed that American students use the same expression, May I borrow your pen?, to fairly diversified categories of people, including professors, strangers, physicians, and store clerks. On the other hand, Japanese students would use the super-polite form Pen o okarisitemo yorosii desyoo ka ‘(lit.) Is it all right if I humbly borrowed your pen?’ specifically to address their professors, but the middle-level polite form Pen o kasite itadake masu ka ‘(lit.) Is it possible that you give me the favor of lending me your pen?’ to address classmates or other people, including strangers, physicians, store clerks, etc. Clearly the Japanese students’ speech form sets their professors apart from the other categories of people they interact with, reflecting the distinct status that the Japanese students, unlike their American peers, accord to their professors. In other words, the most elaborate honorific expression indexes the occasion in which the most relevant power superior is involved. This kind of indexing function of honorifics is seen much more clearly in those languages in which speech levels are more rigidly determined in relation to speech situations. In such a language, a given speech level, called a ‘register’ precisely for the reason discussed here, clearly indicates what sort of interaction is involved. For example, Javanese has as many as 10 speech levels, and each register indicates the nature of the interactants (Sakiyama, 1989). Thus, the low, ordinary style (Ngoko lugu) is used when elders address younger ones, or between intimate friends. The middle style (Madyå ngoko) is used among female itinerant merchants, among country folk, and when the nobility address their


subordinates. The highest form (Kråmå inggil), on the other hand, is used when a superior is to be treated with special deference (see also Geertz, 1968). Honorifics, in sum, convey social meaning, as opposed to the referential meaning a sentence is normally understood to express. They indicate, among other things, the social categories of the people with whom the speaker is interacting. This in turn has the effect of making the speakers of an honorifics language more attentive to those factors (e.g., age, wealth, occupation, family background) that determine the categories of people. The speaker must ascertain quickly what kind of people he is dealing with so as to choose honorific forms appropriate to his interactants. Thus the presence or absence of an elaborate honorifics system may have a rather profound effect on how people perceive their environments and how they structure their communicative strategies. See also: Classifiers and Noun Classes; Connotation; Context and Common Ground; Conventions in Language; Cooperative Principle; Face; Gender; Irony; Memes; Politeness Strategies as Linguistic Variables; Politeness; Pragmatic Determinants of What Is Said; Pragmatic Presupposition; Pronouns; Taboo, Euphemism, and Political Correctness.

Bibliography

Agha A (1994). ‘Honorification.’ Annual Review of Anthropology 23, 277–302.
Agha A (1998). ‘Stereotypes and registers of honorific language.’ Language in Society 27(2), 151–193.
Braun F (1988). Terms of address: problems of patterns and usage in various languages and cultures. Berlin: Mouton de Gruyter.
Brown P & Levinson S (1987). Politeness: some universals in language usage. Cambridge: Cambridge University Press.

Brown R & Gilman A (1960). ‘The pronouns of power and solidarity.’ In Sebeok T A (ed.) Style in language. Cambridge, MA: MIT Press.
Comrie B (1975). ‘Polite plurals and predicate agreement.’ Language 51, 406–418.
Cook H M (1996). ‘Japanese language socialization: indexing the modes of self.’ Discourse Processes 22(2), 171–197.
Dixon R M W (1972). The Dyirbal language of North Queensland. Cambridge: Cambridge University Press.
Geertz C (1968). ‘Linguistic etiquette.’ In Fishman J (ed.) Readings in the sociology of language. The Hague: Mouton.
Goffman E (1956). ‘The nature of deference and demeanor.’ American Anthropologist 58, 473–502.
Haviland J (1979). ‘How to talk to your brother-in-law in Guugu Yimidhirr.’ In Shopen T (ed.) Languages and their speakers. Cambridge, MA: Winthrop Publications.
Hill B, Ide S, Ikuta S, Kawasaki A & Ogino T (1986). ‘Universals of linguistic politeness: quantitative evidence from Japanese and American English.’ Journal of Pragmatics 10, 347–371.
Kitamura H (1974). ‘Tibetto-go no keigo.’ In Hayashi S & Minami F (eds.) Sekai no keigo. Tokyo: Meiji Shoin.
Lakoff R T (1975). Language and woman’s place. New York: Harper and Row.
Martin S E (1966). ‘Speech levels in Japan and Korea.’ In Hymes D (ed.) Language in culture and society. New York: Harper and Row.
Sakiyama O (1989). ‘Zyawa-go (Javanese).’ In Kamei T, Kono R & Chino E (eds.) The Sanseido encyclopedia of linguistics. Part 2: languages of the world. Tokyo: Sanseido.
Shibatani M (1990). The languages of Japan. Cambridge: Cambridge University Press.
Wenger J R (1982). Some universals of honorific language with special reference to Japanese. Ph.D. diss. (unpubl.), University of Arizona, Tempe, AZ.

Human Reasoning and Language Interpretation
K Stenning, Edinburgh University, Edinburgh, UK
© 2006 Elsevier Ltd. All rights reserved.

Psychologists concerned with deductive reasoning have developed a number of laboratory tasks, bodies of experimental data, and theoretical frameworks addressing what undergraduate subjects do when presented with what are intended as the premises of classical deductive arguments (for a review, see Evans et al., 1993). The linguistic materials used in these tasks often originate in logic textbooks, and the criteria of

correctness used in assessing subjects’ performance also originate in classical logic. The experimental subject is generally presented with these materials out of all context other than the injunction to ‘say what follows’ from the premises. The premises are like messages washed up in a bottle on the laboratory beach. Viewed from a suitable perspective, some of the data from these tasks may yet be extremely interesting as communications in vacuo and be made to speak to theories about communication under more normal pressures. For example, when subjects are given the premises If she has an essay, she’s in the library and She has an


essay, then an overwhelming majority of them endorse the conclusion She’s in the library. However, given the additional premise If the library is open, then she’s in the library, about half of them withdraw the previous conclusion (Byrne, 1989). Subjects thus reason nonmonotonically – an additional premise destroys the perceived validity of an earlier conclusion. Classical logic is always monotonic – adding premises never invalidates conclusions. For another example, about 20% of undergraduate students, given the premise All A are B and asked whether it then follows that All B are A, famously accept this as a valid conclusion. This pattern of reasoning is so well known among logic teachers that it was long ago dubbed the ‘fallacy of illicit conversion.’ Less well known is the fact that about 20% of undergraduate students, given the premise Some A are B and asked whether it then follows that Some B are A, refuse to accept this as a valid conclusion. Even more interestingly, these two 20% groups of subjects are disjoint. Subjects who commit the famous fallacy do not commit the less-illustrious error and vice versa (Stenning, 2002: Ch. 5; Stenning and Cox, 2006). Again, the classification of correct reasoning and error is derived from classical logic, in which the concept of validity stipulates that a valid conclusion must be true in all models of the premises. By way of a last example, subjects can be given four cards with letters on one side and numbers on the other, a conditional rule to the effect that If there’s a vowel on one side then there’s an even number on the other, and the instruction to choose all and only the cards that they need to turn in order to test the truth value of the rule. Such undergraduate subjects overwhelmingly fail to turn over the vowel and the odd number that classical logic dictates they should. Most turn the vowel and the even number (Wason, 1968). This last experimental observation provoked a search for materials that would make this reasoning as simple as the task appears, and this search eventually yielded the following observation. The rule was changed to a familiar drinking-age law: If you drink alcohol, you must be over 18 years of age and the cards described drinkers’ ages and their drinks (16/20; whiskey/orange), and the subjects were asked to choose the cards they needed to turn to find out who, if anyone, was breaking the law. The vast majority turned the whiskey and the 16 card, just as classical logic dictates. This observation gave rise to a widespread rejection of logic as a basis for a theory of human reasoning, the argument being that changing the content changes the reasoning, and logic stipulates that reasoning always proceeds purely in virtue of form, so logic is for the birds – or at least not for humans.
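Before turning to the theoretical responses, the classical prescription for the card task can be stated concretely. The sketch below illustrates only the material-conditional analysis that fixes the ‘correct’ answer, not any of the psychological theories discussed next; the card faces are the usual textbook ones and are assumptions of the illustration.

```python
# Which cards must be turned to test "if there is a vowel on one side,
# then there is an even number on the other"? Under a material-conditional
# reading, a card is worth turning iff some possible hidden side would
# falsify the rule, i.e., yield a vowel paired with an odd number.

cards = ["A", "K", "4", "7"]   # visible faces in the standard presentation

def is_vowel(face):
    return face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def worth_turning(face):
    # A visible vowel could hide an odd number; a visible odd number
    # could hide a vowel. Consonants and even numbers cannot falsify.
    return is_vowel(face) or is_odd_number(face)

print([c for c in cards if worth_turning(c)])   # -> ['A', '7']
```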

This argument has provided inspiration to a number of influential theories of human reasoning: mental models theory (Johnson-Laird and Byrne, 1991), evolutionary psychology (Cosmides and Tooby, 1992), relevance theory (Sperber and Wilson, 1995), and dual-process theory (Evans, 2003). All these observations of striking reasoning phenomena share the simplicity of the sentences involved. Simple conditional sentences or simple monadic quantifications might appear to be stony ground on which to develop rich theories of human interpretation. However, the present argument is that these materials can provide strong motivation for taking seriously, and at face value, some of the contributions of modern logic to the theory of interpretation. A theory of interpretation that can encompass these materials and situations has to regard interpretation as a process that assigns logical forms to sentences only in their communication setting – embedded in their tasks and contexts. Far from a translation process from sentences of English into logical sentences, the process of assigning logical form becomes one of setting a host of parameters affecting the nature of validity, the syntax and semantics of the language, the number and identity of truth values, and many other features. These parameters are set in virtue of what is being done with the sentence – the participants’ goals and understanding of each other’s intentions – as much as by inspecting words in sentences. To cut a long story short and bring an abstract logical treatment down to a concrete analysis of these examples, the drinking-age law is known from its content to have deontic force. As a consequence, assessing whether cases violate the law does not involve testing the truth value of the sentence, and the task is trivially easy. The vowels and consonants rule is likely, in vacuo, to be interpreted as a description, and this likelihood is reinforced by the instruction to assess its truth value. The truth conditions of conditional sentences occupy substantial shelf space in the philosophy of logic literature. On a lawlike interpretation of the conditional robust to exceptions, no amount of card turning will determine the truth of the rule, and so the subjects’ logically correct response (on this logic) would be to reject the task as impossible. There is evidence of just such inclinations (Stenning and van Lambalgen, 2004). This common lawlike robust interpretation of the conditional also arguably underlies the nonmonotonic reasoning in the first example of our diligent student’s whereabouts. The subject is likely to construe the task here as one of credulously interpreting the premises in the experimenter’s materials. Encountering the second conditional, if the library is open then she’s in the library, with this mind-set is


likely to result in its being interpreted as a repair to the first too-general statement – a nonmonotonic process that changes the interpretation of the first conditional so that the original conclusion is no longer valid in any logic. The credulous logic involved in this interpretation is different in its syntax, semantics, number of truth values, and concept of validity from the material implication of classical logic. Here the conditional is not a connective so much as a noniterable license-for-inference; its semantics are given by a three-valued interpretation in which the concept of validity is not the classical concept of the conclusion’s truth in all models of the premises but rather its truth in preferred models on a certain technical definition of preference. If many subjects can reasonably be expected to adopt a credulous attitude to the interpretation of experimenters’ messages from the bottle on the laboratory beach, then this may also explain the patterns of fallacy and omission of valid conclusions in the quantified examples. In vacuo, our credulous construction of a model for the premise All A are B will have some As in it that are Bs, and when we are asked whether all the Bs in that model are As then, operating by what is called closed-world reasoning, the correct answer would be yes. So subjects with this strongly credulous preferred-model-construction understanding of the experimenter’s intentions will commit what would be a fallacy of classical logic, if classical logical inference were what they were trying to do. A subject with a somewhat different, but still nonclassical, understanding of the experimenter’s intentions might observe that the conditions under which a speaker would assert Some A are B are distinctively different from the conditions under which they would assert Some B are A – the same information is packaged for different audiences – and this might be sufficient to lead them, not unreasonably, to reject the idea that the one follows from the other. ‘Following from’ is, after all, a highly elastic concept in English. An approach along these lines invests in a theory of the many interpretational stances subjects may take. It stands some hope of explaining why the 20% of subjects who commit the fallacy don’t fail to draw the conclusion and vice versa: having one model of the communication makes it unlikely that one holds other incompatible models simultaneously. Logic has always been a game of two halves – interpretation and derivation – and modern logic is a representational supermarket of distinct systems for modeling the highly heterogeneous things that people can do in communicating. Logical reasoning (derivation) proceeds purely in virtue of form once an interpretation has been assigned, but the process of assigning an interpretation is as content-involved as it

is possible to be. If derivation yields some unexpected result, reinterpretation is often the order of the day, so these processes are interleaved in real communication. Modern logic is a highly articulated framework for explaining why and how reasoning is so exquisitely sensitive to task, context, and content – possibly useful to birds but certainly essential to understanding human reasoning. It is instructive to compare this logical approach to interpretation with Gricean programs of research, which assume the uniqueness of classical logic as an interpretation, but augment it by pragmatic principles such as Grice’s maxims of quantity – ‘say as much as is needed but no more!’ A number of psychologists have appealed to such programs to bridge the observed gaps between classical logic and their observations (Politzer, 2004; Roberts et al., 2001). The two programs have much ground in common. For example, the closed-world reasoning that makes illicit conversion valid may be reconstructable in Gricean terms. Helpful speakers provide all and only relevant information and expect us to draw these implicatures. Taking the multiplicity of logics seriously and assuming that logical form is a function of much more than the sentences that appear opens up research to the rigors of formulating logical systems that model the data. The Gricean (and related) programs have always been notoriously hard to compute. From a cognitive point of view, the logical approaches have opened up the issue of how the systems can be implemented in the mind. The default logic appealed to here turns out to be highly tractable and implementable in spreading activation networks (Stenning and van Lambalgen, 2008). The next few years should determine which is the more productive program of research for semantics and pragmatics. Assessment of people’s reasoning requires a firm foundation on the interpretations they adopt. This does not mean that their reasoning will be perfect, or that they cannot be faulted for adopting inappropriate interpretations. The rich framework for interpretation provided by modern logic offers the apparatus to link formal studies with a much wider range of communication, both in vacuo and in context. See also: Category-Specific Knowledge; Cognitive Semantics; Coherence: Psycholinguistic Approach; Concepts; Context and Common Ground; Context Principle; Cooperative Principle; Field Work Methods in Semantics; Implicature; Intention and Semantics; Lexical Meaning, Cognitive Dependency of; Logic and Language; Logical Consequence; Logical Form; Mass Expressions; Modal Logic; Multivalued Logics; Neo-Gricean Pragmatics; Neologisms; Nonmonotonic Inference; Nonstandard Language Use; Propositional and Predicate Logic; Psychology, Semantics in; Quantifiers; Representation in Language and Mind; Semantic Change; Semantic Change, the Internet and Text Messaging; Speech Acts and AI Planning Theory; Thought and Language.

Bibliography

Byrne R M J (1989). ‘Suppressing valid inferences with conditionals.’ Cognition 31, 61–83.
Cosmides L & Tooby J (1992). ‘Cognitive adaptations for social exchange.’ In Barkow J, Cosmides L & Tooby J (eds.) The adapted mind: evolutionary psychology and the generation of culture. New York: Oxford University Press. 163–228.
Evans J (2003). ‘In two minds: dual-process accounts of reasoning.’ Trends in Cognitive Sciences 7(10), 464–469.
Evans J, Newstead S & Byrne R (1993). Human reasoning: the psychology of deduction. Hillsdale, NJ: Lawrence Erlbaum.
Johnson-Laird P N & Byrne R M J (1991). Deduction. Hillsdale, NJ: Lawrence Erlbaum.

Politzer G (2004). ‘Reasoning, judgement and pragmatics.’ In Noveck I A & Sperber D (eds.) Experimental pragmatics. London: Palgrave MacMillan. 94–115.
Roberts M J, Newstead S E & Griggs R A (2001). ‘Quantifier interpretation and syllogistic reasoning.’ Thinking and Reasoning 7(2), 173–204.
Sperber D & Wilson D (1995). Relevance: communication and cognition (2nd edn.). Oxford: Blackwell.
Stenning K (2002). Seeing reason: language and image in learning to think. Oxford: Oxford University Press.
Stenning K & Cox R (2006). ‘Rethinking deductive tasks: relating interpretation and reasoning through individual differences.’ Quarterly Journal of Experimental Psychology: Human Experimental Psychology 59, 1454–1483.
Stenning K & van Lambalgen M (2004). ‘A little logic goes a long way: basing experiment on semantic theory in the cognitive science of conditional reasoning.’ Cognitive Science 28(4), 481–529.
Stenning K & van Lambalgen M (2008). Human reasoning and cognitive science. Cambridge, MA: MIT Press.
Wason P (1968). ‘Reasoning about a rule.’ Quarterly Journal of Experimental Psychology 20, 273–281.

Hyponymy and Hyperonymy
M L Murphy, University of Sussex, Brighton, UK
© 2006 Elsevier Ltd. All rights reserved.

Hyponymy is the ‘type of’ relation among lexical items; for example rose is a hyponym of flower in that roses are types of flowers. In other words, if X is a hyponym of Y, then the extension of X is a subset of the extension of Y. Thus, we can say that hyponymy is a relation of inclusion. Its converse is hyperonymy (also called hypernymy or superordinacy); flower is the hyperonym (or superordinate) of rose. The term hyponymy is also used generally to refer to the relation that holds between a hyponym and hyperonym. While this relation is central to many theories of lexical organization, it is arguably not a lexical relation at all but instead a semantic relation that reflects the relations among the things words designate. Nevertheless, it is a central relation (along with incompatibility) in many theories of lexical organization. We can use an angle bracket (>) to indicate this relation between two expressions (e.g., flower > rose).

Hyponymy as a Paradigmatic Relation

Hyponym relations can be represented in tree structures, as in the abbreviated FRUIT taxonomy in Figure 1. The items in this tree can be considered to be parts of a hyponymic (or taxonomic) paradigm.

Figure 1 Hyponym relations in the FRUIT lexical field.

Note that the tree (and the relation itself) is asymmetrical, in that any word may have many hyponyms, but in most cases has only one immediate hyperonym. For example, orange has several hyponyms (navel, Valencia, mandarin), but each of these has only one hyperonym (orange), and orange in turn has only one immediate hyperonym (citrus), and so forth. Lyons (1977) notes that for some syntactic categories, hyperonyms are often of a different syntactic category than the hyponyms. For example, adjectives may have nominal hyperonyms, as in emotion > {happy, sad, angry}. Because hyponymy is considered to be a paradigmatic relation, and paradigmatic relations are generally defined as holding between members of the same syntactic category, Lyons dubs these cross-categorical relations quasi-hyponymy. But even in nominal taxonomies, we see some differences in syntactic categories at the highest levels. In Figure 1, while citrus can be used as a noun, it is more often used as an adjective (citrus fruit), as are the orange types (navel, mandarin) at the bottom extreme. And unlike


the countable items at the basic level (an apple, a pear), fruit is in a non-count noun subcategory (some fruit, a piece of fruit). Such facts (in an otherwise straightforward taxonomy) raise the question of whether so-called hyponymy is the norm and thus challenge the notions that paradigmatic relations exist within specific syntactic categories and that hyponymy is a paradigmatic relation. Another problem in defining and diagnosing hyponymy is that the natural language diagnostic ‘X is a type of Y’ is often in conflict with the set inclusion definition. For example, (1) is an odd statement, even though it is indisputable that the set of queens (in the ‘monarch’ sense of queen) is a subset of the set of women. (1) ?A queen is a type of woman.

To get around these problems of definition, Cruse proposes that HYPONYM is a prototype-based category (Cruse, 2002). Taxonyms, hyponyms that felicitously occur in type of statements with their hyperonyms, are among the most prototypical cases of hyponymy in that they participate in entailment relations, and the extra specificity of the hyponym in relation to the hyperonym is central to the meaning of the hyponym. Other features of prototypical hyponym relations (such as dog > terrier) are directness of the relation (as opposed to animal > terrier, in which intervening levels have been elided) and sameness of non-denotative aspects of meaning/communicative function, such as register (i.e., as opposed to pooch > terrier). Such an approach accounts for the perception that some superordinate/subordinate relations are better examples of hyponymy than others.

Types and Properties of Hyponyms

Hyponymy, or at least taxonymy, is transitive, in that if X < Y and Y < Z, then X < Z. Such entailment relations contribute to classical syllogisms, as in (2):

(2) A terrier is a type of dog.
    A dog is a type of animal.
    ∴ A terrier is a type of animal.

However, some items that pass the ‘type of’ diagnostic for hyponymy do not result in valid syllogisms, as in (3):

(3) A game console is a type of computer.
    A computer is a type of office equipment.
    # A game console is a type of office equipment.

There are two problems with the second premise of (3) that result in the failure of the syllogism. First, the statement is only true when we interpret a computer as ‘a (proto)typical computer’. So, computer in the

second premise is defined using different criteria than in the first premise. The other problem here is that the type of relations in the two premises are not equivalent. The first premise tells us what a game console is, while the second one tells us what a computer is used for. While the first premise expresses taxonomic hyponymy (or taxonymy), the second is a case of functional hyponymy. Any term might have hyp(er)onyms of both types. For example, a dog can be both an animal (taxonomy) and a pet (function). The entailment relations in syllogisms are only reliably found with taxonomic hyponyms.
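The contrast between (2) and (3) can be made explicit by typing each link: the syllogistic inference is safe when every link in the chain is taxonomic, and unsafe as soon as a functional link intervenes. A sketch under that assumption (the link inventory here is invented purely for illustration):

```python
# Hyponym links tagged as taxonomic ("is a kind of") or functional
# ("is used as"). Only all-taxonomic chains license the inference in (2).

links = {
    ("terrier", "dog"): "taxonomic",
    ("dog", "animal"): "taxonomic",
    ("game console", "computer"): "taxonomic",
    ("computer", "office equipment"): "functional",
}

def chain_entails(*nodes):
    """True iff every consecutive link in the chain is taxonomic."""
    return all(links.get(pair) == "taxonomic"
               for pair in zip(nodes, nodes[1:]))

print(chain_entails("terrier", "dog", "animal"))                      # True, as in (2)
print(chain_entails("game console", "computer", "office equipment"))  # False, as in (3)
```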

Hyponymy and Lexical Organization

Models of the lexicon often involve organization by semantic relation, as in network models (e.g., WordNet, Fellbaum, 1998) and semantic field theory (Lehrer, 1974). In such models, hyponymy is a key organizer of lexical items. The question arises, however, whether these relations are really relations among words (and hence should be represented in a modular lexicon) or if they are just relations among the meanings of the words or among the things that the words describe. Because hyponymy is most often defined as a relation of set inclusion, and because the sets involved are the extensions of the words, not the words themselves or their meanings (intensions), the necessity of representing hyponymy in the lexicon is called into question (Murphy, 2003). Instead, it could be said that the semantic relation of hyponymy is simply a linguistic reflex of the extensional relation of class inclusion. In this case, representing such information as lexical (i.e., specifically linguistic) information is redundant, as the information will already be available as part of our world knowledge. (This contrasts with the more arguably lexical relations of antonymy and synonymy.) Cruse’s prototype-based approach above takes into account lexical issues (e.g., similarity of register) as well as the semantic issues. This is useful in metalinguistic discussions of ‘what’s a good example of hyponymy’, but again does not require direct representation in a lexicon because the relation between any two words is then derivable from the prototype model. Nevertheless, representation of hyponymy is useful in computational lexicons (such as WordNet) because the aim in these cases is to represent the knowledge base (including semantic relations) that makes using the lexicon meaningful. Like computational lexicons, dictionaries rely heavily on hyponymy. The typical genus-differentiae structure of dictionary definitions involves giving a hyperonym (genus) of the defined word, then the means to differentiate the defined word from the set


of hyponyms of that hyperonym. For example, apple might be defined as ‘a (type of) fruit [genus], grown on a tree of the rose family, that is typically round and fleshy with thin red, yellow, or green skin [differentiae]’. Theories of lexical meaning that follow the model of dictionaries (i.e., in using semantic features or components, e.g., Katz, 1972) similarly give more detail in the representation of the meanings of hyponyms than in the representation of their hyperonyms. So, for example, the semantic representation of apple must access (either by inheritance or direct representation) all of the information in the semantic representation of fruit. In other words, the meaning of apple is the meaning of fruit plus some differentiae. However, Wierzbicka (1984) opposes this position, arguing that the meaning of fruit is more complex than that of apple. In her view, the word fruit does not (unlike apple) stand for ‘a type of thing’, but rather stands for a group of dissimilar types of thing. Therefore she argues that the meaning of fruit should encompass the meaning of apple (and orange and banana) rather than vice versa. See Bolinger (1992) for counterarguments to this position. In conclusion, there is no doubt that class inclusion relations are central to cognitive processes such as categorization and reasoning. We use hyponym and hyperonym relations in defining words; they are useful in creating coherence in discourse without repetition (so we can refer to the same being as Fido, the puppy, the dog, the terrier, the animal, and so forth); and hyponyms and hyperonyms prompt each other in word association tests. Nevertheless, there is not

clear evidence that the relation is a linguistic (lexical) relation rather than a cognitive-semantic relation (which reflects relations between referents). Still, the class inclusion relation (if not the lexical relation of hyponymy) is central to most approaches to lexical semantic representation. See also: Antonymy and Incompatibility; Componential Analysis; Disambiguation; Extensionality and Intensionality; Lexical Fields; Lexicon: Structure; Meronymy; Partitives; Prototype Semantics; Synonymy.

Bibliography

Bolinger D (1992). ‘About furniture and birds.’ Cognitive Linguistics 3, 111–117.
Cruse D A (1986). Lexical semantics. Cambridge: Cambridge University Press.
Cruse D A (2002). ‘Hyponymy and its varieties.’ In Green R, Bean C A & Myaeng S H (eds.) The semantics of relationships. Dordrecht: Kluwer.
Fellbaum C (ed.) (1998). WordNet: an electronic lexical database. Cambridge, MA: MIT Press.
Katz J J (1972). Semantic theory. New York: Harper and Row.
Lehrer A (1974). Semantic fields and lexical structure. Amsterdam: North Holland.
Lyons J (1977). Semantics (2 vols). Cambridge: Cambridge University Press.
Murphy M L (2003). Semantic relations and the lexicon. Cambridge: Cambridge University Press.
Wierzbicka A (1984). ‘Apples are not a “kind of fruit”.’ American Ethnologist 11, 313–328.


I

Ideational Theories of Meaning
E J Lowe, University of Durham, Durham, UK
© 2006 Elsevier Ltd. All rights reserved.

Ideational theories of meaning are commonly attributed to 17th- and 18th-century empiricist philosophers such as Thomas Hobbes, John Locke, George Berkeley, and David Hume, and received severe criticism from 20th-century philosophers of language, notably Gottlob Frege and Ludwig Wittgenstein. Unfortunately, the work of the earlier philosophers was seriously misunderstood by their later critics, and much of the criticism was misdirected. In fact, it is highly debatable whether the empiricist philosophers in question were offering theories of meaning in anything like the sense in which the phrase ‘theory of meaning’ would now be understood (see Hacking, 1975, chapter 5). Locke devoted an entire book (Book III, ‘Of Words’) of his greatest work, An Essay Concerning Human Understanding (Locke, 1975), to the topic of language, focusing on the communicative function of language, the advantages of language, and the abuses of language. According to Locke, the main purpose of language is to serve as a medium for the communication of thought from one thinker to another. Locke, like many of his contemporaries, favored an ideational view of thought – that is, he believed that thinking is essentially an exercise of the faculty of imagination and that thoughts consist in sequences of ideas in the minds of thinkers. Idea was a term of art in early modern philosophy, as ubiquitous then as the term concept is in present-day philosophical writing and playing a partly similar role, to denote a component or ingredient of the contents of thoughts. However, the empiricist philosophers used it equally to denote a component or ingredient of the contents of sensory experiences, reflecting the tight connection they presumed to obtain between thought and perception. Locke famously claimed that words ‘in their primary or immediate signification stand for nothing but the ideas in the mind of him that uses them’ (Locke, 1975: 405) – one of the most misunderstood claims in the history of philosophy. Modern readers of

Locke are apt to interpret him as claiming that the meaning of a word is an idea in the mind of the speaker, which suggests that he adopted a thoroughly subjectivist – indeed, almost solipsistic – theory of meaning. But such an interpretation mistakenly conflates signification, as Locke uses this term, with meaning in the semantic sense. Locke is not claiming that words refer to or denote ideas in the mind of the speaker, but simply that speakers primarily use words to express the contents of their own thoughts, for the purpose of communicating those thoughts to others. Locke could happily concede that ‘dog’ in my mouth refers to dogs, not to my idea of a dog. His point is merely that when I assert, for example, ‘Dogs bark,’ I am expressing a thought that I have concerning dogs – a thought which is about dogs by virtue of containing as one of its ingredients my idea of a dog. To clarify this matter, it is useful to distinguish between three quite different kinds of relations: semantic relations, cognitive relations, and expressive relations (see Lowe, 1995: 145 and Figure 1). Semantic relations are word-to-world relations, such as the reference relation between the word ‘dog’ and dogs. Cognitive relations are thought-to-world relations, such as the intentional relation between my idea of a dog and dogs, by virtue of which the former is ‘about’ the latter, or has the latter as its ‘intentional object.’ Expressive relations are word-to-thought relations, such as the signification relation (in Locke’s sense) between the word ‘dog’ as used by me and my idea of a dog. In these terms, modern critics of Locke and like-minded early-modern empiricist philosophers may be accused of misconstruing their account of linguistic signification as a theory of semantic relations, when in fact it is a theory of expressive relations. Locke’s primary interest lies not in semantics but in the nature of thought and its relation to language, that is, in cognition and expression. Of course, given an account of word-to-thought (expressive) relations and an account of thought-to-world (cognitive) relations, it is possible to construct an account of word-to-world (semantic) relations, although it seems that Locke himself was not much interested in doing this in any detail.
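The two-step construction just described (and pictured in Figure 1) is, formally, a composition of relations: chain the expressive (word-to-idea) relation with the cognitive (idea-to-world) relation, and a word-to-world relation falls out. A schematic sketch, with all three toy relations invented purely for illustration:

```python
# The two-step semantic construction as relational composition:
# semantic = expressive composed with cognitive (word -> idea -> world).

expressive = {("dog", "IDEA_OF_DOG"), ("red", "IDEA_OF_RED")}   # word -> idea
cognitive  = {("IDEA_OF_DOG", "dogs"),                          # idea -> world
              ("IDEA_OF_RED", "red things")}

def compose(r, s):
    """(a, c) is in the composition iff (a, b) is in r and (b, c) is in s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

semantic = compose(expressive, cognitive)                       # word -> world
print(sorted(semantic))   # [('dog', 'dogs'), ('red', 'red things')]
```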


Figure 1 Locke’s dog-legged semantic theory.

Such an account of word-to-world relations will be, in Simon Blackburn’s vivid phrase, a ‘dog-legged’ semantic theory (Blackburn, 1984: 40), because it takes such relations to be the product of two other kinds of relations, cognitive and expressive. There may be problems with semantic theories of this type, but they will only be obscured by misunderstanding the Lockean theory of linguistic signification as itself being a theory of semantic, as opposed to expressive, relations. In saying that words are signs of ideas in the minds of speakers, Locke means that they are indicators of those ideas, which speakers can exploit to enable an audience to gain knowledge of what they are thinking (see Ott, 2004). Locke presumes that signification of this kind is artificial – the product of human invention – rather than natural, but that it is otherwise comparable to the indicator relation between dark clouds and impending rain, the former constituting evidence for the latter. To evaluate the Lockean approach to language, we need to probe a little more deeply into his account of thought and ideas. Locke divides ideas into simple ideas of sensation and reflection, and complex ideas that are compounded by the mind out of those simple ideas. By ‘reflection’ Locke means what would now be called ‘introspection.’ Examples of simple ideas of sensation would be our ideas of colors, taste, and sounds, while examples of simple ideas of reflection would be our ideas of basic mental activities, such as thinking, desiring, and willing. According to Locke, many of our complex ideas are acquired by the mental process of abstraction, or what we might now call ‘selective attention,’ when we notice that certain types of simple ideas regularly accompany each other in our experience. For example, our complex idea of an apple will include various simple ideas of shape, size, color, and taste which we find that we regularly experience in conjunction with one another. It is a matter for dispute among Locke scholars whether or not he conceived of sensory ideas as mental images,

and the textual evidence is far from conclusive (see Lowe, 1995: 35–47). It is much clearer that Berkeley, writing a little later, held an imagistic view of ideas and, perhaps wrongly, construed Locke as holding one too. Berkeley famously criticized Locke’s theory of abstraction as being incoherent, but the cogency of the criticism seems to depend upon the interpretation of Locke’s view of ideas as being imagistic. (Berkeley urged that Lockean abstract ideas must lack determinacy of content in a way that seems problematic only on the assumption that such ideas are, or are relevantly like, images: see Lowe, 1995: 158–161.) Setting aside the controversy over whether or not Locke was an imagist, the essential features of his ideational conception of thought reduce to the following. First, Locke is clearly committed to a strong version of the doctrine that thought is independent of language. Indeed, his central aim in discussing language is to show how easily language can lead us astray if we fail to notice the differences between it and thought. Second, he clearly believes that thinking, involving as he takes it an exercise of the faculty of imagination, is intimately related to and, with respect to its content, ultimately wholly indebted to perceptual experience, both sensory and introspective. Modern critics of ideationism are not only apt to misconstrue the ideational account of linguistic signification as being a semantic, as opposed to an expressive, theory, but also to oversimplify the object of their criticism. For instance, it is sometimes lampooned as maintaining that every word in a speaker’s mouth signifies a discrete idea in the speaker’s mind, including words such as ‘not’ and ‘but,’ which Locke himself classifies as ‘particles’ and to which he devotes a chapter in the Essay. It is easy enough to poke fun at the suggestion that ‘not’ signifies an idea of ‘notness,’ in the way that ‘red’ supposedly signifies an idea of redness. But Locke himself suggested nothing so preposterous, contending instead that negative particles are used to convey a speaker’s mental


act or attitude of denial with respect to a certain thought-content (Locke, 1975: 471). Berkeley’s version of ideationism was still more sophisticated, recognizing many uses of language other than simply to express the speaker’s ideas – for example, to invoke an emotive response in an audience – and allowing, too, that often we think ‘in words’ rather than exclusively ‘in ideas’ (see Olscamp, 1970: 130–153). Undoubtedly, the disfavor into which ideationism fell during the 20th century was largely due to the conviction that it rendered linguistic meaning excessively subjective. Frege’s attack on it was integral to his more general onslaught on psychologism, which he saw as a dire threat to the objectivity of logic and mathematics. This is why he is at pains to distinguish sharply between ‘ideas,’ which he regards as purely subjective psychological entities, and ‘senses’ of expressions, which he regards as mind-independent and intersubjectively graspable abstract objects (see Frege, 1960). Wittgenstein is equally antagonistic toward ideationism, which is a prime target of his famous ‘private language argument’ (see Wittgenstein, 1958: 94–96). Here again the complaint is that ideas are unsuited by their irredeemably subjective and private character to be recruited for a workable account of intersubjective linguistic meaning and communication, and that ideationism unavoidably degenerates into some form of scepticism or relativism. To the extent that criticisms focusing on the privacy of ideas construe ideationism as postulating ideas as the meanings of words, they are misplaced for the reasons explained above. Even so, it is fair to ask of the ideationist how words can serve to communicate ideas, given the privacy of the latter – a privacy that Locke himself acknowledged and emphasized. Indeed, for Locke, it is precisely because ideas are private – ‘invisible, and hidden from others’ – that language, in the form of ‘external sensible signs,’ is needed to ‘lay them before the view of others’ (Locke, 1975: 405). One might suppose it to be a fatal difficulty for ideationism that no one has direct access to the ideas of another speaker and so is never in a position to tell whether or not the idea that he or she associates with a given word resembles the idea that is associated with it in the mind of someone else. However, Locke himself was fully cognizant of this seeming difficulty and was not at all disconcerted by it. It was he, indeed, who first drew attention to the notorious puzzle now known as the ‘inverted spectrum’ problem – the question of how I can tell whether the way in which blue things look to me might not be how yellow things look to someone else, and vice versa (see Locke, 1975: 389). Locke’s answer is that it simply doesn’t matter, for the purpose of the successful communication of

thoughts between people concerning blue or yellow things. However, one might agree with Locke about this while failing to see how he was in a position to say it himself, given his commitment to an ideational theory of linguistic signification. For one might suppose that such a theory takes success in communication to consist in the replication in the hearer’s mind of ideas which the speaker associates with the words that he or she utters. But there is no reason to tie ideationism to such a doctrine, nor any evidence that ideationists such as Locke espoused it. Ideationism is at most committed to the thesis that in successful communication of the speaker’s thoughts to a hearer, ideas are evoked in the hearer’s mind which correspond to those in the speaker’s mind, in a sense of ‘correspondence’ which does not imply resemblance or replication. That such a correspondence obtains is subject to intersubjective confirmation without imposing upon the persons concerned the impossible burden of comparing each other’s ideas, and it may be taken to be set up through the social processes of language teaching and learning (see Lowe, 1996: 172–177). See also: Category-Specific Knowledge; Cognitive Semantics; Concepts; Metaphor and Conceptual Blending; Metonymy; Onomasiology and Lexical Variation; Philosophical Theories of Meaning; Pre-20th Century Theories of Meaning; Psychology, Semantics in; Thought and Language; Use Theories of Meaning.

Bibliography

Ayers M R (1991). Locke. London/New York: Routledge.
Berkeley G (1949). The works of George Berkeley, Bishop of Cloyne. Jessop T E & Luce A A (eds.). London: Nelson.
Blackburn S (1984). Spreading the word: groundings in the philosophy of language. Oxford: Clarendon Press.
Frege G (1960). ‘On sense and reference.’ In Geach P & Black M (eds.) Translations from the philosophical writings of Gottlob Frege, 2nd edn. Oxford: Blackwell.
Hacking I (1975). Why does language matter to philosophy? Cambridge: Cambridge University Press.
Locke J (1975). An essay concerning human understanding. Nidditch P H (ed.). Oxford: Clarendon Press.
Lowe E J (1995). Locke on human understanding. London/New York: Routledge.
Lowe E J (1996). Subjects of experience. Cambridge: Cambridge University Press.
Lowe E J (2005). Locke. London/New York: Routledge.
Olscamp P J (1970). The moral philosophy of George Berkeley. The Hague: Martinus Nijhoff.
Ott W R (2004). Locke’s philosophy of language. Cambridge: Cambridge University Press.
Wittgenstein L (1958). Philosophical investigations (2nd edn.). Anscombe G E M (trans.). Oxford: Blackwell.


Ideophones
C Kilian-Hatz, Universität zu Köln, Köln, Germany
© 2006 Elsevier Ltd. All rights reserved.

Introduction

The study of ideophones dates back to the end of the 19th century, but it was especially influenced by Diedrich Westermann (1927, 1937), who examined the sound symbolism of a special class of onomatopoeic words called ‘Lautbilder’ (‘sound pictures’) in West African languages, which ‘describe an object or denote an event as a whole’ (Westermann, 1937: 159). Until the late 1970s, ideophones were almost exclusively described for African languages, but no ideophones were reported for languages of the Khoisan family. In recent years, an increased interest in ideophones has provided us with detailed studies in Khoisan, as well as in European, Asiatic, Australian, and Amerindian languages (cf. Asher, 1982; Nuckolls, 1996; Kita, 1997; Alpher, 2001; Voeltz and Kilian-Hatz, 2001), so that it is now established that the existence of ideophones is not restricted to some African language families but seems to be a universal feature of human language. Examples of ideophones are in English wop, wiggle-waggle, tic-tac, ding-dang-dong, ptt ptt ptt or in Southern Sotho (Sotho, Southern) nele (‘disappear and be silent for a long time’), shwɛrɛrɛ (‘tell lies’), ŋɛkɛthɛ (‘have a salty taste’), hlanahlana (‘fly up and down and from side to side’), and tlɔpɔrɔ (‘be straight’) (Kunene, 1978: 50ff.). Most students of ideophones still refer to the definition given by Clement Doke (1935: 118), who coined the term ‘ideophone’ for the function of these words in Bantu languages: a ‘vivid representation of an idea in sound. A word, often onomatopoeic, which describes a predicate, qualificative or adverb in respect to manner, colour, sound, smell, action, state or intensity.’ There are, however, no formal criteria that define a word class of ideophones crosslinguistically. Therefore, there still exists confusion about how to classify ideophone-like words in a given language. The consequence is that ideophones are either ignored in grammars or, if they are described at all, the descriptions consist only of a word list or concentrate on a language-internal analysis. Two methodological approaches can be distinguished, which are summarized in the following sections.

Formal Approach

Similar to interjections, ideophone-like words behave differently from the well-known classes of nouns, verbs, adjectives, or adverbs, because they may be

phonologically aberrant and are mostly morphologically invariable simplicia. Despite such formal anomalies compared to other word classes, ideophones syntactically parallel them in some respects, so that it is not always clear in a given language if they should be interpreted as a phonologically and/or morphologically aberrant subclass of other lexical word classes or if they form a word class of their own. Ideophones may occupy the slot of a verb, noun, adverb, or adjective in the sentence where they are embedded. In some languages, their distribution may be restricted to only one of these positions, and they are, therefore, categorized as subclass(es) of those words which they replace. They form a subclass of nouns, e.g., in Somali (Tosco, 1998: 129), of verbs, e.g., in Tsonga (Marivate, 1985: 214), or of adverbs, e.g., in Yagua (Payne and Payne, 1990: 457). In most languages, however, two or more such syntactic functions may be assigned to them, depending on which slot they occupy. They may be adjectives, adverbs, and verbs in Wolaitta (Wolaytta) (Amha, 2001: 56ff.), and in Pastaza Quechua (Quechua, Pastaza, Southern) they may be used as adverbs, verbs, nouns, adjectives, or intensifiers (Nuckolls, 1996: 9f, 141f.). They behave similarly in Gbaya-Bogoto, so that Péli (1991/2: 30) characterizes their syntactic behavior as extremely mobile and flexible. Ideophones that are categorized as a subclass are often described under various terms like ‘nonverbal verbs’ (Marivate, 1985: 214) or ‘ideophonic verbs, nouns and adverbs’ (Newman, 1968). Based on these language-specific descriptions, only a few efforts have been made to extract formal criteria that may be common to a word class of ideophones crosslinguistically. In Bantu languages, ideophones form a more coherent class of words syntactically than in other language families. Based on Doke’s functional definition given above, Samarin (1971) compares ideophones in a survey of over 150 Bantu languages and draws up a list of formal features that are common to ideophones in Bantu. These include a predictable derivation of ideophones and striking parallels with verbs, which often differ from ideophones only in a predictable final vowel. Whereas this list is an instrument for differentiating ideophones from other word classes in Bantu, not all of these features are applicable to languages of other language families. This is confirmed by Childs (1987, 1994) in a survey of ideophones in African languages, where he lists some properties of ideophones, like sound symbolism, possible phonetic anomalies and violations of phonological rules, the lack of morphology except for reduplication or vowel lengthening, the


fact that they stand apart from the syntax of the surrounding utterance, which Kunene (1978) calls ‘syntactical aloofness’, and, finally, their close connection to gestures. However, no feature seems to be unique to ideophones crosslinguistically (Childs, 1994: 182), and no feature is shared by all ideophones in a specific language, so that ideophones may better be described in a prototype approach (Childs, 1987).

Discourse Pragmatic Approach

Because formal properties alone may not be sufficient to separate ideophones from other words, other investigators highlight their sound-symbolic nature, which is connected to their special pragmatic function in discourse. Most authors assign a special expressive function to ideophones or characterize them as part of an expressive register. Therefore, they are named 'expressive adverbs,' e.g., in Bambara (cf. Dumestre, 1981), 'expressive onomatopes' in Urubu-Kaapor (Urubú-Kaapor) (cf. Kakumasu, 1986), or simply 'expressives,' e.g., in Semai (cf. Diffloth, 1976).

Ideophones consist of a combination of motivated sounds that imitate an event rather than describe it. Fortune (1962: 6) characterizes ideophones in Shona as part of the event level, in being "a vivid re-presentation or re-creation of an event in sound." Similarly, Kunene (1978: 3ff.) focuses in his study on the shift of participant roles whereby the speaker becomes the actor of the ideophonic event and the hearer becomes a witness. Kock (1985: 52) claims that the communicative purpose of ideophones is to actualize the event they describe. And Nuckolls (1996: 12) states that "the distinction between a speech event and a narrated event is blurred [because] the speech event becomes the narrated event." Kita (1997), finally, sees Japanese ideophones, which he calls 'mimetics,' as part of the affecto-imagistic dimension, which is complementary to the analytic dimension containing descriptive information. From this perspective, ideophones are not just expressive variants of other words but constitute a complementary mode or dimension of speech.

Whereas the slot that an ideophone occupies in the sentence was a major criterion for its syntactic categorization as a subclass of verbs, nouns, adverbs, or adjectives, Kunene (1978) rejects this argument as misleading. According to him, the assignment of such a syntactic function is merely triggered by the translation into the reference language. He demonstrates for Southern Sotho that the semantics and function of ideophones remain the same in all kinds of syntactic environments, being invariably a 'dramaturgic predicate,' even if their position might change; therefore, ideophones are 'syntactically aloof' from the rest of the sentence.

Based on Kunene's thesis that ideophones behave pragmatically and grammatically differently, the study of Kilian-Hatz (1999) represents the first typological attempt to define a word class 'ideophone' crosslinguistically. In combining formal, sound-symbolic, and pragmatic features shared by ideophone-like words in 138 languages of the world, an ideophone is defined as a proper word class that is complementary to the descriptive word classes of the analytic dimension (i.e., nouns, verbs, adjectives, and adverbs) on the one hand and to the purely appellative interjections and exclamations on the other. Other than descriptive words, ideophones convey another mode of perception by simulating an immediate sensory experience of an event that hearer and speaker share with each other; this creates a social intimacy of the speaker community (cf. Nuckolls, 1996: 12ff.), which is a characteristic of informal speech and of the expressive language register, where ideophones are exclusively found. The simulation of an event implies that the event takes place in the imagination; thus, ideophones are inherently affirmative in nature and are not subject to negation or interrogation. The semantic concepts of ideophones range over audible, visible, and tactile perceptions, as well as smell and psychological states, but a language contains at least audible ideophones. An interesting areal phenomenon is that the intensity of colors like 'bright white' or 'bright red' is expressed by ideophones in African languages and in Malagasy.

In contrast to words of the analytic register, each of which denotes only a part of an event, an ideophone denotes an event as a whole and represents an independent, complete sentence- or clause-like utterance, which forms its own intonation unit (Kilian-Hatz, 1999: 234ff.). Comparable to a live broadcast on the radio, the speaker, by using ideophones or direct speech, raises the illusion that the verbalized event or situation happens simultaneously with the moment of its production/pronunciation. In this perspective, speech level and event level are identical (cf. von Roncador, 1988, who terms this kind of level shift 'Referenzverschiebung' ('shift of reference') and bases his definition of direct speech on it). Ideophones and direct speech differ only insofar as direct speech quotes a speech act, whereas ideophones report an extralinguistic event like a sound, a smell, a taste, a visual impression, a movement, or a psychic emotion. Due to the striking formal and functional parallels of direct speech with ideophonic clauses, the latter are even defined as a special form of direct speech; cf. Voeltz (1971)


and Kunene (1978) for Bantu languages and, more generally, von Roncador (1988: 125).

Like utterances of direct speech, ideophones are either obligatorily or optionally introduced into the context by a verbum dicendi, as demonstrated in ex. (1) from Southern Sotho, and/or by a complementizer, as in ex. (2) from Tamil.

Southern Sotho
(1a) bá thóla bá-re tú.
     they kept.quiet they.say/do tú
Or:
(1b) bá thóla tú.
     they kept.quiet tú
     'They kept quiet – (they kept) dead-quiet' (Kunene, 1978: 29).

Tamil
(2) Ava kupu kupu-ṇṇu aẓutaa.
    she kupu kupu-QUOT weep.PAST.3sg.F
    'She wept with a sobbing sound' (Asher, 1982: 242).

Alternatively, a small set of so-called dummy verbs (cf. Childs, 1994: 187f) like 'do,' 'give,' 'go,' 'have,' and 'be/become,' as in English The cork cried/went pop (Oswalt, 1994: 302), or a perception verb like 'hear,' 'see,' 'smell,' 'taste,' etc., as in the Zulu ex. (3), is used to introduce ideophones into the context. Another way to embed the ideophonic utterance, similar to direct speech in a narrow sense, is by an introduction that is shortened to an agent, as in examples (4) and (5).

Zulu
(3) Niyamuka mfo.
    you.PRES.smell of.evil.smell
    'You smell bad' (Voeltz, 1971: 147).

Gbaya
(4) Wanto záFáFá.
    Wanto adamant
    'Wanto was adamant' (Noss, 1986: 253).

Yir-Yoront (Yir Yoront)
(5) Ngoyo kot!
    'I [spread it], kat!' (Alpher, 1994: 168).

As with a clause of direct speech, grammatical relations between an ideophonic clause and other constituents of the descriptive context are not expressed morphologically. Ideophones are syntactically aloof and invariable. The interaction of ideophones and descriptive context takes place on the narration level, where the descriptive text is interrupted by ideophonic utterances. The descriptive context can be understood as the frame wherein the ideophonic event is realized.

Ideophones may be integrated into the context in two ways. In the first type of integration, they may replace a descriptive sentence or clause. In this context, ideophones are completely isolated, as demonstrated in ex. (6), taken from the Adamawa-Ubangi language Baka with a series of ideophones, or they form one clause in a series of clauses, as in the English example Our eyes met. Zing. Cupid! (Oswalt, 1994: 302).

Baka
(6) wòàwòàwòàwòà  'the hunters are discussing'
    pɔ́ɔ́  'the chimpanzee interrupts eating'
    kung  'a spear strikes the chimpanzee'
    wóoò  'the chimpanzee falls down'
    pao  'the chimpanzee, falling hard, breaks a branch'
    tung  'the chimpanzee arrives on the ground'
    (Kilian-Hatz, 1999: 29).

In the second type of integration into the context, the ideophone accompanies a verb, adverb, adjective, or even a clause, like a nonverbal gesture, by either anticipating it, as in (7a), or by being a paraphrase in apposition to it, as in (7b) and in the German ex. (8).

Gbaya
(7a) Rut, a yeé kɔ.
     'Flash, it entered a hole' (Noss, 1986: 251).
(7b) A yeé kɔ rut.
     'It entered a hole rut (like a flash)' or 'It entered a hole – flash' (ibid.).

German
(8) Da knallt —peng— die Tür.
    there bang.3sg.PRES bang the door
    'There the door bangs: Bang.'

Languages differ typologically in that ideophones (a) may be used preferentially with, or may even be restricted to accompanying, only one kind of phrase (e.g., a noun phrase or a verbal phrase), or (b) may replace or accompany more than one or even all kinds of phrases and clauses, as is the case in most Bantu languages. From this perspective, ideophones that are categorized as a subclass of adverbs in the formal approach, e.g., in Yagua (Payne and Payne, 1990: 457), may be interpreted as ideophonic clauses that are paraphrases in apposition to, or anticipations of, a verbal state or event.

Unlike descriptive words, ideophonic utterances consist of one word only, wherein the whole information is bundled synthetically. Their special performative function of verbalized dramatization is achieved by the inherent sound-symbolic nature of ideophones.


Information is encoded in language-specific, sound-symbolic submorphemic units that may consist of a syllable, a sound, or a tone, and that follow each other iconically according to the sequence of the single components of an event (cf. Weakley, 1977). According to Rhodes (1994), such submorphemes may be phonetically realized in either a 'wild' or a merely 'tame' manner, by means of a speaker-dependent preference to express only one or more than one nuance of an event in one submorpheme. This realization may lead to phonological anomalies. It is noteworthy here that only the so-called 'tame' ideophones are prone to change word class and may be derived into nouns or verbs, or vice versa.

See also: Connotation; Metaphor and Conceptual Blending; Metonymy; Sound Symbolism; Synesthesia; Synesthesia and Language.

Bibliography

Alpher B (1994). 'Yir-Yoront ideophones.' In Hinton L, Nichols J & Ohala J (eds.) Sound symbolism. Cambridge: Cambridge University Press. 161–177.
Amha A (2001). 'Ideophones and compound verbs in Wolaitta.' In Voeltz F K E & Kilian-Hatz C (eds.) Ideophones (Typological Studies in Language 44). Amsterdam/Philadelphia: John Benjamins. 49–62.
Asher R E (1982). Tamil (Lingua Descriptive Studies 7). Amsterdam: North-Holland Publishing Company.
Childs T G (1987). 'A prototype definition for ideophones.' Paper presented at the Eighteenth Annual Conference on African Linguistics (April 23–26), University of Quebec, Montreal, Canada.
Childs T G (1994). 'African ideophones.' In Hinton L, Nichols J & Ohala J (eds.) Sound symbolism. Cambridge: Cambridge University Press. 178–204.
Diffloth G (1976). 'Expressives in Semai.' In Austroasiatic Studies 1 (Oceanic Linguistics Special Publication 13). Honolulu: University of Hawaii. 249–264.
Doke C (1935). Bantu linguistic terminology. London: Longmans.
Dumestre G (1981). 'Idéophones et adverbes expressifs en Bambara.' Afrique et Langage 15, 20–30.
Fortune G (1962). Ideophones in Shona: an inaugural lecture. London: Oxford University Press.
Kakumasu J (1986). 'Urubu-Kaapor.' In Derbyshire D C & Pullum G K (eds.) Handbook of Amazonian languages (vol. 1). Berlin: Mouton de Gruyter. 326–403.
Kilian-Hatz C (1999). Ideophone: Eine typologische Untersuchung unter besonderer Berücksichtigung afrikanischer Sprachen. Habilitationsschrift. Köln: Universität zu Köln.
Kita S (1997). 'Two-dimensional semiotic analysis of Japanese mimetics.' Linguistics 35(2), 379–415.
Kock I (1985). 'The speech act theory: a preliminary investigation.' South African Journal of African Languages 5, 49–53.
Kunene D P (1978). The ideophone in Southern Sotho (Marburger Studien zur Afrika- und Asienkunde, Serie A: Afrika, Band 11). Berlin: Dietrich Reimer.
Marivate C T D (1985). 'The ideophone as a syntactic category in the Southern Bantu languages.' Studies in African Linguistics, Suppl. 9, 210–214.
Newman P (1968). 'Ideophones from a syntactic point of view.' Journal of West African Languages 5, 107–117.
Noss P A (1986). 'The ideophone in Gbaya syntax.' Current Approaches in African Linguistics 3, 241–255.
Nuckolls J B (1996). Sounds like life: sound symbolic grammar, performance and cognition in Pastaza Quechua. New York: Oxford University Press.
Oswalt R L (1994). 'Inanimate imitations in English.' In Hinton L, Nichols J & Ohala J (eds.) Sound symbolism. Cambridge: Cambridge University Press. 293–324.
Payne D L & Payne T E (1990). 'Yagua.' In Derbyshire D C & Pullum G K (eds.) Handbook of Amazonian languages (vol. 2). Berlin: Mouton de Gruyter. 249–474.
Péli G (1991/92). Les idéophones en Gbaya-Bogoto. Mémoire de maîtrise, Université de Bangui, République Centrafricaine.
Rhodes R (1994). 'Aural images.' In Hinton L, Nichols J & Ohala J (eds.) Sound symbolism. Cambridge: Cambridge University Press. 276–292.
Roncador M von (1988). Zwischen direkter und indirekter Rede: Nichtwörtliche direkte Rede, logophorische Konstruktionen und Verwandtes (Linguistische Arbeiten 192). Tübingen: Niemeyer.
Samarin W J (1971). 'Survey of Bantu ideophones.' African Language Studies 12, 131–168.
Tosco M (1998). 'Somali ideophones.' Journal of African Cultural Studies 11(2), 125–156.
Voeltz E F K (1971). 'Towards the syntax of the ideophone in Zulu.' In Kim C (ed.) Papers in African linguistics. Edmonton, Alberta: Linguistic Research Inc. 141–152.
Voeltz E F K & Kilian-Hatz C (eds.) (2001). Ideophones (Typological Studies in Language 44). Amsterdam/Philadelphia: John Benjamins.
Weakley A J (1973). An introduction to Xhosa ideophone derivation and syntax (1977 printing). Grahamstown: Rhodes University, Department of African Languages, Communication 2.
Westermann D (1927). 'Laut, Ton und Sinn in westafrikanischen Sudansprachen.' In Boas F (ed.) Festschrift Meinhof. Glückstadt/Hamburg: J. J. Augustin. 315–328.
Westermann D (1937). 'Laut und Sinn in einigen westafrikanischen Sprachen.' Archiv für die gesamte Phonetik 1, 154–172, 193–212.


Idioms

J Ayto, London, UK

© 2006 Elsevier Ltd. All rights reserved.

The term idiom may be defined as an institutionalized multiword construction, the meaning of which cannot be fully deduced from the meaning of its constituent words, and which may be regarded as a self-contained lexical item. For example, in Modern English the expression haul over the coals makes little sense if each word is interpreted separately and literally; it has to be decoded as a single semantic unit: ‘to admonish severely.’ Beneath this broad definition are grouped a large number of different constructions that inhabit intersecting spectra of semantic opacity, compositional fixity, and syntactic function.

Semantic Opacity

At one extreme are phrases in which each word defies literal understanding: cut the mustard 'to come up to the expected standard', eat crow 'to admit humiliatingly that one was wrong', kick the bucket 'to die'. Such idioms may contain fossilized words that have no independent existence in Modern English: for example pig in a poke 'a purchase which turns out not to be what the vendor claimed', where poke is an old word for a bag or sack.

Some fixed phrases may contain elements used in their literal sense. In get down to brass tacks 'to start frankly discussing the essentials of a matter', for instance, get down to is broadly speaking being used as it would be in a (small) range of other collocations (for example, get down to business). Such elements may be variable (for example, know the ropes, show someone the ropes, where the ropes are 'the special methods or procedures'). In some cases, all the main word elements have their literal meaning, and it is only the particular combination in which they appear that confers a meaning beyond the sum of the parts: bread and butter, for example, is 'bread spread with butter' (as a fully metaphoricized idiom it can also mean 'a source of income').

Many fixed phrases have a meaning that could not be described as literal (perhaps because their genesis was obviously metaphorical, or because they preserve a usage no longer current in the language) but which nevertheless yield fairly readily to interpretation: behind the times 'old-fashioned', daylight robbery 'a sale at an extortionate price', the talk of the town 'a subject widely discussed or gossiped about'.

At the other extreme of the meaning spectrum are institutionalized phrases that are completely semantically transparent: beneath contempt, from bad to worse, go wrong. Within this category come many clichés and also so-called 'freezes' (Fenk-Oczlon, 1989), in which pairs of words are fixed in a particular order (knives and forks, friends and neighbors). Their compositional fixity allies them with idioms, but most linguists would exclude them from full membership of the category because of their semantic transparency. Combinations of this sort shade into collocations, in which the choice of words to express another word's lexical or grammatical relationships is severely restricted (afraid of, arrive at/come to/reach a decision).

The closer to the opaque end of the spectrum a multiword construction is, the more likely it is to be regarded as a fully fledged idiom, but assignment to a particular category may depend on the delicacy of judgment applied to the semantics of a particular combination. As we have seen, bread and butter, which from a formal point of view is a freeze, is more than the sum of its semantic parts; and some might claim that the same is true of, for example, knives and forks, the combination of which narrows down the interpretation of its constituents to eating implements.

Many compound nouns satisfy the criteria of semantic opacity applied above. For instance, the semantic force of green in green room 'room for performers when not on stage' is not readily deducible (it probably comes originally from the painting of the room green to rest the artists' eyes after the glare of the limelight). However, such compounds are generally not regarded as idioms unless the complete lexical item is metaphoricized, for example, blue blood 'noble or royal ancestry', dark horse 'a secretive or little-known person who does something unexpectedly remarkable', knight in shining armor 'someone who comes bravely to the rescue'. The referent of green room is literally a type of room, so the term does not qualify as an idiom.

Grammatical and Compositional Fixity

Most idioms that function as verbs or nouns participate in the inflectional variations normal for their word class: verbs, for example, can be marked for a particular person or tense (She has let the cat out of the bag), and nouns can be pluralized (bears with sore heads). Beyond this, however, many idioms are subject to a range of grammatical restrictions, and are


capable, to a greater or lesser degree, of being altered or added to or having their word order changed.

The most firmly fixed verbal idioms resist passivization and other standard transformations: Fred kicked the bucket (= died) is well-formed; *The bucket was kicked by Fred and *It was the bucket that Fred kicked are not. Others are not so restricted syntactically: I'm used to having my leg pulled (= being teased) is no less acceptable than You're pulling my leg! Fixed idioms do not allow insertions or alterations: *call it another day would not be an acceptable variation on call it a day 'to finish what one is doing'. On the other hand, leave no stone unturned 'to make every possible effort' could legitimately be expanded to leave no legal stone unturned, and You can't keep a good man down could be recast as, for example, You can't keep a good politician down. Kick the bucket cannot become *kick the pail, nor (except in language play) can spill the beans 'to reveal secrets' be *spill your beans, but Your eyes are bigger than your stomach/tummy/belly are all equally acceptable.

Transitive verbal idioms may have a vacant slot for a direct object (sweep (someone) off their feet 'to overwhelm someone suddenly by inspiring a strong emotion of love within them for you'), an indirect object (give (someone) a piece of one's mind 'to scold someone angrily'), or a prepositional object (clap eyes on (someone) 'to see someone').

At the most 'fluid' end of the spectrum are constructions such as what is X doing Y?, seeking elucidation of an incongruous situation (as in What's that muddy boot doing on the table?), and V + obj + away, denoting the using up of time by a particular activity (as in They danced the night away) (Kay and Fillmore, 1999; Jackendoff, 1997). In idioms of this sort, the overall meaning appears to be determined more by the syntactic structure than by any semantic properties of the fixed elements, and they have been termed 'constructional idioms'. (Idioms of this sort, midway between completely fixed idioms, which have to be interpreted as indivisible units, and ordinary non-idiomatic combinations, which are interpreted according to fully productive grammatical rules, require a new type of grammar to elucidate them. An approach termed 'construction grammar' (Kay and Fillmore, 1999; Jackendoff, 2002: 181) has been proposed, which deploys a set of extra, bolt-on rules, beyond the general grammar of the language, to deal with these semi-idioms.)

Established norms of fixity are always liable to be set aside by creative language users. Once the basic models are in existence, it is perfectly feasible that such utterances could be produced as 'There'll be no bucket-kicking (= dying) around here yet'

(violating syntactic fixity) or 'Never darken my patio again' (perhaps said to an unwelcome barbecue guest; replacing the expected door with patio and thus violating compositional fixity). A related but subconscious phenomenon is the splicing of two or more idioms together, as in count your lucky stars, combining count one's blessings and thank one's lucky stars, both invoking gratitude for good fortune, and Don't burn your bridges until you've come to them, combining burn one's bridges 'to take an irrevocable step' with cross one's bridges before one has come to them 'to act prematurely'. The resulting blend (often superficially plausible, but, if not involving nearly synonymous idioms, semantically incongruous when more closely examined) is related to the 'mixed metaphor' problem.

Diachronically, idioms are no more fixed than any other element of the lexicon. Constituent words may change over time (have something at one's fingers' ends 'to be very familiar with or skilled at something' has largely been replaced in Modern English by have something at one's fingertips). And semantic change, which is inherent in the initial process of metaphoricization (jump the gun 'to anticipate the starting pistol in a race' > 'to act precipitately'), may continue to operate: beg the question, for instance, originally meant 'to assume something is true without showing any proof', but it is now frequently used for 'to avoid giving a direct answer to a question' and also for 'to seem to invite an obvious question'.

Syntactic Function

Idioms occupy a wide range of syntactic roles, from membership in individual word classes to predicates and entire sentences.

Verbs

Idiomatic verb phrases function syntactically as verbs in a sentence. Their internal structure is commonly V + O, with or without further elements: change hands 'to be exchanged', stick one's neck out 'to take a great risk', clap eyes on 'to see', make heavy weather of 'to perform (a task) too laboriously', rattle (someone's) cage 'to agitate someone'. Idiomatic combinations of verb + particle (phrasal verbs) are usually categorized as idioms, too: shut up 'to stop speaking', take in 'to deceive', back down 'to resile from a previous position', play away 'to have an extramarital sexual relationship', root for 'to support enthusiastically', put up with 'to tolerate'. Combinations with other adverbials are also frequent: go (someone's) way 'to behave in the way someone wants', cut both ways 'to have an equal effect on both parties', go west 'to die or be broken', live in sin 'to cohabit sexually


when not married'. Other fairly frequent patterns are a verb with the dummy object it: lump it 'to put up with a disagreeable state of affairs'; and combinations of two or more verbs with a conjunctive: pick and choose 'to be excessively selective'. Verbal idioms can frequently function as the entire predicate of a sentence: [He] threw in the towel (= gave up); [The bridge] blew up (= was destroyed by explosion).

Nouns

Nominal idioms may be formed by premodification of a noun: the hot seat 'a position of uncomfortable difficulties', salad days 'time of youthful inexperience', monkey business 'dishonest or suspicious activities', the bum's rush 'abrupt ejection or dismissal'; by postmodification: fish out of water 'someone in an uncomfortably unfamiliar or inappropriate situation', salt of the earth 'someone very honest, dependable, etc.', manna from heaven 'an unexpected source of relief'; or by conjunction of two or more nouns: sackcloth and ashes 'extreme contrition', someone's pride and joy 'someone or something regarded with particular affection and pride', meat and drink 'a main source of pleasure, encouragement, etc.', any Tom, Dick, or Harry 'any unspecified ordinary person'. A high proportion of nominal idioms are evaluative, quasi-adjectival, and as such generally appear as predicates: 'Tom was a real tower of strength (= was very supportive) through those difficult times.' 'That exam was a piece of cake' (= was very easy).

Adjectives

Adjectival idioms may be formed by premodification of an adjective: brand new 'completely new', dirt poor 'extremely poor'; by postmodification: dyed-in-the-wool 'inveterate', holier-than-thou 'sanctimonious', wet behind the ears 'inexperienced'; and by conjunction: hot and bothered 'excited and annoyed', tired and emotional 'drunk', spick and span 'very clean and tidy', footloose and fancy-free 'free from entanglements (mainly physical freedom)'. Similes, which function adjectivally, can be categorized with idioms if their meaning is not transparent: bright as a button 'lively and clever', crazy like a fox 'astute'. Adjectival idioms based on prepositional phrases are by their nature usually found in predicates: 'I'm feeling rather under the weather' (= unwell), 'They're in cahoots' (= collaborating), 'You're out of your mind' (= insane), 'He's over the hill' (= past his best age); but prenominal use is not ruled out: 'an in-your-face (= confrontational) attitude', 'an out-of-touch (= unaware of current developments) prime minister'.

Adverbs

Many adverbial idioms are compositionally similar to adjectival idioms: by and large 'generally speaking', on and off 'irregularly', once or twice 'a few times', by the skin of one's teeth 'by a very narrow margin', from A to Z 'completely, from beginning to end', on paper 'theoretically'. Other types occur, though: all along 'from the beginning', ever so 'extremely', no end 'greatly'.

Others

Idioms can also function as prepositions (in view of 'after taking (something) into consideration', by dint of 'by means of', to the tune of 'in the amount of', over and above 'in addition to') and conjunctives (not to mention 'and also', as long as 'on condition that'). Idioms can also constitute complete utterances or sentences: (Well,) I never 'I am surprised'. Many are jussive: Never mind 'Do not concern yourself', God forbid 'I hope that will not happen'; or are used interjectionally: Big deal! (used to disparage an exaggerated claim), Not on your life! 'Certainly not!', So there! (used as a verbal riposte). Standardized (or clichéd) sayings (Great minds think alike, There's one born every minute) merge into fully-fledged proverbs (Many hands make light work). More-or-less buried literary allusions (There's the rub – identifying a drawback; originally from Shakespeare's Hamlet) make a significant contribution to idioms of this sort.

Clichés

The term cliché implies a value judgment, of overuse and consequent staleness. It denotes an institutionalized expression, of greater or lesser semantic transparency, which has been heard and seen so often that it strikes one as annoyingly hackneyed, and condemns its user as unoriginal. Membership of the category is adjudicated subjectively, and so will differ from person to person, but there are usually several high-profile phrases in circulation that most people would agree to stigmatize: for example, at the end of the day 'when everything has been taken into consideration', the name of the game 'the main idea or intention', in this day and age 'now'. The desirability of originality is a comparatively recent phenomenon: before the 19th century, the use of stock expressions was not seen as worthy of condemnation, and cliché in its present-day usage is not recorded in English before the 1890s.

See also: Collocations; Componential Analysis; Compositionality; Context; Context and Common Ground; Definition in Lexicology; Dictionaries; Jargon; Lexicon: Structure.


Bibliography

Cowie A P & Mackin R (1975). Oxford dictionary of current idiomatic English, vol. 1: Verbs with prepositions and particles. Oxford: Oxford University Press.
Cowie A P, Mackin R & McCaig I R (1983). Oxford dictionary of current idiomatic English, vol. 2: Phrase, clause, and sentence idioms. Oxford: Oxford University Press.
Fenk-Oczlon G (1989). 'Word frequency and word order in freezes.' Linguistics 27, 517–556.
Jackendoff R (1997). 'Twistin' the night away.' Language 73, 534–560.
Jackendoff R (2002). Foundations of language. Oxford: Oxford University Press.

Kay P & Fillmore C J (1999). ‘Grammatical constructions and linguistic generalizations: the What’s X doing Y? construction.’ Language 75, 1–33. Kirkpatrick B (1996). Dictionary of cliches. London: Bloomsbury. Sinclair J (ed.) (1995, 2002). Collins COBUILD dictionary of idioms. London: HarperCollins. Stern K (ed.) (1998). Longman idioms dictionary. London: Longman. Walter E (ed.) (1998). Cambridge international dictionary of idioms. Cambridge: Cambridge University Press.

Implicature

J Meibauer, Universität Mainz, Mainz, Germany

© 2006 Elsevier Ltd. All rights reserved.

The Basic Notions

The term 'implicature' goes back to the philosopher Paul Grice, as laid down in his seminal article 'Logic and Conversation' (Grice, 1989), which is the published version of a part of his William James lectures held in 1967 at Harvard University. In Grice's approach, both 'what is implicated' and 'what is said' are part of speaker meaning. 'What is said' is that part of meaning that is determined by truth-conditional semantics, while 'what is implicated' is that part of meaning that cannot be captured by truth conditions and therefore belongs to pragmatics. Several types of implicature are distinguished. Figure 1 shows the Gricean typology of speaker meaning (cf. Levinson, 1983: 131).

The most widely accepted type of implicature is the conversational implicature. According to Grice, it comes in two ways, generalized conversational implicature (GCI) and particularized conversational implicature (PCI). The following example from Levinson (2000: 16–17) illustrates this distinction:

Figure 1 Gricean typology of speaker meaning.

Context 1
Speaker A: What time is it?
Speaker B: Some of the guests are already leaving.
PCI: 'It must be late.'
GCI: 'Not all of the guests are already leaving.'

Context 2
Speaker A: Where's John?
Speaker B: Some of the guests are already leaving.
PCI: 'Perhaps John has already left.'
GCI: 'Not all of the guests are already leaving.'

Because the implicature ('... not all ...') triggered by some arises in both contexts, it is relatively context-independent. Relative context-independence is the most prominent property of GCIs. In addition, GCIs are normally, or even consistently, associated with certain linguistic forms. For example, if someone utters Peter is meeting a woman this evening, it is, because of the indefinite article, standardly implicated that the woman is not his wife, close relative, etc. (cf. Grice, 1989: 37; Hawkins, 1991). In contrast to GCIs, PCIs are highly context-dependent, and they are not consistently associated with any linguistic form.

The distinction between conversational implicatures and conventional implicatures draws on the observation that in coordinations like Anna is rich but she is happy, the truth conditions are just the truth conditions of the coordination Anna is rich and she is happy, with the exception of the contrastive meaning of but. This meaning is not truth-functional, and it is not context-dependent either; hence, there is some motivation for assuming the category of conventional implicature. Note that there may be further types of implicature, e.g., implicatures of politeness or style, that are


neither conventional nor conversational (cf. Leech, 1983; Brown and Levinson, 1987).

Conversational implicatures come about by the exploitation (apparent flouting) or observation of the cooperative principle (CP) and a set of maxims (Grice, 1989) (see Cooperative Principle):

Cooperative Principle
Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

Maxim of Quantity
1. Make your contribution as informative as is required (for the current purposes of the exchange).
2. Do not make your contribution more informative than is required.

Maxim of Quality
Try to make your contribution one that is true.
1. Do not say what you believe to be false.
2. Do not say that for which you lack adequate evidence.

Maxim of Relevance
Be relevant.

Maxim of Manner
Be perspicuous.
1. Avoid obscurity of expression.
2. Avoid ambiguity.
3. Be brief (avoid unnecessary prolixity).
4. Be orderly.

These maxims and submaxims are conceived as rules of rational behavior, not as ethical norms. They figure prominently in the derivation of an implicature. The basic idea of such a derivation is best illustrated with a simple dialogue. Imagine that I ask my colleague Is Markus there? and she answers There is a pink Porsche behind the library building. Understood literally, such an answer does not make any sense. However, as I assume that my colleague is cooperative, and remembering that Markus drives a pink Porsche, I can figure out that Markus is in the library. In working out this information, I have made use of the assumption that my colleague's answer has been relevant with regard to my question. Thus, conversational implicatures display the property of calculability. A general scheme for the working out of a conversational implicature is given by Grice (1989: 30–31):

A man who, by (in, when) saying (or making as if to say) that p has implicated that q, may be said to have conversationally implicated that q, provided that (1) he is to be presumed to be observing the conversational maxims, or at least the Cooperative Principle; (2) the supposition that he is aware that, or thinks that, q is required in order to make his saying or making as if to say p (or doing so in those terms) consistent with this presumption; and (3) the speaker thinks (and would expect the hearer to think that the speaker thinks) that it is within the competence of the hearer to work out, or grasp intuitively, that the supposition in (2) is required.

Table 1 Typical cases of implicature

Maxims    | Exploitation                            | Observation
Quantity  | Tautology (1)                           | Scalar implicature (2)
Quality   | Irony, metaphor, sarcasm (3)            | Belief implicature in assertions (4)
Relevance | Implicatures due to thematic switch (5) | Bridging (6)
Manner    | Implicatures due to obscurity, etc. (7) | Conjunction buttressing (8)

Table 1 lists some of the most typical cases covered by the CP and the maxims. Examples for each case are given below the table. For further classical examples, see Grice (1989) and Levinson (1983). In what follows, '+>' stands for 'implicates conversationally':

(1) War is war. +> 'There is nothing one can do about it.'
(2) Some men were drunk. +> 'Not all of them were drunk.'
(3a) He is a fine friend. +> 'He is not a fine friend.'
(3b) You are the cream in my coffee. +> 'You are my best friend.'
(4) There is life on Mars. +> 'Speaker believes that there is life on Mars.'
(5) Speaker A: I'm out of petrol. Speaker B: There is a garage round the corner. +> 'The garage is open.'
(6) Speaker A: Look, that old spinster over there! Speaker B: Nice weather today, isn't it? +> 'No comment.'
(7) She produced a series of noises that resembled 'Sì, mi chiamano Mimì'. +> 'Her singing was a complete disaster.'
(8) Anna went to the shop and bought jeans. +> 'She bought the jeans in the shop.'

For further illustration of the exploitation/observation dichotomy, look at (1) and (8). As to (1), tautological utterances are always true, which amounts to their being fundamentally uninformative. There is no situation where a speaker wants to tell someone that something is identical with itself. Thus, it seems that the utterer of (1) has violated the first maxim of Quantity. Gricean reasoning then leads the hearer to the insight that this violation was only apparent (cf. Autenrieth, 1997). In (8), we have a


simple conjunction of two sentences. If the meaning of and were the same as the meaning of the logical operator, it could not be explained that there is an additional meaning 'and then.' Grice's view is that we may identify the semantic meaning of and with the pure connecting operation known from logic as long as we are able to derive the additional meaning from the maxims. The observation of the fourth maxim of Manner, 'Be orderly!', will do this job (cf. Posner, 1980). Both observation and exploitation are in line with the general pattern for working out an implicature.

Besides the property of calculability, conversational implicatures display the properties of variability and cancellability. Variability means that there are contexts where the speaker utters the same utterance, but the respective implicature does not arise. Thus, the implicature is dependent on the specific context in which it arises. (This does not exclude the notion of relative context-independence in the case of GCIs.) Cancellability (or defeasibility) means that it is possible to withdraw an implicature within the situation of utterance without any contradiction. For example, it is possible to utter Some men were drunk, indeed all. Conversely, conversational implicatures should be reinforceable, as Sadock (1978) proposed. Thus, it is possible to conjoin the content of an implicature with the utterance that triggers that implicature, as in Some of the girls were reading books but not all.

Conventional implicatures are neither calculable, nor variable, nor cancellable. However, they are said to be detachable, i.e., if the elements that trigger them are replaced, the respective implicature does not arise. By contrast, conversational implicatures are nondetachable, i.e., if there is an expression X′ that shares meaning with expression X that triggers the implicature, the same implicature should arise. For example, if She is very beautiful gives rise to an ironical implicature, then She is a real beauty should have the same effect (Sadock, 1978: 287). (An obvious exception to this are Manner implicatures.)

For further illustration, consider focus particles like even. An utterance such as Even JOHN drives a Porsche has the same truth conditions as the corresponding utterance without the focus particle, i.e., John drives a Porsche. The additional meaning of the type 'John is the least likely to drive a Porsche,' being related to a contextually given set of other individuals (e.g., Gustav, Bettina, Markus ...), may be considered a conventional implicature (cf. König, 1991), because this meaning appears to be neither truth-conditional nor context-dependent. Moreover, if even is replaced by another focus particle, the respective implicature is not triggered.

However, if the conventional implicature is bound to the specific lexical item even, and for this reason is detachable, then the implicature seems to be part of the literal meaning of this lexical item. Therefore, it is difficult to distinguish between conventional implicatures on the one hand and entailments (belonging to the ‘what is said’) on the other hand. For this and other reasons, some researchers do not accept that there is a category of conventional implicature (cf. Bach, 1999; for a logical approach, see Potts, 2005).

Beyond Grice

The reception of the Gricean framework has been largely dominated by the wish to develop a more systematic architecture of maxims. Moreover, the Cooperative Principle has been on trial, as other aspects (e.g., logical, anthropological, cognitive, etc.) became more attractive. The prevailing tendency has been to reduce the set of maxims proposed by Grice. Three major reductive approaches have been developed: (a) the tri-heuristic approach by Levinson (2000), (b) the dual-principle approach by Horn (1984), and (c) the mono-principled approach by Sperber and Wilson (1995) and Carston (2002). These approaches are outlined in the following sections. It should be mentioned, however, that there are other important approaches that elaborate on the Gricean framework, e.g., Gazdar (1979) or Atlas (2005), as well as radical criticisms such as Davis (1998). For useful surveys, see Levinson (1983: Ch. 3) and Rolf (1994).

Presumptive Meanings: Levinson's Theory of Generalized Conversational Implicature

Levinson develops his revision of Grice's maxims from three heuristics that follow from the anthropological need to overcome the "fundamental bottleneck in the efficiency of human communication, occasioned no doubt by absolute physiological constraints on the articulators" (Levinson, 2000: 28). Accordingly, Grice's rationalistic CP plays no role. The heuristics are (Levinson, 2000: 31–33):

Levinson's Heuristics
Heuristic 1: What isn't said, isn't.
Heuristic 2: What is simply described, is stereotypically exemplified.
Heuristic 3: What's said in an abnormal way, isn't normal; or: Marked message indicates marked situation.

Heuristic 1 corresponds to Levinson's Q-principle (see maxim of Quantity 1 in Grice's framework), Heuristic 2 to Levinson's I-principle (Grice's maxim of Quantity 2), and Heuristic 3 to Levinson's M-principle (Grice's maxims of Manner 1 and 3). These three principles are said to derive GCIs. For the correspondences to Grice, and a typical example, see Table 2.

Table 2 Correspondences between Levinson's Heuristics and Principles, and Grice's Maxims

Heuristics  | Principles  | Grice's Maxims
Heuristic 1 | Q-Principle | Quantity, 1
    Example (Q-Implicature): (a) Some colleagues were drunk. +> 'Not all of them were drunk.' (scalar implicature); (b) The doctor believes that the patient will not recover. +> 'The doctor may or may not know that the patient will not recover.' (clausal implicature)
Heuristic 2 | I-Principle | Quantity, 2
    Example (I-Implicature): Anna turned the switch and the motor started. +> 'Anna turned the switch and then/therefore the motor started.' (conjunction buttressing)
Heuristic 3 | M-Principle | Manner, 1 and 3
    Example (M-Implicature): Bill caused the car to stop. (vs. Bill stopped the car.) +> 'He did this indirectly, not in the normal way, e.g., by use of the emergency brake.' (periphrasis)

Where inconsistent implicatures arise, they are "systematically resolved by an ordered set of priorities" (Levinson, 2000: 39), among them Q > M > I, where '>' is understood as 'defeats inconsistency.' Levinson (2000: 153–164) gives some examples for Q > I, Q > M, and M > I. An example for Q > M is It's not unlikely that Giant Stride will win the Derby, and indeed I think it is likely. Here, as Levinson (2000: 160) points out, the first conjunct gives rise to the M-based implicature 'less than fully likely,' because of the double negative not unlikely, while the second conjunct triggers the Q-based implicature 'it is possible it is likely,' because of the use of think, which does not entail the complement clause. In this case, the Q-implicature of the second conjunct defeats the M-implicature of the first. (However, as Traugott, 2004: 11 observes, indeed may serve as an M-implicature cancelling device.)

The Q-principle is defined as follows (Levinson, 2000: 76):

Q-principle
Speaker's maxim: Do not provide a statement that is informationally weaker than your knowledge of the world allows, unless providing an informationally stronger statement would contravene the I-principle. Specifically, select the informationally strongest paradigmatic alternate that is consistent with the facts.
Recipient's corollary: Take it that the speaker made the strongest statement consistent with what he knows, and therefore that:
a. if the speaker asserts A(W), where A is a sentence frame and W an informationally weaker expression than S, and the contrastive expressions form a Horn scale (in the prototype case such that A(S) entails A(W)), then one can infer that the speaker knows that the stronger statement A(S) (with S substituted for W) would be false [...]
b. if the speaker asserted A(W) and A(W) fails to entail an embedded sentence Q, which a stronger statement A(S) would entail, and {S, W} form a contrast set,

then one can infer that the speaker does not know whether Q obtains or not (i.e., {P(Q), P¬(Q)}, read as 'it is epistemically possible that Q and epistemically possible that not-Q').

The I-principle mentioned in the Speaker's maxim requires that a speaker should not be more informative than necessary (see below). Wherever it is possible, the speaker should build on stereotypical assumptions. In the Recipient's corollary, two cases are distinguished, namely scalar implicature, involving Horn scales (named after Laurence Horn, see the next section), and clausal implicature, involving contrast sets.

In the case of scalar implicatures, we need a Horn scale: given a scale with p as an informationally weak and q as an informationally strong element, the assertion of p implicates the negation of q. In such cases, the speaker is supposed to be as informative as possible, thus observing the Q-principle (or the maxim of Quantity). Therefore, the speaker could not say more than he actually did, and this means that the stronger statement does not hold. A classical example is the utterance p = Some colleagues were drunk implicating q = 'Not all of them were drunk'.

In the case of clausal implicatures, we need contrast sets. Let {know, believe} be a contrast set. Then p = The doctor believes that the patient will not recover implicates q1 = 'The doctor may or may not know that the patient will not recover' (Levinson, 2000: 110). The crucial point is that clausal implicatures indicate epistemic uncertainty about the truth of the embedded sentence. Note that, because <know, believe> also form a Horn scale, there is a scalar implicature as well: in this case p implicates q2 = 'The doctor does not know that the patient will not recover.'

Well-known Horn scales include the quantifiers <all, some>, connectives <and, or>, modals <necessarily, possibly> and <must, may>, adverbs <always, sometimes>, degree adjectives <hot, warm>, and verbs <know, believe> and <love, like>. Contrast sets include verbal doublets like {know, believe}, {realize, think}, {reveal, claim}, {predict, foresee}, and others (cf. Levinson, 2000: 111).
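These two Q-based inferences are mechanical enough to be illustrated with a small computational sketch. The following Python fragment is a toy illustration only, not part of Levinson's own formalization; its scale and set inventories simply repeat the examples cited above:

```python
# Toy illustration of Q-based inference (not Levinson's own formalization).
# Horn scales are ordered pairs <strong, weak>: asserting the weak member
# scalar-implicates the negation of the strong one. Asserting the weak
# member of a contrast set {S, W} clausal-implicates that the speaker
# does not know whether the embedded clause Q holds.

HORN_SCALES = [("all", "some"), ("and", "or"), ("must", "may"),
               ("always", "sometimes"), ("hot", "warm"), ("know", "believe")]
CONTRAST_SETS = [("know", "believe"), ("realize", "think"),
                 ("reveal", "claim"), ("predict", "foresee")]

def scalar_implicatures(asserted):
    """E.g., 'some' +> 'not all'; 'believe' +> 'not know'."""
    return [f"not {strong}" for strong, weak in HORN_SCALES if weak == asserted]

def clausal_implicatures(asserted):
    """E.g., 'believes Q' +> speaker does not know whether Q."""
    return [f"P(Q) and P(not-Q): speaker does not know whether Q (vs. '{strong}')"
            for strong, weak in CONTRAST_SETS if weak == asserted]

print(scalar_implicatures("some"))      # ['not all']
print(scalar_implicatures("believe"))   # ['not know']
print(clausal_implicatures("believe"))  # one clausal implicature, via {know, believe}
```

As the last two calls show, a single assertion of believe yields both a scalar and a clausal implicature, exactly as in the doctor example above.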

Now consider the I-principle (Levinson, 2000: 114–115):

I-Principle
Speaker's maxim: the maxim of Minimization. 'Say as little as necessary'; that is, produce the minimal linguistic information sufficient to achieve your communicational ends (bearing Q in mind).
Recipient's corollary: the Enrichment Rule. Amplify the informational content of the speaker's utterance by finding the most specific interpretation, up to what you judge to be the speaker's m-intended [= meaning-intended] point, unless the speaker has broken the maxim of Minimization by using a marked or prolix expression. Specifically:
a. Assume the richest temporal, causal and referential connections between described situations or events, consistent with what is taken for granted.
b. Assume that stereotypical relations obtain between referents or events, unless this is inconsistent with (a).
c. Avoid interpretations that multiply entities referred to (assume referential parsimony); specifically, prefer coreferential readings of reduced NPs (pronouns and zeros).
d. Assume the existence or actuality of what a sentence is about (if that is consistent with what is taken for granted).

This principle is said to cover a whole range of implicatures: conditional perfection (9), conjunction buttressing (10), bridging (11), inference to stereotype (12), negative strengthening (13), NEG-raising (14), preferred local coreference (15), the mirror maxim (16), specialization of spatial terms (17), and possessive interpretations (18) (cf. Levinson, 2000: 117–118).

(9) If you mow the lawn, I'll give you five dollars. +> 'If you don't mow the lawn, I will not give you five dollars.'
(10) Bettina wrote an encyclopedia and sold the rights to Elsevier. +> 'Bettina wrote an encyclopedia and then sold the rights to Elsevier.'
(11) Gustav unpacked the picnic. The beer was warm. +> 'The beer was part of the picnic.'
(12) Markus said 'Hello' to the secretary and then he smiled. +> 'Markus said "Hello" to the female secretary and then Markus smiled.'
(13) I don't like Alice. +> 'I positively dislike Alice.'
(14) I don't think he is reliable. +> 'I think he is not reliable.'

(15) John came in and he sat down. +> 'John_i came in and he_i sat down.'
(16) Harry and Sue bought a piano. +> 'They bought it together, not one each.'
(17) The nail is in the wood. +> 'The nail is buried in the wood.'
(18) Wendy's children +> 'those to whom she is parent'; Wendy's house +> 'the one she lives in'; Wendy's responsibility +> 'the one falling on her'; Wendy's theory +> 'the one she originated'

The M-principle is defined as follows (Levinson, 2000: 136–137):

M-principle
Speaker's maxim: Indicate an abnormal, nonstereotypical situation by using marked expressions that contrast with those you would use to describe the corresponding normal, stereotypical situation.
Recipient's corollary: What is said in an abnormal way indicates an abnormal situation, or marked messages indicate marked situations; specifically: Where S has said p, containing a marked expression M, and there is an unmarked alternate expression U, with the same denotation D, which the speaker might have employed in the same sentence frame instead, then where U would have I-implicated the stereotypical or more specific subset d of D, the marked expression M will implicate the complement of the denotation d, namely d̄, of D.

The M-principle is supposed to cover a range of cases, among them lexical doublets (19) and rival word formations (20), nominal compounds (21), litotes (22), certain genitive (23) and zero-morph constructions (24), periphrasis (25), and repetition (26) (cf. Levinson, 2000: 138–153).

(19) She was reading a tome [vs. book]. +> 'She was reading some massive, weighty volume.'
(20) Ich nehme den Flieger [vs. das Flugzeug]. (= I take the plane [vs. the airplane]) +> 'Fliegen ist nichts Besonderes für mich.' (= 'Flying is quite normal for me.')
(21) This is a box for matches (vs. matchbox). +> 'This is a (nonprototypical) box specially made for containing matches.'
(22) It took a not inconsiderable effort. +> 'It took a close-to-considerable effort.'
(23) the picture of the child (vs. the child's picture) +> 'the picture depicts the child'
(24) She went to the school/the church/the university (vs. to school, to church, to university, etc.) +> 'She went to the place but not necessarily to do the associated stereotypical activity.'

(25) Bill caused the car to stop. (vs. Bill stopped the car.) +> 'He did this indirectly, not in the normal way (e.g., by using the emergency brake).'
(26) He went to bed and slept and slept. +> 'He slept longer than usual.'

Note that only the first ('Avoid obscurity of expression') and the third ('Be brief (avoid unnecessary prolixity)') submaxims of the Gricean maxims of Manner survive in Levinson's M-principle. Levinson views the second submaxim ('Avoid ambiguity') in connection with 'generality narrowing', which is subsumed under the Q-principle (Levinson, 2000: 135). The fourth submaxim ('Be orderly') is not needed any more, because the notorious cases of 'conjunction buttressing' fall under the I-principle in Levinson's framework. Moreover, Levinson (2000: 135) notes the general cognitive status of this general semiotic principle of linearization, and he questions its status as a maxim.

It seems that many of the cases in (19)–(26) may be explained in terms of the Q- or I-principle; in other cases, it is not at all clear that we have the same denotation, as required in the Recipient's corollary of the M-principle, thus throwing into doubt whether a separate M-principle is really needed. By comparison, Horn's (1984) approach (sketched in the next section) has no separate maxim/principle of Manner. For further discussion, see Meibauer (1997) and Traugott (2004).

Obviously, the maxim of Quality and the maxim of Relevance are not maxims that figure in the derivation of GCIs. The only comment on the maxim of Quality Levinson gives is that this maxim "plays only a background role" in the derivation of GCIs; maybe he has the sincerity conditions for assertive acts in mind (Levinson, 2000: 74). Note that Grice (1989: 34) needed the maxim of Quality to derive the implicatures in the cases of irony, metaphor, and sarcasm (see Irony). In contrast, Levinson argues that irony and sarcasm are cases of PCIs (Levinson, 2000: 386, Fn. 2), a claim that seems somewhat premature, at least when considering cases of conventional irony and sarcasm. The maxim of Relevance is a maxim that, according to Levinson (2000: 74), derives only PCIs. However, this maxim seems to play a role when it comes to disambiguation and 'ellipsis unpacking' (Levinson, 2000: 174, 183).

In addition to the revision of the Gricean maxims just outlined, Levinson sketches a radical revision of the widely accepted Gricean view of the interaction of grammar and pragmatics, according to which, in language production, conversational implicatures are supposed to operate on, and follow the semantic

representation of, the said (Levinson, 2000: 173). Levinson finds this view basically wrong:

Grice's account makes implicature dependent on a prior determination of 'the said.' The said in turn depends on disambiguation, indexical resolution, reference fixing, not to mention ellipsis unpacking and generality narrowing. But each of these processes, which are prerequisites to determining the proposition expressed, may themselves depend crucially on processes that look undistinguishable from implicatures. Thus, what is said seems both to determine and to be determined by implicature. Let us call this Grice's circle. (Levinson, 2000: 186)

According to Levinson, there are at least five phenomena that show the influence of GCIs on sentence meaning (Levinson, 2000: 172–187). First, GCIs (of the scalar type) are involved in the disambiguation of ambiguous constructions like some cats and dogs, for only the bracketing [[some cats] and dogs], with the appropriate implicature 'some but not all cats, and dogs in general,' is appropriate in the sentence He's an indiscriminate dog-lover; he likes some cats and dogs. Second, the resolution of indexicals is dependent on the calculation of GCIs, e.g., The meeting is on Thursday. +> 'not tomorrow' (when tomorrow is Thursday). Third, reference identification often requires GCIs, e.g., John came in and the man sat down. +> 'The man was not identical to John.' Fourth, in ellipsis unpacking, as in simple dialogues like Who came? – John, the missing information is constructed on the basis of Relevance and I-implicature. Finally, there is the case of generality narrowing, e.g., if someone utters I've eaten breakfast +> 'I've eaten breakfast [this morning],' where the Q-principle is activated. In order to resolve the dilemma of Grice's circle, i.e., to account for 'pragmatic intrusion,' Levinson proposes an alternative model (Levinson, 2000: 188). This model contains three pragmatic components, namely Indexical Pragmatics, Gricean Pragmatics 1, and Gricean Pragmatics 2, and two semantic components, namely Compositional Semantics and Semantic Interpretation (model-theoretic interpretation). The output of Compositional Semantics and Indexical Pragmatics is input for Gricean Pragmatics 1. The output of Gricean Pragmatics 1 is input for Semantic Interpretation, and its output ('sentence meaning, proposition expressed') is input for Gricean Pragmatics 2, whose output is 'speaker meaning, proposition meant by the speaker.' Whereas Indexical Pragmatics and Gricean Pragmatics 1 are presemantic pragmatic components, Gricean Pragmatics 2 is a postsemantic pragmatic component. It seems that Gricean Pragmatics 2 deals with PCIs ('indirection, irony and tropes, etc.'), whereas Gricean Pragmatics 1 deals with GCIs ('disambiguation, fixing reference, generality narrowing, etc.'). At the heart of Levinson's approach is his analysis of GCIs, precisely because it is here that arguments for this new model of the semantics-pragmatics interaction may be found.
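The order of information flow in this architecture can be made concrete. The following is a minimal runnable sketch, not Levinson's own formalization: the five components are reduced to toy functions over the 'Thursday' example above, and all function names are hypothetical placeholders for the components of the same description.

```python
# A toy rendering of Levinson's component architecture. Assumption: each
# component can be caricatured as a function over a dictionary; only the
# 'Thursday' GCI from above is modeled, everything else is stubbed.

def compositional_semantics(utterance):
    # Yields an underspecified semantic representation.
    return {"form": utterance, "gcis": []}

def indexical_pragmatics(rep, context):
    # Presemantic resolution of indexicals (here: fixing the speech day).
    rep["speech_day"] = context["today"]
    return rep

def gricean_pragmatics_1(rep, context):
    # Presemantic GCIs, e.g., 'The meeting is on Thursday.' +> 'not tomorrow'
    # when tomorrow is Thursday.
    if "Thursday" in rep["form"] and context["tomorrow"] == "Thursday":
        rep["gcis"].append("not tomorrow")
    return rep

def semantic_interpretation(rep):
    # Model-theoretic interpretation: 'sentence meaning, proposition expressed.'
    return {"proposition": rep["form"], "gcis": rep["gcis"], "pcis": []}

def gricean_pragmatics_2(prop, context):
    # Postsemantic PCIs (indirection, irony, tropes) - left empty here.
    return prop  # 'speaker meaning, proposition meant by the speaker'

context = {"today": "Wednesday", "tomorrow": "Thursday"}
rep = gricean_pragmatics_1(
    indexical_pragmatics(compositional_semantics("The meeting is on Thursday."), context),
    context)
print(gricean_pragmatics_2(semantic_interpretation(rep), context))
# {'proposition': 'The meeting is on Thursday.', 'gcis': ['not tomorrow'], 'pcis': []}
```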

Division of Pragmatic Labor: Horn's Q- and R-Principles

Central to Horn's approach to implicature is the insight that implicatures have to do with "regulating the economy of linguistic information" (Horn, 2004: 13). In contrast to Levinson, Horn (1984) assumes only two principles, the Q-principle and the R-principle:

Q-principle
Make your contribution sufficient: Say as much as you can (given R). (Lower-bounding principle, inducing upper-bounding implicatures)

R-principle
Make your contribution necessary: Say no more than you must (given Q). (Upper-bounding principle, inducing lower-bounding implicatures)

The Q-principle collects the Gricean maxims of Quantity 1 as well as Manner 1 and 2, while the R-principle collects Quantity 2, Relation, and Manner 3 and 4. The maxim of Quality is considered irreducible, as truthfulness is a precondition for satisfying the other maxims (Horn, 2004: 7). The Q-principle aims at the maximization of content. It is a guarantee for the hearer that the content is sufficient. The hearer infers from the speaker's failure to use a more informative or briefer form that the speaker was not in a position to do so. Scalar implicatures are a case in point. The R-principle aims at the minimization of expression and, consequently, the minimization of the speaker's effort. According to Horn, this principle holds for all indirect speech acts. Table 3, adapted from Horn (2004: 10), shows how the Q-principle works in the case of scalar implicatures. The two-sided reading is the default case. According to Horn, the conflict between the Q-principle and the R-principle may be resolved, as expressed by the following principle (Horn, 1984: 22):

The Division of Pragmatic Labor
The use of a marked (relatively complex and/or prolix) expression when a corresponding unmarked (simpler, less 'effortful') alternative expression is available tends to be interpreted as conveying a marked message (one which the unmarked alternative would not or could not have conveyed).

Levinson (1987: 73) argues that Horn here conflates two things that properly should be distinguished, namely minimization of content on the one hand, and minimization of expression on the other. According to Levinson, splitting up the maxims of Manner in the way Horn does is mistaken, because the Manner maxims are fundamentally dependent on form, and thus related to minimization of expression. Following Horn's original work, much research has been done on Horn scales, e.g., by Hirschberg (1991), Fretheim (1992), Matsumoto (1995), Sauerland (2004), and van Rooy (2004). In this connection, three further areas of research deserve to be singled out. First, as shown in Horn (1989: Ch. 4), there is the phenomenon of metalinguistic negation. For example, when uttering It's not warm, it's hot! the first part of the utterance gives rise to the scalar implicature 'It is not hot,' but this implicature is obviously denied in the second part of the utterance. Typically, utterances of this type have a humorous, ironical, or sarcastic flair (cf. Chapman, 1996 for an overview, and Carston, 1996 and Iwata, 1998 for an echo-theoretic interpretation). Second, there is some discussion about the exact status of Horn scales in the lexicon, e.g., how elements are selected for scales and how the ordering of the elements is achieved. An influential approach is that of Hirschberg (1991), who argues that there exist, in addition to lexical scales, scales that are induced pragmatically or on the basis of real-world knowledge. For example, when speaker A asks Did you get Paul Newman's autograph? and speaker B answers I got Joanne Woodward's, implicating 'not Paul Newman's,' we are dealing with a salient scale of autograph prestige. Consequently, Hirschberg (1991: 42) denies that there is any principled distinction between GCIs and PCIs.

Table 3 Application of the Q-Principle to scalar implicatures

Statement | Lower bound, one-sided (what is said) | Upper bound, two-sided (what is implicated qua Q)
a. Pat has three children | '... at least three ...' | '... exactly three ...'
b. You ate some of the cake | '... some if not all ...' | '... some but not all ...'
c. It's possible she'll win | '... at least possible ...' | '... possible but not certain ...'
d. He's a knave or a fool | '... and perhaps both ...' | '... but not both'
e. It's warm | '... at least warm ...' | '... but not hot'
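The upper-bounding column of Table 3 can be mimicked mechanically. Below is a minimal sketch, assuming simplified Horn scales ordered from strongest to weakest expression; the scales are illustrative stand-ins keyed to the table, not a worked-out lexicon.

```python
# A toy sketch of Q-based scalar implicature: asserting a weaker scalar
# term implicates the negation of every stronger alternative on its scale.
# The scales are simplified examples keyed to Table 3.

HORN_SCALES = [
    ("all", "most", "some"),  # cf. (b)
    ("certain", "possible"),  # cf. (c)
    ("and", "or"),            # cf. (d)
    ("hot", "warm"),          # cf. (e)
]

def q_implicatures(term):
    """Upper-bounding implicatures induced qua Q for a scalar term."""
    implicatures = []
    for scale in HORN_SCALES:
        if term in scale:
            stronger = scale[:scale.index(term)]  # items to the left are stronger
            implicatures.extend(f"not {s}" for s in stronger)
    return implicatures

print(q_implicatures("some"))  # ['not all', 'not most']
print(q_implicatures("warm"))  # ['not hot']
print(q_implicatures("or"))    # ['not and']
```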


Third, the economical aspect of Horn's reduction of the Gricean apparatus has recently become very attractive within Bidirectional Optimality Theory (cf. Blutner, 2004). This theory assumes that sentences are semantically underspecified and therefore in need of enrichment. A function Gen is assumed that determines for each common ground the set of possible enrichments. Bidirectional (i.e., taking the perspective of both speaker and hearer) Optimality Theory then stipulates that a form-meaning pair is optimal if and only if it is taken from the set defined by Gen and there is no other pair that better fulfills the requirements of the Q- and I-principles. For an application and further discussion, see Krifka (2002).
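To make the optimality condition concrete, here is a minimal sketch of the weak ('super-optimal') variant of bidirectional optimization associated with Blutner, under strong simplifying assumptions: Gen is a finite table of form-meaning pairs, and the Q- and I-principles are collapsed into numeric speaker and hearer costs. The lexical pairs and costs are invented for illustration.

```python
# A toy sketch of weak bidirectional optimization. Assumption: Gen maps
# each (form, meaning) pair to (speaker_cost, hearer_cost), lower is
# better; the pairs and the costs below are invented.

GEN = {
    ("kill", "direct killing"):           (1, 1),
    ("kill", "indirect killing"):         (1, 3),
    ("cause to die", "direct killing"):   (3, 3),
    ("cause to die", "indirect killing"): (3, 1),
}

def optimal(pair):
    """A pair is (weakly) optimal iff no optimal competitor with the same
    form is better for the hearer, and no optimal competitor with the
    same meaning is better for the speaker."""
    form, meaning = pair
    s_cost, h_cost = GEN[pair]
    for other, (s, h) in GEN.items():
        if other == pair:
            continue
        if other[0] == form and h < h_cost and optimal(other):
            return False  # blocked: a better interpretation of the same form
        if other[1] == meaning and s < s_cost and optimal(other):
            return False  # blocked: a better expression of the same meaning
    return True

print([p for p in GEN if optimal(p)])
# [('kill', 'direct killing'), ('cause to die', 'indirect killing')]
# The marked form pairs with the marked meaning, echoing Horn's
# division of pragmatic labor.
```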

Relevance Theory: Carston's Underdeterminacy Thesis

Relevance theory is a cognitive theory of meaning whose major claims are that semantic meaning is the result of linguistic decoding processes, whereas pragmatic meaning is the result of inferential processes constrained by a single principle, the Principle of Relevance, originally proposed in Sperber and Wilson (1995). However, the connection to the Gricean maxim of Relevance is rather weak, as can be seen from the following definitions (Carston, 2002; for other versions, see Wilson and Sperber, 2004):

First (Cognitive) Principle of Relevance
Human cognition is geared towards the maximization of relevance (that is, to the achievement of as many contextual (cognitive) effects as possible for as little processing effort as possible).

Second (Communicative) Principle of Relevance
Every act of ostensive communication (e.g., an utterance) communicates a presumption of its own optimal relevance.

Carston (2002) questions the standard division of labor between semantics and pragmatics and argues that pragmatics contributes much more to the construction of explicit meaning ('what is said') than is generally assumed. Her overall aim is to establish relevance theory as a theory of cognitive pragmatics. The relevance-theoretic approach is, according to Carston, "to be characterized as a sub-personal-level explanatory account of a specific performance mechanism conducted at the level of representations-and-procedures" (Carston, 2002: 11). Carston's underdeterminacy thesis says that linguistic meaning generally underdetermines what is said. Pragmatic inferences are necessary not only to determine implicatures, but also to fix the proposition directly expressed by an utterance. This discrepancy between the meaning encoded in linguistic expressions and the proposition expressed by the utterance of these expressions ('what is said') is illustrated by various cases (over and above the well-known cases of ambiguities and indexical resolution): missing constituents (27), unspecified scope of elements (28), underspecificity or weakness of encoded conceptual content (29), and overspecificity or narrowness of encoded conceptual content (30):

(27a) [Where is the book?] On the top shelf. (= 'The book is on the top shelf.')
(27b) Paracetamol is better. [than what?]
(27c) This fruit is green. [which part of the fruit?]
(28a) She didn't butter the toast in the bathroom with a knife. [different stress changes the information structure]
(28b) There's nothing on TV tonight. [nothing that is interesting for you]
(29) I'm tired. [predicate is too weak]
(30) Her face is oblong. [predicate is too narrow]

In all these cases, additional inferential steps are necessary to understand what the speaker intends to say. Since linguistically encoded meanings are necessarily incomplete, pragmatics makes an essential contribution not only to the construction of implicit meaning but also to the construction of explicit meaning. In the spirit of Relevance Theory, Carston proposes a three-level model of the semantic and pragmatic interpretation of linguistic expressions. The first step involves semantic decoding of linguistic expressions. The output of the semantic decoding is an impoverished, nonpropositional semantic representation, which Carston calls logical form. It can be described as a "structured string of concepts with certain logical and causal properties" (Carston, 2002: 57), containing slots indicating where certain contextual values must be supplied. Hence, the output of the semantic decoding device is an incomplete template or scheme, open to a range of compatible propositions. In the second step of interpretation, the hearer reconstructs the proposition intended by the speaker through pragmatic inference. Thus, pragmatic inference bridges the gap between what is linguistically expressed (incomplete conceptual schemata/logical form) and what is said (full propositional representations). For example, when a speaker utters the subsentential expression on the top shelf in a given context of utterance, the hearer is supposed to reconstruct the missing constituents to yield the intended proposition 'The book is on the top shelf.'


The pragmatic interpretation device is constrained by the First (Cognitive) Principle of Relevance, as proposed by Sperber and Wilson (1995). Finally, there has to be a third step of interpretation, in which the hearer determines implicatures, i.e., 'what is meant.' Thus, Carston assumes that pragmatic inference is necessary for the second and third steps of interpretation. In this cognitive approach, the bulk of utterance interpretation has to be done by pragmatic inference. The pragmatic device of interpretation relies not only on linguistic information but also on additional information gained from context, perception, and world knowledge. Here, Carston essentially refers to Searle's theory of mind, especially his notion of Background (cf. Searle, 1980). Utterances are interpreted against a set of more or less manifest background assumptions and practices. Consider, for instance, the following five sentences: (a) Jane opened the window, (b) Jane opened her book on page 56, (c) Jane opened the wall, (d) Jane opened her mouth, (e) The doctor opened her mouth. Carston assumes that the encoded meaning of the English verb open does not vary across the five examples, although open receives quite different interpretations, depending on a set of background assumptions about different practices of opening. The Background is construed as a set of weakly manifest assumptions and practices in an individual's cognitive environment. Since the Background always supplies additional meaning to the interpretation of an utterance, the proposition expressed by an utterance cannot be fully determined by the meaning of its parts and the mode of their combination. Consequently, the principle of semantic compositionality does not hold for the proposition expressed, but only for the underdetermined logical form (i.e., the first step of interpretation). Like Levinson (2000), Carston argues that Grice does not account for the fact that 'what is said' is not independent of pragmatic input. However, Carston and Levinson differ in their approaches to the question of how the pragmatic intrusion problem needs to be dealt with. As shown above, Levinson develops a pragmatic subtheory of GCIs, dealing only with the pragmatic processes involved in the elaboration of 'what is said.' By contrast, Carston favors a unitary account of all pragmatic processes, irrespective of whether they contribute to 'what is said' or to different implicated assumptions (corresponding to Levinson's PCIs). Carston's (2002: 377) use of the terms explicature and implicature, essentially based on Sperber and Wilson's (1995: 182) distinction between explicit and implicit assumptions/propositions, is spelled out in the following way (cf. Carston, 1988):

Explicature
An ostensively communicated assumption that is inferentially developed from one of the incomplete conceptual representations (logical forms) encoded by the utterance.

Implicature
An ostensively communicated assumption that is not an explicature; that is, a communicated assumption which is derived solely via processes of pragmatic inference.

The difference between explicatures and implicatures lies essentially in the way they are supplied: explicatures are developments of the logical form, which they contain as a proper subpart, whereas implicatures are derived purely inferentially. In regard to these two kinds of pragmatic enrichment, the cognitive approach Carston promotes motivates the distinction between 'communicated assumptions' and the 'inferential steps' leading to them. Carston argues that explicatures are construed by means of interpretative hypotheses rather than by (generalized) implicatures. Consider the example John came in and he sat down. The preferred interpretation for the personal pronoun he in the second sentence is the coreferential one. Following Levinson, this interpretation results from an I-implicature. Carston argues that this implicature must be a proposition like 'He refers to whomever John refers to,' "a propositional form representing a hypothesis about reference assignment" (Carston, 2002: 151). She rejects the idea of reference assignment being an implicature and instead identifies it as an interpretative hypothesis like 'John came in and he, John, sat down,' which is derived online and only confirmed if it meets the expectation of relevance. Carston claims that this strategy is able to resolve the dilemma of Grice's circle, for the simple reason that interpretation processes can be effected simultaneously. Finally, the cognitive approach leads Carston to reject conventional implicatures; these are subsumed under the procedural elements. Relevance Theory distinguishes between concepts as constituents of mental representations, and procedures that constrain pragmatic inferences. Conventional implicatures conveyed by expressions such as moreover and therefore do not contribute to the conceptual part of the utterance, but point the hearer to the kind of pragmatic processes she is supposed to perform (cf. Blakemore, 2002). Bach (1994), who tries to defend the Gricean notion of 'what is said,' criticizes the notion of explicature and proposes instead the term impliciture (cf. also Bach, 2001). Implicitures are either expansions of 'what is said,' as in You are not going to die [from this little wound], or completions, as in Steel isn't strong enough [for what?]. In these cases, "the resulting proposition is not identical to the proposition expressed explicitly, since part of it does not correspond to any elements of the uttered sentence"; hence Bach considers it "inaccurate to call the resulting proposition the explicit content of an utterance or an explicature" (Bach, 1994: 273). Carston views Relevance Theory as a cognitive theory of utterance understanding that aims at the subpersonal level, where processes are fast and automatic. Thus, it should be clear that this theoretical goal differs from the one pursued by Grice (cf. Saul, 2002). It must be noted, however, that arguments from psycholinguistic research are called for in order to constrain the theory. First, it may be asked how children acquire implicatures and what roles maxims, principles, and the like play in this process. There are studies on the acquisition of irony and metaphor by Winner (1988), as well as studies on the role of Gricean principles in lexical acquisition (cf. Clark E V, 1993, 2004). More recently, studies have been done on the acquisition of scalar implicatures, in particular dealing with the hypothesis that small children are "more logical" than older children and adults, in that they more readily accept the 'some, perhaps all' reading of the quantifier some (cf. Noveck, 2001; Papafragou and Musolino, 2003). Second, there is some evidence that hearers do not first compute the literal meaning and then the nonliteral or indirect meaning, but rather arrive at the nonliteral/indirect meaning earlier or in parallel (cf. Shapiro and Murphy, 1993; Récanati, 1995; Gibbs, 2002; Giora, 2003). It is obvious that experimental research is very important for implicature and explicature theory (cf. Wilson and Sperber, 2004: 623–628).
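Pulling together the three interpretive steps described above, the following is a minimal sketch, not Carston's formalization: logical forms are caricatured as strings with a slot, the pragmatic steps as toy functions, and all names and sample data are hypothetical.

```python
# A toy rendering of Carston's three-step model using the 'top shelf'
# example from above. The slot-filling and the sample implicature are
# invented illustrations, not claims about her actual derivations.

def decode(utterance):
    # Step 1: linguistic decoding yields an incomplete logical form.
    if utterance == "On the top shelf.":
        return "[X] is on the top shelf"  # '[X]' marks a slot to be supplied
    return utterance

def develop_explicature(logical_form, context):
    # Step 2: pragmatic inference completes the logical form; note that
    # the explicature contains the logical form as a proper subpart.
    return logical_form.replace("[X]", context["topic"])

def derive_implicatures(explicature, context):
    # Step 3: assumptions derived solely via pragmatic inference.
    return [f"The speaker knows where {context['topic'].lower()} is"]

context = {"topic": "The book"}
explicature = develop_explicature(decode("On the top shelf."), context)
print(explicature)                                # The book is on the top shelf
print(derive_implicatures(explicature, context))  # ['The speaker knows where the book is']
```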

Quality Reconsidered

In the development of neo-Gricean approaches to implicature such as Horn's and Levinson's, the Gricean maxim of Quality has been neglected (see Neo-Gricean Pragmatics). Thus, genuine pragmatic matters such as metaphor, irony, sarcasm, lying, etc. have become largely unattractive for some implicature theorists, although metaphor had early on been featured as a cardinal case of maxim exploitation (cf. Levinson, 1983: 147–162). Relevance Theory, on the other hand, which takes a stand on Grice as well as on neo-Gricean approaches, has developed an independent theory of irony; moreover, Carston (2002: Ch. 5) analyzes metaphors as instances of ad hoc concept construction. In neither of these approaches, however, does the maxim of Quality play any role. First, consider irony. If a speaker A utters X is a fine friend, referring to a person who has betrayed a secret of A's to a business rival, then the first maxim of Quality is flouted (Grice, 1989: 34). Because it is obvious that A does not believe what he says, the hearer reconstructs a related proposition, i.e., the opposite of p. The ironical implicature qualifies for the status of an implicature because it is calculable, context-dependent, and cancellable. Note that this substitutional analysis is in contrast to the additive nature of other types of implicature. However, this approach has been criticized for several reasons: (i) the analysis cannot account for ironical questions, requests, and understatements; (ii) it cannot explain the distinction between irony and metaphor, because the latter is also explained with regard to the first maxim of Quality; and (iii) it is not fine-grained enough, because it does not follow from 'He is not a fine friend' that he is not a friend at all. The Gricean approach to irony has been most prominently attacked by relevance theorists (Sperber and Wilson, 1981; Wilson and Sperber, 1992; Sperber and Wilson, 1998). Following Sperber and Wilson, ironical utterances have four main properties: (i) they are mentioned, not used; (ii) they are echoic in nature; (iii) the ironical interpretation is an implicature that is derived through recognition of the echoic character of the utterance (Sperber and Wilson, 1981: 309); and (iv) the ironical speaker displays a dissociative attitude towards the proposition uttered. Take the utterance What lovely weather! as an example. When it is uttered during a downpour, the speaker cannot mean the opposite, because this would be uninformative. Instead, he wants to convey that it was absurd to assume that the weather would be nice. Thus, the ironical utterance is a case of echoic mention of a previously entertained proposition. Types of echo include sarcastic repetition (31), attributed thoughts (32), norms (33), and standard expectations (34) (cf. Sperber and Wilson, 1998):

(31) A: I'll be ready at five at the latest. B: Sure, you'll be ready at five.
(32) A: I'll be ready at five at the latest. B: You mean at five tomorrow?
(33) A: I'll be ready at five at the latest. B: You are so punctual.
(34) A: I'll be ready at five at the latest. B: It's a great virtue to be on time!

Thus, the echo theory of irony does not imply that there is always an original utterance that is exactly reproduced. The echo theory is constrained in that most utterances cannot be interpreted as echoes, and echoic interpretations must contribute to the relevance of an utterance.


Several objections to this theory may be made (cf. Sperber and Wilson, 1998): (i) the notion of an echo is far too vague; it does not make sense to look for an echo in cases of conventional irony, e.g., when somebody utters Boy, is it hot! when it is icy cold. (ii) Because not every echoic mention is ironical, echoic mention is not sufficient to explain ironical interpretation. (iii) It is not clear why the substitution of the opposite should not be a starting point in the search for the dissociative attitude of the speaker towards the proposition. (iv) Relevance Theory cannot explain why hearers often fail to grasp the relevance of an ironical utterance. Second, consider metaphor. For Carston (2002), metaphors are cases of ad hoc concept construction. Ad hoc concepts are those concepts "that are constructed pragmatically by a hearer in the process of utterance comprehension" (Carston, 2002: 322). Typical instances of ad hoc concepts come about via narrowing or broadening. Narrowing may be illustrated by utterances like Ann is happy, where the concept associated with happy in a particular context is much narrower than the encoded concept. The case of broadening is exemplified by utterances like There is a rectangle of lawn at the back, where it is very unlikely that the encoded concept of rectangle is communicated. Both processes are cases of constructing an ad hoc concept that contributes to the explicature. If metaphors are ad hoc concepts, then they are part of the explicature as well. Thus, in Mary is a bulldozer, the logical form of bulldozer is associated with an ad hoc concept BULLDOZER* differing from the concept BULLDOZER usually encoded by this word. In this approach, metaphor is no longer an implicature, as Grice (1989) and Levinson (1983) would have it, but an explicature. Recall that for Horn (1984), the maxim of Quality was irreducible. Since then, its domain of application has considerably shrunk. However, it still seems to play a role when it comes to the analysis of lying, deception, insincerity, and – maybe – irony (cf. Wilson and Sperber, 2002; Meibauer, 2005). In Levinson's (2000) approach, matters of irony, etc., are dealt with in the component called Gricean Pragmatics 2. Perhaps it is there that the maxim of Quality will have a comeback. It is clear that some version of the maxim also plays a role in the definition of success conditions for assertive illocutions (see Irony).

Implicature and the Grammar/Pragmatics Interface

As has become clear from the sketch presented here of Levinson's and Carston's frameworks, pragmatic inferencing is powerful enough to influence semantic representations (see Semantics–Pragmatics Boundary). However, when it comes to pinpointing the exact relations of implicatures to illocutions on the one hand, and to sentence types on the other, there are still many open questions. First, consider implicatures vis-à-vis illocutions. Even if both are associated with an individual speech act, these notions refer to different entities: an additional proposition, in the case of implicature, vs. a type of act such as a promise, assertion, request, etc., in the case of illocution. An important connection between illocutions and implicatures is usually seen as obtaining in the case of indirect speech acts (see Speech Acts). According to Searle (1975), a reconstructive process leads the hearer from the secondary illocutionary point (the 'literal' illocution) to the primary illocutionary point (the intended illocution); this process is similar to the scheme of reasoning that Grice proposed for conversational implicatures, and step 2 of his sample derivation even includes principles of conversational cooperation (compare also the speech act schema proposed by Bach and Harnish, 1979). Accordingly, indirect speech acts have sometimes been analyzed as implicatures, for example the question Can you close the window?, meant as a request to close the window, a case that is related to the R-principle as proposed by Horn (1989, 2004). A case in point is the rhetorical question. Whereas Meibauer (1986) analyzes rhetorical questions as indirect speech acts, i.e., interrogative sentence types associated with assertive force and polar propositional content, Romero and Han (2004) analyze negative yes/no questions like Doesn't John drink? as connected with a positive epistemic implicature such as 'The speaker believes or at least expects that John drinks.' It is not clear at first sight whether such analyses are compatible; in any case, as Dascal (1994) has shown, the notions of implicature and speech act are independently motivated, and should not be confused. Thus, the question of their interrelation requires further research. Second, consider implicatures vis-à-vis sentence types. It is widely accepted that there is a systematic connection between sentence types such as declarative, interrogative, imperative, etc., and illocutions such as assertion, question, request, etc.; moreover, in some approaches the existence of an intermediate category 'sentence mood' is assumed (cf. Sadock and Zwicky, 1985; Harnish, 1994; Reis, 1999; Sadock, 2004; Zanuttini and Portner, 2003). However, while it is conceivable that sentence types determine a certain illocutionary potential, the analogous notion of an 'implicature potential' has never been proposed, probably because of the authors' concentration on lexical elements that give rise to GCIs. Yet there are several observations showing that such a concept is not totally mistaken. Consider the following examples:

(35) Who is the professor of linguistics at Tübingen? +> Someone is the professor of linguistics at Tübingen.
(36) [I gave the encyclopedia to Bettina.] You gave the encyclopedia to WHOM?
(37) Visit Markus and you'll get new ideas! +> If you visit Markus then you'll get new ideas.
(38a) This is good. +> This is not excellent.
(38b) Is this good? *+> Is this not excellent?

In (35), we have the case of an existential implicature that is typically bound to wh-interrogatives, but shows the properties of variability and cancellability. (Its classification as an existential presupposition, cf. Levinson, 1983: 184, has been abandoned, because it does not survive the negation test.) Example (36) illustrates the echo-wh-question. As Reis (1991) has persuasively argued on the basis of German data, these sentence types are neither 'echo-wh-interrogatives' nor wh-interrogatives. Instead, such utterances are regular instances of any sentence type, and their interrogative force is explained as a conversational implicature triggered by the wh-element (see also Reis, 1999). Another example showing that implicatures are sensitive to sentence types is the conditional imperative in (37) (cf. Davies, 1986; Clark, 1993). Finally, if elements that trigger scalar implicatures are in the scope of a question operator, the respective implicature may be blocked, as shown in (38) (the asterisk * denotes a blocked or disallowed implicature). In summary, then, there is evidence of a systematic interaction between implicatures and sentence types. The question is, then, how and where to account for this interaction. A detailed analysis of the sentence type–implicature relation is developed in Portner and Zanuttini (2000). They concentrate on negated wh-interrogatives and exclamatives in Paduan, a northern Italian dialect spoken in the city of Padua:

(39a) Parcossa no ve-to anca ti!? (wh-interrogative)
      why NEG go-s.cl also you
      'Why aren't you going as well!?'

(39b) Cossa no ghe dise-lo! (wh-exclamative)
      what NEG him say-s.cl
      'What things he's telling him!'

The point is that the NEG-element has no negative force. In principle, there are two strategies for analyzing examples like (39): first, as a special type of negation, nonpropositional, expletive, or modal in character; second, as proposed in Meibauer (1990) on the basis of German data, to assume regular negation and to derive the modal effect from pragmatic principles. Portner and Zanuttini (2000), drawing on the latter approach, assume that exclamatives are factive. The negation particle no triggers a conventional implicature, which says that the lowest element from a set of alternative elements (that are possible in a contextually given scale) is true. In cases like (39a), there is an expectedness scale {less expected < more expected}; in cases like (39b), there is an unexpectedness scale {more expected < less expected}. The scales are dependent on the respective sentence type. While it is not clear (i) whether exclamatives constitute a separate sentence type at all (cf. d'Avis, 2001), (ii) why the implicatures are of the conventional type, and (iii) how the relevant scales are obtained from the context, it should be clear that such an approach paves the way for more empirical research on the interplay of sentence types and implicatures.

Conclusions

On the basis of the foregoing sketch of three major approaches to implicature theory, we may state some of the prevailing tendencies. To begin with, there is a striving to understand implicatures in terms of economy. This is true of Levinson's insight that implicatures help to overcome "the slowness of articulation," as becomes clear from his slogan "inference is cheap, articulation expensive" (Levinson, 2000: 29), as well as of Horn's appeal to the principle of least effort and Sperber and Wilson's view of optimal relevance. Recent developments in Optimality Theory, moreover, include attempts to integrate the interplay of maxims into that framework. Second, there is a tendency to reject the classic dual distinction between 'what is said' on the one hand and 'what is implicated' on the other. Instead, a three-level approach to meaning is favored; cf. the distinction in Levinson (2000: 21–27) between sentence meaning, utterance-type meaning, and speaker meaning, or Carston's three-level model of utterance interpretation. However, there is considerable terminological confusion here, as the diagram in Levinson (2000: 195) impressively shows – confusion that has to do with the still unsolved problem of finding demarcation lines, or fixing the interfaces, between 'what is said' and 'what is meant.' Further discussion of the question of level architecture can be found in Récanati (2004). Obviously, the second tendency is connected with the widely accepted view that some sort of underdeterminacy thesis is correct, and that there are presemantic pragmatic processes that are input for model-theoretic interpretation (cf. Levinson, 2000: 188), or that are necessary to fix full propositional representations (cf. Carston, 2002). As has become clear, there are still many problems to solve: the status of the maxims of Relevance and Manner, the distinction between GCI and PCI, the status of conventional implicatures, and the interaction of implicatures with illocutions and sentence types, to name only a few. Besides, the role that implicatures play in many areas, such as language acquisition and language change, awaits much further research.

See also: Context and Common Ground; Conventions in Language; Cooperative Principle; Expression Meaning vs Utterance/Speaker Meaning; Inference: Abduction, Induction, Deduction; Intention and Semantics; Irony; Neo-Gricean Pragmatics; Nonmonotonic Inference; Nonstandard Language Use; Pragmatic Determinants of What Is Said; Semantic Change; Semantics–Pragmatics Boundary; Speech Acts; Taboo, Euphemism, and Political Correctness.

Bibliography

Atlas J D (2005). Logic, meaning, and conversation: semantical underdeterminacy, implicature, and their interface. Oxford: Oxford University Press.
Autenrieth T (1997). 'Tautologien sind Tautologien.' In Rolf E (ed.) Pragmatik: Implikaturen und Sprechakte. Opladen: Westdeutscher Verlag. 12–32.
Bach K (1994). 'Semantic slack: what is said and more.' In Tsohatzidis S L (ed.) Foundations of speech act theory. London/New York: Routledge. 267–291.
Bach K (1999). 'The myth of conventional implicature.' Linguistics and Philosophy 22, 327–366.
Bach K (2001). 'You don't say?' Synthese 128, 15–44.
Bach K & Harnish R M (1979). Linguistic communication and speech acts. Cambridge, MA: The MIT Press.
Blakemore D (2002). Relevance and linguistic meaning: the semantics and pragmatics of discourse connectives. Cambridge: Cambridge University Press.
Blutner R (2004). 'Pragmatics and the lexicon.' In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 488–514.
Brown P & Levinson S C (1987). Politeness: some universals in language usage. Cambridge: Cambridge University Press.
Carston R (1988). 'Implicature, explicature and truth-theoretic semantics.' In Kempson R (ed.) Mental representations: the interface between language and reality. Cambridge: Cambridge University Press. 155–181.
Carston R (1996). 'Metalinguistic negation and echoic use.' Journal of Pragmatics 25, 309–330.
Carston R (2002). Thoughts and utterances: the pragmatics of explicit communication. Oxford: Blackwell.

Chapman S (1996). 'Some observations on metalinguistic negation.' Journal of Linguistics 32, 387–402.
Clark B (1993). 'Relevance and "Pseudo-Imperatives".' Linguistics and Philosophy 16, 79–121.
Clark E V (1993). The lexicon in acquisition. Cambridge: Cambridge University Press.
Clark E V (2004). 'Pragmatics and language acquisition.' In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 562–577.
Dascal M (1994). 'Speech act theory and Gricean pragmatics: some differences of detail that make a difference.' In Tsohatzidis S L (ed.) Foundations of speech act theory. London/New York: Routledge. 323–334.
Davies E E (1986). The English imperative. London: Croom Helm.
Davis W A (1998). Implicature: intention, convention and principle in the failure of Gricean theory. Cambridge: Cambridge University Press.
d'Avis F (2001). Über 'w-Exklamativsätze' im Deutschen. Tübingen: Niemeyer.
Fretheim T (1992). 'The effect of intonation on a type of scalar implicature.' Journal of Pragmatics 18, 1–30.
Gazdar G (1979). Pragmatics: implicature, presupposition and logical form. New York: Academic Press.
Gibbs R W Jr (2002). 'A new look at literal meaning in understanding what is said and implicated.' Journal of Pragmatics 34, 457–486.
Giora R (2003). On our mind: salience, context, and figurative language. Oxford: Oxford University Press.
Grice P (1989). 'Logic and conversation.' In Grice P (ed.) Studies in the way of words. Cambridge, MA: Harvard University Press. 22–40.
Harnish R M (1994). 'Mood, meaning and speech acts.' In Tsohatzidis S L (ed.) Foundations of speech act theory. London/New York: Routledge. 407–459.
Hawkins J A (1991). 'On (in)definite articles: implicatures and (un)grammaticality prediction.' Journal of Linguistics 27, 405–442.
Hirschberg J (1991). A theory of scalar implicature. New York: Garland.
Horn L R (1984). 'Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature.' In Schiffrin D (ed.) Meaning, form, and use in context: linguistic applications. Washington, DC: Georgetown University Press. 11–42.
Horn L R (1989). A natural history of negation. Chicago/London: The University of Chicago Press.
Horn L R (2004). 'Implicature.' In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 3–28.
Iwata S (1998). 'Some extensions of the echoic analysis of metalinguistic negation.' Lingua 105, 49–65.
König E (1991). The meaning of focus particles: a comparative perspective. London: Routledge.
Krifka M (2002). 'Be brief and vague! And how bidirectional optimality theory allows for verbosity and precision.' In Restle D & Zaefferer D (eds.) Sounds and systems: studies in structure and change. A Festschrift for Theo Vennemann. Berlin: de Gruyter. 439–458.

Leech G N (1983). Principles of pragmatics. London/New York: Longman.
Levinson S C (1983). Pragmatics. Cambridge: Cambridge University Press.
Levinson S C (1987). 'Minimization and conversational inference.' In Verschueren J & Bertucelli-Papi M (eds.) The pragmatic perspective. Amsterdam: Benjamins. 61–129.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: The MIT Press.
Matsumoto Y (1995). 'The conversational condition on Horn scales.' Linguistics and Philosophy 18, 21–60.
Meibauer J (1986). Rhetorische Fragen. Tübingen: Niemeyer.
Meibauer J (1990). 'Sentence mood, lexical categorial filling, and non-propositional nicht in German.' Linguistische Berichte 130, 441–465.
Meibauer J (1997). 'Modulare Pragmatik und die Maximen der Modalität.' In Rolf E (ed.) Pragmatik: Implikaturen und Sprechakte. Opladen: Westdeutscher Verlag. 226–256.
Meibauer J (2005). 'Lying and falsely implicating.' Journal of Pragmatics 38(12).
Noveck I A (2001). 'When children are more logical than adults: experimental investigations of scalar implicature.' Cognition 78, 165–188.
Papafragou A & Musolino J (2003). 'Scalar implicatures: experiments at the semantics-pragmatics interface.' Cognition 86, 253–282.
Portner P & Zanuttini R (2000). 'The force of negation in wh exclamatives and interrogatives.' In Horn L R & Kato Y (eds.) Negation and polarity: syntactic and semantic perspectives. Oxford: Oxford University Press. 193–231.
Posner R (1980). 'Semantics and pragmatics of sentence connectives in natural language.' In Searle J R, Kiefer F & Bierwisch M (eds.) Speech act theory and pragmatics. Dordrecht: Reidel. 168–203.
Potts C (2005). The logic of conventional implicatures. Oxford: Oxford University Press.
Récanati F (1995). 'The alleged priority of literal interpretation.' Cognitive Science 19, 207–232.
Récanati F (2004). Literal meaning. Cambridge: Cambridge University Press.
Reis M (1991). 'Echo-w-Sätze und Echo-w-Fragen.' In Reis M & Rosengren I (eds.) Fragesätze und Fragen. Tübingen: Niemeyer. 49–76.
Reis M (1999). 'On sentence types in German: an enquiry into the relationship between grammar and pragmatics.' Interdisciplinary Journal for Germanic Linguistics and Semiotic Analysis 4, 195–236.
Rolf E (1994). Sagen und Meinen: Paul Grices Theorie der Konversations-Implikaturen. Opladen: Westdeutscher Verlag.

Romero M & Han C (2004). 'On negative yes/no questions.' Linguistics and Philosophy 27, 609–658.
Rooy R van (2004). 'Signalling games select Horn strategies.' Linguistics and Philosophy 27, 493–527.
Sadock J M (1978). 'On testing for conversational implicatures.' In Cole P (ed.) Syntax and semantics 9: Pragmatics. New York: Academic Press. 281–298.
Sadock J M (2004). 'Speech acts.' In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 53–73.
Sadock J M & Zwicky A M (1985). 'Speech act distinctions in syntax.' In Shopen T (ed.) Language typology and syntactic description I: Clause structure. Cambridge: Cambridge University Press. 155–196.
Sauerland U (2004). 'Scalar implicatures in complex sentences.' Linguistics and Philosophy 27, 367–391.
Saul J M (2002). 'What is said and psychological reality: Grice's project and relevance theorists' criticisms.' Linguistics and Philosophy 25, 347–372.
Searle J R (1975). 'Indirect speech acts.' In Cole P & Morgan J (eds.) Syntax and semantics 3: Speech acts. New York: Academic Press. 59–82.
Searle J R (1980). 'The background of meaning.' In Searle J, Kiefer F & Bierwisch M (eds.) Speech act theory and pragmatics. Dordrecht: Reidel. 221–232.
Shapiro A M & Murphy G L (1993). 'Can you answer a question for me? Processing indirect speech acts.' Journal of Memory and Language 32, 211–229.
Sperber D & Wilson D (1981). 'Irony and the use-mention distinction.' In Cole P (ed.) Radical pragmatics. New York: Academic Press. 295–318.
Sperber D & Wilson D (1995). Relevance: communication and cognition (2nd edn.). Oxford: Blackwell.
Sperber D & Wilson D (1998). 'Irony and relevance: a reply to Seto, Hamamoto and Yamanashi.' In Carston R & Uchida S (eds.) Relevance theory: applications and implications. Amsterdam: Benjamins. 283–293.
Traugott E C (2004). 'A critique of Levinson's view of Q- and M-inferences in historical pragmatics.' Journal of Historical Pragmatics 5, 1–25.
Wilson D & Sperber D (1992). 'On verbal irony.' Lingua 87, 53–76.
Wilson D & Sperber D (2002). 'Truthfulness and relevance.' Mind 111, 583–632.
Wilson D & Sperber D (2004). 'Relevance theory.' In Horn L R & Ward G (eds.) The handbook of pragmatics. Oxford: Blackwell. 607–632.
Winner E (1988). The point of words: children's understanding of metaphor and irony. Cambridge, MA: Harvard University Press.
Zanuttini R & Portner P (2003). 'Exclamative clauses: at the syntax-semantics interface.' Language 79, 39–81.


Indefinite Pronouns
Ö Dahl, Stockholm University, Stockholm, Sweden

© 2006 Elsevier Ltd. All rights reserved.

General

In traditional grammar, the term 'indefinite pronouns' is often used quite widely, comprising not only words like something and anybody, but also generic pronouns such as English one or French on, quantifiers such as few, many, all, and every, and 'identity pronouns' such as other and same. Given the heterogeneity of the category thus understood, a more restrictive approach seems motivated, in which the term is used only of expressions with indefinite reference in a narrower sense (Haspelmath, 1997). This would, for English, basically leave us with some and any and their derivatives, also including words such as somewhere and anytime which are, strictly speaking, pronominal adverbs, but which semantically share most of their properties with the other members of the respective series and therefore deserve to be treated together with them. ('Indefinite proforms' would probably be the most 'correct' term.) A further delimitation has to be made in that more highly grammaticalized morphemes such as indefinite articles should not be included, in keeping with the tradition of not seeing them as proforms. It should be noted, however, that semantically, indefinite pronouns have much in common with other indefinite noun phrases. As we have already seen, English indefinite pronouns form what we can call 'series' – sets of pronouns sharing an 'indefiniteness marker' such as some and any. Each member in the series is connected to an ontological category such as time, place, person, thing, quantity, etc. (it may be noted that these categories are essentially those postulated by Aristotle – in fact, in choosing his categories, Aristotle at least partly seems to have been guided by the Greek pronoun system). Not infrequently, the morpheme that identifies the ontological category is formally identical to the corresponding interrogative pronoun, as in German irgendwo 'somewhere,' with indefiniteness marker irgend- and ontological category marker wo 'where,' or Russian kto-to 'someone,' formed by adding the indefiniteness marker -to to the interrogative pronoun kto 'who.' As noted by Haspelmath (1997: 24), indefinite pronouns universally tend to be derived, and typically from interrogative pronouns – the opposite direction is hardly attested.

The Some–Any Distinction

The existence in English of two series of indefinite pronouns – the some series and the any series – has given rise to an extensive discussion in the contemporary linguistic and philosophical literature. In the development of generative-transformational grammar, the distribution of some and any in English was first seen as a purely syntactic problem that could be given a relatively straightforward syntactic solution (Klima, 1964), but after some time it became clear that the factors governing the choice were at least partly pragmatic (Lakoff, 1969). Already in the pre-Chomskyan period, linguistically oriented logicians such as Hans Reichenbach tried hard to make sense of the relations between some and any and the quantifiers of predicate logic (Reichenbach, 1980). In the ensuing discussion, which is still going on, new frameworks such as Jaakko Hintikka's game-theoretical semantics have been brought to bear. In a crosslinguistic perspective, it may be noted that the existence of more than one series of indefinite pronouns is the rule rather than the exception, but precise analogues of some and any are hard to find in other languages. Yet the distribution of the English indefinite pronouns obeys general principles which manifest themselves in seemingly chaotic but nonetheless systematic patterns in human languages. In English, the any series can be said to have two major types of use. The first is in contexts connected with 'negative polarity' in the wide sense, also including, for example, questions and conditional protases. We thus get a contrast between the negated sentence (2), with any, and the affirmative sentence (1), where some is used.

(1) I saw someone
(2) I did not see anyone

The second type of use where we find the any series is for expressing what is often called 'free choice,' as in:

(3) Anybody can do this

Attempts to reduce the ambiguity between the negative polarity use and the free choice use (e.g., Reichenbach, 1980: 105) to a difference in scope run into problems with examples where stress is used to disambiguate the two readings (Dahl, 1970: 39): Can I take any apple? receives different readings depending on whether any is stressed. The prominence of negative polarity in the choice between different series of indefinite pronouns, and the particular combination of negative polarity readings with 'free choice,' is far from universal. Other distinctions, notably ones having to do with the specificity of referents, are often more salient. For instance, in Russian, the two series in -to and -nibud' may disambiguate classical cases such as the following, where only some is possible in English:

(4) ona xočet najti kogo-to
    she want.PRS.3SG find.INF someone
    ambiguous, but with the preferred reading 'she wants to find someone [a specific person]'

(5) ona xočet najti kogo-nibud'
    she want.PRS.3SG find.INF someone
    only nonspecific: 'she wants to find someone [nobody in particular]'

Functions of Indefinite Pronouns

Taking a crosslinguistic perspective, Martin Haspelmath defined nine different functions of indefinite pronouns (Haspelmath, 1997):

1. specific, known to the speaker (Somebody called while you were away, guess who!)
2. specific, unknown to the speaker (I heard something, but I couldn't tell what kind of sound it was)
3. nonspecific, irrealis (Please try somewhere else)
4. polar question (Did anybody tell you anything about it?)
5. conditional protasis (If you see anything, tell me immediately)
6. indirect negation (I don't think that anybody knows the answer)
7. standard of comparison (In Freiburg the weather is nicer than anywhere in Germany)
8. direct negation (I didn't see anybody)
9. free choice (Anybody can solve this simple problem)

Haspelmath claims that these functions can be ordered in an 'implicational map,' as in Figure 1, where an indefinite pronoun series will always express a contiguous subset of functions, as shown in Figure 2, representing English, Russian, and Japanese. (For clarity, only two series of pronouns from each language are shown.) In addition, however, the choice between indefinite pronouns can be influenced by factors not represented in Figure 1, such as the speaker's expectations – for instance, whether s/he expects a positive or a negative answer to a question – and differences in focus or emphasis, including distinctions relating to the notion of scalarity.

Figure 1 Haspelmath's implicational map for indefinite pronouns. Adapted from Haspelmath M (1997). Indefinite pronouns. Oxford: Clarendon Press, with permission.

Figure 2 Functions of indefinite pronouns in English, Russian, and Japanese. Adapted from Haspelmath M (1997). Indefinite pronouns. Oxford: Clarendon Press, with permission.
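The contiguity requirement amounts to a simple connectedness condition. The following is a toy sketch, assuming the map can be represented as an undirected graph over the nine functions; the edge list only approximates Figure 1, and the sample series loosely follows Figure 2, both for illustration.

```python
# A toy connectedness check for Haspelmath's contiguity claim. The edges
# below approximate the implicational map of Figure 1; they are an
# illustrative simplification, not a faithful reproduction.

MAP_EDGES = {
    ("specific known", "specific unknown"),
    ("specific unknown", "irrealis"),
    ("irrealis", "question"),
    ("irrealis", "conditional"),
    ("question", "indirect negation"),
    ("conditional", "indirect negation"),
    ("conditional", "comparative"),
    ("indirect negation", "direct negation"),
    ("comparative", "free choice"),
}

def adjacent(a, b):
    return (a, b) in MAP_EDGES or (b, a) in MAP_EDGES

def is_contiguous(functions):
    """True iff the series covers a connected region of the map."""
    remaining = set(functions)
    if not remaining:
        return True
    seen, frontier = set(), [next(iter(remaining))]
    while frontier:
        f = frontier.pop()
        seen.add(f)
        frontier.extend(g for g in remaining - seen if adjacent(f, g))
    return seen == remaining

# English 'any', roughly after Figure 2: polarity functions plus free choice.
any_series = ["question", "conditional", "indirect negation",
              "comparative", "direct negation", "free choice"]
print(is_contiguous(any_series))                         # True
print(is_contiguous(["specific known", "free choice"]))  # False: not contiguous
```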

Diachronic Developments

Diachronically, indefinite pronouns tend to arise from a limited number of sources, the most common being phrases with original meanings such as 'whatever it may be,' 'it does not matter which,' or 'it is the same which,' which come to acquire the 'free choice' function and then spread first to the functions adjacent to it in Figure 1 and then to more distant ones. For instance, the Russian indefinite pronouns ending in -nibud', such as chto-nibud' 'something, anything,' derive from chto ni budi 'whatever may be,' a typical expression of free choice. In modern Russian, however, chto-nibud' has a number of uses (see Figure 2) in which the free-choice meaning is weakened or inappropriate. Another possibility is for indefinite pronouns to arise at the other end of the implicational map – from 'dunno' constructions like the one exemplified by Old Norse ne wait ek hwariR 'I do not know who,' which is the source of Scandinavian indefinite pronouns such as Swedish någon 'some, someone,' which has expanded its domain of use so as to be possible in all the functions in Figure 1 (this is crosslinguistically somewhat exceptional). (In the Latin phrase nescio quid 'something' the origin is still transparent.) The two diachronic paths are similar in that somewhat similar literal meanings – 'it does not matter which' or 'I do not know which' – are backgrounded or disappear, but they differ in whether they start at the most or the least specific end of the implicational map. A third diachronic source of indefinite pronouns is combinations of focus particles (even, also) with interrogative pronouns, nouns with general meaning, or the numeral 'one,' as in Japanese nan-demo 'whatsoever,' from nani 'what' and -demo 'even.' Sometimes interrogative pronouns used by themselves obtain functions as indefinite pronouns. This appears to happen particularly often in contexts such as conditional protases, as in Latin Si quis vult post me sequi . . . 'if someone wants to follow me . . .'. Nouns with general meanings such as 'person' and 'thing' can also develop uses in which they become indistinguishable from indefinite pronouns.

Inherently Negative Pronouns

In negative contexts, indefinite pronouns such as the members of the any series compete with 'inherently negative pronouns' such as nobody. Cf. English:

(6) I did not see anybody
(7) I saw nobody

However, as noted by Haspelmath (1997: ch. 8), the distinction between indefinite and inherently negative pronouns is far from unproblematic: originally indefinite pronouns sometimes obtain a negative import when used by themselves, for instance in answers to wh-questions. Thus, in French, 'never' is expressed by a combination of the negation word ne and the pronominal adverb jamais, with the earlier meaning 'ever' (originally deriving from Latin jam magis 'even more'). However, used by itself in the answer to a question, jamais now means 'never,' without any support from a negative word. Compare also an expression such as jamais vu 'never seen.' In a question like (8), on the other hand, jamais can still be used without negative import:

(8) Es-tu jamais allé en France?
    are you ever go.PP to France
    'Did you ever go to France?'

Such facts may be taken as support for treating inherently negative pronouns as a subspecies of indefinite pronoun, as Haspelmath (1997) suggests. On the other hand, in some languages, inherently negative pronouns behave in a way that suggests that they are even synchronically derivable from a combination of a negation morpheme and an indefinite pronoun. Thus, in Swedish, a negative pronoun such as ingenting 'nothing' can only be used if the result of replacing it by the negation inte 'not' immediately followed by an indefinite such as någonting 'something' is also grammatical. We thus obtain contrasts as illustrated by the following examples:

(9a) Jag köpte inte någonting
     I buy.PAST not something
     'I did not buy anything' (negation adjacent to indefinite pronoun)

(9b) Jag köpte ingenting
     I buy.PAST nothing
     'I bought nothing' (negation adjacent to indefinite pronoun)

(9c) Jag har inte köpt någonting
     I have not buy.SUP something
     'I did not buy anything' (negation nonadjacent to indefinite pronoun)

(9d) *Jag har köpt ingenting
     I have buy.SUP nothing
     'I have bought nothing'

See also: Definite and Indefinite; Definite and Indefinite Articles; Definite and Indefinite Descriptions; Generic Reference; Polarity Items; Pronouns; Referential versus Attributive; Specificity.

Bibliography

Dahl Ö (1970). 'Some notes on indefinites.' Language 46, 33–41.
Haspelmath M (1997). Indefinite pronouns. Oxford: Clarendon Press.
Kamp H (1973). 'Free choice permission.' Proceedings of the Aristotelian Society N.S. 74, 57–74.
Klima E S (1964). 'Negation in English.' In Fodor J A & Katz J J (eds.) The structure of language. Englewood Cliffs, NJ: Prentice Hall. 246–323.
Lakoff R (1969). 'Some reasons why there can't be any some–any rule.' Language 45, 608–615.
Quine W V O (1956). 'Quantifiers and propositional attitudes.' Journal of Philosophy 53, 263–278.
Reichenbach H ([1947] 1980). Elements of symbolic logic. New York: Dover.


Indeterminacy
M Hymers, Dalhousie University, Halifax, Nova Scotia, Canada

© 2006 Elsevier Ltd. All rights reserved.

Contemporary philosophy has presented two significant challenges to the determinacy of meaning, one contentiously associated with Ludwig Wittgenstein through the work of Saul Kripke, and the other, the topic of this entry, traceable to the behaviorism of W. V. Quine by way of his thought experiment concerning radical translation. Quine purports to offer two distinct arguments for semantic indeterminacy, which he sees as a consequence of the thesis of the indeterminacy of translation: "manuals for translating one language into another can be set up in divergent ways, all compatible with the totality of speech dispositions, yet incompatible with one another" (1960: 27). This thesis has semantic implications for Quine because, as a behaviorist, he holds that meaning is nothing above and beyond what is preserved in translation. This article will discuss both arguments for this thesis and reactions to them in separate sections below, focusing largely on the so-called argument 'from below' (1970: 183).

The Argument from Below The radical translator, says Quine, is someone who has at her disposal only the behavior of the speaker of the alien tongue she is trying to translate and the observable objects and events that grace the locale she shares with the speaker. More carefully, Quine focuses not on those objects and events themselves, but on the stimulations they produce at the speaker’s nerve endings and on the speaker’s observed disposition to assent to or dissent from sentences that the translator puts to her as queries under given stimulusconditions. If she can formulate correct hypotheses about the speaker’s terms for affirmation and dissent, then this evidential constraint entails that the field linguist can translate observation-sentences, which are among the class of sentences to which assent is cued directly to particular occasions and to which, moreover, assent varies little or not at all with changes in the background information available to the speaker (1960: 42). The thought here is that reports of observations are reports of things observed on particular occasions (unlike talk of biological speciation or social justice), and those things are importantly independent of the speaker’s background beliefs and mental states (unlike hallucinations, for

example). Each such sentence has what Quine calls a stimulus-meaning – the ordered pair of its positive stimulus-conditions (those prompting assent to the sentence) and its negative stimulus-conditions (those prompting dissent from the sentence). Logical connectives such as ‘not,’ ‘and,’ ‘or,’ and ‘all’ can also be translated, because ‘‘one’s interlocutor’s silliness, beyond a certain point, is less likely than bad translation’’ (59). This ‘principle of charity’ (59n) has it that, for example, a speaker will not typically affirm both a sentence and its negation. We can, as well, identify but not necessarily translate sentences that are stimulus synonymous (those provoking assent and dissent respectively under exactly the same stimulus conditions) and sentences that are stimulus analytic – those to which the speaker will assent, if not to nothing, on every stimulus occasion (54–55). Quine begins with sentences because a speaker assents to (or dissents from) utterances of sentences, not to (or from) isolated subsentential terms. But a thorough ‘translation manual’ should tell us how to match up individual words or other subsentential expressions (1960: 69). The radical translator may thus formulate a set of analytical hypotheses (68), which break down sentences into their component parts, words, which can then be found in other sentences. A survey of their occurrences in different sentences makes it possible to pair those words with words of the translator’s language, eventually yielding a comprehensive translation manual, whose adequacy can be further tested by recombining the isolated words into new sentences and observing the speaker’s pattern of assent and dissent to them. However, Quine contends, there will always be more than one overall system of analytical hypotheses that will yield workable translations, because local incompatibilities in competing hypotheses can always be compensated for by making changes elsewhere in the system. Thus, to take a trivial case, in the French ‘ne . . . rien,’ ‘rien’ can be translated either as ‘anything’ or as ‘nothing,’ says Quine, ‘‘by compensatorily taking ‘ne’ as negative or as vacuous’’ (1969a: 37). Worse still, even if one’s analytical hypotheses isolate a particular word, the reference of that word may itself remain ‘behaviorally inscrutable’ (1969a: 35), for there is no empirical test that will decide between translations of the word ‘gavagai’ as ‘rabbit’ and as ‘undetached rabbit parts,’ given that apparent evidence against one translation can again be compensated for by making adjustments elsewhere in the system. Inquiries whether this gavagai (rabbit) is the same as that gavagai (rabbit) may as easily be taken as inquiries whether this gavagai (rabbit part)


belongs with that gavagai (rabbit part) (33). Beyond the level of observation-sentences and logical connectives, Quine contends, translation is radically indeterminate. Quine assumes that overt bodily behavior is the only relevant evidence for the field linguist's efforts (1990: 37–38), and he allows that the radical translator might have access to the totality of such evidence – that competing translation manuals might "accord perfectly . . . with all dispositions to behavior on the part of all the speakers concerned" (1969a: 29). Therefore, his conclusion is not merely that I cannot know what the unique, correct translation of the speaker's utterances is, but that there is no fact about what their unique, correct translation is, because all the relevant evidence leaves the question unanswered. Moreover, Quine holds that the same considerations apply to understanding a speaker of my own language – even when that speaker is myself. "On deeper reflection, radical translation begins at home" (46). This is because first as a learner of my own language and then as an interpreter of my colinguists, I have access to no more evidence than the radical translator does. It follows that meaning itself is radically indeterminate.
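The evidential apparatus just sketched can be summarized schematically. The rendering below is for orientation only and is not Quine's own symbolism, though the definitions track his:

\[
\mathrm{SM}(S) \;=\; \langle\, \Sigma^{+}(S),\; \Sigma^{-}(S) \,\rangle
\]

where \(\Sigma^{+}(S)\) is the class of stimulus conditions that would prompt assent to S and \(\Sigma^{-}(S)\) the class that would prompt dissent. Two sentences S and S' are then stimulus synonymous iff \(\mathrm{SM}(S) = \mathrm{SM}(S')\), and S is stimulus analytic iff every stimulus condition belongs to \(\Sigma^{+}(S)\), i.e., the speaker would assent, if to anything, under every stimulation.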

Reactions to the Argument from Below Critics have complained that behaviorism is false and, so, that Quine's argument fails to establish its conclusion (Chomsky, 1969); and that Quine, in spite of his avowed behaviorism (Quine, 1990: 37), helps himself to nonbehaviorist data about the speaker's nonobservational terms for assent and dissent (Hockney, 1975: 421; Glock, 2003: 178) and about the communicative exchange between speaker and radical translator (Glock, 2003: 175–182). Assent and dissent cannot simply be read from the speaker's behavior, and they cannot intelligibly be assigned stimulus meanings. The only way that the field linguist can identify terms of assent and dissent is by assuming that the speaker wants to be interpreted and understood, that the speaker understands that she is being interpreted, that the speaker is familiar with performing the speech acts of answering a question and correcting the linguist's proffered usage, and so on. These assumptions are all ruled out by Quine's behaviorism. Critics have also doubted whether there really could be entire alternative translation manuals of the sort Quine envisions. The contention, very roughly, is that the compensatory changes that Quine imagines making to other terms in one's translation manual in order to preserve one's favored translation of some given term would quickly come into conflict

with the behavior of the speaker (Evans, 1975: 345–346). If I translate gavagai as 'undetached rabbit part' instead of as 'rabbit,' for example, I may have to translate the speaker as allowing that an undetached rabbit part is white if and only if the entire rabbit is white (Miller, 1998: 139). But this has consequences for how I should translate the speaker's word for 'white' in other contexts, and I should not be surprised to discover that I want to translate her as affirming that snow is white, even when visible patches of it have been visited by local sled dogs.

The Argument from Above Quine might try to meet some of these objections by appealing to the argument 'from above' (1970: 183). Physical theory, says Quine, is underdetermined by the available evidence. More than one set of physical hypotheses could be compatible with the totality of empirical evidence. However, even if we select a particular physical theory on pragmatic grounds, Quine contends, our linguistic theory will remain underdetermined relative to our physical theory – "even if we 'fix' the physics" (Putnam, 1978: 53), nonobservational meaning will remain underdetermined. Linguistic hypotheses are underdetermined by a set of more basic 'facts' that are themselves already underdetermined. It is this double underdetermination that distinguishes semantic indeterminacy from normal empirical underdetermination.

Reactions to the Argument from Above It is not clear, however, that this second argument operates independently of the argument from below, for the only plausible reason for thinking that the linguistic data are doubly underdetermined with respect to physical theory seems to be that mere physical behavior and the observation-sentences it undergirds are more evidentially basic – the only evidence we have to go on – and rival systems of analytical hypotheses are compatible with that evidence (Kirk, 1986: 140–146; Miller, 1998: 147). Moreover, as Chomsky complains, even if linguistics remains underdetermined once the physics has been fixed, it is equally true that the physics remains underdetermined once the linguistics has been fixed (Chomsky, 1980: 20; Rorty, 1972: 451–453). Only if we grant physics a special privilege could there be a problem for the determinacy of meaning. Quine’s work has, nonetheless, been very influential in analytical philosophy of language, leading especially to the varieties of interpretationalism defended by Donald Davidson and Daniel Dennett.

See also: Definite and Indefinite Descriptions; Definitions;

Evidentiality; Multivalued Logics; Nonmonotonic Inference; Philosophical Theories of Meaning; Propositional Attitude Ascription; Propositional Attitudes; Vagueness; Vagueness: Philosophical Aspects.

Bibliography Chomsky N (1969). ‘Quine’s empirical assumptions.’ In Davidson D & Hintikka J (eds.). 53–68. Chomsky N (1980). Rules and representations. New York: Columbia University Press. Chapter 1. Davidson D & Hintikka J (eds.) (1969). Words and objections: essays on the work of W. V. Quine. Dordrecht: D. Reidel. Evans G (1975). ‘Identity and Predication.’ The Journal of Philosophy 72, 343–363. Gibson R F (1982). The philosophy of W. V. Quine: an expository essay. Tampa: University of South Florida, 63–95, 176–205. Glock H-J (2003). Quine and Davidson on language, thought and reality. Cambridge: Cambridge University Press. Chapters 6–7.

Hockney D (1975). ‘The bifurcation of scientific theories and indeterminacy of translation.’ Philosophy of Science 42, 411–427. Hookway C (1988). Quine: language, experience and reality. Stanford: Stanford University Press. Chapters 8–10. Kirk R (1986). Translation determined. Oxford: Clarendon Press. Martin R M (1987). The meaning of language. Cambridge, MA: MIT Press. Chapter 6. Miller A (1998). Philosophy of language. Montreal: McGill-Queen’s University Press. Chapter 4. Putnam H (1978). Meaning and the moral sciences. London: Routledge & Kegan Paul. Lecture IV. Quine W V (1960). Word and object. Cambridge, MA: MIT Press. Chapters I-II. Quine W V (1969a). Ontological relativity and other essays. New York: Columbia University Press. Quine W V (1969b). ‘Reply to Chomsky.’ In Davidson D & Hintikka J (eds.). 302–311. Quine W V (1970). ‘On the reasons for the indeterminacy of translation.’ The Journal of Philosophy 67, 178–183. Quine W V (1990). Pursuit of truth. Cambridge, MA: Harvard University Press. Chapter III. Rorty R (1972). ‘Indeterminacy of translation and of truth.’ Synthese 23, 443–462.

Indexicality E Corazza, University of Nottingham, Nottingham, UK © 2006 Elsevier Ltd. All rights reserved.

In most of our linguistic interchanges and thinking episodes, we rely on context to select items of discourse and items of thought. One often succeeds in talking and thinking about something because one is situated in a given context. In natural language we have tools whose specific function is to exploit the context of use in order to select an item in one’s surroundings. If one says, ‘It is raining here’ while in London, one refers to London because one’s utterance occurs in London. Were one to utter the same sentence in Paris, one would be referring to Paris. We can use the very same words and yet refer to very different items. When you use ‘I’, for instance, you refer to yourself, whereas when I use it, I refer to myself. We use the very same linguistic expression with the same conventional meaning. It is a matter of who uses it that determines who the referent is. Moreover, when Ivan, pointing to Jane, says ‘she’ or ‘you,’ he refers to Jane; Jane, however, cannot refer to herself using ‘she’ or ‘you’ (unless she is addressing an image of herself). If we change the context – the speaker, the time, the

place – in which these expressions occur, we may end up with a different referent. Among the expressions that may switch reference with a change in context, we have personal pronouns ('my', 'you', 'she', 'his', 'we', ...), demonstrative pronouns ('this', 'that'), complex demonstratives ('this pencil', 'that brunette in the corner', ...), adverbs ('today', 'yesterday', 'now', 'here', ...), adjectives ('actual', 'present'), and possessive adjectives ('my pencil', 'their car', ...). These expressions have been termed, following Peirce, indexicals. Indexicals constitute the paradigm of context-sensitive expressions, i.e., those expressions that rely on the context of use to select an object of discourse. Reichenbach (1947) claimed that indexicals are token reflexive, for they can be defined in terms of the locution 'this token', where the latter (reflexively) self-refers to the very token used. So, 'I' can be defined in terms of 'the person who utters this token', 'now' in terms of 'the time at which this token is uttered', 'this pen' in terms of 'the pen indicated by a gesture accompanying this token', etc. The reference of an indexical expression depends on its particular linguistic meaning: 'the utterer of this token' is the linguistic meaning (the character (Kaplan) or role (Perry)) of 'I',


while 'the day in which this token is uttered' is the linguistic meaning of 'today', and so on. The meaning of an indexical can be viewed as a rule which one needs to master to use an indexical correctly. An indexical's linguistic meaning can be conceived as a function taking as its argument the context and giving as its value the referent/content (this is Kaplan's famous content/character distinction). It is often the case, however, that the linguistic meaning of expressions such as 'this', 'that', 'she', etc., together with context, is not enough to select a referent. These expressions are often accompanied by a pointing gesture or demonstration, and the referent will be what the demonstration demonstrates. Kaplan (1977) distinguishes between pure indexicals ('I', 'now', 'today', ...) and demonstratives ('this', 'she', ...). The former, unlike the latter, do not need a demonstration – or directing intention, Kaplan (1989) – to secure the reference. In their paradigmatic use, pure indexicals differ from demonstratives insofar as the latter, unlike the former, are perception based. When one says 'I', 'today', etc., one does not have to perceive oneself or the relevant day to competently use and understand these expressions. To competently use and understand 'this', 'she', etc., one ought to perceive the referent or demonstratum. For this reason, when a pure indexical is involved, the context of reference fixing and the context of utterance cannot diverge: the reference of a pure indexical, unlike the reference of a demonstrative, cannot be fixed by a past perception. Moreover, a demonstrative, unlike a pure indexical, can be a vacuous term. 'Today', 'I', etc., never fail to refer. Even if I do not know whether today is Monday or Tuesday and I am an amnesiac, when I say 'Today I am tired,' I refer to the relevant day and to myself. By contrast, if one says 'She is funny' while hallucinating, or 'This car is green' while pointing to a man, 'she' and 'this car' are vacuous. In addition, pure indexicals cannot be coupled with sortal predicates, while 'this' and 'that' often are used to form complex demonstratives such as 'this book', 'that water'. Sortal predicates can be considered to be universe narrowers which, coupled with other contextual clues, help us to fix a reference. If one says 'This liquid is green' while pointing to a bottle, the sortal 'liquid' helps us to fix the liquid and not the bottle as the referent. Moreover, personal pronouns which work like demonstratives (e.g., 'she', 'he', 'we') have a built-in or hidden sortal. 'She', unlike 'he', refers to a female, while 'we' usually refers to a plurality of people, of whom one will be the speaker. Indexicals are generally conceived of as singular terms that contribute a referent to what is said.
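The character/content machinery lends itself to a small computational sketch. The Python fragment below is purely illustrative – the contexts, names, and vocabulary are invented for the example – but it models characters as functions from contexts of use to contents:

from dataclasses import dataclass

@dataclass
class Context:
    agent: str
    time: str
    place: str

# A character is a function from contexts to contents (here, referents).
CHARACTER = {
    'I': lambda c: c.agent,      # 'the utterer of this token'
    'now': lambda c: c.time,     # 'the time at which this token is uttered'
    'here': lambda c: c.place,
}

def content(indexical, context):
    # Content = character applied to the context of use.
    return CHARACTER[indexical](context)

c1 = Context(agent='Ivan', time='noon', place='London')
c2 = Context(agent='Jane', time='midnight', place='Paris')

assert content('I', c1) == 'Ivan'        # same character ...
assert content('I', c2) == 'Jane'        # ... different content
assert content('here', c1) == 'London'

The dictionary entries play the role of linguistic meanings, while the values returned are the referents contributed to what is said.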

According to the direct reference view (from Kaplan and Perry), utterances containing indexicals express singular propositions, i.e., propositions whose constituents are the referents of the indexicals. As such, indexicals are usually characterized as expressions whose interpretation requires the identification of some element of the utterance context, as stipulated by their linguistic meaning. Thus, an utterance of 'I am tired' expresses a proposition containing the referent of the first person pronoun, and one understands it insofar as one knows to whom the term 'I' refers in the context in which it is uttered. The linguistic meaning governing the use of the indexical – such as 'the agent of the utterance' qua meaning of 'I', 'the day of the utterance' qua meaning of 'today' – does not feature as a constituent of the proposition expressed. If indexical expressions are characterized as singular terms contributing their referents to what is said (i.e., the proposition expressed), adjectives such as 'local', 'distant', 'actual' – not to mention count nouns like '(a) foreigner', '(an) enemy', '(an) outsider', '(a) colleague' – would not fall into the same category, for they do not contribute a referent to the proposition expressed. Yet they are, plausibly, context-sensitive expressions. 'Local', 'foreign', and 'native' in 'A local bar is promoting foreign wine' and 'A native speaker should correct your essay' do not contribute a specific individual or individuals to the proposition expressed. Hence, they are not singular terms. It should be evident that context-sensitivity does not merely concern singular terms. It is worth distinguishing between indexicals qua singular terms, contributing their referents to the proposition expressed, and contextuals qua expressions which, though context-sensitive, are not singular terms. Adjectives such as 'tall', 'big', 'small', 'old', etc., also are context-sensitive, insofar as one is only tall/small/big/old ... relative to a comparison class. Jon may be too short to play tennis and yet too tall to be a jockey, while Jane may be too old to join the army and too young to take early retirement. But see Cappelen and Lepore (2004) and Borg (2004) for the view that words such as 'tall', 'foreigner', 'old', and the like are not genuinely context-sensitive. Proper names, like indexicals, also contribute individuals to the proposition expressed. As such they are singular terms, too; yet they are not indexicals (but see Recanati, 1993 for a different view). Nouns such as 'Monday', 'February', and the like also seem to contribute specific individuals to the proposition expressed. They are best viewed in the same light as count nouns, i.e., as nouns such as 'lemon', 'frog', and 'table' (see Corazza, 2004). As such, they can be used to build singular terms. This happens when they are


coupled with an indexical expression such as 'this', 'next', 'last', and they then contribute to complex demonstratives of the form 'next week', 'last Saturday', 'next Christmas'. This peculiarity parallels the way count nouns can participate in building complex demonstratives such as 'these lemons', 'that table', 'this pen'. (King (2001), however, defends the view that complex demonstratives are quantified terms.) One of the major features of indexicals differentiating them from other referential expressions (e.g., proper names: 'Plato', 'Paris'; mass terms: 'silver', 'water'; terms for species: 'frog', 'raspberry'; and so on) is that they are usually used to make reference in praesentia. That is, use of an indexical exploits the presence of the referent. Usually in a communicative episode involving an indexical, the referent is in the perceptual field of the speaker and contextual clues are used to raise the referent to salience (see Smith, 1989; Sidelle, 1991; and Predelli, 1998 for a discussion of indexicals used to refer to objects not present, e.g., answering machines, post-it notes, etc.). When indexicals are not used to make reference in praesentia they exploit a previously fixed reference. 'That boy' in 'That boy we encountered yesterday was in trouble with the police' does not refer to someone present. In cases like this, the indexical makes reference in absentia. One can thus distinguish between the context of utterance and the context of reference fixing. In our example, the speaker and the hearer appeal to a past context to fix the reference. The gap between the two contexts would be bridged by memory. Another way to handle examples like this would be to argue that, in such cases, the indexical expression works like an anaphoric pronoun linked to a tacit initiator. In the sentence 'In 1834 Jane visited her parents, now two old, sick people,' 'now' does not refer to the time of the utterance. It refers to 1834. It does so because it is anaphorically linked to '1834', and, as such, it inherits its reference from it. A similar story could be told about 'that boy': it inherits its reference from a tacit initiator, i.e., an unpronounced NP which is nonetheless presupposed in the discourse situation. In support of this interpretation, consider the following exchange: Jane: 'It is raining'; Jon: 'Then I won't be there before tomorrow.' In saying 'It is raining,' Jane tacitly refers to the location she is in, say London. With 'there', Jon refers to the very same location, and we can claim that he does so because 'there' works in an anaphoric way, inheriting its value from the tacit reference made by Jane.

names or mass terms), one cannot depend on the same phenomenon when using an indexical. One can, for instance, competently use 'Feynman' or 'elm' even if one does not know who Feynman is and even if one is unable to tell an elm from a pine. Indeed, a blind person can utter 'that vase' when she has been told that there is a vase in front of her. In these uses the reference is fixed by someone else (it is deferentially fixed). However, these are far from being the paradigmatic uses of an indexical such as 'that/this'. In their paradigmatic uses, they refer to something the user is perceptually aware of. This difference between indexicals and other terms parallels the fact that when using proper names, mass terms, and the like, context is in play before the name is used. As Perry suggests, we often use context to disambiguate a mark or noise (e.g., 'bank', or 'Socrates' used either as a tag for the philosopher or for the Brazilian football player). These are pre-semantic uses of context. With indexicals, though, context is used semantically. It remains relevant after the language, words, and meaning all are known; the meaning directs us to certain aspects of context. This distinction reflects the fact that proper names, mass terms, etc., unlike indexicals, contribute to building context-free (eternal) sentences, that is, sentences that are true or false independently of the context in which they are used. To sum up, philosophers have made several key claims about indexicals. They are tools whose function is to exploit context, and their hallmarks include not having a fixed referent, not being easily deployed in absentia of the thing referred to, not being used deferentially, and having context play (not just a pre-semantic role, i.e., determining which word has been used, but also) a semantic role. Philosophers have found that indexicals come in at least three varieties: pure indexicals ('I', 'now'), demonstratives ('this', 'she'), and contextuals ('foreign', 'local'). Key differences between the first and second variety are that, in contrast to pure indexicals, demonstratives are more perception-based, they may be vacuous, they can be combined with sortals, and directing intentions play a quite central role in their use. In addition to attracting the attention of philosophers, indexicals have also captured the interest of those working within the boundaries of cognitive science for several reasons (see, for instance, Pylyshyn, 2003 on how indexicality is relevant to the study of vision): they play crucial roles when dealing with such puzzling notions as the nature of the self (see for instance the importance of 'I' in Descartes' cogito argument), the nature of perception, the nature of time, psychological pathologies, social interaction, and psychological development (see Corazza, 2004).

See also: Character versus Content; Context; Context and

Common Ground; Coreference: Identity and Similarity; Demonstratives; Direct Reference; Dthat; Possible Worlds; Pragmatic Determinants of What Is Said; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Semantic Value; Situation Semantics.

Bibliography Austin D (1990). What's the meaning of "this"? Ithaca: Cornell University Press. Bach K (1987). Thought and reference. Oxford: Clarendon Press. Biro J (1982). 'Intention, Demonstration, and Reference.' Philosophy and Phenomenological Research 43, 35–41. Borg E (2004). Minimal semantics. Oxford: Oxford University Press. Cappelen H & Lepore E (2004). Insensitive semantics. Oxford: Basil Blackwell. Castañeda H-N (1989). Thinking, language, and experience. Minneapolis: University of Minnesota Press. Chisholm R (1981). The first person. Minneapolis: University of Minnesota Press. Corazza E (2003). 'Complex Demonstratives qua Singular Terms.' Erkenntnis 59, 263–283. Corazza E (2004). Reflecting the mind: Indexicality and quasi-indexicality. Oxford: Oxford University Press. Evans G (1982). The varieties of reference. Oxford: Oxford University Press. Frege G (1918). 'Thoughts.' In Salmon N & Soames S (eds.) (1988). Propositions and attitudes. Oxford: Oxford University Press. 33–55. Kaplan D (1977). 'Demonstratives.' In Almog J et al. (eds.) (1989). Themes from Kaplan. Oxford: Oxford University Press. 481–563.

Kaplan D (1989). 'Afterthoughts.' In Almog J et al. (eds.) (1989). Themes from Kaplan. Oxford: Oxford University Press. 565–614. King J (2001). Complex demonstratives: A quantificational account. Cambridge: MIT Press. Lewis D (1979). 'Attitudes de dicto and de se.' The Philosophical Review 88, 513–543. Reprinted in Lewis D, Philosophical Papers, vol. 1. Oxford: Oxford University Press. Mellor D H (1989). 'I and Now.' Proceedings of the Aristotelian Society 89, 79–84. Reprinted in Mellor D H, Matters of metaphysics. Cambridge: Cambridge University Press. 79–84. Nunberg G (1993). 'Indexicality and Deixis.' Linguistics and Philosophy 16, 1–43. Perry J (2000). The problem of the essential indexical and other essays. Stanford: CSLI Publications. Perry J (2001). Reference and reflexivity. Stanford: CSLI Publications. Predelli S (1998). 'Utterance, Interpretation, and the Logic of Indexicals.' Mind and Language 13, 400–414. Pylyshyn Z (2003). Seeing and visualizing. Cambridge: MIT Press. Recanati F (1993). Direct reference. London: Blackwell. Reichenbach H (1947). Elements of symbolic logic. New York: Free Press. Sidelle A (1991). 'The answering machine paradox.' Canadian Journal of Philosophy 81, 525–539. Smith Q (1989). 'The multiple uses of indexicals.' Synthese 78, 67–91. Vallée R (1996). 'Who are we?' Canadian Journal of Philosophy 26, 211–230. Yourgrau P (ed.) (1990). Demonstratives. Oxford: Oxford University Press. Wettstein H (1991). Has semantics rested on a mistake? and other essays. Stanford: Stanford University Press.

Inference: Abduction, Induction, Deduction K Allan, Monash University, Victoria, Australia © 2006 Elsevier Ltd. All rights reserved.

Abductive reasoning was championed by the early pragmatist Charles Peirce as an empirically focused procedure for the construction of classes and categories from observed data. He defined it as follows: The surprising fact, C, is observed; But if A were true, C would be a matter of course, Hence, there is reason to suspect that A is true (Peirce, 1940: 151).

Abduction does not really require that the fact observed be surprising. Peirce’s definition of abductive reasoning is presented as a syllogism. We use (a1),

(a2) to mark assumptions; '∧' is logical conjunction; and '(c)' is the inferred conclusion of a valid reasoning procedure. (1)

(a1) Fact C is observed; ∧ (a2) If A were true, C would be a matter of course; (c) There is a reason to suspect that A is true.
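Read as a deduction, this schema would be the fallacy of affirming the consequent, which is why its conclusion is only a 'reason to suspect' A rather than a guarantee. Schematically, in standard notation rather than Peirce's own:

\[
\frac{C \qquad A \rightarrow C}{A} \quad \text{(defeasible)}
\]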

Abductive inferences lead to testable hypotheses about states of affairs. Data are correlated on the basis of their similarity or by analogy with some known system, usually with an eye to their apparent function or relevance within the emerging general description. An example of abductive reasoning in historical linguistics is (2).

(2)

(a1) In the ancient Indic language Sanskrit, words for numbers 2–7 are dva, tri, catur, pañca, ṣaṣ, sapta. These are similar to number words in European languages known to be related to one another: e.g., Slovak dva, Latin duo '2'; Slovak tri, Italian tre '3'; Latin quattuor '4'; Welsh pump, German fünf '5'; Spanish seis, English six '6'; Latin septem '7'. ∧ (a2) If Sanskrit were related to these European languages (i.e., they all have a common ancestor), the similarity would be a matter of course. (c) There is a reason to suspect that Sanskrit is related to European languages.

(2(a1)) collapses several similar observations. We could deconstruct (a1) and (a2) into a series of syllogisms like (3). (3)

(a1) Sanskrit dva means '2' ∧ (a2) Slovak dva also means '2' (c) Perhaps Sanskrit is related to Slovak (What other evidence is there?)

(2(a2)) is an imaginative leap because Sanskrit is separated by time and thousands of kilometers from the European languages, and it was spoken by a different race. (2(a2)) expresses the intuition underlying the creation of the hypothesis in (2(c)). The essential contribution of intuition to scientific theory is widely recognized among philosophers of science. Pure logic could never lead us to anything but tautologies; it could create nothing new; not from it alone can any science issue. [. . . T]o make arithmetic, as to make geometry, or to make any science, something else than pure logic is necessary. To designate this something else we have no word other than intuition. [. . . L]ogic and intuition have each their necessary role. Each is indispensable. Logic, which alone can give certainty, is the instrument of demonstration; intuition is the instrument of invention (Poincaré, 1946: 214f., 219). The supreme task of the physicist is to arrive at those elementary laws from which the cosmos can be built up by pure deduction. There is no logical path to these laws, only intuition resting on sympathetic understanding of experience can reach them (Einstein, 1973: 221).

The abduced hypothesis (2(c)) can be inductively confirmed by finding additional systematic correspondences leading to the recognition of an Indo-European language family. Once a part of the system is recognized and predictions about the whole system begin to be made, the investigator moves from abduction – hypothesizing – to hypothesis testing and inductive reasoning.

Before Peirce, abduction was included within induction. An example of inductive inference is (4). (4) (a1) Every day till now the sun has risen in the east. (c) If we check tomorrow, the sun will have risen in the east.

The inductive inference (4(c)) is a prediction based on sampling; (a1) can be deconstructed into a set of assumptions: On day 1 the sun rose in the east ∧ On day 2 the sun rose in the east ∧ . . . Yesterday the sun rose in the east ∧ This morning the sun rose in the east. If the sampling technique is good, the prediction will probably be verified: (4(c)) is highly probable given the assumption (4(a1)), but it is not necessarily going to be the case. Inductive inference is used in linguistics: for instance, if you are told that almost all French nouns ending in -ion are feminine then you can inductively infer that the next French noun you encounter that ends in -ion will most probably be feminine. Induction uncovers tendencies, but not certainties, and so it is open to dispute; the problem with inductive reasoning is precisely that it yields conclusions in which we have some degree of confidence (given the assumptions) but not the kind of confidence that attaches to deductions. Inductivism argues from particular instances to a general hypothesis or model for a universal set of instances (Bacon, 1620 I: 19; Mill, 1970: 188). The inductive method assumes that there is a natural order that can be discovered through rational means by inspecting natural phenomena. If this underlying assumption were correct, the inductivist model would be the only model possible because it is determined by natural order and natural laws; it is a reflection of nature. Inductivist models can be illustrated by Newtonian physics and, because language is a natural phenomenon, American structuralist linguistics: "The only useful generalizations about language are inductive generalizations" (Bloomfield, 1933: 20). But There is no inductive method which would lead to the fundamental concepts of physics. Failure to understand this fact constituted the basic philosophical error of so many investigators of the nineteenth century. [. . .] Logical thinking is necessarily deductive; it is based upon hypothetical concepts and axioms (Einstein, 1973: 299).

In formal semantics and other formal systems deductive reasoning is required because, provided the assumptions are correct and the reasoning process valid, a consistent conclusion is guaranteed. For instance, from any proposition p we can validly deduce p.

(5) (a) Max weighs 250 pounds. (c) Max weighs 250 pounds.

More interesting deductions involve more than one assumption, e.g., from (a1) A bachelor is a man who has never married and (a2) Max is married, we can deduce that (c) Max is not a bachelor. Even Aristotle did not recognize that this is only true when there is a referent for Max. Aristotle wrote, "When we have proved that an attribute applies to all of a kind, we shall have proved that it belongs to some of the kind" (Topics, 1984: 109a4). In Aristotle's scheme, it is therefore possible to make the following chain of valid inference: No horse is a unicorn → No unicorn is a horse → Some unicorn is not a horse. The first proposition is true and can be represented in the predicate calculus as ∀x[Hx → ¬Ux], and the final proposition has the form ∃x[Ux ∧ ¬Hx], which asserts the existence of unicorns. The conditional ∀x[Hx → ¬Ux] → ∃x[Ux ∧ ¬Hx] is false if ∃x[Ux ∧ ¬Hx] is false, i.e., when there are no unicorns. Aristotle, like the ordinary language user, assumed the existence of referents for the terms in statements, so for him No horse is a unicorn does entail Some unicorn is not a horse.
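That the entailment fails when there are no unicorns can be checked mechanically. The following Python sketch evaluates both formulas in a miniature model invented for illustration (one horse, no unicorns):

domain = {'dobbin'}   # one horse, no unicorns
H = {'dobbin'}        # extension of 'is a horse'
U = set()             # extension of 'is a unicorn' (empty)

# ∀x[Hx → ¬Ux]: true, since nothing in the domain is both H and U
no_horse_is_a_unicorn = all((x not in H) or (x not in U) for x in domain)

# ∃x[Ux ∧ ¬Hx]: false, since nothing is in U at all
some_unicorn_is_not_a_horse = any((x in U) and (x not in H) for x in domain)

print(no_horse_is_a_unicorn)         # True
print(some_unicorn_is_not_a_horse)   # False: the inference fails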

An Aristotelian syllogism is (6): (a1) All humans are mortal. ∧ (a2) All Greeks are human. (c) All Greeks are mortal.

If we vary this by making premise (6(a1)) false, All humans are immortal, the conclusion must be All Greeks are immortal – clearly demonstrating that false assumptions will lead by valid argument to consistent but probably false conclusions. Not always, though: (7)

(a1) All women are cats. ∧ (a2) All cats are human. (c) All women are human.

It follows that we must get our assumptions right if we are to use deductive inference to seek true conclusions. Note that valid reasoning is defined within a system (a logic), whereas truth is defined in relation to (models of) worlds. A hypothetico-deductive model, e.g., of language, is the inverse of the inductivist model. Hypothetico-deductivism postulates an abstract general theory of the phenomena in question in terms of a defined vocabulary consisting of a specified set of symbols with an interpretation specified for each vocabulary item; there will also be an explicit syntax consisting of a set of axioms and rules for combining the vocabulary items and a set of axioms and rules for interpreting theorems (i.e., sentences) of the metalanguage. This is

why mathematical models of natural phenomena are preferred. The essential thing is the aim to represent the multitude of concepts and propositions, close to experience, as propositions, logically deduced from a basis, as narrow as possible, of fundamental concepts and fundamental relations (Einstein, 1973: 287).
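A toy version of such a model can be given for a fragment of English. In the Python sketch below – the grammar, lexicon, and 'corpus' are invented for the example – the rewrite rules serve as the axioms and the generated strings as the theorems:

import itertools

# Rewrite rules: the 'axioms' of a miniature hypothetico-deductive theory.
RULES = {
    'S': [['NP', 'VP']],
    'NP': [['Max'], ['the', 'linguist']],
    'VP': [['sleeps'], ['sees', 'NP']],
}

def generate(symbol='S', depth=3):
    # Yield every terminal string derivable from symbol: the 'theorems'.
    if symbol not in RULES:   # terminal symbol
        yield [symbol]
        return
    if depth == 0:
        return
    for expansion in RULES[symbol]:
        parts = [list(generate(s, depth - 1)) for s in expansion]
        for combo in itertools.product(*parts):
            yield [word for segment in combo for word in segment]

theorems = {' '.join(words) for words in generate()}
corpus = {'Max sleeps', 'the linguist sees Max'}   # observed utterances
print(corpus <= theorems)   # True: the theory generates the observed data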

The theorems, well-formed formulae, or sentences output by the model should correspond with real-world data, such as utterances in the object-language. However, the correspondences will have to be evaluated intuitively. There may be no way of ensuring that the linguist's intuitions are good, because it is not known quite what intuition is. In a coming together of all three kinds of inference, it is reasonable to suppose that the kind of data-gathering enterprise carried out by inductivists (which relies on abductivism) creates a corpus that ought to provide the best material for the linguist's intuitions to work on. The corpus will also offer data against which to check the products of any hypothetico-deductive theory a linguist creates. Linguistic theorizing needs the bottom-up data gathering and preliminary classification from phenomenological inductivism and the top-down hypothesis construction from hypothetico-deductivism. They cannot be related in neat temporal sequence; experience tells us that the linguistic researcher must expect to go to and fro between them, reviewing the data to abduce (intuit) hypotheses, and then checking the hypotheses against the data, using some sort of evaluation procedures. Data needs to be classified using inductivist principles; yet classification and taxonomy deal with abstract objects that are essentially similar to the theoretical constructs of a hypothetico-deductive system. It follows that the inductivist's choice of data, indeed the very perception of the data, involves a categorial process that cannot, in the end, be satisfactorily distinguished from hypothetico-deductivism. We are in a chicken-and-egg argument if we try to determine rigidly which is prior. The significant lesson is that abductivism, inductivism, and deductivism are complementary, and all are essential to the advancement of linguistic science. We have looked at three different kinds of inference. Abductive reasoning is used in figuring out classes, categories, and functions of observed phenomena – i.e., arriving at a hypothesis. With abductive reasoning the conclusions are based on a best guess; once predictions are built on the results of abduction, we have induction based on samples of the data (market research is one practical use of induction). Deductive inference is the move from assumptions to valid conclusions by observing strict


rules of procedure (identified in systems of logic) that guarantee a valid conclusion from the assumptions by preserving monotonicity (q.v.); the assumptions must be correct if the conclusions are to accord with the facts. As Peirce (1940: 154) says: These distinctions [between abductive, inductive, and deductive reasoning] are perfectly clear in principle, which is all that is necessary, although it might sometimes be a nice question to say to which class a given inference belongs.

Any thorough account of natural language understanding uses all three kinds of inference. (It must also recognize defeasible pragmatic inferences of the Gricean kind; see Sperber and Wilson, 1995; Harman, 1999; Levinson, 2000.) See also: Acquisition of Meaning by Children; Aristotle and Linguistics; Coherence: Psycholinguistic Approach; Default Semantics; Field Work Methods in Semantics; Implicature; Logic and Language; Metalanguage versus Object Language; Monotonicity and Generalized Quantifiers; Mood, Clause Types, and Illocutionary Force; Nonmonotonic Inference; Presupposition; Projection Problem; Speech Acts and AI Planning Theory.

Bibliography Allan K (2003). 'Linguistic metatheory.' Language Sciences 25, 533–560.

Aristotle (1984). The complete works of Aristotle. The revised Oxford translation, Barnes J (ed.). Bollingen Series 71. Princeton, NJ: Princeton University Press. Bacon F (1620). Novum organum. London: John Bill. Bloomfield L (1933). Language. New York: Holt, Rinehart and Winston. Chomsky N (1975). Introduction. The logical structure of linguistic theory. New York: Plenum Press. Einstein A (1973). Ideas and opinions. Laurel edn. New York: Dell [First published 1954]. Harman G (1999). Reasoning, meaning, and mind. Oxford: Clarendon Press [Online at http://www.oxfordscholarship.com]. Hodges W (1977). Logic. Harmondsworth: Penguin. Kneale W & Kneale M (1962). The development of logic. Oxford: Clarendon Press. Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press. Mill J S (1970). A system of logic: ratiocinative and inductive (8th edn.). London: Longman [First published 1872, first edn 1843]. Peirce C S (1940). The philosophy of Peirce: selected writings, Buchler J (ed.). London: Routledge and Kegan Paul. Poincaré H (1946). The foundations of science: science and hypothesis, the value of science, science and method, Halstead G B (trans.). Lancaster, PA: The Science Press. Sperber D & Wilson D (1995). Relevance: communication and cognition (2nd edn.). Oxford: Basil Blackwell [First edn., 1986].

Ingressives Ö Dahl, Stockholm University, Stockholm, Sweden © 2006 Elsevier Ltd. All rights reserved.

The terms 'ingressive,' 'inceptive,' and 'inchoative' are all used to refer to the notion of the beginning of an activity or state and to verbs, verb forms, and derivational processes connected to this notion. Sometimes, the terms are used indiscriminately, but there is a tendency to reserve 'inchoative' for the beginnings of states. In this article, 'ingressive' will be used as a cover term. Ingressive meanings may be expressed in many ways in languages: by auxiliaries or auxiliary-like verbs, by derivational processes, or by inflectional means. In English, the productive way of expressing ingressivity is by verbs such as begin (taking verbs as its complement) or become (taking nominal elements or adjectives as its complement). In other languages,

the same verb may show up with all word classes, e.g., Russian stat' 'begin, become' (originally a movement verb 'to stand up'). The English unproductive suffix -en as in darken is an example of a derivational ingressive (inchoative) that combines with adjectives. The Russian ingressive prefix za- combines with verbs as in zapet' 'begin to sing.' As regards inflectional forms, it does not appear to be common for such forms to have ingressivity as their primary meaning. However, perfective forms of stative verbs may acquire an ingressive (inchoative) meaning, for example the Greek aorist or the French passé simple. In general, ambiguities between stative and ingressive readings are common, in particular with verbs such as English realize and understand or posture verbs in other languages. See also: Aspect and Aktionsart; Grammatical Meaning; Role and Reference Grammar, Semantics in.


Intensifying Reflexives E König, Free University of Berlin, Berlin, Germany © 2006 Elsevier Ltd. All rights reserved.

Terminology The term intensifier is here used for expressions such as English him-/herself, for Latin ipse, Italian stesso, Russian sam, Mandarin ziji, German selbst, etc., and their counterparts in other languages. There is no generally established label for these expressions and most of the terms used, such as emphatic reflexives, emphatics, pronouns of identity, particles, scalar adverbs, etc., are highly misleading even for individual languages, to say nothing of cross-linguistic studies. The term intensifier, first used in Moravcsik (1972) and Edmondson and Plank (1978), has the advantage of being more or less free of such misleading connotations, though it is also sometimes used for adverbs of degree such as very or extremely. Intensifiers such as the ones listed above are most easily identified across languages in terms of their prosodic and their semantic properties. They are invariably focused and thus stressed. Another reasonably general property seems to be the fact that they are used as adjuncts rather than arguments within a sentence. As far as their other morphosyntactic properties are concerned, they may differ quite strikingly from language to language. In those cases in which their etymology can be established beyond reasonable doubt, we also can say that they typically derive from expressions for body parts (‘head’, ‘heart’, ‘body’, ‘bone’, ‘eye’, but also ‘soul’ and ‘mask’). This etymology is still visible in Semitic, Indic, Caucasian, and many African languages, as well as in creoles (cf. Heine, 2000, 2003). A further characterization of intensifiers is possible if one examines their typical uses. Four different use types can be distinguished, not all of which are found in each language, however. The following four uses can be found in a wide variety of languages, but there are many languages that have only the first two. The third use type seems to be the rarest of the four and the fourth use is often associated with a specific form that is differentiated from the other uses. The following English examples provide an instance of each use: (a) the adnominal use (1) Writers themselves, rather than their works, should be examined for their sense of social responsibility.

(b) the adverbial exclusive use (≈ 'alone') (2) Mrs. Dalloway wanted to buy the flowers herself.

(c) the adverbial inclusive use (≈ 'too') (3) If he's busy breaking the rules himself, he could hardly demand that they do otherwise.

(d) the attributive/possessive use (4) This is a matter between you and your own conscience.

In English, as in many other European languages, the same intensifier (X-self) can be used in three different contexts and with three different interpretations. In its adnominal use X-self is adjoined to the right of an NP (or DP) and exhibits agreement with that constituent. The adverbial use also exhibits agreement with a nominal constituent, although the intensifier is not adjacent to that constituent but in construction with a VP or a projection thereof (cf. (2)–(3)). The two adverbial uses are primarily differentiated on the basis of their meaning and because there is a certain complementarity between these uses, they may well be the result of contextual factors interacting with one univocal adverbial intensifier. In attributive (possessive) contexts, Modern English uses a special intensifier (own), but as late as the 17th century the intensifier self could still be used in such contexts. (5) . . . and syððan hit his discip(ul)um sealde to þicganne for his sylfes lichaman and for his agan blod. '. . . and then gave it to his disciples to take and eat as if it were his own body and his own blood.' (Wulfstan Pol 3 (Jos) 17)

In other words, own fills a gap in the distribution of X-self in Modern English in contrast to earlier forms of that language or languages like Turkish, where the same expression (kendi) is found in the adnominal, adverbial, and attributive functions (kendi odam ‘my own room’). In addition to these four uses, further uses of the same expressions with different though related meanings can be found in many languages (e.g., German Selbst der Chef ‘Even the boss,’ derselbe Verbrecher ‘the selfsame criminal,’ etc.). As far as their morphology and distribution are concerned, intensifiers have special properties and cannot be subsumed under any of the established lexical (or functional) categories. In a large number of languages they inflect for the morphosyntactic features typically associated with nouns or pronouns (number, gender, person, case) and the fact that they invariably agree with a noun or noun phrase suggests that they should be analyzed as a specific subclass of adjectives. Such an analysis is particularly tempting in those cases where intensifiers precede rather than follow an adjacent noun, as in the case of Spanish


mismo, Italian proprio, Zapotec lagakh or English own. In an equally wide variety of languages, intensifiers are invariant particles and manifest a partial overlap in their distribution with focus particles like 'alone,' 'too,' 'only,' 'even.' Moreover, the versatility in their syntax, illustrated by the examples (1)–(4), is also similar to the distribution of focus particles. Semantically and prosodically, however, they differ strikingly from both adjectives and focus particles. As pointed out above, intensifiers are invariably stressed and thus focused, rather than being merely associated with a focus (like focus particles) or stressed under specific contrastive conditions (like adjectives). This striking prosodic property is closely linked to, and in fact a consequence of, their meaning. As pointed out in Eckardt (2002) and Hole (2002), intensifiers can simply be analyzed as denoting the identity function and thus would make a completely trivial contribution to the meaning of a sentence if they were not focused. Under stress and focus these expressions denote functions that map the value denoted by their nominal co-constituent onto alternative values, i.e., entities defined in terms of the value given. Applied to a preceding noun phrase like writers in (1), the intensifier simply leads to the very same semantic value as the noun phrase alone (i.e., 'writers'). The effect of the focus and stress on themselves, however, is that the denotation of writers is mapped onto alternative values, i.e., values definable in terms of this expression (e.g., 'their families,' 'their ideas,' 'their works,' etc.). In (1), the third of these possible alternatives is in fact the alternative envisaged by the speaker and explicitly given in the context. More often than not such alternatives can also be described as the 'entourage' or periphery of the value denoted by the co-constituent taken as the center (cf. König, 1991). These characteristic prosodic and semantic properties strongly argue for assigning intensifiers to a class of their own.
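In outline, the analysis can be rendered as follows. This is a schematic paraphrase of the Eckardt/Hole proposal rather than their exact notation:

\[
\llbracket \textit{selbst} \rrbracket \;=\; \mathrm{ID} \;=\; \lambda x.\, x
\]

Focusing the intensifier invokes alternatives to ID, i.e., functions f mapping an entity onto members of its entourage, so that for (1) the alternatives to ID(writers) = writers include f(writers) = their works, their families, and so on.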

Parameters of Variation Intensifiers may differ strikingly in their formal, distributional, and selectional properties across languages, but they do so quite systematically and within very clear limits. Agreement vs. Invariance

One of the most striking parameters concerns their morphological makeup: Intensifiers may agree with their co-constituents in terms of morphological features typically associated with nouns (the so-called φ-features) or simply be invariant (particles). Examples of invariant, particle-like intensifiers are found inter alia in Albanian, Bambara, Modern Breton, and all Germanic languages other than English.

German (6) [Sie selbst] hat es mir gesagt. she-INT has it 1SG.DAT said 'She herself told me about it.'

In the other languages, by contrast, intensifiers inflect for some of the φ-features mentioned above (gender, number, person, case), thus manifesting agreement with the nominal constituent to which they relate syntactically and with which they interact semantically (their 'co-constituent'). The set of languages that manifest such agreement includes English, the Romance languages, the Finno-Ugric, and the Turkic languages: Finnish (7) Saa-n-ko puhua johtaja-lle may-1SG.PRES-Q speak director-ALL itse-lle-en INT-ALL-3POSS 'Could I talk to the director himself?' Relation to Reflexive Anaphors

Intensifiers play an important role in the genesis, reinforcement and renovation of reflexive anaphors (cf. König and Siemund, 2000; König, 2003), and one of the most interesting parameters of variation therefore concerns their relationship to reflexive anaphors. Three options can be distinguished among the languages of the world: (a) intensifiers and reflexives are identical in their formal properties and only differ in their distribution, reflexive anaphors occurring in argument position and intensifiers in adjunct positions; (b) intensifiers and reflexives are formally differentiated, differing thus in both their form and their distribution; and (c) intensifiers and reflexives share morphological material or are at least similar in their formal properties, though not in their distribution. Complete formal identity between intensifiers and reflexives is found in all of the following languages: English, Mandarin, Finno-Ugric, most Caucasian, Indian, Semitic, and Turkic languages. In grammars of these languages intensifiers are often categorized as emphatic reflexives and discussed in the same chapter or section as reflexives (e.g. Huddleston and Pullum, 2002: 1483–1498). (8a) The bride herself remained calm throughout the ceremony. (8b) The bride is looking at herself in the mirror.

In languages of the second group (b), intensifiers and reflexive markers are clearly distinct and are never discussed under the same heading in the relevant grammars. The French examples below illustrate an equally widespread situation among the languages of the world:


French (9a) Pierre se déteste. 'Peter hates himself.' (9b) Pierre lui-même va nous adresser la parole. 'Peter himself will talk to us.'

Reflexive markers in such languages are generally highly grammaticalized instances of the pronominal strategy for marking reflexivity (cf. Faltz, 1985) and are also typically employed as middle markers (i.e., markers of derived intransitivity) in anticausative, modal passive, and reciprocal constructions (cf. Kemmer, 1993). The third type identified by the parameter of variation under discussion includes all those cases where intensifiers and reflexive markers share some morphological material without being formally identical. The typical situation here is that the reflexive marker is analysable as intensifier + some additional morpheme, e.g., a personal pronoun or reflexive pronoun. This type thus also includes Dutch and the Scandinavian languages, in which a reinforcement of the basic reflexive pronoun (zich, sig) by the intensifier has been grammaticalized for all stereotypically other-directed predicates. Selectional Restrictions

The following parameter of variation makes intensifiers very different from focus particles, but is not all that surprising in view of the adjective-like inflectional behavior exhibited by these expressions in many languages: intensifiers may or may not exhibit selectional restrictions with regard to the nominal constituents they interact with. It is, however, not the language as a whole that imposes specific restrictions but specific intensifiers in individual languages. Examples of intensifiers with no, or at least almost no, combinatorial restrictions are German selbst, Spanish mismo and Amharic -ras-. A differentiation quite frequently found among the distribution of intensifiers relates to the opposition between animate and inanimate nouns. In Japanese zisin is only used with animate nouns, whereas zitai is used with inanimate nouns: Japanese (10a) Taroo-zisin kyouzyu-o sonkeisiteiru. Taro-self professor-ACC honours 'Taro himself honours the professor.' (10b) Kono hon-zitai/*-zisin yomunoga muzukasii. this book-self to.read difficult.is 'This book itself is difficult to read.'

Other frequent differentiations relate to the distinction between local, temporal, and abstract nouns vs. the rest (e.g., French même vs. lui-même, elle-même) and to person distinctions (1st and 2nd person vs. 3rd person in Basque), so that the relevant selectional restrictions are describable in terms of the well-known Animacy Hierarchy. See also: Anaphora, Cataphora, Exophora, Logophoricity; Coreference: Identity and Similarity; Pronouns; Selectional Restrictions.

Bibliography Eckardt R (2002). 'Reanalyzing selbst.' Natural Language Semantics 9 (4), 371–412. Edmondson J A & Plank F (1978). 'Great expectations: An intensive self analysis.' Linguistics and Philosophy 2, 373–413. Faltz L (1985). Reflexivization – a study in universal syntax. New York: Garland. Gast V (2003). The grammar of identity: intensifiers and reflexives as expressions of an identity function. Unpubl. Ph.D. dissertation, FU Berlin. Gast V & Siemund P (2004). 'Identical agents – on the interrelation between adverbial intensifiers and reflexives.' Special issue of Linguistics (in print). Heine B (2000). 'Polysemy involving reflexive and reciprocal markers in African languages.' In Frajzyngier Z & Curl T (eds.) Reciprocals – form and function. Amsterdam: Benjamins. 1–29. Heine B (2003). 'Accounting for creole reflexive forms.' Lingua 2003 (in print). Hole D (1998). 'Intensifiers in Mandarin Chinese.' Sprachtypologie und Universalienforschung 51, 49–68. Hole D (2002). 'Agentive selbst in German.' In Katz G, Reinhard S & Reuter P (eds.) Sinn und Bedeutung 6. Proceedings of the sixth meeting of the Gesellschaft für Semantik 2001. Osnabrück: Institute of Cognitive Science. 133–150 [CD-Rom/Online publication: http://www.cogsci.uni-osnabrueck.de]. Huddleston R & Pullum G K (2002). The Cambridge grammar of the English language. Cambridge: Cambridge University Press. Kemmer S (1993). The middle voice: a typological and diachronic study. Amsterdam: Benjamins. Kibrik A E & Bogdanova E (1995). 'Sam kak operator korrekcii ožidanij adresata.' Voprosy jazykoznanija 3, 4–47. König E (1991). The meaning of focus particles: a comparative perspective. London: Routledge. König E (1997). 'Towards a typology of intensifiers.' In Caron B (ed.) Proceedings of the XVIth International Congress of Linguists. Pergamon, CD-ROM. König E & Siemund P (2000). 'Intensifiers and reflexives – a typological perspective.' In Frajzyngier Z & Curl T (eds.) Reflexives – forms and functions. Amsterdam: Benjamins. 41–74.

König E (2001). 'Intensifiers and reflexive pronouns.' In Haspelmath M et al. (eds.) Language typology and language universals – an international handbook. Berlin: Mouton de Gruyter. 747–760. König E (2003). 'Intensification and reflexivity in the languages of Europe: Parameters of variation and areal features.' In Loi Corvetto I (ed.) Dalla linguistica areale alla tipologia linguistica. Roma: Il Calamo. 229–252. Moravcsik E (1972). 'Some cross-linguistic generalizations about intensifier constructions.' Chicago Linguistic Society 8, 271–277.

Moyne J A (1971). 'Reflexive and emphatic.' Language 47, 141–163. Siemund P (1999). Intensifiers: a comparison of English and German. London: Routledge. Zribi-Hertz A (1995). 'Emphatic or reflexive? On the endophoric character of French lui-même and similar complex pronouns.' Journal of Linguistics 31, 333–374.

Intention and Semantics S Barker, University of Nottingham, Nottingham, UK © 2006 Elsevier Ltd. All rights reserved.

A compelling idea is that words have meanings because speakers who use them have intentions of some kind. Intentions underpin semantics. One proposal about how this might work is based on Locke's idea (1975) that the function of a language is to express preexisting thoughts; thoughts are correlated with sentences through intentions. So intentions are the glue binding thoughts to words. Grice (1958, 1969) adopts this Lockean view. Grice divides verbal meaning into two kinds: so-called 'speaker-meaning' and 'sentence-meaning.' Speaker-meanings are the contents of particular utterances on occasions. They are the contents of illocutionary acts (in the sense of Austin, 1962) performed by production of sentences. Illocutionary acts include assertions, orders, questions, etc. Speaker-meanings may diverge from sentence-meanings, which are literal meanings. U may employ 'The church is a rock' but convey as speaker-meaning that the church is a strong support for its members; U may utter 'The meal is edible,' conveying as speaker-meaning that it is not very good. Grice begins his analysis of meaning by examining the intentions underpinning speaker-meaning.

The intention is a reflexive intention: an intention that falls within its own scope. Many philosophers have been wary of the reflexive treatment of communication and have attempted to explicate communicative intentions in terms of hierarchies of intentions; see Grice (1969), Strawson (1964), and Schiffer (1972). But the results are unwieldy. Speaker-meanings are, then, acts with the form C. The different types of speaker-meanings or illocutionary acts – assertions, orders, etc. – are determined by the different characteristics r intended. Grice (1971: 123) analyzes assertions and orders thus:

GA: U asserts (to H) that P by uttering S iff U utters S reflexively intending that H believe that U believes that P.

GO: U orders (H) to do F by uttering S iff U utters S reflexively intending that H form an intention to F.

Such proposals about r are problematic, however. Restricting ourselves to assertion, GA is refuted by apparent cases in which U lacks an intention that H believe that P – e.g., where U either (a) is indifferent as to H's belief, because, for example, U is engaged in polite conversation without intending to persuade, or (b) believes H won't believe her (see Alston, 2000). Bach and Harnish (1979) suggest that the primary intention should rather be: that H has a reason to believe that U believes that P.

If, as Recanati (1986) suggests, reasons are defeasible, then U can provide a reason for H to believe P, even though U knows it is undermined by further information, such as that H believes U is a liar. Assuming that some such explication of r works, how can we use speaker-meaning to analyze sentence-meaning? Grice (1971) introduces practices and conventions thus:

SM1: S means that P for U iff U has the practice that if she desires to reflexively intend that H gain r, then she (may) utter S.


The 'may' here can be taken as rule permissibility or epistemic 'may.' SM1 allows words to be ambiguous, and for there to be different ways of saying the same thing. Grice notes that the regularity governing use of the sentence S does not itself have to be one that correlates use of S with full-fledged reflexive intentions. Rather, we simply need:

SM2: S means that P for U iff U has the practice that if she desires to intend that H gain r, then she (may) utter S.

If the conventions/practices governing sentences have this form, we can then explain how speaker-meanings – full illocutionary acts – emerge in particular utterance events. In uttering S, U intends H to think thus:

U's policy is that if U intends that H have r, then U (may) utter S.
U uttered S, so (given context) I can infer that U intends that H have r.
U intends that H have r. So, I should have r.

Thus, U is intending that H have r partly in virtue of H's recognizing U's intention that she have r. But this is just reflexively intending that H have r. To analyze word-meaning we need regularities such as:

WM: The word O means O for U iff U has the practice that if U desires that H believe U believes something about O, then U (may) utter a sentence '. . . O . . . .'

Given that U has such dispositions for the basic vocabulary of her language, we can deduce the kinds of dispositions that will underpin her production of novel (unuttered) sentences. We provide thereby a compositional account of the meaning of sentences in U's language. Platts (1979: 86–94) doubts this last point. He thinks the very thing we are meant to be explicating, sentence-meaning, will have to be brought in to fix the intentions that U would have if she were to use novel sentences. Blackburn (1984: 127–129) objects that it is just the practices and dispositions themselves that fix what we would intend, and thus mean. Nothing else need back up such dispositions.

A more serious objection to the Gricean account of meaning is that repertoire rules for the words require us to appeal to the semantic relation of aboutness. WM invokes U's believing something about object O. Aboutness is related to denotation, that is, a representational relation between mental state or word and world. But meanings, one might think, are simply representations. So, as Devitt and Sterelny (1999: 150–151) point out, the Gricean analysis effectively leaves unexplained the realm of content as such. Perhaps we need a Fodorean (1975) language of thought hypothesis to finish the story. Or perhaps there is some way of giving a pragmatic reduction of representation in terms of an inferential semantics (Brandom, 1994). Or we should see Grice's analysis of speaker-meaning and its relation to sentence-meaning as merely providing an explanation of what it is for particular speakers to use one language – whose representational contents are abstractly defined in terms of truth-conditions – rather than another (Lewis, 1975). If so, as Blackburn (1984: 134) points out, there is no rivalry between intention-based semantics and formal truth-conditional approaches, contra Strawson (1964).

Or there is this possibility. An intention-based semantics might change tack by denying that meanings, qua semantic interpretations, are representations as such. This approach is found in Barker (2004). It contends that the semantic interpretations of words and sentences are not representational contents but speech act types – acts whose nature is not merely representational. These speech acts are a specific kind called 'proto-speech-acts.' A proto-act is uttering a word, phrase, or sentence and advertising certain intentions to denote, represent, or communicate. Advertising is engaging in the behavior characteristic of a speaker who, following certain rules, intends to do something. In uttering a name, for example Pegasus, U utters a term, Pegasus, and advertises an intention to denote an object, something called Pegasus. If U utters 'I saw Pegasus,' she wants H to believe she has the intention she advertises, but if she asserts 'Pegasus does not exist,' she does not. What defines a name's meaning is not any object denoted, since like Pegasus it might be empty, but the proto-referring act associated with it. The meaning of Pegasus is that proto-referring act all of whose tokens are nodes of a certain referential tree: that subset of uses of the name Pegasus that we group together as instances of the name for the mythical flying horse.

The meaning of a declarative sentence is a 'proto-assertion.' A proto-assertion involves two parts: advertising (a) a representational intention and (b) a communicative intention. In uttering 'Snow is white,' U advertises intentions (a) to represent that snow is white, and (b) to defend the intention to represent snow's whiteness. Defending is dialectical engagement with an audience H: in defending an intention to represent that snow is white, one wants H to accept or reject such an intention in her own case, and to give reasons for rejection. H judges correct an assertion of 'Snow is white' – where U really does intend to defend an intention to represent that snow is white – iff H accepts what U defends. Assertion of 'Snow is white' is a report, since what is defended is a representational intention.


But not all assertions are reports. Utterance of 'Haggis is tasty' is an expression of taste: U advertises intentions to (a) represent her possession of a gustatory preference state of liking haggis, and to (b) defend that gustatory preference state. Preference states are not representational. H judges correct U's assertion iff H accepts the gustatory preference state in her own case, not iff H thinks U has represented correctly her, U's, state.

In this framework, all logically complex sentences have expressive proto-assertions as their meanings. Negations express rejective states. In uttering 'Snow is not white,' U advertises intentions to (a) represent that she has a rejective state with respect to intending to represent that snow is white, and to (b) defend that rejective state. In uttering 'Haggis is not tasty,' U expresses rejection of the gustatory property. And so on for other logically complex sentences. Statements of the form 'S is true' are expressive as well; U expresses her acceptance of the state defended in assertion of S. Because sentence-meanings are not representations, we are not committed to logically complex entities in the world, such as negative or universal facts, or mysterious properties of truth.

A compositional semantics can be built through constructing meanings in terms of proto-acts, proto-referrings, and proto-assertions. Proto-assertions, for example, can embed in logical compounds. In 'either S or R,' S and R are proto-asserted. Hence advertising intentions is weaker than the condition articulated above of giving a defeasible reason to believe.

This approach does not attempt to explicate representation in speech act terms. Rather, it displaces representation as the keystone of meaning. Names and sentences don't have to denote/represent to be meaningful. Truth-bearers are not propositions, qua representational contents, but assertions: acts of defending states. This account does not tell us what denotation and representation are, but, unlike the Gricean approach, it is not committed to saying that meaning resides in such relations holding. The result is an intention-based semantics that seriously challenges the dominant truth-conditional approach to meaning.

See also: Assertion; Cognitive Semantics; Context Principle; Cooperative Principle; Default Semantics; Evolution of Semantics; Expression Meaning vs Utterance/Speaker Meaning; Human Reasoning and Language Interpretation; Ideational Theories of Meaning; Implicature; Mentalese; Speech Acts and Grammar; Thought and Language; Truth Conditional Semantics and Meaning.

Bibliography

Alston W P (2000). Illocutionary acts and sentence meaning. Ithaca, NY: Cornell University Press.
Austin J (1962). How to do things with words. Urmson J O & Warnock G J (eds.). Oxford: Oxford University Press.
Bach K & Harnish R M (1979). Linguistic communication and speech acts. Cambridge, MA: MIT Press.
Barker S J (2004). Renewing meaning. Oxford: Clarendon Press.
Blackburn S J (1984). Spreading the word. Oxford: Clarendon Press.
Brandom R (1994). Making it explicit. Cambridge, MA: Harvard University Press.
Devitt M & Sterelny K (1999). Meaning and reality. Cambridge, MA: MIT Press.
Fodor J (1975). The language of thought. New York: Crowell.
Grice P (1957). 'Meaning.' Philosophical Review 66, 377–388.
Grice P (1969). 'Utterer's meaning and intentions.' Philosophical Review 78, 147–177.
Grice P (1971). 'Utterer's meaning, sentence-meaning, and word-meaning.' In Searle J (ed.) The philosophy of language. Oxford: Oxford University Press.
Lewis D K (1975). 'Languages and language.' In Gunderson K (ed.) Minnesota studies in the philosophy of science VII. Minneapolis: University of Minnesota Press. 3–35.
Locke J (1975). An essay concerning human understanding. Nidditch P (ed.). Oxford: Oxford University Press.
Platts M (1979). The ways of meaning: an introduction to a philosophy of language. London: Routledge & Kegan Paul.
Recanati F (1986). 'On defining communicative intentions.' Mind and Language 1, 213–241.
Schiffer S (1972). Meaning. Oxford: Clarendon Press.
Strawson P (1964). 'Intention and convention in speech acts.' Philosophical Review 73, 439–460.

Interpreted Logical Forms
M Montague, University of California, Irvine, CA, USA
© 2006 Elsevier Ltd. All rights reserved.

Interpreted logical forms (ILFs) were originally introduced by Harman (1972) to answer the question: What are the objects of propositional attitudes (belief, desire, hope, regret, etc.)? The theory has since been developed by a number of philosophers, most notably Higginbotham (1986, 1991), Segal (1989), Larson and Ludlow (1997), and Larson and Segal (1995). Seymour (1996) has suggested that ILF theories can also solve certain puzzles in quotational environments.

Propositional Attitude Reports

Consider the following two propositional attitudes:

(1) Lois believes that Superman can fly.
(2) Lois believes that Clark Kent can fly.

It is an 'intentional fact' that Lois may have the attitude displayed in (1) without having the attitude displayed in (2). It is a 'semantic fact' that sentences reporting these propositional attitudes may differ in truth value. Are these facts the same? No: one is a fact about intentionality, and the other is a fact about semantics. Is this difference important? According to the theory of ILFs advanced by Ludlow, Larson, and Segal, it is. Giving a semantics for propositional-attitude reports, which is the goal of the ILF theories, has virtually nothing to say about propositional attitudes themselves: "The ILF theory . . . (as a semantic theory) . . . addresses only the truth conditions of sentences involving believe, think, assert, etc., it does not address the beliefs, thoughts, and assertions of persons" (Larson and Ludlow, 1997: 1035). By contrast, the ILF theory proposed by Higginbotham (1986, 1991) countenances a closer relationship between the theory of propositional attitudes and the semantics of attitude reports.

Since Frege (1892), a huge amount of attention has been given to the aforementioned semantic fact, which has become known as 'Frege's Puzzle.' Frege discovered that if 'Superman' and 'Clark Kent' refer to their ordinary referents in (1) and (2), then, according to the principle of substitution – that co-referring expressions may be substituted salva veritate – (1) and (2) should share a truth value. Since they intuitively do not, the principle of substitution seems to fail in propositional-attitude contexts. Frege's puzzle has given rise to the very difficult project of giving a semantic theory that deals satisfactorily with propositional-attitude reports.

Frege offered a solution based on his sense/reference distinction. In addition to extensional entities (individuals, sets, and relations-in-extension) he postulated senses or modes of presentation, which we now sometimes call intensions, and which are, roughly, ways of determining referents. Briefly, in attitude contexts, 'Superman' and 'Clark Kent' do not refer to their ordinary referents, but to their ordinary senses. Since 'Superman' and 'Clark Kent' refer to different senses in (1) and (2), we shouldn't expect to substitute them salva veritate. In this way Frege preserves the principle of substitution, but at a cost. First, he introduces intensions, which seem to many to be dubious entities (Quine, 1961). Second, he violates Davidson's (1968) requirement of 'semantic innocence' – that expressions should have the same semantic values in all contexts.

ILF theories aim to offer a semantic theory of propositional-attitude reports that can avoid these supposed problems. They aim to give a purely extensional semantics for attitude contexts, while preserving semantic innocence. Achieving these aims would make them extremely attractive. (See Larson and Segal, 1995: 437 for their rejection of intensional strategies for preserving semantic innocence.) ILF theories are standardly embedded in a Davidsonian truth-theoretic semantics for natural language. The introduction of ILFs allows one to provide truth conditions for sentential complements embedded in propositional-attitude reports within the truth-theoretic framework. (For a clear explanation of why the Davidsonian framework, without ILFs, is not adequate to capture the truth conditions of propositional-attitude reports, see Larson and Segal, 1995: 415–418.)

What Are ILFs?

The basic idea is that propositional-attitude verbs express relations between an agent and an 'interpreted logical form.' The 'logical form' part of an ILF is, roughly, a sentential complement (a syntactic item), which usually takes one of the following forms: 'a is F,' 'a's being F,' or 'that a is F.' (Lexical items are part of an expression's logical form.) The 'interpretation' part of an ILF is the assignment of referents to parts of the sentential complement. ILFs can be represented with phrase structure trees whose terminal nodes are pairings of the relevant lexical items with their referents, and whose nonterminal nodes are pairings of the relevant phrasal categories with their referents. Simplifying, the sentential complement in an attitude report such as

(3) Lois believes John is funny

receives the following ILF:

[Tree diagram not reproduced: a phrase structure tree for 'John is funny' in which each node is paired with its referent.]
In a nutshell, ILFs are a combination of syntactical (linguistic) and referential (nonlinguistic) material – they are hybrids relative to purely syntactical approaches and purely referential approaches. (See Larson and Segal, 1995: 419–422 for a discussion of purely syntactical approaches.) Since ILFs conjoin only lexical items and extensional referents, ILF theories provide a purely extensional semantics for propositional-attitude reports.
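The tree-with-referents idea can be made concrete with a small illustrative sketch. The following Python fragment is not drawn from the ILF literature – the class name, labels, and stand-in referents are all hypothetical – but it exhibits the identity conditions that matter: because lexical items are constituents of ILFs, two ILFs can differ even when every referent coincides.

```python
# A toy model of ILFs as trees of (syntactic item, referent) pairs.
# Illustrative only: labels and referents are invented stand-ins.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ILFNode:
    syntax: str                          # lexical item or phrasal category
    referent: str                        # stand-in for the assigned referent
    children: Tuple["ILFNode", ...] = ()

DOG = "the dog itself"                   # one dog ...
BARKS = "the set of barkers"             # ... and one predicate extension

def ilf(name: str) -> ILFNode:
    """Build the ILF for '<name> barks': terminal nodes pair words with
    referents; nonterminal nodes pair phrasal categories with referents."""
    return ILFNode("S", "a truth value", (
        ILFNode("NP", DOG, (ILFNode(name, DOG),)),
        ILFNode("VP", BARKS, (ILFNode("barks", BARKS),)),
    ))

# Co-referring names still yield distinct ILFs, since the lexical items
# differ -- the feature exploited by the name puzzle discussed below.
assert ilf("Fido") != ilf("Rex")
```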

Puzzles and Problems

An adequacy test for any semantics of the attitudes is how well it does in solving traditional puzzles. To begin, consider the way in which the linguistic and nonlinguistic features of ILFs allow them to solve one puzzle involving names and one puzzle involving demonstratives. I call these the 'simple name puzzle' and the 'simple demonstrative puzzle.' (The use of the term 'simple' here is of course theory-relative. Russellians (e.g., Salmon, 1986, and Soames, 1987) have great difficulty with name puzzles.)

The Simple Name Puzzle

Most agree that the following two belief reports can differ in truth value, despite the fact that 'Fido' and 'Rex' refer to the same dog (this is a variant of the aforementioned Clark Kent/Superman case):

(4) John believes that Fido barks.
(5) John believes that Rex barks.

John, for example, may know the dog only by the name 'Fido' and so believe only (4). Because ILFs are partly constituted by lexical items, the ILF theory can easily account for the difference in truth value between (4) and (5). Since 'Fido' and 'Rex' are different lexical items, the ILFs associated with the sentential complements in (4) and (5) will be different.

The Simple Demonstrative Puzzle

Consider

(6) John believes that that is funny

used to report John's referring to a comedian's skit with the use of the embedded demonstrative. Now consider (6) used to report John's referring to a TV show with the embedded demonstrative: clearly a different belief report. A purely sententialist theory of the attitudes (one that appeals only to syntactical items) has difficulty accounting for this difference. In both instances, John would be standing in relation to the same sentential complement, 'that is funny.' Since ILFs contain the referents of lexical items, they can easily account for differences between belief reports that share sentential complements. In this case, one ILF would contain the comedian's skit as the semantic value of the embedded 'that,' and the other would contain the TV show.

These two puzzles show the power of ILF theories. The lexical features of ILFs allow them to be fine-grained enough to solve simple name puzzles without postulating intensional entities. The objectual features of ILFs prevent them from being too coarse-grained (as purely syntactical theories are) to solve simple demonstrative puzzles. Consider now two further puzzles that prove to be more difficult for ILF theories. I call these 'the hard name puzzle' and 'the hard demonstrative puzzle.'

The Hard Demonstrative Puzzle

Suppose that John assents to

(7) That is a philosopher

while pointing to his professor in philosophy class (context 1), but denies (7) while pointing to a man seen from behind (he doesn't know it's his professor) at a party (context 2). Relative to these different contexts, then, intuitively,

(8) John believes that that is a philosopher

according to the context in which it is uttered, can differ in truth value. On its face, since the sentential complement, 'that is a philosopher,' and the referents assigned to its parts are the same in both contexts, ILF theories appear unable to account for the possible difference in truth value.

Larson and Ludlow (1997) respond to the hard demonstrative puzzle by appealing to a Burgean (1974) idea about the semantic values of demonstratives: according to Burge, the semantic value of a demonstrative is a pairing of what is demonstrated and the act of demonstration. By including acts of demonstration in the semantic values of demonstratives in attitude contexts, Larson and Ludlow can account for the truth value differences of (8), since the acts of demonstration will be different in the two contexts. Relative to context 1 the semantic value of the demonstrative will be <x, a1>, where x is the philosopher and a1 is the speaker's demonstrative act, and relative to context 2 the semantic value will be <x, a2>, where a2 is a distinct demonstrative act from a1.

Pietroski (1996) argues that it is unclear whether this response is non-Fregean and well motivated. That is, Pietroski observes that acts of demonstration can affect the truth of (8) but not the truth of (7). This, he claims, concedes the Fregean point that a demonstrative way of thinking of a referent affects the truth value of a sentence with an embedded demonstrative. Pietroski suggests that Larson and Ludlow can avoid this complaint by agreeing with Burge (1974), on grounds independent of attitude contexts, that the semantic values of demonstratives are always ordered pairs of the thing demonstrated and the act of demonstration.

A suggestion made by Higginbotham (1991) may provide an alternative way of dealing with cases like (8). In discussing an objection to both Cresswell's (1985) structured meanings account of the attitudes and Larson and Ludlow's theory of ILFs, Higginbotham argues that both fail to include a crucial parameter in their accounts: ". . . namely that complement sentences are to be understood as if their speakers said them" (1991: 352). Lau (1995) argues that, applying this parameter to (7): relative to context 1, in uttering (7) John would understand himself as referring to his philosophy professor, and (8) would turn out true; but relative to context 2, John would not understand himself as referring to his philosophy professor, and (8) would turn out false. Although this suggestion accounts for this case, more would need to be said about how this new parameter should be incorporated into the semantics.

The Hard Name Puzzle

Kripke's (1997) Paderewski case (and others like it) poses a more difficult challenge for ILF theories. Suppose John comes to know his shy neighbor, who never plays music at home and does not own a piano, as 'Paderewski.' Thus (9) is true:

(9) John does not believe that Paderewski is a pianist.

John also attends many concerts where he meets his favorite pianist, also called 'Paderewski.' Although these are the same Paderewski, John does not realize this. So, it is also true that

(10) John believes that Paderewski is a pianist.

There seems to be no way of distinguishing John's beliefs either in terms of lexical items (since there is only one linguistic item, 'Paderewski') or in terms of semantic values (since there is only one referent, Paderewski). So ILF theories apparently attribute contradictory beliefs to John. The different responses to this puzzle (and the complexity of some of them) are evidence that it is particularly difficult for ILF theories. Larson and Ludlow (1997) suggest that although (9) and (10) contain the homophonous 'Paderewski,' there are really distinct syntactical items in play – 'PaderewskiI' and 'PaderewskiII' (much as there are distinct syntactical items associated with 'bank'). They argue that since there are actually two names in (9) and (10), different ILFs would be assigned to them, thus avoiding the assignment of contradictory beliefs to John.
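Larson and Ludlow's move can be pictured with a toy sketch in the spirit of the one above (again purely illustrative; the indexed entries are hypothetical devices, not their notation): if the lexicon contains two distinct entries that merely share a form, ILFs built from them differ even though both the form and the referent coincide.

```python
# Two homophonous lexical entries, on the model of the two entries
# for 'bank'. Illustrative only; the index field is a hypothetical
# device for keeping the entries apart.

from dataclasses import dataclass

@dataclass(frozen=True)
class LexicalEntry:
    form: str        # shared spelling/sound: 'Paderewski'
    index: int       # distinguishes the homophonous entries

paderewski_1 = LexicalEntry("Paderewski", 1)   # the shy neighbor's name
paderewski_2 = LexicalEntry("Paderewski", 2)   # the pianist's name

# Same form and same bearer, but distinct lexical items -- so the ILFs
# for (9) and (10) differ, and no contradiction is ascribed to John.
assert paderewski_1.form == paderewski_2.form
assert paderewski_1 != paderewski_2
```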

Many (e.g., Forbes, 1996; Taylor, 1995; Richard, 1990) have found this response implausible. They argue that, unlike the term 'bank,' there is only one name 'Paderewski' in our common language. John certainly doesn't seem to have had a private baptism, introducing two names into his idiolect; rather, he picked up the name in the ordinary way. Since he picked up the name in the ordinary way, and our language has only one name, John is using only one name. (See Pietroski, 1996: 366–368 for a different objection to Larson and Ludlow's solution to the Paderewski puzzle.)

Ludlow (2000) attempts a different response to the Paderewski puzzle, arguing that it dissolves on a correct understanding of language and the goals of ascribing propositional attitudes. First, language is not an external, social object, but what Chomsky (1986) calls an I-language – each person has her own lexicon, relying on substantial overlap with others for communication. Second, in ascribing propositional attitudes we are not trying to describe what is going on in an agent's head. Rather, we are helping a hearer construct a theory of an agent's mental life. Our focus, therefore, should be on the relationship between the ascription and the hearer, not on the ascription and the bearer of the attitude. Working with the hearer is a complicated process that involves theories of tacit belief, of goals of belief ascription, and of belief ascription logistics. But most importantly for Paderewski cases, the speaker and hearer are engaged in a negotiation about the best expressions for ascribing a propositional attitude to the agent. In Paderewski cases we need some way of distinguishing between an agent's beliefs, and the speaker and hearer negotiate this by choosing distinct expressions. Signaling John's beliefs about Paderewski qua piano player will involve an appropriate choice of expression – perhaps speaker and hearer will use the expression 'Paderewski qua piano player.' Similarly for John's beliefs about Paderewski qua shy neighbor. The upshot is that we do not get a straightforward contradiction of the form (Fa & ¬Fa) – our different I-languages grant us fluidity with our lexicons, making contradictions across discourses unlikely.

Prospects

Interestingly, this response to the Paderewski case is related to another common complaint about ILF theories. Maintaining that ILFs are partly constituted by English lexical items results in several related difficulties: if a monolingual French speaker believes that Fido barks, he does not believe an ILF with English expressions; ILF theories seem incapable of capturing the idea that English and French speakers believe the same thing (or something very similar) when they believe that Fido barks; phonetically distinct sentential complements cannot co-refer. These problems result from the plausible idea that a semantic theory ought to capture what is grasped in understanding sentences of a language, and speakers using different languages can presumably grasp the same thing. This immediately forges a close link between the objects of the attitudes and the semantics of attitude ascriptions.

These problems may be dealt with semantically or pragmatically. Davidson (1968), Lepore and Loewer (1989), and Higginbotham (1986) suggest a semantic solution by building a notion of same-saying or similarity into the truth conditions for attitude reports. Agents stand in intentional relations to ILFs similar to (or that say the same thing as) those involving English lexical items. The similarity or same-saying relation provides a way of capturing what is common among thinkers across languages and thinkers without linguistic abilities. This is accomplished by indicating a close relation between the theory of propositional attitudes and the semantics of attitude ascriptions.

Larson and Ludlow (1997), Larson and Segal (1995), and Larson (2002) offer a pragmatic solution. Similarity of propositional attitude or same-saying is a matter of usage, not content: it is a pragmatic matter whether two propositional-attitude sentences can be used to report the same attitude. The form of this pragmatic solution has been cast in different ways. In Larson and Ludlow (1997), choosing the best propositional-attitude sentence is based on the same considerations Ludlow (2000) appealed to in addressing the Paderewski puzzle: speaker and hearer work out the best attitude ascriptions based on theories of tacit belief, goals of belief ascription, and belief ascription logistics. Larson and Segal (1995) rely on the notion of expression to elucidate their pragmatic solution. In short, when an ILF is used to report an agent's belief, it is not that the agent is standing in a relation to the ILF; rather, the ILF is used to express what the agent believes: "to believe an ILF is to have a belief expressed by it, to desire an ILF is to have a desire expressed by it, and so on" (Larson and Segal, 1995: 445). Explicating believing and desiring in terms of the notion of expression is not part of the semantics – it is an explanation of what believing and desiring an ILF involves. So, to believe that grass is green is to believe an ILF, which means to have a belief that is expressed by that ILF.

But what does an ILF express? The answer to this question seems to present ILF theorists with a dilemma. Either an ILF expresses a proposition, something that transcends any specific language, or an ILF expresses something that involves English expressions. The first horn of the dilemma seems to lead in the direction of Fregean propositions, and the second horn does not seem capable of capturing what a French speaker believes.

In summary, ILFs are combinations of linguistic items and nonlinguistic items. Their linguistic features are at once their most coveted and their most objectionable feature. They provide a solution to a traditional name puzzle without postulating intensional entities. But if the semantics of attitude reports indicates what we stand in relation to when we believe, desire, and hope, it is problematic to appeal to English expressions. If one chooses a semantic solution to this problem, building a similarity or same-saying relation into the truth conditions, then a careful consideration of these relations is required. If one chooses a pragmatic solution, one must decide whether one has given up too much in surrendering the idea that the semantics of attitude reports overlaps in a natural way with the theory of the attitudes.

See also: Anaphora, Cataphora, Exophora, Logophoricity; Dictionaries; Dictionaries and Encyclopedias: Relationship; Logical Form; Meaning, Sense, and Reference; Montague Semantics; Operators in Semantics and Typed Logics; Pragmatics and Semantics; Pronouns; Proper Names; Proper Names: Philosophical Aspects; Propositional Attitude Ascription; Propositional Attitudes; Quantifiers; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Representation in Language and Mind; Rigid Designation; Semantics–Pragmatics Boundary; Sense and Reference; Thought and Language.

Bibliography

Bealer G (1993). 'A solution to Frege's puzzle.' Philosophical Perspectives 7: Language and Logic, 17–60.
Burge T (1974). 'Demonstrative constructions, reference and truth.' The Journal of Philosophy 71, 205–223.
Chomsky N (1986). Knowledge of language. New York: Praeger.
Crane T (2001). Elements of mind. Oxford: Oxford University Press.
Cresswell M J (1985). Structured meanings. Cambridge, MA: MIT Press.
Davidson D (1968). 'On saying that.' In Davidson D (ed.) Inquiries into truth & interpretation. Oxford: Clarendon Press. 93–108.
Den Dikken M, Larson R & Ludlow P (1997). 'Intensional "transitive" verbs and concealed complement clauses.' In Ludlow P (ed.).
Fiengo R & May R (1996). 'Interpreted logical form: a critique.' Rivista di Linguistica 8, 349–374.
Forbes G (1990). 'The indispensability of Sinn.' Philosophical Review 99, 535–563.
Forbes G (1996). 'Substitutivity and the coherence of quantifying in.' Philosophical Review 105, 337–372.
Forbes G (2000a). 'Intensional transitive verbs: the limitations of a clausal analysis.' Unpublished ms., http://www.Tulane.edu/forbes/preprints.html.
Forbes G (2000b). 'Objectual attitudes.' Linguistics and Philosophy 23, 141–183.
Forbes G (2002). 'Intensionality.' Proceedings of the Aristotelian Society supplementary volume 76, 75–99.
Frege G (1892). 'On sense and nominatum.' In Martinich A P (ed.) The philosophy of language, 3rd edn. Oxford: Oxford University Press. 186–198.
Harman G (1972). 'Logical form.' Foundations of Language 9, 38–65.
Higginbotham J (1986). 'Linguistic theory and Davidson's program in semantics.' In Lepore E (ed.) Truth and interpretation: perspectives on the philosophy of Donald Davidson. Oxford: Basil Blackwell. 29–48.
Higginbotham J (1991). 'Belief and logical form.' Mind & Language 6, 344–369.
Kripke S (1997). 'A puzzle about belief.' In Ludlow P (ed.). 875–920.
Larson R (2002). 'The grammar of intensionality.' In Peter G & Preyer G (eds.) Logical form and language. Oxford: Clarendon Press. 228–262.
Larson R & Ludlow P (1997). 'Interpreted logical forms.' In Ludlow P (ed.). 993–1039.
Larson R & Segal G (1995). Knowledge of meaning. Cambridge, MA: MIT Press.
Lau J (1995). 'Belief reports and interpreted-logical forms.' Unpublished manuscript, http://www.hku.hk/philodep/joelau/phil/ILF.htm.
Lepore E & Loewer B (1989). 'You can say that again.' In French P, Uehling T & Wettstein H (eds.) Midwest studies in philosophy XIV: contemporary perspectives in the philosophy of language II. Notre Dame: University of Notre Dame Press.
Ludlow P (ed.) (1997). Readings in the philosophy of language. Cambridge, MA: MIT Press.
Ludlow P (2000). 'Interpreted logical forms, belief attribution, and the dynamic lexicon.' In Jaszczolt K M (ed.) The pragmatics of propositional attitude reports. Oxford: Elsevier Science.
Pietroski P (1996). 'Fregean innocence.' Mind & Language 11, 338–370.
Quine W O (1961). From a logical point of view. New York: Harper and Row.
Richard M (1990). Propositional attitudes. Cambridge: Cambridge University Press.
Salmon N (1986). Frege's puzzle. Cambridge, MA: MIT Press.
Segal G (1989). 'A preference for sense and reference.' The Journal of Philosophy 86, 73–89.
Seymour D (1996). 'Content and quotation.' Rivista di Linguistica 8, 309–330.
Soames S (1987). 'Direct reference, propositional attitudes, and semantic content.' Philosophical Topics 15, 47–87.
Taylor K (1995). 'Meaning, reference, and cognitive significance.' Mind & Language 10, 129–180.

Interrogatives
B Sherman, Princeton University, Princeton, NJ, USA
© 2006 Elsevier Ltd. All rights reserved.

Many problems in semantics are typically expressed as problems about how to define a function from the form of a sentence to the conditions in which the sentence can be uttered truly (or, alternatively, to a proposition that can be expressed by an utterance of the sentence). The main problem posed by interrogatives for the truth-conditional approach to semantics is rather different. They do not give rise to a problem of how to define such a function from the form of a sentence to its truth conditions. Rather, they give us reason to suspect that there is no such function. For interrogatives, unlike indicatives, do not seem to have truth conditions, nor do they seem to express propositions that themselves have truth conditions. The main problem interrogatives pose is metasemantic: to say what a semantics for interrogatives must do. In the 'Metasemantics' section of this article, I survey three attempts to deal with this question. In the 'Semantics' section, I present some basic-level semantic questions that would need to be addressed even if the metasemantic question were settled – though in all probability they will in fact help us to settle the metasemantic question.

Metasemantics

Above, I introduced one constraint on choosing a semantic framework for interrogatives: interrogative sentences are not true or false, so we should not assign to them some semantic object that is true or false, at least not without further explanation. Another is the Davidsonian constraint of 'semantic innocence': the fixed meaning of a word should not vary with the environment in which it occurs. This constraint generates a tension with any approach to the metasemantic problem that ignores the truth-conditional approach that works so nicely for indicative sentences. For this reason, the three dominant frameworks that I will focus on all treat the semantics of interrogatives in broadly truth-conditional terms. That is, in all of the frameworks to be discussed here, the meaning of an interrogative is in some way tied to a propositional object of some sort. In the rest of this section, I will look at the following three approaches to the metasemantic problem: the force/radical approach, the epistemic-imperative approach, and the question-as-answer approach.

According to the force/radical approach, sentences can be factored into two components: a force indicator and a propositional radical. Roughly, the propositional radical supplies the truth-conditional content of the sentence, and the force indicator suggests the attitude taken toward that content. So, on this approach, advanced by McGinn (1977), Davidson (1984), and Stainton (1999), sentences (1) and (2) have the same radical and different moods:

(1) The monkey is hungry.
(2) Is the monkey hungry?

The mood of the sentence indicates the force of a typical utterance of the sentence. Sentences in the indicative mood are typically used for asserting the proposition expressed by the radical. Sentences in the interrogative mood are typically used to ask whether the proposition expressed by the radical is true. This picture works well for yes/no questions, such as (2). What about wh-questions, such as (3)?

(3) What does the monkey want?

The radical of (3) could be thought of as a propositional function of the form in (4):

(4) λx: the monkey wants x

Some story would then be needed to explain why attaching an interrogative mood to (4) would yield the desired interpretation. In addition, it seems like the only mood one could attach to (4) is the interrogative mood. This seems out of character with the general approach, which emphasizes the possibility of attaching different moods to the same radical, thus requiring some explanation if the approach is to be defended.

The epistemic-imperative approach treats the meaning of an interrogative as an imperative concerning one's epistemic state. Sentence (2) would be analyzed as meaning what (5) means, and (3) would be analyzed as meaning what (6) means.

(5) Let it be the case that (or: Bring it about that) I know whether the monkey is hungry.
(6) Let it be the case that (or: Bring it about that) I know what the monkey wants.

On this approach, advocated by Åqvist (1965) and Hintikka (1983), the direct interrogative exemplified by (2) and (3) is analyzed in terms of the indirect interrogative contained in (5) and (6). In order for this approach to provide a semantic framework for interrogatives, an account of the semantics of indirect interrogatives is needed. Hintikka gave an analysis of 'know whether' in terms of 'know that.' Sentence (5) on this analysis would be equivalent to (7):

(7) Bring it about that either I know that the monkey is hungry, or I know that the monkey is not hungry.

However, this analysis of indirect interrogatives faces difficulties when embedded under verbs such as 'wonder':

(8) I wonder whether the monkey is hungry.

Sentence (8) does not seem susceptible to a similar sort of paraphrase, causing trouble for any view that attempts to analyze away the indirect interrogative.

Finally, the question-as-answer approach treats the interrogative, either direct or indirect, as denoting a question, where this is understood to be a special sort of semantic object. Since a question determines a set of answers, the set of answers is used as a surrogate object for the question (much in the way that a set of possible worlds is used as a surrogate object for propositions). The intuitive idea motivating this approach, originating in Hamblin (1958), is that to know the meaning of a question is to know what counts as an answer to it. Different versions of the approach differ in which set of answers is used. On the most standard version of this approach, advanced by Hamblin (1958) and Groenendijk and Stokhof (1994), the meaning of an interrogative should be thought of as the set of possible answers to a question, where this set of possible answers forms a partition of logical space. Every possible answer on this view is a complete answer to the question, and the set of answers jointly exhausts the possibilities of complete answers.

Karttunen (1977) provided two reasons for treating the meaning of an interrogative as the set of its true, rather than possible, answers. First, consider sentence (9):

(9) Who is elected depends on who is running.

According to Karttunen, this sentence says that the true answer to the subject position question depends on the true answer to the object position question. So treating the meanings of interrogatives as sets of true answers provides a more straightforward account of verbs such as 'depend on.' Second, consider sentences (10) and (11):

(10) John told Mary that Bill and Susan passed the test.
(11) John told Mary who passed the test.

Sentence (11) entails that John told Mary the truth, whereas this is not the case with sentence (10). By treating the indirect interrogative in (11) as denoting a set of true answers, this entailment is straightforwardly explained.

Karttunen's account has some counterintuitive consequences, however. For example, somebody who asks question (2) in a situation where the monkey is hungry intuitively asks the same question as somebody who asks question (2) in a situation where the monkey is not hungry. But on Karttunen's account, the meaning of the questions asked in the two situations is different, since the true answers to the questions asked are different.

A more general worry with the question-as-answer approach, articulated by Stainton (1999), is that it makes the domain of the interrogative – the set of objects that figure in the possible answers to it – a part of its meaning. It seems intuitive that one can understand a question without knowing anything about the objects that figure in the possible answers to it. For example, if an alien from outer space lands on Earth, we might ask the question 'What does the alien want?' Surely, among the possible answers to this question are objects that we've never seen or imagined before. But that doesn't stop us from understanding the meaning of the question, as it seems it should on the question-as-answer approach.

Although the approaches to the metasemantic problem presented here are dominant in the literature, they are by no means exhaustive. Ultimately, the success of a given framework will depend on the extent to which it is successful in accounting for various semantic phenomena.
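As a way of fixing ideas before moving on, the partition version of the question-as-answer approach lends itself to a toy model. The sketch below is illustrative only – the 'worlds' and their features are invented, and nothing here is drawn from Hamblin or Groenendijk and Stokhof beyond the bare idea that a question groups worlds into cells that agree on its complete answer.

```python
# A toy partition semantics for questions over a tiny space of 'worlds'.
# Illustrative only: the worlds and their features are invented stand-ins.

from itertools import groupby

worlds = [
    {"monkey_hungry": True,  "monkey_wants": "a banana"},
    {"monkey_hungry": True,  "monkey_wants": "grapes"},
    {"monkey_hungry": False, "monkey_wants": "nothing"},
]

def question_meaning(worlds, complete_answer):
    """Partition the worlds into cells agreeing on the question's complete
    answer; each cell plays the role of one possible (complete) answer."""
    ordered = sorted(worlds, key=complete_answer)
    return [list(cell) for _, cell in groupby(ordered, key=complete_answer)]

# The yes/no question (2) yields a two-cell partition of this space.
yes_no = question_meaning(worlds, lambda w: w["monkey_hungry"])
assert len(yes_no) == 2

# The wh-question (3) partitions more finely: one cell per complete
# answer to 'What does the monkey want?'.
wh = question_meaning(worlds, lambda w: w["monkey_wants"])
assert len(wh) == 3
```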

Semantics

One of the ongoing semantic debates in the semantics of interrogatives concerns the ambiguity that results from a wh-question containing a universal quantifier, as in (12):

(12) Who does everyone like?

According to one reading of (12), the question is asking which people are such that every person likes them. An appropriate answer might be, for example, 'Bill and Mary.' On the other reading of (12), the question is asking which person each person likes. An appropriate answer to this question would be a list of pairs of people of the form 'Bill likes Mary, Mary likes Sue, Sue likes Bill.' The debate specifically concerns this second reading of the question, called the pair/list reading. The question is how to account for that reading semantically.

There have been two main sides to the debate. According to one standard view, the role of the quantifier that occurs in the interrogative is to restrict the domain of the question. So (12) could be paraphrased roughly as (13):

(13) For every person, who does that person like?

Exactly how this paraphrased reading is derived from (12) will depend in large part on the metasemantic approach that one favors. For example, Groenendijk and Stokhof (1984) modified Karttunen's metasemantic approach in implementing their view.

According to an alternative view, pair/list readings are an instance of a more general kind of reading. Consider the following question/answer pair in (14):

(14a) Who does every man love?
(14b) His mother.

The answer in (14) gives rise to a functional reading of the question. The answer to the question is not an individual, but a function – that is, a rule that takes one from an object to another object. Pair/list readings are, according to this alternative, functional readings, where the function is specified extensionally, in terms of the ordered pairs in the extension of the function. This view was presented in Engdahl (1985) and developed by Chierchia (1993). Accounting for functional readings is an interesting semantic issue in its own right. For alternative accounts of pair/list readings, see Beghelli (1997), Szabolcsi (1997a), and Pafel (1999).

Another semantic issue concerns the nature of the presuppositions that different sorts of questions give rise to. For example, the question 'What is it like owning a monkey?' presupposes that the addressee owns a monkey. The question 'Who came to the party?' presupposes that someone came to the party. The question 'Which monkey ate the banana?' presupposes that a unique monkey ate the banana. Whether these presuppositions are semantic in nature and, if so, where they arise from has been a contested issue. See Belnap and Steel (1976), Karttunen (1977), Higginbotham and May (1981), and Hintikka (1983).

A final semantic issue that is of both linguistic and philosophical interest concerns the context sensitivity involved in whether something counts as an answer to a question. This issue is particularly pressing for the question-as-answer approach, since most versions of that approach assume that each question has some unique complete answer. Ginzburg (1995) developed a novel account of the semantics of interrogatives aimed at accommodating various sorts of context sensitivity. In general, questions calling for the identification of something seem to be interest-relative. For example, the sentence 'Where am I?' might be used to ask for the country in which one is located, the street on which one is located, the room in which one is located, etc. In some situations, one counts as knowing who killed Lady Chittlesworth when one knows simply that the murderer is the person that was wearing the yellow shirt. In other situations, this would not count as an acceptable answer. See Boër and Lycan (1985) for an account of knowing who someone is.

This brief survey of issues is far from exhaustive. For a very useful overview of both metasemantic approaches to interrogatives and semantic issues concerning interrogatives, see Groenendijk and Stokhof (1994).

See also: Grammatical Meaning; Mood and Modality; Mood, Clause Types, and Illocutionary Force; Phrastic, Neustic, Tropic: Hare's Trichotomy; Semantic Value; Speech Acts; Truth Conditional Semantics and Meaning.

Bibliography

Åqvist L (1965). A new approach to the logical theory of interrogatives 1: Analysis. Uppsala: Uppsala Universitet.
Beghelli F (1997). 'The syntax of distributivity and pair-list readings.' In Szabolcsi A (ed.). 349–408.
Belnap N & Steel T (1976). The logic of questions and answers. New Haven: Yale University Press.
Boër S & Lycan W (1985). Knowing who. Cambridge, MA: MIT Press.
Chierchia G (1993). 'Questions with quantifiers.' Natural Language Semantics 1(2), 181–234.
Davidson D ([1979] 1984). 'Moods and performances.' In Inquiries into truth and interpretation. Oxford: Clarendon Press.
Engdahl E (1985). Constituent questions. Dordrecht: Reidel.
Ginzburg J (1995). 'Resolving questions I & II.' Linguistics and Philosophy 18, 459–527, 567–609.
Groenendijk J & Stokhof M (1984). Studies on the semantics of questions and the pragmatics of answers. Ph.D. thesis, University of Amsterdam.
Groenendijk J & Stokhof M (1994). 'Questions.' In van Benthem J & ter Meulen A (eds.) Handbook of logic and language. Amsterdam: Elsevier.
Hamblin C (1958). 'Questions.' Australasian Journal of Philosophy 36, 159–168.
Higginbotham J & May R (1981). 'Questions, quantifiers, and crossing.' Linguistic Review 1, 41–79.
Hintikka J (1983). 'New foundations for a theory of questions and answers.' In Kiefer F (ed.) Questions and answers. Dordrecht: Reidel. 159–190.
Karttunen L (1977). 'Syntax and semantics of questions.' Linguistics and Philosophy 1, 3–44.
McGinn C (1977). 'Semantics for nonindicative sentences.' Philosophical Studies 32, 301–311.
Pafel J (1999). 'Interrogative quantifiers within scope.' Linguistics and Philosophy 22, 255–310.
Stainton R (1999). 'Interrogatives and sets of answers.' Crítica 91, 75–90.
Szabolcsi A (1997a). 'Quantifiers in pair-list readings.' In Szabolcsi A (ed.). 311–348.
Szabolcsi A (ed.) (1997b). Ways of scope taking. Dordrecht: Kluwer Academic.

Irony
S Attardo, Youngstown State University, Youngstown, OH, USA
© 2006 Elsevier Ltd. All rights reserved.

The term 'irony' is commonly used to describe both a linguistic phenomenon (verbal irony) and other phenomena, including 'situational' irony (i.e., irony of facts and things dissociated from their linguistic expression; Shelley, 2001), such as a fire station burning to the ground; various more-or-less philosophical ideas (Socratic irony, Romantic irony, Postmodern irony); and even a type of religious experience (Kierkegaard, 1966). While there may be connections between situational and verbal irony, it does not appear that literary and religious uses can be fruitfully explained in terms of linguistic irony. This treatment will be limited to verbal irony.

Other definitional problems include the purported distinction between irony and sarcasm. While some have argued that the two can be distinguished (for example, irony can be involuntary, while sarcasm cannot be), others maintain that no clear boundary exists. A further problem is presented by the fact that in some varieties of English, the term irony is undergoing semantic change and is assuming the meaning of an unpleasant surprise, while the semantic space previously occupied by irony is taken up by the term sarcasm.

The word irony goes back to the Greek eironeia (pretense, dissimulation), as does the history of its definition and analysis.


Irony is seen as a trope (i.e., a figure of speech) in ancient rhetorics, and this analysis remained essentially unchallenged until recently. In the traditional definition, irony is seen as saying something to mean the opposite of what is said. This definition is demonstrably incorrect, as a speaker may be ironical but not mean the opposite of what he/she says; cf. It seems to be a little windy (uttered in the middle of a violent storm), in which the speaker is saying less than what is meant. Similarly, overstatements and hyperbole may be ironical (Kreuz and Roberts, 1995).

A recent and fruitful restatement of the irony-as-trope theory has been presented by Paul Grice, who sees irony as an implicature, i.e., as a deliberate flouting of one of the maxims of the principle of cooperation. Relatedly, speech-act approaches to irony see it as an insincere speech act. Initially, Grice's approach saw irony as a violation of the maxim of quality (i.e., the statement of an untruth), but this claim has been refuted, as seen above. Broadening the definition to, for example, 'saying something while meaning something else' runs the risk of obliterating the difference between irony and other forms of figurative or indirect speech. However, this loss of distinction may be a positive aspect of the definition, as has been recently argued (Kreuz, 2000; Attardo, 2002). While the idea of 'oppositeness' in irony is problematic, approaches to irony as negation have been presented by Giora (1995), who sees irony as 'indirect' (i.e., inexplicit; cf. Utsumi, 2000) negation; related ideas are those of contrast (Colston, 2002) and inappropriateness (Attardo, 2000).

A very influential approach to irony is the mention theory (Sperber and Wilson, 1981), which claims that an utterance is ironical if it is recognized as the echoic mention of another utterance by a more or less clearly identified other speaker. Furthermore, the ironical statement must be critical of the echoed utterance (cf. Grice, 1989: 53–54). Similar theories based on the ideas of 'pretense' and 'reminder' have been presented as well. Criticism of the mention theory notes that not all irony seems to be interpretable as the echo of someone's words, or that if the definition of mention is allowed to encompass any possible mention it becomes vacuous (since any sentence is potentially the mention of another sentence). Furthermore, there exists an admittedly rarer, non-negative, praising irony, called asteism (Fontanier, 1968: 150). An example of asteism might be a colleague describing Chomsky's Aspects of the theory of syntax as a 'moderately influential' book in linguistics. Other approaches to irony include the 'tinge' theory, which sees irony as blending the two meanings (the stated and the implied ones) with the effect of attenuating the ironical one (Colston, 1997).

All the theories of irony mentioned so far share the idea that the processing of irony is a two-step process in which one sense (usually assumed to be the literal meaning) of the utterance is accessed and then a second sense of the utterance is discovered (usually under contextual pressure). Thus, for example, in a Gricean account of irony as implicature, the hearer of an utterance such as That was smart (uttered as a description of clumsy behavior, such as spilling one's wine upon someone's clothing) will first process the utterance as meaning literally roughly 'This behavior was consonant with how smart people behave' and then will discard this interpretation in favor of the implicature that the speaker means that the behavior was not consonant with how smart people behave.

This account has been challenged recently by 'direct access' theories. The direct access theories deny that the hearer processes the literal meaning of an ironical utterance first and only later accesses the figurative (ironical) meaning. Rather, they claim that the literal meaning is either not accessed at all or accessed only later. Direct access interpretations of irony are squarely at odds with the traditional interpretation of irony as an implicature. Some results in psycholinguistics have been seen as supporting this view (Gibbs, 1994). The mention theory of irony was commonly interpreted as a direct access theory, but recent work (Yus, 2003) seems to indicate that it too can be interpreted as a two-step process. Other researchers (e.g., Dews and Winner, 1999) have presented contrasting views which support the two-step approach, although not always the claim that the literal meaning is processed first: claims that interpretations are accessed in order of saliency (Giora, 2003) or in parallel have been put forth.

Psycholinguistic studies of irony have focused on children's acquisition of irony (Winner, 1988), progressively lowering the age at which children understand irony to under ten years old; on the neurobiology of the processing of irony (McDonald, 2000), emphasizing the role of the right hemisphere alongside the left one (in which most language processing takes place); and on the order of activation of the various meanings in the ironical text. A significant issue is the degree and nature of the assumptions that the hearer and speaker must share for irony to be understood; this can be summed up as the 'theory of mind' that the speakers have. In particular, irony involves metarepresentations (Bara et al., 1997; Curcó, 2000).

Considerable attention has been paid to the optional markers of irony, i.e., primarily intonational and kinesic indications of the speaker's ironical intent. While several phonological and other features have been considered 'markers' of irony, it appears that none of these features is exclusively a marker of irony.


Reviews of markers include phonological (e.g., intonation), graphic (e.g., italics, punctuation), morphological (e.g., quotatives), kinesic (e.g., winking), and contextual clues (Haiman, 1998).

Recently, the social and situational context of irony, as well as its pragmatic ends, has begun to be investigated in sociolinguistics and discourse/conversation analysis as well as in psycholinguistics. Work on the social functions of irony has found a broad range of functions, including in- and out-group definition, evaluation, aggression, politeness, verbal play, and many others (e.g., Clift, 1999; Anolli et al., 2002; Gibbs and Colston, 2002; Kotthoff, 2003). It is likely that this list is open-ended. The relationship between irony and humor remains underexplored, despite their obvious connections, although some studies are beginning to address the interplay of irony with other forms of implicature, such as indirectness and metaphoricity. Finally, it is worth noting that dialogic approaches to language (e.g., Ducrot, 1984) see irony as a prime example of the co-presence of different 'voices' in the text, in ways that avoid the technical problems highlighted in the mention theories.

See also: Context and Common Ground; Counterfactuals; Expression Meaning vs Utterance/Speaker Meaning; Human Reasoning and Language Interpretation; Implicature; Metaphor and Conceptual Blending; Neo-Gricean Pragmatics; Nonstandard Language Use; Pragmatic Determinants of What Is Said; Pragmatics and Semantics; Prosody; Rhetoric, Classical; Semantic Change, the Internet and Text Messaging; Semantics–Pragmatics Boundary; Speech Acts.

Bibliography

Anolli L, Ciceri R & Riva G (eds.) (2002). Say not to say: new perspectives on miscommunication. Amsterdam: IOS Press.
Anolli L, Infantino M G & Ciceri R (2002). ‘“You’re a real genius!”: irony as a miscommunication design.’ In Anolli, Ciceri & Riva (eds.). 135–157.
Attardo S (2000). ‘Irony as relevant inappropriateness.’ Journal of Pragmatics 32(6), 793–826.
Attardo S (2002). ‘Humor and irony in interaction: from mode adoption to failure of detection.’ In Anolli, Ciceri & Riva (eds.).
Bara B, Tirassa M & Zettin M (1997). ‘Neuropragmatics: neuropsychological constraints on formal theories of dialogue.’ Brain and Language 59, 7–49.
Booth W (1974). A rhetoric of irony. Chicago: University of Chicago Press.

Clift R (1999). ‘Irony in conversation.’ Language in Society 28, 523–553.
Colston H L (1997). ‘Salting a wound or sugaring a pill: the pragmatic function of ironic criticism.’ Discourse Processes 23, 25–45.
Colston H L (2002). ‘Contrast and assimilation in verbal irony.’ Journal of Pragmatics 34(2), 111–142.
Curcó C (2000). ‘Irony: negation, echo and metarepresentation.’ Lingua 110, 257–280.
Dews S & Winner E (1999). ‘Obligatory processing of literal and non-literal meanings in verbal irony.’ Journal of Pragmatics 31(12), 1579–1599.
Ducrot O (1984). Le dire et le dit. Paris: Editions de Minuit.
Fontanier P (1968). Les figures du discours. Paris: Flammarion. Originally published as two volumes in 1821 and 1827.
Gibbs R W (1994). The poetics of mind: figurative thought, language, and understanding. Cambridge/New York: Cambridge University Press.
Gibbs R W & Colston H L (2002). ‘The risks and rewards of ironic communication.’ In Anolli, Ciceri & Riva (eds.). 181–194.
Giora R (1995). ‘On irony and negation.’ Discourse Processes 19, 239–264.
Giora R (2003). On our mind. Oxford: Oxford University Press.
Haiman J (1998). Talk is cheap: sarcasm, alienation, and the evolution of language. Oxford/New York: Oxford University Press.
Katz A N (ed.) (2000). ‘The uses and processing of irony and sarcasm.’ Special issue of Metaphor and Symbol 15(1/2).
Kierkegaard S (1966). The concept of irony, with constant reference to Socrates. Capel L M (trans.). London: Collins.
Kotthoff H (2003). ‘Responding to irony in different contexts: on cognition in conversation.’ Journal of Pragmatics 35(9), 1387–1411.
Kreuz R J & Roberts R M (1995). ‘Two cues for verbal irony: hyperbole and the ironic tone of voice.’ Metaphor and Symbolic Activity 10(1), 21–31.
McDonald S (2000). ‘Neuropsychological studies of sarcasm.’ Metaphor and Symbol 15(1/2), 85–98.
Shelley C (2001). ‘The bicoherence theory of situational irony.’ Cognitive Science 25, 775–818.
Sperber D & Wilson D (1981). ‘Irony and the use–mention distinction.’ In Cole P (ed.) Radical pragmatics. New York/London: Academic Press. 295–318.
Toplak M & Katz A N (2000). ‘On the uses of sarcastic irony.’ Journal of Pragmatics 32(10), 1467–1488.
Utsumi A (2000). ‘Verbal irony as implicit display of ironic environment: distinguishing ironic utterances from nonirony.’ Journal of Pragmatics 32(12), 1777–1806.
Winner E (1988). The point of words: children’s understanding of metaphor and irony. Cambridge, MA: Harvard University Press.
Yus F (2003). ‘Humor and the search for relevance.’ Journal of Pragmatics 35(9), 1295–1331.

J

Jargon

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

The word jargon probably derives from the same source as gargle, namely Indo-European *garg-, meaning ‘throat’, and it originally referred to any noise made in the throat. In Middle English it was generally used to describe the chattering of birds, or human speech that sounded as meaningless as the chattering of birds. It was used (contemptuously) to refer to trade languages such as Chinook Jargon (q.v.). Jane Austen used jargon in the sense ‘cliché’ (Sense and sensibility, Ch. 18). Today, jargon is the language peculiar to a trade, profession, or other group; the language used in a body of spoken or written texts dealing with a circumscribed domain in which speakers share a common specialized vocabulary, habits of word usage, and forms of expression.

This definition includes what some scholars call ‘specialist’ or ‘technical’ language, what others call ‘restricted’ language (Firth, 1968: 98) or ‘sublanguage’ (Kittredge and Lehrberger, 1982), and what still others call ‘register’ (e.g., Zwicky and Zwicky, 1982; Wardhaugh, 1986). Jargons differ from one another grammatically and sometimes phonologically or typographically, as can be seen by comparing a statement of some of the requirements on the cricket field with the two-line excerpt from a knitting pattern, then the wedding invitation that follows it, and all of these with the excerpt from a Wordsworth poem and a text message version of that (such as might be conveyed using the SMS facility on a mobile phone).

A fast-medium right arm inswing bowler needs two or three slips, a deep third man, a gully, a deepish mid-off, a man at deep fine leg and another at wide mid-on.

Cast on 63 sts: Knit 6 rows plain knitting.
7th row: K4, wl. fwd. K2 tog to the last 3 sts. K3.

Earth has not anything to shew more fair: Dull would he be of soul who could pass by A sight so touching in its majesty. (Wordsworth Upon Westminster Bridge)

erth nt a thng so brill

hes dul v soul pssng by

sght of mjstic tch. (Peter Finch, N Wst Brdg, a text message version of the Wordsworth lines, from http://www.guardian.co.uk/0,12241,785819,00.html in September 2002)

A jargon is identified by one or more of the following criteria.

i. Lexical markers.
a. Vocabulary specialized for use in some particular domain (the subject matter of a jargon). The lexical relations among specialized vocabulary will reflect the accepted taxonomies within the domain (e.g., forms, varieties, species, genera, families, orders, classes in biology).
b. Idioms and abbreviations (e.g., in telecommunications, DNA ‘does not answer’, MBC ‘major business customer’, HC&F ‘heat coil and fuse’, LIBFA ‘line bearer fault analysis’; in linguistics, Noun Phrase and NP; in logic, if and only if and iff; in biology, ♀ and ♂).

ii. Syntactic markers such as
a. imperatives in recipes and knitting patterns;
b. large numbers of impersonal passives in reports of scientific experiments (e.g., It was observed that . . .);
c. full noun phrases in place of pronouns in legal documents (e.g., A term of a sale shall not be taken to exclude, restrict, or modify the application of this Part unless the term [not ‘it’] does so expressly or is inconsistent with that provision).

iii. Presentational markers.
a. Prosodic (voice quality, amplitude, rhythm, etc.) and paralinguistic and/or kinesic (gaze, gesture, etc.) characteristics within a spoken medium, and typographical conventions within a written medium; e.g., a hushed tone and minimal kinesic display are more frequently expected in funeralese than in football commentary or anecdote; in mathematics, {a,b}, ⟨a,b⟩, and (a,b) will normally have different and conventionally prescribed interpretations; in linguistics, language expressions that are mentioned rather than used are usually italicized.
b. Format in which a text is presented; this is particularly evident in the written medium, as can be seen from the preceding four examples.

Jargon has two functions:
1. to serve as a technical or specialist language;
2. to promote in-group solidarity, and to exclude as out-groupers those people who do not use the jargon.

To the initiated, jargon is efficient, economical, and even crucial in that it can capture distinctions not made in the ordinary language. Linguists, for instance, redefine everyday terms such as sentence, word, syllable, and grammar and add a number of new terms to overcome imprecision and to distinguish things that nonlinguists ignore and that, in consequence, ordinary language lacks terms for. They distinguish between grammatical, orthographic, and phonological words, as well as introducing terms such as lex, lexeme, morph, and morpheme to capture additional distinctions.

Jargons, like bureaucratese (the language of government and corporate offices), have two motivations. One, shared with criminal jargon and
slang, is the exclusion of out-groupers. The second motivation is to augment the bureaucrat’s self-image by using a Graeco-Latinate lexicon, which (to the outsider, at least) obfuscates the commonplace (and often trivial) and endows it with gravity; this achieves a double-whammy by also mystifying and intimidating the clientele. We rarely uncover such blatant lexical substitution as:

[Emendation to the traffic plan for a London borough] Line 5. Delete ‘Bottlenecks’, insert ‘Localised Capacity Deficiencies’. (Quoted in Cutts and Maher, 1984: 45)

No wonder Charles Dickens referred to Whitehall as the Circumlocution Office.

Because it is founded on a common interest, the most remarkable characteristic of a jargon is its specialized vocabulary and idiom. Although jargons facilitate communication among in-groupers on the one hand, on the other they erect communication barriers to keep out-groupers out. It is, of course, out-groupers who find jargon ‘abounding in uncommon or unfamiliar words,’ and therefore “unintelligible or meaningless talk or writing; gibberish” (Macquarie Dictionary, 1991). If the out-grouper is sufficiently rancorous she or he might also conclude the jargon is ‘debased, outlandish or barbarous.’

For some jargons such as legalese or linguisticalese, the in-group is fairly well defined. For others, such as stock market reports and games such as bridge, football, and cricket, there is barely an in-group at all except among professionals. For language dealing with death (outside the funeralese of the bereavement industry), birth notices, and recipes, there is even less of an in-group – though there is a special vocabulary and there are conventional patterns of expression for all of these. The facts are clear: where in-groupers are associated with a particular trade or profession, they constitute a well-defined group; they are somewhat less well defined by a common recreational interest; and the most ill-defined in-groups are those defined merely by a temporary interest in the topic of the jargon – such as those members of the public who publish birth notices. It follows that every text/discourse, including social chitchat, is jargon of one kind or another and that everybody uses at least one jargon. In fact, almost everybody controls several jargons, and often many.

It is often the case that an expert in one domain (an in-grouper with respect to its jargon) needs to explain something within the domain to a novice outside the domain (an out-grouper) with minimal use of jargon: a lawyer may need to explain some point of law to a client, a doctor may need to explain a medical condition to a patient, and educationists must do this most of the time. Any jargon is in
constant contact with others, and it should be fairly obvious that jargons are not discrete from one another: all of them borrow from language that is common to other jargons. The following is an excerpt from a chatroom interchange replete with chatroom jargon such as lol (the acronym from ‘laughing out loud’, which is used here more like a grin in f2f ‘face to face’ conversation, as a mark of empathy), j/k ‘just kidding’, and the emoticon smiles :), :-) and wicked smile >:); there is also jargon appropriate to in-group discussion of computer hardware, but note the divergence into less esoteric matters than disk drives and controllers. This mixing of jargons is very common in regular conversational discourse.

so wait becuz i have the onboard raid controller i could fit 2 hdd’s on the raid controllers and 2 on the ide controllers?
RAID is just a standard of combining different drives.. you can make a raid out of scsi drives along side ide drives if you want
you could make a raid out of a hard drive and a ram drive
just forget about the raid stuff lol and picture it as another ide controller
thats all it is unles syou setup the raid stuff
so wait i can have 2 hdd’s on the raid, 2 on teh primary ide and then 2 drives on the secondary ide?
i guess.. tried reading the manual? lol
manual? :)
didnt your motherboard come with any papers
iz that that book that says A7V333 on it?
yes lol
the one that i’m using to prop up my comp table?
probably
whoops :-) j/k lol
yur supposed to use your school books for that dummy
ummm no i dun wanna look at em they’re evil
keep your motherboard manual close to your heart >:) lol

(Sic. Logged August 29, 2002)

A jargon cannot be precisely defined because the boundaries of any one particular jargon are impossible to draw nonarbitrarily.

It is impossible to taboo jargon. Jargon cannot be translated into ‘ordinary English’ (or whatever the language) because there is no such thing. Changing the jargon alters the message: a speaker simply cannot exchange faeces for shit or terrorist for freedom fighter or even bottlenecks for localised capacity deficiencies without changing the connotations of the message the person intends to convey. There is no convenient substitute for some jargon: to replace legalese defendant with a person against whom civil proceedings are brought is communicatively inefficient. It would be inappropriate for a lawyer not to use jargon when creating a legal document: that is exactly what legalese is for. Legal language is difficult because laws are complex, not because lawyers try to obfuscate. Similar remarks apply to other well-motivated uses of jargon.

For additional discussion, see Partridge, 1952; Hudson, 1978; Gowers, 1987; Green, 1987; Lutz, 1989; Asprey, 1991; Nash, 1993; Burke and Porter, 1995; Lutz, 1999; Allan, 2001; Bell, 2003; Allan and Burridge, 2006.

See also: Connotation; Context and Common Ground; Register; Taboo, Euphemism, and Political Correctness; Thesauruses.

Bibliography

Allan K (2001). Natural language semantics. Oxford & Malden, MA: Blackwell.
Allan K & Burridge K (2006). Forbidden words: taboo and the censoring of language. Cambridge: Cambridge University Press.
Asprey M M (1991). Plain language for lawyers. Sydney: Federation Press.
Bell A (2003). Your sharemarket jargon explained: tricks, traps and insider hints. Milsons Point: Random House.
Burke P & Porter R (eds.) (1995). Languages and jargons: contributions to the social history of language. Cambridge: Polity Press.
Cutts M & Maher C (1984). Gobbledygook. London: George Allen & Unwin.
Firth J R (1968). ‘Descriptive linguistics and the study of English.’ In Palmer F R (ed.) Selected papers of J. R. Firth, 1952–59. Bloomington: Indiana University Press. 96–113. [First published 1956].
Gowers E (1987). The complete plain words. Revised by Sidney Greenbaum and Janet Whitcut. Harmondsworth: Penguin.
Green J (1987). Dictionary of jargon. London: Routledge & Kegan Paul.
Hudson K (1978). The jargon of the professions. London: Macmillan.
Kittredge R & Lehrberger J (eds.) (1982). Sublanguage: studies of language in restricted semantic domains. Berlin: De Gruyter.

Lutz W (1989). Doublespeak: from “revenue enhancement” to “terminal living”. How government, business, advertisers, and others use language to deceive you. New York: Harper and Row.
Lutz W (1999). Doublespeak defined: cut through the bull**** and get to the point. New York: Harper Resource.
Macquarie dictionary (rev. 2nd edn.) (1991). McMahons Point: Macquarie Library.
Nash W (1993). Jargon: its uses and abuses. Oxford: Blackwell.

Partridge E (1952). Introduction to chamber of horrors: a glossary of official jargon both English and American. ‘Vigilans’ (ed.). London: Andre Deutsch.
Wardhaugh R (1986). An introduction to sociolinguistics. Oxford: Basil Blackwell.
Zwicky A & Zwicky A (1982). ‘Register as a dimension of linguistic variation.’ In Kittredge R & Lehrberger J (eds.) Sublanguage: studies of language in restricted semantic domains. Berlin: De Gruyter. 213–218.

L

Lexical Acquisition

D McCarthy, University of Sussex, Brighton, UK

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Lexical acquisition is the production or augmentation of a lexicon for a natural language processing (NLP) system. The resultant lexicon is a resource like a computerized dictionary or thesaurus, but in a format for machines rather than people. The entries in a lexicon are lexemes, and the information acquired for these includes their forms, meanings, collocations, and associated statistics. Acquisition is vital for NLP because the performance of any system that processes text or speech depends on its knowledge of the vocabulary of the language being processed. As well as an extensive grasp of the vocabulary, a system needs a means to cope when it encounters a word that it has not seen before.

Because of the requirements of an application, there may be a case for limiting a system to the words appropriate within a specific domain. However, even in a specific domain, everyday words will be used alongside additional domain-specific terminology. It is possible, and sometimes appropriate, to build a system that can run using only a small vocabulary; even so, it is necessary to acquire appropriate forms and meanings of the lexemes for the given domain. The lexical information required is not explicitly listed in any available resource, and getting humans to provide it is extremely costly. For this reason lexical acquisition is frequently referred to as the bottleneck for NLP. In order to produce lexicons for deployable NLP systems, lexical acquisition must be automated.

There are significant differences between the requirements of a lexicon intended for a computer system and the contents of a dictionary or thesaurus written for humans. Machine-readable dictionaries and thesauruses (MRDs and MRTs) have been designed for online use by humans. If the information they contained were sufficient, one could permit an NLP system to use them directly, or after some simple
reformatting. However, while it was hoped that machine-readable resources produced by humans for humans would be a viable means of populating lexicons, it is now widely recognized that this method will not work. The resources are prone to omissions, errors, and inconsistencies. Furthermore, the lexicographers producing these resources rely on the fact that users have an adequate grasp of the language, knowledge of the world, and basic intelligence to understand the definitions supplied and fill in any gaps. For example, a human user can look up unknown words in definitions and determine the meaning of these words given the context in the definition. A computer is not predisposed to determining the meaning from definitions; e.g., given the definition of pipette as a slender tube for measuring or transferring small quantities of liquid, a computer may need to look up the meaning of tube, and would then be faced with the ambiguity that tube can mean metro as well as hollow object.

In addition to the errors, omissions, and problems of format, NLP systems require information that is simply not present in a dictionary because humans do not need it. A prime example of this deficiency is frequency information. Frequency information permits systems to concentrate effort on the more likely analyses or utterances. Such evidence is important since a great many interpretations are possible for most utterances, and most content can be expressed in more than one way. The majority of NLP systems rely on statistical processing that requires probability estimates obtained from frequency data.

Not only do lexicons require frequency data, but they also need to be tailored to the domain of the application, whereas man-made resources are typically general purpose. For these reasons, most acquisition is now performed automatically from corpora: large collections of real-life text or speech samples. The corpus is chosen with regard to the intended domain of the application. Man-made resources are still often used in hybrid systems that collect frequency data from corpora but with recourse to the entities provided in the MRDs. There are also cases where acquisition is performed
semi-automatically, with human lexicographers guiding the acquisition process or correcting the output.

Resources

Machine-Readable Dictionaries

Dictionaries contain entries for words that are broken down into the various meanings, or senses, of the word. These meanings are supplied with definitions and part-of-speech information, i.e., whether this use of the word is a noun, verb, adjective, or adverb. Other information is also sometimes available, for example the various morphological forms of the word, its phonological realization, and information on syntactic behavior. Some dictionaries also include a subject code or domain label, for example, the Longman Dictionary of Contemporary English (LDOCE) (Procter, 1978) or the Oxford Dictionary of English (Soanes and Stevenson, 2003). LDOCE also provides information on the semantic class of the arguments of verbs (for the subject, direct object, and indirect object grammatical slots).

Machine-Readable Thesauruses

Entries in thesauruses provide words with similar meanings. The most widely used MRT is WordNet (Fellbaum, 1998), not just because of the wealth of information, particularly semantic, but also because of its free availability. WordNet is an online thesaurus, organized by semantic relations rather than alphabetically. Words are classified by their part-of-speech (noun, verb, adjective, and adverb). They are then subdivided into small classes called ‘synsets’ whose members are near synonyms of each other. These synsets are then linked together by semantic relationships such as hyponymy (nouns and verbs), meronymy (nouns), antonymy (adjectives), and entailment (verbs). Figure 1 illustrates some of these relationships. Similar resources are available in other languages; for example, a number of WordNets are being distributed in other languages, notably European languages. There are also dictionaries in other languages organized by semantic relationships, for example, EDR (NICT, 2002).

Corpora

Corpora range in the amount of linguistic annotation provided. Some occur in raw form, as archives of text or spoken language. Some are ‘balanced’ with a mix of texts selected from a range of genres and domains, for example, the British National Corpus (Leech, 1992). This corpus also includes part-of-speech
annotations for every word form. Other annotations are sometimes available; for example, the Penn Treebank II (Marcus, 1995) includes syntactic analyses that have been hand-corrected from the output of an automatic parser. SemCor (Landes et al., 1998) is a 220 000-word corpus that has been manually annotated with WordNet sense tags. Acquisition from labeled data (supervised acquisition) is often more accurate than acquisition from raw text (unsupervised acquisition). However, annotations are costly to create and are not always available for a particular language or text type. Unsupervised systems are useful when there is no labeled data to learn from. The lack of annotation for corpora is compensated in part by the size of the corpora available.

Multilingual Resources

As well as monolingual dictionaries and corpora, there are also some multilingual resources available. These resources are potentially extremely useful since the redundancies and different forms in one language can help resolve ambiguities in another. CELEX (Baayen et al., 1995) contains databases for English, Dutch, and German that includes syntactic, morphological, phonological, and orthographic information. Other multilingual dictionaries and thesauruses often have links between words with the same meaning in different languages. Multilingual corpora include both parallel corpora, where the data in one language has been translated into another, and comparable corpora where essentially the same content, such as news events, has been collected from different sources.

Automatic Techniques

There are a wide variety of techniques used for lexical acquisition, and the appropriate solution will depend on the information sought and the resources available. The simplest approach is to look up information directly in an existing MRD. For many types of information this approach is not possible because the information is not present or complete. Furthermore, it is quite possible that a usage of an existing word will not be listed in an MRD. Alternatively, a system can be built to learn the information from data, referred to as training data, usually obtained from corpora. An extremely simple method is to apply pattern-matching techniques to the training data. This method has yielded success for some types of information, notably ‘is-a’ semantic relations between nouns. So, for example, using the pattern ‘X is a type of Y’ → X is-a Y and the text ‘A dog is a type of animal’, a system can be made to infer dog is-a animal.


Figure 1 Some semantic relationships in WordNet.

Table 1 Clustering verbs by the frequency of their direct objects

The majority of approaches use some form of statistics. There are straightforward applications of textbook statistics, e.g., the chi-square or t-test, to frequency counts of word co-occurrences. A significant problem for all statistical approaches is that the frequency distributions of linguistic phenomena are skewed, so that a minority of forms or meanings takes up the majority of instances in a corpus. This inequity means that systems need to cope with the problem of data sparseness, and that for a great many forms it is hard to get good frequency estimates. There are also more sophisticated methods that incorporate statistical estimates, including techniques that use concepts from information theory. For example, Lin (1998) proposed a measure of similarity that takes into account the shared contexts of words.
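A minimal Python sketch of the pattern-matching approach described above; the toy corpus, the single pattern, and the extracted pairs are invented for illustration and do not come from any particular system:

```python
import re

# Toy corpus containing a few instances of the lexical pattern.
TEXT = ("A dog is a type of animal. A pipette is a type of tube. "
        "A chicken is a type of bird.")

# 'X is a type of Y' -> X is-a Y, matching single word tokens.
ISA_PATTERN = re.compile(r"\ba (\w+) is a type of (\w+)")

def extract_isa(text):
    """Return (hyponym, hypernym) pairs matched by the lexical pattern."""
    return ISA_PATTERN.findall(text.lower())

if __name__ == "__main__":
    for hypo, hyper in extract_isa(TEXT):
        print(f"{hypo} is-a {hyper}")   # e.g., dog is-a animal
```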

Machine-learning techniques, which may also involve a statistical component, induce lexical information from examples. The examples can be labeled (supervised training) or unlabeled (unsupervised). In supervised approaches, often referred to as memory-based learning, stored examples can simply be compared to a new candidate, and the closest-fitting neighbor is used to obtain a label for the new candidate. Alternatively, instead of simply storing the examples, a system can generalize from them and find a decision tree that best partitions the examples according to the labels. Unsupervised learning is frequently performed by clustering entities according to features, e.g., words according to the contexts in which they occur. For example, the verbs in Table 1 might be clustered into three groups according to the frequency of their direct objects, as in the sketch below.
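A hedged sketch of such unsupervised clustering. Since Table 1 is not reproduced here, the verbs, objects, and counts are invented, and the cosine measure with a greedy single-link procedure stands in for whatever similarity measure and clustering algorithm a real system would use:

```python
from math import sqrt

# Invented direct-object counts standing in for Table 1.
COUNTS = {
    "eat":       {"cake": 10, "peanuts": 7, "meal": 5},
    "devour":    {"cake": 4,  "peanuts": 2, "meal": 3},
    "begin":     {"war": 6,   "speech": 8,  "meal": 1},
    "commence":  {"war": 3,   "speech": 5},
    "build":     {"house": 9, "wall": 4},
    "construct": {"house": 5, "wall": 6},
}

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    norm = (sqrt(sum(x * x for x in u.values())) *
            sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def cluster(counts, threshold=0.4):
    """Greedy single-link clustering: a verb joins the first cluster
    containing a sufficiently similar verb, else starts its own."""
    clusters = []
    for verb, vec in counts.items():
        for group in clusters:
            if any(cosine(vec, counts[w]) >= threshold for w in group):
                group.append(verb)
                break
        else:
            clusters.append([verb])
    return clusters

if __name__ == "__main__":
    # Yields three groups: eat/devour, begin/commence, build/construct.
    print(cluster(COUNTS))
```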


The Entries and Acquired Information


The entries stored in a lexicon are referred to as lexemes. They are listed with lexical information depending on the needs of the application that the system is being produced for. For example, a spelling checker may only require orthographic information to render the lexeme into word forms that can occur in text. Additional lexical information often needed by applications includes pronunciation; part-of-speech; morphology; syntax, argument structure, and preferences; semantics; pragmatics; and multiwords.

Morphology

The word forms that a lexeme can take are important for NLP systems. The variety of forms is due to inflections (to indicate number, gender, tense, etc., for example, eat versus ate) and derivations (from a different part-of-speech, for example, central – centralize). While word forms are explicit in corpus data, acquisition is not straightforward because of both unseen and ambiguous forms. It is quite common for a given form to serve several functions (syncretism). For example, one cannot distinguish between the past tense and past participle forms of many English verbs, e.g., I jumped versus I have jumped. There are also cases where word forms of the same lexeme bear no morphological relationship, e.g., go and went. Morphologically related words can be found using a combination of string edit measures, which measure the changes required to turn one string into another, and word co-occurrence statistics designed to capture semantic similarity. Neural networks, a method useful for detecting patterns in data, have been used for learning roots and suffixes from input data. Furthermore, the stems, suffixes, and families (or signatures) of suffixes of a language can all be learned from corpora, for example, using methods from information theory that strive for the best compromise between a compact model and a good explanation of the data (Goldsmith, 2001).
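The string-edit component of such an approach can be sketched as follows. The word list, thresholds, and shared-prefix heuristic are invented for illustration; a real system would combine this with the co-occurrence statistics mentioned above:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

WORDS = ["central", "centralize", "jump", "jumped", "go", "went"]

def related_pairs(words, max_dist=4, min_prefix=3):
    """Propose pairs sharing a prefix and a small edit distance."""
    return [(w1, w2)
            for i, w1 in enumerate(words)
            for w2 in words[i + 1:]
            if w1[:min_prefix] == w2[:min_prefix]
            and edit_distance(w1, w2) <= max_dist]

if __name__ == "__main__":
    # Finds central/centralize and jump/jumped; the suppletive pair
    # go/went escapes string measures, as the article notes.
    print(related_pairs(WORDS))
```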

Pronunciation

The phonetic realization of a word is needed by systems that recognize or produce speech rather than text. Word-specific pronunciation can be learned automatically from examples; this possibility has been demonstrated for Dutch (Daelemans and Durieux, 2000). In this memory-based approach, the examples used to teach the system were constructed by aligning the letters of a word and its context to the phonemes. Pronunciations of new words are acquired by comparing the input letter sequence to stored examples. The approach could be applied to a language other than Dutch, provided that a grapheme-phoneme alignment is possible for that language. In other work, stress has also been acquired automatically using memory-based learning techniques (Daelemans et al., 1994).

Part-of-Speech

The part-of-speech of a given word is important for morphological, syntactic, and semantic analysis. Indeed, since a given word may belong to different parts-of-speech, frequency information is often required in order to arrive at the most likely part-of-speech for a word in a given sentence. This information is typically obtained using a labeled corpus, but the possible parts-of-speech for a word can also be induced using frequency information about other words in the context (Finch and Chater, 1994). While acquisition of part-of-speech is required for all words, further information on morphology, syntax, and semantics is usually acquired only for nouns, verbs, adjectives, and adverbs. These are open-class words, which change meaning and behavior depending on the domain and with time, and whose set keeps increasing, whereas closed-class words are a restricted set for which most of the information can be obtained in one go and does not change radically.
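A minimal sketch of collecting per-word tag frequencies from a labeled corpus; the tiny tagged corpus and the tagset are invented (a resource such as the part-of-speech annotated British National Corpus would supply the real counts):

```python
from collections import Counter, defaultdict

# Invented (word, tag) pairs standing in for a tagged corpus.
TAGGED = [
    ("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
    ("the", "DET"), ("dog", "NOUN"), ("bit", "VERB"),
    ("a", "DET"), ("bit", "NOUN"), ("of", "PREP"), ("cake", "NOUN"),
]

def tag_frequencies(tagged):
    """Count how often each word occurs with each tag."""
    freqs = defaultdict(Counter)
    for word, tag in tagged:
        freqs[word][tag] += 1
    return freqs

def most_likely_tag(word, freqs):
    """Most frequent tag for a word, ignoring sentence context."""
    return freqs[word].most_common(1)[0][0] if word in freqs else None

if __name__ == "__main__":
    freqs = tag_frequencies(TAGGED)
    print(dict(freqs["bit"]))             # ambiguous: VERB and NOUN
    print(most_likely_tag("dog", freqs))  # NOUN
```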

Syntax, Argument Structure, and Preferences

The syntactic structures that a word can occur in are important for understanding utterances and ensuring grammatical output. There is a good deal of work on the argument structure of verbs, since verbs play a key role in relating the phrases within sentences and have the most complex syntactic constructions. The correct arguments of a verb are more easily identified in a sentence when the parser knows the types of construction (subcategorization frames) possible for that verb. For example, information that give can take both a single object and a double object construction allows both the following interpretations:

(1) ‘She gave (the dog) (bones).’ i.e., the dog gets the bones
    ‘She gave (the dog bones).’ i.e., some unidentified entity gets dog bones

Subcategorization information is acquired with rudimentary syntactic analysis (without subcategorization frame information) and statistical filters that reduce the impact of noise in the analysis. Subcategorization acquisition has included basic sets of frames (Brent, 1993), as well as more detailed sets (Briscoe and Carroll, 1997) and languages other than English.

Table 2 Selectional preferences for the direct object slot of the verb start

Semantic class     Preference score    Example direct objects
time period        0.1                 week, day
communication      0.08                speech, yelling
activity           0.2                 construction, war, migration
entity             0.14                car, meal

As well as argument structure, the semantic classes of arguments are acquired. For example, the direct objects of the verb eat are typically types of food. These are referred to as selectional restrictions, implying hard and fast constraints, or as selectional preferences. Preferences are usually acquired as a set of semantic classes, each with a score representing the strength of the preference for that class, as illustrated in Table 2. Many systems have followed the approach of Resnik (1998) and used corpus counts collected over WordNet. However, semantic classes induced from corpus data (see below) can also be used for representation. Given both subcategorization frames and selectional preferences, alternative ways of expressing a verb’s underlying arguments can be identified. For example, the alternation

(2) ‘The boy broke the window.’ ↔ ‘The window broke.’

can be observed by detecting the subcategorization frames for the verb break and observing the similarity between the preferences for words that occupy the object slot in the first variant and those for words in the subject slot in the alternate form. These are known as diathesis alternations, and they are important because they provide a link between the syntax and semantics of the verb. Verbs sharing diathesis patterns typically also share semantic characteristics (Levin, 1993). As well as detecting diathesis patterns directly, researchers have demonstrated that syntactic and semantic features can be used to cluster verbs into classes relevant to these alternations. Merlo and Stevenson (2001) use manually selected and linguistically motivated features to classify verbs into three classes relating to a selection of alternations, using a supervised decision-tree machine learning algorithm. Subsequent work has shown that a multilingual approach can facilitate acquisition, using features available in another language to help make distinctions in the target language.
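A much-simplified sketch of the preference-acquisition step described above: it scores semantic classes by their relative frequency in an argument slot, a crude stand-in for measures such as Resnik’s (1998). The noun-to-class mapping and the list of corpus objects are invented (a real system would map nouns to classes via WordNet or induced clusters):

```python
from collections import Counter

# Invented noun-to-class mapping and direct objects of 'start'.
NOUN_CLASS = {
    "week": "time period", "day": "time period",
    "speech": "communication", "war": "activity",
    "migration": "activity", "car": "entity", "meal": "entity",
}
OBJECTS_OF_START = ["week", "day", "day", "war", "speech",
                    "car", "meal", "migration"]

def preference_scores(objects, noun_class):
    """Relative frequency of each semantic class in the slot."""
    classes = Counter(noun_class[o] for o in objects if o in noun_class)
    total = sum(classes.values())
    return {c: n / total for c, n in classes.items()}

if __name__ == "__main__":
    scores = preference_scores(OBJECTS_OF_START, NOUN_CLASS)
    for cls, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{cls}: {score:.2f}")
```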

Semantics

The meanings of words are required for some applications. For example, given the query Who employs Mr. Smith? and the sentence Mr. Smith works for Hydro-Systems, lexical knowledge of the relationship between employ and work can be used to ensure an appropriate answer. Words can have more than one meaning, and this multiplicity may also need to be taken into account. For example, a question such as Who is on the board? should not attract responses concerning planks of wood.

One problem with including word meanings in lexicons is that the set of possible meanings is vast. When defining the meanings of a word it is possible to arrive at an extremely fine-grained classification with many different senses that are related, though subtly different in some way. For example, the verb break has 59 senses in WordNet version 2.0. Having a fine-grained sense inventory is not always beneficial for NLP, since distinguishing these meanings in text is not easy even for human annotators. There is typically a large skew in the frequency distribution of senses, so that one or two senses of a word are much more predominant than the others; this is particularly the case when dealing with language in a given domain. As a consequence, much acquisition of other information, such as subcategorization, has assumed a predominant sense for a word and not differentiated information by sense.

Rather than use a man-made resource for lexical meaning, many have exploited the fact that semantically similar words tend to occur in similar contexts. These are distributional approaches that attempt to find similar words given input data such as that in Table 1. The data can be clustered to give distinct classes, e.g., (begin, commence), (eat, consume, devour), and (construct, build). Or, instead of explicitly creating classes, a target word can simply be listed with a specified number of ‘nearest neighbor’ words as an acquired thesaurus, e.g., eat: consume, devour, etc. These lists of nearest neighbors are produced to avoid losing information during the clustering process; the neighbors are associated with the distributional similarity scores used for ordering them. Nearest neighbors or induced classes can be used by an NLP system to provide evidence when there is not enough data for a related word, and to cope with the fact that things can be said in different ways, depending on subtle nuances.

As well as finding related words, different senses of the same word can be found from the data by either (i) clustering the nearest neighbors of a word (Pantel and Lin, 2002) according to the contexts that these
neighbors occur in, or (ii) clustering the contexts of individual tokens of the word (Schütze, 1998). Another approach to determining the meanings of words from data is to use parallel corpora and define senses where a word in one language is realized by more than one word in another language (Resnik and Yarowsky, 2000). This technique can be applied with more than two languages, and with languages that are more distantly related, since there the ambiguities are more likely to differ. The sense inventory produced by this strategy will be relevant for translation between those languages. This approach contrasts with using a predefined sense inventory such as WordNet, where the senses may be too fine-grained and may never reflect ambiguities that actually give rise to problems for an application such as machine translation. As an alternative to identifying senses in text, there are also approaches that discover patterns of use within a generative lexicon framework (Pustejovsky, 1995) rather than assuming distinct senses; Pustejovsky and Hanks (2004) propose a semi-automatic method using selectional preference patterns identified by hand. Current work on acquisition of lexical semantics focuses on relationships between linguistic units. Model-theoretic semantics, in which linguistic units are related to entities in the world, has received much less attention, though recently there has been work on acquisition of lexical semantics from non-linguistic data (Barzilay et al., 2003).

Pragmatics

One way of conveying the user’s intent in an utterance is the choice of words. This choice can reflect stylistic preferences such as formality and attitude, for example, knowing whether to use demand, stipulate, request, or ask in a given utterance. Such information is particularly important for generation, but could also be useful for a system trying to detect sentiment. It is not obvious how to identify the nuances of meaning of near-synonyms from raw data; work has been done, however, on extracting this sort of information from MRDs (Inkpen and Hirst, 2001). Work on sentiment detection identifies words that are subjective, and those that indicate a positive or negative sentiment, by partitioning good and bad reviews. Though sentiment detection is not focused on contrasting the meanings of semantically related words, e.g., brave versus foolhardy, it might benefit from such information.

Multiwords

Lexemes can have more than one word stem, for example, post office. These are referred to as ‘multiwords’ and are phrases where the meaning is not compositional, that is, the meaning of the phrase
is not simply a sum of the meanings of the component words. Multiwords are important for interpretation and for the production of natural-sounding text. There are a variety of multiword types, including idioms, specific constructions such as phrasal verbs, and collocations, i.e., words which occur together by convention (Sag et al., 2002). Acquisition of multiwords is typically done using a combination of a statistical association measure and a linguistic filter (Smadja, 1993). Another approach is to search for potential candidates with syntactic analysis and statistics, and then look for signs of non-compositionality. This can be done by checking that combinations in which a component word is replaced by another word from the same semantic class are absent from the corpus: for example, we find red herring in corpus data but not usually yellow herring. Both WordNet and acquired thesauruses can be used to define the semantic classes. Non-compositionality has also been detected using distributional approaches, by demonstrating that the semantics of the multiword has very different characteristics from those anticipated given the component words.
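A minimal sketch of the association-measure step using pointwise mutual information over bigrams; the toy corpus is invented. Note that a determiner bigram like a red survives the raw statistic, which is exactly why a linguistic filter of the kind Smadja (1993) describes (e.g., keeping only adjective–noun and noun–noun pairs) is added on top:

```python
from collections import Counter
from math import log2

# Invented tokenized corpus.
TOKENS = ("that argument is a red herring . the red car passed a red "
          "light . a red herring distracts the reader .").split()

def pmi_bigrams(tokens, min_count=2):
    """Pointwise mutual information for bigrams seen at least min_count times."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    return {
        (w1, w2): log2((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        for (w1, w2), c in bigrams.items() if c >= min_count
    }

if __name__ == "__main__":
    for pair, score in sorted(pmi_bigrams(TOKENS).items(),
                              key=lambda kv: -kv[1]):
        print(pair, round(score, 2))   # includes ('red', 'herring')
```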

Updating the Lexicon

Most lexical acquisition systems have been designed to extract lexical knowledge from resources in advance and subsequently deploy the lexicon within an NLP system. However, due to data sparseness, the creative way in which language is used, and the very nature of open-class words, there will inevitably be linguistic forms and meanings that have not been acquired beforehand. A method is required for handling new phenomena as they occur.

One updating task is the detection of words that are not already in the lexicon. This process must filter misspellings and non-real words, for example codes, from lists of candidates. Genuine words can be identified by examining the letter combinations and ensuring that these would produce regular-sounding words for the given language; this is performed using frequency data collected over character sequences from a given corpus (Park, 2002). When obtaining estimates for unseen word forms, for example morphological forms, it might seem appropriate to take an average over forms observed in a training corpus. However, Baayen and Sproat (1996) demonstrate that care should be taken, since frequent forms behave differently from infrequent forms; they recommend using estimates from forms that have occurred only once, rather than an average over all forms in the training corpus. As well as finding new words, there are also methods aimed at learning lexical information on the fly.
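A sketch in the spirit of this character-sequence approach (not Park’s actual method): candidate strings are scored by how typical their character bigrams are of an invented reference word list, so that implausible letter combinations receive low scores:

```python
from collections import Counter

# Invented reference vocabulary standing in for corpus-derived data.
KNOWN_WORDS = ["language", "lexicon", "acquisition", "corpus",
               "meaning", "semantic", "syntax", "frequency"]

def bigram_counts(words):
    """Character-bigram counts, with '#' marking word boundaries."""
    counts = Counter()
    for w in words:
        padded = f"#{w}#"
        counts.update(zip(padded, padded[1:]))
    return counts

def plausibility(candidate, counts, total):
    """Average per-bigram probability with add-one smoothing."""
    padded = f"#{candidate}#"
    bigrams = list(zip(padded, padded[1:]))
    return sum((counts[b] + 1) / (total + len(counts))
               for b in bigrams) / len(bigrams)

if __name__ == "__main__":
    counts = bigram_counts(KNOWN_WORDS)
    total = sum(counts.values())
    for cand in ["lexing", "xqzv77"]:      # plausible vs. code-like string
        print(cand, round(plausibility(cand, counts, total), 4))
```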


Part-of-speech information is obtained using morphological information. Morphological information can also be used to identify the basic semantic category of a word, alongside other information, such as collocations. A chief motivation for thesaurus acquisition and clustering is that these will help with prediction of unseen behavior for members within a given class. There are also systems that place new word forms into a predefined inventory, such as WordNet.

Evaluation

The output of lexical acquisition systems must be evaluated, and there are two main ways in which this can be done. Firstly, the performance of the lexicon on a task can be measured. A task-based evaluation is appropriate when there is a specific task that the lexicon is being designed for; however, a ready-made ‘plug and play’ application is not always available. Acquired lexical information has therefore often been evaluated on sub-tasks; for example, selectional preferences have been evaluated on their performance at determining whether a given prepositional phrase should be attached to the preceding verb or noun phrase. Given the sentence The boy hit the man with a stick, the preference of hit with a stick is contrasted with the preference for man with a stick.

The second method is evaluation of the quality and coverage of the lexical entries. Evaluating the entries is often done by consulting a man-made resource referred to as a ‘gold standard.’ The obvious problem with using such a resource, or even a combination of several resources, is that there will be rare forms in the gold standard that are simply not attested in the corpus data used for training. Likewise, the very need for lexical acquisition means that there will be legitimate information that is omitted from a gold standard. For this reason, evaluation is also performed by examining a manually annotated sample of the text input to the acquisition process, to verify that what is acquired does indeed reflect what is attested in the corpus, and to check the adequacy of any acquired frequency estimates. Human experts can also perform a manual verification of the acquired information; this approach is used to identify errors in the output rather than to prove that all information available in the input was successfully extracted.

There are further problems when there is no widely agreed upon classification, as is the case when acquiring semantic classes, or no precise definition of the phenomena being acquired, as in the case of multiword acquisition. These problems show up as a lack of agreement between human judges. For example, if asked to determine the multiwords in:

(3) ‘They ate the chocolate cake in the wendy house.’

human judges might unanimously decide that wendy house is a multiword, but will potentially have more trouble with chocolate cake, since its interpretation is more standard.
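A minimal sketch of gold-standard evaluation for acquired entries; the entry sets are invented, and the spurious chocolate cake entry illustrates how disagreement over what belongs in the gold standard depresses measured precision:

```python
# Invented acquired lexicon and gold standard for multiword entries.
GOLD = {"post office", "red herring", "wendy house", "phrasal verb"}
ACQUIRED = {"post office", "red herring", "chocolate cake"}

def precision_recall(acquired, gold):
    """Precision and recall of acquired entries against a gold standard."""
    true_positives = len(acquired & gold)
    precision = true_positives / len(acquired) if acquired else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

if __name__ == "__main__":
    p, r = precision_recall(ACQUIRED, GOLD)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```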

Future Directions

Currently, acquisition focuses on the coverage and accuracy of extracted information. There are issues of organization and representation that received attention when lexicons were manually constructed: expressive representations with inheritance of features capture generalizations and avoid unnecessary repetition while handling exceptions. Acquisition from corpora currently focuses on acquiring one type of information as a flat list, perhaps ordered by frequency, alphabetically, or arbitrarily. While issues of representation have been seen as important in the semi-automatic construction of some lexicons from machine-readable resources (Copestake, 1992), little has been done on acquiring structured information from corpora. One reason that lexical acquisition from corpora has not used more complex representations is the widespread use of statistics and the absence of a formal model for storing probability estimates in an inheritance framework. If such a framework were developed, acquisition of information into more complex representations might help in identifying cross-lingual generalizations and in establishing correlations between different types of linguistic information, for example, linking a semantic class with syntactic behavior.

With the increasing variety of information required, lexical acquisition continues to play a crucial role within NLP. The importance is emphasized by many large-scale projects focusing on specific aspects of lexical acquisition, e.g., the Multiword Expression Project. The process of acquisition has been greatly enhanced by the vast quantities of data available in corpora, along with statistical techniques to derive the information from the data and the increasing computational power available. In recent years, new resources have proved useful, e.g., comparable corpora in different languages and the World Wide Web. In the future, further resources could be exploited: speech or image processing might provide the prosody and visual cues that humans use when acquiring vocabulary, and dialogue systems might provide feedback when acquisition has gone wrong.

See also: Acquisition of Meaning by Children; Antonymy and Incompatibility; Hyponymy and Hyperonymy; Lexical Semantics; Lexicon: Structure; Lexicon/Dictionary: Computational Approaches; Meronymy; Natural Language Understanding, Automatic; Speech Acts and AI Planning Theory; Summarization of Text: Automatic; Thesauruses; WordNet(s).

Bibliography

Baayen R H, Piepenbrock R & Gulikers L (1995). The CELEX lexical database (Release 2) [CD-ROM]. Philadelphia, PA: Linguistic Data Consortium, University of Pennsylvania.
Baayen H & Sproat R (1996). ‘Estimating lexical priors for low-frequency morphologically ambiguous forms.’ Computational Linguistics 22(2), 155–166.
Barzilay R, Reiter E & Siskind J (eds.) (2003). Proceedings of the HLT-NAACL03 workshop on learning word meaning from non-linguistic data. Edmonton, Canada.
Brent M (1993). ‘From grammar to lexicon: unsupervised learning of lexical syntax.’ Computational Linguistics 19(2), 243–262.
Briscoe E & Carroll J (1997). ‘Automatic extraction of subcategorization from corpora.’ In Proceedings of the Fifth Applied Natural Language Processing Conference. 356–363.
Copestake A (1992). ‘The ACQUILEX LKB: representation issues in the semi-automatic acquisition of large lexicons (ACQUILEX WP No. 36).’ In Proceedings of the Third Conference on Applied Natural Language Processing. Trento, Italy.
Daelemans W & Durieux G (2000). ‘Inductive lexica.’ In van Eynde F & Gibbon D (eds.) Lexicon development for speech and language processing. Dordrecht: Kluwer Academic Publishers. 115–139.
Daelemans W, Gillis S & Durieux G (1994). ‘The acquisition of stress: a data-oriented approach.’ Computational Linguistics 20(3), 421–451.
Fellbaum C (ed.) (1998). WordNet: an electronic lexical database. Cambridge, MA: MIT Press.
Finch S & Chater N (1994). ‘Learning syntactic categories: a statistical approach.’ In Oaksford M & Brown G D A (eds.) Neurodynamics and psychology. San Diego, CA: Academic Press.
Goldsmith J (2001). ‘Unsupervised learning of the morphology of a natural language.’ Computational Linguistics 27(2), 153–198.
Inkpen D & Hirst G (2001). ‘Building a lexical knowledge-base of near-synonym differences.’ In Proceedings of the Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics. Pittsburgh, PA, June.
Landes S, Leacock C & Tengi R (1998). ‘Building semantic concordances.’ In Fellbaum C (ed.) WordNet: an electronic lexical database. 199–216.
Leech G (1992). ‘100 million words of English: the British National Corpus.’ Language Research 28(1), 1–13.

Levin B (1993). English verb classes and alternations: a preliminary investigation. Chicago/London: University of Chicago Press.
Lin D (1998). ‘An information-theoretic definition of similarity.’ In Proceedings of the International Conference on Machine Learning. Madison, WI, July.
Marcus M (1995). ‘The Penn Treebank: a revised corpus design for extracting predicate-argument structure.’ In Proceedings of the ARPA Human Language Technology Workshop. Princeton, NJ.
Merlo P & Stevenson S (2001). ‘Automatic verb classification based on statistical distributions of argument structure.’ Computational Linguistics 27(3), 373–408.
NICT (2002). ‘EDR Electronic Dictionary version 2.0, technical guide.’ National Institute of Information and Communications Technology, Tokyo, Japan.
Pantel P & Lin D (2002). ‘Discovering word senses from text.’ In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Edmonton, Canada. 613–619.
Park Y (2002). ‘Identification of probable real words: an entropy-based approach.’ In Proceedings of the Workshop on Unsupervised Lexical Acquisition, 40th Meeting of the Association for Computational Linguistics. Philadelphia, PA.
Procter P (1978). Longman dictionary of contemporary English. Harlow, UK: Longman Group Ltd.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Pustejovsky J, Hanks P & Rumshisky A (2004). ‘Automated induction of sense in context.’ In Proceedings of the 20th International Conference on Computational Linguistics, COLING-2004 2, 924–930.
Resnik P (1998). ‘WordNet and class-based probabilities.’ In Fellbaum C (ed.) WordNet: an electronic lexical database. 239–264.
Resnik P & Yarowsky D (2000). ‘Distinguishing systems and distinguishing senses: new evaluation methods for word sense disambiguation.’ Natural Language Engineering 5(2), 113–133.
Sag I, Baldwin T, Bond F et al. (2002). ‘Multiword expressions: a pain in the neck for NLP.’ In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLING 2002). Mexico City, Mexico. 1–15.
Schütze H (1998). ‘Automatic word sense discrimination.’ Computational Linguistics 24(1), 97–123.
Smadja F (1993). ‘Retrieving collocations from text: Xtract.’ Computational Linguistics (Special Issue on Using Large Corpora) 19(1), 143–177.
Soanes C & Stevenson A (eds.) (2003). The Oxford dictionary of English. Oxford: Oxford University Press.

Relevant Website

http://mwe.stanford.edu – The Multiword Expression Project.


Lexical Conceptual Structure

J S Jun, Hankuk University of Foreign Studies, Seoul, Korea

© 2006 Elsevier Ltd. All rights reserved.

Introduction

The lexical conceptual structure (LCS), or simply the conceptual structure (CS), is an autonomous level of grammar in conceptual semantics (Jackendoff, 1983, 1990, 1997, 2002), in which the semantic interpretation of a linguistic expression is explicitly represented. Jackendoff’s (1983) original conception is to posit a level of mental representation in which thought is couched (cf. the language of thought in Fodor, 1975). CS is a relay station between language and peripheral systems such as vision, hearing, smell, taste, kinesthesia, etc. Without this level, we would have difficulty in describing what we see and hear.

There are two ways to view CS in formalizing a linguistic theory. One is to view CS as a nonlinguistic system that serves as an interface between meaning and nonlinguistic modalities. Then we need another level of representation for meaning (cf. Chomsky’s [1981, 1995] LF), and CS is related to the linguistic meaning by pragmatics, as shown in Figure 1. This is the view of Katz and Fodor (1963), Jackendoff (1972), Katz (1980), and Bierwisch and Schreuder (1992). The alternative conception is to view CS as the semantic structure itself. The linguistic meaning, as well as nonlinguistic information compatible with sensory and motor inputs, is directly represented in CS. CS is related to other linguistic levels such as syntax and phonology by correspondence rules, and therefore CS is part of the lexical information (hence called
LCS), as shown in Figure 2. This is the current view of conceptual semantics.

One argument that supports the latter view comes from generic judgment sentences. In the standard view of linguistic meaning, judgments of superordination, subordination, synonymy, entailment, etc., are linguistic. We judge that ‘bird’ and ‘chicken’ make a superordinate-subordinate pair; that in some dialects ‘cellar’ and ‘basement’ are synonymous; and that ‘Max is a chicken’ entails ‘Max is a bird.’ Linguistic judgments of this sort are formalized in theories such as meaning postulates (Fodor, 1975) and semantic networks (Collins and Quillian, 1969). Jackendoff (1983) points out one problem in formalizing these judgments from a purely linguistic perspective: judgments of superordination and subordination, for instance, are directly related to judgments of generic categorization sentences such as ‘A chicken is a bird.’ The judgment about generic categorization is, however, not entirely linguistic or semantic, in that it behaves creatively enough to include ambiguous cases such as (1) below.

(1a) A piano is a percussion instrument.
(1b) An australopithecine was a human.
(1c) Washoe (the chimp)’s sign system is a language.
(1d) An abortion is a murder. (Jackendoff, 1983: 102)

We make generic categorization judgments about (1) not on the basis of meaning postulates or semantic networks but on the basis of our factual, often political, world knowledge. For instance, our judgment about (1d) is influenced by our political position, religion, and knowledge about biology. This is analogous to Labov’s (1973) dubious ‘cup-bowl’ judgment, which obviously resorts to nonlinguistic encyclopedic knowledge as well as the linguistic type system.

Figure 1 CS as a nonlinguistic system (adapted from Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press, 20, with permission).


Figure 2 CS as part of the linguistic system (adapted from Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press, 21, with permission).

CS is, by definition, the level that represents encyclopedic knowledge as part of our thought. Hence, we should refer to CS to make generic categorization judgments about (1).

Jackendoff’s (1983) puzzle is summarized as follows. We make judgments of semantic properties such as superordination and subordination at the level of semantic structure. We make generic categorization judgments at the level of CS, as shown by (1). If the semantic structure were separated from CS, we would fail to capture the obvious generalization between the superordinate-subordinate judgment and the generic categorization judgment. If, by contrast, CS were the semantic structure, we would have no trouble accounting for the intuitive identity between the two judgments. Therefore, CS is the semantic structure. For more arguments to support the view that CS is the semantic structure, see Jackendoff (1983: Ch. 6).

Overview of Conceptual Semantics

Autonomy of Semantics

A central assumption in conceptual semantics is the autonomy of semantics. In Chomsky’s view of language, syntax makes an autonomous level of grammar, whereas phonology and semantics merely serve as interpretive components (PF and LF). Jackendoff (1997) criticizes this view as syntactocentric, and provides convincing arguments to support his thesis that phonology and semantics as well as syntax make autonomous levels of grammar. We find numerous pieces of evidence for the autonomy of semantics in the literature of both psycholinguistics and theoretical linguistics. Zurif

and Blumstein's (1978) pioneering work shows that Wernicke's area is the center of semantic knowledge in the brain, in comparison with Zurif, Caramazza and Myerson's (1972) previous finding that Broca's area is the center of syntactic knowledge. Swinney's (1979) classical work on lexical semantic priming shows that lexical semantics is independent of grammatical context, such as the movement chain in a sentence. Piñango, Zurif, and Jackendoff (1999) report a greater processing load for the online processing of aspectual coercion sentences (e.g., John jumped for two hours) than for the processing of syntactically equivalent noncoerced sentences (e.g., John jumped from the stage). Semantic categories are not in one-to-one correspondence with syntactic categories. For instance, all physical object concepts correspond to nouns, but not all nouns express physical object concepts; e.g., earthquake and concert express event concepts. All verbs express event/state concepts, but not all event/state concepts are expressed by verbs; e.g., earthquake and concert are nouns. Contrary to Chomsky's (1981) theta criterion, we have plenty of data showing mismatches between syntactic functions and thematic roles. For instance, the semantic interpretation of buy necessarily encodes both the transfer of money from the buyer to the seller and the transfer of the purchased entity from the seller to the buyer. Among the three semantic arguments, i.e., the buyer, the seller, and the purchased object, only the buyer and the purchased entity are syntactic arguments (e.g., John bought the book). The seller is syntactically expressed as an adjunct (e.g., John bought the book from Jill). Moreover, the buyer plays the source role of money and the target role of the purchased entity simultaneously; the seller plays the source role of the purchased entity and the


target role of money simultaneously. In short, the buyer and the seller have multiple theta roles even though each of them corresponds to one and only one syntactic entity. A simple semantic distinction often corresponds to many syntactic devices. For instance, telicity is expressed by such various syntactic devices as choice of verb (2a), choice of preposition (2b), choice of adverbial (2c), choice of determiner in the subject NP (2d) and in the object NP (2e), and choice of prepositional object (2f) (Jackendoff, 1997: 35).

(2a) John destroyed the cart (in/*for an hour). → Telic
     John pushed the cart (for/*in an hour). → Atelic
(2b) John ran to the station (in/*for an hour). → Telic
     John ran toward the station (for/*in an hour). → Atelic
(2c) The light flashed once (in/*for an hour). → Telic
     The light flashed constantly (for/*in an hour). → Atelic
(2d) Four people died (in/*for two days). → Telic
     People died (for/*in two days). → Atelic
(2e) John ate lots of peanuts (in/*for an hour). → Telic
     John ate peanuts (for/*in an hour). → Atelic
(2f) John crashed into three walls (in/*for an hour). → Telic
     John crashed into walls (for/*in an hour). → Atelic

To sum up, the mapping between syntax and semantics is not one-to-one; rather, it is one-to-many, many-to-one, or at best many-to-many. The mapping problem is not easy to explain in the syntactocentric architecture of language. The overall difficulty in treating semantics merely as an interpretive component of grammar along with a similar difficulty treating phonology as an interpretive component (cf. Jackendoff, 1997: Ch. 2) leads Jackendoff to propose a tripartite architecture of language, in which phonology, syntax, and semantics are all independent levels

of grammar licensed by phonological formation rules, syntactic formation rules, and conceptual/semantic formation rules respectively, and interfaced by correspondence rules between each pair of modules, as shown in Figure 3.

Lexical Conceptual Structure

Conceptual semantics assumes striking similarities between the organization of CS and the structural organization of syntax. As syntax makes use of syntactic categories, namely syntactic parts of speech like nouns, adjectives, prepositions, verbs, etc., semantics makes use of semantic categories or semantic parts of speech such as Thing, Property, Place, Path, Event, State, etc. As syntactic categories are motivated by each category member's behavioral properties in syntax, semantic or ontological categories are motivated by each category member's behavioral properties in meaning. Syntactic categories are combined by syntactic phrase-structure rules into larger syntactic expressions; likewise, semantic categories are combined by semantic phrase-structure rules into larger semantic expressions. The syntactic representation is structurally organized, so we can define dominance or government relations among syntactic constituents; likewise, the semantic representation is structurally organized, so we can define grammatically significant hierarchical relations among semantic constituents. Various syntactic phrase-structure rules can be generalized into a rule schema called X-bar syntax (Jackendoff, 1977); likewise, various semantic phrase-structure rules can be generalized

Figure 3 The tripartite parallel architecture (reproduced from Jackendoff R (2002). Foundations of language: brain, meaning, grammar, evolution. Oxford: Oxford University Press).


into a rule schema called X-bar semantics (Jackendoff, 1987b).

Ontological Categories

Ontological categories are first motivated by our cognitive layouts. To mention some examples from the vast psychology literature, Piaget's developmental theory of object permanence shows that infants must recognize objects as wholes and develop a sense of the permanent existence of objects even when the objects are not visible to them. Researchers in language acquisition have identified many innate constraints on language learning, such as the reference principle, object bias, the whole object principle, shape bias, and so on (cf. Berko Gleason, 1997). For instance, by the reference principle children rely on the assumption that words refer to objects, actions, and attributes in the environment. Wertheimer's (1912) classical experiment on apparent movement reveals that humans are equipped with an innate tendency to perceive the change of location as movement from one position to the other; the apparent movement experiment lends support to the expansion of the event category into function-argument structures like [Event GO ([Thing ], [Path ])]. Ontological categories also have numerous linguistic motivations. Pragmatic anaphora (exophora) provides one such motivation. In order to understand the sentence in (3), the hearer might have to pick out the referent of that among several entities in the visual field. If the hearer did not have object concepts to organize the visible entities, (s)he could not pick out the proper referent of the pragmatic anaphora that. The object concept involved in the semantic interpretation of (3) motivates the ontological category Thing.

(3) I bought that last night.

The category Thing proves useful in interpreting many other grammatical structures. It provides the basis of interpreting the Wh-variable in (4a); it supports the notion of identity in the same construction in (4b); and it supports the notion of quantification as shown in (4c). (4a) What did you buy last night? (4b) John bought the same thing as Jill. (4c) John bought something/everything that Jack bought.

Likewise, we find different sorts of pragmatic anaphora that motivate ontological categories like Place (5a), Direction (5b), Action (5c), Event (5d), Manner (5e), and Amount (5f).

(5a) Your book was here/there.
(5b) They went there yesterday.
(5c) Can he do this/that?
(5d) It happened this morning.
(5e) Bill shuffled a deck of cards this way.
(5f) The man I met yesterday was this tall.

These ontological categories provide innate bases for interpreting Wh-variables, the identity construction, and quantification, as shown in (6)–(8).

(6a) Where was my book?
(6b) Where did they go yesterday?
(6c) What can he do?
(6d) What happened this morning?
(6e) How did Bill shuffle a deck of cards?
(6f) How tall was the man you met yesterday?

(7a) John put the book on the same place as Bill.
(7b) John went the same way as Bill.
(7c) John did the same thing as Bill.
(7d) The same thing happened yesterday as happened this morning.
(7e) John shuffled a deck of cards the same way as Bill.
(7f) John is as tall as the man I met yesterday.

(8a) John put the book at some place that Bill put it.
(8b) John went somewhere that Bill went.
(8c) John did something Bill did.
(8d) Something that happened this morning will happen again.
(8e) John will shuffle cards in some way that Bill did.
(8f) (no parallel for amounts)

For more about justifying ontological categories, see Jackendoff (1983: Ch. 3).

Conceptual Formation Rules

Basic ontological categories are expanded into more complex expressions using function-argument structural descriptions. (9) shows such expansions of some ontological categories.

(9a) EVENT → [Event GO (THING, PATH)]
(9b) EVENT → [Event STAY (THING, PLACE)]
(9c) EVENT → [Event CAUSE (THING or EVENT, EVENT)]
(9d) EVENT → [Event INCH (STATE)]
(9e) STATE → [State BE (THING, PLACE)]
(9f) PLACE → [Place PLACE-FUNCTION (THING)]
(9g) PATH → [Path PATH-FUNCTION (THING)]

The function-argument expansion is exactly parallel with rewriting rules in syntax (e.g., S → NP VP; NP → Det (AP)* N; VP → V NP PP), and hence can be regarded as semantic phrase-structure rules. The semantic phrase-structure rules in (9) allow recursion, just as syntactic phrase-structure rules do: an Event category can be embedded in another Event category, as shown in (9c). We can also define hierarchical relations among conceptual categories in terms of the depth of embedding, as we define syntactic dominance or government in terms of the depth of


embedding in syntactic structures. The depth of embedding in CS plays a significant role in explaining such various grammatical phenomena as subject selection, case, binding, control, etc. See Culicover and Jackendoff (2005) for more about these issues. Place functions in (9f) may include IN, ON, TOP-OF, BOTTOM-OF, etc. Path functions in (9g) may include TO, FROM, TOWARD, VIA, etc. Conceptual semantics is a parsimonious theory, in that it makes use of only a handful of functions as conceptual primitives. All functions should be motivated on strict empirical grounds. This is exactly parallel with using only a handful of syntactic categories motivated on strict empirical grounds. Syntactic phrase-structure rules do not refer to an unlimited number of syntactic categories. Syntactic categories such as noun, adjective, preposition, verb, etc. are syntactic primitives, and they are motivated by each category member's behavioral properties in syntax. Likewise, semantic phrase-structure rules refer to a restricted set of semantic or conceptual primitives that are empirically motivated by general properties of meaning. Functions such as GO, BE, and STAY are empirically motivated in various semantic fields. They are the bases for interpreting spatial sentences in (10).


(10a) GO: The train traveled from Boston to Chicago. (10b) BE: The statue stands on Cambridge common. (10c) STAY: John remained in China.

These functions also support the interpretation of possession sentences in (11). (11a) GO: John gave the book to Bill. (11b) BE: John had no money. (11c) STAY: The library kept several volumes of the Korean medieval literature.

Interpreting ascription sentences also requires GO, BE, and STAY, as shown in (12). (12a) GO: The light turned from yellow to red. (12b) BE: The stew seemed distasteful. (12c) STAY: The aluminum stayed hard.

One interesting consequence of having GO, BE, and STAY in both spatial and nonspatial semantic fields is that we can explain how we use the same verb for different semantic fields.

(13a) The professor turned into a driveway. (Spatial)
(13b) The professor turned into a pumpkin. (Ascription)
(14a) The bus goes to Paris. (Spatial)
(14b) The inheritance went to Bill. (Possession)
(15a) John is in China. (Spatial)
(15b) John is a doctor. (Ascription)
(16a) John kept the CD in his pocket. (Spatial)
(16b) John kept the CD. (Possession)
(17a) The professor remained in the driveway. (Spatial)
(17b) The professor remained a pumpkin. (Ascription)

In (13), the verb turn is used in both spatial and ascription sentences with the GO meaning. How do we use the same verb for two different semantic fields? Do we have to assume two different lexical entries for turn? Conceptual semantics explains this puzzle at no extra cost. We do not need two different lexical entries for turn to explain the spatial and ascription meanings. We just posit the event function GO for the lexical semantic description or LCS for turn in (13). Both spatial and ascription meanings follow from the LCS for turn, since the function GO is in principle motivated by both spatial and ascription sentences. We can provide similar accounts for all the data in (14)–(17). For more about the general overview of conceptual semantics, see Jackendoff (1983, 1987a, 1990, 2002).

X-bar Semantics

Generative linguists in the 1950s and 1960s succeeded in showing the systematic nature of language with a handful of syntactic phrase-structure rules. But they were not sure how the phrase-structure rules got into language learners' minds within a relatively short period of time; it was a learnability problem. X-bar syntax (Chomsky, 1970; Jackendoff, 1977) opened a doorway to the puzzle. Children do not have to be born with dozens of syntactic categories; children are born with one syntactic category, namely, category X. Children do not have to learn dozens of totally unrelated syntactic phrase-structure rules separately; all seemingly different syntactic phrase-structure rules share a fundamental pattern, namely, X-bar syntax. Jackendoff (1987b, 1990), who was a central figure in developing X-bar syntax in the 1970s, has completed his X-bar theory by proposing X-bar semantics. We have so far observed that CS is exactly parallel with the syntactic structure. Conceptual categories are structurally organized into CS by virtue of semantic phrase-structure rules, as syntactic categories are structurally organized into syntactic structure by virtue of syntactic phrase-structure rules. (18) gives the basic format of X-bar syntax.

(18a) XP → Spec X′
(18b) X′ → X Comp
(18c) X → [±N, ±V]


Now that CS has all parallel properties with the syntactic structure, all semantic phrase-structure rules are generalized into X-bar semantics along the same lines as X-bar syntax, as shown in (19).

(19) [Entity] → [Event/Thing/Place/. . . ; Token/Type ; F(<Entity1, <Entity2, <Entity3>>>)]

(19) provides not only the function-argument structural generalization for all the semantic phrase-structure rules but also shows how major syntactic constituents correspond to major conceptual categories. That is, the linking between syntax and semantics can be formalized as (20) and (21).

(20) XP corresponds to [Entity]
(21) [X′ X0 <YP <ZP>>] corresponds to [Entity F(E1, <E2, <E3>>)]

where YP corresponds to E2, ZP corresponds to E3, and the subject (if there is one) corresponds to E1. To sum up, the obvious similarity between (18) and (19) enables us to account for the tedious linking problem without any extra cost.
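For concreteness, the correspondence in (20) and (21) can be rendered as a toy function (our own illustration, not Jackendoff's formalism) in which the head supplies the semantic function and the syntactic arguments fill E1, E2, and E3:

```python
# Toy instance of (20)-(21): the head X0 supplies the function F; the
# subject maps to E1, YP to E2, ZP to E3 (all names here are our inventions).
def correspond(head, subject=None, yp=None, zp=None):
    entities = [e for e in (subject, yp, zp) if e is not None]
    return f"[Entity {head.upper()}({', '.join(entities)})]"

# 'John ran to the station' ~ subject NP + V + PP complement:
print(correspond("go", subject="JOHN", yp="TO(STATION)"))
# [Entity GO(JOHN, TO(STATION))]
```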

General Constraints on Semantic Theories

Jackendoff (1983) suggests six general requirements that any semantic theory should fulfill: expressiveness, compositionality, universality, semantic properties, the grammatical constraint, and the cognitive constraint. First, a semantic theory must be observationally adequate; it must be expressive enough to describe most, if not all, semantic distinctions in a natural language. Conceptual semantics has expressive power, in that most semantic distinctions in a natural language can be represented by CS with a handful of conceptual categories plus conceptual formation rules. Better still, the expressive power has improved since the original conception of the theory. For instance, Jackendoff (1990: Ch. 7) introduced the action tier into the theory to represent the actor/patient relation aside from motion and location. In (22a), John is the source of the ball and the actor of the throwing event simultaneously; the ball is a moving object, the theme, and an affected entity, the patient, simultaneously. It is quite common for one syntactic entity to bear double theta roles contra Chomsky's (1981) theta criterion; conceptual semantics captures this by representing the motion/location event in the thematic tier (22b), and the actor/patient relation in the action tier (22c).

(22a) John threw the ball. (John = Source and Actor; the ball = Theme and Patient)
(22b) [Event CAUSE ([JOHN], [Event GO ([BALL], [Path TO ([ . . . ])])])]
(22c) [AFF([JOHN], [BALL])]
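To make the recursive, phrase-structure-like character of CS concrete, the following minimal sketch (our own illustrative encoding, not part of the theory) implements a few of the formation rules in (9) as constructors and builds the thematic tier (22b):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class C:
    """A conceptual constituent: an ontological category, a function
    (GO, CAUSE, TO, ...), and its arguments, themselves constituents."""
    category: str
    function: str
    args: Tuple["C", ...] = ()

    def __str__(self):
        if not self.args:
            return f"[{self.category} {self.function}]"
        return f"[{self.category} {self.function} ({', '.join(map(str, self.args))})]"

# A few formation rules from (9) as constructors:
def GO(thing, path):   return C("Event", "GO", (thing, path))     # (9a)
def CAUSE(causer, ev): return C("Event", "CAUSE", (causer, ev))   # (9c)
def TO(thing):         return C("Path", "TO", (thing,))           # (9g)
def THING(name):       return C("Thing", name)

# (22b): an Event recursively embedded in another Event, as (9c) allows.
cs_22b = CAUSE(THING("JOHN"), GO(THING("BALL"), TO(THING("..."))))
print(cs_22b)
# [Event CAUSE ([Thing JOHN], [Event GO ([Thing BALL], [Path TO ([Thing ...])])])]
```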

The action tier not only explains the fine semantic distinction in language but also plays a central role in such grammatical phenomena as linking and case. Besides the action tier, Jackendoff (1991) introduced an elaborate feature system into CS to account for the semantics of parts and boundaries; Csuri (1996) introduced the referential tier into CS, which describes the definiteness of expressions; Jackendoff (2002) introduced lambda extraction and the topic/focus tier into CS. All these and many other innovations make the theory expressive enough to account for a significant portion of natural language semantics. The second constraint on a semantic theory is compositionality: an adequate semantic theory must show how the meanings of parts are composed into the meaning of a larger expression. Conceptual semantics is compositional, in that it shows how combinatorial rules of grammar compose the meanings of ontological categories into the CS of a larger expression. The third requirement is universality: an adequate semantic theory must provide cross-linguistically relevant semantic descriptions. Conceptual semantics is not a theory of meaning for any particular language. It is a universal theory of meaning; numerous cross-linguistic studies have been conducted with the conceptual semantic formalism. See Jun (2003), for instance, for a review of many conceptual semantic studies on argument linking and case in languages such as Korean, Japanese, Hindi, Urdu, English, Old English, French, etc. The fourth requirement is semantic properties: an adequate semantic theory should be able to explain many semantic properties of language like synonymy, anomaly, presupposition, and so on. That is, any semantic theory must explicate the valid inferences of expressions. CS provides a direct solution to this problem in many ways. The type/token distinction is directly expressed in CS, and explains most semantic distinctions made by the semantic type system. By decomposing verbs such as kill into [CAUSE ([THING], [NOT-ALIVE ([THING])])], conceptual semantics explains how John killed Bill entails Bill is dead. For more about semantic properties, see Jackendoff (1983, 1990, 2002). The fifth requirement is the grammatical constraint: other things being equal, a semantic theory that explains otherwise arbitrary generalizations about the lexicon and the syntax is highly preferable. Conceptual semantics is a theory of meaning that shows how a handful of conceptual primitives organize the vast domain of lexical semantics.


Conceptual semantics also explains how semantic entities are mapped onto syntactic entities in a principled manner. For instance, the linking principle in conceptual semantics states that the least embedded argument in the CS is mapped onto the least embedded syntactic argument, namely the subject. In (22b & c), [JOHN] is the least embedded argument in both the action and thematic tiers; this explains why [JOHN] instead of [BALL] is mapped onto the subject of (22a). Jun (2003) is a conceptual semantic work on case; Culicover and Jackendoff (2005) offer conceptual semantic treatments of binding, control, and many other syntax-related phenomena. In short, conceptual semantics is an interface theory between syntax and semantics. The theory has a desirable consequence for the learnability problem, too. Language learners cannot acquire language solely by syntax or solely by semantics. As Levin (1993) demonstrates, a number of syntactic regularities are predicted by semantic properties of predicates. Conceptual semantics makes a number of predictions about syntax in terms of CS. Chomsky’s explanatory adequacy is a requirement for the learnability problem; conceptual semantics is thus a theory that aims to achieve the highest goal of a linguistic theory. The final requirement on a semantic theory is the cognitive constraint: a semantic theory should address interface problems between language and other peripheral systems like vision, hearing, smell, taste, kinesthesia, etc. Conceptual semantics fulfills this requirement, as CS is by definition a level of mental representation at which both linguistic and nonlinguistic modalities converge. Jackendoff (1987c) focuses on the interface problem, and shows, for instance, how the visual representation is formally compatible with the linguistic representation based on Marr’s (1982) theory of visual perception.
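The linking principle just stated lends itself to a direct computational gloss. In the toy sketch below (our own encoding, not the theory's formalism), a CS is a nested tuple whose first element is the function label; the least deeply embedded argument links to subject position:

```python
def argument_depths(cs, depth=0, out=None):
    """Collect the embedding depth of each leaf argument in a nested-tuple CS.
    cs[0] is the function label; the remaining elements are its arguments."""
    out = {} if out is None else out
    for part in cs[1:]:
        if isinstance(part, tuple):
            argument_depths(part, depth + 1, out)
        else:
            out.setdefault(part, depth)  # keep the shallowest occurrence
    return out

# The thematic tier of (22b): CAUSE([JOHN], GO([BALL], TO([...])))
cs_22b = ("CAUSE", "JOHN", ("GO", "BALL", ("TO", "...")))
depths = argument_depths(cs_22b)
print(depths)                       # {'JOHN': 0, 'BALL': 1, '...': 2}
print(min(depths, key=depths.get))  # JOHN: least embedded, hence the subject
```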

Comparison with Other Works

Bierwisch and Schreuder's (B&S; 1992) work presents another influential theory that makes explicit use of the term conceptual structure. Conceptual semantics shares two important assumptions with B&S, but there are crucial distinctions between the two theories. First, B&S also assume a separate level of conceptual structure. Their conception of CS is similar to Jackendoff's conception of CS in that CS is a representational system of message structure where nonlinguistic factual/encyclopedic information is expressed. B&S, however, assume that CS strictly belongs to a nonlinguistic modality, and that the linguistic meaning is represented in another level called semantic form (SF). As a result, SF, but not CS, is the object of lexical semantics, and hence LCS does not

make much sense in this theory. In the first section of this article, we discussed two possible views of CS; B&S take the former view of CS, whereas Jackendoff advocates the latter view. Second, SF in B&S's theory is compositional, as CS is in conceptual semantics. B&S's lexical decomposition relies on two sorts of elements: constants such as DO, MOVE, FIN, LOC, etc., and variables such as x, y, z. Constants and variables are composed into a larger expression in terms of formal logic. (23a) illustrates B&S's SF for enter; (23b) is the CS for the same word in Jackendoff's theory.

(23a) [y DO [MOVE y] : FIN [y LOC IN x]]
(23b) [Event GO ([Thing ], [Path TO ([Place IN ([Thing ])])])]

One reason B&S maintain a purely nonlinguistic CS as well as a separate SF is that factual or encyclopedic knowledge does not seem to make much grammatical contribution to language. To B&S, there is a clear boundary where the semantic and the encyclopedic diverge. Pustejovsky’s (1995) generative lexicon (GL) theory is interesting in this regard. GL also assumes lexical decomposition. Pustejovsky’s lexical decomposition makes use of factual or encyclopedic knowledge in a rigorous formalism called the qualia structure. The qualia structure of book, for instance, expresses such factual knowledge as the origin of book as write(x, y) in the Agentive quale, where x is a writer (i.e., human(x)), and y is a book (i.e., book(y)). The qualia structure also expresses the use of the word in the Telic quale; hence, the lexical semantic structure for book includes such factual knowledge as read(w, y), where w is a reader (i.e., human(w)), and y is a book. The factual or encyclopedic knowledge is not only expressed in formal linguistic representations but also plays a crucial role in explaining a significant portion of linguistic phenomena. We interpret (24) as either Chomsky began writing a book or Chomsky began reading a book. Pustejovsky suggests generative devices like type coercion and co-composition to explain the two readings of (24) in a formal theory; i.e., writing or reading is part of the qualia structure of book, and, hence, the two readings of (24) are predicted by formal principles of lexical semantics. (24) Chomsky began a book.
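As a deliberately simplified sketch of how type coercion can exploit the qualia structure, consider the following; the dictionary layout and helper names are our own inventions, not Pustejovsky's actual formalism.

```python
# Toy qualia structure: the telic quale gives a word's use, the
# agentive quale its mode of origin.
QUALIA = {"book": {"telic": "read", "agentive": "write"}}

def gerund(verb):
    return (verb[:-1] if verb.endswith("e") else verb) + "ing"

def coerce(verb, obj):
    """'begin' selects for an event; coerce an object NP into one
    via the telic and agentive qualia of its head noun."""
    if verb == "begin" and obj in QUALIA:
        return [f"begin {gerund(v)} a {obj}" for v in QUALIA[obj].values()]
    return [f"{verb} a {obj}"]

print(coerce("begin", "book"))
# ['begin reading a book', 'begin writing a book'] -- the two readings of (24)
```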

It is far beyond the scope of this article to discuss the GL theory in detail. But the success of the GL theory for a vast range of empirical data shows that the boundary between semantic and encyclopedic or between linguistic and nonlinguistic is not


so clear as B&S assume in their distinction between CS and SF.

Suggested Readings

For a quick overview of conceptual semantics in one paper, see Jackendoff (1987a). For foundational issues of conceptual semantics, see Jackendoff (1983). For an overview of language and other cognitive capacities from a broad perspective, see Jackendoff (1987c). Jackendoff (1990) offers a comprehensive picture of conceptual semantics. Jackendoff (1997) is a bit technical, but it is important for setting up the parallel architecture of language. For syntactic issues of conceptual semantics, see Jun (2003) and Culicover and Jackendoff (2005).

See also: Anaphora, Cataphora, Exophora, Logophoricity;

Cognitive Semantics; Constants and Variables; Frame Semantics; Generative Lexicon; Hyponymy and Hyperonymy; Lexical Meaning, Cognitive Dependency of; Meaning Postulates; Natural Semantic Metalanguage; Role and Reference Grammar, Semantics in; Semantic Primitives.

Bibliography

Berko Gleason J (ed.) (1997). The development of language. Boston: Allyn and Bacon.
Bierwisch M & Schreuder R (1992). 'From concepts to lexical items.' Cognition 42, 23–60.
Chomsky N (1970). 'Remarks on nominalization.' In Jacobs R A & Rosenbaum P S (eds.) Readings in English Transformational Grammar. Waltham: Ginn and Company. 184–221.
Chomsky N (1981). Lectures on government and binding: the Pisa lectures. Dordrecht: Foris.
Chomsky N (1995). The minimalist program. Cambridge, MA: MIT Press.
Collins A & Quillian M (1969). 'Retrieval time from semantic memory.' Journal of Verbal Learning and Verbal Behavior 9, 240–247.
Csuri P (1996). 'Generalized dependencies: description, reference, and anaphora.' Ph.D. diss., Brandeis University.
Culicover P & Jackendoff R (2005). Simpler syntax. Oxford: Oxford University Press.
Fodor J A (1975). The language of thought. Cambridge, MA: Harvard University Press.
Jackendoff R (1972). Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.

Jackendoff R (1977). X-bar syntax: a study of phrase structure. Cambridge, MA: MIT Press.
Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R (1987a). 'The status of thematic relations in linguistic theory.' Linguistic Inquiry 18, 369–411.
Jackendoff R (1987b). 'X-bar semantics.' In Pustejovsky J (ed.) Semantics and the lexicon. Dordrecht: Kluwer Academic Publishers. 15–26.
Jackendoff R (1987c). Consciousness and the computational mind. Cambridge, MA: MIT Press.
Jackendoff R (1990). Semantic structures. Cambridge, MA: MIT Press.
Jackendoff R (1991). 'Parts and boundaries.' Cognition 41, 9–45.
Jackendoff R (1997). The architecture of the language faculty. Cambridge, MA: MIT Press.
Jackendoff R (2002). Foundations of language: brain, meaning, grammar, evolution. Oxford: Oxford University Press.
Jun J S (2003). 'Syntactic and semantic bases of case assignment: a study of verbal nouns, light verbs, and dative.' Ph.D. diss., Brandeis University.
Katz J J (1980). 'Chomsky on meaning.' Language 56(1), 1–41.
Katz J J & Fodor J A (1963). 'The structure of a semantic theory.' Language 39(2), 170–210.
Labov W (1973). 'The boundaries of words and their meanings.' In Bailey C-J N & Shuy R W (eds.) New ways of analyzing variation in English, vol. 1. Washington, DC: Georgetown University Press.
Levin B (1993). English verb classes and alternations. Chicago: University of Chicago Press.
Marr D (1982). Vision. San Francisco: W. H. Freeman.
Piñango M M, Zurif E & Jackendoff R (1999). 'Real-time processing implications of aspectual coercion at the syntax-semantics interface.' Journal of Psycholinguistic Research 28(4), 395–414.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Swinney D (1979). 'Lexical access during sentence comprehension: (re)consideration of context effects.' Journal of Verbal Learning and Verbal Behavior 18, 645–659.
Wertheimer M (1912). 'Experimentelle Studien über das Sehen von Bewegung.' Zeitschrift für Psychologie 61, 161–265.
Zurif E & Blumstein S (1978). 'Language and the brain.' In Halle M, Bresnan J & Miller G A (eds.) Linguistic theory and psychological reality. Cambridge, MA: MIT Press. 229–245.
Zurif E, Caramazza A & Myerson R (1972). 'Grammatical judgments of agrammatic aphasics.' Neuropsychologia 10, 405–417.


Lexical Conditions

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

In many schools of linguistics it is assumed that each sentence S in a natural language has a so-called semantic analysis (SA), a syntactic structure representing S in such a way that the meaning of S can be read off its SA in a regular way. The SA of a sentence S is distinct from its surface structure (SS), which corresponds directly with the way S is to be pronounced. Each language has a set of rules, its grammar G, defining the relationship between the SAs and the SSs of its sentences. The SA of a sentence S is often also called its logical form, because the SA exhibits not only the predicate-argument structure of S and its embedded clauses if S has any, but also the logically correct position of tense, quantifiers, negation, modalities, and other possible operators – besides all the meaningful lexical items of the corresponding SS. SAs are thus analytical as regards their structure, not as regards their lexical items. The lexical items of SSs are in place in SAs: in principle, SAs provide an analysis that goes as far as the lexical items and stops there. SAs do not specify lexical meanings. Lexical meanings are normally specified in dictionaries, but dictionaries do so from an SS point of view. However, linguistic theories assuming an SA-level of representation for sentences require that lexical meanings be specified at SA-level. The difference is that, at SA-level, lexical items are allowed to occur only in predicate positions. A surface sentence like (1a) is represented at SA-level as (1b), written as the linear formula (1c) and read intuitively as (1d):

(1a) The farmer was not working on the land.
(1b) [tree diagram of the SA of (1a), not reproduced here; (1c) gives its linear form]
(1c) S[V[not] S[V[past] S[V[on] S[V[be] S[V[work] NP[the y S[V[farmer] NP[y]]]]] NP[the x S[V[land] NP[x]]]]]]
(1d) It is not so that in the past on the land the farmer was working.

The items not, past, on, be–ing, work, farmer, and land are all labeled 'V', which makes them predicates in (1b). In (1a), however, farmer is a noun, the past tense is incorporated into the finite verb form was, not is usually considered an adverb, working is a present participle in the paradigm of the verb work, on is a preposition, and land is again a noun. Because predicates express properties, the question is what property the predicates at issue assign to what kind of objects. Not assigns the property of being false to the proposition in its scope. (Finnish and cognate languages use verbs for the negation: 'John nots working' for 'John does not work.') Past places its proposition in a given past time. On says that the farmer's being at work is on the land. (Some American Indian languages say 'the farmer's working on-s the land,' with on as a verb.) Be–ing stretches the farmer's working out over a period of time. Farmer and land assign the property of being a farmer, or land, to the values of their variables. Thus, despite differences in surface categories, all lexical words can be regarded as predicates at SA level. Analyzing all lexical meanings as predicate meanings has the advantage of a uniform format of lexical specification for all lexical items. The format is that of a definition of satisfaction conditions or lexical conditions. The lexical conditions of an n-ary predicate Pn define the property assigned by Pn. They are the conditions that must be fulfilled by any object (or n-tuple of objects) o for o to deserve Pn, in the sense that when Pn is applied to o, a true proposition results. Thus, for example, the conditions that determine whether a sentence like This animal is a dog is true are the lexical conditions associated with the predicate dog, applied to whatever object is referred to by the definite term this animal. Only if that object fulfills the conditions that are necessary for doghood is the sentence true. Generally, the extension [[Pn]] of the predicate Pn is the set of n-tuples of world objects o that fulfill the conditions set for Pn. Or:

(1) [[Pn]] = { <o1, . . . , on> | . . . (lexical conditions) . . . }


It is important to note that the lexical conditions thus specified do not, generally, exhaust the meaning of a predicate, even though lexical conditions can be formulated with great subtlety. Meanings often have vague boundaries, which makes the formulation of lexical conditions difficult. Words are often polysemous in that they have different but related meanings, such as the word chest, which applies either to a box meant for storage or to the part of a human body that is enclosed by the ribs. Polysemy often leads to homonymy or near homonymy (again with vague boundaries),


as in the case of table (piece of furniture, slab of stone with symbols on it, or well-ordered list of data) or leaf (of a tree or of a book). Moreover, there is often dynamic filtering in word meanings, as in The office is on fire versus The office has a day off. In the former, the term the office denotes a building, in the latter a group of employees. The difference is caused by the nature of the predicate: be on fire requires a combustible object, whereas have a day off requires humans under a statute imposing duties, but how to integrate such possible referential differences into the format shown in (1) is unknown (and largely undiscussed in the literature). Then there is object dependency, as with verbs of cutting: one cuts the grass, one’s hair or nails, one’s finger, and the meat (though cutting one’s finger is very different from cutting the meat); one trims the hedge and the dog, and sometimes one’s hair also; one tailors a suit (German: schneiden); one gelds a horse (French: couper), etc. It is such phenomena that make it hard to use the format shown in (1) for the practical purposes of dictionaries. In one respect, the format of (1) can be refined. Presuppositions are naturally accounted for by making a distinction between two kinds of lexical conditions, preconditions and update conditions (see Presupposition). Presuppositions are derivable from the preconditions of SA-predicates (see Fillmore, 1971; Seuren, 1985: 266–313). Consider the predicate be divorced. For someone to be divorced, they must have been married first. Or the predicate be back: for someone to be back, they must have been away first. The conditions of having been married first or having been away first are the preconditions of these predicates. The condition that the marriage has been dissolved, or that the person in question is no longer away, is the update condition. When a precondition is not fulfilled, the sentence suffers from presupposition failure, a condition that, according to some (in particular Strawson, 1950),

leads to a lack of truth value and, according to others (Blau, 1978; Seuren, 1985), to a third truth value, strong or 'radical' falsity. If an update condition is not fulfilled, the sentence is simply, or minimally, false. In presupposition theory, the lexical conditions of a predicate Pn can thus be presented in the following general format:

(2) [[Pn]] = { <o1, . . . , on> : . . . (preconditions) . . . | . . . (update conditions) . . . }
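The general format in (2) invites a direct computational reading. The following minimal sketch (ours, not Seuren's formalism) evaluates a predicate by checking preconditions before update conditions, returning the three values just discussed:

```python
# Minimal three-valued evaluation under the precondition/update split:
# a failed precondition yields presupposition failure ('radically false');
# a failed update condition yields plain ('minimal') falsity.
def evaluate(preconditions, update_conditions, o):
    if not all(pre(o) for pre in preconditions):
        return "radically false"
    if not all(upd(o) for upd in update_conditions):
        return "minimally false"
    return "true"

# be divorced, in the spirit of the example that follows: precondition
# 'o was married', update condition 'o's marriage has been dissolved'.
be_divorced = (
    [lambda o: o.get("was_married", False)],
    [lambda o: o.get("marriage_dissolved", False)],
)

print(evaluate(*be_divorced, {"was_married": True, "marriage_dissolved": True}))
# true
print(evaluate(*be_divorced, {"was_married": True, "marriage_dissolved": False}))
# minimally false
print(evaluate(*be_divorced, {"was_married": False}))
# radically false
```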

This format is exemplified by the following specification for be divorced:

(3) [[be divorced1]] = { o : o was married | o's marriage has been legally dissolved }

Or: 'the extension of the predicate be divorced is the set of entities o such that o (precondition) was married, and (update condition) o's marriage has been legally dissolved'.

See also: Cognitive Semantics; Discourse Domain; Discourse Semantics; Human Reasoning and Language Interpretation; Lexical Meaning, Cognitive Dependency of; Multivalued Logics; Nonmonotonic Inference; Presupposition; Representation in Language and Mind; Selectional Restrictions.

Bibliography

Blau U (1978). Die dreiwertige Logik der Sprache. Ihre Syntax, Semantik und Anwendung in der Sprachanalyse. Berlin: De Gruyter.
Fillmore C J (1971). 'Types of lexical information.' In Steinberg D & Jakobovits L (eds.) Semantics. An interdisciplinary reader in philosophy, linguistics and psychology. Cambridge: Cambridge University Press. 370–392.
Seuren P A M (1985). Discourse semantics. Oxford: Blackwell.
Strawson P F (1950). 'On referring.' Mind 59, 320–344.

Lexical Fields

P Lutzeier, University of Surrey, Surrey, UK

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Lexical fields have an immediate intuitive appeal. The reference to one or two examples like the field of verbs of motion walk, run, skip, . . . or the field of adjectives of emotion happy, angry, disappointed, . . . is normally enough to give the feeling that one knows

what we are talking about. A widespread curiosity about words also helps, not least in the context of being a parent and trying to collect the child's first words. At the same time, one cannot help noticing that textbooks very rarely go beyond the mere mention of lexical fields in the form of a few examples, plus perhaps some critical remarks about the apparent lack of rigor around the concept. In other words, the intuitive strength of the concept may go together with some theoretical vagueness. Nonetheless, what remains at this stage is the widespread appeal of the


concept and the undoubtedly successful application of the concept in several disciplines:

• Lexicology, Semantics and Cognitive Linguistics. Lexical fields are a useful tool for holistic approaches to lexical meaning, structures of the vocabulary and the mental lexicon, as well as issues around categorization.
• Lexicography. The codification of the vocabulary of a language can be done in several different formats, and the organization of entries around lexical fields is one of them and leads to specialized dictionaries.
• Psycholinguistics. Lexical fields are employed in connection with word memory tests and explorations of language acquisition and language loss.
• Anthropology. Lexical fields are a useful tool in fieldwork on the language and culture of societies. This remains a major area in the context of globalization and 'Global' English, and the concern about endangered languages.
• Medical Neuroscience and Clinical Linguistics. Lexical fields are used for the investigation of different forms of aphasia.

In addition to the term 'lexical field,' there are other terms in use, such as 'word field' and 'semantic field'; but we shall confine ourselves to lexical field, which provides greater flexibility because, in contrast to word field, it implies that the relevant groupings involve lexical elements, and these are not necessarily confined to words. At least in theory, idioms can be contemplated as possible members of such groupings. We also prefer lexical to semantic because the relevant groupings are parts of the lexicon, and its elements will consist of a form level as well as of a content level.

Background

Lexical fields contribute to structuring the lexicon and to the exploration of lexical meaning. Although the lexical meaning of any member of the lexicon must be seen as a holistic entity, this does not preclude its conception as something internally structured. This structure must make provision for phenomena such as monosemy and polysemy; and, for each individual sense, phenomena such as prototypicality, stereotypes, and family resemblances need to be incorporated. In addition, the outer boundaries of the lexical meaning/senses of any member of the lexicon will be established by finding its unique position in the content plane of the lexicon. This happens in contradistinction to other similar lexical meanings along the paradigmatic dimension and in connection with other different, but compatible lexical meanings along the syntagmatic dimension. The paradigmatic dimension is mainly captured by membership in the

same lexical fields and by means of sense relations, but also partly by associations. The syntagmatic dimension is mainly captured by collocations, but also partly by associations. Whichever structure one adopts for the lexical meaning, it cannot be a static one. One has to take into account the dynamics of the lexicon, which expresses itself at the synchronic level in the form of variation and at the diachronic level in the form of change. Such a fascinating complexity of the lexicon of any natural language allows for many different ways of ordering the lexicon, and any particular way of doing so can only capture certain aspects of such a complex system of systems. In a sense, lexical fields themselves have to find their own unique position in the vast field of all possible orderings of the lexicon, and will, in any case, only be able to capture one particular type of ordering. Therefore, stretching and extending the concept beyond its established conception will have to be looked at carefully and may not be the right way forward, especially when we have other concepts such as word families and frames.

The Concept of Lexical Field

Any concept of lexical fields will try to capture the following basic ideas and principles:

• Fields have a position somewhere between the individual lexical element and the whole lexicon, i.e., fields build relevant parts of the lexicon and make a contribution to the structuring of the lexicon.
• Fields and individual words have in common that they are part of the lexicon. Fields and the lexicon have in common that they are constituted from words.
• Fields are (higher level) signs and therefore comprise a form level as well as a content level.
• Each element of the field receives its position in contradistinction and interconnection with other elements of the field. In other words, fields help to establish the senses of individual elements and therefore have to be seen as part of a semasiological approach.
• Each lexical field deals with a particular conceptual domain and therefore can be seen as part of an onomasiological approach.

With regard to the form level of a lexical field, lexical fields are particular paradigmatic groupings within the lexicon, i.e., their elements belong in each case to the same parts of speech. Although examples of lexical fields can be found for any part of speech, the most useful ones in terms of size and structures are found for nouns, verbs, and adjectives. Apart from possible problems with classification in terms of parts of speech, this does not preclude idioms as


special lexical elements from being members of lexical fields. Because there are other groupings such as word families and frames that allow links between members from different parts of speech, there is no problem in keeping lexical fields to elements of one and the same part of speech. The fact that we see lexical fields as 'particular' paradigmatic groupings has to do with the way that the form level is linked to the content level. If it is true that each lexical field is meant to deal with a particular conceptual domain, then all members of a lexical field will have to relate to this particular domain and therefore will all be similar in terms of their senses. The domain will normally be labeled by a specific term such as 'body part,' 'fruit,' 'transfer,' 'temperature,' etc., and this allows us to talk of 'the lexical field of body part nouns,' 'the lexical field of nouns of fruit,' 'the lexical field of verbs of transfer,' 'the lexical field of adjectives of temperature,' etc. The term that captures the domain provides the semantic framework for the lexical field, and each member of this specific field must have a sense that is compatible with the meaning of the domain term, i.e., a sense identical to or more specific than the meaning of the term. In this way a lexical field also constitutes a special onomasiological grouping, namely one whose members all belong to the same part of speech. This may well mean that the field does not have a member whose sense is identical with the meaning of the domain term. For instance, in the field of English adjectives of temperature, we do not have an adjective that would match the meaning of the noun 'temperature.' The membership of a lexical field structures the given domain, and one needs some tools to describe the particular structure. The resulting description should guarantee the unique position in some kind of semantic space for each member of the lexical field. It is wise to apply a combination of dimensions and sense relations. The idea is to attach a finite number of dimensions to the field. Each dimension is meant to provide a partition of the whole set. The resulting sets are therefore disjoint from each other, and the name of each set normally reflects a necessary part of the relevant sense of the element. In other words, a paraphrase of the relevant sense of the element would normally involve the partition name. It has to be stressed that the idea of such a name is similar, but not identical, to traditional features of a componential analysis and that we are talking metalinguistically of a partition of the set of elements, not of anything out there in reality. For instance, take the trivial paradigm P = {Venus, morning star, evening star}. All three nouns refer to the same entity: Venus. In

line with Frege's thoughts we could establish a dimension 'Time of occurrence in the sky.' This dimension would divide the paradigm into three sets: S1 = {morning star}, S2 = {evening star}, S3 = {Venus}, with the corresponding names N1 = 'the brightest star in the morning sky,' N2 = 'the brightest star in the evening sky,' N3 = 'neutral.' Furthermore, there is no claim to the effect that the sum of all names of all sets of which a particular element is a member would constitute its sense with respect to the given domain. Also, the fact that paraphrases may vary accounts for the dynamic element of any description, and therefore one cannot necessarily expect a unique set of dimensions for the description of the lexical field. This does not constitute a weakness of the notion but reflects an important phenomenon of natural language in general. Guiding principles for the choice amongst possible candidates are: (1) those dimensions are preferable that result in sets with several members, and (2) those dimensions are preferable that result in cross-classification. The familiar tool of sense relations can act as a complement to the tool of dimensions. As far as lexical fields are concerned, the hyponymy-relation and the incompatibility-relation tend to be the most useful ones. The analysis of the field is successful and complete when the net of links by means of these sense relations, always relative to the given domain, plus the relevant names constitute a unique position for each member of the field. An obvious exception is the case of synonyms with regard to the given domain. The interplay between dimensions and sense relations in forming the semantic space helps in several ways. It frees us from the often difficult task of expressing the unique position by means of an explicit link to necessary parts of the sense, whereas sense relations can often make a valuable contribution by means of more implicit links. The reliance on sense relations also allows an elegant solution to the supposedly thorny problem of completeness for lexical fields. How can you decide whether a particular element should be a member of the field or not? There is an obvious answer in terms of the sense relations: completeness is achieved once one has closure in terms of the sense relations relative to the given domain. Earlier on, we stressed that the senses of a lexical element will be defined through paradigmatic and syntagmatic links. In other words, it must be accepted that lexical fields as paradigmatic groupings can in most cases only make a partial contribution to the senses of their members. This again is an acknowledgment of the richness of the lexicon of any natural language rather than the admission of a weakness in the concept of lexical field.
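The idea of a dimension as a partition can be made concrete with a small sketch; the encoding below is our own rendering of the Venus example, not part of lexical field theory proper.

```python
# A dimension is a function from field members to partition names.
field = {"Venus", "morning star", "evening star"}

def time_of_occurrence(word):
    return {
        "morning star": "the brightest star in the morning sky",
        "evening star": "the brightest star in the evening sky",
        "Venus": "neutral",
    }[word]

# The dimension partitions the field into disjoint sets S1, S2, S3:
partition = {}
for member in field:
    partition.setdefault(time_of_occurrence(member), set()).add(member)

print(partition)
# {'the brightest star in the morning sky': {'morning star'},
#  'the brightest star in the evening sky': {'evening star'},
#  'neutral': {'Venus'}}
```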


Relevance of Lexical Fields

In the 1930s, Jost Trier, the true founder of lexical field theory, evoked for lexical fields the idea of a mosaic within the content plane. This idea was discredited in the 1960s and 1970s because of its unfortunate link with Aristotelian/structuralist ideas of categories. As soon as this link was shown not to be a necessary one and lexical fields were taken to be groupings of the kind described here, the old idea of a mosaic could gain new strength again. In this context, lexical field theory can make the legitimate claim to be a forerunner of Cognitive Linguistics. Many syntactic and semantic theories and concepts come and go. Against this pattern, lexical field theory and the concept of lexical field have proven to be remarkably stable. What is urgently needed is the will to engage in more practical descriptions of vast lexical fields across several languages, because such information would provide useful data for typological comparisons across different languages and cultures. Lexical fields are also witness to the fact that natural languages exist and change as a result of dynamic (field) forces. It goes without saying that any linguistic theory worthy of its name must take account of such changes and therefore cannot afford to ignore the concept of lexical field.

See also: Acquisition of Meaning by Children; Antonymy

and Incompatibility; Category-Specific Knowledge; Cognitive Semantics; Collocations; Componential Analysis; Disambiguation; Hyponymy and Hyperonymy; Lexical Conceptual Structure; Lexical Semantics; Lexicology; Onomasiology and Lexical Variation; Semantic Change; Semantic Maps; Sound Symbolism; Speech Act Verbs; Taboo Words; Thesauruses.

Bibliography

Coseriu E (1973). Einführung in die strukturelle Betrachtung des Wortschatzes (2 edn.). Tübingen: Gunter Narr.
Geckeler H (1971). Strukturelle Semantik und Wortfeldtheorie. Munich: Fink Verlag.

Geckeler H (2002). 'Anfänge und Ausbau des Wortfeldgedankens.' In Cruse D A, Hundsnurscher F, Job M et al. (eds.) Lexicology: an international handbook on the nature and structure of words and vocabularies, vol. 1. Berlin: Walter de Gruyter. 713–728.
Gloning T (2002). 'Ausprägungen der Wortfeldtheorie.' In Cruse D A, Hundsnurscher F, Job M et al. (eds.) Lexicology: an international handbook on the nature and structure of words and vocabularies, vol. 1. Berlin: Walter de Gruyter. 728–737.
Jones W J (1990). German kinship terms (750–1500): documentation and analysis. Berlin: Walter de Gruyter.
Lehrer A (1974). Semantic fields and lexical structure. Amsterdam: North Holland.
Lehrer A (1985). 'The influence of semantic fields on semantic change.' In Fisiak J (ed.) Historical semantics/historical word-formation. Berlin: Mouton. 283–296.
Lutzeier P R (1981). Wort und Feld. Wortsemantische Fragestellungen mit besonderer Berücksichtigung des Wortfeldbegriffes. Tübingen: Max Niemeyer Verlag.
Lutzeier P R (ed.) (1993). Studien zur Wortfeldtheorie/Studies in lexical field theory. Tübingen: Max Niemeyer Verlag.
Lutzeier P R (1995). Lexikologie. Ein Arbeitsbuch. Tübingen: Stauffenburg Verlag.
Lutzeier P R (2005). 'Die Wortfeldtheorie unter dem Einfluss des Strukturalismus.' In Auroux S, Koerner E F, Niederehe H-J et al. (eds.) History of the language sciences: an international handbook on the evolution of the study of language from the beginnings to the present, vol. 3. Berlin: Walter de Gruyter.
Lyons J (1968). Introduction to theoretical linguistics. Cambridge: Cambridge University Press.
Pottier B (1963). Recherches sur l'analyse sémantique en linguistique et en traduction mécanique. Nancy: Université de Nancy.
Schmidt L (ed.) (1973). Wortfeldforschung. Zur Geschichte und Theorie des sprachlichen Feldes. Darmstadt: Wissenschaftliche Buchgesellschaft.
Seiffert L (1968). Wortfeldtheorie und Strukturalismus. Studien zum Sprachgebrauch Freidanks. Stuttgart: Kohlhammer Verlag.
Trier J (1973). Der deutsche Wortschatz im Sinnbezirk des Verstandes. Die Geschichte eines sprachlichen Feldes. Band 1 (Von den Anfängen bis zum Beginn des 13. Jahrhunderts) (2 edn.). Heidelberg: Carl Winter.

Lexical Meaning, Cognitive Dependency of

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

It is often thought or implicitly assumed, even in circles of professional semanticists, that predicate meanings, as codified in their satisfaction conditions

(see Lexical Conditions), are lexically fixed in such a way that they automatically produce truth or falsity when applied to appropriate reference objects. This assumption is unfounded. In many, perhaps most, cases, the satisfaction conditions imply an appeal to nonlinguistic knowledge, so that the truth and falsity of assertive utterances are not the product of mere linguistic compositional computation, but are


codetermined by nonlinguistic knowledge, either of a general encyclopedic or of a context-bound, situational nature. An obvious case is provided by a large class of gradable adjectival predicates, such as expensive, old, and large, whose applicability depends on (preferably socially recognized) standards of cost, age, and size, respectively, for the objects denoted by their subject terms. The description of such standards is not part of the description of the language concerned, but of (socially shared) knowledge. Further obvious examples are ‘possession’ predicates, such as English have, lack, and with(out), and whatever lexical specification is needed for genitives, datives, and possessive pronouns. These clearly require general encyclopedic knowledge for their proper interpretation. Consider the following examples: (1a) This hotel room has a bathroom. (1b) This student has a supervisor.

For (1a) to be true, it is necessary that there be one unique bathroom directly connected with the room in question, whose use is reserved for the occupants of that room. When the room carries a notice that its bathroom is at the end of the corridor to the right, while the same bathroom serves all the other rooms in the corridor, (1a) is false – not just misleading but false, as any judge presiding over a court case brought by a dissatisfied customer will agree. But for (1b) to be true, no such uniqueness relation is required, as one supervisor may have many students to look after. This is not a question of knowing English, but of knowing about the world as it happens to be. The same goes for the parallel sentences: (2a) This is a hotel room with a bathroom. (2b) This is a student with a supervisor.
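One way to picture such a knowledge dependency, offered here purely as an illustrative sketch rather than a proposal from the article, is to let the satisfaction conditions of have consult a small base of world knowledge about what normally holds between the two kinds of objects:

```python
# World knowledge (assumed for illustration): does possession of kind B
# by kind A normally require a uniqueness relation?
KNOWLEDGE = {
    ("hotel room", "bathroom"): {"unique": True},   # one room, its own bathroom
    ("student", "supervisor"):  {"unique": False},  # one supervisor, many students
}

def have(a_kind, b_kind, shared_with_others):
    norm = KNOWLEDGE.get((a_kind, b_kind))
    if norm is None:
        return None                 # no known relation between the two kinds
    if norm["unique"] and shared_with_others:
        return False                # (1a) is false for a shared bathroom
    return True

print(have("hotel room", "bathroom", shared_with_others=True))  # False
print(have("student", "supervisor", shared_with_others=True))   # True: (1b)
```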

Possession predicates, therefore, must be specified in the lexicon as involving an appeal to what is normally the case regarding their term referents. They express a well-known relation of appurtenance between the kind of object referred to in subject position and the kind of object referred to in object position. The semantic description (satisfaction conditions) of have and other possessive predicates is thus taken to contain a parameter for 'what is well-known,' making the interpretation of this predicate in each token occurrence truth-conditionally dependent on world knowledge. Not all possession predicates are subject to the same conditions. Possessive pronouns, for example, may express a relation of 'being responsible for' or 'taking care of,' which other possession predicates cannot express. An example is sentence (3) uttered by a gardener with regard to the flower beds he is tending:

(3) Please don't mess up my flower beds.

This sentence can be uttered appropriately without the speaker implying that the flower beds are owned by him. Many such examples can be given. Consider the predicate flat said of a road, a tire, a mountain, a face, or the world. There is an overall element 'spread out, preferably horizontally, without too much in the way of protrusions or elevations,' but that in itself is insufficient to determine what 'being flat' amounts to in these cases. The full meaning comes across only if it is known what roads, tires, mountains, faces, and the world are normally thought to be like. Dictionaries, even the best ones, limit themselves to giving examples, hoping that the user will get the hint. Another example is the predicate fond of, as in:

(4a) John is fond of his dog.
(4b) John is fond of cherries.
(4c) John is fond of mice.

In (4a), obviously, John's fondness is of a rather different nature from what is found in (4b): the fondness expressed in the one is clearly incompatible with the fondness expressed in the other. The fondness of (4c) can be either of the kind expressed in (4a) or of the kind expressed in (4b). The common element in the status assigned to the object-term referents is something like 'being the object of one's affection or of one's pleasure,' but again, such a condition is insufficient to determine full interpretation. Cognitive dependency is an essential aspect in the description of predicate meanings. The fact that some predicate meanings contain a parameter referring to an available nonlinguistic but language-independent, cognitive knowledge base means that neither utterance-token interpretation nor sentence-type meaning is compositional in the accepted sense of being derivable by (model-theoretic) computation from the linguistic elements alone. As regards utterance-token interpretation, this is already widely accepted, owing to valuable work done in pragmatics. The noncompositionality of sentence-type meaning, defined at the level of language description, is likewise beginning to be accepted by theorists of natural language. This type-level noncompositionality does not mean, however, that the specification of the satisfaction conditions of predicates is not truth-conditional, only that standards embodied in socially accepted knowledge have become part of the truth conditions of sentences in which the predicate occurs. In most treatises on lexicology, the term polysemy is used for phenomena such as those presented above.


At the same time, however, it is widely recognized that this is, in fact, little more than a term used to give the problem a name. The problem itself lies in the psychology of concepts. One may assume that there are socially shared concepts like 'possession,' 'flatness,' and 'fondness,' but it is not known in what terms such concepts are to be defined. In a general sense, Fodor (1975, 1998) is probably right in insisting that lexical meanings are direct reflexes of concepts that have their abode in cognition but outside language. The necessary and sufficient conditions taken to define the corresponding lexical meanings cannot, according to Fodor, be formulated in natural language terms, but must be formulated in a 'language of thought,' which is categorically different from any natural language and whose terms and combinatorial properties will have to be established as a result of psychological theorizing. It is clear, in any case, that phenomena like those shown in (1)–(4) pose a serious threat to any attempt at setting up a model-theoretic theory of lexical meaning, such as Dowty (1979): the neglect of the cognitive factor quickly becomes fatal in lexical semantics.

Context-bound or situational knowledge plays a role in the interpretation of predicates that involve a 'viewpoint' or 'perspective,' such as the pair come and go, or predicates such as to the right (left) of, in front of, and behind. The two versions of (5) are truth-conditionally identical, but they differ semantically in that the 'mental camera,' so to speak, has stayed in the corridor in the went version, but has moved along with Dick into the office in the came version.

(5) Dick and Harry were waiting in the corridor. Then Dick was called into the office. After five minutes, Harry [went/came] in too.

In similar manner, the sentences (6a) and (6b) may describe the same situation, but from different points of view. In (6a), schematically speaking, the viewer, the tree, and the statue are in a straight line; in (6b), it is the viewer, the tree, and the fountain that are in a straight line:

(6a) There was a statue behind the tree, and a fountain to the left of the tree.
(6b) There was a fountain behind the tree, and a statue to the right of the tree.

A further cognitive criterion for the lexical meaning of predicates, especially those denoting artifacts, seems to be the function of the objects denoted. What defines a table or a chair is not their physical shape or the material they are made of, but their socially recognized function. The same holds for a concept like 'luxury.' Laws imposing special taxation on luxury goods or luxury activities usually enumerate the goods and activities in question, making exceptions for special cases (such as frock coats for undertakers). Yet what defines luxury is not a list of goods or activities, but socially recognized function – roughly, anything relatively expensive and exceeding the necessities of life. A peculiar example of cognitive dependency, probably based on function, is provided by the English noun threshold and its Standard German translation equivalent Schwelle. In their normal uses, they denote the ridge or sill usually found between doorposts at floor level. Yet these two words differ in their capacity for semantic extension: the elevations in roads and streets that are normally called speed bumps in English are called Schwelle in German. Yet it is unthinkable that speed bumps should be called thresholds in English. The question is: why? One is inclined to think that, at some ill-understood level of interpretation, the word threshold implies containment within a space or a transition from one kind of space to another, perhaps as a result of its etymology (which is not fully known). Schwelle, by contrast, is a swelling in the ground that forms an obstacle to be got over – which is also its etymology, although, on the whole, German speakers do not realize that. The difference between the two words is not a question of the ontological properties of the objects concerned, but, apparently, of the ways they are conceived of. The role of etymology in this case is intriguing.

See also: Acquisition of Meaning by Children; Category-Specific Knowledge; Cognitive Semantics; Componential Analysis; Concepts; Field Work Methods in Semantics; Frame Semantics; Human Reasoning and Language Interpretation; Lexical Conceptual Structure; Lexical Conditions; Lexical Semantics; Mentalese; Montague Semantics; Polysemy and Homonymy; Psychology, Semantics in; Representation in Language and Mind; Thought and Language; Virtual Objects.

Bibliography

Dowty D (1979). Word meaning and Montague grammar. Dordrecht: Reidel.
Fodor J A (1975). The language of thought. Hassocks, Sussex: Harvester Press.
Fodor J A (1998). Concepts: where cognitive science went wrong. New York: Oxford University Press.


Lexical Semantics

J Pustejovsky, Brandeis University, Waltham, MA, USA

© 2006 Elsevier Ltd. All rights reserved.

Word Knowledge

Semantic interpretation requires access to knowledge about words. The lexicon of a grammar must provide a systematic and efficient way of encoding the information associated with words in a language. Lexical semantics is the study of what words mean and how they structure these meanings. This article examines word meaning from two different perspectives: the information required for composition in the syntax and the knowledge needed for semantic interpretation. The lexicon is not merely a collection of words with their associated phonetic, orthographic, and semantic forms. Rather, lexical entries are structured objects that participate in larger operations and compositions, both enabling syntactic environments and acting as signatures to semantic entailments and implicatures in the context of larger discourse. There are four basic questions in modeling the semantic content and structure of the lexicon: (1) What semantic information goes into a lexical entry? (2) How do lexical entries relate semantically to one another? (3) How is this information exploited compositionally by the grammar? and (4) How is this information available to semantic interpretation generally? This article focuses on the first two.

The lexicon and lexical semantics have traditionally been viewed as the most passive modules of language, acting in the service of the more dynamic components of the grammar. This view has its origins in the generative tradition (Chomsky, 1955) and has been an integral part of the notion of the lexicon ever since. While the Aspects model of selectional features (Chomsky, 1965) restricted the relation of selection to that between lexical items, work by McCawley (1968) and Jackendoff (1972) showed that selectional restrictions must be available to computations at the level of derived semantic representation rather than at deep structure. Subsequent work by Bresnan (1982), Gazdar et al. (1985), and Pollard and Sag (1994) extended the range of phenomena that can be handled by the projection and exploitation of lexically derived information in the grammar. With the convergence of several areas in linguistics (lexical semantics, computational lexicons, and type theories), several models for the determination of selection have emerged that put even more compositional power in the lexicon, making explicit reference to the paradigmatic systems that allow for grammatical constructions to be partially determined by selection. Examples of this approach are generative lexicon theory (Bouillon and Busa, 2001; Pustejovsky, 1995) and construction grammar (Goldberg, 1995; Jackendoff, 1997, 2002). These developments have helped to characterize the approaches to lexical design in terms of a hierarchy of semantic expressiveness. There are at least three such classes of lexical description: sense enumerative lexicons, where lexical items have a single type and meaning, and ambiguity is treated by multiple listings of words; polymorphic lexicons, where lexical items are active objects, contributing to the determination of meaning in context, under well-defined constraints; and unrestricted sense lexicons, where the meanings of lexical items are determined mostly by context and conventional use. Clearly, the most promising direction seems to be a careful and formal elucidation of the polymorphic lexicons, and this will form the basis of the subsequent discussion of both the structure and the content of lexical entries.

Historical Overview

The study of word meaning has occupied philosophers for centuries, beginning at least with Aristotle's theory of meaning. Locke, Hume, and Reid all paid particular attention to the meanings of words, but not until the 19th century did the rise of philological and psychological investigations of word meaning occur, with Bréal (1897), Erdmann (1900), Trier (1931), Stern (1931/1968), and others focusing on word connotation, semantic drift, and word associations in the mental lexicon as well as in social contexts. Interestingly, Russell, Frege, and other early analytic philosophers were not interested in language as a linguistic phenomenon but simply as the medium through which judgments can be formed and expressed. Hence, there is little regard for the relations between senses of words, when not affecting the nature of judgment, for example, within intensional contexts. Nineteenth-century semanticists and semasiologists, on the other hand, viewed polysemy as the life force of human language. Bréal, for example, considered it to be a necessary creative component of language and argued that this phenomenon better than most in semantics illustrates the cognitive and conceptualizing force of the human species. Even with their obvious enthusiasm, the semasiologists produced no lasting legacy to the study of lexical semantics. In fact, there was no systematic research into lexical meaning until structural linguists extended the relational techniques of Saussure (1916/1983) and elaborated the framework of componential analysis for language meaning (Jakobson, 1970).


The idea behind componential analysis is the reduction of a word’s meaning into its ultimate contrastive elements. These contrastive elements are structured in a matrix, allowing for dimensional analysis and generalizations to be made about lexical sets occupying the cells in the matrix. This technique developed into a general framework for linguistic description called distinctive feature analysis (Jakobson and Halle, 1956). This is essentially the inspiration for Katz and Fodor’s 1963 theory of lexical semantics within transformational grammar. In this theory, usually referred to as ‘markerese,’ a lexical entry in the language consists of grammatical and semantic markers and a special feature called a ‘semantic distinguisher.’ In Weinreich (1972) and much subsequent discussion, it was demonstrated that this model is far too impoverished to characterize the compositional mechanisms inherent in language. In the late 1960s and early 1970s, alternative models of word meaning emerged (Fillmore, 1965; Gruber, 1965; Jackendoff, 1972; Lakoff, 1965) that respected the relational structure of sentence meaning while encoding the named semantic functions in lexical entries. In Dowty (1979), a model theoretic interpretation of the decompositional techniques of Lakoff, McCawley, and Ross was developed. Recently, the role of lexical–syntactic mapping has become more evident, particularly with the growing concern over projection from lexical semantic form, the problem of verbal alternations and polyvalency, and the phenomenon of polysemy.

Ambiguity and Polysemy

Given the compactness of a lexicon relative to the number of objects and relations in the world, and the concepts we have for them, lexical ambiguity is inevitable. Add to this the cultural, historical, and linguistic blending that contributes to the meanings of our lexical items, and ambiguity can appear arbitrary as well. Hence, 'homonymy' – where one lexical form has many meanings – is to be expected in a language. Examples of homonyms are illustrated in the following sentences:

(1a) Mary walked along the bank of the river.
(1b) Bank of America is the largest bank in the city.
(2a) Drop me a line when you are in Boston.
(2b) We built a fence along the property line.
(3a) First we leave the gate, then we taxi down the runway.
(3b) John saw the taxi on the street.
(4a) The judge asked the defendant to approach the bar.
(4b) The defendant was in the pub at the bar.

Weinreich (1964) calls such lexical distinctions 'contrastive ambiguity,' where it is clear that the senses associated with the lexical item are unrelated. For this reason, it is generally assumed that homonyms are represented as separate lexical entries within the organization of the lexicon. This accords with a view of lexical organization that has been termed a 'sense enumeration lexicon' (cf. Pustejovsky, 1995). That is, a lexicon is sense enumerative when every word ω that has multiple senses stores these senses as separate lexical entries. This model becomes difficult to maintain, however, when we consider the phenomenon known as 'polysemy.' Polysemy is the relationship that exists between different senses of a word that are related in some logical manner rather than arbitrarily, as in the previous examples. We can distinguish three broad types of polysemy, each presenting a novel set of challenges to lexical semantics and linguistic theory:

a. Deep semantic typing: single argument polymorphism
b. Syntactic alternations: multiple argument polymorphism
c. Dot objects: lexical reference to objects that have multiple facets

The first class refers mainly to functors allowing a range of syntactic variation in a single argument. For example, aspectual verbs (begin and finish), perception verbs (see, hear), and most propositional attitude verbs (know, believe) subcategorize for multiple syntactic forms in complement position, as illustrated in (5)–(7):

(5a) Mary began to read the novel.
(5b) Mary began reading the novel.
(5c) Mary began the novel.
(6a) Bill saw John leave.
(6b) Bill saw John leaving.
(6c) Bill saw John.
(7a) Mary believes that John told the truth.
(7b) Mary believes what John said.
(7c) Mary believes John's story.

What these and many other cases of multiple selection share is that the underlying relation between the verb and each of its complements is essentially identical. For example, in (7), the complement to the verb believe in all three sentences is a proposition; in (5), what is begun in each sentence is an event of some sort; and in (6), the object of the perception is (arguably) an event in each case. This has led some linguists to argue for semantic selection (cf. Chomsky, 1986; Grimshaw, 1979) and others to argue for structured selectional inheritance (Godard and Jayez, 1993). In fact, these perspectives are not that distant from one another (cf. Pustejovsky, 1995): in either view, there is an explicit lexical association between syntactic forms that is formally modeled by the grammar.

The second type of polysemy (syntactic alternations) involves verbal forms taking arguments in alternating constructions, the so-called 'verbal alternations' (cf. Levin, 1993). These are true instances of polysemy because there is a logical (typically causal) relation between the two senses of the verb. As a result, the lexicon must either relate the senses through lexical rules (such as in head-driven phrase structure grammar (HPSG) treatments; cf. Pollard and Sag, 1994) or assume that there is one lexical form that has multiple syntactic realizations (cf. Pustejovsky and Busa, 1995).

(8a) The window opened suddenly.
(8b) Mary opened the window suddenly.
(9a) Bill began his lecture on time.
(9b) The lecture began on time.
(10a) The milk spilled onto the table.
(10b) Mary spilled the milk onto the table.

The final form of polysemy reviewed here is encountered mostly in nominals and has been termed 'regular polysemy' (cf. Apresjan, 1973) and 'logical polysemy' (cf. Pustejovsky, 1991) in the literature; it is illustrated in the following sentences:

(11a) Mary carried the book home.
(11b) Mary doesn't agree with the book.
(12a) Mary has her lunch in her backpack.
(12b) Lunch was longer today than it was yesterday.
(13a) The flight lasted 3 hours.
(13b) The flight landed on time in Los Angeles.

Notice that in each of the pairs, the same nominal form is assuming different semantic interpretations relative to its selective context. For example, in (11a) the noun book refers to a physical object, whereas in (11b) it refers to the informational content. In (12a), lunch refers to the physical manifestation of the food, whereas in (12b) it refers to the eating event. Finally, in (13a) flight refers to the flying event, whereas in (13b) it refers to the plane. This phenomenon of polysemy is one of the most challenging in the area and has stimulated much research (Bouillon, 1997; Bouillon and Busa, 2001). In order to understand how each of these cases of polysemy can be handled, we must first familiarize ourselves with the structure of individual lexical entries.
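The difference between listing unrelated senses and treating related facets of one entry can be pictured with simple data structures. A deliberately crude sketch follows (the entry shapes and type names are invented for illustration; generative lexicon entries are considerably richer):

```python
# Sense enumeration: the homonym 'bank' is simply listed twice.
SENSE_ENUMERATIVE = {
    "bank_1": "financial institution",
    "bank_2": "sloping land beside a river",
}

# Dot object: 'book' has a single entry whose type pairs two facets;
# the selecting context picks out the facet it needs.
DOT_OBJECT = {"book": {"type": ("phys_obj", "info")}}

def select_facet(noun, required_type):
    """Return the facet of a dot object demanded by the selecting verb."""
    facets = DOT_OBJECT[noun]["type"]
    return required_type if required_type in facets else None

print(SENSE_ENUMERATIVE["bank_1"], "/", SENSE_ENUMERATIVE["bank_2"])
print(select_facet("book", "phys_obj"))  # 'carry the book', as in (11a)
print(select_facet("book", "info"))      # 'agree with the book', as in (11b)
```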

Lexical Relations

Another important aspect of lexical semantics is the study of how words are semantically related to one another. Four classes of lexical relations, in particular, are important to recognize: synonymy, antonymy, hyponymy, and meronymy. Synonymy is generally taken to be a relation between words rather than concepts. One fairly standard definition states that two expressions are synonymous if substituting one for the other in all contexts does not change the truth value of the sentence where the substitution is made (cf. Cruse, 1986, 2004; Lyons, 1977). A somewhat weaker definition makes reference to the substitution relative to a specific context. For example, in the context of carpentry, plank and board might be considered synonyms, but not necessarily in other domains (cf. Miller et al., 1990). The relation of antonymy is characterized in terms of semantic opposition and, like synonymy, is properly defined over pairs of lexical items rather than concepts. Examples of antonymy are rise/fall, heavy/light, fast/slow, and long/short (cf. Cruse, 1986; Miller, 1991). It is interesting to observe that co-occurrence data illustrate that synonyms do not necessarily share the same antonyms. For example, rise and ascend as well as fall and descend are similar in meaning, yet neither fall/ascend nor rise/descend are antonym pairs. For further details see Miller et al. (1990). The most studied relation in the lexical semantic community is hyponymy, the taxonomic relationship between words, as defined in WordNet (Fellbaum, 1998) and other semantic networks. For example, specifying car as a hyponym of vehicle is equivalent to saying that vehicle is a superconcept of the concept car or that the set car is a subset of those individuals denoted by the set vehicle. One of the most difficult lexical relations to define and treat formally is that of meronymy, the relation of parts to the whole. The relation is familiar from knowledge representation languages with predicates or slot-names such as 'part-of' and 'made-of.' For treatments of this relation in lexical semantics, see Miller et al. (1990) and Cruse (1986).
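All four relations can be queried directly in WordNet. A brief sketch using NLTK's WordNet interface follows (it assumes NLTK and its WordNet data are installed; synset inventories differ across WordNet versions, so exact outputs may vary):

```python
# Querying the four lexical relations in WordNet via NLTK.
# Setup (not shown): pip install nltk; then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

car = wn.synset('car.n.01')

# Synonymy: lemmas grouped into the same synset.
print([lemma.name() for lemma in car.lemmas()])

# Hyponymy: 'car' is a hyponym of 'motor_vehicle'.
print(car.hypernyms())

# Meronymy: parts of a car ('part-of' slots).
print(car.part_meronyms()[:3])

# Antonymy: defined over lemma pairs, not synsets.
heavy = wn.synset('heavy.a.01').lemmas()[0]
print(heavy.antonyms())  # typically [Lemma('light.a.01.light')]
```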

The Semantics of a Lexical Entry

It is generally assumed that there are four components to a lexical item: phonological, orthographic, syntactic, and semantic information. Here, we focus first on syntactic features and then on what semantic information must be encoded in an individual lexical entry. There are two types of syntactic knowledge associated with a lexical item: its category and its subcategory. The former includes traditional classifications of both the major categories, such as noun, verb, adjective, adverb, and preposition, as well as the minor categories, such as adverbs, conjunctions, quantifier elements, and determiners. Knowledge of the subcategory of a lexical item is typically information that differentiates categories into distinct, distributional classes. This sort of information may be usefully separated into two types, contextual features and inherent features. The former are features that may be defined in terms of the contexts in which a given lexical entry may occur. Subcategorization information marks the local syntactic context for a word. It is this information that ensures that the verb devour, for example, is always transitive in English, requiring a direct object; the lexical entry encodes this requirement with a subcategorization feature specifying that a noun phrase (NP) appear to its right. Another type of context encoding is collocational information, where patterns that are not fully productive in the grammar can be tagged. For example, the adjective heavy as applied to drinker and smoker is collocational and not freely productive in the language (Mel'čuk, 1988). 'Inherent features,' on the other hand, are properties of lexical entries that are not easily reduced to a contextual definition but, rather, refer to the ontological typing of an entity. These include such features as count/mass (e.g., pebble vs. water), abstract, animate, human, physical, and so on.

Lexical items can be systematically grouped according to their syntactic and semantic behavior in the language. For this reason, there have been two major traditions of word clustering, corresponding to this distinction. Broadly speaking, for those concerned mainly with grammatical behavior, the most salient aspect of a lexical item is its argument structure; for those focusing on a word's entailment properties, the most important aspect is its semantic class. In this section, these two approaches are examined and it is shown how their concerns can be integrated into a common lexical representation.

Lexical Semantic Classifications

Conventional approaches to lexicon design and lexicography have been relatively informal with regard to forming taxonomic structures for the word senses in the language. For example, the top concepts in WordNet (Miller et al., 1990) illustrate how words are characterized by local clusterings of semantic properties. As with many ontologies, however, it is difficult to discern a coherent global structure for the resulting classification beyond a weak descriptive labeling of words into extensionally defined sets.

Figure 1 Type structures.

One of the most common ways to organize lexical knowledge is by means of type or feature inheritance mechanisms (Carpenter, 1992; Copestake and Briscoe, 1992; Evans and Gazdar, 1990; Pollard and Sag, 1994). Furthermore, Briscoe et al. (1993) described a rich system of types for allowing default mechanisms into lexical type descriptions. Similarly, type structures, such as that shown in Figure 1, can express the inheritance of syntactic and semantic features, as well as the relationship between syntactic classes and alternations (cf. Alsina, 1992; Davis, 1996; Koenig and Davis, 1999; Sanfilippo, 1993) and other relations (cf. Pustejovsky, 2001; Pustejovsky and Boguraev, 1993). In the remainder of this section, we first examine the approach to characterizing the weak constraints imposed on a lexical item associated with its arguments. Then, we examine attempts to model lexical behavior by means of internal constraints imposed on the predicate. Finally, it is shown how, in some respects, these are very similar enterprises and both sets of constraints may be necessary to model lexical behavior.

Argument Structure

Once the base syntactic and semantic typing for a lexical item has been specified, its subcategorization and selectional information must be encoded in some form. There are two major techniques for representing this type of knowledge:

1. Associate 'named roles' with the arguments to the lexical item (Fillmore, 1985; Gruber, 1965; Jackendoff, 1972).
2. Associate a logical decomposition with the lexical item; meanings of arguments are determined by how the structural properties of the representation are interpreted (cf. Hale and Keyser, 1993; Jackendoff, 1983; Levin and Rappaport, 1995).

One influential way of encoding selectional behavior has been the theory of thematic relations (cf. Gruber, 1976; Jackendoff, 1972). Thematic relations are now generally defined as partial semantic functions of the event being denoted by the verb or noun, and they behave according to a predefined calculus of role relations (e.g., Dowty, 1989). For example, semantic roles such as agent, theme, and goal can be used to partially determine the meaning of a predicate when they are associated with the grammatical arguments to a verb:

(14a) put
(14b) borrow

Thematic roles can be ordered relative to each other in terms of an implicational hierarchy. For example, there is considerable use of a universal subject hierarchy such as is shown in the following (cf. Comrie, 1981; Fillmore, 1968):

(15) AGENT > RECIPIENT/BENEFACTIVE > THEME/PATIENT > INSTRUMENT > LOCATION
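One way to see the hierarchy at work is as a linking procedure: the argument bearing the highest-ranked role in (15) is realized as the grammatical subject. The sketch below is a toy illustration, and its verb grids are assumptions made for the example rather than analyses taken from this article:

```python
# Toy linking via the subject hierarchy in (15): the role ranked
# highest in the hierarchy is mapped to subject position.
HIERARCHY = ["AGENT", "RECIPIENT/BENEFACTIVE", "THEME/PATIENT",
             "INSTRUMENT", "LOCATION"]

# Hypothetical role grids, invented for illustration.
VERB_GRIDS = {
    "put":     ["AGENT", "THEME/PATIENT", "LOCATION"],
    "give":    ["AGENT", "RECIPIENT/BENEFACTIVE", "THEME/PATIENT"],
    "receive": ["RECIPIENT/BENEFACTIVE", "THEME/PATIENT"],
}

def subject_role(verb):
    """Return the verb's highest-ranked thematic role."""
    return min(VERB_GRIDS[verb], key=HIERARCHY.index)

print(subject_role("put"))      # AGENT
print(subject_role("receive"))  # RECIPIENT/BENEFACTIVE (no agent present)
```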

Many linguists have questioned the general explanatory coverage of thematic roles, however, and have chosen alternative methods for capturing the generalizations they promised. Dowty (1991) suggested that theta-role generalizations are best captured by entailments associated with the predicate. A theta-role can then be seen as the set of predicate entailments that are properties of a particular argument to the verb. Characteristic entailments might be thought of as prototype roles, or proto-roles; this allows for degrees or shades of meaning associated with the arguments to a predicate. Others have opted for a more semantically neutral set of labels to assign to the parameters of a relation, whether it is realized as a verb, noun, or adjective. For example, the theory of argument structure as developed by Williams (1981), Grimshaw (1990), and others can be seen as a move toward a more minimalist description of semantic differentiation in the verb's list of parameters. The argument structure for a word can be seen as the simplest specification of its semantics, indicating the number and type of parameters associated with the lexical item as a predicate. For example, the verb die can be represented as a predicate taking one argument, kill as taking two arguments, whereas the verb give takes three arguments:

(16a) die (x)
(16b) kill (x,y)
(16c) give (x,y,z)

What originally began as the simple listing of the parameters or arguments associated with a predicate has developed into a sophisticated view of the way arguments are mapped onto syntactic expressions. Williams's (1981) distinction between external (the underlined arguments above) and internal arguments and Grimshaw's proposal for a hierarchically structured representation (cf. Grimshaw, 1990) provide us with the basic syntax for one aspect of a word's meaning. Similar remarks hold for the argument list structure in HPSG (Pollard and Sag, 1994) and Lexical Functional Grammar (Bresnan, 1994). The interaction of a structured argument list and a rich system of types, such as that presented previously, provides a mechanism for semantic selection through inheritance. Consider, for instance, the sentence pairs in (17):

(17a) The man/the rock fell.
(17b) The man/*the rock died.

Now consider how the selectional distinction for a feature such as animacy is modeled so as to explain the selectional constraints of predicates. For the purpose of illustration, the arguments of a verb will be identified as being typed from the system shown previously:

(18a) λx: physical [fall(x)]
(18b) λx: animate [die(x)]

In the sentences in (17), it is clear how rocks cannot die and men can, but it is still not obvious how this judgment is computed, given what we would assume are the types associated with the nouns rock and man, respectively. What accomplishes this computation is a rule of subtyping, Y, that allows the type associated with the noun man (i.e., 'human') to also be accepted as the type 'animate,' which is what the predicate die requires of its argument as stated in (18b) (cf. Carpenter, 1992):

(19) Y [human ⊑ animate]: human → animate

The rule Y applies since the concept 'human' is subtyped under 'animate' in the type hierarchy. Parallel considerations rule out the noun rock as a legitimate argument to die since it is not subtyped under 'animate.' Hence, one of the concerns given previously for how syntactic processes can systematically keep track of which 'selectional features' are entailed and which are not is partially addressed by such lattice traversal rules as the one presented here.
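Such lattice traversal is straightforward to sketch in code. The toy hierarchy below is assumed purely for illustration; the check implements the idea behind rule (19) as a transitive walk up the type lattice:

```python
# Toy type lattice: each type points to its immediate supertype.
PARENT = {
    "human": "animate",
    "animate": "physical",
    "rock": "physical",
    "physical": "entity",
}

def subtype_of(t, required):
    """True if t equals the required type or lies below it."""
    while t is not None:
        if t == required:
            return True
        t = PARENT.get(t)
    return False

NOUN_TYPE = {"man": "human", "rock": "rock"}

def satisfies(noun, restriction):
    """Does the noun's type meet the predicate's restriction?"""
    return subtype_of(NOUN_TYPE[noun], restriction)

print(satisfies("man", "animate"))    # True:  'the man died'
print(satisfies("rock", "animate"))   # False: '*the rock died'
print(satisfies("rock", "physical"))  # True:  'the rock fell'
```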

Event Structure and Lexical Decomposition

The second approach to lexical specification mentioned previously is to define constraints internally to the predicate. Traditionally, this has been known as 'lexical decomposition.' In this section, we review the motivations for decomposition in linguistic theory and the proposals for encoding lexical knowledge as structured objects. We then relate this to the way in which verbs can be decomposed in terms of eventualities (Tenny and Pustejovsky, 2000).


Since the 1960s, lexical semanticists have attempted to formally model the semantic relations between lexical items, such as that between the adjective dead and the verbs die and kill (cf. Lakoff, 1965; McCawley, 1968) in the following sentences:

(20a) John killed Bill.
(20b) Bill died.
(20c) Bill is dead.

Assuming the underlying form for a verb such as kill directly encodes the stative predicate in (20c) and the relation of causation, generative semanticists posited representations such as (21):

(21) (CAUSE (x, (BECOME (NOT (ALIVE y)))))

Here the predicate CAUSE is represented as a relation between an individual causer x and an expression involving a change of state in the argument y. Carter (1976) proposes a representation quite similar, shown here for the causative verb darken:

(22) (x CAUSE ((y BE DARK) CHANGE))

Although there is an intuition that the cause relation involves a causer and an event, neither Lakoff nor Carter make this commitment explicitly. In fact, it has taken several decades for Davidson's (1967) observations regarding the role of events in the determination of verb meaning to find their way convincingly into the major linguistic frameworks. A new synthesis has emerged that attempts to model verb meanings as complex predicative structures with rich event structures (cf. Hale and Keyser, 1993; Parsons, 1990; Pustejovsky, 1991). This research has developed the idea that the meaning of a verb can be analyzed into a structured representation of the event that the verb designates, and it has furthermore contributed to the realization that verbs may have complex, internal event structures. Recent work has converged on the view that complex events are structured into an inner and an outer event, where the outer event is associated with causation and agency, and the inner event is associated with telicity (completion) and change of state (cf. Tenny and Pustejovsky, 2000). Jackendoff (1990) developed an extensive system of what he calls 'conceptual representations,' which parallel the syntactic representations of sentences of natural language. These employ a set of canonical predicates, including CAUSE, GO, TO, and ON, and canonical elements, including Thing, Path, and Event. These approaches represent verb meaning by decomposing the predicate into more basic predicates. This work owes obvious debt to the innovative work within generative semantics, as illustrated by McCawley's (1968) analysis of the verb kill. Recent versions of lexical representations inspired by generative semantics can be seen in the lexical relational structures of Hale and Keyser (1993), where syntactic tree structures are employed to capture the same elements of causation and change of state as in the representations of Carter, Levin and Rapoport, Jackendoff, and Dowty. The work of Levin and Rappaport, building on Jackendoff's lexical conceptual structures, has been influential in further articulating the internal structure of verb meanings (Levin and Rappaport, 1995). Pustejovsky (1991) extended the decompositional approach presented in Dowty (1979) by explicitly reifying the events and subevents in the predicative expressions. Unlike Dowty's treatment of lexical semantics, where the decompositional calculus builds on propositional or predicative units (as discussed previously), a 'syntax of event structure' makes explicit reference to quantified events as part of the word meaning. Pustejovsky further introduced a tree structure to represent the temporal ordering and dominance constraints on an event and its subevents. For example, a predicate such as build is associated with a complex event such as the following (cf. also Moens and Steedman, 1988):

(23) [transition [e1: PROCESS] [e2: STATE]]

The process consists of the building activity itself, whereas the state represents the result of there being the object built. Grimshaw (1990) adopted this theory in her work on argument structure, where complex events such as break are given a similar representation. In such structures, the process consists of what x does to cause the breaking, and the state is the resultant state of the broken item. The process corresponds to the outer causing event as discussed previously, and the state corresponds in part to the inner change of state event. Both Pustejovsky and Grimshaw differ from the previous authors in assuming a specific level of representation for event structure, distinct from the representation of other lexical properties. Furthermore, they follow Higginbotham (1985) in adopting an explicit reference to the event place in the verbal semantics. Rappaport and Levin (2001) adopted a large component of the event structure model for their analysis of the resultative construction in English. Event decomposition has also been employed for properties of adjectival selection, the interpretation of compounds, and stage and individual-level predication.
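A structure like (23) can be encoded directly as a small tree of temporally ordered subevents. The following sketch is illustrative only (the field names are invented), but it shows how the result state of build can be read off the ordering:

```python
# Toy encoding of the complex event in (23): a transition whose
# subevents are temporally ordered (the process precedes the state).
BUILD = {
    "event_type": "transition",
    "subevents": {
        "e1": {"sort": "PROCESS", "gloss": "the building activity"},
        "e2": {"sort": "STATE", "gloss": "the built object exists"},
    },
    "precedes": [("e1", "e2")],  # e1 temporally precedes e2
}

def result_state(event):
    """Return the subevent that follows all others (the result)."""
    earlier = {a for a, _ in event["precedes"]}
    later = {b for _, b in event["precedes"]}
    (final,) = later - earlier
    return event["subevents"][final]

print(result_state(BUILD))  # the STATE subevent e2
```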

Qualia Structure

Thus far, we have focused on the lexical semantics of verb entries. All of the major categories, however, are encoded with syntactic and semantic feature structures that determine their constructional behavior and subsequent meaning at logical form. In generative lexicon theory (Pustejovsky, 1995), it is assumed that word meaning is structured on the basis of four generative factors, or 'qualia roles,' that capture how humans understand objects and relations in the world and provide the minimal explanation for the linguistic behavior of lexical items (these are inspired in large part by Moravcsik's (1975, 1990) interpretation of Aristotelian aitia). These are the formal role, the basic category that distinguishes the object within a larger domain; the constitutive role, the relation between an object and its constituent parts; the telic role, its purpose and function; and the agentive role, factors involved in the object's origin or 'coming into being.'

Qualia structure is at the core of the generative properties of the lexicon since it provides a general strategy for creating new types. For example, consider the properties of nouns such as rock and chair. These nouns can be distinguished on the basis of semantic criteria that classify them in terms of general categories such as natural_kind and artifact_object. Although very useful, this is not sufficient to discriminate semantic types in a way that also accounts for their grammatical behavior. A crucial distinction between rock and chair concerns the properties that differentiate natural kinds from artifacts: functionality plays a crucial role in the process of individuation of artifacts but not of natural kinds. This is reflected in grammatical behavior, whereby 'a good chair' or 'enjoy the chair' are well-formed expressions reflecting the specific purpose for which an artifact is designed, but 'good rock' or 'enjoy a rock' are semantically ill formed since for rock the functionality (i.e., telic) is undefined. Exceptions exist when new concepts are referred to, such as when the object is construed relative to a specific activity, as in 'The climber enjoyed that rock'; rock takes on a new meaning by virtue of having telicity associated with it, and this is accomplished by integration with the semantics of the subject NP. Although chair and rock are both physical_object, they differ in their mode of coming into being (i.e., agentive): artifacts are man-made, rocks develop in nature. Similarly, a concept such as food or cookie has a physical manifestation or denotation, but also a functional grounding, pertaining to the relation of 'eating.' These apparently contradictory aspects of a category are orthogonally represented by the qualia structure for that concept, which provides a coherent structuring for different dimensions of meaning.
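The rock/chair contrast can be pictured schematically as qualia structures in which a natural kind leaves the telic and agentive roles undefined. This is a simplified sketch (the glosses and the licensing test are assumptions for illustration, not the formal notation of generative lexicon theory):

```python
# Toy qualia structures: 'chair' (artifact) carries telic and agentive
# values; 'rock' (natural kind) leaves them undefined, which is why
# 'enjoy the chair' is fine but 'enjoy a rock' needs contextual help.
QUALIA = {
    "chair": {
        "formal": "physical_object",
        "constitutive": "legs, seat, back",
        "telic": "sit_in",     # purpose and function
        "agentive": "make",    # mode of coming into being
    },
    "rock": {
        "formal": "physical_object",
        "constitutive": "mineral matter",
        "telic": None,         # no built-in function
        "agentive": None,      # develops in nature, not made
    },
}

def licenses_enjoy(noun):
    """'enjoy NP' needs an activity; by default it comes from telic."""
    return QUALIA[noun]["telic"] is not None

print(licenses_enjoy("chair"))  # True
print(licenses_enjoy("rock"))   # False, unless context supplies one
```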

See also: Componential Analysis; Compositionality; Frame Semantics; Generative Lexicon; Lexical Fields; Lexical Meaning, Cognitive Dependency of; Lexicon/Dictionary: Computational Approaches; Meronymy; Montague Semantics; Polysemy and Homonymy; Pre-20th Century Theories of Meaning; Prosody; Selectional Restrictions; Semantic Primitives; Speech Act Verbs; Synonymy; Syntax-Semantics Interface; Thesauruses.

Bibliography

Alsina A (1992). 'On the argument structure of causatives.' Linguistic Inquiry 23(4), 517–555.
Apresjan J D (1973). 'Regular polysemy.' Linguistics 142, 5–32.
Bouillon P (1998). Polymorphie et sémantique lexicale: le cas des adjectifs. Lille: Presses du Septentrion.
Bouillon P & Busa F (eds.) (2001). The language of word meaning. Cambridge/New York: Cambridge University Press.
Bréal M (1897). Essai de sémantique (science des significations). Paris: Hachette.
Bresnan J (ed.) (1982). The mental representation of grammatical relations. Cambridge, MA: MIT Press.
Bresnan J (1994). 'Locative inversion and the architecture of universal grammar.' Language 70, 72–131.
Briscoe T, de Paiva V & Copestake A (eds.) (1993). Inheritance, defaults, and the lexicon. Cambridge, UK: Cambridge University Press.
Carpenter R (1992). 'Typed feature structures.' Computational Linguistics 18(2).
Chomsky N (1955/1975). The logical structure of linguistic theory. Chicago: University of Chicago Press.
Chomsky N (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky N (1986). Knowledge of language: its nature, origin, and use. New York: Praeger.
Comrie B (1981). Language universals and linguistic typology. Chicago, IL: University of Chicago Press.
Copestake A & Briscoe T (1992). 'Lexical operations in a unification-based framework.' In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. Berlin: Springer Verlag.
Cruse D A (1986). Lexical semantics. Cambridge, UK: Cambridge University Press.
Cruse D A (2004). Meaning in language: an introduction to semantics and pragmatics (2nd edn.). Oxford: Oxford University Press.
Davidson D (1967). 'The logical form of action sentences.' In Rescher N (ed.) The logic of decision and action. Pittsburgh: Pittsburgh University Press.
Davis A (1996). Lexical semantics and linking in the hierarchical lexicon. Ph.D. diss., Stanford University.
Davis A & Koenig J-P (2000). 'Linking as constraints on word classes in a hierarchical lexicon.' Language 76.
Dowty D R (1979). Word meaning and Montague grammar. Dordrecht, The Netherlands: D. Reidel.
Dowty D R (1989). 'On the semantic content of the notion "thematic role".' In Chierchia G, Partee B & Turner R (eds.) Properties, types, and meaning, vol. 2: Semantic issues. Dordrecht: D. Reidel.
Dowty D (1991). 'Thematic proto-roles and argument selection.' Language 67, 547–619.
Erdmann K (1900). Die Bedeutung des Wortes: Aufsätze aus dem Grenzgebiet der Sprachpsychologie und Logik. Leipzig: Avenarius.

Evans R & Gazdar G (1990). 'The DATR papers: February 1990.' Cognitive Science Research Paper CSRP 139, School of Cognitive and Computing Science, University of Sussex, Brighton, England.
Fellbaum C (ed.) (1998). WordNet: an electronic lexical database. Cambridge, MA: MIT Press.
Fillmore C (1965). Entailment rules in a semantic theory. POLA Report 10. Columbus, OH: Ohio State University.
Fillmore C (1968). 'The case for case.' In Bach E W & Harms R T (eds.) Universals in linguistic theory. New York: Holt, Rinehart and Winston.
Gazdar G, Klein E, Pullum G & Sag I (1985). Generalized phrase structure grammar. Cambridge, MA: Harvard University Press.
Goldberg A (1995). Constructions: a construction grammar approach to argument structure. Chicago: University of Chicago Press.
Grimshaw J (1979). 'Complement selection and the lexicon.' Linguistic Inquiry 10, 279–326.
Grimshaw J (1990). Argument structure. Cambridge, MA: MIT Press.
Gruber J S (1965/1976). Lexical structures in syntax and semantics. Amsterdam: North-Holland.
Hale K & Keyser J (1993). 'On argument structure and the lexical expression of syntactic relations.' In The view from Building 20. Cambridge, MA: MIT Press.
Halle M, Bresnan J & Miller G (eds.) (1978). Linguistic theory and psychological reality. Cambridge, MA: MIT Press.
Higginbotham J (1985). 'On semantics.' Linguistic Inquiry 16, 547–593.
Hjelmslev L (1961). Prolegomena to a theory of language. Whitfield F (trans.). Madison: University of Wisconsin Press (Original work published 1943).
Jackendoff R (1972). Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.
Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R (1990). Semantic structures. Cambridge, MA: MIT Press.
Jackendoff R (1992). 'Babe Ruth homered his way into the hearts of America.' In Stowell T & Wehrli E (eds.) Syntax and the lexicon. San Diego: Academic Press. 155–178.
Jackendoff R (1997). The architecture of the language faculty. Cambridge, MA: MIT Press.
Jackendoff R (2002). Foundations of language: brain, meaning, grammar, evolution. Oxford: Oxford University Press.
Jakobson R (1970). Recent developments in linguistic science. Perennial Press.
Jakobson R (1974). Main trends in the science of language. New York: Harper & Row.
Jakobson R & Halle M (1956). Fundamentals of language. The Hague, The Netherlands: Mouton.
Katz J (1972). Semantic theory. New York: Harper & Row.
Katz J & Fodor J (1963). 'The structure of a semantic theory.' Language 39, 170–210.
Lakoff G (1965/1970). Irregularity in syntax. New York: Holt, Rinehart, and Winston.
Levin B (1993). English verb classes and alternations: a preliminary investigation. Chicago: University of Chicago Press.
Levin B & Rappaport Hovav M (1995). Unaccusativity: at the syntax–semantics interface. Cambridge, MA: MIT Press.
Lyons J (1977). Semantics (2 vols.). Cambridge: Cambridge University Press.

McCawley J (1968). 'Lexical insertion in a transformational grammar without deep structure.' Proceedings of the Chicago Linguistic Society 4.
Mel'čuk I A (1988). Dependency syntax. Albany, NY: SUNY Press.
Miller G (1991). The science of words. New York: Scientific American Library.
Miller G, Beckwith R, Fellbaum C, Gross D & Miller K J (1990). 'Introduction to WordNet: an on-line lexical database.' International Journal of Lexicography 3, 235–244.
Moens M & Steedman M (1988). 'Temporal ontology and temporal reference.' Computational Linguistics 14, 15–28.
Moravcsik J M (1975). 'Aitia as generative factor in Aristotle's philosophy.' Dialogue 14, 622–636.
Moravcsik J M (1990). Thought and language. London: Routledge.
Parsons T (1990). Events in the semantics of English. Cambridge, MA: MIT Press.
Pinker S (1989). Learnability and cognition: the acquisition of argument structure. Cambridge, MA: MIT Press.
Pollard C & Sag I (1994). Head-driven phrase structure grammar. Chicago: University of Chicago Press/Stanford: CSLI.
Pustejovsky J (1991). 'The syntax of event structure.' Cognition 41, 47–81.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Pustejovsky J (2001). 'Type construction and the logic of concepts.' In Bouillon P & Busa F (eds.) The language of word meaning. Cambridge: Cambridge University Press.
Pustejovsky J & Boguraev P (1993). 'Lexical knowledge representation and natural language processing.' Artificial Intelligence 63, 193–223.
Pustejovsky J & Busa F (1995). 'Unaccusativity and event composition.' In Bertinetto P M, Bianchi V, Higginbotham J & Squartini M (eds.) Temporal reference: aspect and actionality. Turin: Rosenberg and Sellier.
Rappaport Hovav M & Levin B (2001). 'An event structure account of English resultatives.' Language 77, 766–797.
Sanfilippo A (1993). 'LKB encoding of lexical knowledge.' In Briscoe T, de Paiva V & Copestake A (eds.) Inheritance, defaults, and the lexicon. Cambridge: Cambridge University Press.
Saussure F de (1983). Course in general linguistics. Harris R (trans.). London: Duckworth (Original work published 1916).
Stern G (1968). Meaning and change of meaning, with special reference to the English language. Bloomington: Indiana University Press (Original work published 1931).
Tenny C & Pustejovsky J (eds.) (2000). Events as grammatical objects. Chicago: University of Chicago Press.
Trier J (1931). Der deutsche Wortschatz im Sinnbezirk des Verstandes: Die Geschichte eines sprachlichen Feldes, Band I. Heidelberg: Carl Winter.
Weinreich U (1972). Explorations in semantic theory. The Hague, The Netherlands: Mouton.
Williams E (1981). 'Argument structure and morphology.' Linguistic Review 1, 81–114.


Lexicology

A Cowie, University of Leeds, Leeds, UK

© 2006 Elsevier Ltd. All rights reserved.

Introduction: The Scope of Lexicology

Lexicology is 'the study of the lexicon or lexis (specified as the vocabulary or total stock of words of a language)' (Lipka, 1992: 1). For English-speaking linguists familiar with teaching and research in the subject in France and Germany, there is a 'classical' as well as an 'evolved' view of what is meant by the term. The classical view is brought home by examining the contents of standard textbooks written in Germany for university students of English. The title of one of the best known is Englische Lexikologie, with the subtitle Einführung in Wortbildung und lexikalische Semantik ('Introduction to word formation and lexical semantics'), which highlights its central elements (Hansen et al., 1992). And the handbook does indeed deal in successive major sections with word formation (broken down into 'compounding,' 'derivation' [including 'zero-derivation'], 'reduplication,' and 'blends') and lexical semantics, in which one major subsection deals with 'relations within words' (including 'homonymy' and 'polysemy,' and 'metonymy' and 'metaphor') and another with 'paradigmatic semantic relations between words' (including 'antonymy' and 'hyponymy'), which are in Britain often referred to as sense relations (Lyons, 1977). Matching for the most part this view of what a lexicology textbook should contain is Tournier's broad, masterly work (Tournier, 1985), with major chapters devoted to derivation, compounding, and conversion (and thus to word formation), and also to métasémie (or meaning change), which features several of the themes that, in the German work, are subsumed under lexical semantics. There are of course differences of detail between the two works.

Interestingly, there are no parallels in Britain for textbooks dealing with the lexicon in quite the way I have just described. Further, the term 'lexicology' has not until comparatively recently been used in linguistics textbooks published in Britain or defined in many 'desk-sized' dictionaries (not, for example, in the Concise Oxford dictionary until the eighth edition of 1990). Moreover, in British practice the two main topics that are given prominence by both Tournier and Hansen et al. are hived off into separate volumes: word formation to Bauer (1983), lexical semantics to Cruse (1986). However, in many areas where British lexicologists work in close collaboration with colleagues from various European countries, the term of choice is almost universally 'lexicology' (rather than 'lexical semantics'). In the proceedings of recent congresses of the European Association for Lexicography, for instance, we find section headings such as 'Computational Lexicography and Lexicology' and 'Reports on Lexicographical and Lexicological Projects' where the coupling of related areas of professional concern is underpinned by the formal relatedness of the terms (Braasch and Povlsen, 2002a). There are of course other linkages of form and meaning that help to explain a growing preference. In addition to the connection to lexicography, lexicology is related in form to a number of terms (phonology, morphology, phraseology) denoting other levels of linguistic analysis. Then again, the term has been broader in its application than lexical semantics, having, with suitable modification, referred to diachronic lexical studies (hence 'historical lexicology'; cf. Ullmann, 1957: 39) and contrastive lexical studies (hence 'contrastive lexicology'; cf. Van Roey, 1990).

A number of major topics from outside the traditional core are currently subsumed under the heading 'lexicology.' This brings me to my reference earlier to an 'evolved' view. One cannot avoid noticing a greater breadth of coverage in the latest edition of what is probably the best-known German textbook (Lipka, 2002). Not only do we find the topics treated that we have come to take for granted in such a text – homonymy vs. polysemy, lexical fields and hierarchies, and lexical rules and semantic processes – but there are other themes that fall outside its traditional limits, including corpus linguistics and cognitive linguistics – both to be considered below – while there is extensive discussion in Lipka's Introduction of 'the expanding field.' This article attempts to reflect this greater diversity of themes. Polysemy and homonymy in the first section belong to the traditional core of the subject (Cowie, 1994). The second section is devoted to cognitive approaches to the analysis of metaphor based on the work of Lakoff and to a project in the 'syntagmatics of metaphor' that draws extensively on very large text corpora. The final section is taken up with an approach to lexical description – 'frame semantics' – that brings together syntactic and semantic levels of description in a rigorous and systematic way.

Polysemy and Homonymy

Though polysemy and homonymy are long-established concepts in lexicology, they continue to excite discussion among linguists (e.g., Cowie, 2001; Ravin and Leacock, 2000). The concepts themselves and the relationship between them are of crucial importance to lexicographers, since they require sound criteria to determine whether different occurrences of a given word represent different senses and, if different, how different. As we shall see, approaches to meeting this need which lay emphasis on the contrastive lexical environments of such occurrences (their differing collocability) have much to offer, especially as evidence is now available from immensely large text corpora (Hanks, 2004a).

Polysemy and homonymy can be defined in general terms as follows. When a given word form (i.e., in the written language, any sequence of letters bounded on either side by a space) realizes two or more related though separate meanings of the same lexical item, we have polysemy. (Compare: thick_a 'having density'; thick_b 'with a large number of units close together'.) When, however, the word form functions as the realization in speech or writing of more than one lexical item, we have homonymy. (Compare: meal_1 'flour'; meal_2 'repast'.) Though the phenomena are often linked, polysemy is more widespread than homonymy, and of much greater significance. Homonymy may come about through the chance convergence of two distinct forms (meal_1 coming from Old English melo and meal_2 from Old English mæl). Polysemy, however, is typically the result of lexical creativity and is crucial for the functioning of a language as an efficient semiotic system (Lyons, 1977: 567; Lipka, 1992: 136). It is also true that the distinction between homonymy and polysemy is not the simple dichotomy that is sometimes believed, but is in the nature of a scale or cline. This becomes unarguable once it is accepted that every case of polysemy involves relatedness as well as difference of meaning (Cowie, 2001). As recent studies have shown, however, there is still a tendency to emphasize the latter at the expense of the former (Ravin and Leacock, 2000b).

Three major issues need to be addressed in treating this opposition. First, we need to take account of the phenomenon of 'modulation,' or 'contextual variation,' of a single sense (Cruse, 1986). It is sometimes claimed, rather loosely, that the sense of a (polysemous) item is determined by its context. Such claims, however, fail to distinguish between the unrestricted, and often ephemeral, 'shaping' of a word by its context and the more familiar variation attributable to context, which is the activation by different contexts of existing and possibly well-established senses of polysemous words. An example of a meaning difference attributable to modulation – but where there is only one sense – appears below (cf. Moon, 1987):

He took the cigarette out of his mouth [the opening].
She had a wide and smiling mouth [the lips].

The second issue that needs to be considered is whether, when attempting to identify polysemy or homonymy, we need to take account of more than one criterion (for checklists, see Lipka, 1986; Cowie, 2001). A further, related, issue is whether we should give prominence to semantic rather than formal criteria. To begin with the question of multiple criteria, it seems vital, given the nature of polysemy as a scalar phenomenon, to refer in each case to two or more criteria. If one takes two or more occurrences of the same word form, and finds that they have similar distributions at a number of descriptive levels – syntactic, morphological, collocational – then the assumption is that they will be close in meaning and at the most remote point from homonymy. Consider, with this in mind, the following senses of the noun tour, noting especially the possible conversion of the noun – in one or more senses – to a verb, and any other characteristic derivatives (Cowie, 2001: 46):

a. tour (holidays): tour (v), tourist, tourism
b. tour (visit, inspection): tour (v)
c. tour (military): tour (v)
d. tour (artistic): tour (v)
e. tour (sporting): tour (v), tourist

We can see that in terms of conversion (‘zero derivation’) the various senses are alike, while as regards derivation proper, only the first stands out from the others (note also the compounds package tour and tour operator). By contrast, near-homonymy will be indicated when, for a given set of criteria, one finds wide distributional differences. Such is the case with three senses of the adjective sheer, where the synonyms are sharply distinct and the pattern with sheerness is only fully acceptable in the first sense:

sheer-a (‘very steep’) = perpendicular (cf. the sheerness of the rock face)
sheer-b (‘absolute’) = unmitigated (cf. *the sheerness of his folly)
sheer-c (‘so fine as to be transparent’) = diaphanous (cf. ?the sheerness of her tights)

As for giving priority to semantic criteria, we should take account of the view of Cruse (1986, 2000) on how polysemy should be defined, while noting its limitations. Cruse favors semantic (or ‘direct’) tests, which he describes as more successful and reliable for diagnostic purposes. One such test depends on the fact that separate senses of an item cannot be brought into play in the same sentence without oddness. Consider for example this sentence, where activating two senses of expired at the same time gives rise to zeugma: ?John and his driving license expired last Tuesday. Nevertheless, the meanings of different lexical items differ quite widely as to the degree of oddness that is revealed when they are brought into play. In this sentence, for instance, where senses of tour are (again) involved, no oddness arises: The England cricketers and the Royal Ballet were both touring South Africa. Yet this is only to be expected if the senses of a polysemous word are – in differing degrees – related as well as separate. More positively, one could conclude that, while no better in this respect than grammatical tests, semantic tests do provide evidence of the scalar nature of polysemy.

Metaphor and the Differentiation of Meanings

As many entries in ordinary dictionaries confirm, metaphor is one of the commonest means by which new meanings develop from existing senses. The prevalence of metaphor more generally, as an integral part of the structure of the lexicon, has been highlighted in the field of cognitive semantics and in particular in the work of Lakoff and Johnson (1980) and Lakoff (1987). Lakoff and Johnson argued that we can point to a number of very general concepts whose metaphorical structuring is reflected in the phrasal lexicon of the language (1980: 52), and they gave weight to their hypothesis by showing that particular metaphors, such as ‘ideas are cutting instruments,’ are manifested by a cluster of specific phrases: That’s an incisive idea. That cuts right to the heart of the matter . . . He has a razor wit. He had a keen mind. (Lakoff and Johnson, 1980: 48).

It is worth noting that elsewhere the authors pointed out that a particular set of phrases ‘structured by a single metaphorical concept’ (‘life is a gambling game’) are ‘phrasal lexical items’ (or, to use a common equivalent, multiword units). Such examples reinforce the point made earlier, that metaphorical concepts are characteristically realized by expressions from the phrasal lexicon (Lakoff and Johnson, 1980: 51). The link between Lakoff and Johnson’s hypothesis and phraseology has been seized on by a number of linguists. Referring to a collocational database enriched by having had ‘lexical functions’ assigned to each of the 70 000 pairs of collocations it contains, Fontenelle noticed that other metaphors are commonly used in the language (Fontenelle, 1994: 275; and for lexical functions cf. Mel’chuk and Zholkovsky, 1984; Fontenelle, 1997). He considered the following examples from the database (introduced by the function ‘Mult,’ expressing ‘a group or set of something’):

Mult (arrow) = cloud, rain, shower, storm
Mult (bullet) = rain
Mult (missile) = storm
Mult (stone) = shower

Arrow, bullet and so on are projectiles aimed at a target. But as Fontenelle explained, these terms are used in collocation with words pertaining to meteorological phenomena, enabling us to posit the existence of the metaphor ‘a projectile is a meteorological phenomenon.’ The approach to the definition of metaphor followed by Patrick Hanks must be viewed as part of his much broader theory of norms and exploitations, developed as a response to the pervasiveness in language use of what he has called “stereotypical syntagmatic patterns” (Hanks, 2004a). What the term points to is repetitive use (abundantly illustrated by Hanks), relative fixity, and preference for the form in question over plausible alternatives (well illustrated by native speakers preferring a storm of protest to a torrent of protest). Such examples as a storm of protest and a torrent of abuse represent the ‘norm.’ As Hanks explains, ‘some uses are stereotypical; others exploit stereotypes, typically for rhetorical effect’ (2004b: 246), and he goes on to develop the point by suggesting that whereas stereotyped phrases and idioms are ‘norms,’ innovative adaptations – often based on existing phrases, as in the vultures are circling vs. the lawyers are circling – are ‘exploitations.’ Clearly, dead metaphors will figure in the first category and freshly minted metaphors in the second. The treatment of metaphor provided by Hanks is detailed and often illuminating. It is clear, first, that we must recognize ‘degrees’ of metaphoricity, in the sense that neither, one, or both of the constituents of a phrase may be metaphorical:

a. The lowest point on the scale is represented by storms blow, storms abate, where there are arguably two literal elements and ‘a reductionist interpretation is appropriate’ (Hanks, 2004b: 256).
b. We come next to cases where storm is literal but the associated verb is figurative: storms brew and rage; storms batter, lash, and ravage. Such combinations are so common that ‘it is easy to overlook the metaphorical status of the verb’ (Hanks, 2004b: 256). It is worth noting, too, that in treatments influenced by ‘classical’ Russian theory, storm would be said to ‘shape’ the figurative senses of batter, lash, etc. (cf. Cowie, 1999).
c. In the third type, storm is metaphorical, as in a political storm, a storm of protest.
d. At the final stage we find idioms, such as a storm in a teacup.


There is a particularly interesting comment on storm (i.e., as a conventional metaphor) as the direct object of a causative verb. The causative may be literal, as witness cause, create and possibly raise. But ‘storm in this sense is found as the direct object of both literal and metaphorical causative verbs’ (Hanks, 2004b: 259). In the latter case, we have mixed metaphors: spark a storm (of protest, etc.), whip up a storm. The mixed metaphor effect is perhaps caused by spark (say), which acts originally as a collocate of explosion, later carrying over the appropriate features to storm (for ‘oblique’ metaphors, see Cowie, 2004). Finally, and still on the subject of conventional metaphors, Hanks made two observations that only access to extensive data could make possible. In causative brew up a storm, whip up a storm, etc., the noun is almost certainly metaphorical. In inchoative a storm is brewing up, by contrast, the noun could be either. Other contrastive comments are prompted by a storm of something, where the final noun is almost always a conventional metaphor and negative: a storm of protest, a storm of criticism, etc. Less common are positive reactions, such as a storm of applause.

Frame Semantics

Considerable interest is currently being shown by lexicologists and specialists in lexicography and natural language processing in frame semantics, an approach to the study of lexical meaning based on work by Charles Fillmore and his associates. A particular strength of the theory is that it brings together syntactic and semantic levels of analysis in a rigorous and systematic way. But of special significance is the contribution that the theory makes to the analysis of words, and thus to lexicography. As Atkins, Fillmore and Johnson pointed out, experienced lexicographers will use various clues to identify differences in the meaning of a word when examining citations in which it appears. In the case of the verb argue, for example, the indicators of difference will include different synonyms (quarrel in one case, reason in another) and contrastive sets of prepositions: ‘in the quarrelling sense, you argue with someone, about something, while in the reasoning sense you argue for or against something, or that something should be done’ (Atkins et al., 2003: 253). By making implicit use, as here, of various kinds of linguistic information, the practical lexicographer can make considerable progress towards identifying or separating senses. However, if recognized explicitly, the information can lead to deeper understanding.

One kind of information has to do with the syntactic contexts in which argue occurs; the other concerns the meanings of the prepositions (with, for, and so on) which function as its complements. Beyond that awareness is the recognition that we need a theory that links the meanings of words very explicitly to the syntactic contexts in which those words are used, and to the semantic properties of those contexts. In frame semantics, this requires a further step, which is to recognize that ‘word meanings must be described in relation to semantic frames – schematic representations of the conceptual structures and patterns of beliefs, practices, institutions, images, etc. that provide a foundation for meaningful interaction in a speech community’ (Fillmore et al., 2003: 235). Those working on semantic frames have an associated computational lexicography project, FrameNet, which extracts information about the related semantic and syntactic properties of English words from large text corpora, and analyzes the meanings of words “by directly appealing to the frames that underlie their meanings and studying the syntactic properties of words by asking how their semantic properties are given syntactic form” (Fillmore et al., 2003: 235). In this way, the grammatical and semantic features of argue, presented informally earlier on, are given more rigorous and systematic shape.

The notion of valence – at the grammatical level, the requirement that a word combines with particular phrases in a sentence – plays an important part in the theory. However, in FrameNet, information about valence must be stated in semantic as well as syntactic terms. That is, the semantic roles that syntactic elements such as subject or object play with reference to the meaning of the word must be fully accounted for. In FrameNet, ‘the semantic valence properties of a word are expressed in terms of the kinds of entities that can participate in frames of the kind evoked by the word’ (Fillmore et al., 2003: 237). These roles are called frame elements (FEs). FEs must be taken to include not only the human participants – aggressors, victims, etc. – but also conditions and objects relevant to the central concepts of the frame. In the semantics of the key term risk – the subject of a detailed study by Fillmore and Atkins (1992) – such elements include uncertainty about the future (the frame element ‘chance’) and a potential unwelcome development (the frame element ‘harm’). According to the study, these two FEs alone make up the core of our understanding of several other items within the field, including peril, danger, venture, and hazard.

Semantic frame theory offers a number of opportunities to lexicologists and lexicographers in treating the vocabulary of risk and other semantic domains (Cowie, 2002). First is the fact that the model brings together frame elements and syntactic functions, so that the one-to-two (or one-to-many) relations that often hold between FEs and lexicosyntactic structures are clearly demonstrated. To illustrate this point, we can see, just below, that the frame element ‘aggressor’ is common to both examples. This perception leads us to bring together – as semantically related – two contrasting structures incorporating the noun threat, in which ‘aggressor’ is realized first as a grammatical subject and second as a prepositional phrase introduced by from (Cowie, 2002: 327):

1. [AGGRESSOR {the dolphins}] were a threat [VALUED OBJECT {to the local fishing industry}]
2. an imagined threat [AGGRESSOR {from the few remaining ex-revolutionaries}]
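Tabulating such annotated examples makes the FE-to-syntax mapping easy to inspect. The short Python sketch below stores the two threat examples as plain records; the record layout is an invented illustration, not FrameNet's actual data format.

# Hypothetical storage of FE-annotated examples; the field names are
# invented for illustration, not FrameNet's actual schema.
threat_examples = [
    {"text": "the dolphins were a threat to the local fishing industry",
     "frame_elements": {"AGGRESSOR": "the dolphins",
                        "VALUED OBJECT": "to the local fishing industry"},
     "aggressor_realization": "grammatical subject"},
    {"text": "an imagined threat from the few remaining ex-revolutionaries",
     "frame_elements": {"AGGRESSOR": "from the few remaining ex-revolutionaries"},
     "aggressor_realization": "prepositional phrase with 'from'"},
]

# Listing realizations per FE shows the one-to-many relation between
# frame elements and lexicosyntactic structures described above.
for ex in threat_examples:
    print(ex["frame_elements"]["AGGRESSOR"], "->", ex["aggressor_realization"])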

A further way in which frame semantics can benefit lexical description is that it enables us to take a set of supposed synonyms and to use the theory to elucidate crucial differences of meaning and distribution – and incidentally show up the inadequacies of definitions in several current dictionaries. This has been demonstrated in Atkins’s analysis of the verbs of seeing (1994). Comparing approaches to defining the verbs behold, descry, notice, spot, and spy in two mother tongue dictionaries, she notes how the verbs are defined in terms of each other. She is also aware that recourse to corpus data alone, rich though it may be, does not enable one to identify the tiny shifts of sense which divide one verb from another (Cowie, 1999). Atkins’s approach, in the case of the verbs see, behold, catch sight of, spot, spy is to recognize a ‘perception frame,’ whose chief elements include a ‘passive’ rather than an ‘active experiencer’ (since seeing, unlike looking, does not include an element of volition), a ‘percept’ (or object perceived) and a ‘judgment,’ which refers to the opinion which the experiencer forms of the percept as a result of seeing, etc. In the first example below, the judgment being made is comparative, while in the second example it is in the nature of an inference:

He looked to me like a yellow budgerigar. (judgment–simile)
Peter looks relaxed. (judgment–inference)

As we can see from these examples, the relationship between semantic frame elements and grammatical functions is often far from straightforward. In the first example above, the experiencer is realized as the prepositional phrase to me; in a sentence such as Mary saw the duck, the experiencer is Mary, the subject of the perception verb see (Atkins, 1994: 5).

The work described briefly here forms part of a continuing descriptive project, of considerable scope and ambition, whose results are already proving invaluable to lexicographers and computational linguists.

See also: Collocations; Componential Analysis; Definition in Lexicology; Dictionaries; Disambiguation; Frame Semantics; Idioms; Lexical Semantics; Lexicon/Dictionary: Computational Approaches; Meronymy; Montague Semantics; Polysemy and Homonymy; Pre-20th Century Theories of Meaning; Selectional Restrictions; Semantic Primitives; Speech Act Verbs; Synonymy; Thesauruses; WordNet(s).

Bibliography

Allen R E (ed.) (1990). The concise Oxford dictionary of current English (8th edn.). Oxford: Clarendon Press.
Atkins B T S (1994). ‘Analyzing the verbs of seeing: a frame semantics approach to corpus lexicography.’ In Gahl S, Johnson C & Dolbey A (eds.) Proceedings of the twentieth annual meeting of the Berkeley Linguistics Society. Berkeley: University of California at Berkeley. 1–17.
Atkins B T S, Fillmore C J & Johnson C R (2003). ‘Lexicographic relevance: selecting information from corpus evidence.’ International Journal of Lexicography 16(3), 251–280.
Bauer L (1983). English word-formation. Cambridge: Cambridge University Press.
Braasch A & Povlsen C (2002a). ‘Introduction.’ In Braasch & Povlsen (eds.). 1.
Braasch A & Povlsen C (eds.) (2002b). Proceedings of the Tenth Euralex International Conference, Euralex. Copenhagen: Center for Sprogteknologi.
Cowie A P (1994). ‘Applied linguistics: lexicology.’ In Asher R E (ed.) The encyclopedia of language and linguistics 1. Oxford and New York: Pergamon Press. 177–180.
Cowie A P (1999). English dictionaries for foreign learners: a history. Oxford: Clarendon Press.
Cowie A P (2001). ‘Homonymy, polysemy and the monolingual English dictionary.’ Lexicographica 17, 40–60.
Cowie A P (2002). ‘Harmonising the vocabulary of risk.’ In Braasch & Povlsen (eds.). 325–330.
Cowie A P (2004). ‘Oblique metaphors and restricted collocations.’ In Palm-Meister C (ed.) Europhras 2000: internationale Tagung zur Phraseologie vom 15.–18. Juni 2000 in Aske/Schweden. Berlin: Stauffenburg. 45–50.
Cruse D A (1986). Lexical semantics. Cambridge: Cambridge University Press.
Cruse D A (2000). ‘Aspects of the micro-structure of word meanings.’ In Ravin & Leacock (eds.). 30–51.
Fillmore C J & Atkins B T S (1992). ‘Towards a frame-based lexicon: the semantics of RISK and its neighbors.’ In Lehrer A & Kittay E (eds.) Frames, fields, and contrasts. Hillsdale, NJ: Lawrence Erlbaum Associates. 75–102.
Fillmore C J, Johnson C R & Petruck M R L (2003). ‘Background to FrameNet.’ International Journal of Lexicography 16(3), 235–250.
Fontenelle T (1994). ‘Using lexical functions to discover metaphors.’ In Martin W, Meijs W, Moerland M, ten Pas E, van Sterkenburg P & Vossen P (eds.) Euralex 1994 proceedings. Amsterdam: Vrije Universiteit Amsterdam. 271–278.
Fontenelle T (1997). Turning a bilingual dictionary into a lexical-semantic database. Tübingen: Max Niemeyer.
Hanks P (2004a). ‘Corpus pattern analysis.’ In Williams G & Vessier S (eds.) Euralex International Congress: proceedings (3 vols). Lorient, France: Université de Bretagne-Sud. 87–97.
Hanks P (2004b). ‘The syntagmatics of metaphor and idiom.’ International Journal of Lexicography 17(3), 245–274.
Hansen B, Hansen K & Neubert A (1992). Englische Lexikologie (2nd edn.). Berlin: Langenscheidt.
Lakoff G (1987). Women, fire, and dangerous things: what categories reveal about the mind. Chicago/London: University of Chicago Press.
Lakoff G & Johnson M (1980). Metaphors we live by. Chicago/London: University of Chicago Press.
Lipka L (1986). ‘Homonymie, Polysemie oder Ableitung im heutigen Englisch.’ Zeitschrift für Anglistik und Amerikanistik 34, 128–138.
Lipka L (1992). An outline of English lexicology (2nd edn.). Tübingen: Max Niemeyer.
Lipka L (2002). English lexicology: lexical structure, word semantics and word-formation (3rd edn.). Tübingen: Gunter Narr.
Lyons J (1977). Semantics (2 vols). Cambridge: Cambridge University Press.
Mel’chuk I & Zholkovsky A (1984). Tolkovo-kombinatornyi slovar’ russkogo iazyka / Explanatory and combinatorial dictionary of modern Russian. Vienna: Wiener Slawistischer Almanach.
Moon R (1987). ‘Monosemous words and the dictionary.’ In Cowie A P (ed.) The dictionary and the language learner. Tübingen: Max Niemeyer. 173–182.
Ravin Y & Leacock C (2000a). ‘Polysemy: an overview.’ In Ravin & Leacock (eds.). 1–29.
Ravin Y & Leacock C (eds.) (2000b). Polysemy: theoretical and computational approaches. Oxford: Oxford University Press.
Tournier J (1985). Introduction descriptive à la lexicogénétique de l’anglais contemporain. Paris: Champion / Geneva: Slatkine.
Ullmann S (1957). Principles of semantics (2nd edn.). Oxford: Blackwell.
Van Roey J (1990). French-English lexicology: an introduction. Louvain-la-Neuve, Belgium: Peeters.

Lexicon/Dictionary: Computational Approaches
K C Litkowski, CL Research, Damascus, MD, USA
© 2006 Elsevier Ltd. All rights reserved.

What Are Computational Lexicons and Dictionaries?

Computational lexicons and dictionaries (henceforth lexicons) include manipulable computerized versions of ordinary dictionaries and thesauruses. Computerized versions designed for simple lookup by an end user are not included, since they cannot be used for computational purposes. Lexicons also include any electronic compilations of words, phrases, and concepts, such as word lists, glossaries, taxonomies, terminology databases, wordnets (see WordNet(s)), and ontologies. While simple lists may be included, a key component of computational lexicons is that they contain at least some additional information associated with the words, phrases, or concepts. One small list frequently used in the computational community is a list of about 100 most frequent words (such as a, an, the, of, and to), called a stoplist, because some applications ignore these words in processing text.
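A stoplist is trivial to apply in code. The following Python fragment is a minimal sketch; the stoplist here is a small invented sample rather than any standard 100-word list.

# Minimal stoplist filtering; the stoplist is an invented sample,
# not a standard list.
STOPLIST = {"a", "an", "the", "of", "and", "to", "in", "is", "that"}

def content_words(text: str) -> list[str]:
    """Return the tokens of text that are not on the stoplist."""
    return [tok for tok in text.lower().split() if tok not in STOPLIST]

print(content_words("The structure of the lexicon is complex"))
# -> ['structure', 'lexicon', 'complex']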

In general, a lexicon includes a wide array of information associated with entries. An entry in a lexicon is usually the base form of a word, the singular for a noun and the present tense for a verb. Using an ordinary dictionary as a reference point, an entry in a computational lexicon contains all the information found in the dictionary: inflectional and variant forms, pronunciation, parts of speech, definitions, grammatical properties, subject labels, usage examples, and etymology. More specialized lexicons contain additional types of information. A thesaurus or wordnet contains synonyms, antonyms, or words bearing some other relationship to the entry. A bilingual dictionary contains translations for an entry into another language. An ontology (loosely including thesauruses or wordnets) arranges concepts in a hierarchy (e.g., a horse is an animal), frequently including other kinds of relationships as well (e.g., a leg is part of a horse). The term ‘computational’ applies in several senses for computational lexicons. Essentially, the lexicon is in an electronic form. Firstly, the lexicon and its associated information may be studied to discover patterns, usually for enriching entries. Secondly, the lexicon can be used computationally in a wide variety of applications; frequently, a lexicon may be constructed to support a specialized computational linguistic theory or grammar. Thirdly, written or spoken text may be studied to create or enhance entries in the lexicon. Broadly, these activities comprise the field known as computational lexicology, the computational study of the form, meaning, and use of words (see also Lexicology).
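As a rough illustration of the kind of record such an entry amounts to, the sketch below defines a Python data structure with the fields just listed; the field names and sample values are assumptions made for illustration, not any published lexicon schema.

# A hypothetical entry structure; field names and sample values are
# illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    headword: str                      # base form: singular noun, present-tense verb
    pos: str                           # part of speech
    inflections: list[str] = field(default_factory=list)
    pronunciations: list[str] = field(default_factory=list)
    definitions: list[str] = field(default_factory=list)
    subject_labels: list[str] = field(default_factory=list)
    examples: list[str] = field(default_factory=list)

horse = LexicalEntry(
    headword="horse",
    pos="noun",
    inflections=["horses"],
    definitions=["a large four-legged animal used for riding and draft"],
)
print(horse.headword, horse.inflections)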

History of Computational Lexicology

Computational lexicology as the study of machine-readable dictionaries (MRDs) (Amsler, 1982) emerged in the mid-1960s and received considerable attention until the early 1990s. ‘Machine-readable’ does not mean that the computer reads the dictionary, but only that it is in electronic form and can be processed and manipulated computationally. Computational lexicology had gone into decline as researchers concluded that MRDs had been fully exploited and that they could not be usefully exploited for NLP applications (Ide and Veronis, 1993). However, since that time, many dictionary publishers have taken the early research into account to include more information that might be useful. Thus, practitioners of computational lexicology can expect to contribute to the further expansion of lexical information. To provide the basis for this contribution, the results of the early history need to be kept in mind. MRDs evolved from keyboarding a dictionary onto punchcards, largely through the efforts of Olney (1968), who was instrumental in getting G. & C. Merriam Co. to permit computer tapes to be distributed to the computational linguistics research community. The ground-breaking work of Evens (Evens and Smith, 1978) and Amsler (1980) provided the impetus for a considerable expansion of research on MRDs, particularly using Webster’s seventh new collegiate dictionary (W7; Gove, 1969).

Figure 1 Sample entry for the word double using XML.

These efforts stimulated the widespread use of the Longman dictionary of contemporary English (LDOCE; Proctor, 1978) during the 1980s; this dictionary is still the primary MRD today. Initially, MRDs were faithful transcriptions of ordinary dictionaries, and researchers were required to spend considerable time interpreting typesetting codes (e.g., to determine how a word’s part of speech was identified). With advances in technology, publishers eventually came to separate the printing and the database components of MRDs. Today, the various fields of an entry are specifically identified and labeled, increasingly using eXtensible Markup Language (XML), such as shown in Figure 1. As a result, researchers can expect that MRDs will be in a form that is much easier to understand, access, and manipulate, particularly using XML-related technologies developed in computer science.
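Since Figure 1 itself is not reproduced here, the Python fragment below only gestures at the general shape of such an XML-tagged entry for double; the element names are invented, and the snippet simply shows how explicitly labeled fields become programmatically accessible.

# A guessed approximation of an XML-tagged entry (element names are
# invented; Figure 1 is not reproduced), parsed with the standard library.
import xml.etree.ElementTree as ET

ENTRY_XML = """
<entry headword="double">
  <sense pos="verb">
    <inflection>doubles</inflection>
    <inflection>doubled</inflection>
    <inflection>doubling</inflection>
    <definition>to make twice as great in number or amount</definition>
    <example>The firm doubled its profits.</example>
  </sense>
</entry>
"""

entry = ET.fromstring(ENTRY_XML)
for sense in entry.findall("sense"):
    print(entry.get("headword"), sense.get("pos"), sense.findtext("definition"))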

The Study of Computational Lexicons

Making Lexicons Tractable

An electronic lexicon provides the resource for examination and use, but requires considerable initial work on the part of the investigator, specifically to make the contents tractable. The investigator needs (1) to understand the form, structure, and content of the lexicon, and (2) to ascertain how the contents will be studied or used. Understanding involves a theoretical appreciation of the particular type of lexicon. While dictionaries and thesauruses are widely used, their content is the result of considerable lexicographic practice; an awareness of lexicographic methods is extremely valuable in studying or using these resources. Wordnets require an understanding of how words may be related to one another. Ontologies require an understanding of conceptual relations, along with a formalism for capturing properties in slots and their fillers. A full ontology may also involve various principles for ‘reasoning’ with objects in a knowledge base. Lexicons that are closely tied to linguistic theories and grammars require an understanding of the underlying theory or grammar. The actual study or use of the lexicons is essentially the development of procedures for manipulating the content, i.e., making the contents tractable. A common objective is to transform or extract some part of the content into a form that will meet the user’s needs. This can usually be accomplished by recognizing patterns in the content; a considerable amount of lexical semantics research falls into this category. Another common objective is to map some or all of the content in one format or formalism into another. The general idea of these mappings is to take advantage of content developed under one formalism and to use it in another. The remainder of this section focuses on defining patterns that have been observed in MRDs.

What Can Be Extracted From Machine-Readable Dictionaries?

Lexical Semantics

Olney (1968), in his groundbreaking work on MRDs, laid out a series of computational aids for studying affixes, obtaining lists of semantic classifiers and components, identifying semantic primitives, and identifying semantic fields. He also examined defining patterns (including their syntactic and semantic characteristics) to identify productive lexical processes (such as the addition of -ly to adjectives to form adverbs). Defining patterns are essentially regular expressions that specify string, syntactic, and semantic elements that occur frequently within definitions. For example, the pattern in (a|an) [adj] manner, applied to adverb definitions, can be used to characterize the adverb as a manner adverb, to establish a derived-from [adj] relation, and to characterize a productive lexical process. The program Olney initiated in studying these patterns is still incomplete. There is no systematic compilation that details the results of the research in this area. Moreover, in working with the dictionary publishers, he was provided with a detailed list of defining instructions used by lexicographers. Defining instructions, usually hundreds of pages, guide the lexicographer in deciding what constitutes an entry, what information the entry should contain, and frequently provide formulaic details on how to define classes of words. Each publisher develops its own idiosyncratic set of guidelines, again underscoring the point that a close working relationship with the publishers can provide a jump-start to the study of patterns.

Amsler (1980) and Litkowski (1978) both studied the taxonomic structure of the nouns and verbs in dictionaries, observing that, for the most part, definitions of these words begin with a superordinate or hypernym (flax is a plant, hug is to squeeze). They both recognized that a dictionary is not fully consistent in laying out a taxonomy, because it contains defining cycles (where words may be used to define themselves when all links are followed). Litkowski, applying the theory of labeled directed graphs to the dictionary structure, concluded that primitives had to be concept nodes lexicalized by one or more words and verbalized with a gloss (identical to the synonym set encapsulated in the nodes in WordNet). He also hypothesized that primitives essentially characterize a pattern of usage in expressing their concepts. Figure 2 shows an example of a directed graph with three defining cycles; in this example, oxygenate is the base word underlying all the others and is only relatively primitive.

Evens and Smith (1978), in considering lexical needs for a question-answering system, presented a description of approximately 45 syntactic and semantic lexical relations. Lexical semantics is the study of these relations and is concerned with how meanings of words relate to one another (see Lexical Semantics). Evens and Smith grouped the lexical relations into nine categories: taxonomy and synonymy, antonymy, grading, attribute relations, parts and wholes, case relations, collocation relations, paradigmatic relations, and inflectional relations.
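A defining pattern like in (a|an) [adj] manner translates directly into a regular expression. The sketch below is a minimal Python illustration of that single pattern; treating the [adj] slot as any single word is a simplifying assumption (a real system would check the word's part of speech against the lexicon).

# "in (a|an) [adj] manner" as a regular expression over adverb
# definitions; the adjective slot is approximated by one word, which
# is a simplification made for illustration.
import re

MANNER_PATTERN = re.compile(r"^in an? (\w+) manner$")

def analyze_adverb_definition(definition: str):
    """If the definition fits the pattern, return the underlying adjective."""
    m = MANNER_PATTERN.match(definition.strip().lower())
    return m.group(1) if m else None

print(analyze_adverb_definition("in a rapid manner"))  # -> rapid
print(analyze_adverb_definition("at a slow pace"))     # -> None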

Figure 2 Illustrations of definition cycles for (aerify, aerate), (aerate, ventilate), and (air, aerate, ventilate) in a directed graph anchored by oxygenate.


Each relation was viewed as an entry in the lexicon itself, with predicate properties describing how to use the relations in a first-order predicate calculus. The study of lexical relations is distinguished from the componential analysis of meaning (Nida, 1975), which seeks to analyze meanings into discrete semantic components (or features). In this form of analysis, semantic features (such as maleness or animacy) are used to contrast the meanings of words (such as father and mother). These features proved to be extremely important among field anthropologists in understanding and translating among many languages. These features can be useful in characterizing lexical preferences, e.g., indicating that the subject of a verb should have an animate feature. Their importance has faded somewhat, particularly as the meanings of words have been seen to have fuzzy boundaries and to depend very heavily on the contexts in which they appear.

Ahlswede (1985), Chodorow et al. (1985), and others engaged in large-scale efforts for automatically extracting lexical semantic relations from MRDs, particularly W7. Evens (1988) provides a valuable summary of these efforts; a special issue of Computational Linguistics on the lexicon in 1987 also provides considerable detail on important theoretical and practical perspectives on lexical issues. One focus of this research was on extracting taxonomies, particularly for nouns. In general, noun definitions are extended noun phrases (e.g., including attached prepositional phrases), in which the head noun of the initial noun phrase is the hypernym. Parsing the definition provides the mechanism for reliably identifying the hypernym. However, the various studies showed many cases where the head is effectively empty or signals a different type of lexical relation. Examples of such heads include a set of, any of various, a member of, and a type of. Experience with extracting lexical relations other than taxonomy was similar. Investigators examined defining patterns for regularities in signaling a particular relation (e.g., a part of indicating a part-whole relation). However, the regularities were generally not completely reliable and further work, sometimes manual, was necessary to separate good results from bad results. Several observations can be made. First, there is no repository of the results; new researchers must reinvent the processes or engage in considerable effort to bring together the relevant literature. Second, few of these efforts have benefited directly from the defining instructions or guidelines used in creating the definitions. Third, as outcomes emerge that show the benefit of particular types of information, dictionary publishers have slowly incorporated some of this additional information, particularly in electronic versions of the dictionaries.

Research Using Longman's Dictionary of Contemporary English

Beginning in the early 1980s, the Longman dictionary of contemporary English (LDOCE; Proctor, 1978) became the primary MRD used in the research community. LDOCE is designed primarily for learners of English as a second language. It uses a controlled vocabulary of about 2000 words in its definitions. LDOCE uses about 110 syntactic categories to characterize entries (e.g., noun and noun/count/followed-by-infinitive-with-TO). The electronic version includes box codes that provide features such as abstract and animate for entries; it also includes subject codes, identifying the subject specialization of entries where appropriate. Wilks et al. (1996) provide a thorough overview of research using LDOCE (along with considerable philosophical perspectives on meaning and a detailed history of research using MRDs). In using LDOCE, many researchers have built upon the research that used W7. In particular, they have reimplemented and refined procedures for identifying the dictionary's taxonomy and for investigating defining patterns that reveal lexical semantic relations. In addition to string pattern matching, researchers began parsing definitions, necessarily taking into account idiosyncratic characteristics of definition text as compared to ordinary text. A significant problem emerged when parsing definitions: the difficulty of disambiguating the words making up the definition. This problem is symptomatic of working with MRDs, namely, that almost any pattern that is investigated will not have complete reliability and will require some amount of manual intervention. Boguraev and Briscoe (1987) introduced a new task into the analysis of MRDs, using them to derive lexical information for use in NLP applications. In particular, they used the box codes of LDOCE to create “lexical entries containing grammatical information compatible with” parsing using different grammatical theories. The derivational task has been generalized into a considerable number of research efforts to convert, map, and compare lexical entries from one or more sources. Since 1987, these efforts have grown and constitute an active area of research. Conversion efforts generally involve creation of broad-coverage lexicons from lexical resources within particular formalisms. Mapping efforts attempt to exploit and capture particular lexical properties from one lexicon into another. Comparison efforts examine multiple lexicons. Comparison of lexical entries from multiple sources led to a crisis in the use of MRDs.


Ide and Veronis (1993), in surveying the results of research using MRDs, noted that lexical resources frequently were in conflict with one another and could not be used reliably for extracting information. Atkins (1991) described difficulties in comparing entries from several dictionaries because of lexicographic exigencies and editorial decisions (particularly the dictionary size). She noted that lexicographers could variously lump senses together, split them apart, or combine elements of meaning in different ways. These papers, along with others, seemed to slow the research on using MRDs and other lexical resources. They also underscore the major difficulty that there is no comprehensive theory of meaning, i.e., an organization of the semantic content of definitions. This difficulty may be characterized as the problem of paraphrase, or determining the semantic equivalence of expressions (discussed in detail below).

Semantic Networks

Quillian (1968) considered the question of “how semantic information is organized within a person’s memory.” He described semantic memory as a network of nodes interconnected by associative links. In explicating this approach, he visualized a dictionary as a unified whole, where conceptual nodes (representing individual definitions) were connected by paths to other nodes corresponding to the words making up the definitions. This model envisioned that words would be properly disambiguated. Computer limitations at the time precluded anything more than a limited implementation. A later implementation by Ide and Veronis (1990) added the notion that nodes within the semantic network would be reached by spreading activation. WordNet (Fellbaum, 1998) was designed to capture several types of associative links, although the number of such links was limited by practical considerations. WordNet was not designed as a lexical resource, so that its entries do not contain the full range of information that is found in an ordinary dictionary. Notwithstanding these limitations, WordNet has found widespread use as a lexical resource, both in research and in NLP applications. WordNet is a prime example of a lexical resource that is converted and mapped into other lexical databases. MindNet (Dolan et al., 2000) is a lexical database and a set of methodologies for analyzing linguistic representations of arbitrary text. It combines symbolic approaches to parsing dictionary definitions with statistical techniques for discriminating word senses using similarity measures. MindNet began by parsing definitions and identifying highly reliable semantic relations instantiated in these definitions. The set of 25 semantic relations includes Hypernym, Synonym, Goal, Logical_subject, Logical_object, and Part.

A distinguishing characteristic of MindNet is that the inverses of all relations identified by pattern-matching heuristics are propagated throughout the lexical database. As a result, both direct and indirect paths between entries and words contained in their definitions exist in the database. Given two words (such as pen and pencil), the database is examined for all paths between them (ignoring any directionality in the paths). The path lengths and weights on different kinds of connections lead to a measure of similarity (or dissimilarity), so that a strong similarity is indicated between pen and pencil because both of them appear in various definitions as means (or instruments) linked to draw. Originally, MindNet was constructed from LDOCE; subsequently, American Heritage (3rd edn., 1992) was added to the lexical database. Patterns used in recognizing semantic relations from definitions can be used as well in parsing and analyzing any text, including corpora. Recognizing this, the MindNet database was extended by processing the full text of Microsoft Encarta. In principle, MindNet can be continually extended by processing any text, essentially refining the weights showing the strength of relationships. MindNet provides a mechanism for capturing the context within which a word is used and hence is a database that characterizes a word’s usage, in line with Firth’s (1957) argument that “the meaning of a word could be known by the company it keeps.” MindNet is a significant departure from traditional dictionaries, although it essentially encapsulates the process by which a lexicographer constructs definitions. This process involves the collection of many examples of a word’s usage, arranging them with concordances, and examining the different contexts to create definitions. The MindNet database could be mined to facilitate the lexicographer’s processes. Traditional lexicography is already being extended through automated techniques of corpus analysis very similar in principle to MindNet’s techniques.
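The path-collection step can be caricatured with a breadth-first search over an undirected graph of relation triples. The toy triples below are invented to echo the pen/pencil example; MindNet's actual database and weighting scheme are far richer.

# Toy relation graph of (word, relation, word) triples, invented to
# echo the pen/pencil example; not MindNet data.
from collections import deque

TRIPLES = [
    ("pen", "instrument_of", "draw"),
    ("pencil", "instrument_of", "draw"),
    ("pen", "hypernym", "implement"),
    ("pencil", "hypernym", "implement"),
]

def neighbors(word):
    for a, _, b in TRIPLES:
        if a == word:
            yield b
        if b == word:
            yield a

def shortest_path(start, goal):
    """Breadth-first search, ignoring edge direction, as the path
    collection described above is said to do."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("pen", "pencil"))  # -> ['pen', 'draw', 'pencil']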

Using Lexicons

Language Engineering

Research on computational lexicons, even with a resultant propagation of additional information and formalisms throughout the entries, is inherently limited. While a dictionary publisher makes decisions on what to include based on marketing considerations, the design and development of computational lexicons have not been similarly driven. In recent years, the new field of language engineering has emerged to fill this void. Language engineering is primarily concerned with NLP applications and includes the development of supporting lexical resources. The following sections examine the role of lexicons, particularly WordNet, in word-sense disambiguation, information extraction, question answering, text summarization, and speech recognition and speech synthesis.


Word-Sense Disambiguation

Many entries in a dictionary have multiple senses. Word-sense disambiguation (WSD) is the process of automatically deciding which sense is intended in a given context (see Disambiguation). WSD presumes a sense inventory, and as noted earlier, there can be considerable controversy about what constitutes a sense and how senses are distinguished from one another. Hirst (1987) provides a basic introduction to the issues involved in WSD, framing the problem as taking the output of a parser and interpreting the output into a suitable representation of the text. WSD requires a characterization of the context and mechanisms for associating nearby words, handling syntactic disambiguation cues, and resolving the constraints imposed by ambiguous words, all of which pertain to the content of lexicons. (See also Saint-Dizier and Viegas, 1995, for an updated view of lexical semantics.) To understand the relative significance of lexical information, a community-wide evaluation exercise known as Senseval (word-sense evaluation) was developed to assess WSD systems. Senseval exercises have been conducted in 1998 (Kilgarriff and Palmer, 2000), 2001, and 2004. WSD systems fall into two categories: supervised (where hand-tagged data are used to train systems using various statistical techniques) and unsupervised (where systems make use of various lexical resources, particularly MRDs). Supervised systems make use of collocational, syntactic, and semantic features used to characterize training data. The extent of the characterization depends on the ingenuity of the investigators and the amount of lexical information they use. Unsupervised systems require substantial information, not always available, in the lexical resources. In Senseval, supervised systems have consistently outperformed unsupervised systems, indicating that computational lexicons do not yet contain sufficient information to perform reliable WSD. The use of WordNet in Senseval, both as the sense inventory and as a lexical resource for disambiguation, emphasized the difference between the two types of WSD systems, since it does not approach dictionary-based MRDs in the amount of lexical information it contains. Close examination of the details used by supervised systems, particularly the use of WordNet, can reveal the kind of information that is important and can guide the evolution of information contained in computational lexicons. Dictionary publishers are increasingly drawing on results from Senseval and other exercises to expand the content of electronic versions of their dictionaries.
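The unsupervised, dictionary-based style of WSD can be illustrated with a bare-bones version of the classic gloss-overlap (Lesk-style) method, which is one such technique rather than any specific system evaluated in Senseval; the two glosses for bank below are invented.

# Bare-bones Lesk-style disambiguation: choose the sense whose gloss
# overlaps most with the context. Glosses are invented for illustration.
SENSES = {
    "bank/1": "sloping land beside a body of water such as a river",
    "bank/2": "financial institution that accepts deposits and lends money",
}

def lesk(context: str) -> str:
    ctx = set(context.lower().split())
    return max(SENSES, key=lambda s: len(ctx & set(SENSES[s].split())))

print(lesk("she sat on the bank of the river"))         # -> bank/1
print(lesk("the bank approved the loan and deposits"))  # -> bank/2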

Information Extraction

Information extraction (IE; Grishman, 2002) is “the automatic identification of selected types of entities, relations, or events in free text.” IE grew out of the Message Understanding Conferences, in which the main task was to extract information from text and put it into slots of predefined templates. Template filling does not require full parsing, but can be accomplished by pattern-matching using finite-state automata (which may be characterized by regular expressions). Template filling fills slots with a series of words, classified, for example, as names of persons, organizations, locations, chemicals, or genes. Patterns can use computational lexicons; some of these can be quite basic, such as a list of titles and abbreviations that precede a person’s name. Frequently, the lists can become quite extensive, as with lists of company names and abbreviations or of gazetteer entries. Names can be identified quite reliably without going beyond simple lists, since they usually appear in noun phrases within a text. Recognizing and characterizing events can also be accomplished by using patterns, but more substantial lexical entries are necessary. Events typically revolve around verbs and can be expressed in a wide variety of syntactic patterns. Although these patterns can be expressed with some degree of reliability (e.g., company hired person or person was hired by company) as the basis for string matching, this approach does not achieve a desired level of generality. Characterization of events usually entails a level of partial parsing, in which major sentence elements such as noun, verb, and prepositional phrases are identified. Additional generality can be achieved by extending patterns to require certain semantic classes. For example, in uncertain cases of classifying a noun phrase as a person or thing, the fact that the phrase is the subject of a communication verb (said or stated) would rule out classification as a thing. WordNet is used extensively in IE, particularly using hypernymic relations as the basis for identifying semantic classes. Continued progress in IE is likely to be accompanied by the use of increasingly elaborate computational lexicons, balancing needs for efficiency and particular tasks.
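A pattern of the simplest kind described here, a title list feeding a regular expression for person names, can be sketched as follows; the title list is a tiny invented sample, far smaller than real lexicon-backed lists.

# A lexicon-backed pattern for person names: a title from a word list
# followed by capitalized tokens. Matches keep a trailing space;
# trimming is left out for brevity.
import re

TITLES = ["Mr", "Mrs", "Ms", "Dr", "Prof"]
PERSON = re.compile(r"\b(?:%s)\.? (?:[A-Z][a-z]+ ?)+" % "|".join(TITLES))

text = "Yesterday Dr. Ada Lovelace met Mr Babbage in London."
print(PERSON.findall(text))  # -> ['Dr. Ada Lovelace ', 'Mr Babbage ']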

Question Answering

Although much research in question answering has been conducted since the 1960s, this field was much advanced with the introduction of the question-answering track in the Text Retrieval Conferences beginning in 1998 (see Voorhees and Buckland, 2004 and earlier volumes for papers relating to question answering).

From the beginning, researchers viewed this NLP task as one that would involve semantic processing and provide a vehicle for deeper study of meaning and its representation. This has not generally proved to be the case, but many nuances have emerged in handling different types of questions. Use of the WordNet hierarchy as a computational lexicon has proved to be a key component of virtually all question-answering systems. Questions are analyzed to determine what type of answer is required; e.g., “what is the length . . .?” requires an answer with a number and a unit of measurement; candidate answers use WordNet to determine if a measurement term is present. Exploration of ways to use WordNet in question answering has demonstrated the usefulness of hierarchical and other types of relations in computational lexicons. At the same time, however, lexicographical shortcomings in WordNet have emerged, particularly the use of highly technical hypernyms in between common-sense terms in the hierarchy. Many questions can be answered with string-matching techniques. In the first year, most of the questions were developed directly from texts (a process characterized as back-formation), so that answers were easily obtained by matching the question text. IE techniques proved to be very effective in answering the questions. Some questions can be transformed readily into searches for string patterns, without any use of additional lexical information. More elaborate string-matching patterns have proved to be effective when pattern elements specify semantic classes, e.g., ‘accomplishment’ verbs in identifying why a person is famous. Over the 6 years of the question-answering track, the task has been continually refined to present more difficult questions that would require the use of more sophisticated techniques. Many questions have been devised that require at least shallow parsing of texts that contain the answer. Many questions require more abstract reasoning to obtain the answer. One system has made use of logical forms derived from WordNet glosses in an abductive reasoning procedure for determining the answer. Improvements in question answering will continue to be fueled in part by improvements in the content and exploitation of computational lexicons.
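Answer typing of this sort is easy to prototype against WordNet. The sketch below uses NLTK's WordNet interface, which is my choice for illustration (the article names no toolkit), and assumes the NLTK WordNet data have been installed.

# WordNet-based answer typing via NLTK (an assumption of this sketch).
# Requires: pip install nltk; then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def is_measurement_term(word: str) -> bool:
    """True if any noun sense of word has a measure/quantity hypernym."""
    targets = {"measure", "quantity", "unit_of_measurement"}
    for synset in wn.synsets(word, pos=wn.NOUN):
        for hyper in synset.closure(lambda s: s.hypernyms()):
            if any(lemma in targets for lemma in hyper.lemma_names()):
                return True
    return False

print(is_measurement_term("meter"))   # -> True (a unit of measurement)
print(is_measurement_term("banana"))  # -> False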

Text Summarization

The field of automatic summarization of text has also benefited from a series of evaluation exercises, known as the Document Understanding Conferences (see Over, 2004 and references to earlier research). Again, much research in summarization has been performed (see Mani, 2001 and Summarization of Text: Automatic for an overview). Extractive summarization (in which highly salient sentences in a text are used) does not make significant use of computational lexicons. Abstractive summarization seeks a deeper characterization of a text. It begins with a characterization of the rhetorical structure of a text, identifying discourse units (roughly equivalent to clauses), frequently with the use of cue phrases (see Discourse Parsing, Automatic). Cue phrases include subordinating conjunctions that introduce clauses and sentence modifiers that indicate a rhetorical unit. Generally, this overall structure requires only a small list of words and phrases associated with the type of rhetorical unit. Attempts to characterize texts in more detail involve a greater use of computational lexicons. First, texts are broken down into discourse entities and events; information extraction techniques described earlier are used, employing word lists and some additional information from computational lexicons. Then, it is necessary to characterize the lexical cohesion of the text, by understanding the equivalence of different entities and events and how they are related to one another. Many techniques have been developed for characterizing different aspects of a text, but no trends have yet emerged in the use of computational lexicons in summarization. The overall discourse structure is characterized in part by the rhetorical relations, but these do not yet capture the lexical cohesion of a text. The words used in a text give rise to lexical chains based on their semantic relations to one another (i.e., such as the type of relations encoded in WordNet). The lexical chains indicate that a text activates templates (via the words) and that various slots in the templates are filled. For example, if word1 ‘is a part of’ word2, the template activated by word2 will have a slot part that will be filled by word1. When the various templates activated in a text are merged via synonymy relations, they will form a set of concepts. The concepts in a text may also be related to one another, particularly instantiating a concept hierarchy for the text. This concept hierarchy may then be used as the basis for summarizing a text by focusing on the topmost elements of the hierarchy.
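A first approximation to lexical chaining can likewise be sketched with WordNet: group words that share a synset or stand in a direct hypernym relation. The grouping criterion below is a deliberate simplification of published chaining algorithms, and NLTK's WordNet interface is again assumed.

# Naive lexical chaining: two words join the same chain if they share
# a synset or one is a direct hypernym of the other. Published
# chainers use more relations and weights; this is a simplification.
from nltk.corpus import wordnet as wn

def related(w1: str, w2: str) -> bool:
    s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
    if s1 & s2:                        # share a sense (synonyms)
        return True
    hypers = {h for s in s1 for h in s.hypernyms()}
    hypers |= {h for s in s2 for h in s.hypernyms()}
    return bool(hypers & (s1 | s2))    # direct hypernym link either way

def chains(words):
    result = []
    for w in words:
        for chain in result:
            if any(related(w, member) for member in chain):
                chain.append(w)
                break
        else:
            result.append([w])
    return result

print(chains(["dog", "puppy", "canine", "banana"]))
# expected with standard WordNet data: [['dog', 'puppy', 'canine'], ['banana']]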

Speech Recognition and Speech Synthesis

The use of computational lexicons in speech technologies is limited (see Van Eynde and Gibbon [2000] for several papers on lexicon development for speech technologies). MRDs usually contain pronunciations, but this information only provides a starting point for the recognition and synthesis of speech.

Speech computational lexicons include the orthographic word form and a reference or canonical pronunciation. A full-form lexicon also contains all inflected forms for an entry; rules may be used to generate a full-form lexicon, but it is generally more accurate to list the forms explicitly. The canonical pronunciations are not sufficient for spoken language processing. Lexical needs must reflect pronunciation variants arising from regional differences, language background of nonnative speakers, position of a word in an utterance, emphasis, and function of the utterance. Some of these difficulties may be addressed programmatically, but many can be handled only through a much more extensive set of information. As a result, speech databases provide empirical data on actual pronunciations, containing spoken text and a transcription of the text into written form. These databases contain information about the speakers, type of speech, recording quality, and various data about the annotation process. Most significantly, these databases contain speech signal data recorded in analog or digital form. The databases constitute a reference base for attempting to handle the pronunciation variability that may occur. In view of the massive amounts of data involved in implementing basic recognition and synthesis systems, they have not yet incorporated the full range of semantic and syntactic capabilities for processing the content of the spoken data.
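The trade-off noted above between rule-generated and explicitly stored full forms can be sketched in a few lines; the rule and the exception table below are toy illustrations.

# Toy full-form generation for verbs: a default rule plus an exception
# table, illustrating why stored forms beat rules for irregular items.
EXCEPTIONS = {"run": ["runs", "ran", "running"],
              "go":  ["goes", "went", "going"]}

def full_forms(verb: str) -> list[str]:
    if verb in EXCEPTIONS:                          # stored, accurate
        return EXCEPTIONS[verb]
    return [verb + "s", verb + "ed", verb + "ing"]  # rule-generated

print(full_forms("walk"))  # -> ['walks', 'walked', 'walking']
print(full_forms("run"))   # -> ['runs', 'ran', 'running']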

The Semantic Imperative

In considering the NLP applications of word-sense disambiguation, information extraction, question answering, and summarization, there is a clear need for increasing amounts of semantic information. The main problem facing these applications is an inability to identify paraphrases, that is, identifying whether a complex string of words carries more or less the same meaning as another string. Research in the linguistic community continues to refine methods for characterizing, representing, and using semantic information. At the same time, researchers are investigating properties of word use in large corpora (see Lexical Acquisition). As yet, the symbolic content of traditional dictionaries has not been merged with the statistical properties of word usage revealed by corpus-based methods.

Dictionary publishers are increasingly recognizing the value of electronic versions and are putting more information in these versions than appears in the print versions. McCracken (2003) describes several efforts to enhance a dictionary database as a resource for computational applications. These efforts include much greater use of corpus evidence in creating definitions and associated information for an entry, particularly variant forms, morphology and inflections, grammatical information, and example sentences. The efforts also include the development of a semantic taxonomy based on lexicographic principles and statistical measures of definitional similarity. The statistical measures are also used for automatic assignment of domain indicators. Collocates for senses are being developed based on various clues in the definitions (e.g., lexical preferences for the subject and object of verbs; see Collocations). Corpus-based methods have also been used in the construction of a thesaurus.

A lexicon of a person, language, or branch of knowledge is inherently a very complex entity, involving many interrelationships. Attempting to comprehend a lexicon within a computational framework reveals the complexity. Despite the considerable research using computational lexicons, the computational understanding of meaning still presents formidable challenges.

See also: Collocations; Dictionaries and Encyclopedias: Relationship; Disambiguation; Discourse Parsing, Automatic; Frame Semantics; Lexical Acquisition; Lexical Conceptual Structure; Lexical Semantics; Lexicology; Lexicon: Structure; Meronymy; Natural Language Understanding, Automatic; Partitives; Polysemy and Homonymy; Quantifiers; Selectional Restrictions; Semantic Primitives; Summarization of Text: Automatic; Synonymy; Thesauruses; WordNet(s).

Bibliography Ahlswede T (1985). ‘A tool kit for lexicon building.’ Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, Illinois: Association for Computational Linguistics. June 8–12. Amsler R A (1980). ‘The structure of the Merriam-Webster pocket dictionary.’ Ph.D. diss., Austin: University of Texas. Amsler R A (1986). ‘Computational lexicology: a research program.’ In Maffox A (ed.) American Federated Information Processing Societies Conference Proceedings. National Computer Conference, Arlington, VA: AFIPS Press. 397–403. Atkins B T S (1991). ‘Building a lexicon: the contribution of lexicography.’ International Journal of Lexicography 4(3), 167–204. Boguraev B & Briscoe T (1987). ‘Large lexicons for natural language processing: utilising the grammar coding system of LDOCE.’ Computational Linguistics 13(3–4), 203–218. Chodorow M, Byrd R & Heidorn G (1985). ‘Extracting semantic hierarchies from a large on-line dictionary.’ Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, IL: Association for Computational Linguistics.

Lexicon: Structure 497 Dolan W, Vanderwende L & Richardson S (2000). ‘Polysemy in a broad-coverage natural language processing system.’ In Ravin Y & Leacock C (eds.) Polysemy: theoretical and computational approaches. Oxford: Oxford University Press. 178–204. Evens M (ed.) (1988). Relational models of the lexicon: representing knowledge in semantic networks. Cambridge: Cambridge University Press. Evens M & Smith R (1978). ‘A lexicon for a computer question-answering system.’ American Journal of Computational Linguistics 4, 1–96. Fellbaum C (ed.) (1998). WordNet: an electronic lexical database. Cambridge: MIT Press. Firth J R (1957). ‘Modes of meaning.’ In Firth J R (ed.) Papers in linguistics 1934–1951. Oxford: Oxford University Press. 190–215. Gove P (ed.) (1972). Webster’s seventh new collegiate dictionary. Springfield, MA: G. & C. Merriam Co. Grishman R (2003). ‘Information extraction.’ In Mitkov R (ed.) The Oxford handbook of computational linguistics. Oxford: Oxford University Press. Hirst G (1987). Semantic interpretation and the resolution of ambiguity. Cambridge: Cambridge University Press. Ide N & Veronis J (1990). ‘Very large neural networks for word sense disambiguation.’ Proceedings of the 9th European Conference on Artificial Intelligence. Stockholm. Ide N & Veronis J (1993). ‘Extracting knowledge bases from machine-readable dictionaries: have we wasted our time?’ Proceedings of Knowledge Bases and Knowledge Structures 93. Tokyo. Kilgarriff A & Palmer M (2000). ‘Introduction to the special issue on SENSEVAL.’ Computers and the Humanities 34(1–2), 1–13. Litkowski K C (1978). ‘Models of the semantic structure of dictionaries.’ American Journal of Computational Linguistics 4, 25–74.

Mani I (2001). Automatic summarization. Amsterdam: John Benjamins. McCracken J (2003). 'Oxford dictionary of English: current developments.' Companion volume of the 10th conference of the European Association for Computational Linguistics. Budapest, Hungary. Nida E A (1975). Componential analysis of meaning. The Hague: Mouton. Olney J, Revard C & Ziff P (1968). Toward the development of computational aids for obtaining a formal semantic description of English. Santa Monica, CA: System Development Corporation. Over P (ed.) (2004). Document understanding workshop. Human Language Technology/North American Association for Computational Linguistics Annual Meeting. Association for Computational Linguistics. Proctor P (ed.) (1978). Longman dictionary of contemporary English. Harlow, Essex: Longman Group. Quillian M R (1968). 'Semantic memory.' In Minsky M (ed.) Semantic information processing. Cambridge: MIT Press. 216–270. Saint-Dizier P & Viegas E (eds.) (1995). Computational lexical semantics. Cambridge: Cambridge University Press. Soukhanov A (ed.) (1992). The American heritage dictionary of the English language (3rd edn.). Boston: Houghton Mifflin Company. Van Eynde F & Gibbon D (eds.) (2000). Lexicon development for speech and language processing. Dordrecht: Kluwer Academic Publishers. Voorhees E M & Buckland L P (eds.) (2003). National Institute of Standards and Technology Special Publication 500-255. The Twelfth Text Retrieval Conference (TREC 2003). Gaithersburg, MD: National Institute of Standards and Technology. Wilks Y A, Slator B M & Guthrie L M (1996). Electric words: dictionaries, computers, and meanings. Cambridge: The MIT Press.

Lexicon: Structure

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

A lexicon is a bin for storing listemes, language expressions whose meaning is not determinable from the meanings (if any) of their constituent forms and that, therefore, a language user must memorize as a combination of form and meaning. The way that a lexicon is organized depends on what it is designed to do.

1. The traditional Indo-European desktop dictionary is organized into alphabetically ordered entries in order to maximize look-up efficiency for the literate user.
2. Kitab al-Ayn, the Arabic dictionary of Al Khalil Ben Ahmad, has a phonetically based listing, beginning with velars and ending with bilabials (Kniffka, 1994).
3. A grammar that requires random lexical entry according to morphological class needs to access a lexicon through the morphosyntactic category of the lexicon entry.
4. The hearer accesses the mental lexicon from the phonological or graphological form of the entry.
5. The speaker accesses the mental lexicon through the meaning. However, slips of the tongue suggest that access presents a choice among several activated forms, from among which the inappropriate ones need to be inhibited (Aitchison, 2003: 220).
6. Someone producing an alliteration (e.g., around the rugged rocks the ragged rascal ran) needs to access the onsets to phonological forms.
7. Someone looking for a rhyme needs to be able to access entries via the endings of the phonological form.
8. The meaning of one word can remind us of another with a similar or contrary meaning; this suggests that meanings in the lexicon must be organized into a network of relations similar to those in a thesaurus.

Summing up, a lexicon must be accessible from three directions – form, morphosyntax, and meaning – none of which is intrinsically prior.

Formal specifications include graphological specification such as alternative spellings, hyphenation location, and use of uppercase. The phonological specification includes information on alternative pronunciations, syllable weight and structure, and the relative pitch and/or tone of the various syllables.

Morphosyntactic specifications include morphological and syntactic properties such as the inherent morphosyntactic (lexical) category of the item. Traditionally, the nouns helicopter and move and the semantically related verbs helicopter and move are all different listemes; yet, despite their different syntactic properties and concomitant meaning differences, the transitive verb move and the intransitive verb move traditionally share the same polysemous dictionary entry under different subheadings. The COBUILD English dictionary lists all forms of a listeme; for example, for clear it has clearer, clearest, clears, clearing, cleared (not divided between Adj and V listemes). Derived words are either generated from the listeme by morphological rules (e.g., Adv clearly from Adj clear) or else listed as distinct listemes. Should dictionaries list just morphemes or every word form, including inflected forms, or something in between? For various views see Butterworth (1983), Flores d'Arcais and Jarvella (1983), Altman (1990), Altman and Shillcock (1993), Allan (2001), and Aitchison (2003).

Regularities need to be signaled, for example, the declension and gender of Latin nouns and the conjugation of verbs. So, too, must irregularities be signaled, for example, the past tense of English strong verbs, and constraints on range such as an object NP can interrupt the V + vPrt of some phrasal verbs (run NP down; 'denigrate NP') but not others (put *NP up with; 'endure NP').

Semantic specifications identify the senses of a listeme based on the salient characteristics of its typical denotatum (see Stereotype Semantics). The form of the semantic specification depends on the preferred metalanguage (see Metalanguage versus Object Language; Cognitive Semantics; Dynamic Semantics; Frame Semantics; Generative Lexicon; Lexical Conceptual Structure; Natural Semantic Metalanguage). Boguraev and Briscoe (1989: 5) add encyclopedic knowledge to the list of semantic specifications, and many lexicographers would agree that a lexicon should also be a cultural index; thus, the Collins English dictionary (Hanks, 1998), New Oxford dictionary of English (Pearsall, 1998), and Collins English dictionary (Butterfield, 2003) list encyclopedic information about bearers of certain proper names (on the networked relationship between lexicon and encyclopedia, see Dictionaries and Encyclopedias: Relationship).

Bauer (1983: 196) proposed a category of 'stylistic specifications' to distinguish among piss, piddle, and micturate to reflect the kind of metalinguistic information found in traditional desktop dictionary tags such as 'colloquial,' 'slang,' 'derogatory,' 'medicine,' and 'zoology.' Such metalinguistic information is more appropriate to the encyclopedia entry directly connected with every lexicon entry. The encyclopedia will also supply etymological information, which is not essential to the proper use of the listemes, and an account of the connotations (see Connotation) of listemes, which is. Bauer also suggested a signaling of 'related lexemes' such as that edible is related to eat. This should be identified through the semantic specification, which will differentiate the sense of edible from eatable (cf. Lehrer, 1990: 210). Contraries and antonyms are linked to a listeme via its semantic field (see Lexical Fields).

Metalanguage terms used in semantic specifications are often semantically identical with object language listemes; for example, Pustejovsky (1995: 101) specified book as a "physical object" that "holds" "information" created by someone who "write[s]" it and whose function is to be "read." Certainly, there is a relation among book, write, and read that needs to be accounted for either in the semantic specification or the associated encyclopedia entry. Used in the morphosyntactic specifications, such category terms as noun, verb, adjective, and feminine are part of the metalanguage, not the object language. But they also appear in the lexicon as expressions in the object language.


Suppose that the sense 'typically names a person, place, thing or abstract entity' has an address in the lexicon (this is explained later). Next, suppose that in the lexicon of English, one form at this address is noun; in the lexicon of French, one form at this address is nom; and in the lexicon of Greek, it is ónoma. Suppose that in the lexicon of the metalanguage there is a form N at this same address. This move simply reflects the well-known and widely accepted fact that there are translation (near) equivalences between the object language and metalanguage, just as there are between different natural languages.

The lexicon is used in string matching among formal specifications, such as retrieving all listemes beginning with /d/ or those rhyming with cord. Similarly, a search of the morphosyntactic specification of every entry is the best way to compile a list of all nouns in the lexicon. And the semantic specification of a listeme will locate it within its semantic field and relate it to superordinate items, contraries, antonyms, and the like. Many of these relations are not primarily lexical but arise from the relationships characteristic of the denotatum of a listeme; for example, because a cat is perceived to be a kind of animal, the listeme animal is semantically superordinate to the listeme cat. Any two listemes may be connected through their formal, morphosyntactic, and/or semantic specifications in the lexicon and through the stylistic or etymological properties identified in the encyclopedia.

Lyons (1977: 516) suggested that each lexicon entry be assigned a random number that serves as its address for access through any of the three modes. This proposal is very similar to one in Jackendoff (1995; 1997: 89), which "licence[s] the correspondence of certain (near-) terminal symbols of syntactic structure with phonological and conceptual structures" (see Lexical Conceptual Structure). Jackendoff proposed that a lexicon entry has the three parts, as shown in Figure 1, where they are tagged with our specification types. In Jackendoff's theory, the morphosyntactic specification is the linchpin that is co-indexed j with the formal specification on the left-hand side and the subscript n of the semantic specifications on the right. This seems correct. If we ask someone the meaning of the word fly (which is many ways ambiguous), two possible answers are 'travel through the air' (V) and 'cloth cover for an opening' (N).

Figure 1 Jackendoff-style lexicon entry for dog.

Similarly, the word dog means 'canine animal' (N) and 'follow dog-like' (V). The word canine means 'a member of the dog family' (N) and 'pertaining to the dog family' (Adj). The difference between such pairs is indicated by the combination of formal and morphosyntactic specification – which corresponds to Jackendoff's subscript j in Figure 1. This distinguishing factor is exactly the reason that a traditional desktop dictionary identifies the morphosyntactic category immediately after identifying the form of a listeme. Example (1) shows how much morphosyntax contributes to meaning.

(1) the toves gimbled in the wabe

In this example, toves is clearly a plural noun and therefore denotes more than one entity, gimbled is the past tense of a verb and so denotes some act or state of the toves, and wabe is a noun – and, because it falls within the scope of the preposition in, wabe identifies a place or time. Thus, the morphosyntactic specification of these words gives us some meaning to work on, as suggested by Jackendoff's subscript n in Figure 1. In addition, the form distinguishes between the nouns tove and wabe.

For the speaker, the main function of the lexicon is to find a form to express the intended meaning – but not just a form; it should also find its morphosyntactic category, which is often at least partly determined by the content. For instance, a reference to a thing will usually require using a noun; a reference to an event will usually require a verb. Except when parroting a phrase, it is impossible when speaking a language to use a form without assigning it a morphosyntactic specification. It is impossible to assign a morphosyntactic specification to a meaning without also assigning it a form (the appropriate form for a zero morph is, of course, null). Someone who asks what's the word that means 'get back at'? may not be able to retrieve the form retaliate, but the intended meaning is conveyed using the forms of other lexemes. In terms of Figure 1, the speaker goes from the subscript n on the semantic specification to the morphosyntactic specification jNn that links to the formal specification through the j subscript. For the hearer, the main function of the lexicon is to attach meaning to forms that normally occur within syntactic structures, so it is a matter of finding a semantic specification for something that is both formally and morphosyntactically specified. In terms of Figure 1, the hearer goes from j to n.

Jackendoff's subscripts correspond to connection points for bidirectional lines in a network, as in Figure 2, which shows that their sequence with respect to the formal specifications (F), morphosyntactic specifications (M), and semantic specifications (S) is a convention without substance; that is, fF = Ff and so forth.


Figure 2 Networked components of an entry in the hypothetical lexicon. F, formal specs; M, morphosyntactic specs; S, semantic specs.

Table 1 Partial entries for dog and canine*

F              M               S
f8899 dog      f8899Ns0017     dog′ s0017
               f8899Vs3214     follow_like_a_s0017′ s3214
f7656 canine   f7656Ns7439     conical_pointed_tooth′ s7439
               f7656Adjs1227   pertaining_to_a_s0017′ s1227

*F, formal specs; M, morphosyntactic specs; S, semantic specs.

Figure 3 Part of the network for the lexicon entries for dog (N and V) and canine (N and Adj).

Suppose there are the grossly oversimplified components in Table 1. The formal and semantic addresses are randomly selected numbers, tagged with f and s, respectively. Table 1 ignores (1) the fact that the semantic specifications must somehow reflect the facts that a dog is an animal, a living thing, and a physical object and (2) the need to identify the kind of countable and uncountable environments in which a noun can conventionally occur; the thematic structure, valency, or frame of a verb; and the gradability of an adjective. In Figure 3, which represents a part of this network of relations, the question of how the graphological and phonological forms should be correlated is not accounted for.

The form of a lexicon entry is a co-indexed tripartite network in which the indices correspond to bidirectional connectors between the components of a lexicon entry, as shown in Figure 2. Each of the formal, morphosyntactic, and semantic specifications is also linked with the encyclopedia (see Dictionaries and Encyclopedias: Relationship). Such connectors model cognitive pathways that presumably have a neurological basis and could be empirically disconfirmed (e.g., by studies of aphasics). There is evidence for a spreading activation along pathways within the mental lexicon that leads to selection among partial matches between meaning and form for the appropriate listeme (Altman, 1990; Levelt, 1993; Aitchison, 2003). There is, of course, nothing to be gained by rewriting desktop dictionaries in terms of networks if they are going to be consulted in the same way human beings have consulted them in the past.
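As a purely editorial illustration, the tripartite organization of Table 1 can be modeled directly. The following Python sketch is ours, not part of Allan's proposal; the addresses and glosses are the hypothetical ones from Table 1. It shows how a single co-indexed record supports access by form (the hearer's route), by meaning (the speaker's route), or by morphosyntactic category (the grammar's route):

# A toy tripartite lexicon: each listeme links a formal address (f...),
# a morphosyntactic category, and a semantic address (s...), as in Table 1.
# The shared addresses act as the bidirectional connectors of Figure 2.

ENTRIES = [
    ("f8899", "dog",    "N",   "s0017", "dog'"),
    ("f8899", "dog",    "V",   "s3214", "follow_like_a_s0017'"),
    ("f7656", "canine", "N",   "s7439", "conical_pointed_tooth'"),
    ("f7656", "canine", "Adj", "s1227", "pertaining_to_a_s0017'"),
]

def by_form(form):
    """Hearer's route: from a form to its categories and senses."""
    return [(m, sa, sense) for f, fm, m, sa, sense in ENTRIES if fm == form]

def by_sense(sense_addr):
    """Speaker's route: from a semantic address to form and category."""
    return [(fm, m) for f, fm, m, sa, sense in ENTRIES if sa == sense_addr]

def by_category(cat):
    """Grammar's route: all listemes of a given morphosyntactic category."""
    return [fm for f, fm, m, sa, sense in ENTRIES if m == cat]

print(by_form("dog"))      # both the N and the V listeme
print(by_sense("s0017"))   # the form-category pair for dog (N)
print(by_category("N"))    # all nouns in the toy lexicon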

See also: Cognitive Semantics; Collocations; Componential Analysis; Connotation; Definition in Lexicology; Dictionaries; Dictionaries and Encyclopedias: Relationship; Dynamic Semantics; Formal Semantics; Frame Semantics; Generative Lexicon; Idioms; Lexical Conceptual Structure; Lexical Fields; Metalanguage versus Object Language; Natural Semantic Metalanguage; Polysemy and Homonymy; Psychology, Semantics in; Selectional Restrictions; Stereotype Semantics.

Bibliography

Aitchison J (2003). Words in the mind: an introduction to the mental lexicon (3rd edn.). Oxford: Blackwell. Allan K (2001). Natural language semantics. Oxford and Malden, MA: Blackwell. Altman G T M (ed.) (1990). Cognitive models of speech processing: psycholinguistic and computational perspectives. Cambridge MA: MIT Press. Altman G T M & Shillcock R (eds.) (1993). Cognitive models of speech processing. Hove, NJ: Lawrence Erlbaum. Bauer L (1983). English word-formation. Cambridge, UK: Cambridge University Press. Boguraev B & Briscoe T (1989). 'Introduction.' In Boguraev B & Briscoe T (eds.) Computational lexicography for natural language processing. London: Longman. 1–40. Butterfield J et al. (eds.) (2003). Collins English dictionary: complete and unabridged (6th edn.). Bishopriggs (Glasgow), UK: HarperCollins. Butterworth B (1983). 'Lexical representation.' In Butterworth B (ed.) Development, writing and other language processes, vol. 2. London: Academic Press. Flores d'Arcais G B & Jarvella R J (eds.) (1983). The process of language understanding. New York: John Wiley. Hanks P et al. (eds.) (1998). Collins English dictionary (4th edn.). Glasgow: HarperCollins.

Jackendoff R S (1995). 'The boundaries of the lexicon.' In Everaert M, van der Linden E-J, Schenk A & Schreuder R (eds.) Idioms: structural and psychological perspectives. Hillsdale, NJ: Erlbaum. 133–165. Jackendoff R S (1997). Architecture of the language faculty. Cambridge, MA: MIT Press. Kniffka H (1994). 'Hearsay vs autoptic evidence in linguistics.' In Nagel T et al. (eds.) Zeitschrift der deutschen morgenländischen Gesellschaft. Stuttgart: Franz Steiner. 345–376.

Lehrer A (1990). ‘Polysemy, conventionality, and the structure of the lexicon.’ Cognitive Linguistics 1, 207–246. Levelt W J M (ed.) (1993). Lexical access in speech production. Oxford: Blackwell. Lyons J (1977). Semantics (2 vols). Cambridge, UK: Cambridge University Press. Pearsall J et al. (eds.) (1998). New Oxford dictionary of English. Oxford: Oxford University Press. Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.

Logic and Language

G Callaghan, Wilfrid Laurier University, Waterloo, Ontario, Canada
G Lavers, University of Western Ontario, London, Ontario, Canada

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Theories of meaning and methods of linguistic analysis are key items in the agenda of contemporary analytic philosophy. Philosophical interest in language gained substantial impetus from developments in logic that took place in the latter half of the nineteenth century. It was at this time that the early modern conception of logic as an informal 'art of thinking' gave way to the contemporary conception of logic as a formally rigorous, symbolic discipline involving, inter alia, a mathematically precise approach to deductive inference. The systems of symbolic logic that emerged in the later stages of the nineteenth century were fruitfully applied in the logical regimentation of mathematical theories – analysis and arithmetic in particular – and logical analysis became the cornerstone of a general philosophical methodology for a number of influential figures in the first half of the twentieth century. Though the operative conception of logical analysis did not in every case treat language (or 'natural language') as the proper object of investigation, close connections between logic and language were stressed by virtually every proponent of the methodology. Our aim in this entry is to discuss these connections as they appear in the work of some of the eminent precursors and purveyors of the analytic tradition.

The Mathematicization of Logic: Leibniz and Boole

Aristotle’s doctrine of the syllogism and its Scholastic variants. Among the major figures of the early modern period, Gottfried Wilhelm Leibniz (1646–1716) is distinguished both for his respect for the Aristotelian tradition in logic and for his general emphasis on the importance of formal methods. Leibniz applauded Aristotle for being ‘‘the first to write actually mathematically outside of mathematics’’ (1696: 465). However, it was Leibniz’s own works, rather than those of Aristotle or of contemporary Aristotelians, that in the period did most to advance the conception of logic as a kind of generalized mathematics. Leibniz’s logical work consists of a number of manuscripts, unpublished in his lifetime, in which he undertakes the construction of a logical calculus. In virtually all of these works, Leibniz represents judgments and logical laws in a quasi-arithmetical or algebraic notation and he assimilates processes of inference to known methods of calculation with numbers (e.g., by substitution of equals). Leibniz’s motivation for this approach stemmed from his early project of a lingua characteristica universalis – a symbolic language geared to the logically perspicuous representation of content in all fields of human knowledge. According to Leibniz, the content of any judgment consists in the composition of the concepts arrayed in the judgment as subject and predicate. A judgment is true when the predicate concept is ‘contained in,’ or partially constitutive of, the subject concept. For example, the truth of the judgment that all men are rational consists in the fact that the concept rational is contained in the concept man, as per the traditional definition of ‘man’ as ‘rational animal.’ (For obvious reasons, Leibniz’s conception of truth posed difficulties when it came to accounting for contingently true judgments, and the task of providing an account of contingency compatible with his conception of truth was one to which Leibniz devoted considerable philosophical attention.) All complex concepts can be parsed as conjunctions of concepts of lower orders


All complex concepts can be parsed as conjunctions of concepts of lower orders of complexity, down to the level of simple concepts that cannot be further analyzed. Leibniz's various schemes for a universal characteristic were predicated on the idea that containment relations among concepts could be made arithmetically tractable given an appropriate assignment of 'characteristic numbers' to concepts. For instance, in one such scheme Leibniz proposed that the relationship between complex concepts and their simple constituents be represented in terms of the relationship between whole numbers and their prime factors, thus capturing the unique composition of any complex from its primitive components (1679, 1679/1686). In this and similar ways, Leibniz sought to provide a basis for the view that inference, and the evaluation of truth more generally, could be carried out algorithmically – that is, as a mere process of calculation – by familiar arithmetical means.
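To make the prime-factor scheme concrete, here is a minimal Python sketch; the particular assignment of primes to simple concepts is our own hypothetical choice (Leibniz fixed no such assignment), but the mechanics of reducing containment to divisibility follow the scheme just described:

# Leibniz's 1679 scheme in miniature: simple concepts receive prime
# 'characteristic numbers'; a complex concept is the product of its
# constituents; and concept containment reduces to divisibility.

SIMPLE = {"animal": 2, "rational": 3}        # hypothetical assignment
MAN = SIMPLE["animal"] * SIMPLE["rational"]  # 'man' = 'rational animal' = 6

def contains(subject, predicate):
    """True when the predicate concept is contained in the subject concept."""
    return subject % predicate == 0

print(contains(MAN, SIMPLE["rational"]))  # True: 'all men are rational'
print(contains(MAN, 5))                   # False: an unrelated simple concept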

By the 1680s Leibniz had grown pessimistic about the prospect of completing the project of the universal characteristic, and he turned his energies to the more confined task of devising an abstract logical calculus. Leibniz worked on a number of different versions of his logical calculus through the 1680s and 1690s. In each case he explained how standard propositional forms could be expressed in a quasi-algebraic notation. He also laid down logically primitive laws pertaining to his formulas (what he called 'propositions true in themselves') and abstractly specified valid inferential transformations, usually in terms of a definition of 'sameness' in conjunction with principles pertaining to the substitutability of identicals. Though Leibniz's efforts at constructing a logical calculus were hampered by his view that all judgments ultimately reduce to a simple subject-predicate form – thus excluding primitive relational judgments – his emphasis on formal explicitness and mathematically exact symbolism stands as an anticipation of the main lines of development in formal logic in the following centuries.

Leibniz's mathematical approach to logic made little impression until the late nineteenth century, when his manuscripts were finally collected and published. By that time, however, the mathematical approach had gained momentum independently, largely on the basis of the work of George Boole (1815–1864). Boole shared with Leibniz the aim of devising an algebraic means of expressing relationships among terms figuring in propositions. However, Boole differed from Leibniz in treating the extensions of concepts (or classes), rather than concepts construed as attributes or 'intensions,' as the relevant propositional constituents. In The laws of thought (1854), Boole presented his class logic, which he called 'the logic of primary propositions,' as the first and fundamental division of his system. In the second part of the same work, Boole adapted the calculus of classes to a special interpretation that allows for the representation of logically compound propositions, or 'secondary propositions,' thereby unifying (after a fashion) the calculus of classes with a version of modern propositional calculus.

Boole's central idea is that an algebra of logic arises as an interpretive variant of standard numerical algebra when the latter is modified by a single principle that is naturally suggested by the logical interpretation of the symbolism. In Boole's class logic, letters (or 'literal symbols') are interpreted as standing for classes of things determined by some common attribute, with '1' standing for the universe class and '0' standing for the null class. Multiplication, addition, and subtraction operators are treated as standing for the operations of intersection, disjoint union, and difference (or 'exception') of classes, respectively. Primary propositions are then expressed as equations with appropriately formed class terms standing on either side of the identity sign. On the basis of this class-theoretic interpretation of the symbolism, Boole maintained that the logical calculus differs from ordinary numerical algebra only with respect to the characteristically logical law that, for any class x,

xx = x (x intersect x is x)

which holds generally for class-theoretic intersection but which holds for numerical multiplication only for x = 0 and x = 1. Having emphasized this difference, Boole observed that the laws and transformations of numerical algebra will be identical to those of an algebra of logic when the numerical values of literal symbols in the former are restricted to 0 and 1. Boole appealed to this formal analogy between the numerical and logical algebras in justifying his approach to inference, which he presented as a process of solving sets of simultaneous equations for unknowns by standard algebraic methods.
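The restriction to the values 0 and 1 is easy to verify directly. The following Python fragment is an illustration of ours, not Boole's own notation; the arithmetical expressions for complement and union in the final lines are one later convention rather than Boole's original '+':

# Boole's algebra of logic in miniature: restricted to {0, 1} (the null
# class and the universe class), ordinary multiplication models class
# intersection and satisfies the distinctively logical law xx = x.

for x in (0, 1):
    assert x * x == x       # the idempotent law holds at 0 and 1

print(2 * 2 == 2)           # False: the law fails outside {0, 1}

# 1 - x models the complement of x; x + y - x*y models inclusive union
# (a later refinement; Boole's own '+' stood for disjoint union).
x, y = 1, 0
print(1 - x)                # 0: the complement of the universe class
print(x + y - x * y)        # 1: the union of x and y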


In The laws of thought, Boole transformed the calculus of classes into a serviceable propositional calculus by interpreting his literal symbols over 'portions of time' during which elementary propositions are true, thus adapting the notation and methods designed for dealing with class relationships to the propositional case. Boole's appeal to portions of time reflected a somewhat puzzling endeavor to assimilate or reduce propositional logic to the kind of term logic embodied in his class calculus, and the artificiality of this approach was not lost on subsequent logicians both within and without the algebraic tradition. However, peculiarities of interpretation notwithstanding, Boole can be credited with the first systematic formulation of propositional logic and a commensurate expansion of the scope of formal logic in general. Moreover, his suggestion that propositional logic, class logic, and numerical algebra (suitably restricted) arise as interpretive variants of a single algebraic system anticipates subsequent developments in abstract algebra and (perhaps only dimly) modern model-theoretic methods in logic.

The contributions of Leibniz and Boole constitute beginnings in the fruitful deployment of artificial languages in the analysis of propositional content and the systematization of deductive inference. However, despite their considerable accomplishments, neither Leibniz nor Boole can be credited with bringing logic to its current state of maturity. They produced no inroads in the logic of relations, and the use of quantifiers for the expression of generality is entirely foreign to their work. These shortcomings were addressed by later logicians working in the algebraic tradition (e.g., Peirce and Schröder), but the significance of their resolution for the development of modern formal logic and its philosophical offshoots will be better appreciated if we adopt a somewhat different perspective on the historical interplay between logic and mathematics.

Logic and Language in Frege

For the better part of his career, the philosopher-mathematician Gottlob Frege (1848–1925) devoted his energies to establishing the 'logicist' thesis that arithmetical truth and reasoning are founded upon purely logical principles. At an early stage in his efforts, Frege realized that existing systems of logic were inadequate for carrying out the analysis of content necessary for establishing arithmetic's logical character. His Begriffsschrift (or 'concept notation') (1879) was intended to address this deficiency. The logical system that Frege presented in the Begriffsschrift and later refined in Part I of his Grundgesetze der Arithmetik (1893) constitutes the greatest single contribution in formal logic since the time of Aristotle. The most distinctive aspects of Frege's logic are (1) the use of variables and quantifiers in the expression of generality; (2) the assimilation of predicates and relational expressions to mathematical expressions for functions; (3) the incorporation of both propositional logic and the logic of relations within (second-order) quantificational logic; and (4) the notion of a formal system – i.e., of a system comprising a syntactically rigid language along with explicit axioms and inference rules that together determine what is to count as a proof in the system.

Frege's approach to generality is based on his analysis of predication in terms of function and argument. In arithmetic, a term such as 7 + 5 can be viewed as dividing into function and argument in different ways.

For instance, it can be treated as dividing into the function ( ) + 5 with 7 as argument, or as dividing into the function 7 + ( ) with 5 as argument, or as dividing into the binary function ( ) + [ ] with 7 and 5 (in that order) as arguments. Frege's approach to predication assimilates the analysis of sentences to this feature of the analysis of arithmetical expressions. For example, a simple sentence such as 'John loves Mary' can be regarded as predicating the (linguistic) function '( ) loves Mary' of the singular term 'John,' or the function 'John loves ( )' of the singular term 'Mary,' or the relational function '( ) loves [ ]' of 'John' and 'Mary' (in that order). In the Begriffsschrift, Frege remarked that, for simple sentences like this, the analysis into function and argument makes no difference to the 'conceptual content' that the sentence expresses. However, the possibility of analyzing a sentence in these ways is nevertheless crucial to logic, since only on this basis do we recognize logical relationships between generalizations and their instances.

Adopting a standard arithmetical practice, Frege makes use of variables as a means of expressing generality. For example, by putting the variable 'x' in the argument-place of 'Mary' in our example, we arrive at the statement 'John loves x,' which is the Begriffsschrift equivalent of the colloquial generalization 'John loves all things.' The inference from this generalization to 'John loves Mary' now requires that we regard 'Mary' as argument to the function 'John loves ( ),' since only so is 'John loves Mary' recognizable as an instance of the generalization. Other function-argument analyses become salient in connection with other generalizations to which the statement relates as an instance (e.g., 'x loves Mary').

In the system of the Begriffsschrift, the above-described use of variables suffices to express generality in a limited variety of sentential contexts. However, Frege's broader treatment of generality involves a second crucial component, namely, the use of quantifiers – i.e., the variable-binding operators '∀x' (read: 'every x') and '∃x' (read: 'some x') – as a means of indicating the scope of the generality associated with a variable. (Our discussion here prescinds from the peculiarities of Frege's now obsolete notation, as well as from his convention of treating existential quantification in terms of universal quantification and negation – i.e., his treatment of '∃x . . .' as '¬∀x¬ . . .'.) One of the many ways in which the use of quantifiers has proven important to logic concerns the expression of multiply general statements, for which no adequate treatment existed prior to the Begriffsschrift. Consider, for example, the relational generalization 'Everyone loves someone.' This statement is ambiguous between the following two readings: (1) 'There is some (at least one) person that is loved by all,' and (2) 'Every person is such as to love some (at least one) person.'


The use of quantifiers resolves this ambiguity by requiring that expressions of generality in multiply general statements be ordered so as to reflect scope. The first reading of the statement is expressed by the existentially quantified sentence '∃y∀x xLy,' where the scope of the universal quantifier falls within that of the existential quantifier. (For convenience, we assume here that the variables 'x' and 'y' are restricted to a domain of persons.) By contrast, the second reading is given by the sentence '∀x∃y xLy,' where the universal quantifier has wide scope with respect to the existential quantifier. Since the Begriffsschrift's formation rules ensure that the scope of a quantifier will be properly reflected in any sentence in which it occurs, an ambiguous sentence such as the one we started with cannot even be formulated in the language.

Scope considerations apply in essentially the same way to sentential operators (e.g., the negation sign '¬' and the conditional sign '→') in the logic of the Begriffsschrift. For instance, the ambiguity of the sentence 'Every dog is not vicious' results from the fact that, as stated, the sentence does not determine the scope of the negation sign with respect to that of the universal quantifier. On one reading, the scope of the negation sign falls within that of the quantifier, i.e., '∀x(Dx → ¬Vx),' the statement thus affirming that anything that is a dog is not vicious (or, colloquially expressed: 'No dogs are vicious'). By contrast, when the statement is read as giving wide scope to the negation sign, i.e., '¬∀x(Dx → Vx),' it becomes the denial of the generalization that all dogs are vicious (i.e., 'It is not the case that all dogs are vicious'). As the above examples begin to suggest, Frege's technique of ordering operators according to scope provides the basis for his incorporation of both propositional logic and the logic of relations within quantificational logic.
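The difference between the two quantifier orders can be checked mechanically. In the Python sketch below, a toy model of our own in which the domain and the loves relation are invented, the two readings receive different truth values over the very same facts:

# The two readings of 'Everyone loves someone' over a toy domain.
# loves contains pairs (a, b) meaning 'a loves b'.

persons = ["john", "mary", "sue"]
loves = {("john", "mary"), ("mary", "sue"), ("sue", "mary")}

# Reading (2), wide-scope universal: for every x there is some y, x loves y.
wide_universal = all(
    any((x, y) in loves for y in persons) for x in persons
)

# Reading (1), wide-scope existential: there is some y loved by every x.
wide_existential = any(
    all((x, y) in loves for x in persons) for y in persons
)

print(wide_universal)    # True: everyone loves at least one person
print(wide_existential)  # False: no single person is loved by all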

Frege's philosophical interest in language extended beyond his characterization of the formal mechanisms of a logically perspicuous language such as the Begriffsschrift. In the classic paper 'On Sense and Reference' (1892), Frege presented a framework for a general theory of meaning applicable to both natural languages and formal languages. The core of the doctrine consists in the contention that any adequate account of the meaning of a linguistic expression must recognize two distinct, but related, semantic components. First, there is the expression's reference, i.e., its denotative relation to a referent (or denoted entity). Second, there is the expression's sense, which Frege characterized as a particular manner in which the expression's referent is cognitively presented to the language user.

Frege motivated this distinction by drawing attention to sentences in which the meaning of a singular term is apparently not exhausted by its correlation with a referent. For example, if the meaning of a singular term were to consist only in what it refers to, then the true, but non-trivial, identity statement 'The evening star is the morning star' could not differ in meaning from the trivially true identity statement 'The evening star is the evening star.' Since the latter statement results from the former by substituting co-referential singular terms, any view that equates meaning with reference will necessarily fail to register any difference in meaning between the two statements. But the two sentences clearly do differ in meaning, since 'The evening star is the morning star' is not a trivial identity, but an informative identity – indeed, one that expresses the content of a genuine astronomical discovery – whereas 'The evening star is the evening star' is plainly trivial. Frege accounts for this by suggesting that while 'the evening star' and 'the morning star' have a common referent, they express different senses. A language user therefore grasps the common referent differently in connection with each of the two expressions, and this in turn accounts for the difference in 'cognitive value' between the two identity statements. Frege applies the notion of sense to similar effect in addressing puzzles concerning the meaning of singular terms in so-called 'intensional contexts,' for example, belief reports.

In subsequent writings, Frege extended the sense-reference distinction beyond the category of singular terms (which, on Frege's account, refer specifically to 'objects'), to all categories of linguistic expression, including monadic and polyadic predicates (which refer to 'concepts' and 'relations,' respectively), and complete sentences. In the case of sentences, Frege identified as referents the two truth-values, 'the true' and 'the false,' and he characterized these as special 'logical objects.' A sentence's sense is, by contrast, the 'thought' it expresses, where the thought is understood as a compositional product of the senses of the sentence's linguistic subcomponents. As strained as this extension of the theory may appear, particularly with respect to reference, it brings to light two important features of Frege's approach to meaning. First, it reflects his insistence that the theory of reference should comprise, inter alia, a theory of semantic value – that is, a systematic account of how the semantic values of complex expressions (which, in the case of sentences, are truth-values) are determined on the basis of the semantic values of their subordinate constituents.


Second, it reflects an endeavor to integrate the theory of semantic value with a plausible general account of linguistic understanding (as given by the theory of sense). Seen in light of these general ambitions, Frege's theory of sense and reference proposed an agenda that any comprehensive approach to the theory of meaning must in one way or another respect – a point that is amply borne out by subsequent developments in analytic philosophy of language.

Russell: Definite Descriptions and Logical Atomism

The idea that logical analysis forms the basis of a general philosophical method is central to the philosophy of Bertrand Russell (1872–1970). It is especially prominent in the works Russell produced over the first quarter of the twentieth century. In this period, Russell developed and defended the doctrine of 'logical atomism,' which grew out of his attempt to establish a version of logicism in the philosophy of mathematics, and came to encompass a wide variety of semantic, metaphysical, and epistemological ambitions. The common thread in Russell's approach to these matters consists in his emphasis on logical analysis as a method for clarifying the ontological structure of the world and the epistemological basis of our knowledge of it. As Russell put it, "the atom I wish to arrive at is the atom of logical analysis, not physical analysis" (1918: 37). Bound up with the notion of a logical atom, understood as a basic residue of logical analysis, is the notion of logical structure itself. Our aim in this section is to illuminate Russell's conception of logical structure, or 'logical form,' as it emerges in his theory of linguistic meaning and in his broader atomism.

Russell's theory of 'definite descriptions,' first articulated in his classic paper 'On denoting' (1905), paradigmatically illustrates Russell's methodological reliance on logical analysis in addressing questions about meaning. The argument of the paper involves, among other things, the defense of a principle that Russell regarded as fundamental to the account of linguistic understanding. In The problems of philosophy, Russell gave the following succinct statement of the principle: "Every proposition which we can understand must be composed wholly of constituents with which we are acquainted" (1910: 32). At the time of 'On denoting,' Russell meant by a 'proposition,' roughly, the state of affairs that is expressed by an indicative sentence, whether or not that state of affairs actually obtains. A proposition's 'constituents' are the real-world entities that figure in the state of affairs (or would figure in it, were the state of affairs to obtain). So understood, a proposition is not a linguistic entity, even in the attenuated sense of a Fregean thought.

A proposition is, rather, a structured entity that comprises various nonlinguistic components of the world. What characterizes Russell's principle of acquaintance as a principle of linguistic understanding, then, is not the linguistic nature of propositions, but the correlativity of propositions with the indicative sentences of a language. For Russell, understanding any such sentence requires direct experiential acquaintance with the non-linguistic constituents comprised in the proposition it expresses.

In 'On denoting' Russell addressed problems that the principle of acquaintance ostensibly confronts in connection with 'denoting phrases' – i.e., phrases of the form 'some x,' 'every x,' and especially 'the x' (i.e., so-called 'definite descriptions'). Consider the statement: 'The author of "On denoting" was a pacifist.' Since Russell's principle requires acquaintance with a proposition's constituents as a condition for linguistic understanding, it would seem that only those personally acquainted with the author of 'On denoting' (i.e., with Russell himself) are in a position to understand the sentence. However, this highly counterintuitive consequence only arises on the assumption that the denoting phrase 'the author of "On denoting"' functions as a genuine singular term, one that singles out Russell as a constituent of the corresponding proposition. Russell's account of definite descriptions challenged this assumption by arguing that the characterization of definite descriptions as singular terms arises from a mistaken account of the logical form of sentences containing them. According to this mistaken analysis, the sentence 'The author of "On denoting" was a pacifist' is an instance of the simple subject-predicate form Ps, where s indicates the occurrence of a singular term and P the occurrence of a predicate. Russell maintained that sentences containing definite descriptions have a far richer logical structure than this account would suggest. On Russell's analysis, the statement 'The author of "On denoting" was a pacifist' is not a simple subject-predicate statement but has the form, rather, of a multiply quantified statement:

∃x((x authored 'On denoting' & ∀y(y authored 'On denoting' → y = x)) & x was a pacifist)

On this analysis, the statement says: there is an x such that (1) x authored 'On denoting,' (2) for any y, if y authored 'On denoting,' then y = x (this clause serving to ensure the uniqueness implied by the use of the definite article), and (3) x was a pacifist. So construed, the only nonlogical components of the sentence are the descriptive predicates '( ) authored "On denoting"' and '( ) was a pacifist,' with no trace remaining of the putative singular term 'the author of "On denoting".'
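The Russellian truth conditions – existence, uniqueness, predication – are simple enough to state programmatically. The following Python sketch is our own toy model over an invented domain, not anything found in Russell:

# Russell's analysis of 'The F is G': there is exactly one F, and it is G.
# The domain and the extensions of F and G are invented for illustration.

domain = ["russell", "frege", "moore"]
authored_on_denoting = {"russell"}    # F: ( ) authored 'On denoting'
pacifist = {"russell", "moore"}       # G: ( ) was a pacifist

def the_F_is_G(F, G):
    """True iff some x is F, any y that is F equals x, and x is G."""
    return any(
        x in F
        and all(y == x for y in domain if y in F)   # the uniqueness clause
        and x in G
        for x in domain
    )

print(the_F_is_G(authored_on_denoting, pacifist))  # True
print(the_F_is_G(set(), pacifist))                 # False: no author exists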


Therefore, beyond an implicit understanding of the mechanisms of quantification and the logical relation of identity, acquaintance with the referents of these descriptive predicates suffices for understanding the sentence. The sentence still manages to be about Russell since he, and he alone, satisfies the descriptive predicates (or 'propositional functions,' in Russell's terminology) contained in the sentence. However, it no longer singles out Russell as a constituent of the corresponding proposition, thus dispensing with the worry that the principle of acquaintance would require personal acquaintance with Russell as a condition for understanding what the sentence means.

The theory of definite descriptions vividly conveys the sense in which, for Russell, the surface grammar of natural language is inadequate as a guide to the analysis of logical form. Indeed, Russell maintained that many of the metaphysical and epistemological perplexities of traditional philosophy were a direct result of conflating the grammatical forms of natural language sentences with logical forms of the propositions we manage to express in natural language. In this connection, it is important to recognize that, for Russell, logical form is not a purely linguistic notion. We have already taken note of the fact that Russell's early philosophy treats propositions as structured complexes that array various non-linguistic components of reality. Though Russell ultimately abandoned his early theory of propositions, he never abandoned the view that reality itself exhibits varieties of structure to which the details of a suitably perspicuous logical language must answer.

In his lectures on The philosophy of logical atomism (1918), this view takes the form of a doctrine of 'facts,' where facts are understood as real-world complexes of individual objects and the properties and relations predicable of them. On Russell's characterization, a fact is a kind of complex that is inherently apt to determine a corresponding indicative statement as true or false – that is, true when the statement affirms the fact, and false when it denies it. The kernel of Russell's atomism consists in the view that the content of any statement (of whatever order of complexity) is ultimately analyzable in terms of the constellation of logically primitive facts that determine the statement as true or false. Russell's inventory of such facts includes 'atomic facts,' in which properties and relations are predicated of metaphysically 'simple' entities, and 'general facts,' which are facts concerning all or some of a particular category of entity. Atomic facts correspond to the atomic sentences, and general facts to the quantified sentences, of a logically regimented language. All other sentences are 'molecular' in the sense that they are compounds built up from atomic sentences and quantified sentences by the application of logical connectives such as 'not,' 'and,' 'or,' and 'if . . . then . . . .'

Though molecular sentences assert neither atomic nor general facts, their truth or falsity is nevertheless dependent upon such facts in the sense that a molecular sentence will be determined as true or false as a function of the truth or falsity of its non-molecular subsentences. For example, if 'p' and 'q' are atomic sentences, then the disjunction 'p or q' will be true just in case one or both of 'p' and 'q' are true, where the truth or falsity of these subsentences is determined directly by the atomic facts to which they relate.

Russell's metaphysical view of logical form – that is, his view that logical structure is an inherent characteristic of the facts that the real world ultimately comprises – is nicely expressed in a comment from his Introduction to mathematical philosophy. There Russell maintains that "logic is concerned with the real world just as truly as zoology, though with its more abstract and general features" (1919: 169). At least part of Russell's motivation for this 'substantive' conception of logic consists in his abiding conviction that the structure of language (or of an ideal language, at any rate) and the structure of the world must in some way coincide if there is to be any prospect of expressing our knowledge of the world by linguistic means. The task of giving a detailed account of this community of form between language and world was one that Russell wrestled with many times over, but which he ultimately left to the talents of his most gifted student, Ludwig Wittgenstein – the second great exponent of the philosophy of logical atomism.

Wittgenstein on Logic and Language

Of the figures we are discussing, arguably Wittgenstein (1889–1951) addressed the question of the relation between logic and language most extensively. His earliest major work, the Tractatus logico-philosophicus, devotes much attention to this problem. It is on this early work that we will focus here. In this work he praised Russell for discovering that the apparent logical form of a proposition need not be its real logical form. He supplemented Russell's view with the claim that the real form of a proposition is a picture of a state of affairs in the world. Propositions, according to the Tractatus, are pictures of facts. The structure of a proposition mirrors the structure of the fact it represents. What a fact and the proposition that describes it have in common is their form. "The picture, however, cannot represent its form of representation; it shows it forth" (2.172).

Here we see the important Tractarian distinction between saying and showing. The statement 'it is now raining' says something about the world. The statement 'it is either the case that it is now raining or it is not the case that it is now raining' says nothing about the world.


It does, however, show the logical relations between facts. If something can be shown it cannot be said (4.1212). It follows that nothing concerning the logical relations between facts can be said.

According to the Tractatus, "the world is the totality of facts, not of things" (1.1). That is to say, the world is not completely described by a list of all the objects that it contains. Rather, a complete description of the world would consist of all true sentences. Facts can either be atomic or compound, and correspondingly there are two types of propositions. Atomic facts are the most basic type of fact, and all atomic facts are independent of one another. Likewise, any possible set of atomic propositions could be true at the same time. This does not hold generally, as p and ¬p (it is not the case that p) cannot both be true at the same time.

Compound propositions are built up by truth functions on atomic propositions. Any operator, including the logical operators (and, or, not, . . .), that takes sentences as arguments and assigns a truth value to the compound expression based only on the truth value of the arguments is called a truth-functional operator. For instance, 'or' designates a truth function with two argument places: the truth value of the sentence 'p or q' depends only on the truth value of the sentences 'p' and 'q.' On the other hand, in the sentence 'Julius Caesar conquered Gaul before Rome fell to barbarians,' '. . . before . . .' designates a function that takes sentences as arguments, but it is not truth functional, since we need to know more than the truth value of the arguments to determine the truth value of the compound. Wittgenstein observed that all propositions are either atomic or built up by truth functions on atomic propositions. Because of this, all propositions can be expressed as a truth function on a set of atomic propositions. Statements such as all of those of the form 'p or ¬p' are tautologies: they are true no matter what the truth value of the constituents. We can know for certain that a tautology is true, but this is only because tautologies are true independently of which atomic facts turn out to be true (and because all sentences are truth functions of atomic sentences). We cannot say that the world has a certain logical structure; this can only be shown. It is tautologies that show the logical syntax of language, but tautologies say nothing. "Logical propositions describe the scaffolding of the world, or rather they present it. They 'treat' of nothing" (6.124).
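The Tractarian notions of truth function and tautology lend themselves to a mechanical check. The Python sketch below is our illustration, not Wittgenstein's notation; it tests whether a compound comes out true on every assignment of truth values to its atoms:

# A compound proposition is a truth function of its atoms; a tautology
# is true under every assignment of truth values to those atoms.

from itertools import product

def is_tautology(truth_function, n_atoms):
    """Evaluate the compound on all 2**n_atoms assignments."""
    return all(truth_function(*vals)
               for vals in product([True, False], repeat=n_atoms))

print(is_tautology(lambda p: p or not p, 1))  # True: 'p or not-p' says nothing
print(is_tautology(lambda p, q: p or q, 2))   # False: 'p or q' says something
# (p and (p implies q)) implies q -- modus ponens comes out a tautology:
print(is_tautology(lambda p, q: not (p and (not p or q)) or q, 2))  # True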

Concerning sentences of natural language, Wittgenstein thought that no serious reconstruction is necessary. "In fact, all the propositions of our everyday language, just as they stand, are in perfect logical order" (5.5563). Furthermore, Wittgenstein thought that what he says about the logical structure of language must already be known by anyone who can understand the language. "If we know on purely logical grounds that there must be elementary propositions, then everyone who understands propositions in their unanalysed form must know it" (5.5562). Remember that Wittgenstein, following Russell, distinguished between the apparent logical form of a proposition and its real logical form. The logical form of natural languages is extremely complex and shrouded in conventions. "Man possesses the capacity of constructing languages, in which every sense can be expressed, without having an idea of how and what each word means – just as one speaks without knowing how the single sounds are produced. Colloquial language is part of the human organism and is no less complicated than it. From it it is humanly impossible to gather immediately the logic of language" (4.002).

While ordinary use of natural language is in perfect logical order, philosophy arises from the abuse of natural language. Wittgenstein thinks that philosophy is nonsense because it attempts to state what cannot be said. "Most propositions and questions, that have been written about philosophical matters, are not false, but senseless" (4.003). The view that philosophy as standardly practiced is senseless, and that therefore a radically new approach to philosophy must be developed, had a profound influence on a group of philosophers who held weekly meetings in Vienna – the Vienna Circle.

Carnap and the Vienna Circle

Rudolf Carnap (1891–1970) is generally regarded as the most influential member of the Vienna circle. This group (often called the logical positivists or logical empiricists) studied Wittgenstein's Tractatus carefully, and much of what they wrote was inspired by or was a reaction to this work. Logical positivism is often thought of as being characterized by its commitment to verificationism. In its strictest form, verificationism is the view that the meaning of a sentence consists in the method of its verification – that is, in the epistemic conditions under which the statement would properly be acknowledged as true. In a less strict form, it is the view that the meaning of a sentence consists of what would count as evidence for or against it. There was much debate in the circle as to what form the verificationist principle should take. There was in the circle very little objection (Gödel being the notable exception) to the thesis that there are two different kinds of statements – empirical (synthetic) and logico-mathematical (analytic) statements. Concerning empirical statements, their meaning is given by what would amount to a verification (or confirmation, on a less strict view) of the statement or its negation.


Concerning logico-mathematical statements, the circle was much influenced by Wittgenstein's view that tautologies are a priori truths – truths that are knowable independently of experience because they say nothing concerning the state of the empirical world. What Wittgenstein counted as a logical truth (a tautology) was not sufficiently broad to include all of mathematics. Since mathematical truths are not empirical assertions, members of the Vienna circle thought they should have the same status as other logical truths. Carnap undertook to broaden the definition of logical truth so as to include all mathematical statements. To do this Carnap had to answer the question of what makes something a logical truth. Carnap's answer to this question involved the adoption of a strong form of conventionalism, which he expressed in terms of his famous 'principle of tolerance': "In logic, there are no morals. Everyone is at liberty to build up his own logic, i.e. his own form of language, as he wishes. All that is asked of him is that, if he wishes to discuss it, he must state his methods clearly, and give syntactic rules instead of philosophical arguments" (The logical syntax of language, §17). This principle states that logical truth is a matter of convention. Which statements are treated as belonging to the set of analytic statements is a matter of pragmatic decision, provided that the set can be clearly defined. There is, for Carnap, no logical structure of the world that is either rightly or wrongly captured by our choice of logic. Logical relationships between sentences are a matter of stipulation on our part. However, by classifying logical statements as analytic, and therefore independent of empirical circumstances, Carnap preserves the Wittgensteinian idea that logical truths say nothing about the world. Carnap later summarized his position on logico-mathematical truth by claiming that analytic statements are true in virtue of their meaning.

Carnap's principle of tolerance was inspired by the debates concerning the foundations of mathematics in the 1920s. One party in this debate was the intuitionists, who did not believe that we have grounds to assert a mathematical sentence of the form 'p or ¬p' unless we have a proof either of p or of ¬p. According to classical logic, the sentence 'p or ¬p' is a tautology; it therefore stands in no need of prior justification. Intuitionists therefore needed to abandon classical logic in favor of a logic that would not count all instances of the law of the excluded middle (p or ¬p) as valid. Carnap saw both classical and intuitionistic logic as well motivated, and saw nothing that could decide between the two. He therefore saw the decision of which logic to adopt as a matter of choice.

famous incompleteness theorems made use of a technique that has since become known as Gödel numbering. By Gödel numbering we assign code numbers to expressions of the language. Through this coding technique, a language capable of expressing arithmetical properties becomes a device for discussing certain syntactic properties of any language system. Carnap saw this as a refutation of Wittgenstein's idea that the logical syntax of language is inexpressible. In fact, one of the general goals of Carnap's The logical syntax of language was to show that it is possible to deal in a clear systematic manner with the syntactic properties of any language. Recall that for Wittgenstein, we cannot say anything concerning the logical syntax of language. Carnap's logical tolerance led him to assert that even statements of the form (∃x)Px, which assert the existence of an object with the property P, might be true by stipulation. That we could stipulate an object into existence seemed odd to many philosophers. In order to address this worry, Carnap formulated a distinction between internal and external questions of existence. In the language system of arithmetic it is provable that (∃x)(7 < x < 9). Relative to this language, the question of the existence of numbers is trivial. But when someone asks whether numbers exist, they do not mean to be asking the question in such a manner that it is answerable by appeal to the standards of proof and disproof that prevail in arithmetic. Rather, they mean to ask if the numbers really exist in some absolute sense. Carnap viewed such 'external' questions as unanswerable, given that they remove the question from a context in which there are clear standards for addressing it, without embedding it in another context where there are such standards. But the coherence of the language system that includes, for instance, the numbers does not depend on a positive answer to the external question of the existence of numbers. In this way, not only the logical structure of the world but its ontology as well becomes a matter of convention.

Quine: the Thesis of Gradualism

W. V. O. Quine (1908–2000) began his philosophical career as a self-described disciple of Carnap's. However, from his earliest interaction with Carnap, Quine questioned Carnap's strict division between analytic and synthetic sentences. This early reaction to Carnap's work grew into a major break between the two philosophers. Recall that the analytic/synthetic distinction divides sentences (respectively) into those that are accepted by stipulation and are true in virtue of meaning, and those that concern the world and are capable of being empirically confirmed. Quine
thought that this difference in kind ought to be replaced with a difference of degree. This is known as the thesis of gradualism. For Quine our knowledge forms a structure like a web. The nodes of the web are sentences and the links between nodes are entailment relations. Only at the periphery are our decisions to accept or reject sentences directly influenced by experience. Decisions over sentences closer to the center are of an increasingly 'theoretical' character, with accepted logical and mathematical statements forming the most central class. The ordering of sentences from periphery to interior is based on how willing we would be to abandon a sentence when revising our beliefs in light of new evidence. For sentences like 'this table is red' we can easily imagine a set of experiences that would lead us to abandon it. By contrast, it is far more difficult to imagine the experiences that would lead us to abandon '2 + 2 = 4.' Abandoning this statement would entail a far more radical change in our overall belief system. However, for Quine, the difference is one of degree, rather than kind. No sentence, mathematical, logical or otherwise, is ultimately immune from revision in light of experience. Physics, for instance, in departing from Euclidean geometry has abandoned sentences such as 'between any two points exactly one straight line can be drawn,' once believed to be on the firmest foundation. We have seen that, for Carnap, logico-mathematical truths are not responsible to any aspect of the world. We are perfectly free to accept any set of sentences to count as analytic. The way the world is affects only the practical utility of our choices of analytic statements; it does not affect the theoretical legitimacy of those choices. (However, Carnap is far more interested in giving reconstructions of existing notions instead of constructing arbitrary systems.) For Quine, on the other hand, logical and mathematical truths are on a par with highly theoretical statements of physics. It may turn out that by abandoning classical logic or altering our mathematics we will be able to formulate simpler scientific theories. Since simplicity is one of the norms of theory choice, it may be that our best scientific theory does not conform to the laws of classical logic. Carnap's principle of tolerance suggests that logical truths are true by virtue of the meanings assigned to the logical vocabulary. Quine rejects this view and sees logical truths as subject to the same standards of acceptance as any other scientific claim. Since there are certain sets of experiences that would lead us to reject what we now regard as a logical truth, Quine maintained that we could no longer hold, as Wittgenstein did, that logical truths are true

independently of how things happen to be in the empirical world. Logical truths therefore lose their special status and become statements on a par with other scientific claims. They are true because they are part of our best description of the world.

See also: Aristotle and Linguistics; Boole and Algebraic Semantics; Constants and Variables; Definite and Indefinite; Dynamic Semantics; Game-theoretical Semantics; Logical and Linguistic Notation; Logical Consequence; Multivalued Logics; Philosophical Theories of Meaning; Propositional and Predicate Logic; Propositions; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Semantic Value; Sense and Reference; Truth Conditional Semantics and Meaning.

Bibliography

Boole G (1854). The laws of thought. [Reprinted New York: Dover, 1958.]
Carnap R (1937). The logical syntax of language. Amethe Smeaton (trans.). New Jersey: Littlefield Adams, 1959.
Carnap R (1950). 'Empiricism, semantics and ontology.' In Sarkar S (ed.). 1996.
Frege G (1879). 'Begriffsschrift: a formula language modeled upon that of arithmetic, for pure thought.' Bauer-Mengelberg S (trans.). In van Heijenoort J (ed.) From Frege to Gödel: a source book in mathematical logic. Cambridge, MA: Harvard University Press, 1976.
Frege G (1892). 'On sense and reference.' Black M (trans.). In Black M & Geach P T (eds.) Translations from the philosophical writings of Gottlob Frege, 2nd edn. Oxford: Basil Blackwell, 1960.
Frege G (1893). Grundgesetze der Arithmetik (vol. 1). Jena: Verlag Hermann Pohle. [Partially translated as Furth M, The basic laws of arithmetic. Berkeley: University of California Press, 1964.]
Leibniz G W (1696). 'Letter to Gabriel Wagner on the value of logic.' Loemker D L (ed. & trans.). In Philosophical papers and letters, 2nd edn. Dordrecht & Boston: Reidel Publishing, 1976.
Leibniz G W (1679). 'Elements of a calculus.' In Parkinson G H R (ed. & trans.). 1966.
Leibniz G W (1679/1686). 'A specimen of the universal calculus.' In Parkinson G H R (ed. & trans.). 1966.
Parkinson G H R (ed. & trans.) (1966). Logical papers. Oxford: Clarendon Press.
Quine W V (1951). 'Two dogmas of empiricism.' In Sarkar S (ed.). 1996.
Quine W V (1969). 'Epistemology naturalized.' In Ontological relativity and other essays. New York: Columbia University Press.
Russell B (1905). 'On denoting.' [Reprinted in Marsh R C (ed.) Logic and knowledge. London: Unwin, 1988.]
Russell B (1912). The problems of philosophy. [Reprinted London: Oxford University Press, 1986.]
Russell B (1918). The philosophy of logical atomism. Pears D (ed.). La Salle, Illinois: Open Court, 1972.
Russell B (1919). Introduction to mathematical philosophy. New York: Clarion, 1971.
Sarkar S (ed.) (1996). Science and philosophy in the twentieth century, vol. 5. New York: Garland.
Wittgenstein L (1918). Tractatus logico-philosophicus. Ogden C K (trans.). London: RKP, 1988.

Logical and Linguistic Notation

J Lawler, University of Michigan, Ann Arbor, MI, USA
© 2006 Elsevier Ltd. All rights reserved.

Notation is a conventional written system for encoding a formal axiomatic system. Notation governs:

• the rules for assignment of written symbols to elements of the axiomatic system;
• the writing and interpretation rules for well-formed formulae in the axiomatic system;
• the derived writing and interpretation rules for representing transformations of formulae, in accordance with the rules of deduction in the axiomatic system.

All formal systems impose notational conventions on the forms. Just as in natural language, to some extent such conventions are matters of style and politics, even defining group affiliation. Thus, notational conventions display sociolinguistic variation; alternate conventions are often in competing use, though there is usually substantial agreement on a 'classical' core notation taught to neophytes. This article is about notational conventions in formal logic, which is (in the view of most mathematicians) that branch of mathematics (logicians, by contrast, tend to think of mathematics as a branch of logic; both metaphors are correct, in the appropriate formal axiomatic system) most concerned with many questions that arise in natural language, e.g., questions of meaning, syntax, predication, well-formedness, and – for our purposes, the most important such – detailed, precise specification. Specification is the purpose of notation, both in mathematics and in science, but such precise conventions are unavoidably context sensitive. Thus, the use of logical notation is different in logic and in linguistics. Bochenski (1948; English translation 1960) is still the best short introduction to logical notation. Almost all logical notation is modern, dating from the last century and a half. However, there is some prior work that deserves comment here, because logic, alone of all mathematical fields, was widely studied and significantly developed in the European Middle Ages.

The principal concern of medieval logicians was the syllogism. By the time of the Renaissance, there was an extensive and thorough account of syllogistic. One of its major achievements was the development of systematic names for the modes of the syllogism. These names (as conventionally grouped: Barbara, Celarent, Darii, Ferio, Barbarix, Feraxo; Cesare, Festino, Camestres, Baroco, Camestrop, Cesarox; Darapti, Disamis, Datisi, Felapton, Bocardo, Ferison; Bramantip, Camenes, Dimaris, Fesapo, Fresison, Camenop) constitute the first real notational convention in logic. The names are mnemonic, designed to be chanted, like Pāṇini's rules. The three vowels in each name are the letters A, E, I, O, which mark the vertices of the Square of Opposition, indicating the proposition type (respectively, Universal Affirmative, Universal Negative, Existential Affirmative, and Existential Negative) of each of the three propositions of the syllogism. The letters s, p, m, and c also have specific meanings in these mnemonics, summarizing relevant logical properties of each type, thus serving the notational goal of detailed, precise specification. Syllogistic is largely of historical interest in modern logic, but its concerns, terminology, and notation continued to be used and understood until well into the development of modern logic.

In modern logic and mathematics, notation is a necessary part of a calculus, one of a number of special sets of formalized concepts and techniques for manipulating them. The metaphor refers to the origins of classical calculation, which was performed with pebbles (Latin calculus) on a counting table or abacus. This metaphor licenses notational practice with formal systems, which is to:

• encode: represent parts (hopefully, natural parts) of a quantity, concept, or truth with symbols, then
• calculate: push those symbols around in conventionally accepted fashions, hoping thereby to
• decode: find in the changed symbolic patterns representations of previously unknown quantities, concepts, or truths.
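The vowel scheme of the syllogism names described above can itself be decoded mechanically, which makes it a convenient first example of notation in action. A minimal sketch (Python is used purely for exposition here and below; it is of course no part of the medieval notation):

```python
# The three vowels of a syllogism name give the A/E/I/O types
# (the Square of Opposition) of its three propositions.
def mood(name):
    return [ch for ch in name.upper() if ch in 'AEIO'][:3]

assert mood('Barbara') == ['A', 'A', 'A']   # three Universal Affirmatives
assert mood('Celarent') == ['E', 'A', 'E']
assert mood('Ferio') == ['E', 'I', 'O']
```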


Calculi are an invention of the 17th century; the best known are Leibniz's integral calculus and Newton's differential calculus, which (together) are understood as the default meaning of calculus in modern English. There are many calculi in modern mathematics, some of which exist in name only, such as Leibniz's putative calculus ratiocinator. In symbolic logic, which is the closest thing to what Leibniz called for, the two most important calculi, each with its own notational conventions, are Propositional Calculus and Predicate Calculus, both of which were originally intended straightforwardly (as the titles of their original publications show) as tools for the representation of human thought. Language did not enter into the picture at first, except as a transparent expression of thought.

Propositional Calculus

Symbolic logic, and its notation, originated in the works of George Boole (1815–1864), of which Boole (1854) is the best known. Boole's intention was to produce an algebraic account of propositions as combined via what we have come to call Boolean connectors, principally (logical) and, or, not, equivalent, and implies, which then achieved the dual status of both English words that are prominent in logical discussion and mathematically defined functors. Boole first made explicit the alternation between logical and and or enshrined in DeMorgan's Laws, comparing them (respectively) to multiplication and addition in algebra; thus he represented x and y as xy, whereas x or y was x + y; he used numbers throughout, using 1 − x to represent not x, for instance. This notational convention, and the system based on it, is often called Boolean Algebra (though technically it is a complemented distributive lattice, not an algebra). Boole's logic did not use quantifiers per se; instead, he dealt with the quantification inherent in syllogistic by using the traditional letters A, E, I, O. Propositional calculus, the calculus of arbitrary whole propositions without regard to their predicates or arguments, uses two major notations. One, usually called 'Classical' or 'Standard,' exists in numerous individual variations and is usually the one taught to students; the other, called 'Polish,' 'Łukasiewicz,' or 'Prefix,' is standardized and in widespread use. Classical notation for propositional calculus uses lowercase letters for propositions (traditionally p, q, r, s) and special symbols for their connectives. The two truth values of a proposition are usually either T and F, or 1 and 0. In ternary logics, T / F / # is more common than numeric codes, because arithmetic systems like 1 / 0 / −1 or 1 / ½ / 0 make implicit combinatorial claims.
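Boole's arithmetical encoding can be tried out directly on the values 1 and 0. A minimal sketch, with one liberty flagged: Boole restricted x + y to mutually exclusive terms, so inclusive or is rendered below by the later standard form x + y − xy, which keeps every value in {0, 1}:

```python
def NOT(x):    return 1 - x          # Boole's 1 - x
def AND(x, y): return x * y          # Boole's xy
def OR(x, y):  return x + y - x * y  # inclusive or; values stay in {0, 1}

# DeMorgan's Laws hold under this encoding for every 0/1 assignment.
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
        assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))
```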

The Classical special symbols for functors include those shown in Table 1. In each case, the first symbol is the most widely accepted. In addition to functors, propositional logic also contains symbols for pragmatic connectives used in proofs, such as entailment, which usually uses a single arrow (→), and assertion, which uses a variety of symbols, including ⊢. There are also parentheses, because grouping of formulae can introduce significant ambiguity, which is anathema in logic. In extended use, parentheses were found to be burdensome, because balancing them was a frequent source of avoidable error. To combat this, Whitehead and Russell, in their monumental Principia mathematica (1910–13), developed a special parenthesis-free notation to augment their Classical formulae, based on using groups of 1, 2, 3, ... dots to separate propositions. This version is rarely seen today. Polish notation was developed and popularized by Jan Łukasiewicz (1878–1956) in the early 1920s as a by-product of his development of ternary logic, for which he also invented the truth table. In this notation, propositions are again represented by lowercase letters, but functors are uppercase letters placed immediately before their argument(s): not p is Np, p and q is Kpq, p or q is Apq, p implies q is Cpq, and p is equivalent to q is Epq. Because functors form valid propositions, these can be nested indefinitely without recourse to parentheses; for instance, DeMorgan's Laws, which are stated in classical notation as ¬(p ∧ q) ≡ ¬p ∨ ¬q and ¬(p ∨ q) ≡ ¬p ∧ ¬q, are stated in Polish notation respectively as ENKpqANpNq and ENApqKNpNq. Because the prefixal position of the Polish functors is arbitrary, a postfixal variant, called Reverse Polish Notation, or RPN (linguists note that it should be called 'Japanese Notation,' because it acts like a standard SOV language), is equally valid and is widely used in computing circles, because it turns out to be ideally adapted to performing calculations using a pushdown stack. In RPN, DeMorgan's Laws are stated as pqKNpNqNAE and pqANpNqNKE.
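Because each Polish functor announces its own arity, a formula can be evaluated in a single left-to-right recursive descent, with no bracket matching at all. A minimal sketch (an illustration, not a standard library), which also confirms that the DeMorgan formula given above is a tautology:

```python
def eval_polish(formula, values):
    """Evaluate a Lukasiewicz-notation formula: N = not, K = and,
    A = or, C = implies, E = equivalent; lowercase = variables."""
    def parse(i):
        ch = formula[i]
        if ch == 'N':
            v, j = parse(i + 1)
            return (not v), j
        if ch in 'KACE':
            left, j = parse(i + 1)
            right, k = parse(j)
            if ch == 'K': return (left and right), k
            if ch == 'A': return (left or right), k
            if ch == 'C': return ((not left) or right), k
            return (left == right), k   # 'E'
        return values[ch], i + 1        # a propositional variable
    value, end = parse(0)
    assert end == len(formula), 'ill-formed formula'
    return value

# ENKpqANpNq comes out true under every assignment:
for p in (False, True):
    for q in (False, True):
        assert eval_polish('ENKpqANpNq', {'p': p, 'q': q})
```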

Table 1 Classical special symbols for functors

not p: ¬p, −p, ~p, Np
p and q: p ∧ q, p · q, p & q, pq
p or q: p ∨ q
p implies q: p ⊃ q, p → q, p ⇒ q
p is equivalent to q: p ≡ q, p ↔ q, p ⇔ q

Modal Logic, an extension of propositional calculus into modality, introduces two more common notational symbols: ◇p for p is possibly true (in Polish notation Mp, for Möglich), and □p for p is necessarily true (Polish Lp, for Logisch). DeMorgan's Laws for modal logic (where □ is associated with ∧ and ◇ with ∨) can thus be stated ¬□p ≡ ◇¬p (Polish ENLpMNp) and ¬◇p ≡ □¬p (Polish ENMpLNp).
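These dualities can be checked pointwise in possible-worlds semantics, where □p holds at a world if p holds at every accessible world and ◇p if p holds at some accessible world. A minimal sketch (the three-world model is an arbitrary invention for illustration):

```python
worlds = {0, 1, 2}
access = {0: {0, 1}, 1: {2}, 2: set()}  # an arbitrary accessibility relation
p_at = {1}                              # p holds only at world 1

def box_p(w):      # 'necessarily p' at w
    return all(v in p_at for v in access[w])

def diamond_p(w):  # 'possibly p' at w
    return any(v in p_at for v in access[w])

for w in worlds:
    # not-box-p is equivalent to diamond-not-p, and dually:
    assert (not box_p(w)) == any(v not in p_at for v in access[w])
    assert (not diamond_p(w)) == all(v not in p_at for v in access[w])
```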

Predicate Calculus

Quantified Predicate Calculus (both first- and second-order) was first axiomatized and used notationally by Gottlob Frege (1848–1925) in 1879, a quarter-century after Boole. In predicate calculus, the atomic proposition of propositional calculus is split into predicate and argument(s), allowing far more representation of actual natural language phenomena. To represent predication, Frege introduced the now-standard functional notation, widely used in mathematics. In this notation, an atomic proposition p could now be seen to consist of a predicate (typically using uppercase letters) operating on arguments expressed by following parenthesized variables, in the same way as a mathematical function like f(x) = x²; e.g., TALL(x) = X is tall, SEE(x, y) = X sees Y, and GIVE(x, y, z) = X gives Y to Z. In particular, quantifiers were separated by Frege for the first time from their traditional Aristotelian A, E, I, O notation. Quantifiers in natural language are specialized words that often involve special syntax; normally they appear in construction with some noun, which they are said to bind. However, their syntax varies widely, and quantifier ambiguities are frequent. Modern logic admits what McCawley (1993) calls "the logicians' favorite quantifiers": the existential quantifier, ∃x, pronounced 'for some x' or 'there exists an x,' and the universal quantifier, ∀x, pronounced 'for all/every/each x.' The x's in each case are dummy variables; they do no more than indicate which variable in the proposition following is to be considered bound by the quantifier. Quantifiers are rigidly controlled in the formulae in order to avoid ambiguity (and indeed to allow natural ambiguities to be explicated). They are placed before the formula containing the variable they bind, and

their relative placement serves to denote the concept of scope, which is highly relevant to the three natural language elements represented in logic by operators (i.e., quantification, negation, and modality), all of which govern scope phenomena like Negative Polarity. Thus, the two ambiguous readings of A boy beat every girl at tennis are represented by (∃x)(∀y) BEAT(x, y) and (∀y)(∃x) BEAT(x, y). Naturally, there are variations in quantifier notation as well: a formula like DeMorgan's Laws for quantifiers, which can be written ENPxφxSxNφx and ENSxφxPxNφx [Px is (∀x) and Sx is (∃x)] in Polish notation, comes out as ¬(∀x)φ(x) ≡ (∃x)¬φ(x) and ¬(∃x)φ(x) ≡ (∀x)¬φ(x) in Classical notation, which also admits a simple parenthesized variable (x) instead of (∀x), and also one with a circumflex (ŷ) instead of (∃y), in the appropriate position. The use of parentheses, colons, brackets, and other punctuation with quantifiers is inconsistent and follows individual style, which is usually oriented toward scope delimitation.

See also: Aristotle and Linguistics; Assertion; Constants and Variables; Counterfactuals; Formal Semantics; Generative Semantics; Logic and Language; Multivalued Logics; Number; Philosophical Theories of Meaning; Propositional and Predicate Logic; Quantifiers; Scope and Binding; Truth Conditional Semantics and Meaning.

Bibliography

Bochenski I M, O P (1948). Précis de logique mathématique. Bussum, Netherlands: F. G. Kroonder. [English translation (1960) by Otto Bird, A précis of mathematical logic. Dordrecht: Reidel.]
Boole G (1854). An investigation of the laws of thought, on which are founded the mathematical theories of logic and probabilities. London: Walton and Maberley.
Frege G (1879). Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: L. Nebert.
McCawley J D (1993). Everything that linguists have always wanted to know about logic (but were ashamed to ask) (2nd edn.). Chicago: University of Chicago Press.


Logical Consequence

P Blanchette, University of Notre Dame, Notre Dame, IN, USA
© 2006 Elsevier Ltd. All rights reserved.

Fundamentals

Logical consequence is the relation that holds between the premises and conclusion of an argument when the conclusion follows from the premises, and does so for purely logical reasons. When a conclusion is a logical consequence of premises, the truth of those premises suffices to guarantee the truth of the conclusion. To clarify, we'll look at some examples. When we reason that

(A1) Socrates is mortal

follows from (A2) Socrates is human

and (A3) All humans are mortal,

we need not appeal to any known facts about Socrates, about humanity, or about mortality. These specifics are irrelevant to the fact that (A1) follows from (A2) and (A3), which shows that the sense of ‘following-from’ involved here is the purely logical sense. That is, (A1) is a logical consequence of (A2) and (A3). By contrast, when we reason that (B1) There are mammals in the ocean

follows from (B2) There are dolphins in the ocean

we must appeal to facts peculiar to the nature of mammals and of dolphins. We appeal, specifically, to the fact that dolphins are mammals. In this case, although there is a sense in which (B1) 'follows from' (B2), (B1) does not follow logically from (B2). It follows, one might say, biologically, because an appeal to biological facts is needed to get from (B2) to (B1). Nevertheless, the fact that (B1) follows in this extra-logical way from (B2) is due to the relation of logical consequence. Specifically, it is due to the fact that (B1) is a logical consequence of (B2) together with (B3) All dolphins are mammals.

That (B1) is a logical consequence of (B2) and (B3) can be seen by noting that it follows from them independently of the specific nature of the objects, properties, and relations mentioned in these statements.

In general, all cases of ‘following from’ are due, in this way, to the relation of logical consequence. If a conclusion follows from some collection of premises, this is because that conclusion is a logical consequence of the premises, together perhaps with various ancillary claims that are presupposed in the given context. Logical consequence is therefore a ubiquitous relation: all of our reasoning turns on recognizing (or attempting to recognize) relations of logical consequence, and virtually all of the important connections between theories, claims, predictions, and so on, are in large part due to logical consequence. Furthermore, whenever we say that a given argument is valid or that it is invalid, or that a particular set of claims is consistent or inconsistent, we are employing the notion of logical consequence: a valid argument is one the conclusion of which is a logical consequence of its premises, whereas a consistent set of claims is a collection that has no contradiction as a logical consequence. Because the central logical notions of validity, consistency, etc., are definable in terms of logical consequence, the investigation of the nature of logical consequence is at the same time the investigation of the nature of the logical properties and relations in general.

The Formal Study of Logical Consequence

The modern investigation of logical consequence is closely connected to the discipline of formal logic. Formal logic is the study of formal (i.e., syntactically specified) languages, and of various philosophically and mathematically significant properties and relations definable in terms of such languages. Of particular significance for the study of logical consequence are two kinds of relations definable on the formulas of a formal language, the relations of proof-theoretic consequence and of model-theoretic consequence. Given a formal language, a relation of proof-theoretic consequence is defined via the rigid specification of those sequences of formulas that are to count as proofs. Typically, the specification is given by designating specific formulas as axioms, and designating some rules of inference by means of which formulas are provable one from another. Both axioms and rules of inference are specified entirely syntactically. A proof is then a series of formulas each of which is either taken as premise, or is an axiom, or is obtained by previous formulas in the series via a rule of inference. A formula φ is a proof-theoretic consequence of a set S of formulas if and only if there is a proof the premises of which are
among the members of S, and the conclusion of which is φ. Model-theoretic consequence, by contrast, is defined in terms of a range of interpretations (or models) of the formal language in question. While the vocabulary of the language is divided into the 'logical' terms (typically, analogues of the English-language 'and,' 'or,' 'not,' 'if...then,' and 'for all'), the meaning of which is taken as unchanging, and the 'non-logical' terms (typically analogues of natural-language predicates and singular terms), an interpretation is an assignment of objects and sets of objects to the non-logical terms. In the standard case, the formulas are taken to have a truth-value (i.e., to be either true or false) on each such interpretation. A formula φ is then a model-theoretic consequence of a set S of formulas if and only if there is no interpretation on which each member of S is true while φ is false. The connection between these defined relations and logical consequence arises when the formulas in question are taken to stand as representatives of natural-language sentences or the claims they express. Given such a representation-relationship, the relations of proof-theoretic and of model-theoretic consequence are typically designed so as to mirror, to some extent, the relation of logical consequence. Thus, the idea behind a standard design of a relation of proof-theoretic consequence is that it only count as axioms those formulas representing 'logical truths' (e.g., 'Either 5 is even or 5 is not even'), and that its rules of inference similarly mirror logical principles. In such a case, a formula φ will be a proof-theoretic consequence of a set S of formulas only if the kind of ordinary sentence represented by φ is indeed a logical consequence of the ordinary sentences represented by the members of S. This does not ensure that the relation of proof-theoretic consequence exhausts the relation of logical consequence, for two reasons: first of all, the formal language in question may not contain representatives of all ordinary sentences; second, the proof system may not be rich enough to reflect all of the instances of logical consequence amongst even those ordinary sentences that are represented in the language. The system of proof-theoretic consequence will, however, have the virtue of being well defined and tractable. Similar remarks apply to the relation of model-theoretic consequence: in a well-designed formal language, the relation of model-theoretic consequence will mirror, in important ways, the relation of logical consequence. The intention in designing such a system is, typically, that φ will be a model-theoretic consequence of S if and only if the kind of ordinary sentence represented by φ is a logical consequence of those represented by the members of S.
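For a purely propositional language the model-theoretic definition can be tested exhaustively, since an interpretation is just an assignment of truth-values to the sentence letters. A minimal sketch (representing formulas as Python predicates is an expedient of this illustration, not part of the definition):

```python
from itertools import product

def is_mt_consequence(premises, conclusion, letters):
    """True iff no interpretation makes every premise true
    while making the conclusion false."""
    for combo in product((False, True), repeat=len(letters)):
        v = dict(zip(letters, combo))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# 'p' and 'p implies q' have 'q' as a consequence (modus ponens) ...
assert is_mt_consequence(
    [lambda v: v['p'], lambda v: (not v['p']) or v['q']],
    lambda v: v['q'], ['p', 'q'])
# ... but 'p or q' does not have 'p' as a consequence.
assert not is_mt_consequence(
    [lambda v: v['p'] or v['q']],
    lambda v: v['p'], ['p', 'q'])
```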

Given a particular language together with its proof-theoretic and model-theoretic consequence relations, the question arises whether those relations are coextensive: whether, that is, φ is a proof-theoretic consequence of S if and only if φ is a model-theoretic consequence of S. In some cases, the answer is 'yes,' and, in some, 'no.' Each half of the inclusion is a separate, significant issue: when every proof-theoretic consequence of each set of formulas is also a model-theoretic consequence of that set, the system is said to be sound, and when every model-theoretic consequence of each set of formulas is also a proof-theoretic consequence of that set, the system is said to be complete. The soundness of a system is typically a straightforward matter, following immediately from the design of the proof-theoretic system; completeness is typically a considerably more significant issue. The most important system of logic, that of classical first-order logic, was proven by Kurt Gödel in 1930 to be complete; this is the celebrated 'completeness theorem for first-order logic.' First-order logic is, in various ways, the 'strongest' complete system (see Enderton, 1972). Formal systems, i.e., formal languages together with proof-theoretic or model-theoretic consequence relations, differ from each other in a number of ways. Most important for the purposes of the study of logical consequence are the following two differences: (1) proof-theoretic relations differ over the axioms and rules of inference they include, and hence over the instances of logical consequence that they represent. Some such differences are just because the languages of some such systems are expressively weaker than others, so that principles contained in one simply cannot be expressed in the other. More interesting are differences motivated by differing views of logical consequence itself. Thus, for example, 'classical' logic differs from intuitionist logic in including the principle of excluded middle, the principle guaranteeing the truth of all statements of the form p-or-not-p. As the proponent of intuitionist logic sees it, this principle is not universally accurate, and hence should not be included in a system of logic. (2) Model-theoretic relations differ in a number of small ways, including the specifics of the definition of interpretation, and of the definition of truth-on-an-interpretation. More important, the model-theoretic consequence relations for different systems differ when the formal languages in question are importantly structurally different. Thus, for example, standard second-order logic has a richer model-theoretic consequence relation than does first-order logic, and there are natural-language arguments whose second-order representation yields a conclusion that is a model-theoretic consequence of its premises, but
whose first-order representation does not (see van Dalen, 2001; Shapiro, 1991). The question of the extent to which each such system gives an accurate characterization of logical consequence is of central philosophical concern. With respect to the relations of proof-theoretic consequence, debate turns on the accuracy of specific axioms and rules of inference. With respect to relations of model-theoretic consequence, the significant debate is rather over the question of the extent to which model-theoretic consequence relations in general (or, perhaps, that relation as applied to classical first-order logic) offer an analysis of the ordinary, non-formal relation of logical consequence. If logical consequence is in some sense ‘essentially’ the relation of truth-preservation across interpretations, then model-theoretic consequence has a privileged position as simply a tidied-up version of the core relation of logical consequence. If, by contrast, the relation of truth-preservation across interpretations is simply another sometimes-accurate, sometimes-inaccurate means of representing the extension of the relation of logical consequence, then model-theoretic consequence has no immediate claim to accuracy (see Etchemendy, 1990).

General Philosophical Concerns

In addition to questions surrounding its appropriate formal representation, the investigation of logical consequence includes questions concerning the nature of the relation itself. One important cluster of such questions concerns the relata of the relation. Here we want to know whether the items between which logical consequence holds are, say, the sentences of ordinary language, or the non-linguistic propositions expressed by such sentences, or something else altogether. Although logical consequence is perhaps most straightforwardly viewed as a relation between sentences, one reason to reject this idea is that sentences, at least when thought of as syntactic entities (strings of letters and spaces), seem the wrong kinds of things to bear that relation to one another. Because any given sentence so understood could, under different circumstances, have had a quite different meaning, and would thereby have borne different logical relationships to other sentences, it is arguable that the sentence itself is not the primary bearer of this relation but is, rather, just a means of expression of the primary bearer. This line of reasoning motivates the view of non-linguistic propositions, the kinds of things expressed by (utterances of) fully interpreted sentences, as the relata of logical consequence. The central reason for rejecting this proposal, though, is skepticism about the

existence of such things as non-linguistic propositions. A third option is to take the relata of the logical consequence relation to be sentences-in-use, essentially pairs of sentences and meaning-conferring practices (see Cartwright, 1987; Strawson, 1957; Quine, 1970). The second, related collection of questions concerning logical consequence arises from the inquiry into what makes one thing a logical consequence of others. Here, we are looking for an explanation or an analysis of logical consequence in terms of other, more well-understood notions. One potential answer is that logical consequence is to be explained in terms of the meanings of various specific parts of our vocabulary, specifically in terms of the meanings of the 'logical' words and phrases (see above). A second, not necessarily competing, account is that logical consequence is due to the form, or overall grammatical structure, of the sentences and arguments in question. A third type of answer, mentioned above, is that logical consequence is best explained in terms of model-theoretic consequence. Various of the accounts of logical consequence have been criticized on grounds of circularity: to say that φ's being a logical consequence of S is because of some other relation between φ and S is, arguably, to say that the claim that φ is a logical consequence of S is itself a logical consequence of the purported explanans. If this charge of circularity is accurate, it is arguable that all such explanations of the nature of logical consequence will be found to be circular, with the result that this relation must be taken to be 'primitive,' not capable of reduction to anything else. Part of the debate here will turn on what one takes the nature of explanation to be, and on whether explanation requires reduction (see Quine, 1936). In short: although it generally is agreed that some claims are logical consequences of others, there is scope for important disagreement about (a) which specific claims are in fact logical consequences of which others, (b) how to construe the notion of 'claim' involved here, and (c) how to give a correct account of the nature of the relation of logical consequence. Because of the connections between these issues and general positions in the philosophy of logic, philosophy of mathematics, and philosophy of language, one's preferred answers to the questions noted here will turn in large part on one's position with respect to a host of surrounding topics.

See also: Conditionals; Counterfactuals; Discourse Representation Theory; Dynamic Semantics; Human Reasoning and Language Interpretation; Inference: Abduction, Induction, Deduction; Logic and Language; Logical Form; Modal Logic; Monotonicity and Generalized Quantifiers; Multivalued Logics; Neo-Gricean Pragmatics; Propositional and Predicate Logic; Propositions.

Bibliography

Blanchette P A (2000). 'Models and modality.' Synthese 124(1), 45–72.
Blanchette P A (2001). 'Logical consequence.' In Goble L (ed.) The Blackwell guide to philosophical logic. Malden, MA/Oxford: Blackwell Publishers. 115–135.
Cartwright R (1987). 'Propositions.' In Butler R J (ed.) Analytical philosophy, 1st series. Oxford: Blackwell. [Reprinted in Cartwright R, Philosophical essays. Cambridge and London: MIT Press, 1987. 33–53.]
Enderton H (1972). A mathematical introduction to logic. Orlando, FL: Academic Press.
Etchemendy J (1990). The concept of logical consequence. Cambridge, MA: Harvard University Press. [Reprinted 1999, Stanford: CSLI Publications.]
Goble L (ed.) (2001). The Blackwell guide to philosophical logic. Malden, MA/Oxford: Blackwell Publishers.
Quine W V O (1936). 'Truth by convention.' In Lee O H (ed.) Philosophical essays for A. N. Whitehead. New York: Longmans. [Reprinted in Quine W V O, The ways of paradox and other essays. Cambridge, MA/London: Harvard University Press, 1976. 77–106.]
Quine W V O (1970). Philosophy of logic. Englewood, NJ: Prentice Hall.
Shapiro S (1991). Foundations without foundationalism: a case for second-order logic. Oxford: Oxford University Press.
Strawson P F (1957). 'Propositions, concepts, and logical truths.' Philosophical Quarterly 7. [Reprinted in Strawson P F, Logico-linguistic papers. London: Methuen & Co., 1971. 116–129.]
Tarski A (1936). 'On the concept of logical consequence,' translation of 'O pojęciu wynikania logicznego,' Przegląd Filozoficzny 39, 58–68. [English translation in Logic, semantics, metamathematics (2nd edn.). Woodger J H (trans.) & Corcoran J (ed.). Indianapolis: Hackett Publishing Company, 1983. 409–420.]
van Dalen D (2001). 'Intuitionistic logic.' In Goble L (ed.) The Blackwell guide to philosophical logic. Malden, MA/Oxford: Blackwell Publishers. 224–257.

Logical Form

D Blair, University of Western Ontario, Canada
© 2006 Elsevier Ltd. All rights reserved.

To describe the logical form of some claim is to describe its logically significant properties and structure, showing its connection to other claims via what it entails and what entails it. Given the variety of claims that philosophers have taken an interest in, it is not surprising that there are a large number of theories of logical form. But even if there is no shortage of theories aiming at the logical form of, e.g., propositional attitude sentences or counterfactual conditionals, surprisingly little attention has been given to the prior question of what logical form is to begin with. Just as importantly, it is not clear what it is that is supposed to have a logical form in the first instance. Is it, for example, a linguistic object like a sentence, or the utterance of a sentence, or something different from both of these, such as the proposition expressed by an utterance of a sentence? The presence of logic within the notion of logical form may make one suspicious of paying too much attention to the details of natural language. Other kinds of items seem better suited to having logical forms. For example, propositions have whatever truth conditions they have essentially, whereas sentences do not: 'snow is white' might have meant that

most colorless beverages lack sodium. Further, it is a notorious fact about natural language that it contains a good deal of vagueness and context sensitivity that is hard to capture within a theory of inference. Facts like these have made philosophers wary of placing too much emphasis on natural language sentences. At the very least, one would want to purge natural language of its logically problematic features before building upon it a theory of logical form. This was precisely the reaction of Frege (1952) and Russell (1919) to the defects of natural language. For them, one needed to formulate an ideal language free from the flaws of natural language in order to spell out the content of various claims. Only then could one think about constructing theories of logical form. Frege's Begriffsschrift formulated an ideal language in which to conduct arithmetic and overcame some of the difficulties of explaining inferences involving multiple quantifiers that beset earlier logical theories. But even if having a logically perspicuous representation of the propositional content of an assertion makes it easier to assess how well a theory accords with what is said about, e.g., the good or the propositional attitudes, there are serious questions concerning how such representations are related to the grammatical properties of a sentence. In the hands of Frege and Russell, one simply translated, as best one could, from natural language into an ideal language. These
languages were specifically designed to expedite inference, and so no question arises about their logical forms. But until the last few decades, the kinds of structures required for the purposes of detailing the inferential properties of natural language sentences were thought to be quite remote from anything one might call 'the grammar' of a language. Indeed, one way of motivating talk of logical form was by showing the deficiencies of theories of meaning built upon generalizations of apparent grammatical form and function. A number of developments in the 1960s and 1970s changed this picture. A growing number of philosophers became intrigued with the idea of constructing theories of meaning for natural languages directly. The idea that such a theory could be done systematically stems in large part from the work of Noam Chomsky in the 1950s and 1960s, showing how rigorous theories of grammatical structure were possible. In light of the success of Chomsky's program, it was natural to wonder whether a semantic theory along the lines of his work in syntax could be constructed. The classic picture of the grammatical structure of a sentence involves a series of levels of representation, the most well known of which is the so-called 'T-model.' In this model, there are four 'levels of representation': D-structure, S-structure, LF, and then PF, or the phonological form of a sentence. Since the last item is a representation of a sentence's phonological properties, I leave it aside. Each level is related to the one before via the application of a rule or set of rules. The conception of rules has changed over the years, but the underlying idea is that syntactic structure of a sentence is built up, step by step, through a series of representations, each having its own properties. Diagrammatically, what we have is the following:

D-structure
     |
S-structure
    /   \
  PF     LF

The ‘S-structure’ or surface structure of a sentence is what corresponds, nearly enough, to the order of expressions as heard or written. ‘LF’ or logical form is a syntactic representation that is derived from the S-structure via a set of transformations, just as S-structures were derived from D-structures via transformations. Since only one level of representation seems to correspond to the overt form of a sentence, it follows that a good deal of syntactic structure remains hidden. The idea that unpronounced structure can

be given a grammatical motivation is compelling. Consider the following pair of sentences:

(1) John kissed Mary
(2) Who did John kiss

The leftmost WH-phrase in (2) is intuitively related to the position of 'Mary' in (1). The grammar of English disguises this fact by requiring that unstressed WH-phrases in sentences like (2) be fronted. Were English different in this regard, the parallel would be more obvious. Interestingly, a good many languages allow for just this possibility while others require all WH-phrases to be placed at the left periphery of a sentence. A more perspicuous representation of English would abstract from these kinds of provincial eccentricities of surface form and expose, via a logically perspicuous notation, just these parallels. There is evidence that the grammatical structure of sentences like these in different languages is abstractly identical, i.e., that all WH-phrases are located at the edge of a clause at some level of representation. In some languages, like Russian, this is overtly true, even when there are several WH-phrases in the clause. In other cases, like Chinese, there is little or no movement to the edge of the clausal periphery (see Huang, 1982). The difference between the overt forms of WH-questions then doesn't disguise just the logical or semantic structure of a sentence; it hides the grammatical structure as well. A more articulated version of (2) shows this abstract structure:

(3) [Who1 [did John kiss t1]]

The key idea is that movement of a WH-phrase may occur overtly, as in English, or 'covertly,' as in some cases of French. When the WH-phrase does move, however, what we end up with is (3). The movement of the WH-phrase to its position at the left edge of the clause leaves a record in the form of a 'trace,' notated above as 't.' Structures like (3) resemble, in a rather striking way, the kinds of representations that one finds within first-order logic, in particular with respect to the relationship between a quantificational expression and a variable that it binds. Let's look at this in more detail. It is now commonplace to use examples of scope ambiguities as evidence for the ambiguity of sentences, one to be sorted out in a semantic theory. Thus, a sentence like (4) is ambiguous depending upon whether or not one takes the quantificational phrase 'every boy' to have scope over the subject quantificational phrase 'some girl' or vice versa, i.e., (5a/b):

(4) Some girl danced with every boy

(5a) ∃x: girl(x) [∀y: boy(y) [danced(x,y)]]
(5b) ∀y: boy(y) [∃x: girl(x) [danced(x,y)]]
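The difference between (5a) and (5b) is truth-conditional, and can be made concrete by evaluating both readings over a small finite situation. A minimal sketch (the toy domain and dancing relation are invented for illustration):

```python
girls, boys = {'g1', 'g2'}, {'b1', 'b2'}
danced = {('g1', 'b1'), ('g2', 'b2')}  # every boy got a dance, from different girls

# (5a) some girl danced with every boy: exists-forall
r5a = any(all((g, b) in danced for b in boys) for g in girls)
# (5b) every boy was danced with by some girl: forall-exists
r5b = all(any((g, b) in danced for g in girls) for b in boys)

print(r5a, r5b)  # False True -- the two readings come apart
```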

The usual way of describing this difference is to say that in (5a), 'some girl' has scope over 'every boy,' while in (5b), the opposite relation holds. The scope of the quantifiers is determined by looking at the material appearing to its right, i.e., the closest formula that does not contain the expression within the first-order translation. It turns out that one can define the relevant relation in syntactic terms as well, using the properties of phrase structure. To see this, consider the core syntactic relation of c-command. An expression α c-commands an expression β if and only if the first branching node dominating α dominates β and neither α nor β dominates the other.
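The definition is mechanical enough to compute over a phrase-structure tree. A minimal sketch (the tree, encoded by parent pointers, and its node labels are invented for illustration):

```python
# A tiny tree [S [NP1 ...] [VP V [NP2 ...]]]: NP1 and VP are sisters.
parent = {'S': None, 'NP1': 'S', 'VP': 'S', 'V': 'VP', 'NP2': 'VP'}

def ancestors(n):                     # nearest ancestor first
    out = []
    while parent[n] is not None:
        n = parent[n]
        out.append(n)
    return out

def dominates(a, b):
    return a in ancestors(b)

def branching(n):
    return sum(1 for k in parent if parent[k] == n) > 1

def c_commands(a, b):
    """The first branching node dominating a also dominates b,
    and neither a nor b dominates the other."""
    if dominates(a, b) or dominates(b, a):
        return False
    above = [n for n in ancestors(a) if branching(n)]
    return bool(above) and dominates(above[0], b)

assert c_commands('NP1', 'NP2')      # the subject c-commands the object
assert not c_commands('NP2', 'NP1')  # but not conversely
```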

What is important is that one can use this definition to say something about quantificational scope. Suppose we take quantificational expressions to move to positions from which they c-command their original position:

[ZP XP1 [ZP ... t1 ... ]]

In this case, XP c-commands ZP and everything that is contained in the latter, including the trace of XP. Strikingly, when we look at the structure of (4) when this structure is explicit, we see the kind of structure required for the definition of scope:

(6) [S [QP Some girl]2 [S [QP Every boy]1 [S t2 [VP danced t1]]]]

For the reading of (4) where the scopes of the quantificational NPs are inverted relative to their surface order, ‘every boy’ is adjoined to a position from which it c-commands both ZP and the position to which ‘some girl’ has been adjoined: (7) [S [QP Every boy]1 [S [QP Some girl]2 [S t2 [VP danced t1 ]]]]

Both of these movements can be given more detailed defense; see May (1977). The structures that seem to be needed for semantics and that philosophers have thought were disguised by ordinary grammar really are hidden, although not quite in the way they thought. What is hidden is more syntactic structure. Of course, ‘LF’ is a syntactic level of representation and is not a semantic representation. This is not to suggest, however, that no gain has been made within

theorizing about natural language by incorporating the LF hypothesis. For one could hold that the grammatical structures that are interpreted by the semantic theory are just those provided by a theory of grammar incorporating the LF hypothesis. There is no need to first regiment the formal structures of sentences into something to which semantic rules could then apply. What one finds in the idea of LF is the idea that natural languages already have enough structure to supply a lot of what is needed for the purposes of semantics. Further developments within syntactic theory have made the concept of logical form more prominent. Thus, Chomsky (1995) and others have proposed that the only level of grammatical representation is LF, although the role of LF is likely to change, just as it has in the past (see, e.g., Lasnik, 2001). Even so, it is apparent that progress has been made in joining together two bodies of thinking about language, one rooted in traditional philosophical problems about the representation of logic and inference and the other in more recent developments coming from linguistics. There are limits, however, to how much philosophical work a linguistics-based approach to logical form can do. Recall that one of the problems that has made many philosophers wary of paying too much attention to natural language concerned such things as the context sensitivity of certain aspects of natural language sentences. It is an open question just how to treat different kinds of context sensitivity within natural language, and whether revisions are needed to our conception of logical form in natural language in order to accommodate it. It is also true that a good number of philosophical projects targeting logical form are usually concerned with the conceptual analysis of certain notions, e.g., moral goodness, knowledge, etc. Indeed, one of the traditional roles of logical form within philosophy is to serve as scaffolding for just these sorts of projects. Doubts about the viability of conceptual analysis to one side, this is what has given weight to the claim that 'ordinary language' disguises the logically significant structure of our concepts. But if this is the role that logical form must play if it is to have a role within philosophy, then it is unclear whether the linguistic conception of logical form can wholly supplant the traditional view. The linguistic conception of logical form seemingly has little to do with conceptual analysis. And unless conceptual analysis takes the form of a grammatical analysis, it is unlikely that one can substitute grammatical analysis for the description of the logically significant aspects of our concepts. This is not to deny that a linguistics-based conception of logical form is an important, maybe even
essential part of understanding how to think about some aspects of logic and meaning. This is particularly clear with respect to the study of quantification. But there are many questions about the nature of logical form that need to be resolved before a particular view can be judged to be the most viable.

See also: Anaphora, Cataphora, Exophora, Logophoricity; Inference: Abduction, Induction, Deduction; Interpreted Logical Forms; Propositions; Quantifiers; Truth Conditional Semantics and Meaning.

Bibliography

Chomsky N (1957). Syntactic structures. The Hague: Mouton.
Chomsky N (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky N (1977). 'On WH movement.' In Culicover P, Wasow T & Akmajian A (eds.) Readings in English transformational grammar. Waltham, MA: Ginn. 184–221.
Chomsky N (1995). The minimalist program. Cambridge, MA: MIT Press.
Davidson D (1967). 'Truth and meaning.' Synthese 17, 304–323.
Frege G (1952). Translations from the philosophical writings of Gottlob Frege. Oxford: Blackwell.
Higginbotham J (1993). 'Logical form and grammatical form.' Philosophical Perspectives 7, 173–196.
Huang C T J (1982). 'Move WH in a language without WH movement.' Linguistic Review 1, 369–416.
Lasnik H (2001). 'Derivation and representation in generative grammar.' In Baltin M & Collins C (eds.) Handbook of contemporary syntactic theory. Oxford: Blackwell. 62–88.
Lepore E & Ludwig K (2002). 'What is logical form?' In Preyer G & Peters G (eds.). 54–90.
Ludlow P (2002). 'LF and natural logic.' In Preyer G & Peters G (eds.). 132–168.
May R (1977). 'The grammar of quantification.' Ph.D. diss., MIT.
Neale S (1994). 'Logical form and LF.' In Otero C (ed.) Noam Chomsky: critical assessments. London: Routledge. 788–838.
Preyer G & Peters G (eds.) (2002). Logical form and language. Oxford: Oxford University Press.
Russell B (1919). Introduction to mathematical philosophy. London: George Allen and Unwin.
Williams E (1983). 'Syntactic and semantic categories.' Linguistics and Philosophy 6, 423–446.


M

Mass Expressions

H Bunt, Katholieke Universiteit Brabant, Tilburg, The Netherlands
© 2006 Elsevier Ltd. All rights reserved.

By 'mass expressions' one usually means expressions formed with so-called mass nouns, words like water, rice, poetry, and garbage, which differ morphologically and syntactically from count nouns, like book, apple, chair, and house, in that they do not have both a singular and a plural form and differ in the possible combinations with numerals, determiners, and adjectives. In particular, mass nouns do not admit combination with numerals, with the indefinite singular article, and with a range of quantifiers: *a water, *both rice, *five poetry, *many garbage, *several music. Count nouns, on the other hand, do not combine well with certain quantifying adjectives, such as English much and little; Spanish mucho and poco; or Danish meget and lidt, which combine only with mass nouns. In contrast with count nouns, mass nouns allow the formation of determinerless singular noun phrases: There's furniture in this room versus *There's chair in this room. Such bare NPs are often called 'mass terms' and include phrases like imported furniture, eau de cologne, mousse au chocolat, orange juice from Brazil, and refined pure Cuban cane sugar. The count/mass distinction is found in many languages but is not universal and has different manifestations in different languages. The Hopi language has been mentioned by Whorf (1939) as a language that has no mass nouns, and several Asian languages, such as Chinese (Mandarin Chinese) and Japanese, do not mark the count/mass distinction and have been claimed to have only mass nouns (Sharvy, 1978). Also, what is described by a count noun in one language may be described by a mass noun in another; e.g., English fruit is a count noun, whereas Dutch fruit is a mass noun (a fruit can be translated either as een stuk fruit (a piece of fruit) or as een vrucht). In English, by far the majority of mass nouns are morphologically and syntactically singular; a minority of mass nouns are syntactically plural. A plural example is measles: it would be funny to ask *How many

measles have you got (except maybe in a conversation between two doctors, as a way of referring to measles patients). In other languages, plural mass nouns are quite frequent – e.g., in Swahili the word for water is the plural mass noun maji; in Italian the many pasta varieties are syntactically plural, as they are when imported in other languages. One should, for example, say How much spaghetti do you want rather than *How many spaghetti do you want, just as in Italian (Quanto spaghetti...; *Quanti spaghetti...). A morphologically remarkable phenomenon in Dutch is the formation of diminutive forms of mass nouns to refer to portions with certain properties of the stuff that the mass noun refers to. For example, the mass nouns snoep (candy), drop (licorice), chocola (chocolate), plastic, papier (paper), and brood (bread) have the diminutive forms snoepje, dropje, chocolaatje, plasticje, papiertje, and broodje, which are count nouns referring to particular types of physically well-defined pieces of candy, licorice, chocolate, plastic, paper, and bread, respectively. The interesting point is that diminutive forms of mass nouns differ systematically from those of count nouns in that diminutive count nouns, as opposed to mass noun diminutives, invariably refer to small exemplars in the set of objects denoted by the count noun. For example, appeltje and boekje refer to small apples and little books, but the diminutive form of a mass noun, like broodje or plasticje, refers not to a small piece of bread or plastic but to specific pieces, like a roll and a folder for storing papers. While the count/mass distinction as just outlined may seem intuitively clear, it is difficult to make the distinction precise, especially to make it sufficiently precise for incorporation in a formal grammar. For example, while apple may seem a clear example of a count noun, it is possible to say things like Don't put too much apple in the salad, using apple as a mass noun. David Lewis has invented a hypothetical device to show that every count noun can be used as a mass noun. This device, the Universal Grinder, can take as input any objects, denoted by a count noun, like apples, books, or crocodiles; it grinds these and spits out the stuff that the objects are
made of: apple-stuff, book-stuff, crocodile-stuff. This machine could be said to turn apples into apple, books into book, and crocodiles into crocodile (Lewis, 1983). One can also imagine a device that works in the other direction. This device, which we might call the Universal Packer, takes as input a continuous stream of any stuff that a mass term M may refer to, and outputs packages containing amounts of M that are appropriate in a given context. This device illustrates that one can in general construct a count use of a mass noun by finding a context in which the stuff that the mass noun normally refers to comes in certain standard portions, like cups of coffee in a restaurant, where it is quite common to speak of two coffees. These considerations show that virtually every count noun can be used as a mass noun and vice versa. We should therefore not classify nouns as count or mass but instead view the count/mass distinction to be one of different ways of using nouns, or perhaps not even as a syntactic or morphological distinction but as a semantic one. Intuitively, the fundamental difference between a mass noun like applesauce and a count noun like apple is that there is a clear notion of what is one apple, but there’s no clear notion of what is one applesauce. In the words of Jespersen (1924): ‘‘There are many words which do not call up the idea of some definite thing with a certain shape or precise limits. I call these mass words.’’ Or as Quine (1960) put it: ‘‘Inherent in the meaning of a count noun, like ‘apple’ is what counts as one apple and what as another. . . .such terms possess built-in modes of dividing their reference,. . .while mass nouns do not divide their reference.’’ A common way of expressing this is that count nouns ‘individuate’ their reference, while mass nouns do not. The non-individuating way of referring that is characteristic of mass nouns closely relates to the phenomenon that mass nouns can be used to refer to each of several objects as well as to the whole formed by these objects. For example, in a situation where there are several puddles of water on the floor, the term water in the sentence Please mop up the water on the floor may refer to the individual puddles as well as to the totality of all the water formed by the puddles. This phenomenon is known as ‘cumulative reference’: ‘‘Any sum of parts that are water is water’’ (Quine, 1960). Similarly, suppose one is served a bowl of rice; when one has eaten half of the rice in the bowl, what remains would also be called rice in the bowl. In general, for a mass noun M, any part of something that is M is again M. This phenomenon is called ‘distributive reference.’ The term ‘homogeneous reference’ has been used both as
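These two properties admit a compact schematic statement. In the following sketch the notation is assumed for illustration: M is a mass predicate, ⊔ the fusion (sum) of two quantities, and ⊑ the part-of relation, anticipating the mereological approaches discussed below:

    Cumulative reference:   ∀x∀y[(M(x) ∧ M(y)) → M(x ⊔ y)]
    Distributive reference: ∀x∀y[(M(x) ∧ y ⊑ x) → M(y)]

On the second usage just mentioned, homogeneous reference is the conjunction of the two.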

The property of distributive reference has been a matter of discussion among linguists and philosophers. Quine (1960) rejected the idea that every part of something to which a mass noun may refer may also be referred to by the noun. He notes that "there are parts of water, sugar, furniture too small to count as water, sugar, furniture," since the parts of an H2O molecule are not water, the legs of a chair are not furniture, and the parts of a grain of sugar would perhaps not be called sugar. Instead, he posits the 'Minimal Parts Hypothesis,' which says that for each mass noun M, there is a specific minimal size that parts of its referent may have in order to count as M. There is not much support for this position, however; semanticists generally agree that mass terms should be treated as referring homogeneously, in spite of the fact that their referents in the physical world may have minimal parts.

In the standard formalization of count noun meanings, the intuition that count nouns individuate their reference is captured by construing the extension of a count noun as the set of all individuals that correspond to the built-in individuation of its reference. So apple refers to the set of all apples. Since a mass noun does not individuate its reference, this leads to the question of what kind of things mass nouns denote. Many authors on mass terms believe that the answer to this question requires something other than sets. Quine (1960), Burge (1972), Moravcsik (1973), Ojeda (1993), and several others proposed to make use of mereology, a theory of nonatomic part-whole structures that has been developed as an alternative to set theory (Leśniewski, 1929; Leonard and Goodman, 1940); Bunt (1979, 1985) proposed to use part-whole structures called 'ensembles,' defined in an extension of standard set theory called 'ensemble theory' (see also Lewis, 1991); Parsons (1970) proposed an altogether different notion of 'substances.'

The idea that mass terms do not individuate their reference explains why they cannot be combined with numerals: one wouldn't know what to count, so the numerical information would make no sense; and neither would quantifiers, like several, many, and both, that presuppose countability. As mass terms do not refer to well-delineated objects, it would also be strange to apply adjectives describing properties like shape, size, or weight to mass nouns. This is confirmed by the fact that it is strange to speak of *small wine, *square water, or *heavy sugar. (And heavy water cannot be used to refer to water that is heavy, but only to refer to the substance deuterium oxide (D2O), formed by oxygen and the hydrogen isotope deuterium, of atomic weight 2.) It has therefore been suggested that a distinction be made between count and mass adjectives, depending on whether the adjectives share with mass nouns the property of referring homogeneously (Bunt, 1980; cf. Moravcsik). For instance, square refers neither cumulatively nor distributively, since the whole formed by two square objects is in general not square, nor are the parts of a square object; heavy refers cumulatively but not distributively; and small refers distributively but not cumulatively.

Ter Meulen (1980) suggested a count/mass distinction among verbs, depending on whether a verb refers distributively – that is, whether it denotes events that have sub-events that could be described using the same verb. Performance verbs, like write and reach, would be count verbs, whereas activity verbs, such as travel and think, would be mass verbs. Syntactic phenomena that support this distinction are that it is strange to say *Harry was reaching the airport for an hour while it is fine to say Harry was traveling to the airport for an hour, and, relating to the use of mass terms as direct objects, it is strange to say *Alice was writing a letter for an hour but there is nothing wrong with Alice was writing letters for an hour.

It may be noted that some of the above observations on mass nouns do not really apply to all mass nouns. There is a subclass of mass nouns that do in fact individuate their reference; examples in English are furniture, cattle, clothing, footwear, and luggage. Using Quine's terminology, inherent in the meaning of furniture is what counts as one piece of furniture. This may explain why such nouns can be modified by adjectives that do not refer homogeneously, as in small furniture and heavy furniture. Quantification for these nouns is also different from that for other mass nouns. Whereas All the water in this area is clean says that every water part that you can take in this area is clean, All the cattle in this area have been vaccinated clearly applies only to animals, not to arbitrary cattle parts. A related morphological phenomenon is that mass nouns of this kind in Dutch do not admit a diminutive form (for example, *meubilairtje, *veetje, *kledinkje, *schoeiseltje, *bagagetje) – which makes sense, since in these cases the mass noun itself already denotes individuals of the kind that the diminutive form would otherwise denote. Moreover, whereas mass nouns in general have syntactically much in common with plural count nouns, the ones in this particular subclass are semantically no different from count nouns. It has therefore been suggested that they be assigned to a separate category, called 'collective' mass nouns (Bunt, 1985).

Mass terms are a challenge to the formal linguist, not only because it is difficult to pin down the count/mass distinction on morphological and syntactic grounds but also because they call for a logical representation of the intuitions about non-individuating reference such that sentences involving mass expressions are systematically assigned correct semantic interpretations through the application of a set of rules in a formal grammar.

Concerning the first part of this challenge, Pelletier and Schubert (1995) have argued that the count/mass distinction can be formalized in two ways: in terms of occurrences of nouns and as different senses of nouns. An occurrence approach characterizes the use of a noun syntactically as 'count' or 'mass.' An implementation in a formal grammar would typically assign features 'count' and 'mass' to occurrences of nouns and certain other expressions. By contrast, a sense approach considers all nouns to be just nouns, avoiding any 'count' or 'mass' labeling, and interprets a noun occurring in a certain syntactic ('mass' or 'count') context in its 'mass' or in its 'count' sense. Since virtually every noun can be used either way, the main virtue of an occurrence approach is not to assess the syntactic well-formedness of expressions but rather to characterize the syntactic environments that force one interpretation of the noun or the other. For instance, characterizing the quantifier much as 'mass,' we can force the occurrence of apple in not too much apple to be interpreted as apple-stuff, rather than individual apples.

Some authors have suggested that finer distinctions be made among nouns than count/mass (or count/mass/collective), depending on the noun's preferences for occurring in certain syntactic environments. Following the 'noun countability preferences' of Allan (1980), Bond et al. (1994) distinguish, besides pure count and mass nouns, plural-only nouns (like scissors and pants), easily convertible count nouns (like cake and stone), and easily convertible mass nouns (like beer and coffee). This may be of practical use, e.g., in machine translation (Baldwin and Bond, 2003) or in language learning (Nagata et al., 2005).
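As a rough illustration of how an occurrence approach and a countability lexicon might interact, consider the following sketch. The data structures, class labels, and rules below are invented for exposition, in the spirit of Allan (1980), Bond et al. (1994), and the much apple example above; they are not an actual published system.

    # Illustrative sketch: the syntactic environment forces a 'count' or
    # 'mass' reading; otherwise a noun's lexical preference is reported.
    # All entries and rules below are invented for illustration.

    COUNTABILITY = {                       # hypothetical countability preferences
        "apple": "easily convertible count",
        "beer": "easily convertible mass",
        "scissors": "plural only",
        "water": "mass",
        "book": "count",
    }

    MASS_DETERMINERS = {"much", "little"}                         # force a mass reading
    COUNT_DETERMINERS = {"a", "five", "many", "several", "both"}  # force a count reading

    def forced_reading(determiner: str, noun: str) -> str:
        """Return the interpretation forced by the determiner, if any."""
        if determiner in MASS_DETERMINERS:
            return f"{noun}: mass reading ({noun}-stuff)"
        if determiner in COUNT_DETERMINERS:
            return f"{noun}: count reading (individuated {noun}s)"
        return f"{noun}: lexical preference = {COUNTABILITY.get(noun, 'unknown')}"

    print(forced_reading("much", "apple"))  # apple: mass reading (apple-stuff)
    print(forced_reading("five", "beer"))   # beer: count reading (individuated beers)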

Concerning the second, semantic part of the challenge, Link (1983) and Landman (1991) have suggested that the models for a model-theoretic semantics of natural language should include a nonatomic Boolean algebra (or, more specifically, a join semi-lattice) supporting part-whole structures without minimal parts as semantic interpretations of mass terms. Such a structured model can be used to assign logically adequate interpretations to sentences with mass terms – i.e., interpretations that have the desired logical properties (such as rendering Water is water necessarily true, and supporting inferences like This puddle is water, Water is transparent, therefore This puddle is transparent, but not supporting the inference Water is scarce here, This is a puddle here, therefore This puddle is scarce here).
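The contrast between the valid and the invalid pattern can be sketched in such a model. This is a simplified illustration; the treatment of scarce as a predicate of the total quantity is an assumption of the sketch, with ⊔⟦water′⟧ standing for the fusion of everything in the extension of water′:

    Water is transparent:   ∀x[water′(x) → transparent′(x)]
    This puddle is water:   water′(p)
    Therefore:              transparent′(p)                  (valid)

    Water is scarce here:   scarce′(⊔⟦water′⟧)
    This is a puddle here:  puddle′(p) ∧ water′(p)

In the second case nothing follows about scarce′(p), since scarce′ is predicated of the total quantity rather than of its parts.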
The fundamental question of what kind of thing a mass noun denotes is not answered in these formal approaches, other than to say that mass terms have denotations figuring in a non-atomic part-whole structure (which would be formally correct only for noncollective mass terms). The use of mereological wholes for these denotations, which many language philosophers have embraced, does provide nonatomic part-whole structures but has the drawbacks that mereology is an alternative to set theory and that mereological concepts as such do not fit into set-theoretical frameworks. In this respect, the use of ensemble theory, which formalizes atomic, nonatomic, and partly atomic part-whole structures within an extension of standard set theory, offers better possibilities for an elegant, integrated treatment of the semantics of mass expressions (cf. Lewis, 1991).

See also: Classifiers and Noun Classes; Cognitive Semantics; Definite and Indefinite Articles; Diminutives and Augmentatives; Event-Based Semantics; Formal Semantics; Frame Semantics; Generics, Habituals and Iteratives; Grammatical Meaning; Meronymy; Number; Numerals; Partitives; Plurality; Quantifiers.

Bibliography

Allan K (1980). 'Nouns and countability.' Language 56(3), 541–567.
Allan K (2001). Natural language semantics. Oxford, UK: Blackwell.
Baldwin T & Bond F (2003). 'Learning the countability of English nouns from corpus data.' In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan. 463–470.
Bond F, Ogura K & Ikehara S (1994). 'Countability and number in Japanese-to-English machine translation.' In Proceedings of the 15th International Conference on Computational Linguistics (COLING '94), Kyoto, Japan. 32–28.
Bunt H (1976). 'The formal semantics of mass terms.' In Karlsson F (ed.) Papers from the 3rd Scandinavian Conference of Linguistics, Helsinki, Finland. 71–82.
Bunt H (1979). 'Ensembles and the formal semantic properties of mass terms.' In Pelletier F J (ed.). 249–277.
Bunt H (1980). 'On the why, the how, and the whether of a count-mass distinction among adjectives.' In Groenendijk J & Stokhof M (eds.) Formal methods in the study of language. Amsterdam: Mathematical Centre. 51–77.
Bunt H (1985). Mass terms and model-theoretic semantics. Cambridge, UK: Cambridge University Press.
Burge T (1972). 'Truth and mass terms.' Journal of Philosophy 69(10), 263–282.
Chierchia G (1983). 'On plural and mass nominals and the structure of the world.' In Borowski T & Finer D (eds.) University of Massachusetts Occasional Papers VIII. Amherst: GLSA. 17–45.
Jespersen O (1924). The philosophy of grammar. London: Allen and Unwin.
Landman F (1991). Semantic structures. Dordrecht: Kluwer Academic Publishers.
Leonard H & Goodman N (1940). 'The calculus of individuals and its uses.' Journal of Symbolic Logic 5, 45–55.
Leśniewski S (1929). 'Grundzüge eines neuen Systems der Grundlagen der Mathematik.' Fundamenta Mathematicae 14, 1–81.
Lewis D (1979). 'Scorekeeping in a language game.' Journal of Philosophical Logic 8, 339–359.
Lewis D (1991). Parts of classes. Oxford: Blackwell.
Link G (1983). 'The logical analysis of plurals and mass terms: a lattice-theoretic approach.' In Bäuerle R, Schwarze C & von Stechow A (eds.) Meaning, use and interpretation of language. Berlin: De Gruyter. 303–323.
Lønning J-T (1987). 'Mass terms and quantification.' Linguistics and Philosophy 10(1), 1–52.
McCawley J (1981). Everything the linguist always wanted to know about logic. Chicago: University of Chicago Press.
Moravcsik J (1973). 'Mass terms in English.' In Hintikka J, Moravcsik J M E & Suppes P (eds.) Approaches to natural language. Dordrecht: Reidel. 301–288.
Nagata R, Masui F, Kawai A & Isu N (2005). 'An unsupervised method for distinguishing mass and count nouns in context.' In Bunt H, Geertzen J & Thijsse E (eds.) Proceedings of the Sixth International Workshop on Computational Semantics (IWCS-6). Tilburg, The Netherlands. 213–224.
Ojeda N (1993). Linguistic individuals. Menlo Park: Center for the Study of Language and Information.
Parsons T (1970). 'An analysis of mass terms and amount terms.' Foundations of Language 6, 363–388.
Pelletier F J (ed.) (1979). Mass terms: some philosophical problems. Dordrecht: Reidel.
Pelletier F J & Schubert L K (1995). 'Mass expressions.' In Gabbay D & Guenthner F (eds.) Handbook of philosophical logic. Dordrecht: Reidel. 327–407.
Quine W V O (1960). Word and object. Cambridge, MA: MIT Press.
Sharvy R (1978). 'Maybe English has no count nouns: notes on Chinese semantics.' Studies in Language 2, 345–365.
Ter Meulen A (1980). Substances, quantities and individuals. Ph.D. diss., Stanford University.
Whorf B (1939). 'The relation of habitual thought and behaviour to language.' In Carroll J (ed.) Language, thought and reality: selected writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press. 134–159.


Meaning Postulates

K Allan, Monash University, Victoria, Australia

© 2006 Elsevier Ltd. All rights reserved.

Carnap (1956: 225) proposed that meaning postulates are relative to a purpose, for instance:

Suppose [a man constructing a system] wishes the predicates 'Bl' and 'R' to correspond to the words 'black' and 'raven'. While the meaning of 'black' is fairly clear, that of 'raven' is rather vague in the everyday language. There is no point for him to make an elaborate study, based either on introspection or on statistical investigation of common usage, in order to find out whether 'raven' always or mostly entails 'black.' It is rather his task to make up his mind whether he wishes the predicates 'R' and 'Bl' of his system to be used in such a way that the first logically entails the second. If so, he has to add the postulate (P2) '(x)(Rx ⊃ Bl x)' to the system, otherwise not.
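Spelled out as a derivation, the postulate licenses an inference by universal instantiation and modus ponens (a worked illustration, not part of Carnap's text; the constant a names an arbitrary individual):

    1. (x)(Rx ⊃ Bl x)    P2, the meaning postulate
    2. Ra                premise: a is a raven
    3. Ra ⊃ Bl a         from 1, by universal instantiation
    4. Bl a              from 2 and 3, by modus ponens: a is black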

Given P2 nothing can be a raven that is not black. If the system is a semantic metalanguage for English, then (according to Carnap) it is analytically true that ∀x[raven′(x) → black′(x)] (to use a different, but equivalent formulation). One may dispute the choice of example (to be raven-haired is to have black hair, but an observed raven may be albino); yet the method for introducing nonlogical vocabulary into formal semantics seems appropriate. For instance it is generally agreed that something like (1) and (2) are valid:

(1) ∀x∀y[kill′(x,y) → cause′(x, (become′(¬(alive′(y)))))]

(2) ∀x[λy[bull′(y) ∧ animal′(y)](x) → male′(x)]

The predicates kill′, cause′, bull′, etc. are nonlogical vocabulary treated as semantic primitives. The stipulation of meaning postulates for any given language is problematic. There is no consensus over what constitutes the set of semantic primitives for any language. There is the problem of correspondence between the metalanguage and the object language (e.g., between kill′ and kill). There is often a problem determining which are the necessary semantic relations and which the contingent ones. For instance, one might question how the figurative killing of fires or conversations can be accommodated with (1). (2) is valid for papal bulls, male calves, male elephants, male whales, male seals, and male alligators; but how is it to be extended to incorporate the default reference to an adult bovine?

Fodor (1998 and elsewhere) has argued against meaning postulates (which he supported in Fodor et al., 1975). In part his argument is that lexical meanings are not compositional for words like kill and bull. But there is no reason to insist that the right-hand side of the arrow in either (1) or (2) represents part of the semantic composition of the left-hand side. It simply shows a valid relation between structures containing some semantic primitives. Fodor also adopted Quine's (1953) objections to analyticity and claims that no principled distinction can be drawn between meaning postulates and encyclopedic knowledge. Determining what is semantic and what encyclopedic is a knotty problem we cannot resolve here (see Allan, 2001 for a discussion). Horsey (2000) marshals several arguments against Fodor's position. Meaning postulates remain a useful device for semantics.

See also: Componential Analysis; Compositionality; Concepts; Dictionaries; Dictionaries and Encyclopedias: Relationship; Evolution of Semantics; Formal Semantics; Generative Lexicon; Generative Semantics; Idioms; Inference: Abduction, Induction, Deduction; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Logical and Linguistic Notation; Logical Form; Metalanguage versus Object Language; Montague Semantics; Philosophical Theories of Meaning; Prototype Semantics; Representation in Language and Mind; Semantic Primitives; Stereotype Semantics; Truth Conditional Semantics and Meaning.

Bibliography

Allan K (2001). Natural language semantics. Oxford & Malden, MA: Blackwell.
Carnap R (1956). Meaning and necessity (2nd edn.). Chicago: University of Chicago Press. [First published 1947.]
Dowty D R, Wall R E & Peters S (1981). Introduction to Montague semantics. Dordrecht: Reidel.
Fodor J A (1998). Concepts: where cognitive science went wrong. Oxford: Oxford University Press.
Fodor J D, Fodor J A & Garrett M F (1975). 'The psychological unreality of semantic representations.' Linguistic Inquiry 6, 515–531.
Horsey R (2000). 'Meaning postulates and deference.' UCL Working Papers in Linguistics 12, 45–64.
Quine W V O (1961). From a logical point of view: 9 logico-philosophical essays. Cambridge, MA: Harvard University Press. [First published 1953.]


Meaning, Sense, and Reference

D J Cunningham, Indiana University, Bloomington, IN, USA

© 2006 Elsevier Ltd. All rights reserved.

Attempts to define and/or construct a theoretical account of meaning can be found throughout the arts, sciences, and humanities. Dewey and Bentley (1949: 297) contended that the word 'meaning' is "so confused that it is best never used at all." As our task is to characterize meaning from a semiotic point of view, perhaps we can limit rather than contribute to this confusion. Consider the following scenarios:

• A college student hears her professor use an unfamiliar word and looks it up in her dictionary
• A pedestrian carrying a white cane approaches a busy intersection, pauses, and then walks into the street, whereupon the traffic yields
• A cat races to the kitchen at the sound of an electric can opener
• A doctor sees a dark area on an X-ray photograph of the lung of a patient who has been coughing up blood
• An anthropologist observes that among Netsilik Native Americans, sons have a more familiar relationship with their maternal uncle than with their father
• A bicyclist rides past a dog that suddenly raises its head, arches its neck, and bares its teeth
• A husband notices that his wife has suddenly stopped showing any physical affection toward him
• A high school history teacher asks her students to analyze the St. Crispin's Day speech from Shakespeare's Henry V as part of their unit on war in England
• A three-year-old child points to the picture of her absent father and says "Daddy!"
• A concert-goer unexpectedly begins to weep while listening to Barber's 'Adagio for strings'.

Within semiotics, these scenarios are often treated as having something very important in common despite their many obvious differences (context, organism, medium, etc.). In one sense, these scenarios all describe an effort after meaning, of the word, cane, sound, shadow, etc. More broadly, they are all examples of semiosis or sign action. In studying these scenarios, in ascertaining the meaning of the word, cane, sound, and so forth, we must ask a series of questions: What is a sign? How are these signs organized?

How are these signs related to other signs and sign systems? Since the topic of meaning is so deeply embedded in language and linguistics in general and semiotics in particular, other entries in this encyclopedia have already implicitly or explicitly dealt with the topic. Here, we will try to particularize the meaning of meaning for semiotics by examining some of the distinctions that have been drawn in past attempts to characterize it.

Meanings of Meaning

In their classic book Meaning of meaning, Ogden and Richards (1956: 185–208) compiled a 'representative list' of 16 definitions of the word 'meaning,' and several more can be discerned in their full discussion. Included in their list are the familiar meanings, such as a dictionary entry, the feelings or images aroused, an intention (as in 'I meant no harm.'), something of significance (as in 'Religion adds meaning to my life.'), and so forth. To sort through these and many other definitions, it will be helpful to review some of the distinctions raised in previous philosophical analyses. Many of these accounts focus on the meaning of words and statements, that is, semantics, and one of our tasks here will be to demonstrate how semiotics enables us to look beyond language signs.

Frege (1892) is usually credited with distinguishing between meaning as sense and meaning as reference. Even in the formal language of mathematics, such a distinction seems fruitful. For example, in the equations 2 + 2 = 4 and 3 + 1 = 4, both 2 + 2 and 3 + 1 refer to the same object, 4, but in different ways. We can say that 2 + 2 and 3 + 1 are the same in that they refer to the object 4, but, of course, they are also different from 4 (as sums) and from each other (as two different sums). They are two expressions or senses of the same thing – there is more information contained in them than in the tautology 4 = 4. Frege's most famous example is that the expressions 'morning star' and 'evening star' both refer to the same celestial object (the planet Venus), but in a different sense (as seen in the morning before sunrise or in the evening after sunset). Meaning as reference tells us that a thing is (That object in the sky there is the planet Venus) while meaning as sense tells us what a thing is (If there is a bright star in the sky right after sunset, it is the evening star). Meaning as reference proposes an identity, while meaning as sense proposes interpretation. According to Frege, "By means of a sign, we express [the object's] sense and designate its reference" (1892).
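The arithmetical point can be mimicked computationally (a loose analogy offered as a sketch, not part of Frege's apparatus): the two expressions have the same reference under evaluation but remain distinct structures, i.e., distinct senses.

    import ast

    e1, e2 = "2 + 2", "3 + 1"

    # Same reference: both expressions denote the object 4.
    assert eval(e1) == eval(e2) == 4

    # Different sense: the expressions are distinct structures.
    print(ast.dump(ast.parse(e1, mode="eval")))  # BinOp(... Constant(2) ... Add ...)
    print(ast.dump(ast.parse(e2, mode="eval")))  # BinOp(... Constant(3) ... Add ...)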


Other related accounts of meaning make a distinction similar to Frege's, including Carnap on intension (sense) vs. extension (reference) and J. S. Mill on connotation (sense) vs. denotation (reference). While there are subtle and sometimes not so subtle differences between these accounts and Frege's and among themselves, it is beyond our purposes here to compare them. We will use the sense/reference distinction to examine semiotic models of meaning.

Meaning and Semiotics

As stated above, the topic of meaning is embedded deeply within semiotics in general and the concept of sign in particular. Here we will briefly describe how the two major founders of semiotics, Saussure and Peirce, have dealt with meaning.

Ferdinand de Saussure and Structuralism

Saussure, it will be remembered, regarded the sign as constituted by a signifier (or sound image in the case of speech) and a signified (a concept or idea). Thus the spoken word tree is a language sign made up of a signifier (the 'psychological imprint of the sound') for the mental concept Tree, the signified. Saussure sometimes spoke of the relationship between signifier and signified as signification. Since the signified and signifier are both mental concepts, the sign they constitute has no necessary relationship to a reference. In fact, one of the hallmarks of Saussure's semiology is that the relationship between a sign and its object is arbitrary, established by human social convention. Thus the sign tree is a language sign in English solely because it has been adopted by use as such, just as the sign arbor has been adopted in use by French-speaking communities. Meaning is, therefore, a mentalistic concept for Saussure, arising not from the links between signs and the real objects to which they refer, but from the interplay between and relationships that arise from the action of signs. In short, meaning arises from the structure of signs.

As a linguist, Saussure naturally emphasized language signs and structures. He defined language as a "system of interdependent terms in which the value of each term results solely from the simultaneous presence of others..." (Saussure, 1959: 116). Language signs get their meaning within the structure as a whole, in the relationships between one sign and the others. These relationships are particularly determined by differences. Just as at the phonemic level of a language some sounds are recognized as different and others are not (e.g., the initial sounds of tin and kin are regarded as different in English whereas those of coal and call are not), language signs also get their meaning in terms of recognized differences with other language signs. It is therefore the structure of the whole that determines meaning, not reference to independently existing objects.

It is more than just the structures of a language that determine meaning, however. In addition, a language also includes rules for distinguishing one sign from another and for relating signs to one another. These rules, called syntax, allow us to combine and recombine signs in a potentially indefinite number of ways. Without syntax, a structure of signs is static. With syntax, it is possible to manipulate signs and their structure so that new meanings are brought to light or new possibilities for a structure can be constructed. We can therefore build possible structures, meanings that may have no embodiment outside of our semiosis. For example, without a system of signs and syntax to manipulate them, we could not build the possible worlds of the past or the future. The past and the future do not exist in our immediate experience, but we can construct the past and future and act upon their meanings in language signs using syntax.

Following Saussure, a number of scholars expanded this notion beyond language signs per se to all manner of cultural phenomena. Work in this vein is often called structuralism (see Hawkes, 1977 for a useful summary). Many of these analyses explicitly talk of signs in terms of signifiers and signifieds organized into a language-like structure, as if they embody a grammar. So we have semiotic analyses of the 'language' of gesture, dance, clothing, cars, advertisements, professional wrestling, circus, nearly anything you can imagine. Claude Lévi-Strauss (1966) dedicated his career to demonstrating that there are underlying systems of structure and meaning behind such dazzlingly varied and complex social systems as kinship patterns (as in scenario 5 above) and myth. Roland Barthes (1964) took all of culture as his domain as he mapped out systems of connotation and denotation for such objects as cars, food, and clothing. There are many other examples, of course. One important outcome of these analyses is that once you have uncovered the underlying structure for a particular phenomenon, manipulation of the syntax can create new forms of meaning that may not have emerged already; for example, structuralist analyses of advertisements can lead to a reversal or even a mocking of the dominant structures. Mystery stories usually keep the reader in the dark about the perpetrator of the crime until the last chapter, but the television detective show Columbo was very popular, in part, because the culprit was revealed at the beginning.

Structuralist models tend to suggest there is a single meaning to a text, a notion attacked forcefully by poststructuralists such as Derrida (1984). The problem arises from the signifier–signified relationship itself. The meaning of a text cannot reside in the text but only in the signs that represent the text. As such, the signs are only a part of the potential meanings of any text, and alternate relations lead to alternative meanings. To speak of the meaning of a text is nonsense; there will be as many meanings as there are contexts, potential structures of signs to interpret it. But these meanings are nonetheless related to the network of relationships mapped out by the sign system in question. The relation, as Eco (1984) showed, is one of inference, where:

To walk, to make love, to sleep, to refrain from doing something, to give food to someone else, to eat roast beef on Friday – each is either a physical event or the absence of a physical event, or a relation between two or more physical events. However, each becomes an example of good, bad, or neutral behavior within a given philosophical framework. (1984: 10, emphasis his)

C. S. Peirce

Charles S. Peirce was originally drawn to the study of signs by his search for a model of meaning valid for scientific inquiry. For Peirce, hypotheses are signs, inferred from the world of experience to give meaning to aspects of that world. Imagine, for example, stumbling across some bit of experience that, in itself, is quite puzzling and surprising (e.g., scenario 7 above). Yet, if that bit of experience is treated not for its unique properties, but as an example or case or sign of some rule of experience in action (e.g., perhaps the medication she is taking for high blood pressure is reducing her libido), then it is transformed into an ordinary affair. Couple this notion with Peirce's assumption that the things of the world of experience that we claim to be true are at best only more or less plausible and therefore only meaningful hypotheses, and we can begin to understand why the concept of meaning plays such a central role in his theory, why he came to equate logic and semiotics. That signs can be both the product of and further source of inference opens semiotic inquiry in quite a different direction from the structuralist approaches described earlier, to the processes of sign use within organisms: semiosis.

A sign, according to Peirce, is "something which stands to somebody for something in some respect or capacity" (2.228 – as is common in Peircean scholarship, quotes and citations will be identified by volume and paragraph number from Peirce [1931–1958]). The sign stands for something, the object, by linking it to an interpretant, an additional sign that stands for some aspect of the object. All three elements, sign, object, and interpretant, are necessary for sign process to occur and are not decomposable into dyads. Thus, Peirce's conception of the sign includes both its sense and its reference, but not as separate components. A sign is only an incomplete representation of the object or referent. Only certain aspects of the object are represented, and it is these aspects that come to define the interpretant, the sense of the sign process. Different signs may represent different aspects of the object and thereby produce different senses. Additionally, signs have aspects that are not relevant to the object (that may be characteristics of the sign as something in the world of experience, but not of the object) and that can produce additional, different interpretants. In other words, our experience of the world is mediated through signs and can never, therefore, be isomorphic with the objects of the world. In essence, we create our world of experience by creating and using signs as we interact with objects in our environment.

Meaning emerges in the action of signs, also called semiosis. The varieties of meaning that are possible are due, in large part, to the complex interplay of sign, object, and interpretant. This interplay is seen clearly in Peirce's sophisticated analysis of signs and sign process. Perhaps his most famous derivation (2.254–2.264) was to propose three trichotomies of signs (or aspects of signs). In brief, these are (1) the nature of the sign itself, (2) its relation to the object it represents, and (3) its relation to the effect it produces (i.e., to its interpretant). In the first trichotomy, the sign itself can be a quality, a single individual thing, or a general type. Peirce labeled these three sign aspects as qualisign, sinsign, and legisign, respectively. The second trichotomy is perhaps the best known. A sign can represent an object by resembling it (icon), by being existentially connected to it (index), or by a general rule (symbol). The third trichotomy considers the manner in which the sign relates to its interpretant. The sign can lead to the effect of possibility (rheme), of actual fact or existence (dicent), or of formal law (argument).

These trichotomies can be combined to identify additional aspects of sign process. For reasons too complicated to go into in detail here, when the three trichotomies outlined above are combined, 10 classes of signs can be identified: Rhematic Iconic Qualisign, Rhematic Iconic Sinsign, Rhematic Iconic Legisign, Rhematic Indexical Sinsign, Rhematic Indexical Legisign, Rhematic Symbolic Legisign, Dicent Indexical Sinsign, Dicent Indexical Legisign, Dicent Symbolic Legisign, and Argument Symbolic Legisign. But the potential varieties of signs do not end here. Peirce estimated that many thousands of sign types could be identified (8.343) but left their identification to 'future explorers.'
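On one standard reconstruction of this derivation (a sketch, not Peirce's own wording: rank the members of each trichotomy 0, 1, 2, and require that a sign's relation to its object be no 'higher' than the sign's own nature, and its relation to its interpretant no higher than its relation to its object), the ten classes fall out mechanically:

    from itertools import product

    TRICHOTOMIES = [
        ("Qualisign", "Sinsign", "Legisign"),  # the nature of the sign itself
        ("Iconic", "Indexical", "Symbolic"),   # its relation to the object
        ("Rhematic", "Dicent", "Argument"),    # its relation to the interpretant
    ]

    # Admissible rank combinations (i, j, k) satisfy i >= j >= k:
    # e.g., a Qualisign (rank 0) can only be Iconic (0) and Rhematic (0).
    classes = [
        (TRICHOTOMIES[2][k], TRICHOTOMIES[1][j], TRICHOTOMIES[0][i])
        for i, j, k in product(range(3), repeat=3)
        if i >= j >= k
    ]

    for interpretant, obj, nature in classes:
        print(interpretant, obj, nature)  # e.g., Rhematic Iconic Qualisign
    print(len(classes))                   # 10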


What is important to note for our purposes of understanding Peirce's concept of meaning is that the distinctions raised by sign types only identify aspects of signs, not isolated pure categories. These aspects emerge in the action of signs or semiosis, that is, in the quest for meaning. In the perpetual action of semiosis, only certain emphases can be noted, only certain tendencies about the force of the sign action can be identified as the process of semiosis spreads from sign to sign. Sign action is in no way limited to any single triad but spreads throughout a network of interpretants, a process characterized by Eco (1984) as unlimited semiosis.

A second point is a derivative of the first. Semiosis is the action of signs, not of a person. Certainly a person is an essential element in any human semiosis, but the action involves all three elements: the sign, the object, and its effect. The interpretant could be an action, feeling, or thought of a person (Houser, 1987), but in its most general sense, the interpretant could be any effect, including additional sign action – a new triad of sign, object, and interpretant. Alternatively, a new aspect of sign process could emerge in a particular context, as when, for example, the iconic aspect of a symbol (e.g., the shape of a word heretofore treated as a symbol) comes to the foreground. For Peirce, meaning is the "proper significate effect of a sign" (5.475).

Consider the following example. Suppose our student in scenario 1 above is looking up the word ocelot that she heard her professor use in her Biology class. In her trusty Merriam-Webster, she finds 'one of a family (Felidae) of feral, carnivorous, usually solitary and nocturnal mammals.' Not knowing the meaning of Felidae, she looks up the word to discover that the ocelot is a variety of cat. This discovery immediately links her to all of the prior knowledge, prior semiosic structures that she constructed from her experiences (feelings, thoughts, and actions) with cats. Returning to the original definition, she looks up other words she does not understand. Under carnivorous, she reads 'subsisting or feeding on animal tissue; rapacious.' Never having come across the word rapacious before, she looks it up to read 'excessively grasping or covetous; ravenous or voracious.' As she moves from word to word, from interpretant to interpretant, new meanings are attached to her original one. She is slowly developing her structure for ocelot. We can also move beyond words. Our dictionary could include a drawing of an ocelot. Her professor could mime the actions of an ocelot in stalking, crouching, and springing upon its prey. The growl could be recorded and played in the class. It is in this spread of signs, in the potential for unlimited semiosis, that meaning is to be found.
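The chain of lookups she performs can be pictured as a walk through a graph of definitions (a toy sketch; the mini-dictionary is invented, loosely abridged from the example above, and each traversed edge plays the role of an interpretant):

    # Toy dictionary: each headword lists the defining words that may
    # themselves need looking up. Entries are invented for illustration.
    DICTIONARY = {
        "ocelot": ["felidae", "carnivorous"],
        "felidae": ["cat"],
        "carnivorous": ["rapacious"],
        "rapacious": ["ravenous"],
    }

    def chase(word, seen=None):
        """Follow definition links from word, one interpretant to the next."""
        seen = seen if seen is not None else set()
        if word in seen or word not in DICTIONARY:
            return
        seen.add(word)
        for next_word in DICTIONARY[word]:
            print(f"{word} -> {next_word}")
            chase(next_word, seen)

    chase("ocelot")
    # ocelot -> felidae, felidae -> cat, ocelot -> carnivorous,
    # carnivorous -> rapacious, rapacious -> ravenous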
According to Peirce, "a sign is not a sign unless it translates itself into another sign in which it is more fully developed" (5.594). Interpretants become signs for additional semiosis, generating additional interpretants that link to additional objects that in turn become signs for additional semiosis. Of course, this spread does not continue indefinitely. Eventually, it must resolve itself into a set of structures with which the world as we experience it is meaningful to us. Peirce referred to this set of structures as 'beliefs.' The concept of belief is key in Peirce's view, and he spoke of semiosis in general and meaning in particular as a movement toward 'fixing' a belief. The converse of belief is doubt, and Peirce was very explicit in drawing the distinction between them:

We generally know when we wish to ask a question and when we wish to pronounce a judgment, for there is a dissimilarity between the sensation of doubting and that of believing. But this is not all that distinguishes doubt from belief. There is a practical difference. Our beliefs guide our desires and shape our actions... Doubt is an uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief; while (belief) is a calm and satisfactory state which we do not wish to avoid, or to change to a belief in anything else. On the contrary, we cling tenaciously, not merely to believing, but to believing just what we do believe. (5.370–5.372)

Peirce called such doubt 'genuine doubt.' As such, it is situated in our experience, not a methodological strategy as in Descartes' use of doubt. Doubt arises when the structures we have created, our current beliefs, do not account for some experience, when the character of signs does not fit our understanding. Peirce proposed four methods of resolving doubt and fixing beliefs: tenacity, authority, a priori, and experiment. Briefly, tenacity is invoked whenever one holds on to beliefs in the face of doubt and asserts that the beliefs will eventually accommodate the doubtful event. We use the method of authority to fix beliefs when we accept the beliefs of authority figures like teachers or scientists. Nowhere is the method of authority more widely used, and abused, than in the field of education. The a priori method is invoked when our beliefs change in the context of an already existing structure of beliefs, a conceptual coherence to a worldview that has served us well so far. The three methods described so far all resolve doubt by opinion, stubbornly maintained, taken from others, or reasoned from premises. The fourth method, which Peirce preferred, is the method of experiment, where one seeks to remove doubt by collecting observations, generating potential hypotheses to account for the surprising experience, and reaching a conclusion based upon the interplay of inferential processes.

Inference, in fact, is implicated in all the methods of resolving doubt. Elsewhere (e.g., 5.145) Peirce describes three modes of inference – abduction, induction, and deduction – through which observers can build and work with structures of signs:

Deduction is the only necessary reasoning. It is the reasoning of mathematics. It starts from a hypothesis, the truth or falsity of which has nothing to do with the reasoning; and of course its conclusions are equally ideal. The ordinary use of the doctrine of chances is necessary reasoning, although it is reasoning concerning probabilities. Induction is the experimental testing of a theory. The justification of it is that, although the conclusion at any stage of the investigation may be more or less erroneous, yet the further application of the same method must correct the error. The only thing that induction accomplishes is to determine the value of a quantity. It sets out with a theory and measures the degree of concordance of that theory with fact. It can never originate any idea whatsoever. No more can deduction. All the ideas of science come to it by way of Abduction. Abduction consists in studying facts and devising a theory to explain them. Its only justification is that if we are ever to understand things at all, it must be in that way. (5.145)

In the case of the method of experiment, a surprising experience might lead us to abduce a new hypothesis and examine it deductively to see if it squares with the available facts. Suppose, for example, I held the view that individual members of a species tended to be larger in colder climates. Recent data, however, have shown that this difference was not true of fish. If I engage in abduction, I might generate the hypothesis that the original relationship applies only to mammals. If observation showed this hypothesis to be tenable, then the surprising experience would be a matter of course. Deductively, I could link my hypothesis to other varieties of animals and inductively test the consequences. Similar inferential strategies can be observed in a priori, authority, and even tenacity. The validity of our beliefs is tested in accord with Peirce's pragmatic maxim:

Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object. (5.402)

If our beliefs are adequate to account for the phenomena before us, then we are satisfied. It is doubt that drives semiosis. One important source of doubt comes from comparing our beliefs with others. Logic is grounded in the collective nature of semiosis itself and oriented toward future activity. Peirce's understanding of community was in terms of both the present communities of practice (scientist, citizen, family member, etc.), and the 'family' of all participants, past and future as well as present, who have, are, and will work on clarifying our ideas and understandings:

Finally, as what anything really is, is what it may finally come to be known to be in the ideal state of complete information, so that reality depends on the ultimate decision of the community; so thought is what it is, only by virtue of its addressing a future thought that is in its value as thought identical with it, though more developed. In this way, the existence of thought now depends on what is to be hereafter; so that it has only a potential existence, dependent on the future thought of the community. (5.316)

In summary, Peirce argued that the subject matter of semiotics is semiosis, the action of signs in all domains of life. Semiosis is an effort after meaning. Our understanding of the world is entirely mediated by signs, and therefore to understand meaning, we must understand the nature of our signs: What is a sign? How is one sign related to another sign? What do signs reveal about the real world? What do they obscure? How are signs formed? What are the ways in which signs can stand for something else? The identification, understanding, and use of signs are fundamental parts of inquiry. In fact, the process of semiotics within inquiry was seen by Peirce to be an emergent process, and one quite explicitly linked to our cognitive natures:

Symbols grow. They come into being by development out of other signs... We think only in signs. These mental signs are of mixed nature; the symbol-parts of them are called concepts. If a man makes a new symbol, it is by thoughts involving concepts. So it is only out of symbols that new symbols can grow. Omne symbolum de symbolo. A symbol, once in being, spreads among the people. In use and in experience, its meaning grows. Such words as force, law, wealth, marriage, bear for us very different meanings from those they bore for our barbarous ancestors. (2.302)

Dictionary vs. Encyclopedia

In a seminal chapter in his Semiotics and the philosophy of language, Umberto Eco (1984: 46–86) further elaborated some of the issues raised above. In particular, Eco explored the question of whether the meaning of a linguistic expression could be captured in a synonymous expression or definition. As in scenario 1 above, the act of consulting a dictionary is often associated with seeking the meaning of a word or phrase. Even in the case of a nonlinguistic sign (as in the white cane of scenario 2), we often convert it to a linguistic sign (e.g., a blind man) and consult a dictionary.


We expect a dictionary to help us pinpoint meaning and disambiguate the meaning of one word from another. For example, the dictionary entry for ram should provide a synonymous expression or paraphrase (e.g., adult male sheep) as well as similarities/differences from other words (e.g., buck, stag, ewe, lamb, wether). This effect is accomplished by implicit (or sometimes explicit) reference to a hierarchical structure or Porphyrian tree. The synonymous terms or definitions are usually more general or abstract entries arrayed in a tree-like structure that depicts the relationships between the word whose meaning is sought and a network of related ones. For example, to define a ram as an adult male sheep specifies a relationship between ram and sheep but also raises the issues of the definition of sheep, differences between sheep and goats, cows, horses, and so on. Such taxonomies and classifications as this one are common within scientific domains, but the question raised by Eco (1984) is whether these structures actually analyze meaning.

Structures achieve their classificatory power by proposing a finite number of hierarchically ordered categories by means of which terms are located. Ram has its place in this structure, and its place is linked to higher-order terms (sheep, mammal, animal, living thing) and differentiated from other terms (goats, fish, plants, nonliving) by its position in the hierarchy. In Frege's terms, such classifications seek to pinpoint the reference: the meaning of a term consists of an analysis that identifies the more abstract referents or properties to which the term in question is related. Ideally, this process should be an automatic one in which there is no ambiguity about whether the classification is correct. Indeed, a computer programmed with the requisite structures could perform our meaning analysis. In such an analysis, there is no room for interpretation!

In a brilliant series of critiques, Eco clearly demonstrated the inadequacy of the 'dictionary' approach to meaning. There is something very artificial about the terms in these hierarchical tree structures – in the languages they use and the worlds they propose. Even at a formal level, logical inconsistencies crop up in even the most 'natural' classification systems. Thus, while a ram is an adult male sheep, it is difficult to construct a classification that appropriately differentiates on all three criteria at once (e.g., is gender superordinate or subordinate to sheep?). The very notion of hierarchy seems very slippery when probed – wool can be a property of and therefore subordinate to sheep in one analysis, whereas sheep can be an instance of and hence subordinate to the category of 'wool-producing animals' in another. Most serious, however, is that the attempt to minimize interpretation ultimately fails.
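The automatic, interpretation-free lookup that such a model presupposes is easy enough to sketch in code (a toy taxonomy, invented for illustration; note that the single-parent hierarchy already embodies an arbitrary choice, hanging ram under sheep and demoting gender to a mere feature, exactly the slipperiness just described):

    # Toy Porphyrian tree: every term has exactly one superordinate.
    TREE = {
        "ram": "sheep",
        "ewe": "sheep",
        "sheep": "mammal",
        "goat": "mammal",
        "mammal": "animal",
        "animal": "living thing",
    }

    def superordinates(term):
        """Climb the hierarchy, mimicking a purely automatic analysis of meaning."""
        chain = []
        while term in TREE:
            term = TREE[term]
            chain.append(term)
        return chain

    print(superordinates("ram"))  # ['sheep', 'mammal', 'animal', 'living thing']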

According to Eco (1984: 57), "either the primitives (more abstract referents) cannot be interpreted, and one cannot explain the meaning of a term, or they can and must be interpreted and one cannot limit their number." Any act of meaning, therefore, must involve both reference and sense. To understand the meaning of ram, we must inevitably bring to bear some portion of an immense body of potentially relevant world knowledge. In most everyday uses of the word ram, for example, connections to generic spheres of knowledge such as animal husbandry, Greek culture, hunting, clothing manufacture, astronomy, climate, etc. are possible, as are more personal connections such as living on a sheep ranch, hearing nursery stories about sheep, etc. Most actual dictionaries in fact make explicit links to portions of this world knowledge, and for this reason Eco regards them as 'disguised' encyclopedias. In some very specific context, the meaning of the word water might be limited to the expression H2O, but in most contexts, it would be immediately connected to possible spheres of world knowledge related to drinking, washing, sailing, extinguishing fires, and so forth. The task of a person trying to figure out the meaning of a term is, therefore, not one of establishing the referent, but of contextualizing the sense in which the referent is being interpreted – in Peirce's terms, by means of interpretants in a (potentially unlimited) process of semiosis.

Eco (1984) explored several metaphors to describe encyclopedic competence and settled upon the rhizome. A rhizome is a root crop, a prostrate or underground system of stems, roots, and fibers whose fruits are tubers, bulbs, and leaves. A tulip is a rhizome, as is rice grass, even the familiar crab grass. The metaphor of the rhizome specifically rejects the inevitability of such notions as hierarchy, order, node, kernel, or structure. The tangle of roots and tubers characteristic of rhizomes is meant to suggest a form of mind where:

• Every point can and must be connected with every other point, raising the possibility of an infinite juxtaposition
• There are no fixed points or positions, only connections (relationships)
• The structure is dynamic, constantly changing, such that if a portion of the rhizome is broken off at any point it could be reconnected at another point, leaving the original potential for juxtaposition in place
• There is no hierarchy or genealogy, as where some points are inevitably superordinate or prior to others
• The rhizome whole has no outside or inside, but is rather an open network that can be connected with something else in all of its dimensions.

The notion of a rhizome is a difficult one to imagine, and any attempt to view it as a static picture risks minimizing its dynamic, temporal, and even self-contradictory character. Eco (1984) labeled the rhizome 'an inconceivable globality' to highlight the impossibility of any global, overall description of the network. Since no one (user, scientist, or philosopher) can describe the whole, we are left with 'local' descriptions, a vision of one or a few of the many potential structures derivable from the rhizome. Every local description of the network is a hypothesis, an abduction, constantly subject to falsification. To quote Eco:

Such a notion... does not deny the existence of structured knowledge; it only suggests that such a knowledge cannot be recognized and organized as a global system; it provides only 'local' and transitory systems of knowledge which can be contradicted by alternative and equally 'local' cultural organizations; every attempt to recognize these local organizations as unique and 'global' – ignoring their partiality – produces an ideological bias. (1984: 84, emphasis his)

This last statement emphasizes the point that we are not proposing the metaphor of the rhizome for an individual mind making meaning, but for minds as distributed in social, cultural, historical, and institutional contexts. Except as a degenerate case, there is no such thing as a single mind, unconnected to other minds or to their (collective) social and cultural constructions. Thinking, or whatever we choose to call the activity of mind as it makes meaning, is always dialogic, connected to another: either directly, as in some communicative action, or indirectly, via some form of semiotic mediation – signs and/or tools appropriated from the sociocultural context. We are connected to other people individually but also collectively, as in the speech communities or social languages in which we are all embedded. We are connected to the sociocultural milieu in which we operate, a milieu characterized by the tools (computers, cars, television, and so forth) and signs (language, mathematics, drawing, etc.) that we may appropriate for our thinking. Thus meaning is not an action that takes place within a mind within a body, but rather at the connections, in the interactions. But, it is worth saying again that this thinking is always 'local,' always a limited subset of the potential (unlimited) rhizomous connections that embody our meanings. Thinking is, rather, a matter of constructing and navigating a local, situated path through a rhizomous labyrinth, a process of dialogue and negotiation with and within a local sociocultural context.

Although this analogy fails if pushed too far, the connectivity we have in mind is a bit like the World Wide Web. While the 'results' of a connection to the WWW are experienced via an interface with one's local workstation, that experience is possible only as a result of connections with many (potentially infinitely many) servers all over the world. The local workstation both contributes to (constructs) and is constructed by its connections. Meaning is in the connections.

Umwelt – Meaning beyond Words

Jacob von Uexküll's (1957, 1982) notion of 'Umwelt' reinforces this conception of meaning as arising out of 'local' connections, but also extends the analysis beyond language signs. In his paper titled 'The theory of meaning' (1982; originally published in 1940), von Uexküll elaborated Umwelt as the phenomenal or personal world of an organism; that is, the set of connections constructed by the organism via species-specific sensory characteristics and particular experiences in the physical world. Ticks, for example, individually and collectively, construct their Umwelt in part from their biologically determined sensitivity to butyric acid and the particular physical surroundings in which they find themselves. The tick will climb up a tree or tall grass and launch itself toward a source of butyric acid, usually a passing mammal, burrow into its skin, gorge itself with blood, drop off, lay eggs, and die. The Umwelt of the tick is not its environment, that which an observer of an organism might describe as pre-adjacent to and independent of the organism, but is, rather, that environment as selectively reconstituted and structured according to the particular species' characteristics and according to the specific needs and experiences of the individual organism. The organism and its Umwelt are indivisible – it makes no sense to speak of an organism apart from its personal world. So in this sense, all semiosis is simultaneously distributed and situated in the personal world of an individual organism. From the point of view of its inhabitants, an Umwelt is the actual world of experience; that is, an organism's particular slice of the rhizome is the 'real' world for that organism. As external observers of species and individual organisms, we can attempt to gain some understanding of these worlds, and how they influence and are influenced by the multitude of other worlds to which they are coupled. In a delightful paper titled 'A stroll through the worlds of animals and men' (1957; originally published in 1934), von Uexküll described numerous examples of the Umwelten that organisms individually and
collectively create that then serve to mediate their experience in the world. Bees, dogs, snails, flies, earthworms, and even human children and adults are among the subjects whose Umwelt he attempts to map. Of course, as von Uexküll acknowledged, analyses of an Umwelt will always be partial and incomplete in that we cannot fully know the Umwelt, the systems of meaning, of another. Indeed, the same physical object often provides a quite different experience both across and within species. In a famous example, von Uexküll (1957) described the various Umwelten created by a large oak tree: a rough-textured and convoluted terrain for a bug, a menacing form for a young child, a set of limbs for a nesting bird, a crop to be harvested by a woodsman, and so on. In all these cases, the environment of the tree was the same; that is, the bark, the height, the limbs were 'available' to each of the organisms, yet their experience of them, their meaning, was quite different. Objects of the world are 'meaning carriers' and our sensory-motor mechanisms are 'meaning receivers.' The coordination of meaning carriers and meaning receivers constitutes a 'functional circle' out of which our Umwelt arises. Thus the fundamentals of the meaning-making process are shared by all organisms. However, the human Umwelt, dubbed the 'Lebenswelt' by von Uexküll (1957), includes not only biological and physical factors but cultural ones as well. Although both humans and animals engage in sign action, humans are unique in their ability to consciously manipulate signs, thus enabling them to construct worlds apart from nature and direct experience. The possibility for signs that are arbitrary (e.g., language signs) allows humans individually and collectively to create an infinite array of meanings and possibilities for reality through the manipulation of signs. The importance of signs in creating the Lebenswelt lies in their creative power for infinite representation and meaning-making, or unlimited semiosis as described above. Indeed, culture itself can be described as a web of signs. Culture, in this view, is not some mind-independent, pre-adjacent social reality, but rather a collective construction that is reconstructed (or more accurately co-constructed in context) by each new participant. And via these structures, we literally construct our knowledge dynamically as we interact in the world. As an example, consider the following. In their provocative book Metaphors we live by, Lakoff and Johnson (1980) built a solid case that the way in which we perceive and think about a situation is a function of the metaphors (they could have as easily said sign structures) we have adopted for and use in that situation. If we take a cultural phenomenon like schooling, we can examine the dominant metaphors

that define its meaning. Marshall (1988) has argued convincingly that the fundamental metaphor in many schools is 'School Is Work.' We speak of students needing to work harder on their studies, to complete their homework, to earn a grade, and so forth. Teachers are trained to manage their classes and are often held accountable in terms of their productivity. Teachers to whom this metaphor is pointed out often deny it, and yet are surprised at how often they catch themselves and other educators using language consistent with the workplace metaphor. Other metaphors are equally powerful. The notion that the 'Mind Is A Container' is very pervasive and in fact underlies much of our teacher training and instructional design and development. If the mind is a container and knowledge is an entity that can be transferred to the mind (a Conduit metaphor) to fill the void, then naturally the task of the educational system is to facilitate this process of knowledge communication. If, however, we understand the metaphorical basis of our thinking, we can raise the possibility of choosing different metaphors and thereby different meanings, and consequently different roles for educators. If we adopt the metaphor of 'School As A Community Resource,' for example, perhaps we will have our students working on authentic tasks that meet community needs (designing bicycle routes through the city, looking into the water quality of local streams, etc.) through which they learn the academic content and skills deemed worthwhile. The focus now becomes not on what the students know, but on the processes whereby something can become known, how we know it, and for what purpose. So all cognition, all human process, is situated in the sense that we as meaning makers are inseparable from our Lebenswelt. Yet as human observers using signs in ways unique to our species, we can gain some awareness of the sign process itself in ourselves and in others and some limited ability to manipulate it. With the recognition that our personal world is some portion of, some selection from, infinite possibilities comes the realization that any attempt to assert our view as global inevitably introduces an ideological bias that must be justified. As we build and rebuild our Lebenswelt, we can gain some reflexive self-identity, some autonomy in determining through deliberate choice what ideological bias we will adopt in our actions and our words. In the felicitous phrase of Peirce, we become 'masters of our own meaning' (5.393). To do so, we must solve the problem of 'other minds.' We structure our interactions in the world in terms of our ideas of how our mind works and how other minds work. In other words, we are continually engaged in Umwelt research. To understand the meanings of others, we must endeavor to
understand our own meanings and how they are connected. The character of any human interaction – but communication in particular – is guided by how the participants establish intersubjectivity based upon their respective conceptions of Umwelt. At its most fundamental level, communication depends upon this intersubjectivity, the raising of a distinction between one’s mind and that of others, all the while attending to their connection.

Summary

We began with the assertion that the topic of meaning is embedded deeply within semiotics in general and the concept of sign in particular. Using Frege's distinction between sense and reference as an analytical tool, we have seen how various models of semiotics have dealt with meaning. From the dazzling array of structuralist accounts of the interplay of meanings in cultural contexts by Saussure and his followers, through Peirce's brilliant integration of both sense and reference in his triadic model of sign and sign process, to the elaborations of Eco and von Uexküll of unlimited semiosis and Umwelt, we can confirm that the whole of semiosis can be seen in large part as an effort after meaning. Meaning is not a possession of an individual or a property inherent in an object, but the inevitable outcome of semiosis, the action of signs. Returning briefly to the scenarios with which we began this entry, we have seen that even in the prototypical case of looking up the meaning of a word in a dictionary (scenario 1), one is quickly immersed in a rhizomous network of related meanings and experiences. A white cane in the hands of a pedestrian (scenario 2) can have a powerful influence on the behavior of others and in a very real sense comes to define that person in that context. Yet a whole other set of connections, different senses of the person, emerge when our pedestrian sets her cane aside and now uses her hands to drink coffee at a sidewalk cafe. These various senses must be shared by the community if they are to be acted upon. Meaning therefore is always an interactive outcome, not an inherent characteristic of either individuals or objects. Learning certainly plays a major role in the development of meaning. The cat in scenario 3 learned over time that the sound of the can opener heralded a possible meal. Our doctor in scenario 4 must likewise learn to distinguish between suspicious and ordinary features of an X-ray in order to make a correct diagnosis. Signs do not speak for themselves; they must be interpreted! And in that interpretation, particular conceptual contexts will be brought to bear that highlight some aspects of the complex situation and obscure others. Our anthropologist might be using a structuralist model of culture (e.g., Levi-Strauss) in interpreting scenario 5, a model that stresses certain interpersonal characteristics of this culture while ignoring others. We must always be alert to the fact that meanings can change as interpretive context changes. Meanings must be communicated if they are to make a difference. The bicyclist and the dog in scenario 6 are, in a sense, searching for a means to coordinate their behaviors, as are the husband and wife in scenario 7. This process of interpretation and coordination draws heavily upon inference, particularly abductive inference. What meaning could account for these unexpected signs that would make them sensible? Generating and testing potential meanings lies at the heart of human semiosis. Finally, to become 'masters of our own meaning,' we must become more reflexive about the centrality of meaning in our life and how we can play a role in enriching that process. For a child to make the link between a photograph and her absent father (scenario 9) is an achievement that every parent should celebrate, because this act signifies the beginning of an awareness of the difference between a sign and its sense (interpretant) and reference (object). These humble beginnings can grow into the wonders of the hermeneutic process whereby we can come to know the worlds we have created and our role within them. In such worlds, Shakespeare and 'Adagio for strings' become possible. What other wonders await us? See also: Definite and Indefinite Descriptions; Direct Reference; Natural versus Nonnatural Meaning; Proper Names; Proper Names: Philosophical Aspects; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Sense and Reference.

Bibliography

Barthes R (1964). Elements of semiology. Lavers A & Smith C (trans.). New York: Hill and Wang.
Derrida J (1984). 'Languages and the institutions of philosophy.' Recherches Sémiotiques/Semiotic Inquiry 4, 91–154.
Dewey J & Bentley A (1949). Knowing and the known. Boston: Beacon.
Eco U (1984). Semiotics and the philosophy of language. Bloomington: Indiana University Press.
Frege G (1892). 'Über Sinn und Bedeutung.' Zeitschrift für Philosophie und philosophische Kritik 100. Reprinted as 'On sense and reference.' In Geach P & Black M (eds.) Translations from the philosophical writings of Gottlob Frege. Oxford: Basil Blackwell. 25–50.
Hawkes T (1977). Structuralism and semiotics. Berkeley, CA: University of California Press.
Houser N (1987). 'Toward a Peircean semiotic theory of learning.' The American Journal of Semiotics 5, 251–274.
Lakoff G & Johnson M (1980). Metaphors we live by. Chicago: University of Chicago Press.
Levi-Strauss C (1966). The savage mind. Chicago: University of Chicago Press.
Marshall H (1988). 'Work or learning: implications of classroom metaphors.' Educational Researcher 17, 9–16.
Ogden C & Richards I (1956). The meaning of meaning (8th edn.). New York: Harcourt, Brace & Co.
Peirce C S (1931–1958). Collected papers of Charles Sanders Peirce. Hartshorne C & Weiss P (eds.). Cambridge, MA: Harvard University Press.
Saussure F de (1959). Course in general linguistics. New York: Philosophical Library.
von Uexküll J (1957). 'A stroll through the worlds of animals and men: a picture book of invisible worlds.' In Schiller C (ed.) Instinctive behavior: the development of a modern concept. New York: International University Press.
von Uexküll J (1982). 'The theory of meaning.' Semiotica 42, 25–82.

Memes

G Powell, University College London, London, UK
© 2006 Elsevier Ltd. All rights reserved.

In his 1976 book The selfish gene, the British zoologist Richard Dawkins proposed a new view on the nature of evolution. His overall claim was that evolution is driven not by species, nor by individual organisms, but rather by a more basic unit, the replicator, where a replicator is 'anything in the universe of which copies are made' (Dawkins, 1982). His idea is that the engine of evolution is competition between distinct replicators, with a replicator's success determined, roughly speaking, by the number of copies it gives rise to. The most obvious of Dawkins's evolutionary replicators is the gene, the basic unit of biological evolution. But one of the insights offered by Dawkins's view was that evolution, viewed as an ongoing competition between self-propagating replicators, is not restricted to biology. For humans at least (and maybe in very limited respects for some other species), culture is also evolutionary: culture, just like biology, is a matter of the competition between particular replicators. But what are these cultural replicators? What is the cultural equivalent of the gene? Dawkins labeled it the 'meme.'

What Is a Meme?

Given the parallel with biological evolution, memes are defined as the basic units of cultural evolution. But what might they look like? They are essentially pieces of information, of whatever sort, which pass from mind to mind via imitation. Moreover, given Dawkins's general view of evolution, they must be selfish: as Sperber (2000: 163) put it, they must stand 'to be selected not because they benefit their human carriers, but because they benefit themselves.' In The selfish gene, Dawkins suggested: 'Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches' (1976: 206). Beyond these, the list of cultural elements which have been considered as memes is extensive. Dawkins (1976) himself suggested that both religious beliefs and language are memes, or rather complexes of interrelated memes. Other proposed candidates include chain letters, jokes, conspiracy theories, and sundry Internet phenomena. All of these seem, on the face of it, to propagate themselves via imitation, i.e., via the production of copies. Moreover, at least some seem clearly to do so in a way that benefits themselves, rather than their hosts. Chain letters, for instance, do not benefit those who reproduce them, and yet are highly successful replicators (see Dennett, 1990; Sperber, 2000).

How Are Memes like Genes?

This may seem on the face of it to be an unhelpful question. After all, for Dawkins, who coined the term, memes are essentially defined as the cultural equivalent of genes. From this perspective, the question we should really be asking is whether there are in fact memes in human culture. However, what I want to examine in this section is the extent to which those cultural items which have been identified as memes, either by Dawkins himself or by those who have developed memetics (the science of memes) in his wake, are genuinely similar to genes. Given the above discussion, there seem to be some clear parallels between genes and memes: in Dawkins's terms, both are replicators, i.e., both generate copies of themselves. It also seems as if the success of both in their respective replicator pools depends on similar factors. For Dawkins, two of the key factors which determine the success of a particular replicator are longevity and fecundity, i.e., how long particular copies of the replicator survive and how many copies each gives rise to. As Dawkins (1976) argued, these factors seem to
determine the success of memes, in much the same way that they determine the success of genes. Moreover, just as with genes, it seems at least plausible that memes are selfish. As Dennett (1990) pointed out, there are many very successful memes which seem to be of no benefit other than to themselves: among the examples he suggested are anti-Semitism and computer viruses.
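
Dawkins's replicator dynamics can be made concrete in a few lines of code. The sketch below is not from Dawkins or from this article; the two competing 'memes,' their survival and copying rates, and the pool cap are invented purely to illustrate how longevity and fecundity jointly determine a replicator's share of the pool.

import random

random.seed(1)

# Each meme type: probability a copy survives a round (longevity) and
# expected number of copies produced per survivor (fecundity).
MEMES = {
    'catchy-tune': {'survive': 0.9, 'copies': 1.2},
    'dull-slogan': {'survive': 0.9, 'copies': 1.0},
}

pool = ['catchy-tune'] * 50 + ['dull-slogan'] * 50
for generation in range(20):
    next_pool = []
    for meme in pool:
        traits = MEMES[meme]
        if random.random() < traits['survive']:      # longevity: does this copy persist?
            expected = traits['copies']              # fecundity: expected copies per survivor
            n = int(expected) + (random.random() < expected % 1)
            next_pool.extend([meme] * n)
    pool = next_pool[:2000]                          # finite attention caps the pool

print({name: pool.count(name) for name in MEMES})
# A small fecundity edge compounds generation after generation,
# so the catchier meme comes to dominate the pool.
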

How Are Memes unlike Genes?

The above list of parallels between genes and those cultural items which have been identified as memes is certainly not complete. There are, however, some important respects in which memes seem not to be like genes. Along with longevity and fecundity, copying fidelity is a key factor in the success of genes. In other words, ceteris paribus, the more accurate the copies it produces, the more successful a gene will be. As Dawkins himself concedes, however, it would seem on the face of it as if copying fidelity is not such an issue for memes. Memes passed from host to host tend to undergo a high degree of variation. Consider, for instance, jokes: although we can identify a joke as the same as it passes from person to person via retelling, each retelling will differ in fundamental ways from its predecessor. Perhaps more significantly, there seems to be no memetic equivalent of the distinction between genotype and phenotype, a distinction which is fundamental to Darwinian theories of biological evolution. The idea, in very rough terms, is that the biological properties of each individual organism, i.e., its phenotype, are a function of two factors: the organism's genotype and its environment. What passes genetically from individual to individual is not the phenotype, i.e., not the specific biological properties of the parent organism, but rather the genotype: if a man accidentally loses a finger, his child will not be born with nine fingers. However, it seems that meme replication involves replication of the phenotype: changes in a particular copy of a meme will be transmitted to copies of that copy. Again, this list of differences between genes and memes is incomplete. Nevertheless, it suggests that maybe we should be cautious before taking the parallel between memes and genes as anything more than a useful metaphor.

The Current State of Memetics

The meme meme has itself been hugely successful. Although its original champions, in particular Dawkins himself and Daniel Dennett, seem to have retreated to some extent, there is no shortage of supporters to take their place. In particular, Blackmore (1999) has defended a thesis on which both the human brain and the self are products of the replication of successful memes. Given its evolutionary advantages, there seems no reason to suppose that this particular meme will not continue to flourish. See also: Categorizing Percepts: Vantage Theory; Context and Common Ground; Conventions in Language; Cooperative Principle; Diminutives and Augmentatives; Face; Gender; Honorifics; Ideophones; Neo-Gricean Pragmatics; Politeness; Politeness Strategies as Linguistic Variables; Sound Symbolism; Synesthesia; Synesthesia and Language; Taboo Words.

Bibliography

Blackmore S (1999). The meme machine. Oxford: Oxford University Press.
Dawkins R (1976). The selfish gene. Oxford: Oxford University Press.
Dawkins R (1982). The extended phenotype. Oxford: Oxford University Press.
Dennett D (1990). 'Memes and the exploitation of imagination.' Journal of Aesthetics and Art Criticism 48, 127–135.
Sperber D (2000). 'An objection to the memetic approach to culture.' In Aunger R (ed.) Darwinizing culture: the status of memetics as a science. Oxford: Oxford University Press. 163–173.

Mentalese

F Egan, Rutgers University, New Brunswick, NJ, USA
© 2006 Elsevier Ltd. All rights reserved.

The Basic Hypothesis

Some theorists of mind have claimed that thought takes place in a language-like medium. They have called this language 'Mentalese.' Mentalese has a syntax, a semantics, and a morphology, though discovering these properties of the language of thought will likely require extensive empirical investigation of the mind. Obviously, mentalese does not have a phonology. It is therefore more like written public language than overt speech. And whereas public languages require a pragmatics – a theory of how the language is used by speakers – mentalese, like
the machine languages of computers, does not call for one. Gilbert Harman (1973) offered the following argument for the existence of mentalese: logical relations hold among mental states, and these relations are essential to their role in psychological prediction and explanation. If the belief that snow is white and grass is green is true, then the belief that snow is white is true. In general, if the belief that p & q is true, then the belief that p is true. Generalizations of this sort presuppose that beliefs have sentential structure. Some beliefs are conjunctions, others disjunctions, and so on. Beliefs (as well as desires, fears, and the other propositional attitudes) are part of a language-like system. Harman’s argument fails to establish that mental states themselves have logical or sentential structure. The argument trades on the fact that belief ascriptions have sentential structure. We ascribe certain beliefs to subjects using sentences that are conjunctive or disjunctive, but it does not follow that the mental states so ascribed are themselves conjunctions or disjunctions, or that the relations that hold among these mental states are of the sort that hold among sentences (or propositions), that is, that they are logical relations. To assume that they are is just to assume what is at issue – that thoughts have a language-like structure. In general, one must guard against attributing to thoughts themselves properties of the representational scheme that we use to talk about them. The hypothesis that thought occurs in a languagelike medium is understood as the claim that not only beliefs but also desires and the other propositional attitudes are properly construed as relations to sentences in the inner language. To believe that the conflict in the Middle East will not be resolved is to bear a relation to an inner sentence token that means the conflict in the Middle East will not be resolved. To fear that the conflict in the Middle East will not be resolved is to bear a different relation to an internal sentence of the same type. The difference between the believing-relation and the fear-relation is construed as a difference in the processing that the sentence token undergoes in the brain, in other words, as a difference in its functional role. The belief is likely to cause, in certain circumstances, sincere assertions of a public language sentence meaning the conflict in the Middle East will not be resolved. The fear is more likely to give rise to appropriate emotional states.
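
The functional-role story can be pictured schematically. The toy code below is my own illustration, not Fodor's or Egan's proposal; the 'dispositions' listed are invented placeholders for the different processing profiles of belief and fear.

# Same inner sentence type, different processing relations borne to it.
INNER_SENTENCE = "the conflict in the Middle East will not be resolved"

def believe(sentence):
    # Belief's functional role: disposes toward assertion and inference.
    return {'tokened': sentence, 'disposes_to': ['sincere assertion', 'inference']}

def fear(sentence):
    # Fear's functional role: disposes toward emotional responses instead.
    return {'tokened': sentence, 'disposes_to': ['anxiety', 'avoidance']}

belief_state = believe(INNER_SENTENCE)
fear_state = fear(INNER_SENTENCE)

assert belief_state['tokened'] == fear_state['tokened']          # one sentence type
assert belief_state['disposes_to'] != fear_state['disposes_to']  # two functional roles
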

What Is Mentalese Like?

The Thinker's Public Language, or a Proprietary Inner Code?

Some theorists have supposed that mentalese is just the thinker's own public language. English speakers think in English, Chinese speakers in Chinese. One might be inclined to this view by reflecting on the fact that many thoughts do not seem possible until the thinker has acquired a public language. Consider, for example, the thought that the leech of the genoa is curled. A subject cannot genuinely think this thought until she has acquired the concepts leech and genoa. Moreover, acquiring such concepts seems to require learning the appropriate public language terms for them, or more basic public language terms in which they can be defined. If mentalese is just the thinker's public language, then investigation of its properties is relatively straightforward. The syntax of an English speaker's mentalese is just English syntax; its semantics is just English semantics. Jerry Fodor, in his groundbreaking 1975 book, The language of thought, argued that thought takes place in a proprietary inner code; moreover, this inner language has the expressive power of any public language a thinker is capable of learning. According to Fodor, the process of language learning involves hypothesis formation and testing; in particular, the hypotheses have the form of 'truth rules' for the application of the public language terms. To learn the English term 'genoa' is to learn a rule of the form ''x is a genoa' is true if and only if x is P', where P is a predicate in the proprietary inner language that is coextensive with the English predicate 'genoa.' To learn a public language is to acquire a translation manual that pairs terms in the language of thought with their public language equivalents. On pain of regress, terms in the language of thought are not themselves learned. A consequence of Fodor's view is that the concept genoa – and electron, carburetor, and any other concept a thinker can possess – is either innate or definable in terms of other concepts that are themselves innate. Fodor argues (Fodor, 1981) that few concepts are so definable; hence the vast majority of a thinker's concepts are innate. Needless to say, many have found Fodor's extreme nativism unpalatable. His argument depends upon construing (public) language learning as hypothesis formation and confirmation, a process that requires an internal medium of representation, a language of thought where the hypotheses are couched. But there is nothing inevitable about explicit hypothesis formation and testing models of learning. If public language predicates were learned as a result of a causal process that is not construed as linguistic or inferential – a process sometimes known as 'triggering' – then the argument's nativist conclusion would not follow. Whatever their views on concept-nativism, most proponents of the view that thought takes place in a linguistic medium have followed Fodor in claiming that the language of thought is an inner neural code,
distinct from any public language. Accordingly, we will hereinafter construe the 'language of thought hypothesis' (LOT) as the view that thought takes place in a proprietary inner code.

Psycho-Syntax and Psycho-Semantics

If the language of thought is indeed an inner neural code, then discovering its syntax and semantics will require extensive empirical investigation of the mind. Characterizing the syntax of mentalese will involve a specification of a finite set of primitive objects (words), and a grammar or set of formation rules that describe the ways in which complex syntactic objects (sentences) may be built out of the primitives. The individuation of syntactic types will be functionally based, adverting to the causal roles of these objects in the subject's cognitive life. The so-called 'mental logic' research program that attempts to uncover the formal rules of inference underlying human deductive reasoning presupposes the existence of an innate syntax of thought and proposes to empirically investigate it. (See the papers in Braine and O'Brien, 1998.) Various candidate inference schemas have been offered, but no proposal is sufficiently detailed to generate empirically testable predictions regarding the underlying syntax. A full theory of mentalese also requires a 'psychosemantics' – an account of how internal sentences acquire their meaning. In virtue of what fact does a particular mental sentence mean snow is white rather than 2 + 2 = 4? The meanings of public language sentences are fixed by public agreement, or derive in some way from the mental states of the users of these sentences, but, on pain of circularity, the sentences of mentalese must acquire their meanings in some other way. Since the 1980s, there has been a proliferation of theories, mostly by philosophers, purporting to explain how mental representation is possible. Not all of these accounts presuppose LOT, but most, if not all, are compatible with it. Typically, such theories attempt to explain the semantic properties of thought while respecting a naturalistic constraint – they attempt to specify sufficient conditions, in nonsemantic and nonintentional terms, for a mental state's meaning what it does. (See Stich and Warfield, 1994 and Representation in Language and Mind.)
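
To make the shape of such a proposal concrete, here is a minimal sketch of a 'syntax of mentalese' with one formation rule and one candidate inference schema (conjunction elimination, echoing Harman's example above). The primitive symbols and the schema are illustrative assumptions only, not any theorist's actual inventory.

# Finite base of primitive symbols (the 'words').
ATOMS = {'SNOW_IS_WHITE', 'GRASS_IS_GREEN'}

# One formation rule: if p and q are sentences, so is ('AND', p, q).
def conj(p, q):
    return ('AND', p, q)

def is_sentence(x):
    if x in ATOMS:
        return True
    return (isinstance(x, tuple) and len(x) == 3 and x[0] == 'AND'
            and is_sentence(x[1]) and is_sentence(x[2]))

# Conjunction-elimination schema: from p AND q, infer p (and q).
def eliminate_conjunction(s):
    assert is_sentence(s) and s[0] == 'AND'
    return [s[1], s[2]]

belief = conj('SNOW_IS_WHITE', 'GRASS_IS_GREEN')
print(eliminate_conjunction(belief))
# ['SNOW_IS_WHITE', 'GRASS_IS_GREEN']
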

Further Arguments for LOT

Theories of Mental Processing Are Committed to LOT

Fodor (1975) reasoned that the only plausible models of mental processing are computational models, and that these require a medium of computation, that is, an inner system of representation. This argument has now been undermined by the existence of connectionist computational models. Connectionist machines are capable of performing cognitive tasks, but they lack fixed symbols over which computational operations are defined. Connectionist processes are not naturally interpretable as manipulations of internal sentences or data structures. If the mind is best characterized as a connectionist machine, or as an aggregate of such machines without an overarching executive control, then the LOT hypothesis is false.

LOT Explains Some Pervasive Features of Thought

Fodor (1987) argues that LOT provides the best, indeed, the only explanation of two pervasive features of thought. Thought is productive: we can think arbitrarily many thoughts. It is also systematic; cognitive capacities are systematically related. If a subject can think the thought John loves Mary, then he can think the thought Mary loves John. The explanation for the productivity and systematicity of thought is that thoughts have a language-like structure. We can think arbitrarily many thoughts for the same reason that we can utter arbitrarily many sentences. Thoughts, like sentences, are composed of a finite base of elements put together in regular ways, according to the rules of a grammar. The systematicity of thought is also explained by LOT: systematically related thoughts contain the same basic elements but are arranged differently. Whether the argument is successful depends on two issues: (1) whether productivity and systematicity are indeed pervasive features of thought; and (2) if they are, whether they can be accounted for without positing a language of thought. Thoughts are assumed to be productive because they are represented, described, and attributed by public language sentences, a system which is itself productive. However, as noted above, one must be careful not to attribute to thoughts themselves properties of the representational scheme that we use to talk about them. It would be a mistake to think that temperature is infinite because the scheme used to measure it, the natural numbers, is infinite. If thoughts are understood as internal states of subjects that are, typically, effects of external conditions and causes of behavior, then it is not obvious that there are arbitrarily many of them. The size of the set of possible belief-states of human thinkers, like the size of the set of possible temperatures of objects, is a matter to be settled by empirical investigation. Turning to systematicity, the argument falls short of establishing the existence of mentalese. In the first place, it is not clear how pervasive systematicity really is. It is not generally true that if a thinker can entertain a
proposition of the form aRb, then she can entertain bRa. One can think the thought the boy parsed the sentence, but not the sentence parsed the boy. Moreover, it is a matter of some dispute within the cognitive science community whether connectionist cognitive models, which do not posit a language of thought, might be capable of explaining the systematic relations that do hold among thoughts. (See MacDonald and MacDonald, 1995 for the classic papers on this issue, and Matthews, 1997 for further discussion). See also: Acquisition of Meaning by Children; Aristotle and Linguistics; Categorizing Percepts: Vantage Theory; Category-Specific Knowledge; Cognitive Semantics; Coherence: Psycholinguistic Approach; Human Reasoning and Language Interpretation; Ideational Theories of Meaning; Intention and Semantics; Lexical Conceptual Structure; Lexical Meaning, Cognitive Dependency of; Nominalism; Psychology, Semantics in; Representation in Language and Mind; Thought and Language; Virtual Objects.

Bibliography

Braine M & O'Brien D (eds.) (1998). Mental logic. Hillsdale, NJ: Lawrence Erlbaum Associates.
Egan F (1991). 'Propositional attitudes and the language of thought.' Canadian Journal of Philosophy 21, 379–388.
Field H (1978). 'Mental representation.' In Block N (ed.) Readings in the philosophy of psychology, vol. 2. Cambridge, MA: Harvard University Press. 78–114.
Fodor J A (1975). The language of thought. New York: Thomas Y. Crowell.
Fodor J A (1981). 'The present status of the innateness controversy.' In RePresentations: philosophical essays on the foundations of cognitive science. Cambridge, MA: MIT Press. 257–316.
Fodor J A (1987). 'Why there still has to be a language of thought.' In Psychosemantics. Cambridge, MA: MIT Press. 136–154.
Harman G (1973). Thought. Princeton, NJ: Princeton University Press.
MacDonald C & MacDonald G (1995). Connectionism: debates on psychological explanation. Oxford: Blackwell.
Matthews R J (1997). 'Can connectionists explain systematicity?' Mind and Language 12, 154–177.
Stich S P & Warfield T A (1994). Mental representation: a reader. Oxford: Blackwell.

Meronymy

M L Murphy, University of Sussex, Brighton, UK
© 2006 Elsevier Ltd. All rights reserved.

Meronymy (sometimes also called partonymy or the HAS-A relation) is the PART-OF relation. For example, page, cover, and spine are meronyms of book (in its physical-object sense) in that they are parts of books. The converse relation, that of whole to part, is sometimes called holonymy, but meronymy is often used to refer generally to the phenomenon of relatedness of expressions for wholes and parts. While meronymy is often mentioned, along with synonymy, antonymy, and hyponymy, in lists of semantic relations among words, lexicologists have traditionally paid it less attention than the other relations, as meronymy is not so clearly a linguistic relation. This is to say that the relation is not clearly a lexical relation (relating words), nor a sense relation (relating the meanings of words), but rather is a relation among the referents that the expressions denote. For instance, while a tail is a part of a dog, 'tail' is not necessarily part of the meaning of dog, nor 'dog' part of the meaning of tail. Recent changes in approaches to meaning have resulted in more attention to meronymy, and the relation is relevant to several applied linguistic endeavors. For instance, the PART-OF relation (like the TYPE-OF relation, hyponymy) is central to the creation of dictionary definitions. Furthermore, different kinds of meronym relations are often represented in lexical knowledge databases created for Natural Language Processing projects (e.g., WordNet – see Miller, 1998). Definitions, properties, and subtypes of meronymy are discussed in turn below, followed by discussion of its treatment in contemporary linguistics. The signs < and > are used here to indicate meronymy, with the holonym on the open side of the symbol and the meronym on the pointed side – e.g., bird>wing, finger<hand. Meronymy is usually defined by means of test frames such as 'A Y is a part of an X' and 'An X has a Y.' Thus bird>wing passes
these tests (A wing is a part of a bird; A bird has a wing) because the test frames express propositions that are considered to be generally true of birds (the existence of a few deformed birds notwithstanding). The particular test frames chosen affect what counts as meronymy to a particular theorist. For example, Winston et al. (1987) propose that meronym relations include any that can be expressed using the term 'part' or derivations from it, such as Y is partly X. This allows for relations such as metal<knife ('a knife is partly metal'). Subtypes of meronymy proposed in the literature include WHOLE>SEGMENT (year>month), WHOLE>FUNCTIONAL COMPONENT (computer>keyboard), COLLECTION>MEMBER (forest>tree, club>member), and WHOLE>SUBSTANCE (jar>glass). (See Iris et al., 1988, Chaffin, 1992, and Miller, 1998 for particular taxonomies.) Winston et al. (1987) base their taxonomy of meronym types on three binary features, which in different combinations describe the various types of meronymy: (+/-) FUNCTIONAL, (+/-) HOMEOMEROUS, and (+/-) SEPARABLE. If a relation is [+HOMEOMEROUS], then the part is made of the same kind of thing as the whole, as for slice<pie. In Meaning-Text Theory (MTT), part-whole notions are treated by means of lexical functions, yielding relations such as WHOLE>LEADER (ship>captain), COLLECTION>MEMBER (fleet>ship), and WHOLE>CULMINATION/CENTER (joke>punchline). MTT also allows for the combination of lexical functions (such as the above relations) to create additional relations.
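
The feature-based taxonomy lends itself to a simple lookup. In the sketch below, the assignment of feature triples to relation types follows my reading of Winston et al. (1987) and should be checked against the original paper; the example pairs are taken from the discussion above where available.

TAXONOMY = {
    # (FUNCTIONAL, HOMEOMEROUS, SEPARABLE) -> proposed relation type
    (True,  False, True):  'component-integral object (e.g., computer>keyboard)',
    (False, False, True):  'member-collection (e.g., forest>tree)',
    (False, True,  True):  'portion-mass (e.g., pie>slice)',
    (False, False, False): 'stuff-object (e.g., jar>glass)',
    (True,  False, False): 'feature-activity (e.g., shopping>paying)',
    (False, True,  False): 'place-area (e.g., Florida>the Everglades)',
}

def classify(functional, homeomerous, separable):
    return TAXONOMY.get((functional, homeomerous, separable), 'unclassified')

# A slice is the same kind of stuff as the pie, is separable from it,
# and has no functional role with respect to it:
print(classify(functional=False, homeomerous=True, separable=True))
# -> 'portion-mass (e.g., pie>slice)'
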

Meronymy 541

Properties of Meronymy

Because of the diversity of types of PART-OF relations and because the relations seem to relate referents rather than senses, little can be said in general about logical properties of meronymy. Unlike synonymy and antonymy, the relation is asymmetrical; that is, words are not each other's meronyms. The various types of meronymy vary in whether they are transitive or not. For example, body parts are in transitive meronymic relations, as in (1).

(1) An eye is a part of a face.
    A face is a part of a body.
    ∴ An eye is a part of a body.

But the cases in (2) and (3) seem less transitive.

(2) Bark is part of a tree.
    A tree is part of a forest.
    ?∴ Bark is part of a forest.

(3) The house has a door.
    The door has a handle.
    ?∴ The house has a handle. (Lyons, 1977)

Transitivity fails in (2) because the meronym relations in the premises are of different subtypes: bark<tree is a WHOLE>SUBSTANCE relation, whereas tree<forest is a COLLECTION>MEMBER relation, as the sketch below illustrates.
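
One way to capture this restriction computationally is to license chaining only within a single subtype. The following sketch is my own illustration, with an invented toy fact base.

FACTS = {
    ('eye', 'face'):    'component',
    ('face', 'body'):   'component',
    ('bark', 'tree'):   'substance',
    ('tree', 'forest'): 'member',
}

def part_of(part, whole):
    if (part, whole) in FACTS:
        return True
    # Chain two links only if both carry the same relation subtype.
    for (p, w), rel in FACTS.items():
        if p == part and (w, whole) in FACTS and FACTS[(w, whole)] == rel:
            return True
    return False

print(part_of('eye', 'body'))    # True: component + component, as in (1)
print(part_of('bark', 'forest')) # False: substance + member do not chain, as in (2)
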

a. a kind of animal [M]
b. they live with people
   sometimes they live in places where people live
   sometimes they live near places where people live
c. they are not big
   a person can pick up [M] one with two hands [M]
d. they have soft [M] fur [M]
   they have a round [M] head [M]
   their ears [M] stick out [M] on both sides of the top part of the head [M], they are pointed [M]
   their eyes [M] are not like people's eyes, they are shiny [M]
   they have some stiff [M] hairs [M] near the mouth [M], they stick out [M] on both sides of the mouth [M]
   they have a long [M] tail [M]
   they have soft [M] feet [M]
   they have small sharp [M] claws [M]

Two kinds of semantic molecules predominate in the explication in (8): body-part terms (hands, head, ears, eyes, mouth, hairs, fur, feet, tail, claws) and physical descriptors of various kinds (long, round, pointed, sticking out, soft, stiff, sharp, shiny). To these two groupings, we can add bodily actions and postures (eat, climb, jump, fight, etc.) that are needed in subsequent sections of the cats explication; obviously they will require the use of body-part terms as semantic molecules. If we now look into the semantics of body-part terms (see Cognitive Semantics), it emerges that certain shape descriptors are required as molecules in these explications. For example, head (in the sense of a 'human person's head') requires the shape descriptor 'round [M],' and words such as arms, legs, and tail require 'long [M].' It would be incorrect to assume that shape descriptors are semantically more basic than all body-part terms, however, because one human body part, namely hands, is necessary in the explication of shape descriptors themselves. This is because shape descriptors designate properties that are both visual and tangible, and to spell out the nature of the latter concept requires the semantic prime TOUCH (contact) and the semantic molecule 'hands [M].' For example:

(9) something long (e.g., a tail, a stick, a cucumber)
    when a person sees this thing this person can think about it like this:
    'two parts of this thing are not like any other parts
    because one of these two parts is very far from the other'
    if a person's hands [M] touch this thing everywhere on all sides
    this person can think about it in the same way

From an experiential point of view the importance of the semantic molecule 'hands [M]' is perhaps not so surprising. The experience of handling things, of touching them with our hands and moving our hands in an exploratory way plays a crucial role in making sense of the physical world and in our construal of the physical world. It turns out that, unlike many other body-part words, hands can be explicated directly in terms of semantic primes, although space prevents us from demonstrating this here. Although shape descriptors such as 'long [M]' are only of a moderate level of semantic complexity, it is interesting that to ordinary intuition they appear so basic and unanalyzable. Presumably this is because they are formed very early in childhood and are subsequently incorporated into the molecular substructure of so many other concepts. Terms for most natural kinds and artifacts are semantically extremely complex, both because they incorporate numerous semantic molecules and because they encapsulate tremendous amounts of cultural knowledge. For natural kinds, this applies in
particular to terms for animal species with which people have close relationships, so to speak, either currently or in historically recent times. English cat, dog, mouse, and horse, for example, are far richer in conceptual terms than moose or kangaroo. Similarly, artifact terms can differ in their amount of conceptual information depending on the complexity of the function or functions they are intended to serve, including their degree of cultural embeddedness. For example, a word such as knife is simpler than one such as cup because cup involves a great deal more social information.
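
The layering of molecules over primes can be pictured as a small dependency structure. The following toy code is not drawn from the NSM literature: the mini-lexicon radically abridges the real explications and serves only to show how molecules ([M]) bottom out in primes after finitely many steps.

PRIMES = {'SOMETHING', 'PART', 'TWO', 'FAR', 'SEE', 'TOUCH'}

LEXICON = {
    # word: the items its (abridged) explication draws on
    'hands': {'PART', 'TWO', 'TOUCH'},               # explicable directly in primes
    'long':  {'SOMETHING', 'FAR', 'SEE', 'hands'},   # uses the molecule 'hands'
    'tail':  {'PART', 'long'},                       # uses the molecule 'long'
}

def expand(word, depth=0):
    """Print the molecule chain under a word until primes are reached."""
    for item in sorted(LEXICON.get(word, set())):
        if item in PRIMES:
            print('  ' * depth + item + ' (prime)')
        else:
            print('  ' * depth + item + ' [M]')
            expand(item, depth + 1)

expand('tail')
# 'tail' rests on 'long [M]', which rests on 'hands [M]',
# which bottoms out in primes: a finite chain, not a circle.
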

Other Uses of Semantic Primes

The metalanguage of semantic primes can be used not only for lexical and grammatical semantics, but also as a notation for writing cultural scripts, that is, hypotheses about culturally shared assumptions, norms, and expectations that help regulate interaction in different cultural settings. By using simple cross-translatable expressions, this approach to cultural pragmatics (or ethnopragmatics) avoids the implicit terminological ethnocentrism of conventional approaches that rely on complex technical and/or English-specific descriptors. Cultural scripts written in semantic primes aim to present an insider perspective on cultural norms in terms that are accessible to outsiders. The NSM metalanguage can also be used to spell out the meanings conveyed by nonverbal signals, such as facial expressions, gestures, and body postures. See also: Causatives; Cognitive Semantics; Componential Analysis; Definition in Lexicology; General Semantics; Generative Lexicon; Human Reasoning and Language Interpretation; Ideational Theories of Meaning; Lexical Conceptual Structure; Lexical Fields; Lexical Meaning, Cognitive Dependency of; Lexical Semantics; Mentalese; Polysemy and Homonymy; Prototype Semantics; Stereotype Semantics.

Bibliography

Ameka F (1999). 'Partir, c'est mourir un peu: universal and culture-specific features of leave taking.' RASK, International Journal of Language and Communication, Special Edition on 'E Pluribus Una': the One in the Many 9–10, 257–284.
D'Andrade R (2001). 'A cognitivist's view of the units debate in cultural anthropology.' Cross-Cultural Research 35(2), 242–257.
Durst U (2003). 'The natural semantic metalanguage approach to linguistic meaning.' Theoretical Linguistics 29(3), 157–200.
Goddard C (1998). Semantic analysis: a practical introduction. Oxford: Oxford University Press.
Goddard C (2001a). 'Lexico-semantic universals: a critical overview.' Linguistic Typology 5(1), 1–66.
Goddard C (2001b). 'Sabar, ikhlas, setia – patient, sincere, loyal? A contrastive semantic study of some "virtues" in Malay and English.' Journal of Pragmatics 33, 653–681.
Goddard C (2002). 'Ethnosyntax, ethnopragmatics, sign-functions, and culture.' In Enfield N J (ed.) Ethnosyntax: explorations in grammar and culture. Oxford: Oxford University Press. 52–73.
Goddard C (2003). 'Whorf meets Wierzbicka: variation and universals in language and thinking.' Language Sciences 25(4), 393–432.
Goddard C & Wierzbicka A (eds.) (1994). Semantic and lexical universals – theory and empirical findings (vols I and II). Amsterdam: John Benjamins.
Goddard C & Wierzbicka A (eds.) (2002). Meaning and universal grammar – theory and empirical findings. Amsterdam: John Benjamins.
Goddard C & Wierzbicka A (eds.) (2004). Intercultural Pragmatics 1(2). Special issue on cultural scripts.
Harkins J & Wierzbicka A (eds.) (2001). Emotions in crosslinguistic perspective. Berlin: Mouton de Gruyter.
Hasada R (2002). '"Body part" terms and emotion in Japanese.' Pragmatics and Cognition, Special Edition on The Body in the Description of Emotion 10(1), 107–128.
Junker M-O (2003). 'A Native American view of the "mind" as seen in the lexicon of cognition in East Cree.' Cognitive Linguistics 14(2–3), 167–194.
Langford I (2000). 'Forensic semantics: the meaning of murder, manslaughter and homicide.' Forensic Linguistics 7(1), 72–94.
Peeters B (2000). 'S'engager vs. to show restraint: linguistic and cultural relativity in discourse management.' In Niemeier S & Dirven R (eds.) Evidence for linguistic relativity. Amsterdam: John Benjamins. 193–222.
Travis C (1998). 'Omoiyari as a core Japanese value: Japanese-style empathy?' In Athanasiadou A & Tabakowska E (eds.) Speaking of emotions: conceptualization and expression. Berlin: Mouton de Gruyter. 55–82.
Travis C (2003). 'The semantics of the Spanish subjunctive: its use in the natural semantic metalanguage.' Cognitive Linguistics 14(1), 47–69.
Wierzbicka A (1972). Semantic primitives. Frankfurt: Athenäum.
Wierzbicka A (1980). Lingua mentalis: the semantics of natural language. Sydney/New York: Academic Press.
Wierzbicka A (1985). Lexicography and conceptual analysis. Ann Arbor, MI: Karoma.
Wierzbicka A (1987). English speech act verbs: a semantic dictionary. Sydney/New York: Academic Press.
Wierzbicka A (1992). Semantics: culture and cognition. Oxford: Oxford University Press.
Wierzbicka A (1996). Semantics: primes and universals. Oxford: Oxford University Press.
Wierzbicka A (1997). Understanding cultures through their key words. Oxford: Oxford University Press.
Wierzbicka A (1999). Emotions across languages and cultures. Cambridge, UK: Cambridge University Press.
Wong J (2004). 'The particles of Singapore English: a semantic and cultural interpretation.' Journal of Pragmatics 36, 739–793.
Ye Z (2004). 'The Chinese folk model of facial expressions: a linguistic perspective.' Culture and Psychology 10(2), 195–222.
Yoon K-J (2004). 'Korean maum vs. English heart and mind: contrastive semantics of cultural concepts.' In Moskosky C (ed.) Proceedings of the 2003 Conference of the Australian Linguistics Society.

Relevant Website

http://www.une.edu.au/bess/linguistics/nsm/ – This site contains information and resources about the NSM approach to semantic analysis.

Natural versus Nonnatural Meaning

A Barber, The Open University, Milton Keynes, UK
© 2006 Elsevier Ltd. All rights reserved.

Grice’s Distinction H. P. Grice noticed a potentially confusing lexical ambiguity in the word ‘means’ (see Grice, 1957). Contrast claim (1) with claim (2): (1) The flooded river means that the mountain snow is melting (2) The sentence ‘La neige des montagnes fond’ means that the mountain snow is melting

If it turned out that the mountain snow was not melting, then claim (1) would have to be withdrawn, whereas claim (2) would not. The French sentence in (2) would mean what it does even if it was untrue. Grice distinguishes what he calls natural meaning, which is what 'means' means in the context of a claim such as (1), from nonnatural meaning, invoked in (2). He offers a range of criteria for distinguishing the two kinds of meaning – 'meaningN' and 'meaningNN' as he abbreviates them – but the key difference is sometimes put in terms of meaningN, unlike meaningNN, being factive: what is meantN can only be a fact, whereas what is meantNN need not be. Sentences are not the only kind of entity that may be said to have meaningNN. Others include people, as when we say that so-and-so meant something; utterances of a sentence as opposed to the sentence itself; and mental states, such as beliefs, which have representational content without necessarily representing accurately. The distinction seems to be available no matter what one takes to be the primary bearers of meaningNN. Grice stressed this distinction because he was interested in nonnatural meaning, which he deemed to be harder to pin down and so more in need of investigation than natural meaning. Natural meaning is simply a relation of necessitation between one event and another, for example a causal relation such as the one alleged in (1). The investigation of nonnatural meaning is handicapped, he felt, by confusions that arise through failure to see the distinction between it and meaningN. In his 1957 paper, he criticizes the attempt to understand meaningNN in terms of meaningN, and then appeals to the distinction in the course of blocking objections to his own theory of meaningNN.
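
The factivity criterion can be stated almost mechanically. The sketch below is my own gloss, not Grice's formalism; the toy 'facts' table and the convention mapping are invented for the illustration.

facts = {'the mountain snow is melting': False}

def means_N(event, p):
    # Natural meaning is factive: the claim stands only if p is a fact.
    return facts.get(p, False)

def means_NN(expression, p, convention):
    # Nonnatural meaning is fixed by the convention, not by the facts.
    return convention.get(expression) == p

convention = {'La neige des montagnes fond': 'the mountain snow is melting'}

print(means_N('flooded river', 'the mountain snow is melting'))   # False: claim (1) is withdrawn
print(means_NN('La neige des montagnes fond',
               'the mountain snow is melting', convention))       # True: claim (2) survives
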

Grice’s Theory of Non-natural Meaning Grice’s distinction is acknowledged even by those who do not endorse his theory of meaningNN. But this latter has been influential nonetheless (see Schiffer, 1972 for a development of it, and Schiffer, 1987 for a retraction). The meaningNN of an act, he suggests, turns entirely on the intention of its agent. Not all acts have meaningNN, but those that do are performed with a triplex of intentions. First, the agent intends to bring about an effect of some kind in the mind of her audience – a change of belief, for example. Second, she intends to be recognized by this audience as having this first intention. Third, she intends the effect to arise through this recognition. If an act is performed with intentions of this form, then the intended effect is said by Grice to be the act’s meaningNN (or, equivalently, it is what the agent meantNN by the act; or again, it is what the expression used meansNN on that occasion). In practice, recognition of the intention is usually achievable only because of the close association between acts typed according to whether they share the same meaningNN and acts typed according to an overt, recognizable pattern. In the linguistic case, for example, typing according to the sentence uttered will usually group together utterances with the same meaningNN. Shared knowledge of which sentences are paired with which meaningsNN is what makes individual acts of meaningNN possible in the first place. This leads Grice to introduce a derivative notion, timeless meaningNN, which is the meaning associated with the

Negation 617

repeated use of a particular pattern. Sometimes this timeless meaningNN of a sentence is distinct from the meaningNN of an utterance of it on an occasion. Someone could use the sentence I know where you live to meanNN something distinct from what it timelessly meansNN. It is no coincidence that Grice was among the first to notice and develop a theory of the phenomenon of implicature (see Grice, 1975).
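
The triplex of intentions suggests a simple schematic test: an act has a meaningNN only when all three intention conditions hold. The rendering below is schematic and my own, not a formalization Grice himself offers; the field names are invented.

def meaning_NN(act):
    i1 = act.get('intends_effect') is not None         # intends to produce an effect
    i2 = act.get('intends_recognition_of_i1', False)   # intends that intention be recognized
    i3 = act.get('effect_via_recognition', False)      # intends the effect to arise through that recognition
    return act['intends_effect'] if (i1 and i2 and i3) else None

utterance = {
    'intends_effect': 'audience believes the snow is melting',
    'intends_recognition_of_i1': True,
    'effect_via_recognition': True,
}
print(meaning_NN(utterance))  # the intended effect is what is meantNN
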

Other Remarks

Though the existence of a distinction between natural and nonnatural meaning is clear once pointed out, it is easy to become confused over its precise nature. After all, utterances are events and, as such, have meaningN as well as meaningNN. And (3) is true on both readings of 'means':

(3) The canyon-dweller's shout of 'Here comes an echo' meant that we would shortly hear an echo

Moreover, the utterances of an omniscient truth-teller – God, for example – meanN whatever they meanNN. Efforts by philosophers of mind and language in the 1980s to understand systems of representation naturalistically (see Millikan, 1984; Dretske, 1988) may shed light on these cases. A distinction is sometimes drawn between what an instance of mental representation R indicates and what it represents. What it indicates (or what information it carries) is understood entirely in terms of the causal relations instances of R tend to enter into – something like meaningN. It represents not what it indicates but what it is the function of instances of R to indicate, within the representational system as a whole. So when the representational system, i.e., the mind or some part of it, is functioning properly, an instance of R will indicate what it represents. Misrepresentation is a matter of functional breakdown in the system, resulting in instances of R that fail to indicate what they represent, i.e., what it is their function to indicate. An omniscient truth-teller can be thought of as a properly functioning representational system, which is why God's utterances meanN (or indicate) what they meanNN (or represent). See also: Context and Common Ground; Cooperative Principle; Evolution of Semantics; Expression Meaning vs Utterance/Speaker Meaning; Factivity; Implicature; Intention and Semantics; Meaning, Sense, and Reference; Representation in Language and Mind.

Bibliography

Dretske F (1988). Explaining behavior: reasons in a world of causes. Cambridge: MIT Press.
Grice H P (1957). 'Meaning.' Philosophical Review 66, 377–388.
Grice H P (1975). 'Logic and conversation.' In Cole P & Morgan J (eds.) Syntax and semantics, vol. 3. London: Academic Press. Reprinted in Grice H P, Studies in the way of words. Cambridge: Harvard University Press, 1989.
Millikan R G (1984). Language, thought, and other biological categories: new foundations for realism. Cambridge: MIT Press.
Schiffer S R (1972). Meaning. Oxford: Clarendon Press.
Schiffer S (1987). Remnants of meaning. Cambridge: MIT Press.

Negation

R van der Sandt, Radboud University, Nijmegen, The Netherlands
© 2006 Elsevier Ltd. All rights reserved.

Classical and Nonclassical Negation

In logic, negation is an operation that applies to a sentence (or, more generally, a formula), thus yielding a new sentence. The sentence thus obtained is called the negation (or, in some authors, the denial) of the sentence negated. The negation sign is a logical operator, which is written in front of the relevant formula. The negation of a sentence φ is – depending on the notation convention adopted – written as ¬φ, ∼φ, −φ, φ̄, or Nφ. In Frege (1879), we find a small vertical stroke attached to the lower side of the content stroke. In his notation, the judgment that φ is not the case is thus expressed by attaching this small negation stroke to the content stroke of φ. In a classical two-valued logic (which allows exactly two truth values and moreover takes for granted that every sentence has a truth value), there is only one possible semantic definition of negation: the negation operator is a function that maps truth to falsity and falsity to truth. We may, however, deviate from classical semantics in various ways. One way is to question the assumption that every sentence has a truth value; another way is to allow for more than two truth values. Either way comes down to rejecting the principle of bivalence. The first strategy gives rise to so-called partial logics, the second to multivalent
logics. A partial logic thus allows a sentence to have no value and it is common to refer to this third possibility as a truth value gap. In multivalent logics, we may define several negation operators; what their truth tables look like will depend on the intended interpretation of the nonclassical values. In a threevalued logic, a sentence that is not true may either be false or have the third truth value. In this view, falsity is a subcategory of what is not true. An alternative account takes the third value as a special way of being not false. Finally, we may see the third value as independent of both truth and falsity. We thus may interpret the third value as a special way in which a sentence may go wrong and different interpretations will generally give rise to different truth tables. Nonclassical accounts of negation go back to Aristotle. Aristotle argued (in his famous ‘sea battle’ argument) that if the sentence There will be a sea battle tomorrow were now either true or false, we would be committed to fatalism. Lukasiewics concluded that such contingent future tense sentences are neither true nor false and consequently rejected bivalence. He introduced a third truth value to be interpreted as indeterminate or contingent, which naturally yields a truth table that maps truth to falsity, falsity to truth, and the third value to the third value. The Russian logician Bochvar gave another motivation for three-valued logics, arguing that it allows us to avoid logical paradoxes. Other writers have given various linguistic arguments for such deviations from classical logic. Three cases that received quite a bit of attention in the linguistic literature are category mistakes, vagueness, and presupposition failure. Examples of category mistakes are Carnap’s Caesar is a prime number or Chomsky’s Colorless green ideas sleep furiously. Such sentences do not ascribe properties to things they do not have (which would result in falsity), but properties they cannot possibly have, thus yielding absurdity. And, since this holds for both the unnegated sentence and its negation, it gives a truth table that maps the third value (interpreted as nonsensical) on itself. The problem is closely related to the problem of presupposition failure (see below). Vague predicates comprise predicates such as small, beautiful, long, etc., that is predicates that apply to a thing or set of things to a certain extent. In some cases, it is simply unclear whether a predicate applies to an object. An example is Austin’s France is hexagonal. Vague terms admit borderline cases, cases in which it cannot be uniquely determined whether a concept falls under the term or not. Three-valued logics are not of much help in solving the problem, since they replace the classic dichotomy between true and false with a strict trichotomy between true, false, and indeterminate, which yields results that are

as counterintuitive as those that result from a bivalent approach. The most popular accounts are the so-called fuzzy logics introduced in the mid 1960s (Zadeh, 1965) and the method of supervaluations (Van Fraassen, 1966). Fuzzy logics accept an infinite number of truth values (a real number in the interval [0, 1]). The idea is that the closer the value is to 1, the 'truer' the sentence is. The value of a negated sentence is taken to be 1 minus the value of the unnegated sentence. Assuming that vague sentences get their value somewhere in the middle of the interval, it follows that if a sentence is maximally vague, so is its negation. The same holds for conjunctions and disjunctions of vague sentences. Fuzzy logics have been objected to on a number of counts. Most importantly, people have objected to the artificiality of assigning precise numerical values to the degree to which an object is small or beautiful.

An alternative account is found in the method of supervaluations. Supervaluations – which were originally developed to account for presupposition failure – differ from multivalent logics in that they accept only two truth values but allow truth-value gaps. Just as in fuzzy logics, the negation of a vague sentence will come out as vague; but in contrast to fuzzy logics, formulas that are classically valid (or contradictory) will always come out as true or false no matter what the values of their atomic components are. A comparison between the approaches and a linguistic application is found in Kamp (1975).

Some of the best-known deviations from classical logic originate from presupposition theory. For Frege (1892) and Strawson (1952), presuppositions act as preconditions a sentence should satisfy in order to be true. Strawson and many other authors took preservation under negation as the defining characteristic. Thus (1b) seems to suggest as strongly as its unnegated counterpart (1a) that (1c) is true.

(1a) The king of France is bald.
(1b) The king of France is not bald.
(1c) France has a king.

This phenomenon is general and seems to hold for all other lexical items that are said to induce presuppositions:

(2a) John quit smoking.
(2b) John didn't quit smoking.
(2c) John used to smoke.

Inspired by Strawson, the so-called neo-Strawsonians turned preservation under negation into a definition of semantic presupposition:

(3) A sentence φ presupposes ψ just in case (i) φ entails ψ; and (ii) ¬φ entails ψ.


If we assume that negation maps truth to falsity and vice versa, and if we also adopt the standard definition of semantic entailment (φ entails ψ iff for any model M, if [[φ]]M = 1 then [[ψ]]M = 1), (3) is equivalent to

(4) φ presupposes ψ just in case (i) in any model in which [[φ]]M = 1 it holds that [[ψ]]M = 1; and (ii) in any model in which [[φ]]M = 0 it holds that [[ψ]]M = 1.

The latter simply comes down to the idea that if a sentence φ presupposes a sentence ψ, φ cannot have one of the classical values true or false unless ψ is true. In a classical two-valued logic, this would predict that sentences can only presuppose tautologies. This forces the theorist either to allow for truth-value gaps or, alternatively, to introduce a third value to account for presupposition failure. If we allow truth-value gaps, compositionality predicts that if the unnegated sentence has no value, its negated counterpart will not have a value either. In a three-valued logic, this is captured by adopting the following truth table for negation (* indicates the third value):

(5)
 φ    ¬φ
 1    0
 0    1
 *    *

In conjunction with definition (3), this correctly predicts that presuppositions are preserved under negation. Russell pointed out that (1b) has a second reading according to which this sentence is true simply because there is no king of France. In his analysis, the difference between the two readings is cashed out in terms of scope. In addition to the 'standard' interpretation, which is obtained by giving negation narrow scope and thus yields the formula (6a), we obtain the second reading by giving the negation scope over the full formula as in (6b):

(6a) ∃x∀y[[KF(x) ↔ x = y] ∧ ¬Bald(y)]
(6b) ¬∃x∀y[[KF(x) ↔ x = y] ∧ Bald(y)]

This retains classical two-valued negation and eliminates presuppositions. It also illustrates the idea, found in Frege, that negation can always be represented as the standard operator suitably placed in the logical representation. Russell’s theory eliminates presuppositions. However, the difference between a presupposition preserving and presupposition canceling interpretation can also be obtained in a three-valued logic by defining a

second negation operator ∼φ that maps the third value onto truth:

(7)
 φ    ∼φ
 1    0
 0    1
 *    1

We thus recognize two kinds of nontruth: nontruth as plain falsity and nontruth as a result of presupposition failure. The operator given in (5) is known as the weak Kleene negation or the internal Bochvar connective; the operator defined in (7) is the same as the external Bochvar negation (also called the denial operator).
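The behavior of the two operators is easy to state procedurally. The following sketch is my own illustration, not part of the original article; it uses Python's None for the third value * and shows how the external (denial) negation cancels a presupposition failure while the weak Kleene negation preserves it:

    def kleene_neg(v):
        """Weak Kleene / internal Bochvar negation, truth table (5):
        1 -> 0, 0 -> 1, and the third value maps to itself."""
        return None if v is None else 1 - v

    def bochvar_neg(v):
        """External Bochvar (denial) negation, truth table (7):
        1 -> 0, 0 -> 1, and the third value maps onto truth."""
        return 1 if v is None else 1 - v

    # 'The king of France is bald' when France has no king: value is *
    v = None
    print(kleene_neg(v))   # None -- (1b) inherits the presupposition failure
    print(bochvar_neg(v))  # 1    -- (1b) comes out true, Russell-style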

Negation and Polarity

Natural language contains a class of expressions whose distribution is limited to contexts that feel negative. Such items are called negative polarity items (NPIs). NPIs come in various types and syntactic categories: quantifiers and quantifier phrases (any, any book), verbs (bother to), factives (mind that, matter that), adverbs (ever, yet), prepositions (until), modals (need), idioms (give a damn, lift a finger), and many more. Thus the a sentences below, where we find the NPIs any and ever in the scope of a negative expression, are fine, but the b sentences are unacceptable.

(8a) Mary didn't find any coin.
(8b) *Mary found any coin.
(9a) No one ever squared the circle.
(9b) *Somebody ever squared the circle.

In addition to NPIs, we find a class of positive polarity items (PPIs). Positive polarity items typically occur in non-negative contexts. Just like NPIs, they belong to various syntactic categories. Examples of PPIs are quantifiers like each and some, factives like be delighted that, adverbs like already, and a rich variety of other expressions. The following examples show that PPIs shun negation:

(10a) Harry is already serving breakfast.
(10b) *Harry is not already serving breakfast.
(11a) Everybody inspected each package.
(11b) *Nobody inspected each package.

Contexts that license NPIs, however, need not contain overt negative elements. Thus negative polarity elements may be allowed in, for example, conditionals and before-clauses.

(12a) If you find any clue, you should report it to the police.
(12b) Mary left before John *already/even arrived.


In addition, some types of contexts only allow certain subclasses of polarity items. The central problem is thus to give a general characterization of the classes of contexts that license NPIs and PPIs, and to relate different subclasses of polarity elements to the contexts that license them. On the question of why natural language contains such items, and of what semantic properties make them behave as they do, there is no agreement whatsoever. Though most authors assume that there is a general mechanism that governs all such items, they also assume that positive or negative polarity is at least in part a conventional feature that is given in the lexicon.

Klima (1964) called contexts that license NPIs 'affective' contexts. Affective contexts in Klima's sense are contexts that are either explicitly negative or contain an underlying negative element NEG. Related views are put forward by Baker (1970) and Linebarger (1980), who claim that NPIs are licensed by negative contexts but that the negative element need not surface. Baker requires that the licensing context be either negative or entail a negative sentence. Linebarger states that an NPI may be licensed by an overt negative element or a negative implicature. Ladusaw (1979) was the first to propose a semantic characterization of negative contexts as NPI licensers. According to Ladusaw, an expression acts as a trigger for NPIs if it denotes a downward entailing function. The central notion is downward entailment or downward monotonicity. The definition runs as follows:

Most authors agree that Ladusaw's generalization is basically right when taken as a necessary condition (e.g., Van der Wouden): in order to license an NPI, the context should have the property of downward monotonicity. Zwarts (1986), Hoeksema (1983), and van der Wouden (1997) show that the issue is more complicated, however. Not all NPIs are licensed in all downward monotonic contexts. As a consequence, they distinguish between weak and strong NPIs (Zwarts) or between weak, medium, and strong ones (Van der Wouden). Weak NPIs conform to Ladusaw's generalization; NPIs of medium strength and strong ones require contexts that satisfy additional algebraic properties. The picture that arises from this work is that polarity licensing of both NPIs and PPIs is not a homogeneous phenomenon. There is an additional complication. According to the characterization given above, putting a PPI in a negative context yields ungrammaticality. However, such constructions often allow for a special interpretation that suggests that the corresponding nonnegative sentence has been uttered before. Seuren (1985) calls this the 'echo' effect and takes it as an argument for the existence of a special presupposition-canceling negation operator. For Horn (1985, 1989), it is a diagnostic for a special use of negation he calls metalinguistic. Thus (15) is acceptable as a reaction to the utterance of Mary is still in Paris.

A function f is downward entailing (monotone decreasing, downward monotonic) iff for all X and Y: if X ⊆ Y, then f(Y) ⊆ f(X).

It is implausible, though, that this phenomenon has anything to do with the formal properties of negation or negative context. We observe the same effect when NPIs are put in uncontroversially positive contexts.

We can test for downward monotonicity by checking whether a context validates inferences from sets to subsets. Given that ||chickpeas|| ⊆ ||peas||, the inference from (13a) to (13b) holds:

(13a) Mary doesn't eat peas.
(13b) Mary doesn't eat chickpeas.

And this predicts that (14a), where we find the NPI any in a downward entailing environment, is fine, though (14b) is unacceptable.

(14a) Mary doesn't eat any peas.
(14b) *Mary does eat any peas.

The converse property is upward entailment or upward monotonicity. A function f is upward entailing (upward monotonic) iff for all X and Y: if X ⊆ Y, then f(X) ⊆ f(Y).

The inference from Some Texans eat chickpeas to Some Texans eat peas is valid, which shows that Some Texans is upward entailing.
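These monotonicity tests can be made concrete by treating a context as a function from predicate denotations (sets) to truth values. The following sketch is my own illustration, not from the article, with toy denotations chosen for the purpose; it spot-checks the two patterns on the chickpeas/peas pair rather than proving monotonicity for all X and Y:

    # Toy denotations: ||chickpeas|| is a subset of ||peas||.
    peas = {"pea", "chickpea"}
    chickpeas = {"chickpea"}

    mary_eats = {"pea"}                    # what Mary eats
    texans_eat = [{"pea"}, {"chickpea"}]   # what each of some Texans eats

    def mary_doesnt_eat(X):
        """'Mary doesn't eat X': true iff Mary eats nothing in X."""
        return mary_eats.isdisjoint(X)

    def some_texans_eat(X):
        """'Some Texans eat X': true iff some Texan eats something in X."""
        return any(diet & X for diet in texans_eat)

    # Downward entailing: truth at the set (peas) carries over to the
    # subset (chickpeas), cf. the inference from (13a) to (13b).
    assert (not mary_doesnt_eat(peas)) or mary_doesnt_eat(chickpeas)

    # Upward entailing: truth at the subset carries over to the superset.
    assert (not some_texans_eat(chickpeas)) or some_texans_eat(peas)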

(15) Mary is NOT still in Paris. She never intended to be there and went to London instead.

(16) It DOES matter that my bunny is ill.

We return to the phenomenon of polarity reversal at the end of the last section.

Negation Versus Denial

The negation of a sentence is sometimes called its denial. It is, however, important to strictly separate the concepts of denial and negation. The concept of denial belongs to speech act theory. In this respect, denials are on a par with assertions. And, in contrast to negation, denials – just like assertions – should not be characterized in terms of truth and falsity but in functional terms. The primary function of assertions is to introduce new information, that is, information that is not already taken for granted by the participants in a discourse. Denials differ in this respect. Their essential function is to object to information that has already been introduced before or is in some


sense taken for granted. In doing so, a denial will have the effect of removing information from the discourse record. Despite being overtly negative, (17a) and (17b) will, if processed as the first sentence of a discourse, not be interpreted as denials but as assertions of negative sentences.

(17a) Mary is unhappy/not happy.
(17b) It does not matter that Mary read your letters.

Note, moreover, that a denial need not contain a negative morpheme. If uttered as a reaction to (17a) or (17b), B-b and B-c below will not be interpreted as assertions but as denials of the utterances they object to. The utterance by means of which the denial is performed may thus be of a negative or positive form. This shows that the concept of denial is logically independent of the concept of negation.

The difference in status of the concepts of denial and negation gave rise to a view according to which denials can be semantically characterized in terms of assertion and negation. Following Frege, Austin, Searle, and Dummett claimed that denials can be semantically analyzed as assertions of negative sentences. Further pragmatic effects that distinguish denials from assertoric utterances must be accounted for independently. Frege (1918: 149; Geach, 1977: 40) states: "People speak of affirmative and negative judgments; even Kant does so [. . .] For logic at any rate such a distinction is wholly unnecessary; its grounds must be sought outside logic." A linguist like Givón (1978) implicitly adopts the view that negation in language should be analyzed as a logical sign, which operates on sentences, but points out that denials have, in addition to their logical aspect, a pragmatic function that should be treated independently. Thus, though denials should logically be treated as negations of positive sentences, they constitute a different speech act from assertions: "While the latter are used to convey new information on the presumption of ignorance of the hearer, negatives are used to correct misguided belief on the assumption of the hearer's error" (Givón, 1978).

However, Frege also points out that it is not easy to make sense of the interpretation of denials as negative statements: "It is by no means easy to state what is a negative judgment (thought). Consider the sentences 'Christ is immortal', 'Christ lives for ever', 'Christ is not immortal', 'Christ does not live for ever'. Now which of the thoughts here is affirmative, which negative?" (Frege, 1918: 150; Geach, 1977: 41). The central problem is that a treatment of the negation sign as a sign of denial forces us to introduce a further operator to account for negatory or denial force, in addition to the negation operator. Clearly, no operator that would go or fuse with Frege's assertion sign can

be given a sensible interpretation in an embedded environment. Though an interpretation could be given to such an operator for simple sentences in an unembedded environment, it precludes the possibility of using this very same sentence as the antecedent of a conditional sentence. In this context, the sentence cannot have denial or negatory force. Fregean force always goes with full sentences and does not contribute to the proposition expressed. Thus in embedded environments, negation can only be interpreted as an ordinary functional expression, reversing the truth value of the proposition expressed. However, the introduction of some kind of denial operator having the same role as assertion would force us to recognize two ways of judging and accordingly two negative operators of a different status, one as some kind of speech act device, the other as a semantic operator reversing the truth value of the proposition expressed. Consequently, we should separate force from content, thus ending up with a theory that is preferable for being both conceptually and formally simpler.

Metalinguistic Negation

Horn (1985, 1989) drew attention to the fact that an utterance may not just be rejected in view of falsity of the propositional content, but that denials may instead apply to a variety of information, comprising presuppositions, various types of implicatures, and connotations related to style and register. He points out that denials can be used to reject an utterance of a previous speaker for whatever reason. And equating the notions of denial and negation, he made a distinction between the well-known truth-functional operator and a non-truth-functional metalinguistic device. On this view, standard truth-functional negation is found in negative assertions and propositional denials. His metalinguistic device applies to presuppositional inferences, implicatures, and connotations of style and register. He labeled this device 'metalinguistic negation' and characterized it as follows: "I am claiming [. . .] for negation [. . .] a use distinction: it can be a descriptive truth-functional operator, taking a proposition p into a proposition ¬p, or a metalinguistic operator which can be glossed 'I object to u', where u is crucially a linguistic utterance rather than an abstract proposition" (Horn, 1985: 136). The relevant examples can be classified in the following categories:

A Assertions of negative sentences:
(A-a) Mary is unhappy.
(A-b) Mary is not happy.
(A-c) It does not matter that Mary read your letters.


B Propositional denials:
(B-a) Mary is NOT happy (as a reaction to the utterance of 'Mary is happy').
(B-b) Mary IS happy (as a reaction to the utterance of A-a or A-b).
(B-c) It DOES matter that Mary read my letters.

C Presuppositional denials:
(C-a) The king of France is NOT bald – France does not have a king.
(C-b) Virginia CANNOT know that the earth is flat.
(C-c) John DIDN'T stop smoking – he never smoked.

D Implicature denials:
(D-a) It is not POSSIBLE, it is NECESSARY that the church is right.
(D-b) That haggis is not GOOD, it is EXCELLENT.
(D-c) That was not a LADY I kissed last night – it was my WIFE.

E A variety of connotations (conventional implicature, style, register):
(E-a) That is not a STEED – it's a HORSE.
(E-b) Grandma didn't KICK THE BUCKET – she passed away.
(E-c) He didn't call the POlice, he called the poLICE.

Abstracting from their discourse function and focusing on truth-conditional content, we may first note that Frege's strategy of analyzing the denial B-a as an assertion of a negative sentence works as far as truth-conditional content goes. Assuming double negation in their underlying form, B-b and B-c can also be analyzed in the Fregean way. The presuppositional denials under C have historically been handled either by means of some nonstandard logic (thus postulating an ambiguity in the negation operator) or by an appeal to the Russellian notion of scope (see the first section above). More recent accounts that are formulated in a dynamic framework achieve a similar effect by invoking the notion of local accommodation. Though such theories take it that presuppositions normally escape from the scope of negation and other embedding operators, they introduce a special mechanism that forces presuppositional information – under specified conditions (e.g., threatening inconsistency) – to remain in situ, i.e., in the place where it was triggered, and thus keep it within the scope of the embedding operator (Heim, 1983; Van der Sandt, 1992). The examples under D and E are problematic for any purely logical account. Their standard translations amount to plain contradiction, which led authors to call for some non-truth-functional analysis or else a nonliteral reinterpretation in view of a violation of Gricean maxims. Horn treats categories C–E as metalinguistic uses of negation. The following diagnostics act as a kind of litmus test:

• Metalinguistic negation requires wide scope, thereby blocking lexical incorporation.
• Such sentences exhibit a rising intonation contour (fall-rise) pending further clarification, thereby stressing the offensive item or else triggering contrastive stress on the negative morpheme.
• Metalinguistic negation reverses the polarity of the sentence.

Example (18) illustrates the first diagnostic. In contrast to example D-a, it amounts to a straightforward contradiction:

(18) It is impossible, it is necessary that the church is right.

Note, moreover, that the same holds for example D-a if we destress the offensive item possible. Sentences (15) and (16) show that – contrary to received wisdom – negative denials accept PPIs, while positive denials allow NPIs. Van der Sandt (1991, 2003) stressed that in view of their function, denials should be interpreted in context. They are not incremental in the sense assertions are, nor can they naturally occur in isolation. They take up or 'echo' a previous utterance; and since utterances convey information of various kinds (comprising presuppositions, implicatures, and further connotations), the interpretation of a denial may be sensitive to all the information that is invoked by the utterance they object to. Just as a speaker, when uttering any of the B sentences, objects to the truth-conditional content of a previously uttered sentence, he objects in the C cases to the presuppositions carried by a previous utterance, in the D cases to implicatures invoked, and in the E cases to offensive connotations of style and register a previous speaker may have conveyed with a corresponding nonnegative utterance. Consider (19b), which is typically a reaction to (19a):

(19a) It is possible that the church is right.
(19b) It is not POSSIBLE, it is necessary that the church is right.

The utterance of (19a) invokes the implicature that it is not necessary that the church is right. By denying this utterance by means of (19b), the second speaker does not just react to the propositional content but to all the information conveyed by (19a). This information comprises the (scalar) implicature that it is


not necessary that the church is right. Thus, if we take the negation in (19b) to apply to the full informative content of the first utterance, this yields an interpretation that can be paraphrased as (20):

(20) ¬(◇the_church_is_right ∧ ¬□the_church_is_right) ∧ □the_church_is_right

This simply conveys that it is necessary that the church is right and rejects all information conveyed by the previous utterance. The formal account is implemented in a nonincremental theory of discourse processing. While Horn subsumes only presuppositional denials, implicature denials, and the style and register cases under his notion of metalinguistic negation, Van der Sandt takes the phenomenon to be fully independent of the notion of negation and proposes an explanation of both the Horn cases and propositional denials based on the discourse properties of denial.

Geurts (1998) objects to both views. According to him, denial or metalinguistic negation is not a homogeneous phenomenon. Instead he claims that a different semantic mechanism is operative in each of the cases B–E. His account divorces metalinguistic negations from the discourse characteristics of denial. Propositional denials are simply assertions of negative sentences. Presuppositional denials are handled by the mechanism of local accommodation. For the last two categories, Geurts assumes that the negative morpheme operates on a different level. For implicature denials, he invokes the notion of semantic polysemy. Crucially, he maintains that a scalar like possible may have either an at least or an exhaustive sense (i.e., possible and not necessary). Example (19a) selects the at least interpretation, and here the exhaustive interpretation comes about by way of conversational implicature. In the denial (19b), the exhaustive sense is selected immediately. Only the style and register and pronunciation cases are metalinguistic in that they make reference to linguistic objects. Example E-c typically corrects the pronunciation of police; in this context, the phrase the POlice may be construed as 'the body whose name is pronounced "POlice"'.

Returning to the second section of this article, I conclude with a remark on polarity reversal. On the account that strictly distinguishes between the use of a (negative) sentence to convey new information and its use as a denial, polarity reversal falls out as a consequence of the discourse function of denial. Consider first the unmarked (21a) and (21b):

(21a) It does not matter that Mary has read my letters.
(21b) Harry did not pick any of the flowers.

A denial interpretation of either of these sentences would require that the corresponding non-negative sentence has been uttered just before (or is at least 'in the air'). Note that their non-negated counterparts would contain a negative polarity element in a non-negative context and thus be ungrammatical. This forces the regular assertoric interpretation. But (16), which does contain an NPI in a non-negative context, can very well be used to echo a negative and thus grammatical utterance. An analogous explanation can be given for negative sentences containing PPIs. Since affirmative sentences containing PPIs require a non-negative context, their negated counterparts cannot be interpreted as isolated assertions. But they can very well be interpreted as the denial of a previous utterance, which, being positive, is fully grammatical. Sentence (15) illustrates this.

See also: Assertion; Counterfactuals; Existence; Factivity;

Modal Logic; Multivalued Logics; Polarity Items; Presupposition; Propositional and Predicate Logic; Scope and Binding; Vagueness.

Bibliography

Baker C L (1970). 'Double negatives.' Linguistic Inquiry 1, 169–186.
Dummett M (1973). Frege. Philosophy of language. London: Duckworth.
Frege G (1879). Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: Louis Nebert. English translation in Geach P & Black M (1960). Gottlob Frege. Logical investigations. Oxford: Blackwell. 1–20.
Frege G (1892). 'Über Sinn und Bedeutung.' Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. English translation in Geach & Black, 56–78.
Frege G (1918). 'Die Verneinung.' Beiträge zur Philosophie des deutschen Idealismus 1, 143–157. English translation in Geach P (ed.) (1977). Gottlob Frege. Logical investigations. Oxford: Blackwell. 30–53.
Geach P & Black M (1960). Translations from the philosophical writings of Gottlob Frege (2nd edn.). Oxford: Blackwell.
Geurts B (1998). 'The mechanisms of denial.' Language 74, 274–307.
Givón T (1978). 'Negation in language: pragmatics, function, ontology.' In Cole P (ed.) Syntax and semantics 9: pragmatics. New York: Academic Press. 69–112.
Heim I (1983). 'On the projection problem for presuppositions.' In Proceedings of the West Coast Conference in Formal Linguistics (WCCFL) 2. 114–126.
Hoeksema J (1983). 'Negative polarity and the comparative.' Natural Language and Linguistic Theory 1, 403–434.
Horn L R (1985). 'Metalinguistic negation and pragmatic ambiguity.' Language 61, 121–174.

Horn L R (1989). A natural history of negation. Chicago: University of Chicago Press.
Kamp H (1975). 'Two theories about adjectives.' In Keenan E (ed.) Formal semantics for natural language. London: Cambridge University Press. 123–155.
Klima E S (1964). 'Negation in English.' In Fodor J & Katz J J (eds.) The structure of language. Englewood Cliffs: Prentice Hall. 246–323.
Ladusaw W A (1979). Polarity sensitivity as inherent scope relations. Ph.D. diss. University of Texas at Austin. Distributed by IULC, Bloomington, Indiana (1980).
Linebarger M (1980). 'The grammar of negative polarity.' Ph.D. diss. MIT. Distributed by IULC, Bloomington, Indiana.
Seuren P A M (1985). Discourse semantics. Oxford: Blackwell.
Strawson P (1952). Introduction to logical theory. London: Methuen.

Van der Sandt R A (1991). 'Denial.' In Papers from Chicago Linguistic Society 27(2): the parasession on negation. 331–344.
Van der Sandt R A (1992). 'Presupposition projection as anaphora resolution.' Journal of Semantics 9, 333–377.
Van der Sandt R A (2003). 'Denial and presupposition.' In Kühnlein P & Zeevat H (eds.) Perspectives on dialogue in the new millennium. Amsterdam: John Benjamins.
Van der Wouden T (1997). Negative contexts. Collocation, polarity and multiple negation. London: Routledge.
Van Fraassen B (1966). 'Singular terms, truth value gaps and free logic.' Journal of Philosophy 63, 481–495.
Zadeh L A (1965). 'Fuzzy sets.' Information and Control 8, 338–353.
Zwarts F (1986). 'Categoriale grammatika en algebraïsche semantiek. Een studie naar negatie en polariteit in het Nederlands.' Ph.D. diss. Groningen University.

Neo-Gricean Pragmatics
Y Huang, University of Reading, Reading, UK
© 2006 Elsevier Ltd. All rights reserved.

In 1967, H. P. Grice delivered his William James Lectures at Harvard. In these lectures, Grice presented a panorama of his thinking on meaning and communication – what he called his "tottering steps" (Grice, 1989) toward a systematic, philosophically inspired pragmatic theory of language use, which has since come to be known as 'Gricean pragmatic theory.' Since its inception, Gricean pragmatics has revolutionized pragmatic theorizing and has to date remained one of the cornerstones of contemporary thinking in linguistic pragmatics and the philosophy of language. This article undertakes to present and assess a neo-Gricean pragmatic theory of conversational implicature, focusing on the bipartite model developed by Laurence Horn and the tripartite model advanced by Stephen Levinson (see Implicature).

The Hornian System

Horn (1984, 1989, 2004) developed a bipartite model. In Horn's view, all of Grice's maxims (except the maxim of quality) can be replaced with two fundamental and antithetical principles: the quantity (Q-) and relation (R-) principles.

(1) Horn's Q- and R-principles:
(a) The Q-principle: Make your contribution sufficient. Say as much as you can (given R).

(b) The R-principle: Make your contribution necessary. Say no more than you must (given Q).

In terms of information structure, the Q-principle is a lower bounding pragmatic principle that may be (and characteristically is) exploited to yield upper bounding conversational implicatures: a speaker, in saying '. . . p . . . ,' conversationally implicates that (for all he or she knows) '. . . at most p . . .' (see Implicature). The locus classicus here is those conversational implicatures that arise from a prototype Horn-scale. Prototype Horn-scales are defined in (2) (see Levinson, 1987, 2000):

(2) Prototype Horn-scales: For ⟨S, W⟩ to form a Horn-scale:
(a) A(S) entails A(W) for some arbitrary sentence frame A.
(b) S and W are equally lexicalized, of the same word class, and from the same register.
(c) S and W are 'about' the same semantic relation, or from the same semantic field.

An example of Q-implicature is given in (3) (the symbol '⟨ ⟩' represents 'Q-scale'; '+>' represents 'conversationally implicates'):

(3) ⟨all, some⟩
Some of his friends took a taxi to the station.
+> Not all of his friends took a taxi to the station.
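The Q-scalar pattern is mechanical enough to sketch in a few lines of code. The following is my own illustration, not part of the article, with a small hand-picked list of scales; given a Horn-scale ordered from semantically strong to weak, asserting an item implicates the negation of each stronger scale-mate:

    # Toy Horn-scales, ordered from semantically strong to weak.
    HORN_SCALES = [
        ["all", "most", "many", "some"],
        ["necessary", "possible"],
        ["and", "or"],
    ]

    def q_scalar_implicatures(item):
        """Return the negations of the stronger scale-mates of `item`."""
        for scale in HORN_SCALES:
            if item in scale:
                return [f"not '{s}'" for s in scale[:scale.index(item)]]
        return []

    print(q_scalar_implicatures("some"))
    # ["not 'all'", "not 'most'", "not 'many'"]  -- cf. (3)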

On the other hand, the counterbalancing R-principle is an upper bounding pragmatic law that may be (and systematically is) exploited to engender


lower bounding conversational implicatures: a speaker, in saying '. . . p . . . ,' conversationally implicates that (for all he or she knows) '. . . more than p . . .' (Atlas and Levinson, 1981). This is illustrated in (4):

(4) Have you got a watch?
+> If you have got a watch and know the time, please tell me what time it is.

Viewing the Q- and R-principles as instantiations of Zipfian economy (Zipf, 1949), Horn explicitly identified the Q-principle ("a hearer-oriented economy for the maximization of informational content") with Zipf's 'auditor's economy' (the force of diversification), and identified the R-principle ("a speaker-oriented economy for the minimization of linguistic form") with Zipf's 'speaker's economy' (the force of unification). Furthermore, Horn argued that the whole Gricean mechanism for pragmatic inference can be largely derived from the dialectic interaction (in the classical Hegelian sense) between the Q- and R-principles in the following way:

(5) Horn's division of pragmatic labor: The use of a marked (relatively complex and/or prolix) expression when a corresponding unmarked (simpler, less 'effortful') alternate expression is available tends to be interpreted as conveying a marked message (one that the unmarked alternative would not or could not have conveyed).

In effect, what (5) says is this: the R-principle generally takes precedence until the use of a contrastive linguistic form induces a Q-implicature to signal the nonapplicability of the pertinent R-implicature (for further discussion, see Huang, 1991, 1994, 2000, 2003, 2004a,b, 2005).

The Levinsonian System Horn’s proposal to reduce Grice’s maxims to the Q- and R-principles was called into question by Levinson (1987, 1991, 2000). In Levinson’s view, Horn failed to draw a distinction between what Levinson called semantic minimization (‘‘semantically general expressions are preferred to semantically specific ones’’) and expression minimization (‘‘‘shorter’ expressions are preferred to ‘longer’ ones’’). Consequently, inconsistency arises with Horn’s use of the Q- and R-principles. For example, in Horn’s division of pragmatic labor (see (5)), the Q-principle operates primarily in terms of units of speech production, whereas elsewhere, in Hornscales (see (2)), for instance, it operates primarily in terms of semantic informativeness. Considerations along these lines led Levinson to argue for a clear separation between pragmatic

principles governing an utterance's surface form and pragmatic principles governing its informational content. He proposed that the original Gricean program (the maxim of quality apart) be reduced to three neo-Gricean pragmatic principles, or what he dubbed the quantity (Q-), informativeness (I-), and manner (M-) principles. Each of the three principles has two sides: a speaker's maxim, which specifies what the principle enjoins the speaker to say, versus a recipient's corollary, which dictates what this allows the addressee to infer:

(6) Levinson's Q-principle (simplified):
Speaker's maxim: Do not say less than is required (bearing I in mind).
Recipient's corollary: What is not said is not the case.

The basic idea of the metalinguistic Q-principle is that the use of an expression (especially a semantically weaker one) in a set of contrastive semantic alternates (such as a Horn-scale) Q-implicates the negation of the interpretation associated with the use of another expression (especially a semantically stronger one) in the same set. In other words, as already mentioned, the effect of this inference strategy is to give rise to an upper bounding conversational implicature. Seen the other way round, from the absence of an informationally stronger expression, it is inferred that the interpretation associated with the use of that expression does not hold. Hence, the Q-principle is essentially negative in nature. Three types of Q-implicatures can then be identified: Q-scalar implicatures, Q-clausal implicatures, and Q-alternate implicatures. The Q-scalar implicatures are derived from prototype Horn-scales. They were illustrated in (3) (recall that '+>' represents 'conversationally implicate') and are shown schematically in (7):

(7) Q-scalar: ⟨x, y⟩, y +>Q-scalar ¬x

Q-clausal implicatures are inferences of epistemic uncertainty (Gazdar, 1979). They are shown schematically in (8) and are exemplified in (9):

(8) Q-clausal: ⟨x(p), y(p)⟩, y(p) +>Q-clausal ◇p, ◇¬p

(9) If you want to sell more, you should reduce your prices.
+> You may want to sell more, or you may not; perhaps you should reduce your prices, or perhaps you should not.

Q-alternate implicatures come from a nonentailment semantic contrast set (Harnish, 1976; Horn, 1989; Hirschberg, 1991; Levinson, 2000). Roughly, there


are two subtypes here. In the first, the expressions in the set are informationally ranked, as in (10). Following Huang (2005), let us call this subtype 'Q-ordered' alternate implicatures. By contrast, in the second subtype, the expressions in the set are of equal semantic strength, as in (11). Let us term this subtype 'Q-unordered' alternate implicatures:

(10) They've engaged. +> They haven't got married.
(11) John has a dog. +> He doesn't have a cat, a hamster, etc., as well.

We come next to Levinson’s I-principle: (12) Levinson’s I-principle (simplified): Speaker’s maxim: Do not say more than is required (bearing Q in mind). Recipient’s corollary: What is generally said is stereotypically and specifically exemplified.

Mirroring the effects of the Q-principle, the central tenet of the I-principle is that the use of a semantically general expression I-implicates a semantically specific interpretation. In other words, as already noted, the working of this inferential mechanism is to induce a lower bounding conversational implicature. More accurately, the conversational implicature engendered by the I-principle is one that accords best with the most stereotypical and explanatory expectation, given our real-world knowledge. This is depicted schematically in (13) (the '[ ]' here is intended to indicate an 'I-scale'):

(13) I-scale: [x, y], y +>I x

The class of I-implicatures is heterogeneous, ranging from conjunction buttressing (as in (14)) through negative raising to the interpretation of spatial terms, but I-implicatures do share a number of properties, notably: (a) they are more specific than the utterances that engender them; (b) unlike Q-implicatures, they are positive in nature; (c) they are characteristically guided by stereotypical assumptions; (d) they are nonmetalinguistic, in the sense that they make no reference to something that might have been said but was not; and (e) unlike Q-implicatures, they normally cannot be canceled by metalinguistic negation (for an attempt to formalize the Q- and I-principles, see Blutner (1998)):

(14) p and q
+> p and then q
+> p therefore q
+> p in order to cause q

John pressed the spring and the drawer opened.
+> John pressed the spring and then the drawer opened.
+> John pressed the spring and thereby caused the drawer to open.
+> John pressed the spring in order to make the drawer open.

We finally turn to Levinson's M-principle:

(15) Levinson's M-principle:
Speaker's maxim: Do not use a marked expression without reason.
Recipient's corollary: What is said in a marked way is not unmarked.

Unlike the Q- and I-principles, which operate primarily in terms of semantic informativeness, the metalinguistic M-principle operates primarily in terms of a set of alternates that contrast in form. The fundamental axiom on which this principle rests is that the use of a marked expression M-implicates the negation of the interpretation associated with the use of an alternative, unmarked expression in the same set. Putting it another way, from the use of a marked linguistic expression, it can be inferred that the stereotypical interpretation associated with the use of an alternative, unmarked linguistic expression does not obtain. This can be represented schematically as in (16) (the symbol '{ }' indicates an 'M-scale'):

(16) M-scale: {x, y}, y +>M ¬x

An example of M-implicatures is given in (17b):

(17a) Mary went from the bathroom to the bedroom. +> in the normal way
(17b) Mary ceased to be in the bathroom and came to be in the bedroom. +> in an unusual way, e.g., in a magic show, Mary had been made to disappear by magic from the bathroom and reappear in the bedroom.

Given the preceding tripartite classification of neo-Gricean pragmatic principles, the question that arises next is how inconsistencies arising from these potentially conflicting inference apparatuses can be dealt with. According to Levinson (1991, 2000), they can be resolved by an ordered set of precedence, which encapsulates in part Horn's division of pragmatic labor, as previously discussed.

(18) Levinson's resolution schema for the interaction of the Q-, I-, and M-principles:
(a) Level of genus: Q > M > I.
(b) Level of species: e.g., Q-clausal > Q-scalar.


This is tantamount to saying that genuine Q-implicatures (whereby Q-clausal cancels rival Q-scalar) take precedence over inconsistent I-implicatures, but otherwise I-implicatures take precedence, until the use of a marked linguistic expression triggers a complementary M-implicature, leading to the negation of the applicability of the pertinent I-implicature (for further discussion, see Levinson, 2000; Huang, 1991, 1994, 2000, 2003, 2004a,b, 2005).
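Read procedurally, schema (18) is a simple priority filter over candidate inferences. The sketch below is my own illustration of that reading, not part of the article; the representation of implicatures as (type, proposition, polarity) triples is an assumption made for the example:

    # Precedence from schema (18), highest priority first; the species-level
    # ordering Q-clausal > Q-scalar is folded into the genus-level ranking.
    PRECEDENCE = ["Q-clausal", "Q-scalar", "M", "I"]

    def resolve(candidates):
        """Drop any candidate whose proposition is already settled (with
        either polarity) by a higher-ranked implicature."""
        ranked = sorted(candidates, key=lambda c: PRECEDENCE.index(c[0]))
        accepted = {}
        for typ, prop, polarity in ranked:
            if prop not in accepted:
                accepted[prop] = (typ, polarity)
        return accepted

    # The marked (17b) triggers an M-implicature that defeats the rival
    # stereotypical I-implicature ('in the normal way'), since M > I:
    print(resolve([
        ("I", "the change happened in the normal way", True),
        ("M", "the change happened in the normal way", False),
    ]))
    # {'the change happened in the normal way': ('M', False)}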

Further Neo-Gricean Contributions

Building on the Gricean generalized versus particularized implicatures dichotomy, Levinson (1995, 2000) developed a theory of presumptive meaning. He proposed to add a third level – utterance-type meaning – to the two generally accepted levels of sentence-type meaning and utterance-token meaning. This third layer is the level of generalized, preferred, or default interpretation, which is not dependent on direct computations about speaker intentions, but rather on expectations about how language is characteristically used. Stated thus, a neo-Gricean pragmatic theory of conversational implicature is essentially a theory of presumptive meaning – pragmatic inference that is generalized, arises by default, and thus is presumed.

Furthermore, on a classical Gricean account, the total signification of an utterance is divided into what is said and what is implicated. Simply put, what is said is generally taken to be mapped onto the truth-conditional content of an utterance. What is implicated is then defined in contrast to, and is calculated on the basis of, what is said (and in the case of M-implicatures, together with how what is said is said). Seen in this light, what is said is supposed to provide the input to what is implicated. To work out what is said, according to Grice (1989), it is required to (a) resolve reference, (b) fix deixis, and (c) disambiguate expressions. To these requirements, Levinson (2000) added (d) unpacking ellipsis and (e) narrowing generalities. It turns out, however, that the determination of requirements (a)–(e) involves pragmatic inference of some kind. Put another way, there is 'pragmatic intrusion' of some sort, namely, the intrusion of pragmatically inferred content into truth-conditional content, involved in the working out of what is said. The question that arises next is what kind of pragmatic intrusion it is. Roughly, there are two positions. The first is that the pragmatic inference under consideration is of a special kind, which differs from conversational implicature. Within this camp, of particular interest are two lines of argument. According to Sperber and Wilson (1986), the pragmatic inference is an 'explicature'. Another argument is due to Bach (1994), who

proposed a third category of communicative content, intermediate between Grice's 'what is said' and 'what is implicated.' Bach dubs the vehicle of such content 'impliciture,' since it is implicit in what is said. The second position is represented by Levinson (2000), who argued that these so-called explicatures/implicitures result from the same pragmatic apparatus that engenders what is implicated. Therefore, they are largely the same beast as conversational implicature. Consequently, this gives rise to a problem known as 'Grice's circle,' namely, how what is implicated can be defined in contrast to, and calculated on the basis of, what is said, given that what is said seems both to determine and to be determined by what is implicated (e.g., Huang, 1991). Levinson's (2000) proposal was to reject the 'received' view of the pragmatics–semantics interface, namely, the view that the output of semantics is the input to pragmatics, and even to allow implicatures to play a systematic role in 'pre-'semantics, i.e., in the derivation of the truth-conditional content of an utterance. These matters will continue to be debated (see Pragmatics and Semantics).

In recent years, the classical Gricean theory of conversational implicature has successfully and profitably been generalized to other core areas of linguistics. One such area is formal syntax, and the particular topic of inquiry is anaphora. Levinson (1987, 1991, 2000) and Huang (1991, 1994, 2000, 2004a) developed a (revised) neo-Gricean pragmatic theory of anaphora. The theory has effected a radical simplification of formal syntax, especially as regards Chomsky's binding theory. Finally, a number of neo-Gricean experimental works have appeared, putting various aspects of classical and neo-Gricean pragmatic theory to the test (for further discussion, see Huang, 2003, 2004b).

See also: Cooperative Principle; Expression Meaning vs Utterance/Speaker Meaning; Human Reasoning and Language Interpretation; Implicature; Intention and Semantics; Nonmonotonic Inference; Nonstandard Language Use; Pragmatics and Semantics.

Bibliography

Atlas J & Levinson S C (1981). 'It-clefts, informativeness and logical form: radical pragmatics.' In Cole P (ed.) Radical pragmatics. New York: Academic Press. 1–61.
Bach K (1994). 'Conversational impliciture.' Mind and Language 9, 124–162.
Blutner R (1998). 'Lexical pragmatics.' Journal of Semantics 15, 115–162.
Gazdar G (1979). Pragmatics: implicature, presupposition and logical form. London: Academic Press.
Grice H P (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.

Harnish R M (1976). 'Logical form and implicature.' In Bever T, Katz J & Langendoen D T (eds.) An integrated theory of linguistic ability. New York: Crowell. 313–392.
Hirschberg J (1991). A theory of scalar implicature. New York: Garland.
Horn L R (1984). 'Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature.' In Schiffrin D (ed.) Meaning, form, and use in context: linguistic applications. Washington, D.C.: Georgetown University Press. 11–42.
Horn L R (1989). A natural history of negation. Chicago: The University of Chicago Press.
Horn L R (2004). 'Implicature.' In Horn L R & Ward G (eds.). 3–28.
Horn L R & Ward G (eds.) (2004). The handbook of pragmatics. Oxford: Blackwell.
Huang Y (1991). 'A neo-Gricean pragmatic theory of anaphora.' Journal of Linguistics 27, 301–335.
Huang Y (1994). The syntax and pragmatics of anaphora. Cambridge: Cambridge University Press.
Huang Y (2000). Anaphora: a cross-linguistic study. Oxford: Oxford University Press.
Huang Y (2001). 'Reflections on theoretical pragmatics.' Waiguoyu [Journal of Foreign Languages] 131, 2–14.

Huang Y (2003). 'On neo-Gricean pragmatics.' International Journal of Pragmatics 14, 77–100.
Huang Y (2004a). 'Anaphora and the pragmatics–syntax interface.' In Horn L R & Ward G (eds.). 288–314.
Huang Y (2004b). 'Neo-Gricean pragmatic theory: looking back on the past; looking ahead to the future.' Waiguoyu [Journal of Foreign Languages] 149, 2–25.
Huang Y (2005). Pragmatics. Oxford: Oxford University Press. In press.
Levinson S C (1987). 'Pragmatics and the grammar of anaphora.' Journal of Linguistics 23, 379–434.
Levinson S C (1991). 'Pragmatic reduction of the binding conditions revisited.' Journal of Linguistics 27, 107–161.
Levinson S C (1995). 'Three levels of meaning.' In Palmer F (ed.) Grammar and meaning. Cambridge: Cambridge University Press. 90–115.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: The MIT Press.
Sperber D & Wilson D (1986). Relevance: communication and cognition. Oxford: Blackwell.
Zipf G K (1949). Human behavior and the principle of least effort: an introduction to human ecology. Cambridge, MA: Addison-Wesley.

Neologisms
A Lehrer, University of Arizona, Tucson, AZ, USA
© 2006 Elsevier Ltd. All rights reserved.

New words frequently appear in English. Some words enter the language while others are used for a short time, or perhaps only once. During the last century, however, the rate of growth has accelerated. Neologisms draw on traditional word-formation devices, but some processes which were previously marginal have become common, for example, the new bound morpheme -holic as in the blend workaholic. Another process which has become productive is verb-headed compounds like oil-paint.

Neologisms Based on Common Word Formation Devices

One of the commonest word formation devices is compounding – combining two or more free morphemes. Compounds can be headed by any major lexical class, and all types are found in neologisms. A few examples are jock talk (N-N), ripjob 'negative publicity' (V-N), dolphin-safe (N-ADJ), fail-safe (V-ADJ), red-hot (ADJ-ADJ), after-born 'child born after the father's death' (P-ADJ), over-quick (P-ADV), and off base (P-N).

Traditional textbooks such as Marchand (1960), Selkirk (1982), Bauer (1983), and Spencer (1991) claim that verb-headed compounds are rare. The few existing ones are backformed by deletion of a suffix such as -er or -ing, or formed by, for example, converting a compound noun into a verb. Examples are skydive, derived from sky-diving, and carbon-copy, which becomes a verb by being converted from a noun compound (Bauer, 1983: 208). The compound self-destruct is backformed from self-destruction. Although verb-headed compounds may have been rare in the past, this is no longer true. They occur so frequently in print and in spontaneous speech that it is reasonable to conclude that such compounds have become productive in recent years. In addition to the few listed in the works cited (air-condition, baby-sit, and breast-feed) are many other already conventional verb-headed compounds: lip-read, gift wrap, copyread, custom-make, double-park, brainwash, flight-test, test-drive, spot-check, window-shop, plea-bargain, stir-fry, chain-smoke, handpick, stage-manage, code-switch, guest-conduct, dry-clean, sun-dry, spot-weld, backpedal, channel-surf, and spell-correct. Many verb-headed compounds are used in sports: bounce-pass, pinch-hit, bucket-drive, speed-climb, jump-shoot, jump-pass, and kick-turn. Many compounds are formed with the verb hop, as in


bed-hop, castle-hop, island-hop, and with initial self: self-identify, self-calm, self-censor, self-limit, self-evaluate, and self-regulate. Examples from spontaneous speech include people-watch, reality-test, guilt-trip (she guilt-tripped us into doing that), trail-blaze, cattle-ranch, and oil-paint.

Affixation

Another productive process is affixation, where a derivational suffix or prefix is attached to a word or a bound morpheme. Neologisms often use relatively unproductive affixes. Recent examples include girldom, astronautess, marketeer, bejeaned, retribalize, and circumstellar. Some neologisms use unusual bases, as in otherize 'make someone different from other people'.

Conversion

Conversion is a process that turns a word belonging to one part of speech into another (e.g., noun → verb or verb → noun). Examples of noun-to-verb conversion are to impact, to blackhole, and to network. Examples of adjective-to-noun conversion are a given and a nasty.

Clipping and Acronyms

Clipping and acronyms have also become popular and productive in the 20th century. Clipping or shortening simply deletes part of a word: veteran, veterinarian → vet; laboratory, Labrador → lab. Although most clippings delete the end of the word, in some the beginning is removed: omnibus → bus, airplane → plane, and in a few, only the middle remains, such as flu (< influenza). Acronyms are formed by taking one or more of the first letters of a phrase, as in SARS (severe acute respiratory syndrome), AIDS (acquired immune deficiency syndrome), RAM (random access memory). Many acronyms are created so that they form an existing word with a relevant meaning: NOW (National Organization for Women), BASIC (Beginner's All-purpose Symbolic Instruction Code). Clipping and acronyms are extremely productive and efficient because they reduce long phrases and words, a process described by George Zipf (1949), who formulated a law which says that frequently used forms tend to become shorter.
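Initialism-style acronym formation, as just described, reduces to taking word-initial letters. The snippet below is my own illustration, not from the article; the stop-word list is an assumption, reflecting the fact that function words (as in NOW) are often skipped:

    def acronym(phrase, skip=("of", "for", "the", "a", "an", "and")):
        """Join the initial letters of the content words of `phrase`."""
        return "".join(w[0].upper() for w in phrase.split()
                       if w.lower() not in skip)

    print(acronym("severe acute respiratory syndrome"))   # SARS
    print(acronym("National Organization for Women"))     # NOW
    print(acronym("random access memory"))                # RAM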

Blends

Blends are underlying compounds which take one word and part of another word, or parts of two words. The part of a word is called a splinter. Common examples are beefalo < beef + buffalo, smog < smoke + fog, camcorder < camera + recorder,

sitcom < situation + comedy. The splinter is formally identical to a clipping, but whereas clippings function as full words, splinters do not. A sentence like phone the vet (telephone the veterinarian) is fine, but not I am watching a sit (sitcom) right now. A few blends were recorded as early as the 19th century: brunch < breakfast + lunch (1886 OED/MW), slanguage < slang + language (1879 OED/MW), solilioquacity < soliloquy + loquacity (1895 MW). Blends have become more common: motel < motor + hotel (1925), transceiver < transmitter + receiver (1934), permafrost < permanent + frost (1943). Many of these etymological blends have become conventional words, and speakers are no longer aware of their more complex sources. However, the speed of creating blends has accelerated, so that blending can no longer be considered a marginal word formation device. Many novel blends are not defined, and readers and listeners are expected to figure out their source words and the appropriate meaning from the context.

The Structure of Blends

The commonest type of blend is a full word followed by a splinter: wintertainment < winter + entertainment, chatire < chat + satire, vodkatini < vodka + martini. Blends can also begin with a splinter, followed by a full word: narcoma < narcotic + coma, cinemenace < cinema + menace. Also common are blends consisting of two splinters. Two types are possible: (1) the beginning of one word is followed by the end of another word: psychergy < psychic + energy, hurricoon < hurricane + typhoon or monsoon, cheriodical < cheery + periodical; or (2) both splinters are the beginnings of words: biopic < biographical + picture, sitcom < situation + comedy.

Blends typically show overlap in spelling and pronunciation. For example, in wintertainment, the sequence nter belongs to both source words: winter and entertainment. In cinemenace, the m belongs to both words. Sometimes the overlap is not simply of contiguous letters or sounds: astrocity < astronaut + atrocity and flustrated < flustered + frustrated distribute overlapping letters (and sounds) noncontiguously. A fourth type of blend consists of complete overlap. Although nothing is missing, a part of the blend belongs to both words: sexploitation < sex + exploitation, palimony < pal + alimony 'money paid to an unmarried partner after breaking up', cocacolonization < Coca-Cola® + colonization, cattitude < cat + attitude, Wheatables® < wheat + eatables. A fifth type of blend, where one word is embedded in another, is rare and has not increased in frequency. Examples are entreporneur < entrepreneur + the

630 Neologisms

clipping porn, ambisextrous ambidextrous þ sex. Some blends are eye blends, that is, they must be read: leerics leer þ lyrics, eracism erase þ racism (bumper sticker), shampagne sham þ champagne. Blends may involve the deletion of some letters so that the neologism conforms to English spelling, as in abusage abuse þ usage, instead of abuseage. A very small number of blends consist of three source words. Examples are intelevisionary Intel þ television þ visionary (a journal article heading), teenangstrel teenaged þ angst ridden þ minstrel (Glowka et al., 2003: 339). Although blends are sometimes created in casual conversation, they are more likely to appear in print, especially in advertising, product names, and magazine and newspaper headings. Product names: Craicranberry þ raisins ‘dried cranberries’, sinsÕ DijonnaiseÕ Dijon þ mayonnaise ‘mustard and Europe þ leotard ‘style mayonnaise’, EurotardÕ go þ yogurt ‘portable of dancewear’, GogurtÕ yogurt’; shop names: Zoofari zoo þ safari, a gift shop at a zoo, Thriftopia thrift þ utopia, the name of a thrift shop; hybrids of plants and animals: broccoflower broccoli þ cauliflower, cockapoo cocker spaniel þ poodle; events: ARTstravaganze art þ extravaganza, an art auction. Halloween events included a spooktacular event, Howl-o-scream, a haunted house, and Halloweek, a week of scary movies. Splinters Become Morphemes

Sometimes splinters occur in several blends. For example, -topia (< utopia) is found in Fruitopia®, Thriftopia, and Artopia (names of shops); -ploitation is found in sexploitation, blaxploitation, and assploitation (a criticism of a Nike ad showing a runner's derriere). If there are only a few examples, the process of analogy can account for the semi-productivity. However, splinters like -thon, -holic, -gate 'scandal', and -scape have spawned so many examples that it is plausible to argue that a new morpheme has been created (Lehrer, 1998).

-Thon (< marathon) means something like 'event that lasts for a long time.' Other implications may be 'difficult' and 'fundraiser for charity.' Examples are telethon, walkathon, bikathon, rockathon, dancethon, paint-a-thon, sew-a-thon, pianothon, performancethon, smilathon, soccer mom-athon, eat-a-thon, swimathon, and gore-a-thon (in a review of Mel Gibson's The Passion of the Christ). -Holic (< alcoholic) 'addict' can be freely added to nouns and verbs, and it has produced dozens of neologisms: workaholic, chocoholic, foodaholic, drugaholic, shopaholic, spendaholic, controlaholic, faxaholic, newsaholic, videoholic, and many more.

-Gate ‘political scandal’ began with Watergate and generated Irangate, Whitewatergate, Contragate, and more recently Skategate ‘scandal involving the attack on skater Nancy Kerrigan’ (Glowka et al., 2000a: 190). French and Hungarian have borrowed -gate for their political scandals. -Scape (< landscape) ‘view or representation of a scene’ occurs in seascape, mountainscape, oceanscape, rockscape, and semantic extensions like soundscape, spacescape, mindscape, and dreamscape. Most of these splinters occur at the end of the word, but a few can be initial, as Mc- (< McDonald’s), which has given rise to McJobs ‘low-paying jobs’, McMansions ‘developments of large, expensive, identical houses’, McGarbage ‘waste created by disposable dishes, cutlery, and packaging’, as well as the neologisms created by the McDonald’s Corporation, such as McMuffin. Another very popular initial splinter is e- (< electronic) (Safire, 2003), in dozens of items: e-commerce, email, e-tail (electronic + retail), e-slammer ‘prison system with a web site’ (Glowka et al., 2000b: 435), e-bucks ‘electronic money’, and e-celebrity ‘a famous person promoting an Internet company’ (Glowka et al., 2001a: 86). Although some linguists and lexicographers consider these splinters to be prefixes or suffixes, they are best considered as bound bases, in particular, as combining forms. They are like many classical bases which must be combined with something else, such as initial combining forms eco- and bio-, or -crat and -ology (Bauer, 1983; Warren, 1990; Lehrer, 1998).

Trendy Neologisms

Neologisms often involve many kinds of wordplay: word-forming devices, rhymes, puns, allusions, and metaphors. For example, yuppie (yuppy), influenced by hippie, starts with the acronym YUPP- for young urban professional people with the added suffix -ie or -y, and this has generated many analogous terms, such as buppie black urban professional people, maffy middle-aged affluent folks (Glowka and Melançon, 2002: 316). In addition, yuppie has become the base of a verb, yuppify, and an adjective, yuppie-ish. Malaprophesizing malapropism + prophesizing (Glowka et al., 2001c: 417) ‘telling the future inadvertently through verbal mistakes and slips’ is a blend of two words, each morphologically complex.

The Internet has given rise to a large number of neologisms. A regular column in American Speech, ‘Among the new words,’ devoted two issues to neologisms concerning computers and the Internet in 1999 (vol. 74, no. 3, pp. 298–323 and no. 4, pp. 403–425). There are many compounds with virtual as the first word or techno- and e- as initial combining forms. Analogous to e- we find m-commerce ‘business conducted from a mobile telephone’ (Glowka et al., 2001c: 417). Dot com is a pronounced form of the Internet expression written as .com, which has spawned neologisms like dot snot ‘arrogant young person with wealth derived easily through the Internet’ (Glowka et al., 2001c: 417). Dot bomb refers to a failed Internet business (Glowka et al., 2001b: 307). Dot-orging is ‘changing from a commercial establishment .com to an organization, .org’. As with blends, compound words may be segmented in unconventional ways. Weblog World Wide Web + log ‘personal website full of commentaries’ has been clipped to blog and converted to a verb blogging and an agentive blogger.

Many neologisms used in a highly specific context, such as a heading for a newspaper or magazine article, will disappear. But some will enter the language. Brand names will remain if the product succeeds. Other neologisms have entered English as permanent words, but are still informal, even slang, such as yuppie and chocoholic. A few, however, have become or will be permanent additions to Standard English and no longer elicit responses to newness or trendiness.

See also: Componential Analysis; Compositionality; Concepts; Folk Etymology; Human Reasoning and Language Interpretation; Lexical Fields; Lexical Semantics; Metaphor and Conceptual Blending; Onomasiology and Lexical Variation; Semantic Change, the Internet and Text Messaging; Semantic Change.

Bibliography

Adams V (1973). An introduction to modern English word-formation. London: Longman.
Algeo J (1991). Fifty years among the new words: a dictionary of neologisms, 1941–1991. Cambridge: Cambridge University Press.
Bauer L (1983). English word-formation. Cambridge: Cambridge University Press.
Clark E V & Clark H H (1979). ‘When nouns surface as verbs.’ Language 55, 767–811.
Glowka A W & Melançon M (2002). ‘Among the new words.’ American Speech 77(3), 313–324.
Glowka A W, Lester B K, Duggan-Caputo R, Drye J & Popik B (2000a). ‘Among the new words.’ American Speech 75(2), 184–198.

Glowka A W, Lester B K, Dreiseitl I, Hicks K A, Moon T, Patterson H, Sinski W, White E, Winkeljohn J & Popik B (2000b). ‘Among the new words.’ American Speech 75(4), 430–446.
Glowka A W, Edgar C M, Frayne V J, Ledford M A, Portwood A L, Wunder C & Popik B A (2001a). ‘Among the new words.’ American Speech 76(1), 79–96.
Glowka A W, Brown T, Bufford A, English C, Llorente R, Ruiz V & Wiggins M (2001b). ‘Among the new words.’ American Speech 76(3), 301–311.
Glowka A W, Melançon M, Gandy L, Blount A & Bufford A (2001c). ‘Among the new words.’ American Speech 76(4), 411–423.
Glowka A W, Melançon M & Wyckoff D C (2003). ‘Among the new words.’ American Speech 78(3), 331–346.
Katamba F (1994). English words. London/New York: Routledge.
Lehrer A (1996a). ‘Identifying and interpreting blends: an experimental approach.’ Cognitive Linguistics 7(4), 359–390.
Lehrer A (1996b). ‘Why neologisms are important to study.’ Lexicology 2(1), 63–73.
Lehrer A (1998). ‘Scapes, holics, and thons: the semantics of English combining forms.’ American Speech 73(1), 3–28.
Marchand H (1960). The categories and types of present-day English word-formation. Munich: C. H. Beck.
Metcalf A A (2002). Predicting new words. Boston: Houghton Mifflin.
Nunberg G (2000). The way we talk now. Boston: Houghton Mifflin.
Safire W (2003). ‘The e-lancer eats a bagelwich.’ In Safire W (ed.) No uncertain terms: more writing from the popular ‘On language’ column in the New York Times magazine. New York: Simon & Schuster. 78–80.
Selkirk E O (1982). The syntax of words. Cambridge, MA: MIT Press.
Spencer A (1991). Morphological theory. Oxford: Blackwell.
Warren B (1990). ‘The importance of combining forms.’ In Dressler W U, Luschützky H C, Pfeiffer O E & Rennison J R (eds.) Contemporary morphology. Berlin: de Gruyter. 111–132.
Zipf G K (1949). Human behavior and the principle of least effort. Boston: Addison Wesley.

Relevant Websites

http://www.m-w.com – Merriam-Webster Online.
http://dictionary.oed.com – Oxford English Dictionary Online.


Nominalism

G Klima, Fordham University, Bronx, NY, USA

© 2006 Elsevier Ltd. All rights reserved.

Nominalism is usually characterized in contrast to the other two major theoretical alternatives concerning the ontological status of universals. According to this characterization, the nominalist position holds that universals are mere words (e.g., the word ‘round’); in contrast, the conceptualist position would identify universals with concepts (e.g., the concept expressed both by the English word ‘round’ and by the Latin word ‘rotundus’), and the realist position would claim universals to be universal things (e.g., the nonmental, nonphysical, abstract Form of Roundness itself). Both historically and theoretically, there are a number of problems with this simple, indeed, simplistic conception. First, by these standards, no premodern author would qualify as a nominalist, since all these authors held that our universal terms owe their universality to the universal concepts they express (thus, on this account all these authors would have to be classified as conceptualists), and no medieval author would count as a realist, because they all denied the real existence of Platonic ‘abstract entities’ (although most medieval authors did posit universals both in re, as trope-like individualized forms such as the individual roundness of this billiard ball as opposed to the numerically distinct roundness of another one, and ante rem, as Divine Ideas, identical with God, who is not an ‘abstract entity’) (Klima, 2001). Accordingly, this conception leaves in obscurity the genuine theoretical differences of the medieval nominalists and realists, who first distinguished themselves under these designations. However, as we shall see, by clarifying these genuine theoretical differences we can gain a deeper understanding not only of what a genuinely nominalist position was in the late Middle Ages but also of what, in general, a genuinely nominalist position has to consist in, as well as of the important conceptual connections such a position has to broader issues such as ontological commitment, abstraction, induction, essentialism, and the possibility of valid scientific generalizations.

Extreme Realism: Plato’s Ideal Exemplars

The problem of universals originated with Plato’s answer to the question of how universal knowledge (e.g., our knowledge of geometrical theorems) of a potential infinity of individuals is possible. Plato’s answer in terms of his theory of Forms involves the idea of regarding individuals of the same kind as copies of an original exemplar or archetype and assuming that our understanding has some direct access to this exemplar. Clearly, if I can read today’s news on the printer’s printing plates, then I do not have to buy the copies at the newsstands to get my news. Likewise, knowing that the form of all triangles, after which all triangles are modeled, has three angles equal to two right angles, I know that all triangles have the same property. Thus, Plato’s answer is obviously a good answer to the question of the possibility of universal knowledge. However, this answer raises more problems than it solves. If the exemplar is not one of the copies, yet it has to be like the copies (given that the copies all have to be like it), then what sort of entity is it? In fact, this is precisely the basis of the most famous argument against Plato’s theory, the Third Man (Wedberg, 1978). For if the exemplar has to be of the same sort as its copies, it can be grouped together with its copies into the same kind. However, if for a group of individuals of the same kind there always has to be a common exemplar, and nothing can be its own exemplar, as Plato’s theory claims, then there has to be another exemplar for the exemplar and its copies taken together. Since this reasoning can be repeated indefinitely, we have to conclude that for a given set of copies there would have to be an infinity of exemplars, contrary to the theory’s explicit claim that there can only be one such exemplar. Therefore, since the theory as stated entails inconsistent claims, it cannot be true in this form.

Moderate Realism or Conceptualism: Aristotle’s Universals

Prompted by such and similar inconsistencies, Aristotle famously rejected Plato’s Forms and provided a radically different sort of answer to Plato’s original question concerning the possibility of universal knowledge. Plato’s idea was that this sort of knowledge is possible because of the human soul’s prenatal access to the Forms serving as the common exemplars of all sorts of particulars the soul comes to experience in this life and the soul’s ability to recollect these Forms and their properties prompted by these experiences. In contrast, Aristotle’s idea was that we can have this sort of knowledge from experience in this life. To be sure, the finite number of experiences we can have in this life with individuals of the same sort can never justify universal knowledge claims that cover a potential infinity of such individuals. However, if our cognitive faculties enable us to recognize a common pattern that equally characterizes any possible individual of the same sort, then even this finite number of experiences may enable us to make valid generalizations covering all possible individuals of the same sort. Using the previous analogy, in this case I do not have access to the printer’s plates (if there is any such thing at all, which need not be the case if copies can ‘self-reproduce,’ as in photocopying); I can only collate a number of variously smudged, incomplete newspapers, from which, nevertheless, I am able to extract their common content so that I will know the news any other copy of the same issue may carry even before actually checking it. This is the idea Aristotle worked out in his theories of abstraction and induction.

The Moderate Realism/Conceptualism of Medieval Aristotelians

According to Aristotle and his medieval interpreters, human understanding is characterized by two intellective powers or faculties, namely the active intellect (nous poietikos, intellectus agens) and the receptive intellect (nous pathetikos, intellectus possibilis). Due to the obscurity of Aristotle’s formulations, in the Middle Ages there was much debate about the makeup of these faculties, namely whether they are material or immaterial, and whether they existed as individualized powers of individual human souls or, rather, as separate substances connected to individual human souls to carry out their intellectual operations (pretty much as mainframe computers are connected to their terminals). However, regardless of these differences, there was quite general agreement concerning the function of these intellective powers (whatever they are in themselves) and how they can produce universal knowledge without having to ‘look up to’ or ‘recollect’ Platonic Forms. The active intellect has the function of producing simple, universal concepts (i.e., universal representations of individuals) from their singular representations provided by the senses. The receptive intellect, on the other hand, receives these universal representations for further processing, combining them into more complex concepts and judgments by means of complexive concepts, the so-called syncategorematic concepts (‘syncategoremata’), and using these judgments in reasoning.

Abstraction, Induction, and Essentialism

In the process of abstraction, the active intellect forms a common concept by considering the multitude of experiences of particulars of the same kind, disregarding what is peculiar to each while focusing on what is common to all. The common concept retained in the receptive intellect is a mental representation that represents the individuals it is abstracted from only with respect to what commonly characterizes them all. Accordingly, the same mental representation will naturally represent in the same respect not only those individuals that it is abstracted from but also other individuals of the same kind – that is, individuals that resemble the observed ones precisely in the same respect in which the abstracted concept represents the observed individuals. Therefore, if the receptive intellect forms a judgment with this concept in which the predicate belongs to the particulars that fall under this concept on account of falling under this concept, then it is able to produce a valid scientific generalization by induction – that is, by reasoning in the following way: This observed S was P, and that observed S was P, and so on for all observed S’s; therefore, in general, every S is P. Obviously, this inference is valid (not formally, but materially) only if an S is essentially and not merely coincidentally P – that is, if an S is P on account of being an S so that any S is necessarily P, as long as it is an S (and if it is also essentially an S, then it is necessarily an S, and hence also a P, as long as it exists). This is precisely why this inference requires the multitude of experiences, namely to ensure that the observed S’s have been P not by mere coincidence but precisely on account of being S’s. However, if this inference is valid, then it obviously provides a good answer to Plato’s original question (concerning the possibility of universal knowledge) without resorting to Plato’s universal Forms.

The Ontological Commitments of Moderate Realism

However, the previous conception may still involve an enormous amount of ontological commitment; admittedly, not to Platonic universal Forms but to individuals of a rather strange sort, the individualized forms of individual substances. If all our common terms are applicable to individual things on account of our common concepts, and we gain these concepts by abstraction from the individualized forms or properties that sort individuals into various kinds, then apparently there are as many individualized forms in an individual substance as there are true common predicates of it. (Yet, this conception need not make individual substances into mere congeries of individualized forms, in the vein of modern trope theories. On this conception, individual substances are the primary units of reality that account for the individuation of individualized forms, and not the other way around (cf. Campbell, 1997; Daily, 1997).) In this moderate realist view, commonly endorsed by medieval authors of the 13th century (e.g., Thomas Aquinas), the world is a world of individuals without ‘universal entities,’ but it is populated by all sorts of nonsubstantial particulars, serving as the individualized significata of our common terms. The universals of this world, then, are the common terms of our language as well as the concepts they express, and the individualized forms as-conceived-in-a-universal-manner by means of these concepts (i.e., as-existing-in-the-mind as the direct objects of these concepts) (Klima, 1999).

To be sure, these authors were able to reduce the ontological commitment of their theories by two fundamental semantic strategies: the distinction of several senses of ‘being’ and the identification of the semantic values of various terms. On the first strategy, ontological commitment simply becomes ambiguous among different kinds of entities credited with different degrees of reality corresponding to different senses of ‘existence’ or ‘being’ (such as being in reality, esse reale, and being in the mind, esse rationis; cf. Klima, 1993) – a move generally rejected by nominalists. On the second strategy, however, ontological commitment is reduced even within the same domain of entities. For instance, since two billiard balls A and B are similar in shape, the moderate realist view is apparently committed not only to the individualized roundness of A and the individualized roundness of B but also to the individualized similarity of A to B and to the individualized similarity of B to A. However, several moderate realist authors would argue that nothing prevents us from saying that the roundness of A is the same thing as the similarity of A to B, merely conceived differently, by means of a different concept (Henninger, 1989; Brower, 2001).

Late Medieval and Modern Nominalism

This strategy actually anticipates the more radical reductionist program of William Ockham and his 14th-century followers, such as John Buridan, Albert of Saxony, Marsilius of Inghen, and in general the late medieval nominalist tradition of the via moderna. These nominalist authors agreed with their moderate realist counterparts in positing a world consisting only of individuals and identifying universals with universal terms of written and spoken languages, which owe their universal mode of representation to universal concepts of the human mind, interpreted as singular mental acts representing their singular objects in a universal manner. However, they denied even the ‘diminished’ existence of universals as universal objects of these mental concepts admitted by moderate realists, and they reduced the number of really distinct categories of singulars to two or three, admitting distinct entities only in the categories of substance, quality, and possibly quantity. (Ockham, for example, identified substance with quantity, whereas Buridan argued against their identification.)

To be sure, using their own eliminative strategy of identifying the semantic values of linguistic items in different linguistic categories, the moderate realists could in principle achieve the same degree of ontological parsimony. However, for them this was to be achieved separately, based on metaphysical considerations. Their semantics demanded positing as many different types of semantic values for their terms as there are different linguistic categories, although it was possible for them to decide on the basis of further metaphysical considerations about the identity or nonidentity of these semantic values. In contrast, the nominalists had the principles of ontological reduction (i.e., eliminating unwanted ontological commitment) ‘built into’ their semantic principles (Klima, 1999).

The reductionist program of the nominalists was devised to show that it is possible to have a sufficiently ‘fine-grained’ semantics without a complex ontology by ‘moving’ the requisite distinctions from (a full-fledged or ‘diminished’) reality to the conceptual structures posited in the mind. For example, in contrast to the moderate realist interpretation of similarity alluded to previously, in the nominalist approach there is no metaphysical question of whether the relation signified by the term ‘similar’ is identical with its foundation, for example, the roundness of ball A. The signification of ‘similar’ is not construed in the same way in the first place. Instead of assuming that in ‘A is similar to B’ this term signifies A’s similarity to B, an extramental entity that may or may not be distinct from the roundness of A, the nominalists would say that this term simply signifies A in relation to B, on account of the relative (connotative) concept whereby we conceive of A in relation to B. Therefore, even if we do have the obvious semantic difference between the absolute term ‘round’ and the relative term ‘similar,’ this semantic difference need not be accounted for in terms of a bloated ontology of distinct extralinguistic correlates of these terms because the relevant semantic distinctions can be made with reference to conceptual distinctions in the mind.

Following this nominalist strategy, once we have identified a basic ‘vocabulary’ of simple concepts that only commit us to entities in the ‘permitted’ ontological categories, any further apparent ontological commitments of our language can be eliminated by means of nominal definitions, which show how the semantic features of linguistic items carrying such apparent ontological commitment can be accounted for in terms of an implicit conceptual structure that can be explicated by means of the basic vocabulary carrying commitment only to permitted entities. Indeed, this strategy of eliminating ontological commitment by means of nominal definitions explicating this implicit conceptual structure foreshadows the similar strategy of elimination by paraphrase characteristic of all modern nominalistic programs. Thus, in this approach, and this is the gist of medieval nominalism in general, a sufficiently fine-grained semantics for natural language is achieved by mapping linguistic constructions onto sufficiently rich conceptual structures in a mental language (comparable in its role to the contemporary ‘language of thought hypothesis’), which in turn can be mapped onto a parsimoniously construed reality without any loss of semantic distinctiveness.

Modern nominalistic programs, such as the program promulgated by Goodman and Quine (1947), apply the same basic strategy, but without the medievals’ appeal to a mental language, by introducing the explicit ontological commitment of a primitive vocabulary and by eliminating any further apparent commitment (e.g., to such ‘abstract entities’ as numbers) by providing suitable paraphrases of relevant linguistic items (e.g., numerals) in terms of this primitive vocabulary. Indeed, in this program, Platonistic descriptions of language in terms of linguistic types are replaced by a nominalistic syntax treating of tokens, such as singular inscriptions. (In fact, medieval nominalists also formulated their logical theories in terms of token-phrases (Klima, 2004a).) The successful paraphrase then shows that there is no need to assume the existence of the putative entities apparently required by the semantic features of the phrases that appeared to carry commitment to them, so nominalists are entitled to get rid of them by one swoosh of Ockham’s razor.

Nominalism, Antirealism, and Skepticism

Yet, despite customary charges and modern tendencies to the contrary, the ‘reductionist’ program and the corresponding strategy of medieval nominalism did not necessarily result in metaphysical antirealism, conventionalism, or skepticism. Medieval nominalists typically regarded concepts as naturally representative of a world of individuals presorted into natural kinds and not sorted into these kinds by our concepts and/or linguistic conventions. Thus, they maintained an essentialist metaphysics, and so the scientific knowability of a mind-independent reality, yet without any ontological commitment to (whether subsistent or inherent) universal essences distinct from their individuals. Whether they could consistently do so is a further issue, which bears direct relevance to contemporary considerations concerning ontological commitment and metaphysical essentialism. Plato’s problem of the possibility of universal knowledge still remains a problem for nominalists: Once they have rejected the moderate realists’ distinct individualized forms, which in the moderate realist conception are precisely the items in reality that serve as the foundation for the abstraction of our universal concepts, the nominalists still have to provide a plausible story about how universal concepts can be abstracted from observed singulars (and hence how they can apply even to previously unobserved singulars), unless they want to give up on the possibility of universal knowledge in the classical sense altogether (Klima, 2004b, 2005).

See also: Aristotle and Linguistics; Classifiers and Noun Classes; Concepts; Evolution of Semantics; Mentalese; Philosophical Theories of Meaning; Pre-20th Century Theories of Meaning; Semantic Value.

Bibliography

Aydede M (2004, Fall). ‘The language of thought hypothesis.’ In Zalta E N (ed.).
Bacon J (2002, Fall). ‘Tropes.’ In Zalta E N (ed.).
Balaguer M (2004, Summer). ‘Platonism in metaphysics.’ In Zalta E N (ed.).
Brower J (2001, Summer). ‘Medieval theories of relations.’ In Zalta E N (ed.).
Campbell K (1997). ‘The metaphysic of abstract particulars.’ In Mellor & Oliver (eds.). 125–139.
Daily C (1997). ‘Tropes.’ In Mellor & Oliver (eds.). 130–159.
Goodman N & Quine W V O (1947). ‘Steps toward a constructive nominalism.’ Journal of Symbolic Logic 12, 105–122.
Henninger M (1989). Relations: medieval theories 1250–1325. Oxford: Clarendon.
Klima G (1993). ‘The changing role of Entia Rationis in medieval philosophy: a comparative study with a reconstruction.’ Synthese 96, 25–59.
Klima G (1999). ‘Ockham’s semantics and metaphysics of the categories.’ In Spade P V (ed.) The Cambridge companion to Ockham. Cambridge, UK: Cambridge University Press. 118–142.
Klima G (2001, Winter). ‘The medieval problem of universals.’ In Zalta E N (ed.).
Klima G (2004a). ‘Consequences of a closed, token-based semantics: the case of John Buridan.’ History and Philosophy of Logic 25, 95–110.
Klima G (2004b). ‘John Buridan on the acquisition of simple substantial concepts.’ In Friedman R L & Ebbesen S (eds.) John Buridan and beyond: topics in the language sciences 1300–1700. Copenhagen: The Royal Danish Academy of Sciences and Letters. 17–32.
Klima G (2005). ‘The essentialist nominalism of John Buridan.’ Review of Metaphysics 58, 301–315.
Lagerlund H (2004, Summer). ‘Mental representation in medieval philosophy.’ In Zalta E N (ed.).
McInerny R (2002, Spring). ‘Saint Thomas Aquinas.’ In Zalta E N (ed.).
Mellor D H & Oliver A (eds.) (1997). Properties. Oxford: Oxford University Press.
Rosen G (2001, Fall). ‘Abstract objects.’ In Zalta E N (ed.).
Wedberg A (1978). ‘The theory of ideas.’ In Vlastos G (ed.) Plato: a collection of critical essays. Notre Dame, IN: University of Notre Dame Press. 28–52.
Zalta E N (ed.) (2001–2004). Stanford encyclopedia of philosophy. http://plato.stanford.edu.

Nonmonotonic Inference

K Frankish, The Open University, Milton Keynes, UK

© 2006 Elsevier Ltd. All rights reserved.

In most logical systems, inferences cannot be invalidated simply by the addition of new premises. If an inference can be drawn from a set of premises S, then it can also be drawn from any larger set incorporating S. The truth of the original premises guarantees the truth of the inferred conclusion, and the addition of extra premises cannot undermine it. This property is known as monotonicity. (The term is a mathematical one; a monotonic sequence is one whose terms increase but never decrease, or vice versa.) Nonmonotonic inference lacks this property. The conclusions drawn are provisional, and new information may lead to the withdrawal of a previous conclusion, even though none of the original premises is retracted.

Much of our everyday reasoning is nonmonotonic. We frequently jump to conclusions on the basis of partial information, relying on rough generalizations – that people usually mean what they say, that machines usually work as they are designed to, that objects usually stay where they are put, and so on. We treat these conclusions as provisional, however, and are prepared to retract them if we learn that the cases we are dealing with are atypical. To take an example that is ubiquitous in the literature, if we know that Tweety is a bird, then we may infer that Tweety flies, since we know that birds typically fly, but we shall withdraw this conclusion if we learn that Tweety is an atypical bird – a penguin, say. An important feature of inferences like this is that they are sensitive to the absence of information as well as to its presence. Because we lack information that Tweety is atypical, we assume that he is not and proceed on that basis. (That, of course, is why the acquisition of new information can undermine the inference.) In standard logics, by contrast, inference is sensitive only to information that is explicitly represented, and we would need to add the premise that Tweety is not atypical in order to reach the conclusion that he flies. This feature of nonmonotonic inference makes it highly useful. We do not have the time or mental capacity to collect, evaluate, and process all the potentially relevant information before deciding what to do or think. (Think of everything we would need to know in order to be sure that Tweety is not atypical – that he is not a penguin, not an ostrich, not a hatchling, not injured, not tethered to the ground, and so on.)

Because of its central role in commonsense reasoning, nonmonotonic inference has attracted much interest from researchers in artificial intelligence – in particular from those seeking to model human intelligence in computational terms. The challenge has been to formalize nonmonotonic inference – to describe it in terms of a precisely defined logical system that could then be used to develop computer programs that replicate everyday reasoning.

Nonmonotonic logic also has applications to more specific problems in artificial intelligence, among them the so-called frame problem (McCarthy and Hayes, 1969). In order to plan how to reach its goals, an artificial agent will need to know what will and what will not change as a result of each action it might perform. But the things that will not change as a result of an action will be very numerous, and it would be impracticable to list them all in the system’s database. A more efficient solution would be for the system to reason nonmonotonically, using the rule of thumb that actions leave the world unchanged except in those respects in which they are known to alter it. Another application is in the area of database theory. Here it is often convenient to operate with the tacit assumption that a database contains all the relevant information and to treat as false any proposition that cannot be proved from it. This is known as the closed world assumption (Reiter, 1978). Again, this involves a nonmonotonic inference relation, since the addition of new data may permit the derivation of a proposition that was previously underivable and had thus been classified as false. Further areas of application include reasoning about natural kinds, diagnostic reasoning, and natural language processing (Reiter, 1987; McCarthy, 1986).
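The closed world assumption lends itself to an especially compact illustration. In the following toy sketch (Python; the flight facts are invented for illustration), a query fails exactly when the fact is absent from the database, so newly added data can overturn an earlier negative verdict:

```python
# Toy illustration of the closed world assumption (CWA): whatever is not
# in the database is treated as false rather than merely unknown.
database = {("flight", "LHR", "JFK"), ("flight", "JFK", "SFO")}

def holds(fact):
    """Under the CWA, absence from the database counts as falsity."""
    return fact in database

print(holds(("flight", "LHR", "CDG")))  # False: not provable, so assumed false
database.add(("flight", "LHR", "CDG"))  # new information is added
print(holds(("flight", "LHR", "CDG")))  # True: the earlier conclusion is withdrawn
```

The inference relation is nonmonotonic because enlarging the database can turn a previously derivable negative conclusion into an underivable one.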


Work on formalizing nonmonotonic inference has progressed rapidly since its beginnings in the 1970s, and there is now a large body of mature work in the area, much of it highly technical in character. (For a collection of seminal papers, see Ginsberg, 1987; for surveys of the field, see Brewka et al., 1997, and the articles in Gabbay et al., 1994. Antoniou, 1997, offers a relatively accessible introduction to the area.) One of the most important nonmonotonic formalisms is default logic, developed by Raymond Reiter (1980). This involves supplementing first-order logic with new rules of inference called default rules, which have the form in (1).

(1) p : q / r

P is known as the prerequisite, q as the justification, and r as the consequent. Such rules are to be read, ‘If p, and if it is consistent with the rest of what is known to assume that q, then conclude that r.’ In simpler cases (‘normal defaults’), q and r are the same, so the rule says that given the prerequisite, the consequent can be inferred, provided it is consistent with the rest of one’s data. Thus the rule that birds typically fly would be represented as (2).

(2) Bird(x) : Flies(x) / Flies(x)

This says that if x is a bird and the claim that x flies is consistent with what we know, then we can infer that x flies. Given that all we know about Tweety is that he is a bird, we can therefore infer that he flies. The inference is nonmonotonic, since if we subsequently acquire information that is inconsistent with the claim that Tweety flies, then the rule will cease to apply to him. The application of default rules is tricky, since it is necessary to check their justifications for consistency not only with one’s initial data but also with the consequents of any other default rules that may be applied. The application of one rule may thus block that of another. To solve this problem, Reiter introduced the notion of an extension for a default theory. A default theory consists of a set of premises W and a set of default rules D. An extension for a default theory is a set of sentences E that can be derived from W by applying as many of the rules in D as possible (together with the rules of deductive inference) without generating inconsistency. An extension of a default theory can be thought of as a reasonable development of it.
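To make the mechanics concrete, here is a minimal sketch (Python, restricted to normal defaults over propositional literals, with no deductive closure; the representation and the brute-force search over rule orders are our own simplifications, not Reiter's formal construction) that enumerates the extensions of a small default theory and reproduces the Tweety example:

```python
from itertools import permutations

# Facts are string literals; a leading "-" marks negation ("-flies" = does not fly).
def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def extensions(premises, defaults):
    """Enumerate extensions of a normal default theory.
    defaults: (prerequisite, consequent) pairs, read as 'if the prerequisite
    holds and the consequent is consistent with what is known, conclude the
    consequent'. Trying every order of application surfaces every extension."""
    results = set()
    for order in permutations(defaults):
        facts = set(premises)
        changed = True
        while changed:
            changed = False
            for pre, cons in order:
                if pre in facts and cons not in facts and negate(cons) not in facts:
                    facts.add(cons)
                    changed = True
        results.add(frozenset(facts))
    return results

rules = [("bird", "flies")]          # birds typically fly
print(extensions({"bird"}, rules))   # one extension, containing 'flies'
print(extensions({"bird", "-flies"}, rules))
# Adding the premise that Tweety does not fly blocks the default: the
# conclusion 'flies' is withdrawn, even though no premise was retracted.
```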

Another approach, one closely related to default logic, is autoepistemic logic (Moore, 1985). This turns on the idea that we can infer things about the world from our introspective knowledge of our own minds (hence the term ‘autoepistemic’). From the fact that I do not believe that I owe you a million pounds, I can infer that I do not owe you a million pounds, since I would surely know if I did. Building on this idea, autoepistemic logic represents rules of thumb as implications of claims about one’s own ignorance. For example, the rule that birds typically fly can be represented as the conditional claim that if something is a bird and one does not believe that it cannot fly, then it does fly. Given introspective abilities, one can use this claim to draw the defeasible conclusion that Tweety flies, based on one’s ignorance of reasons to think he cannot. This approach can be formalized by using the apparatus of modal logic, with the modal operator L interpreted as ‘It is believed that.’

A third approach is circumscription (McCarthy, 1980, 1986; see also Lifschitz, 1994). This involves formulating rules of thumb with abnormality predicates and then restricting the extension of these predicates – circumscribing them – so that they apply to only those things to which they must apply, given the information currently available. Take the Tweety case again. We render the rule of thumb that birds typically fly as the conditional in (3), where ‘Abnormal’ signifies abnormality with respect to flying ability.

(3) ∀x(Bird(x) & ¬Abnormal(x) → Flies(x))

This does not, of course, allow us to infer that Tweety flies, since we do not know that he is not abnormal with respect to flying ability. But if we add axioms that circumscribe the abnormality predicate so that it applies to only those things that are currently known to be abnormal in this way, then the inference can be drawn. This inference is nonmonotonic, since if we were to add the premise that Tweety is abnormal with respect to flying ability, then the extension of the circumscribed predicate would expand to include Tweety, and the inference would no longer go through. Unlike the other strategies mentioned, circumscription can be formulated by using only the resources of first-order predicate calculus, though its full development requires the use of second-order logic, allowing quantification over predicates.
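The effect of circumscribing the abnormality predicate can be mimicked over a finite domain (a Python sketch under our own simplifying assumptions: the domain is listed explicitly, and Abnormal is minimized to exactly the individuals currently known to be abnormal):

```python
# Circumscription, toy version: Abnormal is shrunk to exactly the
# individuals currently *known* to be abnormal; everyone else counts as normal.
birds = {"tweety", "opus"}
known_abnormal = {"opus"}  # opus is a penguin

def flies(x):
    # Rule (3): Bird(x) & not Abnormal(x) -> Flies(x), with Abnormal minimized
    return x in birds and x not in known_abnormal

print(flies("tweety"))        # True: tweety is not known to be abnormal
known_abnormal.add("tweety")  # new premise: tweety is abnormal after all
print(flies("tweety"))        # False: the circumscribed predicate has expanded
```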

Each of these approaches has its own strengths and weaknesses, but there are some general issues that affect them all. One problem is that all of them allow for the derivation of multiple incompatible sets of conclusions from the same premises. In default logic, for example, a theory may have different extensions depending on the order in which the rules are applied. The standard example is the theory that consists of the premises Nixon is a Quaker and Nixon is a Republican, together with the default rules Quakers are typically pacifists and Republicans are typically not pacifists. Each of these rules blocks the application of the other, since the consequent of one is incompatible with the justification of the other. The theory thus has two incompatible extensions – one in which the first rule is applied and containing the conclusion that Nixon is a pacifist, the other in which the second rule is applied and containing the conclusion that Nixon is not a pacifist. Similar results occur with the other approaches mentioned.
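The Nixon theory can be run through the extensions sketch given earlier in this article (a usage example, assuming that function is in scope; the literals are, again, our own toy encoding):

```python
# Reusing the extensions() function from the earlier sketch:
rules = [("quaker", "pacifist"),       # Quakers are typically pacifists
         ("republican", "-pacifist")]  # Republicans are typically not
for ext in extensions({"quaker", "republican"}, rules):
    print(sorted(ext))
# Two incompatible extensions result: one containing 'pacifist', the other
# '-pacifist', depending on which rule is applied first.
```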

The existence of multiple extensions is not in itself a weakness – indeed, it can be seen as a strength. We might regard each extension as a reasonable extrapolation from the premises and simply plump for one of them. (This is known as the credulous strategy. The alternative skeptical strategy is to endorse only those claims that appear in every extension.) In some cases, however – particularly in reasoning involving time and causality – a plausible theory generates an unacceptable extension. Much work has been devoted to this problem – one strategy being to set priorities among default rules, which determine the order in which they are applied and so restrict the conclusions that can be drawn. (A much-discussed case is the Yale shooting problem, introduced in Hanks and McDermott, 1987, where the rule of thumb that living things typically stay alive generates an unexpected conclusion in reasoning about the effects of using a firearm. For a description of the scenario, and for discussion, see Shanahan, 1997.)

A second problem concerns implementation. The goal of much work in this area is to build artificial nonmonotonic reasoning systems, but the formal approaches that have been devised are not easy to implement. Default logic, for example, requires checking sets of sentences for consistency, and there is no general procedure for computing such checks. Restricted applications have been devised, however, and aspects of nonmonotonic reasoning have been effectively implemented by using the techniques of logic programming (programming in languages based on formal logic).

A third issue concerns the piecemeal character of much work in this area. Theories have been developed and elaborated in response to particular problem cases and with relatively little attention to the features of the inference relations they generate. Recent work has begun to address this issue and to identify properties that are desirable in any nonmonotonic inference relation (see, for example, Makinson, 1994). One of these is that adding a conclusion back into the premise set should not undermine any other conclusions – a property known as cautious monotonicity.

Finally, a word about probabilistic logics. Most nonmonotonic formalisms embody a qualitative approach: premises and conclusions are treated as either true or false, as in deductive logic. But probabilistic logics, in which propositions are assigned continuous probability values, can also be used to model certain types of nonmonotonic inference. In probabilistic reasoning, it is common to treat a conclusion as warranted if its probability, given the premises, exceeds a certain threshold. This inference relation is nonmonotonic, since the addition of new premises may lead to a readjustment of one’s probability assignments, with the result that a conclusion that previously passed the threshold now falls short of it. However, although probabilistic logics are well suited for modeling reasoning under uncertainty (see Shafer and Pearl, 1990), it is unlikely that they can do all the work required of a theory of nonmonotonic inference (McCarthy, 1986).
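A threshold-based version of the Tweety inference can be sketched in a few lines (the probability values and the 0.9 threshold are invented purely for illustration):

```python
THRESHOLD = 0.9  # accept a conclusion only if it is sufficiently probable

def warranted(probability):
    return probability > THRESHOLD

p_flies_given_bird = 0.95             # most birds fly
print(warranted(p_flies_given_bird))  # True: 'Tweety flies' is accepted

p_flies_given_bird_penguin = 0.01     # new premise: Tweety is a penguin
print(warranted(p_flies_given_bird_penguin))
# False: conditioning on the extra premise pushes the probability below the
# threshold, so the previously warranted conclusion is withdrawn.
```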

See also: Concessive Clauses; Conditionals; Context and Common Ground; Context Principle; Cooperative Principle; Formal Semantics; Game-theoretical Semantics; Implicature; Indeterminacy; Inference: Abduction, Induction, Deduction; Lexical Conditions; Logic and Language; Logical Consequence; Metonymy; Monotonicity and Generalized Quantifiers; Neo-Gricean Pragmatics; Nonstandard Language Use; Pragmatic Presupposition; Semantics–Pragmatics Boundary.

Bibliography

Antoniou G (1997). Nonmonotonic reasoning. Cambridge, MA: MIT Press.
Brewka G, Dix J & Konolige K (1997). Nonmonotonic reasoning: an overview. Stanford: CSLI Publications.
Gabbay D M, Hogger C J & Robinson J A (eds.) (1994). Handbook of logic in artificial intelligence and logic programming, vol. 3: Nonmonotonic reasoning and uncertain reasoning. Oxford: Oxford University Press.
Ginsberg M L (ed.) (1987). Readings in nonmonotonic reasoning. Los Altos, CA: Morgan Kaufmann.
Hanks S & McDermott D (1987). ‘Nonmonotonic logic and temporal projection.’ Artificial Intelligence 33, 379–412.
Lifschitz V (1994). ‘Circumscription.’ In Gabbay, Hogger & Robinson (eds.). 298–352.
Makinson D (1994). ‘General patterns in nonmonotonic reasoning.’ In Gabbay, Hogger & Robinson (eds.). 35–110.
McCarthy J (1980). ‘Circumscription – a form of nonmonotonic reasoning.’ Artificial Intelligence 13, 27–39. [Reprinted in Ginsberg (1987).]
McCarthy J (1986). ‘Applications of circumscription to formalizing common-sense knowledge.’ Artificial Intelligence 28, 89–116. [Reprinted in Ginsberg (1987).]
McCarthy J & Hayes P J (1969). ‘Some philosophical problems from the standpoint of artificial intelligence.’ In Meltzer B & Michie D (eds.) Machine intelligence, vol. 4. Edinburgh: Edinburgh University Press. 463–502. [Reprinted in Ginsberg (1987).]
Moore R C (1985). ‘Semantical considerations on nonmonotonic logic.’ Artificial Intelligence 25, 75–94. [Reprinted in Ginsberg (1987).]
Reiter R (1978). ‘On closed world data bases.’ In Gallaire H & Minker J (eds.) Logic and data bases. New York: Plenum. 55–76. [Reprinted in Ginsberg (1987).]
Reiter R (1980). ‘A logic for default reasoning.’ Artificial Intelligence 13, 81–132. [Reprinted in Ginsberg (1987).]
Reiter R (1987). ‘Nonmonotonic reasoning.’ Annual Review of Computer Science 2, 147–186.
Shafer G & Pearl J (1990). Readings in uncertain reasoning. San Mateo, CA: Morgan Kaufmann.
Shanahan M (1997). Solving the frame problem. Cambridge, MA: MIT Press.

Nonstandard Language Use

A Bezuidenhout, University of South Carolina, Columbia, SC, USA

© 2006 Elsevier Ltd. All rights reserved.

To talk of non-standard language use presupposes that we understand what it is for a use of language to be standard. A standard use can be thought of either as a use that conforms to some standard (viz. to a linguistic rule or convention) or as a use that is the usual or most common one. So, a non-standard use is either one that flouts a linguistic convention or that is an uncommon or novel use. Various linguistic phenomena somehow marked as non-standard will be discussed and classified as belonging to one or another of these two categories of non-standard usage. A possible third sense of ‘non-standard,’ which is the opposite of Bach’s (1995) notion of a standardized use, also will be discussed. Although non-standardized uses may be pragmatically marked uses of expressions, they don’t flout linguistic conventions and are not novel. Speech errors, such as cases in which a speaker says ‘pig vat’ instead of ‘big fat,’ also could be viewed as cases of non-standard language use, but will not be discussed in this article.

A conventional use, one conforming to a linguistic convention, is an agreed-upon use. The agreement needn’t be an explicit one but simply a matter of members of a linguistic community conforming their use to that of other members of the community, so long as others do the same. Malapropisms are one sort of example of non-conventional use. Malapropisms are cases where a speaker (unintentionally) substitutes a word for another word, whose agreed-upon meaning is different from what the speaker intends to convey. Usually the substitution is based on some sort of sound similarity between the ‘correct’ and ‘incorrect’ usage. In his play The Rivals, Richard Sheridan has a lot of fun at the expense of his character Mrs. Malaprop. For example, Mrs. Malaprop exclaims to Sir Anthony Absolute, ‘you surely speak laconically!,’ meaning to surmise that Sir Anthony is speaking ironically. A little later she hopes that Sir Anthony will regard her niece, Lydia Languish, ‘as an object not altogether illegible,’ meaning that she hopes her niece will not be regarded as someone ineligible for a match with Sir Anthony’s son. Such malapropisms are not confined to fiction. For example, in some experimental work on referential communication reported by Brown (1995), a participant in a map-reading task referred to an electric pylon as a colon. Another example occurred on the June 16, 2004, broadcast of the Tavis Smiley Show on National Public Radio. One of Smiley’s guests praised him by saying, ‘You have spoken up so unanimously for black people.’ The concept of unanimity only makes sense when one is talking about a group of people, and Smiley is a single individual. However, what the guest was thinking is that whenever Smiley speaks up he speaks up for black people. So, if one thinks of Smiley and his past selves as a group, one can think of them as speaking with one voice and hence as unanimous. This last example shows that malapropisms are not always based on sound similarities but can arise from quite complicated cognitive comparisons (e.g., Smiley and his past selves are like a group of individuals).

Malapropisms are extreme examples of misuse, and rather idiosyncratic. There are other examples of ‘misuse’ that are more widespread among members of a linguistic community. For example, many people use the sentence, ‘Hopefully, Bush will be defeated’ to express their hope that Bush will be defeated, not the thought that the defeating of Bush will be performed with hope. Or again, many people say things such as, ‘Me and him are going to the movies’ or ‘I could care less’ or ‘Everyone will be bringing their spouses’ instead of ‘He and I are going to the movies’ or ‘I couldn’t care less’ or ‘Everyone will be bringing his or her spouse.’ Given how widespread these latter ‘misuses’ are, one might argue that, rather than being misuses of English, they are acceptable uses within some ‘non-standard’ dialects of English. Calling these dialectical varieties ‘non-standard’ is rather contentious. There are some language purists who argue that only ‘standard’ English is acceptable usage. However, as Chomsky (2000) points out, calling some varieties of English ‘standard’ and regarding them as normative for English usage generally has nothing to do with the linguistic properties of these language varieties, but is purely a matter of power politics. The dialect of the ruling group is inevitably regarded as the norm for correct speech. Everyone has heard the old saw about a language being a dialect with an army and a navy. But as Chomsky remarks: “To say that one variety of English is ‘right’ and another ‘wrong’ makes as much sense as saying that Spanish is right and English is wrong” (2000: 71).

According to the second sense of ‘non-standard’ articulated above, non-standard uses are novel or ‘nonce,’ i.e., one-off, uses. The largest class of uses that have been thought of as ‘nonce’ are the uses of sentences to convey Gricean particularized conversational implicatures. For instance, a mother asks her teenage son to tidy his room and he replies: ‘I have to beat this boss real quick.’ The son is explicitly talking about what he has to do in the video game he’s playing but is implicitly refusing to clean his room right then. In this sense of ‘nonce’ meaning, the novel meaning is something that is merely indirectly or implicitly conveyed. Such nonce meanings are conveyed because the hearer understands the speaker to have explicitly said one thing but to have meant something else (either in addition or instead). Thus, implicatures are not nonce meanings for expressions. Rather, they are novel meanings conveyed by the use of expressions that are understood to have their conventional meanings. Metaphorical, ironic, metonymic, and other such ‘non-literal’ uses of expressions have also been thought of as nonce uses. For example, ‘The soldiers butchered the villagers’ can be used metaphorically to express the fact that the soldiers killed the villagers in a particularly inhumane way. ‘You’re a fine friend’ can be used ironically to express the fact that the hearer has done something unfriendly. ‘The ham sandwich is getting restless’ can be used metonymically to express the fact that the orderer of the ham sandwich is getting restless. Many philosophers analyze such uses in terms of Gricean particularized conversational implicatures. The speaker says one thing but indirectly conveys something else. Inasmuch as such uses are analyzed in Gricean terms, these again would not be cases in which expressions are given novel meanings. Rather, by using expressions with their conventional meanings, speakers indirectly express novel meanings. (Of course, the examples of metaphor, irony, and metonymy given above are very hackneyed, and in this sense are not ‘novel.’ For example, the ‘butcher’ metaphor has become a ‘dead’ metaphor. Many dictionaries of English usage now list ‘to kill in a brutal manner’ as one of the meanings of ‘to butcher.’ Other, ‘fresher,’ examples could have been given, but these would have required more context setting.)

Not everyone would agree that cases of metaphor and metonymy are correctly analyzed in Gricean terms. An alternative ‘contextualist’ view denies that we interpret the entire sentence literally and only then (because of a violation of some Gricean maxim, such as the maxim of quality) derive an implicated metaphorical meaning. For example, consider the following:

(1) Dickens is easy to read.
(2) The appendectomy is on top of the cabinet.
(3) The stinkbug has left the room.

(Of course, the examples of metaphor, irony, and metonymy given above are very hackneyed, and in this sense are not ‘novel.’ For example, the ‘butcher’ metaphor has become a ‘dead’ metaphor. Many dictionaries of English usage now list ‘to kill in a brutal manner’ as one of the meanings of ‘to butcher.’ Other, ‘fresher,’ examples could have been given, but these would have required more context setting.) Not everyone would agree that cases of metaphor and metonymy are correctly analyzed in Gricean terms. An alternative ‘contextualist’ view denies that we interpret the entire sentence literally and only then (because of a violation of some Gricean maxim, such as the maxim of quality) derive an implicated metaphorical meaning. For example, consider the following (1) Dickens is easy to read. (2) The appendectomy is on top of the cabinet. (3) The stinkbug has left the room.

In (1), which is an example of a producer-for-product metonymy, ‘Dickens’ is used to refer to the novels that Dickens wrote. In (2), one can imagine a context in which ‘the appendectomy’ is used to refer to the file of the patient who is scheduled for an appendectomy. And in (3), ‘the stinkbug’ can be used metaphorically to refer to a person whose personal hygiene leaves much to be desired. According to the contextualist, local pragmatic processes operate on the referring terms (‘Dickens,’ ‘the appendectomy,’ ‘the stinkbug’) to yield pragmatically determined contents. These become a part of the propositional contents that are directly expressed by sentences (1)–(3) in their conversational contexts. For example, the speaker of (2) doesn’t say falsely that a medical procedure is on the cabinet and thereby indirectly convey the proposition that the file of a patient scheduled for an appendectomy is on the cabinet. Rather, the latter proposition is directly expressed. See Carston (1997) and Bezuidenhout (2001) for a defense of the contextualist view of metaphor. If this view is correct, there is a sense in which the expressions ‘Dickens,’ ‘the appendectomy,’ etc. are given novel meanings in different contexts. These novel meanings are not merely indirectly conveyed by literally saying something else. Note, this view is not committed to a ‘Humpty Dumpty’ theory of meaning, according to which expressions can mean whatever their users intend them to mean. (In Through the Looking-Glass, Humpty Dumpty says to Alice, ‘There’s glory for you’ and claims to mean ‘There’s a nice knockdown argument for you.’) According to the contextualist view, the novel contextual meanings of expressions are pragmatic

Nonstandard Language Use 641

developments of semantically encoded meanings. The local pragmatic processes that determine contextual meanings are thus not completely unconstrained. Speakers cannot convey novel meanings that are completely unrelated to conventionally encoded meanings. (For a more radical view, see Donnellan [1968] and Davidson [1986]. They argue that, in appropriate contexts, ‘glory’ can mean a nice knockdown argument.) Another class of uses that have been called ‘nonce’ are ones in which speakers coin new words. Attested examples are as follows, the first two given by Clark (1992: 315–316), the third by Higginbotham (2002: 577): (4) He’s being grand juried for possible conflict in real estate loans. (5) Today I’m going gallerying. (6) I’ve hard-drived the diskette.

These novel uses are similar to ones that have already entered common usage, such as ‘He docked the boat’ or ‘The librarian shelved the books.’ These uses all obey a productive rule, in which a noun is turned into a verb by the addition of a verbal particle (‘–ed’ to signal past tense, ‘–ing’ to signal an ongoing or incomplete action). The novel words are coined because there is no readily available, short expression already in common use that quite covers the speaker’s intended meaning. In some cases, people invent new words when there are already perfectly good words in common usage that mean exactly what the speaker intends. For example, on the June 16, 2004, edition of the Tavis Smiley Show on NPR, one of Smiley’s guests said: ‘You have to conversate and talk to your child on a man to man level.’ Presumably what was meant was that one should converse with one’s child. What is already a verb, ‘to converse,’ is subjected to a morphological rule for verb formation, by the addition of the suffix ‘–ate,’ perhaps on analogy with the verb ‘to orientate,’ which is frequently used instead of the plain ‘to orient’ (this latter being the norm in standard American English, although not in standard British English, which favors ‘to orientate’). A third sense of ‘standard use’ should be mentioned. It corresponds to what Bach (1995) calls a standardized use, which he distinguishes from a conventional use. One reason that Bach appeals to his notion of standardized use is to counter attempts by relevance theorists such as Carston (1995) to reconstrue Gricean generalized conversational implicatures as part of what is directly expressed. Consider, for example, the Gricean treatment of scalar terms, such as numerals. On this view, ‘Grice has three teeth’

literally says that Grice has at least three teeth, but it conversationally implicates in a generalized way that Grice has at most three teeth. These two propositions together entail that Grice has exactly three teeth. Carston’s alternative view is that ‘three teeth’ is semantically underspecified. When used in an appropriate context, it can be specified in such a way that what is said (i.e., what is explicitly conveyed) in that context by ‘Grice has three teeth’ is that Grice has exactly three teeth. Bach offers a third alternative. On this view, ‘three teeth’ has a standardized use to convey exactly three teeth. A standardized use is something like a default use. It is a default in the sense that it has become a regular practice – perhaps a mental habit – for members of the linguistic community to use the word(s) in question in a certain way, given that circumstances are normal. Only if circumstances are somehow unusual will a different interpretation be called for. Bach denies that the standardized meaning is explicitly conveyed. It is implicitly conveyed (he calls it an ‘impliciture’), but because it is conveyed by default in normal circumstances, it gives the illusion of being directly conveyed. The inference that is needed to derive this standardized interpretation has become compressed by precedent. Hence, hearers are not aware of having made any inference. A similar view, according to which generalized implicatures are default inferences, has been defended by Levinson (2000). The details of Bach’s views are not important here. The main point is to introduce a third sense of ‘nonstandard use.’ It corresponds to a non-standardized use in Bach’s sense. A non-standardized use is one that is called for when circumstances are somehow not ‘normal’ and hence the default meaning must be overridden. For instance, suppose Grice has a large collection of shark teeth and you need exactly three shark teeth for an artwork you are creating. Believing that Grice might be willing to give up some of his shark teeth for you I say ‘Grice has three teeth.’ Arguably what is conveyed here is that Grice has at least three teeth, not that he has exactly three. Now, according to Griceans such as Bach and Levinson, the conventionally encoded meaning of ‘three teeth’ is at least three teeth. In other words, when circumstances are not normal and the default meaning is overridden, the meaning that is conveyed is the literally encoded meaning. So the non-standardized use in this case is the conventional one (i.e., it is a standard use in the first sense articulated above). In the example given in the previous paragraph, pragmatic context indicated that a non-default understanding of ‘three teeth’ was called for. Sometimes, however, the linguistic expression itself
signals whether it is to be given a default or non-default reading. There are pragmatically marked and unmarked ways of saying things. Unmarked ways of saying things call for default interpretations. Marked ways of saying things call for non-default interpretations. For example, there is a difference between saying ‘The policeman stopped the car’ and ‘The policeman brought the car to a stop.’ The former suggests that the stopping was the normal sort of stopping, whereas the latter suggests that the policeman did something unusual to halt the car. But although non-standardized uses may be pragmatically marked uses of expressions, they don’t flout linguistic conventions and they are not novel uses, or at least no more novel than their unmarked alternatives. Both ‘The policeman stopped the car’ and ‘The policeman brought the car to a stop’ will call for quite a lot of pragmatic inferencing to understand what the speaker said.

See also: Coherence: Psycholinguistic Approach; Context and Common Ground; Context Principle; Conventions in Language; Cooperative Principle; Human Reasoning and Language Interpretation; Implicature; Irony; Metaphor and Conceptual Blending; Neo-Gricean Pragmatics; Nonmonotonic Inference; Pragmatic Determinants of What Is Said; Pragmatics and Semantics; Psychology, Semantics in; Selectional Restrictions; Semantics–Pragmatics Boundary; Thought and Language.

Bibliography

Bach K (1995). ‘Standardization vs. conventionalization.’ Linguistics and Philosophy 18, 677–686.
Bezuidenhout A (2001). ‘Metaphor and what is said: a defence of a direct expression view of metaphor.’ Midwest Studies in Philosophy 25, 156–186.
Brown G (1995). Speakers, listeners and communication: explorations in discourse analysis. Cambridge: Cambridge University Press.
Carston R (1995). ‘Quantity maxims and generalized implicature.’ Lingua 96, 213–244.
Carston R (1997). ‘Enrichment and loosening: complementary processes in deriving the proposition expressed?’ Linguistische Berichte 8, 103–127.
Chomsky N (2000). New horizons in the study of language and mind. Cambridge: Cambridge University Press.
Clark H (1992). Arenas of language use. Chicago: University of Chicago Press.
Davidson D (1986). ‘A nice derangement of epitaphs.’ In Lepore E (ed.) Truth and interpretation: perspectives on the philosophy of Donald Davidson. Oxford: Blackwell. 433–446.
Donnellan K (1968). ‘Putting Humpty Dumpty together again.’ Philosophical Review 77, 203–215.
Higginbotham J (2002). ‘On linguistics in philosophy and philosophy in linguistics.’ Linguistics and Philosophy 25, 573–584.
Levinson S (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press.

Number

G G Corbett, University of Surrey, Guildford, UK

© 2006 Elsevier Ltd. All rights reserved.

Number is one of the most common features of the world’s languages. While it is common, it is certainly not uniform. The situation in languages like English is often treated as normal. Yet the number feature reveals many surprises, and the system found in English is a rather extreme one, when seen against the possibilities established from a survey of the languages of the world. We shall consider characteristics of the English-type systems in turn, showing some of the ways in which they are typical or atypical and the contrasting types of system found elsewhere. We shall give particular prominence to the question of number values. This entry is based on Corbett (2000), where a much fuller account is given.

Nominal and Verbal Number

English makes a semantic and a formal distinction between, for instance, cat and cats, child and children. In English, this number distinction reflects a nominal feature. It is realized on nouns via relatively simple inflectional morphology. (There are many languages with more complex morphological realization of number; see, for instance, the account of Nilo-Saharan languages in Dimmendaal, 2000.) In English, when the number feature is realized on verbs, this is by agreement with a subject noun phrase, as in the children were playing together. The verb is plural to indicate the number of children, and not the number of playing events. That is, we have a nominal feature which can be realized where expected, on the noun phrase, and also on the verb. While there are many languages like English in this respect, we also find instances of verbal number,
that is, where number is realized on the verb to indicate the number of events or the number of participants. For an example we turn to Rapanui (an Oceanic language spoken on Easter Island) (Veronica Du Feu, personal communication):

(1) ruku ‘dive’
(2) ruku ruku ‘go diving’

The form in (2) is appropriate if there is more than one dive; it does not require more than one person to be diving. The reduplicated form indicates verbal plurality: there are plural events. Verbal number may also be concerned with the number of participants rather than the number of events (see, for instance, Durie, 1986). Verbal number may be restricted to relatively small numbers of verbs, and it rarely shows more than a two-way opposition (for details see Corbett, 2000: 243–264). Nominal number shows a greater range of possibilities and we shall concentrate on that type.

Number as an Obligatory Category

Number is an obligatory category in English. If someone says there’s a goat in the garden when there are several, this is misleading and inappropriate. In many languages, however, the use of the plural is not obligatory, but it is available when it is important to mark number. Perhaps the clearest example is provided by the Cushitic language Baiso, spoken on Gidicho Island in Lake Abaya (southern Ethiopia) and on the western shore of the lake. Baiso nouns can mark ‘general’ meaning, that is, they have a form which is noncommittal as to number, along with singular, paucal (example (8) below), and plural (Corbett and Hayward, 1987). The form lúban ‘lion’ does not specify a number of lions (it could be one, or more than that) (Dick Hayward, personal communication).

(3) lúban foofe
    lion.GENERAL watched.1SING
    ‘I watched lion’

Having a distinct form for general number is highly unusual. There are many languages which can express general meaning, but they do so by means of a form shared with the singular. This more usual situation, with general identical to singular, can be illustrated from Arbore (also a Cushitic language). We find pairs like the following (Hayward, 1984: 159–183).

(4) a. keléh ‘gelded goat(s)’
    b. keleh-mé ‘gelded goats’
    c. goran ‘heifer cow(s)’
    d. gor-nó ‘heifer cows’
    e. kér ‘dog(s)’
    f. ker-ó ‘dogs’

Although the morphology may appear comparable to English, the meanings of the forms are different: ker-ó guarantees more than one dog, and it would be used when it is important to indicate this; kér does not imply only one: it might be one, it might be more than that. (There are other, less frequent number pairings in Arbore.) Systems like that of Arbore, in which number is not an obligatory category (and so is arguably not inflectional), are common in the world’s languages. At the opposite extreme from English we find Pirahã, the only remaining member of the Mura family, spoken in 1997 by some 220 people along the Maici River (Amazonas, Brazil). It has been described by Everett (1986), who stated that it had no system of grammatical number, even in the pronouns. This is extremely rare.

The Nominals Involved in the Number System

In English and similar languages, the majority of nominals can mark number, from the personal pronouns and nouns denoting persons (e.g., nurse) down to those denoting inanimate objects (e.g., blanket), and even to some denoting abstracts (e.g., week). Many languages restrict the number opposition to fewer nominals, namely those which are high in animacy. Thus, in Slavey (an Athapaskan language spoken in parts of the Northwest Territories, British Columbia, and Alberta, Canada), number may be marked on the noun phrase, provided the noun denotes a human or a dog (Rice, 1989: 247). We wish to investigate the countability preferences (Allan, 1980) of nominals (including pronouns as well as nouns). There is considerable variety in the world’s languages. Consider the following Warrgamay example (Queensland, Australia) (Dixon, 1980: 266–268):

(5) yibi-yibi ŋulmburu-ŋgu
    child-REDUP.ABS woman-ERG
    wurrbi-bajun-du buudi-lgani-y
    big-very-ERG take-CONT-UNMARKED.TENSE
    malan-gu
    river-ALL
    ‘the very big woman/women is/are taking the children to the creek’


We see that a noun can be marked for number in Warrgamay, as in yibi-yibi ‘child.’ However, this is not required; forms like ŋulmburuŋgu ‘woman/women’ are quite normal. Dixon (1980: 267) stated that a noun in this language ‘is not normally specified for number’ and suggested that this is the typical situation in Australia (1980: 22). Warrgamay is particularly interesting, since in (5) the verb does not determine number either. As for pronouns, the first and second persons (singular, dual, and plural) and the third dual and plural are ‘strictly specified for number and are used only for reference to humans, and occasionally tame dogs’ (Dixon, 1981: 39–40). The form filling the third singular slot can range over all persons and all numbers (it can have nonhuman as well as human reference) but its ‘unmarked sense’ (1981: 40) is third-person singular. Thus, in Warrgamay, the word for ‘woman’ is not normally specified for number, while in English it must be. Yet the first and second persons are specified for number. This is part of a wider pattern. The observation that the sets of nominals involved in number distinctions are related to animacy was taken up and investigated further by Smith-Stark (1974), who proposed the version of the animacy hierarchy shown in Figure 1. Smith-Stark suggested that plurality ‘splits’ a language if ‘it is a significant opposition for certain categories but irrelevant for others’ (1974: 657). The type of evidence he produced concerned marking of the noun phrase for number (usually by marking on the noun itself) and agreement in number (mainly verbal agreement but with some instances of agreement within the noun phrase). As postulating a typological hierarchy implies, various languages make the split at different points. An example of a split between pronouns and nouns is found in Bengali, since according to Masica (1991: 225–226) number is obligatory for pronouns, while other plural suffixes are optional. In Gunwinggu, a Gunwingguan language of western Arnhem Land, pronominal affixes on the verb mark number for humans but not normally for nonhumans (Evans, 2003: 234–235, 417–418). A lower split is found in agreement in Mundari (a Munda language of east India); verbs agree in number with nominals on the hierarchy as far down as animate nouns, but not with inanimates (Bhattacharya, 1976: 191–192).

Figure 1 The Smith-Stark (animacy) hierarchy.

Interesting information on which nouns have a plural, and whether its use is obligatory, can be found in Haspelmath’s (2005) map for the World atlas of language structures. His data, from a sample of 290 languages, are in accord with the animacy hierarchy. In addition, he mapped the distribution of the different types. Obligatory number marking for different types of noun (the ‘English pattern’) is found in western and northern Eurasia and much of Africa. Elsewhere, Haspelmath’s sample reveals a more mixed picture. He found optional marking of number to be common in Southeast and East Asia and Australia, while a lack of number marking is widespread in New Guinea and Australia. For comparable data on pronouns see Daniel (2005). Even in languages that mark number as extensively as English does, there are usually some nouns which are off the bottom of the scale, those which do not enter into number oppositions. In English, they typically pattern with the singulars; thus reliability has the form of a singular and takes singular agreements. On the other hand, in Manam (Lichtenberk, 1983: 269), mass nouns are treated as plural (unless they refer to a single quantity):

(6) daŋ di-éno
    water 3PL-exist
    ‘there is water (available)’

In various Bantu languages we often find both possibilities: some mass nouns are singular and some plural. We have concentrated on the basic singular/plural opposition. An important goal of the typological investigation of number is to integrate the typology of the nominals involved in number with the typology of values (see ‘Number Values’). This is a significant challenge. Smith-Stark considered only plurals, suggesting that other values, such as the dual, would follow the plural. This situation is certainly not the only possibility. For instance, in Avar, some nouns have a paucal, and in modern Hebrew and in Maltese, some nouns have a dual (in addition to singular and plural). The nouns involved are relatively few in comparison with those with a plural, and they are not those at the top of the animacy hierarchy. These paucals and duals can be analyzed as ‘minor numbers’; they do not pattern according to the
animacy hierarchy, but they are counterexamples of a narrowly definable type. Then there are other apparent number values that appear to run counter to the hierarchy; for example, associatives, as in Bagvalal (Nakh-Daghestanian) jaš ‘girl, daughter,’ jaš-i (normal plural) ‘girls, daughters,’ jaš-āri (associative plural) ‘daughter and her family’ (Danièl, 2001: 135–136). Associatives involve an additional category and so fall outside the scope of the constraints above (Corbett and Mithun, 1996; Moravcsik, 2003; Moravcsik and Daniel, 2005). A discussion of these various complications is beyond the scope of this entry; see Corbett (2000: 54–88) for more detailed data on the animacy hierarchy and Corbett (2000: 89–132) for an integration of the typology of the nominals involved in number oppositions with the typology of values.

The Semantics of Number

So long as we stay with innocent examples like English cat and cats, the semantics of number look clear, since cats is appropriate for more than one instance of cat. In fact, there are many concealed difficulties: a concise overview of work on number within formal semantics can be found in Link (1998), who showed how the study of plurals related to different movements in linguistic semantics and provided a substantial bibliography. Ordinary plurals like cats are only one possible type of number; if we consider I versus we, it becomes clear that here we have an associative use of the plural; we is most often used for ‘I and other(s) associated with me’ (Moravcsik, 2003; Corbett, 2004). These different uses are bound up with the nominal’s position on the animacy hierarchy. Thus, in English, while the items at the very top of the hierarchy have associative uses, nouns lower on the hierarchy, like coffee, have a number opposition provided they are recategorized. If coffee denotes a type, or a typical quantity of coffee, then there is an available plural, as in: she always keeps three different coffees, and we’ve ordered two teas and three coffees.

Number Values

A striking way in which English is limited in its number system is in terms of the number values. It can distinguish only between singular and plural. Many languages have more distinctions.

The Dual

A common system is one in which singular and plural are supplemented by a dual to refer to two distinct real-world entities. Examples are widespread. We will take Central Yupik, as spoken in southwestern Alaska; the number forms are: singular arnaq ‘woman,’ dual arnak ‘two women’ and plural arnat ‘three or more women.’ The addition of the dual has an effect on the plural, which is now used for three or more real-world entities. More generally, a change in the system gives the plural a different meaning. As with the plural, the use of the dual need not be obligatory; Slovenian is a language with a dual, but one whose use is subject to interesting conditions (see Derganc, 2003). And in Kxoe (Central Khoisan) the plural is used of a man and wife; to use the dual would be an insult, because it would suggest they were together by chance (Treis, 2000).

The Trial

The ‘trial’ refers to three distinct real-world entities, and it occurs in systems with the number values: singular, dual, trial and plural. A particularly clear instance is found in Larike (Larike–Wakasihu), a Central Moluccan language spoken on the western tip of Ambon Island, Indonesia. (Central Moluccan forms part of the Central Malayo–Polynesian subgroup of Austronesian.) This example is from Laidig and Laidig (1990: 96):

(7) Matidui-tue au-huse nusa
    3TRIAL-live at-there island
    ‘those three live on the island over there’

The reason for citing Larike is that it has a ‘genuine’ trial, which is available for use for exactly three (Laidig and Laidig, 1990: 92). The dual and trial forms of Larike can be traced back to the numerals ‘two’ and ‘three’ and the plural comes historically from ‘four.’ This is a frequent development in Austronesian languages. Accordingly, some linguists use the term ‘trial’ more widely, according to the form of the inflections (derived historically from the numeral three), even for languages where the forms are used synchronically for small groups, including those greater than three (for which ‘paucal’ would be more accurate). The trial of Larike is not a paucal, but strictly a trial according to Laidig and Laidig.

The Paucal

As just noted, paucals are used to refer to a small number of distinct real-world entities, somewhat akin to the English quantifier ‘a few’ in meaning. There is no specific upper numerical limit on the use of paucals (and their lower boundary varies according to the system in which they are found). Let us return to the Cushitic language Baiso. Besides the
general number forms (as in (3), where number is not specified), nouns also have singular, paucal and plural. Here is the paucal (Dick Hayward, personal communication).

(8) luban-jaa foofe
    lion-PAUCAL watched.1SING
    ‘I watched a few lions’

The paucal is used in Baiso when referring to a small number of individuals, from two to about six. The Baiso system (for nouns) of singular/paucal/plural, also with a ‘general’ form, is rare. The paucal is usually found together with a dual, so that the system of number values is singular/dual/paucal/plural. This widespread system is found, for example, in Manam, an Oceanic language spoken on islands off the north coast of New Guinea (Lichtenberk, 1983: 108–109).

The Largest Systems

It is interesting to ask what is the largest system of number values. There are claims in the literature for languages with a ‘quadral,’ for four distinct real-world entities, as part of a system of five number values. Systems with five number values have indeed been found, and these are the largest systems, but the evidence specifically for a quadral is less certain. All the claims come from within the Austronesian family, and the best documented case is Sursurunga (Hutchisson, 1986, and personal communications), spoken in southern New Ireland. The forms in question are restricted to the personal pronouns, but are found with all of them, the first person (inclusive and exclusive), the second and the third. Given the conclusion to be reached, we shall label the forms ‘greater paucal’ (Table 1). Here is an example of what we shall call a greater paucal form (Hutchisson, 1986, and personal communications).

(9) gim-hat káwán
    1EXCL-GREATER_PAUCAL maternal.uncle:nephew/niece
    ‘we four who are in an uncle–nephew/niece relationship’

The greater paucal has two further uses. First, Sursurunga plural pronouns are never used with terms for dyads (kinship pairings like uncle–nephew/niece). Here the greater paucal is used, for a minimum of four, and not just for exactly four (Hutchisson, 1986: 10). Second, a speaker may use the first-person-inclusive greater paucal in hortatory discourse to suggest joint action involving more than four persons, including the speaker. These two uses account for the majority of instances of the use of the forms in question. In terms of meaning, the term ‘quadral’ is therefore not appropriate, and so ‘paucal’ is more accurate. Why then have we suggested ‘greater paucal’? To see why, we need to examine the rest of the number system of Sursurunga (using data and judgments from Don Hutchisson, personal communications). The regular use of the dual is quite strictly for two people. It also has one special use for the singular when the referent is in a taboo relationship to the speaker. The so-called ‘trial’ can be used for three, and it can also refer to small groups, typically around three or four, and to nuclear families of any size. It is, therefore, not strictly a trial, as discussed earlier: it could be glossed as ‘a few’ and it should be labelled ‘paucal.’ The development of trial into a paucal is a frequent one, as we noted above. The so-called ‘quadral,’ as we have just seen, is primarily used in different ways with larger groups, of four or more (an appropriate gloss would be ‘several’). This too would qualify as a paucal. There are therefore two paucals in Sursurunga, a paucal (traditionally ‘trial’) and a greater paucal (traditionally ‘quadral’). The plural is for numbers of entities larger than what is covered by the greater paucal; however, there is no strict dividing line. Using semantic labels, we can say that the number feature in Sursurunga has the values: singular, dual, paucal, greater paucal, and plural. Thus, Sursurunga has a well-documented five-value number feature. There are certainly other languages with five number values, namely Lihir, Tangga and Marshallese; we do not have such detailed information as for Sursurunga and there is no certain case of a quadral: it seems that in all cases the highest value below plural in such systems can be used as a paucal.

Table 1 Emphatic pronouns in Sursurunga (based on Hutchisson, 1986, and personal communications)

              Singular    Dual    Paucal    Greater paucal    Plural
1 exclusive   iau         giur    gimtul    gimhat            gim
1 inclusive   —           gitar   gittul    githat            git
2             iáu         gaur    gamtul    gamhat            gam
3             -i/on/ái    diar    ditul     dihat             di

Note that á is used to indicate schwa (ə).
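The two paucals are transparently built on the plural forms: in Table 1, each paucal adds -tul and each greater paucal adds -hat to the corresponding plural pronoun (compare the segmentation gim-hat in example (9)). A minimal sketch of this compositional pattern in Python; the stem analysis and function name are ours, offered for illustration rather than as Hutchisson’s analysis:

```python
# Plural emphatic pronouns serve as stems for the two paucals in
# Sursurunga (Table 1): paucal = stem + -tul, greater paucal = stem + -hat.
PLURAL_STEMS = {"1 exclusive": "gim", "1 inclusive": "git", "2": "gam", "3": "di"}

def paucal_forms(person):
    """Derive the paucal and greater paucal pronouns from the plural stem."""
    stem = PLURAL_STEMS[person]
    return {"paucal": stem + "tul", "greater paucal": stem + "hat", "plural": stem}

# paucal_forms("1 exclusive")
# -> {'paucal': 'gimtul', 'greater paucal': 'gimhat', 'plural': 'gim'}
```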


Besides the split in the paucal, we may also find languages that split the plural, with ‘greater plurals’ of different types, as for instance in Arabic (Ojeda, 1992). Greater plurals may imply an excessive number or else all possible instances of the referent.

Number Mismatches and the Agreement Hierarchy

Nominal number may be realized directly on the noun phrase; it may also be marked on the verb (agreement). In the straightforward cases the values of number match. There are also many instances where we find a mismatch, and English is an especially rich source here. An interesting mismatch involves nouns which are singular morphologically and (typically) have a normal plural and yet, when singular, may take plural agreement, particularly in British English:

(10) the committee have decided.

Having an agreement choice here is made possible by the semantics of the noun. However, the likelihood of examples like (10) being acceptable depends on the variety of English involved, and speakers from different parts of the world have opposing judgments. Speakers of different varieties were asked to consider sentences apparently produced by nonnative speakers of English and to correct them where necessary (data from Johansson, 1979: 203; Bauer, 1988: 254). The relevant test sentence was:

(11) The audience were enjoying every minute of the show.

The results are given in Table 2. For each variety, n is the number of respondents, and the other figures are the percentage giving each response. There is substantial divergence between the varieties. Most speakers of British English were happy with the example as given, as were the majority of New Zealand English speakers (for further comparative data see Levin, 2001, and Corbett, 2006).

Table 2 Judgments on agreement with ‘committee-type’ nouns in three varieties of English

Response (%)      GB (n = 92)   US (n = 93)   NZ (n = 102)
No correction     77.2          5.4           72.5
Was enjoying      15.2          90.3          20.6
Other response    7.6           4.3           6.9

GB = Great Britain; US = United States; NZ = New Zealand.

Another source of variation is the ‘agreement target’; while Table 2 is concerned with the predicate, it is known that the relative frequency of plural agreement is different for other agreement targets such as the personal pronoun. So let us examine the different agreement targets. In attributive position there is no choice: only singular is possible:

(12) that committee . . . / *those committee.

In the predicate, as we know, there are two possibilities; both numbers are possible in British English:

(13) The committee has/have agreed to reconvene.

The English relative pronoun does not mark number, but it controls number agreement in its clause and so its number can be inferred:

(14) the committee, which has decided . . . / the committee, who have decided . . .

Both numbers are possible. This is also the case with the personal pronoun, and here speakers of American English admit the plural too:

(15) The committee . . . It . . . / They . . .

Thus, with such nouns, for speakers of British English, syntactic or semantic agreement is possible for all agreement targets, except attributive modifiers, where only syntactic agreement is acceptable. Levin (2001: 76) counted examples of these agreements with nouns like committee in American English, in a corpus from the New York Times (NYT) and from the Longman Spoken American Corpus (LSAC) (Table 3). The picture is remarkably clear. Plural agreement is more common in spoken than in written language. Moreover, the choice depends on the syntactic position. In attributive position we find only singular agreement. Plural agreement is increasingly possible in the predicate, the relative pronoun and the personal pronoun.

Table 3 Agreement in written and spoken American English (Levin, 2001: 76)

        Verbs            Relative pronouns    Personal pronouns
        n      % plural  n      % plural      n      % plural
NYT     3233   3         702    24            1383   32
LSAC    524    9         43     74            239    94

NYT = New York Times; LSAC = Longman Spoken American Corpus.

These data fit into a much more general pattern based on the agreement hierarchy, which comprises four target types (Figure 2).

Figure 2 The agreement hierarchy.

Possible agreement patterns are constrained as follows: for any controller that permits alternative agreement forms, as we move rightwards along the agreement hierarchy, the likelihood of agreement with greater semantic justification will increase monotonically (that is, with no intervening decrease).
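The monotonicity constraint can be checked mechanically against frequency data such as Levin’s. A minimal sketch in Python, with the Table 3 percentages hard-coded; the attributive value of 0 follows from the fact that only singular agreement occurs in that position, Levin’s ‘verbs’ column is taken to correspond to the predicate target, and the names and data layout are ours:

```python
# Percentage of plural (semantically justified) agreement with
# committee-type nouns, by agreement target (Levin, 2001: 76).
# Attributive modifiers allow only singular agreement, hence 0.
HIERARCHY = ["attributive", "predicate", "relative pronoun", "personal pronoun"]
PLURAL_PERCENTAGES = {
    "NYT":  {"attributive": 0, "predicate": 3, "relative pronoun": 24, "personal pronoun": 32},
    "LSAC": {"attributive": 0, "predicate": 9, "relative pronoun": 74, "personal pronoun": 94},
}

def conforms(percentages):
    """True if plural agreement never decreases rightwards along the hierarchy."""
    values = [percentages[target] for target in HIERARCHY]
    return all(left <= right for left, right in zip(values, values[1:]))

for corpus, percentages in PLURAL_PERCENTAGES.items():
    print(corpus, conforms(percentages))  # both corpora print True
```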

The agreements with committee-type nouns conform to this typological claim, and there is a good deal of further evidence, both from number, and from gender (Corbett, 1991: 225–260, forthcoming). The English examples are well known, but the phenomenon is found much more widely. For instance, Samoan has several such nouns, some morphologically simple, like aiga ‘family,’ and some derived, like ‘au-pua‘a ‘group of pigs’:

(16) Ua momoe uma le aiga
     PERF.V sleep.PL all ART.SING family
     ‘the whole family were asleep’ (O lou igoa o Peko)

(17) ‘A tete‘i le ‘au-pua‘a . . .
     but suddenly.awake.PL ART.SING COLL-pig
     ‘When the pigs suddenly awake . . .’ (Moyle)

Note that the article is singular, but the verb is plural; the use of the verb in the plural is frequent but not obligatory (Mosel and Hovdhaugen, 1992: 91, 443, details of sources there). The reason for the potential choice of agreement in these cases is clear. We have seen that the choice may be influenced by language variety but it is also constrained by linguistic factors, notably the agreement hierarchy.

Number and Numerals

In the discussion of number distinctions, we left out any mention of numerals. This is because they can have a strangely distorting effect on number use. There are languages like English, where numerals above ‘one’ typically require the plural (two cats, not *two cat). On the other hand, there are languages where the indication of number by the numeral is sufficient, and the noun stands in the singular, as in Bagvalal and Hungarian, among others. And then there are languages which have both options, as is

generally true of Slavonic languages. Taking Russian as our example, odin ‘one’ takes a singular noun; dva ‘two,’ tri ‘three’ and četyre ‘four,’ when themselves in the nominative, take a noun in the genitive singular (a situation which has arisen through the loss of an earlier dual); and pjat’ ‘five’ and above, when in the nominative, take a noun in the genitive plural. Complex numerals govern the noun according to the last element of the numeral. The numeral phrases that result from the rules given can in turn produce interesting mismatches when predicate agreement is involved.
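Stated procedurally, these government rules reduce to a decision keyed to the numeral’s last element. A minimal sketch in Python; the function name is ours, and the treatment of the teens (single-word numerals such as odinnadcat’ ‘11’, whose ‘last element’ is the whole word and which pattern with ‘five and above’) is an assumption consistent with standard descriptions of Russian:

```python
def noun_form_after_numeral(n):
    """Form of the noun after the Russian cardinal n (numeral in the
    nominative), following the government rules sketched above."""
    if 11 <= n % 100 <= 14:           # teens are simple numerals: gen. plural
        return "genitive plural"      # dvenadcat' domov '12 houses'
    last = n % 10                     # last element of a complex numeral
    if last == 1:
        return "singular"             # odin dom, dvadcat' odin dom
    if last in (2, 3, 4):
        return "genitive singular"    # dva doma, tridcat' tri doma
    return "genitive plural"          # pjat' domov, sto domov
```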

Conclusion

Number is indeed a more complex category than is generally recognized, and the English system, although a common one, is at one extreme of the typological space. We noted that number may be a verbal category as well as a nominal one (but the possibilities are rather more restricted for verbal number). We also saw that number need not be obligatory, that the range of items that distinguish number varies dramatically from language to language, that the semantics of number is affected by the particular item involved, and that the values of the number category can be just two or, in various combinations, up to five. When number is marked by agreement, the values of controller and target do not always match, but the patterns of mismatching are constrained. Complex issues that we have not been able to address here include various interactions, notably between the forms of marking of number, the particular nominals that distinguish number in a given language, and the systems of number values. For these see Corbett (2000).

Acknowledgments

The support of the Economic and Social Research Council (UK) under grant RES051270122 is gratefully acknowledged.

See also: Classifiers and Noun Classes; Concepts; Context and Common Ground; Conventions in Language; Grammatical Meaning; Mass Expressions; Numerals; Partitives; Plurality; Quantifiers; Vagueness.

References

Allan K (1980). ‘Nouns and countability.’ Language 56, 541–567.
Bauer L (1988). ‘Number agreement with collective nouns in New Zealand English.’ Australian Journal of Linguistics 8, 247–259.
Bhattacharya S (1976). ‘Gender in the Munda languages.’ In Jenner P N, Thompson L C & Starosta S (eds.)

Oceanic linguistics special publication 13: Austroasiatic studies I. Honolulu: University Press of Hawaii. 189–211.
Corbett G G (1991). Gender. Cambridge: Cambridge University Press.
Corbett G G (2000). Number. Cambridge: Cambridge University Press.
Corbett G G (2004). ‘Suppletion in personal pronouns: theory versus practice, and the place of reproducibility in typology.’ Linguistic Typology 8(3).
Corbett G G (2006). Agreement. Cambridge: Cambridge University Press.
Corbett G G & Hayward R J (1987). ‘Gender and number in Bayso.’ Lingua 73, 1–28.
Danièl M (2001). ‘Imja suščestvitel’noe.’ In Kibrik A E (ed.) Bagvalinskij jazyk: grammatika: Teksty: Slovari. Moscow: Nasledie. 127–150. [Co-editors Kazenin K I, Ljutikova E A & Tatevosov S G.]
Daniel M (2005). ‘Plurality in independent personal pronouns.’ In Haspelmath M, Dryer M, Gil D & Comrie B (eds.) World atlas of language structures. Oxford: Oxford University Press.
Derganc A (2003). ‘The dual in Slovenian.’ Sprachtypologie und Universalienforschung 56, 165–182.
Dimmendaal G J (2000). ‘Number marking and noun categorization in Nilo-Saharan languages.’ Anthropological Linguistics 42, 214–261.
Dixon R M W (1980). The languages of Australia. Cambridge: Cambridge University Press.
Dixon R M W (1981). ‘Wargamay.’ In Dixon R M W & Blake B J (eds.) The handbook of Australian languages: II: Wargamay, the Mpakwithi dialect of Anguthimri, Watjarri, Margany and Gunya, Tasmanian. Amsterdam: John Benjamins. 1–144.
Durie M (1986). ‘The grammaticization of number as a verbal category.’ In Nikiforidou V, VanClay M, Niepokuj M & Feder D (eds.) Proceedings of the Twelfth Annual Meeting of the Berkeley Linguistics Society: February 15–17, 1986. Berkeley, California: Berkeley Linguistics Society. 355–370.
Evans N (2003). Pacific linguistics 541: Bininj Gun-wok: a pan-dialectal grammar of Mayali, Kunwinjku and Kune (2 vols). Canberra: Pacific Linguistics.
Everett D (1986). ‘Pirahã.’ In Derbyshire D C & Pullum G K (eds.) Handbook of Amazonian languages: 1. Berlin: Mouton de Gruyter. 200–325.
Haspelmath M (2005). ‘Occurrence of nominal plurality.’ In Haspelmath M, Dryer M, Gil D & Comrie B (eds.)

World atlas of language structures. Oxford: Oxford University Press.
Hayward D (1984). Cushitic language studies 2: The Arbore language: a first investigation: including a vocabulary. Hamburg: Buske.
Hutchisson D (1986). ‘Sursurunga pronouns and the special uses of quadral number.’ In Wiesemann U (ed.) Pronominal systems (Continuum 5). Tübingen: Narr. 217–255.
Johansson S (1979). ‘American and British English grammar: an elicitation experiment.’ English Studies 60, 195–215.
Laidig W D & Laidig C J (1990). ‘Larike pronouns: duals and trials in a Central Moluccan language.’ Oceanic Linguistics 29, 87–109.
Levin M (2001). Lund studies in English 103: Agreement with collective nouns in English. Stockholm: Almqvist and Wiksell.
Lichtenberk F (1983). Oceanic linguistics special publication 18: A grammar of Manam. Honolulu: University of Hawaii Press.
Link G (1998). ‘Ten years of research on plurals – where do we stand?’ In Hamm F & Hinrichs E (eds.) Studies in linguistics and philosophy 69: Plurality and quantification. Dordrecht: Kluwer. 19–54.
Masica C P (1991). The Indo-Aryan languages. Cambridge: Cambridge University Press.
Moravcsik E (2003). ‘A semantic analysis of associative plurals.’ Studies in Language 27, 469–503.
Moravcsik E & Daniel M (2005). ‘Associative plural.’ In Haspelmath M, Dryer M, Gil D & Comrie B (eds.) World atlas of language structures. Oxford: Oxford University Press.
Mosel U & Hovdhaugen E (1992). Samoan reference grammar. Oslo: Scandinavian University Press.
Ojeda A E (1992). ‘The semantics of number in Arabic.’ In Barker C & Dowty D (eds.) The Ohio State University working papers in linguistics 40: Proceedings of the Second Conference on Linguistics and Semantic Theory, held at the Ohio State University May 1–3, 1992. Columbus: The Ohio State University Department of Linguistics. 303–325.
Rice K (1989). Mouton grammar library 5: A grammar of Slave. Berlin: Mouton de Gruyter.
Smith-Stark T C (1974). ‘The plurality split.’ In La Galy M W, Fox A R & Bruck A (eds.) Papers from the Tenth Regional Meeting, Chicago Linguistic Society, April 19–21. Chicago: Chicago Linguistic Society. 657–671.
Treis Y (2000). ‘NP coordination in Kxoe (Central Khoisan).’ Afrikanistische Arbeitspapiere 63, 63–92.


Numerals

J Gvozdanović, Universität Heidelberg, Heidelberg, Germany

© 2006 Elsevier Ltd. All rights reserved.

Numeral Systems: Their Structure and Development

Quantification is essential for our cognition of the world. To quantify is to establish the limits of an entity in a relative way. Each entity is defined by its limits and identified by the relativity of its limits. It is discrete. For example, a piece of sugar is a discrete entity, which can have further spatial properties not attributable to its source, the amorphous mass of sugar. Discreteness emerges from quantification. Discrete entities can be grouped together so as to form sets. Sets can be homogeneous or heterogeneous; they can contain many entities or be empty, yet they always have clear limits. By having clear limits, a set is an entity, too, an entity of a higher order. Set formation presupposes entities, themselves a cognitive product of either quantification or (if the entities have already been quantified) identification. The primary act of set formation involves abstraction from all but a classifying property. An example is found in tally sticks with notches on bones, which originated in Africa around 37 000 B.C. and in Europe some millennia later. The notches represent quantified entities; the tally stick contains a set (Figure 1).

Figure 1 African tally stick (37 000 B.C.), probably representing the moon cycle (29 notches).

Counting is based on classification, accumulation, and grouping. Cognitive units of counting are numbers; their symbolic expressions in language are numerals. Grouping of entities is an essential feature of number systems, with the effect of establishing bases (in addition to digits such as ‘1’). Across cultures, primitive counting on fingers and toes led to the establishment of the group of ‘5’ – symbolized by a hand – as a basis for further counting. Significantly, counting on fingers and toes operates with the base of ‘5’ (i.e., five fingers or a hand), the base of ‘10’ (i.e., two hands), and the base of ‘20’ (i.e., fingers and toes or hands and feet). Contemporary examples in active use are found in Papua New Guinea, where, e.g., the Aghu language symbolizes ‘5’ by the word for ‘hand’, whereas Kombai, Korowai, and Wambon symbolize ‘5’ by the word for ‘thumb’. Similar systems are found, e.g., in Mullukmulluk in Australia (cf. Greenberg, 1978: 257) and in Bantawa in east Nepal (where the numeral for ‘5’ is u.kchuk ‘one hand’, ‘10’ is hu.achuk ‘two hands’, and ‘20’ is rekachuk ‘four hands’; cf. Gvozdanović, 1985: 137). Many systems known as advanced started from a quinary base, for example, Sumerian from the 4th millennium B.C. in Mesopotamia (Table 1).

Table 1 Sumerian numerals above ‘5’ composed with the base ‘5’ (cf. Justus, 1996)

Number   Sumerian numeral
1        diš, aš
2        min
3        eš
4        limmu
5        ia
6        àš < *ia + aš (5 + 1)
7        imin < *ia + min (5 + 2)
8        ussu < *ia + eš (5 + 3)
9        ilimmu < *ia + limmu (5 + 4)
10       u

Sumerian numeral

1 2 3 4

disˇ, asˇ min esˇ limmu

5

ia

a

Cf. Justus (1996).

Number

6 7 8 9 10

Sumerian numeral

asˇ < *ia þ asˇ (5 þ 1) imin < *ia þ min (5 þ 2) ussu < *ia þ esˇ (5 þ 3) ilimmu < *ia þ limmu (5 þ 4) u

Numerals 651

Germanic [equalling ‘120’]). Smaller rudimentary bases have been attested in rural communities, such as base ‘4’ in Barbareño (a Chumash language of California; cf. Comrie, 1999: 89), in Tunisian, or in Celtic (cf. Justus, 1999: 61 f.). Yuman languages of Southern California, Arizona, and Baja California, next to Coahuilteco, a Hokan language, provide examples of limited use of the base ‘3’. A rudimentary base ‘2’ is found in the central Wintun branch of Californian Penutian (cf. Greenberg, 1978: 279). Rudimentary bases coexist with productive bases, usually of later origin. The lowest productive base by which all higher productive bases can be divided is called fundamental (cf. Greenberg, 1978: 270). Apart from the rare exceptional types mentioned above, languages are classified by their fundamental bases as quinary (with the fundamental base ‘5’), decimal (with the fundamental base ‘10’), vigesimal (with the fundamental base ‘20’), or sexagesimal (with the fundamental base ‘60’). The capacity to undergo recursive mathematical operations makes bases into the backbone of numeral systems. The relevant operations and their asymmetrical implications (in the sense of Greenberg’s hierarchies, 1978: 257–272) are the following:

• addition (e.g., French dix-neuf ‘ten-nine’, i.e., ‘19’);
• subtraction (e.g., Latin un-de-viginti ‘one-off-twenty’, i.e., ‘19’); it implies the existence of addition (every minuend is a base of the system or a multiple of a base; a subtrahend is never larger than the remainder);
• multiplication (e.g., English fifty [5 × 10]); it implies the existence of addition (a rare exception is found in a subgroup of the Yuman languages, affiliated to the Hokan stock, which has the first decade only, analyzed as ‘1’, ‘2’, ‘3’, ‘4’, ‘5’, ‘3 × 2’, ‘7’, ‘4 × 2’, ‘3 × 3’, ‘10’);
• division (e.g., Welsh hanner cant ‘half hundred’, i.e., ‘50’); it implies the existence of multiplication (division is expressed as multiplication by a fraction, and the denominator of the fraction is always ‘2’ or a power of ‘2’);
• (rarely) overcounting (e.g., older Danish halv-tredsinds-tyve ‘half-third-time-twenty’, i.e., ‘50’) and the ‘going-on’ operation (e.g., in Mayan and in Finno-Ugric, cf. Ostyak ‘18’ as ‘8 going-on 20’);
• exponentiation (e.g., Sumerian geš ‘60’, šàr ‘3600’, šàr-gal ‘šàr-big’ ‘216,000’).

Bases show up as significant boundaries in language decay, at least at intermediate stages. This may be illustrated by the Tibeto-Burman languages of Nepal (cf. Gvozdanović, 1985: 136), which have been overlayered by the (Indo-European) standard language Nepali, but have preserved their original numerals

either up to the fundamental base ‘5’ (symbolized by ‘one hand’ or by the Proto-Tibeto-Burman form ngaji) or up to ‘3’. The observed regularity demonstrates the cognitive relevance of numeral bases.
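Returning to the notion of the fundamental base defined above: the definition lends itself to a direct procedural statement. A minimal sketch in Python, under the assumption that a language’s productive bases are given as a set of integers (the function name and representation are ours):

```python
def fundamental_base(productive_bases):
    """Return the lowest productive base by which all higher productive
    bases can be divided (Greenberg, 1978: 270); None for an empty set."""
    for base in sorted(productive_bases):
        higher = [b for b in productive_bases if b > base]
        if all(b % base == 0 for b in higher):
            return base
    return None

# A decimal system with productive bases 10, 100, 1000:
print(fundamental_base({10, 100, 1000}))    # 10
# A sexagesimal system such as Sumerian (60, 3600, 216000):
print(fundamental_base({60, 3600, 216000})) # 60
```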

Morphology and Syntax of Numeral Expressions

Numeral morphosyntax shows a combination of mathematical and linguistic properties, relating to semantic, morphosyntactic, and pragmatic levels. For numeral morphosyntax Hurford (1987) formulated a so-called ‘packing-strategy’ principle that states that complex numeral expressions are formed by combining the highest-valued simpler numerals available (e.g., the French numeral for ‘70’ is soixante-dix [60 + 10], not *cinquante-vingt [50 + 20] or *quarante-trente [40 + 30]). The point at which a language changes methods for signaling addition is as a rule indicative of a base break. It is typical of Indo-European that such a break occurs between the -teens (cf. English thirteen–nineteen, expressed as digit + base) and the upper decades (starting from ‘20’, regionally limited also above ‘60’). Outside of Indo-European, pattern discontinuity may be illustrated by Hebrew, where the numeral plurals of ‘3’ to ‘9’ denote the corresponding decades ‘30’ to ‘90’, but the plural of ‘10’ denotes ‘20’. The next-higher base is ‘100’, for which Hebrew has a separate noun, mea. Numeral bases are treated differently from digits. In systems with more than one base, there is a base above which certain regularities hold, e.g., complex expressions consist of a product and a remainder, and the remainder never has a value larger than the next lower base of the total expression (cf. French soixante-douze ‘72, i.e., 60 + 12’ is possible because the next lower base is ‘20’). Numeral syntax is usually highly complex, and only a part of this complexity is directly connected with numeral morphology. Typologically interesting is the tendency of the numeral to precede the noun if the descriptive adjective is also in the preceding position, as opposed to variation in the position of the numeral when the adjective follows the noun. An example of the latter is found in Malay, in which the descriptive adjective regularly follows the noun, but the numeral and the classifier may be placed either before or after the noun (the latter option seems to be pragmatically marked).
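Hurford’s packing strategy can be read as a greedy search: to express a value, pick the highest-valued available numeral not exceeding it, then express the remainder the same way. The following sketch (Python) works over a toy additive inventory loosely modelled on French; the inventory and function name are ours, and real systems also need language-specific combination rules (multiplication, ordering, the formation of the -teens), which the sketch deliberately omits:

```python
# Toy inventory of simple numerals (values loosely modelled on French;
# illustrative only -- not a full description of the French system).
SIMPLE = {
    1: "un", 2: "deux", 3: "trois", 4: "quatre", 5: "cinq",
    6: "six", 7: "sept", 8: "huit", 9: "neuf", 10: "dix",
    11: "onze", 12: "douze", 20: "vingt",
    30: "trente", 40: "quarante", 50: "cinquante", 60: "soixante",
}

def pack(value):
    """Greedy packing: build the expression for value around the
    highest-valued simple numeral available, then pack the remainder."""
    if value in SIMPLE:
        return SIMPLE[value]
    base = max(v for v in SIMPLE if v <= value)
    return f"{SIMPLE[base]} {pack(value - base)}"

print(pack(70))  # 'soixante dix' (60 + 10), not *'cinquante vingt' (50 + 20)
print(pack(72))  # 'soixante douze' (60 + 12)
```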

Numeral morphosyntax exhibits a set of further regularities:

• the numeral ‘one’ is syntactically an attribute of its denominator;
• numerals expressing bases either are syntactic heads or share headedness with their denominator (cf. Hebrew, in which these numerals have inherent gender); in the development of Indo-European, the lower bases have arguably developed from adjectives to nouns;
• the numerals ‘two’, ‘three’, and ‘four’ have an intermediate status; they may differentiate gender; languages with grammatical dual have a special treatment of ‘two’;
• many languages have discontinuity of expression around the numerals ‘three’ and ‘four’, relating especially to distinct marking for gender, case, indefiniteness, or word order; these phenomena are presumably related to short-term memory limitations (cf. Cowan, 2001);
• in rule-governed variation between the singular and the plural with numerals, the singular is favored with higher numbers, in measure constructions, in indefinite constructions, and with nouns that are inanimate or impersonal (Greenberg, 1978: 283);
• the order noun-numeral is favored in indefinite and approximative constructions (Greenberg, 1978: 284);
• predicate agreement with quantified subjects above the cut-off point around ‘three’ or ‘four’ mentioned above is sensitive to animacy and topicality: singular agreement is preferred with inanimate and non-topical subjects, plural agreement with animate and topical subjects.

Although quantification takes effect in the logical structure of language, its morphology and syntax depend in part on pragmatics. This shows that quantification permeates all the central levels of language.

See also: Classifiers and Noun Classes; Concepts; Grammatical Meaning; Mass Expressions; Number; Partitives; Plurality; Quantifiers; Vagueness.

Bibliography

Coleman R (1992). ‘Italic.’ In Gvozdanović J (ed.) Indo-European numerals. Berlin: Mouton de Gruyter.

Comrie B (1999). ‘Haruai numerals and their implications for the history and typology of numeral systems.’ In Gvozdanović J (ed.) Numeral types and changes worldwide. 81–94.
Corbett G G (1983). Hierarchies, targets and controllers: agreement patterns in Slavic. London: Croom Helm.
Corbett G G (1993). ‘The head of Russian nominal expressions.’ In Corbett G G, Fraser N M & McGlashan S (eds.) Heads in grammatical theory. Cambridge: Cambridge University Press. 11–35.
Cowan N (2001). ‘The magical number 4 in short-term memory: a reconsideration of mental storage capacity.’ Behavioral and Brain Sciences 24(1), 87–114.
Greenberg J H (1978). ‘Generalizations about numeral systems.’ In Greenberg J H (ed.) Universals of human language, vol. 3: Word structure. Stanford: Stanford University Press. 249–295.
Gvozdanović J (1985). Language system and its change: on theory and testability. Berlin: Mouton de Gruyter.
Gvozdanović J (ed.) (1992). Indo-European numerals. Berlin: Mouton de Gruyter.
Gvozdanović J (ed.) (1999). Numeral types and changes world-wide. Berlin: Mouton de Gruyter.
Gvozdanović J (1999). ‘Types of numeral changes.’ In Gvozdanović J (ed.). 95–111.
Halle M (1994). ‘The morphology of numeral phrases.’ In Avrutin S, Franks S & Progovac L (eds.) Formal approaches to [Slavic] linguistics. Ann Arbor: Michigan Slavic Publications. 176–215.
Hurford J R (1975). The linguistic theory of numerals. Cambridge: Cambridge University Press.
Hurford J R (1987). Language and number: the emergence of a cognitive system. Oxford: B. Blackwell.
Ifrah G (1981). Histoire Universelle des Chiffres. Paris: Seghers.
Justus C F (1996). ‘Numeracy and the Germanic upper decades.’ Journal of Indo-European Studies 24, 45–80.
Justus C F (1999). ‘Pre-decimal structures in counting and metrology.’ In Gvozdanović J (ed.). 55–79.
Menninger K (1934). Zahlwort und Ziffer. Breslau: Ferdinand Hirt.
Neugebauer O (1947). ‘Studies in ancient astronomy. VIII. The water clock in Babylonian astronomy.’ Isis 37, 1, 37–43.
Szemerényi O (1960). Studies in the Indo-European system of numerals. Heidelberg: Winter.
Winter W (1992). ‘Some thoughts about Indo-European numerals.’ In Gvozdanović J (ed.). 11–28.

O

Onomasiology and Lexical Variation

D Geeraerts, University of Leuven, Leuven, Belgium

© 2006 Elsevier Ltd. All rights reserved.

The Scope of Onomasiological Research

Although it has hardly found its way to the canonical English terminology of linguistics, the distinction between onomasiology and semasiology is a traditional one in Continental structural semantics and the Eastern European tradition of lexicological research. As Baldinger puts it, ‘Semasiology . . . considers the isolated word and the way its meanings are manifested, while onomasiology looks at the designations of a particular concept, that is, at a multiplicity of expressions which form a whole’ (1980: 278). The distinction between semasiology and onomasiology, in other words, equals the distinction between meaning and naming: semasiology takes its starting-point in the word as a form, and charts the meanings that the word can occur with; onomasiology takes its starting-point in a concept, and investigates by which different expressions the concept can be designated, or named. To grasp the range of onomasiology, one should realize that the two descriptions of onomasiology that Baldinger mentions are not exactly equivalent. On the one hand, studying ‘a multiplicity of expressions which form a whole’ lies at the basis of the traditional, structuralist conception of onomasiology, i.e., the study of semantically related expressions (as in lexical field theory, or the study of the lexicon as a relational network of words interconnected by links of a hyponymical, antonymical, synonymous nature, etc.). On the other hand, studying ‘the designations of a particular concept’ opens the way for a contextualized, pragmatic conception of onomasiology, involving the actual choices made for a particular name as a designation of a particular concept or a particular referent. This distinction can be further equated with the distinction between an investigation of structure, and an investigation of use, or between an investigation of langue and an investigation of parole. The structural conception deals with sets of related expressions, and basically asks the question: what are the relations among the alternative expressions? The

pragmatic conception deals with the actual choices made from among a set of related expressions, and basically asks the question: what factors determine the choice for one or the other alternative? This second, usage-oriented (or if one wishes, pragmatic) form of onomasiology is related to two specific points of interest: differences of structural weight that may appear within onomasiological structures, and onomasiological change.

1. The importance of structural weight may be appreciated by considering semasiological structures first. Qualitative aspects of semasiological structure involve the following questions: which meanings does a word have, and how are they semantically related? The outcome is an investigation into polysemy, and the relationships of metonymy, metaphor, etc. that hold between the various readings of an item. Quantitative aspects of lexical structure, on the other hand, involve the question whether all the readings of an item carry the same structural weight. The semasiological outcome of a quantitative approach is an investigation into prototypicality effects of various kinds: prototypicality research is basically concerned with differences of structural weight among the members or the subsenses of a lexical item. The qualitative perspective is a much more traditional one in semasiological lexicology than the quantitative one, which was taken up systematically only recently, with the birth and development of prototype theory. The distinction between the qualitative and the quantitative aspects of semantic structure (as we may loosely call them) can be extrapolated to onomasiology. The qualitative question then takes the following form: what kinds of (semantic) relations hold between the lexical items in a lexicon (or a subset of the lexicon)? The outcome, clearly, is an investigation into various kinds of lexical structuring: field relationships, taxonomies, lexical relations like antonymy and so on. The quantitative question takes the following form: are some categories cognitively more salient than others; that is, are there any differences in the probability that one category rather than another will be chosen for designating things out in the world? Are certain lexical categories more
obvious names than others? Again, this type of quantitative research is fairly new. The best-known example is probably Berlin and Kay’s basic level model (Berlin and Kay, 1969; Berlin, 1978), which involves the claim that a particular taxonomical level constitutes a preferred, default level of categorization. The basic level in a taxonomy is the level that is (in a given culture) most naturally chosen as the level where categorization takes place; it has, in a sense, more structural weight than the other levels.

2. The distinction between a structure-oriented and a usage-oriented form of onomasiology extends naturally towards the study of onomasiological change. On the one hand, when we think of onomasiological change in a structural way, we will be basically interested in what may be called ‘lexicogenesis’ – the mechanisms for introducing new pairs of word forms and word meanings. These involve all the traditional mechanisms that introduce new items into the onomasiological inventory of a language, like word formation, word creation (the creation of entirely new roots), borrowing, blending, truncation, ellipsis, folk etymology, and others. Crucially, the semasiological extension of the range of meanings of an existing word is itself one of the major mechanisms of onomasiological change – one of the mechanisms, that is, through which a concept to be expressed gets linked to a lexical expression. In this sense, the study of onomasiological changes is more comprehensive than the study of semasiological changes, since it encompasses the latter (while the reverse is obviously not the case). On the other hand, if we think of onomasiological change in a usage-oriented way, the lexicogenetic perspective inevitably has to be supplemented with a sociolexicological perspective – with the study, that is, of how onomasiological changes spread through a speech community. Beyond merely identifying onomasiological mechanisms in the traditional etymological vein, we need to study how these mechanisms are put at work and how they may lead to overall changes in the habits of the language community. Classifications of lexicogenetic mechanisms merely identify the space of possible or virtual onomasiological changes; sociolexicology studies the actual realization of the changes.

The Contribution of Various Traditions of Research

The various traditions of lexical semantics have contributed in different ways to the study of onomasiology. The major traditions are the following:

• prestructuralist semantics, as dominant between 1870 and 1930, and as represented by the work of Paul, Bréal, Darmesteter, Wundt, and many others;
• structuralist semantics, as dominant between 1930 and 1960, and as represented by the work of Trier, Weisgerber, Coseriu, Lyons, and lexical field theorists at large;
• generativist and neogenerativist semantics, as originated in the 1960s, with the work of Katz and Fodor;
• cognitive semantics, as originated in the 1980s, and as represented by the work of Lakoff, Langacker, Talmy, and others.

Of these four traditions, all except the generativist/neogenerativist have made noteworthy contributions to the field of onomasiology.

1. Prestructuralist semantics – apart from coining the term onomasiology itself (Zauner, 1902) – has introduced some of the basic terminology for describing lexicogenetic mechanisms. Although basically concerned with semasiological changes, the major semasiological treatises from Bréal and Paul to Stern and Carnoy do not restrict themselves to strictly semasiological mechanisms like metaphor and metonymy, but also devote attention to mechanisms of onomasiological change like borrowing or folk etymology. (Compare Quadri [1952] for an overview of the tradition.) While the distinction between the two perspectives is treated more systematically in the structuralist era, attempts to classify lexicogenetic mechanisms continue to the present day. Different proposals may be found in the work of, among others, Dornseiff (1966), Algeo (1980), Tournier (1985), and Zgusta (1990).

2. The crucial contribution of structuralist semantics to onomasiology is its insistence, in the wake of De Saussure himself, on the distinction between semasiology and onomasiology. In the realm of diachronic linguistics, this division shows up, for instance, in Ullmann’s classification of semantic changes (1962). More importantly, the bulk of (synchronic) structuralist semantics is devoted to the identification and description of different onomasiological structures in the lexicon, such as lexical fields, taxonomical hierarchies, lexical relations like antonymy and synonymy, and syntagmatic relationships.

3. There are three important contributions that cognitive semantics has so far made to onomasiology. First, cognitive semantics has drawn the attention to a number of qualitative onomasiological structures that did not come to the fore in the structuralist tradition. This shift holds true, on the one hand, for the development of the Fillmorean frame model of semantic analysis (Fillmore, 1977, Fillmore and Atkins, 1992). Frames constitute a specific type of syntagmatic structure in the lexicon that received

Onomasiology and Lexical Variation 655 Table 1 A conceptual map of onomasiological research

Synchronic structures Mechanisms and processes of change

Qualitative approaches: what are the relevant phenomena?

Quantitative approaches: which phenomena carry more weight?

Research into lexical structures: structuralist semantics (plus cognitive semantics) Research into lexicogenetic mechanisms: prestructuralist

Research into onomasiological salience: cognitive semantics

Research into preferential lexicogenetic mechanisms: cognitive semantics

semantics

little or no attention in the structuralist tradition. On the other hand, the seminal introduction of generalized metaphor research in the line of Lakoff and Johnson (1980) can be seen as the identification of figurative lexical fields: the ensembles of near-synonymous metaphors studied as conceptual metaphors constitute fields of related metaphorical expressions (just like ordinary semantic fields consist of ensembles of near-synonymous lexical items). Second, cognitive semantics introduces a quantitative perspective into the study of onomasiological structures. As mentioned above, basic level research in the line of Berlin and Kay introduces the notion of salience into the description of taxonomical structures: basic levels are preferred, default levels of categorization. Third, cognitive semantics introduces a quantitative perspective into the study of lexicogenetic mechanisms. Within the set of lexicogenetic mechanisms, some could be more salient (i.e., might be used more often) than others. Superficially, this increased use could involve, for instance, an overall preference for borrowing rather than morphological productivity as mechanisms for introducing new words, but from a cognitive semantic perspective, there are other, more subtle questions to ask: do the way in which novel words and expressions are being coined, reveal specific (and possibly preferred) ways of conceptualizing the onomasiological targets? For instance, do specific cultures have dominant metaphors for a given domain of experience (and could such dominant metaphors perhaps be universal – see Ko¨vecses, 1990)? In addition, cognitive semantics is gradually developing a pragmatic, usage-oriented form of onomasiological research in which the various factors that influence the onomasiological choice of a category for talking about a given referent, are being investigated. It has been shown, for instance (Geeraerts et al., 1994, 1999), that the selection of a name for a referent appears to be determined by the semasiological salience of the referent, i.e., the degree of prototypicality of the referent with regard to the semasiological structure of the category, by the onomasiological salience of the category represented by the expression, and by contextual features of a classical sociolinguistic

and geographical nature, involving the competition between different language varieties.

A Conceptual Map of Onomasiology

To conclude, we can summarize the relationship between the various aspects of onomasiology in a single comprehensive schema, given in Table 1. Filling in the chart with the names of the research traditions that have made a dominant contribution to each of the various subfields schematizes the progressive development of onomasiology. The historical development from prestructuralist semantics over structuralist semantics to cognitive semantics implies a gradual enlargement of the field of onomasiological research, from an interest in lexicogenetic mechanisms over research into lexical structures (fields and others) to various quantitative approaches taking into account the difference in salience of the onomasiological phenomena.

See also: Cognitive Semantics; Definition in Lexicology; Disambiguation; Evolution of Semantics; Frame Semantics; General Semantics; Ideational Theories of Meaning; Idioms; Jargon; Lexical Fields; Lexical Semantics; Lexicology; Lexicon: Structure; Meaning, Sense, and Reference; Metaphor and Conceptual Blending; Metonymy; Neologisms; Pre-20th Century Theories of Meaning; Proper and Common Names, Impairments of; Prototype Semantics; Sound Symbolism; Stereotype Semantics.

Bibliography

Algeo J (1980). 'Where do all the new words come from?' American Speech 55, 264–277.
Baldinger K (1980). Semantic theory. Oxford: Basil Blackwell.
Berlin B (1978). 'Ethnobiological classification.' In Rosch E & Lloyd B (eds.) Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum. 9–26.
Berlin B & Kay P (1969). Basic color terms: their universality and evolution. Berkeley: University of California Press.
Dornseiff F (1966). Bezeichnungswandel unseres Wortschatzes. Ein Blick in das Seelenleben der Sprechenden. Lahr/Schwarzwald: Moritz Schauenburg Verlag.
Fillmore C (1977). 'Scenes-and-frames semantics.' In Zampolli A (ed.) Linguistic structures processing. Amsterdam: North Holland Publishing Company. 55–81.
Fillmore C & Atkins B (1992). 'Towards a frame-based lexicon: the semantics of risk and its neighbors.' In Lehrer A & Kittay E (eds.) Frames, fields, and contrasts: new essays in semantic and lexical organization. Hillsdale, NJ: Lawrence Erlbaum. 75–102.
Geeraerts D, Grondelaers S & Bakema P (1994). The structure of lexical variation. Meaning, naming, and context. Berlin: Mouton de Gruyter.
Geeraerts D, Grondelaers S & Speelman D (1999). Convergentie en divergentie in de Nederlandse woordenschat. Amsterdam: Meertens Instituut.
Kövecses Z (1990). Emotion concepts. New York: Springer.
Lakoff G & Johnson M (1980). Metaphors we live by. Chicago: The University of Chicago Press.
Quadri B (1952). Aufgaben und Methoden der onomasiologischen Forschung. Eine entwicklungsgeschichtliche Darstellung. Bern: Francke Verlag.
Tournier J (1985). Introduction à la lexicogénétique de l'anglais contemporain. Paris: Champion / Genève: Slatkine.
Ullmann S (1962). Semantics. Oxford: Basil Blackwell.
Zauner A (1902). Die romanischen Namen der Körperteile. Eine onomasiologische Studie. Ph.D. thesis, Universität Erlangen. Published in Romanische Forschungen 14, 339–530 (1903).
Zgusta L (1990). 'Onomasiological change.' In Polomé E (ed.) Research guide on language change. Berlin: Mouton de Gruyter. 389–398.

Operators in Semantics and Typed Logics

R T Oehrle, Pacific Grove, CA, USA

© 2006 Elsevier Ltd. All rights reserved.

A 'semantic operator' maps one or more semantic entities to a resulting semantic entity a. This broad characterization subsumes such basic notions as predication, modification, coordination, and quantification. One may restrict the characterization so that the semantic entity a is the semantic value corresponding to a syntactic clause, or to a syntactic clause containing one or more gaps (in some sense), or in other ways. We here consider semantic operators from the perspective of the family of formal systems known as the l-calculus. One reason to do so stems from the historical prominence of the l-calculus in semantic investigations. A more forward-looking reason is that within this family, many questions arise on formal grounds that are directly relevant to linguistic analysis, not only in semantics, but in all linguistic dimensions that involve compositional operations.

The historical roots of the l-calculus go back at least to Gottlob Frege (1879), who explicitly connects 'abstraction' with functions in the Begriffsschrift, §9. This intuitive idea was formalized by Alonzo Church (1941), and the introductory chapter of Church (1956) is in part an excursus on Frege's insights and methods. Contemporaneously, a formal system called 'combinatory logic' was introduced by Haskell Curry (see Curry and Feys, 1958; Curry, 1977), anticipated in earlier work by Moses Schönfinkel (1924). These systems have come to play a central role in a variety of basic problems in logic and computation: decision problems, recursive
function theory, combinatory reduction systems, and theoretical computer science. They offer insights into a broad range of practical and theoretical issues in linguistics as well. Our focus here will be on the variation introduced by different disciplines of resource sensitivity and by different typing systems, and on how these intrinsic parameters of variation connect with linguistic questions.

l-terms

There are a variety of presentations of the system of l-terms. (An excellent text is the book of Hindley and Seldin (1986); for a very readable introduction to connections with the theory of computation and the denotational semantics of programs, see Stoy (1981); Barendregt (1984) offers a comprehensive analysis; for other useful perspectives, see the books of Girard, Lafont, and Taylor (1989) and Krivine (1993). We draw freely on these works in the exposition that follows, and the interested reader will find that the expositions of the subject they contain are well worth scrutiny.)

The most perspicuous presentation is built up from a set V of variables by two operations: 'application,' which combines two terms M and N to make the term (MN); and 'abstraction,' which combines a variable x and a term M to make the term (lx.M), in which lx is called the 'abstraction operator' and M is its 'scope.' We use x, y, z, . . . for variables and assume that the variables represented by different letters are distinct unless explicitly identified. Similarly, we use capital roman letters (M, N, P, Q, R, S, . . .) to represent l-terms. The notation M ≡ N means that M and N are the same l-term. To reduce parentheses, we
write MN1 . . . Nk for (. . .(MN1) . . . Nk) and (nesting in the opposite direction) lx1 . . . xk.M for (lx1.(. . .(lxk.M). . .)).

An occurrence of a variable x in a term M is said to be 'bound' when it lies in the scope of an abstraction operator of the form lx; otherwise, it is 'free.' Thus the variable x is free in the term x and in the term yx, but not in the term lx.x. And the variable z has both bound and free occurrences in (lz.yz)z (while the occurrence of y is free). We write FV(M) to denote the set of variables with free occurrences in the term M.

The intended interpretation of an application MN is that it represents the application of the function M to the argument N. The intended interpretation of an abstraction lx.M is that it forms that function which, when applied to an argument N, returns the value M when free occurrences of x within M are interpreted as N. Thus, lx.x represents the 'identity' function: for any argument N, the intended interpretation of (lx.x)N is N itself. The term lx.y represents the 'constant' function whose value is always y: if we evaluate (lx.y)N by substituting N for free occurrences of x in y, the result is always y (since there are no free occurrences of x within the atomic variable y). Further examples: lxy.xy applies its first argument to its second argument (evaluation applies in the order operator–argument); lyx.xy applies its second argument to its first argument (evaluation applies in the order argument–operator); lyzx.zxy passes two arguments to z in reverse order (roughly as in English-like passive constructions: evaluation applies the second argument (z) to the third and applies the result to the first); lxzy.z(yx) applies its second argument to the result of applying its third argument to its first argument (roughly as in English subject-raising constructions); the similar term lzyx.z(yx) acts on its first two arguments to yield lx.z(yx), the functional composition of z and y; lx.xx applies its argument to itself.
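
These definitions translate directly into a language with algebraic data types. The following Haskell fragment is a minimal sketch of our own (it is not part of the original article, and identifiers such as Term and freeVars are illustrative choices), representing l-terms and computing FV(M):

  import Data.Set (Set)
  import qualified Data.Set as Set

  -- Untyped l-terms: variables, applications (MN), abstractions (lx.M).
  data Term = Var String
            | App Term Term
            | Lam String Term
            deriving (Eq, Show)

  -- FV(M): the set of variables with free occurrences in M.
  freeVars :: Term -> Set String
  freeVars (Var x)   = Set.singleton x
  freeVars (App m n) = Set.union (freeVars m) (freeVars n)
  freeVars (Lam x m) = Set.delete x (freeVars m)

  -- (lz.yz)z: z has both a bound and a free occurrence; y is free.
  -- freeVars (App (Lam "z" (App (Var "y") (Var "z"))) (Var "z"))
  --   == Set.fromList ["y","z"]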

Equivalent Terms and Alphabetic Variance

From the perspective of the intended interpretation, many distinct l-terms in the presentation above are 'equivalent': lx.x represents the identity function, but ly.y and lz.z do so as well. Corresponding to this interpretive equivalence among such terms is a syntactic equivalence called a 'change of bound variables' or 'alphabetic variance' or 'a-conversion.' Intuitively, we start with a term of the form lx.M (or the occurrence of such a term in a larger term), replace the variable x in the abstraction operator with another variable (y, say), and replace all free occurrences of x in M with y (yielding a term we will denote by [y/x]M). But if this change of bound variables is to capture the correct notion of equivalence, it is essential to avoid two kinds of unwanted clashes. In the first, starting with a term lx.M and changing to a term ly.[y/x]M, we must ensure that M contains no free occurrences of y that would become bound in the shift to ly.[y/x]M. For instance, lx.y is the constant function that returns y for any argument, but the putative alphabetic variant ly.[y/x]y ≡ ly.y is the identity function, which is hardly equivalent (on the intended interpretation) to the function that yields y for any argument. Generalizing, we allow a change in bound variables from lx.M to ly.[y/x]M only when y is not in FV(M), a restriction that prevents this form of inadvertent binding from arising. The second kind of clash in the contemplated change from lx.M to ly.[y/x]M arises when M contains a free occurrence of x within the scope of an abstraction operator of the form ly: substituting y for free occurrences of x in M changes the essential structure of M, since free occurrences of x within the scope of ly are transformed by the substitution into bound occurrences of y. To see how to prevent such cases of inadvertent binding, let us walk through the several inductive cases of the substitution definition.

Substitution

Given a variable x and two terms, N and M, we define substitution of N for free occurrences of x in M in a way that preserves the essential structure of the terms in question. The definition is inductive on the structure of M. If M ≡ x, then [N/x]M ≡ N (substitute N for x); if M is an atom distinct from x, then [N/x]M ≡ M (no occurrence of x to substitute for!); if M ≡ PQ, then [N/x]M ≡ [N/x]P[N/x]Q (distribute the substitution across the application operation to the simpler arguments, for which, by the inductive assumption, substitution is defined); if M ≡ ly.M′, then there are three subcases to consider: if y = x, then there are no free occurrences of x in M ≡ ly.M′ and [N/x]M ≡ M; otherwise, if y ≠ x, then [N/x]ly.M′ ≡ ly.[N/x]M′ if y ∉ FV(N) (since no inadvertent bindings of y can arise by moving N inside the scope of ly in this case), but when y ∈ FV(N), then [N/x]ly.M′ ≡ lz.[N/x][z/y]M′, where z is not free in M′ and not free in N. The final case addresses both forms of inadvertent binding: z must be chosen so that it is not free in M′ (avoiding the change-of-bound-variable problems) and is not free in N (so that no free variable in N can be bound in the course of inductive substitution). (A practical way to avoid the clashes that are possible under substitution is to adopt what Barendregt [1984: §2.1.13 and Appendix C] calls the 'variable convention,' which
requires that the set of bound variables and the set of free variables in any context are disjoint: under this assumption, we never need to consider alphabetic variants and the difficult and unintuitive final clause of the substitution definition above is simply preempted.)
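
The inductive definition above can be rendered as a capture-avoiding substitution function. The sketch below is again our own illustration (reusing Term and freeVars from the earlier fragment); it generates the fresh variable z of the final clause by trying numerical suffixes, one simple strategy among several:

  -- [N/x]M: capture-avoiding substitution, case by case as in the text.
  subst :: String -> Term -> Term -> Term
  subst x n (Var y)
    | y == x    = n                    -- M is x itself: substitute
    | otherwise = Var y                -- an atom distinct from x
  subst x n (App p q) = App (subst x n p) (subst x n q)
  subst x n (Lam y m)
    | y == x                       = Lam y m            -- no free x in M
    | Set.notMember y (freeVars n) = Lam y (subst x n m)
    | otherwise                    = Lam z (subst x n (subst y (Var z) m))
    where
      -- a fresh z: free neither in M' nor in N (and distinct from x)
      z = head [ v | i <- [0 :: Int ..]
                   , let v = y ++ show i
                   , v /= x
                   , Set.notMember v (freeVars m)
                   , Set.notMember v (freeVars n) ]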

b-conversion
Substitution plays a critical role as well in the characterization of b-conversion, a relation that models the evaluation of an application. We call an application of the form (lx.M)N a 'b-redex.' The 'contractum' of the redex (lx.M)N is the term [N/x]M. If a term P contains a particular occurrence of a b-redex Q and the term P′ is the result of substituting the contractum of Q for that occurrence, we say that P b-contracts in one step to P′ and write P →1 P′. We write ↠ for the reflexive transitive closure of one-step reduction. A term that contains no b-redexes is said to be in 'b-normal form.' These definitions raise a host of questions: Does every non-normal term reduce to a normal form? Can a term containing more than one b-redex reduce to different normal forms? If a term has a normal form, does every sequence of b-reductions result in this normal form?

In the system of l-terms defined thus far, not every term reduces to a normal form. For example, consider the 'self-applicator' function lx.xx, which takes an argument a and applies a to itself, yielding aa; if we apply the self-applicator to itself, we have (lx.xx)(lx.xx), which b-reduces to (lx.xx)(lx.xx) (same!). And we have (lx.((xx)x))(lx.((xx)x)) →1 ((lx.((xx)x))(lx.((xx)x)))(lx.((xx)x)), where further steps of b-reduction lead to incremental growth. In these l-terms, then, b-reduction does not reduce the number of l-abstraction operators the term contains, and these terms have no b-normal form.

On the other hand, if a l-term in this system has a normal form, the normal form is unique. This is a consequence of the celebrated 'Church-Rosser theorem,' which states that if a l-term P is such that P ↠ Q and P ↠ R, then there is a l-term S such that Q ↠ S and R ↠ S. By this theorem, it is impossible for Q and R to be distinct b-normal forms, since if Q and R are both normal (and thus cannot undergo further b-reductions, since they contain no b-redexes), Q ↠ S only if S ≡ Q and R ↠ S only if S ≡ R, so that by the properties of ≡ we have Q ≡ R, contradicting the premise that they are distinct.

Finally, consider the l-term T, with T ≡ (ly.z)((lx.xx)(lx.xx)), which applies the constant function ly.z (which returns z, by b-reduction, for any argument) to the self-applicator applied to itself. The term as a whole is a b-redex, but so is its argument. Consequently, the term offers alternative one-step b-reductions: one of them selects the b-redex whose abstraction operator is ly and yields the value z, which is b-normal; the other selects the argument term ((lx.xx)(lx.xx)), for which, as we have just seen, the input of b-reduction is identical to its output. Thus, T is a term that has a normal form (namely, z), but supports an infinite sequence of b-reductions (choose the second alternative at each step) that doesn't culminate in a b-normal form (since it doesn't culminate at all!). A system with these properties is called 'weakly normalizing': for any term with a b-normal form, its b-normal form is unique; but applying a series of b-reductions to a term need not culminate in its normal form, even when this normal form exists.
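
This reduction behavior can be made concrete. The following sketch (ours, reusing Term and subst from the fragments above) implements leftmost-outermost, i.e., normal-order, one-step reduction, which finds the normal form z of the term T even though the self-applicator loops:

  -- One normal-order step: contract the leftmost-outermost b-redex
  -- (lx.M)N to [N/x]M; Nothing means the term is already b-normal.
  step :: Term -> Maybe Term
  step (App (Lam x m) n) = Just (subst x n m)
  step (App m n)         = case step m of
                             Just m' -> Just (App m' n)
                             Nothing -> App m <$> step n
  step (Lam x m)         = Lam x <$> step m
  step (Var _)           = Nothing

  -- Iterate to b-normal form (diverges on terms with no normal form).
  normalize :: Term -> Term
  normalize m = maybe m normalize (step m)

  selfApp, omega, tTerm :: Term
  selfApp = Lam "x" (App (Var "x") (Var "x"))   -- lx.xx
  omega   = App selfApp selfApp                 -- reduces to itself
  tTerm   = App (Lam "y" (Var "z")) omega       -- T = (ly.z)((lx.xx)(lx.xx))

  -- normalize tTerm == Var "z", because normal order contracts the head
  -- redex first; always choosing the argument redex instead would loop.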

Alternative Presentations

The presence of an overt variable in each abstraction operator of the form lx makes the binding relation between the operator and the free occurrences of x within its scope relatively transparent. But as we have seen, this presentation provides many terms representing the same intended value (such as lx.x and lz.z, with identical behavior with respect to b-reduction).

Congruence

One way to avoid this superfluity is to borrow a standard algebraic technique, regarding a-conversion as an equivalence relation and then showing that it is in fact a 'congruence' relation by proving that the basic operations on l-terms respect it – that is, that if M and M′ are alphabetic variants and N and N′ are alphabetic variants, so are the applications MN and M′N′, the abstractions lx.M and lx.M′, and the substitutions [N/x]M and [N′/x]M′.

Nameless Terms

A more direct notational attack would be to replace free variables and their corresponding abstraction operators with a comparable notation providing a representation for every l-term in which all alphabetic variants are represented by the same term. A system of this kind was devised by de Bruijn (1972). In this system, the standard set of variables represented by x, y, z, . . . is replaced by the set of natural numbers 1, 2, 3, . . ., and the standard abstraction operator consisting of l followed by a standard variable (yielding such operators as lx and ly and lz) is replaced by the simple form l. Following Barendregt’s presentation, we define the system of L*-terms inductively as follows: any natural number is a L*-term; if P and Q are L*-terms, so is (PQ); and for the abstraction step,
if M is a L*-term, so is lM. To interpret the result of adding the abstraction operator to a term, note that any occurrence of a variable k (i.e., a natural number) will be within the scope of n-many abstraction operators (for n a non-negative integer); if k > n, k is to be interpreted as a free variable; if k ≤ n, then k is to be interpreted as a bound variable, bound by the k-th abstraction operator above it. On this interpretation, the L* variables 1 and 2 are on a par with the standard x and y as free variables. But the identity function, represented equivalently in the standard form by lx.x and ly.y, is represented in the de Bruijn notation uniquely by l1 (since the variable 1 is bound by the first occurrence of the operator l in whose scope it lies). If we replace the number 1 in l1 by any other natural number, the result is a constant function: thus, for the term l2, the variable 2 is free; so this L*-term represents a constant function that returns 2 for any argument, just as the standard l-term ly.z stands for the constant function that returns z for any argument. But whereas there are many equivalent standard l-term representations of this function (ly.z, lu.z, lw.z, . . .), there is just one corresponding L*-term. Although there is no formal operation in this system corresponding to a-conversion, an analog of b-reduction is still needed to evaluate b-redexes, and the relevant notion of substitution has an arithmetical flavor, since it is necessary (inter alia) to increment the variable being substituted for each time the substitution operator traverses an occurrence of the l-operator.
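
De Bruijn's device is easy to state as a conversion from named terms, given a listing of the free variables. In the following sketch (illustrative only, with our own names; indices start at 1, as in the text, and the environment lists binders innermost first):

  -- Nameless terms: a variable is an index k, bound by the k-th
  -- enclosing abstraction; larger indices count as free.
  data DB = V Int | A DB DB | L DB
    deriving (Eq, Show)

  -- Convert a named Term; env names the free variables.
  toDB :: [String] -> Term -> DB
  toDB env (Var x)   = case lookup x (zip env [1 ..]) of
                         Just k  -> V k
                         Nothing -> error ("not in scope: " ++ x)
  toDB env (App m n) = A (toDB env m) (toDB env n)
  toDB env (Lam x m) = L (toDB (x : env) m)

  -- Alphabetic variants coincide: lx.x and ly.y both map to L (V 1).
  -- toDB [] (Lam "x" (Var "x"))    == L (V 1)
  -- toDB ["z"] (Lam "x" (Var "z")) == L (V 2)   -- a constant function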

Combinatory Logic

A more radical idea, going back to Schönfinkel (1924) and developed much more fully by Curry and his collaborators and students, is to dispense with the abstraction operator altogether and replace it with a set of operators, a set of variables, and a set of postulates, which together form 'Combinatory Logic.' Surprisingly, two operators suffice. These operators are called K (Konstant) and S (Substitution), and their properties are defined by the equations Kxy = x and Sxyz = xz(yz). In other words, K behaves like the l-term lu.lv.u (returning the first of its two arguments and throwing the second away), while S behaves like the l-term lu.lv.lw.(u(w))(v(w)). Consider now how to evaluate the term SKKx: by the definition of S, this yields the term Kx(Kx), which yields, by the definition of K, x itself. Thus, the combinator SKK behaves like the l-term lx.x. What about other l-terms? We proceed inductively (inside out), using the following clauses: (1) lx.x = I = SKK; (2) lx.M = KM if x ∉ FV(M) (which covers lx.y as a special case); (3) lx.MN = S(lx.M)(lx.N). As an example, take the term

lx.ly.yx, which maps an element x to its 'type-lifted' analogue. Eliminating innermost occurrences of the l-operator first, we have ly.yx = S(ly.y)(ly.x) = SI(Kx). Substituting this result for ly.yx in lx.ly.yx yields lx.SI(Kx), which translates to S(lx.SI)(lx.Kx), which in turn translates (in several steps) to S(K(SI))(S(KK)I) (which is further reducible if we would like to replace I by SKK). If we apply this to an entity f, we obtain (K(SI)f)((S(KK)I)f), which itself reduces to (SI)(((KK)f)(If)). Applying this to a second argument g yields Ig((((KK)f)(If))g), which reduces to g(Kfg) = gf.

From a linguistic point of view, it is noteworthy that the 'operator / bound variable' syntax of the standard form of the l-calculus is not required to express its fundamental concepts. In particular, in the standard presentation of l-terms, the 'operator / bound variable' relation is unconstrained: we can add, for any variable x, the abstraction operator lx to any term and form a term. As a result, discontinuous (long-distance) binding is built into the syntax of these standard l-terms from the outset. In the theory of combinators, all communication between arguments and operators is local (just as it is in the standard account of b-conversion). Local control opens the way to more sensitive discrimination in dealing with discontinuous dependencies. Early transformational accounts of discontinuous dependencies treated them in a way that allowed movement across an 'essential variable.' Non-transformational accounts of discontinuous dependencies – in GPSG, Combinatory Grammar, HPSG, LFG, Type Logical Grammar – have all tacitly adopted a recursive specification of discontinuity, and it is explicitly recursive in the LFG-centric idea of 'functional uncertainty' and the modally-licensed postulates of Type Logical Grammar.
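
The translation clauses (1)-(3) can also be implemented. The sketch below is our own illustration (abstractVar is an invented name for the bracket-abstraction operation); it reproduces the worked example, translating lx.ly.yx inside out:

  -- Combinatory terms over S and K (I is definable as SKK).
  data CL = S | K | CVar String | CApp CL CL
    deriving (Eq, Show)

  occurs :: String -> CL -> Bool
  occurs x (CVar y)   = x == y
  occurs x (CApp m n) = occurs x m || occurs x n
  occurs _ _          = False

  -- Bracket abstraction, clauses (1)-(3) of the text.
  abstractVar :: String -> CL -> CL
  abstractVar x (CVar y)
    | y == x           = CApp (CApp S K) K               -- (1) I = SKK
  abstractVar x m
    | not (occurs x m) = CApp K m                        -- (2) KM
  abstractVar x (CApp m n) =
    CApp (CApp S (abstractVar x m)) (abstractVar x n)    -- (3)
  abstractVar _ m = m                                    -- unreachable

  -- lx.ly.yx, eliminated inside out; the result prints as
  -- S (K (S (SKK))) (S (KK) (SKK)), i.e., S(K(SI))(S(KK)I) with I = SKK.
  lifted :: CL
  lifted = abstractVar "x" (abstractVar "y" (CApp (CVar "y") (CVar "x")))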

Parameters of Variation

The different perspectives sketched above present a basically uniform general system from different points of view. But this general system is the source of a family of quite distinct systems that arise by fixing specific values along particular dimensions. Here we examine three of these dimensions. The first involves the addition of new operators apart from application and abstraction. The second involves assigning 'types' to l-terms, and serves as a source of variation because of different ways in which the external system of types and l-terms interact. Our third dimension involves 'resource-relations' within l-terms, and thus deals with intrinsic properties of l-terms themselves (although the properties in question are hardly restricted to l-terms).

Additional Operators
A standard convention in the study of l-terms is to assume that restricting applications and abstractions to act on one term at a time is not a restriction in principle, in view of the 1-1 correspondence between functions that map a pair of elements ⟨a, b⟩ to a value v and functions that map the element a to a function that maps b to the value v. For example, take the function that maps the pair ⟨a, b⟩ to their sum a + b. Instead of doing this in one fell swoop, we can start with a and get back the function ly.a + y (that is, the function increase-by-a), then apply this to b (in the form (ly.a + y)b), which normalizes to a + b, the exact value we reached from the pair. It is possible to incorporate this reasoning explicitly into the l-calculus by introducing operators apart from application and abstraction (though it is also possible, but not always convenient, to define such operations by means of l-terms). The most basic case is the addition of a binary pairing operator ⟨·, ·⟩, together with projection operators p0, p1, defined so that p0(⟨A, B⟩) = A, p1(⟨A, B⟩) = B, and ⟨p0(A), p1(A)⟩ = A. Taking '+' as a primitive, and using these definitions, one may show that (lz.p0(z) + p1(z))⟨3, 5⟩ = ((lx.(ly.x + y))(3))(5), in the sense that they both normalize (given the equations for the pairing operator) to 3 + 5. If we reverse the order of arguments and at the same time reverse the order of abstraction operators, both cases normalize to the same value: ((lx.(ly.x + y))(3))(5) and ((ly.(lx.x + y))(5))(3) both reduce to 3 + 5. In just the same way that we smuggled some arithmetical notation into these terms, we can combine the characteristic application/abstraction structure of the l-calculus with many systems (including the system of natural language expressions).

Thus, in the l-calculus, there are interesting correspondences to be observed between cases involving multi-argument functions and cases in which functions act on one argument at a time, as well as correspondences between functions that act on several arguments successively and functions that act on a permutation of the order of these arguments. These questions bear directly on linguistic attempts to model syntactically such phenomena as discontinuous dependencies, the 'non-standard' constituent structure found in intonational phrasing, coordinate structures, and clitic structures. Rigid approaches to constituent structure are not naturally adaptable to the analysis of such phenomena, and require the introduction of theoretically ad hoc 'restructuring rules' that manipulate constituent structure (often under the assumed control of particular lexical expressions); on the other hand, while the full generality of the flexible possibilities of the l-calculus provides a capacious framework to investigate these issues, it may offer more equivalences than are actually wanted. A reasonable formal balance would be to make flexible constituency a choice, but not a requirement (as in multi-modal type logical grammar; Moortgat, 1997; Oehrle, in press).
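
Haskell's Prelude has exactly this correspondence built in: (,) plays the role of the pairing operator, fst and snd the roles of p0 and p1, and curry and uncurry mediate between the two regimes. A small illustration (our names):

  addPair :: (Int, Int) -> Int
  addPair z = fst z + snd z          -- lz.p0(z) + p1(z)

  addCurried :: Int -> Int -> Int
  addCurried x y = x + y             -- lx.ly.x + y

  -- Both routes yield the same value:
  --   addPair (3, 5)            == 8
  --   addCurried 3 5            == 8
  --   uncurry addCurried (3, 5) == 8
  -- and flip permutes the order in which the arguments are consumed:
  --   flip addCurried 5 3       == 8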

Types

The basic system of l-terms that we have focused on thus far is type-free in the sense that the formation rules for application and abstraction enforce no restrictions on the properties of their input and output terms. This has some advantages: there is a single identity operator lx.x (aka l1 in the nameless notation, or I as a combinator), rather than a distinct identity function for each set (as in set-theoretical treatments of functions). And it has some disadvantages: certain combinations allowable in the type-free system, such as ((lx.xx)(lx.xx)), have no b-normal form. Analogous choices are important in a number of places in natural language analysis, where 'type' is closely related to the notion 'category' and where grammatical composition is standardly taken to be category-dependent.

Church Typing

One way to assign types to l-terms originates with Church (1940), with sources in earlier work going back to Russell's ramified theory of types. We first define a system T of types generated by a set of basic types t. Any element t of t is a type in T; if t1 and t2 are elements of T, so is t1 → t2. For example, if t = {np, s}, we have basic types np, s, and such complex types as s → s (compare sentence modifiers), np → s (a simple predicate), np → (np → s) (a 2-place predicate), and (np → s) → s (a monadic quantifier). Every atom a is assigned a type t (not necessarily an atomic type!). We write M : t to indicate that a term M has been assigned type t. The rules for application and abstraction are modified so that every complex expression is also assigned a type. Application is stated so that an application (MN) : b can be formed only from M : a → b and N : a. (In this transition, the types a → b and a combine to yield the type b.) Similarly, if x : a is a variable of type a and M : b is a term of type b, we can form the abstraction (lx : a.M : b) : a → b.

A consequence of Church's typing system is that there is no longer a single identity function represented by the equivalence class that includes lx.x, ly.y, lz.z, . . . . Instead, for each typed variable x : a, there is an identity function lx : a.x : a, and if the types a and b are distinct, then lx : a.x : a and ly : b.y : b are
distinct identity functions: they act on distinct types. This usage accords with the definition of functions in algebra and category theory, where there is a particular identity function 1A for every set or category A, but no general identity function that acts on any object whatsoever. But it conflicts with the intuition that such a general operation is a reasonable one.

A second consequence of Church's typing system is that self-application is impossible. For any typed l-term f : a, there can be no term (f : a)(f : a), because the type restrictions imposed on the formation of applications require that in the first occurrence of f : a, a be a type of the form a → b. But as we have defined types, this is impossible. As a result of these restrictions, however, the system of l-terms with Church typing is 'strongly normalizing': every term has a b-normal form, and every sequence of b-reductions starting with a given term M terminates with the normal form of M after a finite number of steps. As we will see below, Montague's celebrated system of Intensional Logic is an extension of Church's typing system.
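
Church-style typing is easy to mechanize, since every bound variable carries its type. The checker below is a minimal sketch with invented names (Ty, ChTerm, check); it computes the type of an annotated term, rejecting ill-typed applications and, in particular, self-application:

  -- Simple types: atoms and arrow types a -> b.
  data Ty = AtomTy String | Arr Ty Ty
    deriving (Eq, Show)

  -- Church-typed terms: each abstraction annotates its variable.
  data ChTerm = ChV String | ChA ChTerm ChTerm | ChL String Ty ChTerm

  -- An application (MN) : b requires M : a -> b and N : a.
  check :: [(String, Ty)] -> ChTerm -> Maybe Ty
  check env (ChV x)     = lookup x env
  check env (ChA m n)   = do
    Arr a b <- check env m
    a'      <- check env n
    if a == a' then Just b else Nothing
  check env (ChL x a m) = Arr a <$> check ((x, a) : env) m

  -- lx:e.x is one identity function among many, of type e -> e:
  --   check [] (ChL "x" (AtomTy "e") (ChV "x"))
  --     == Just (Arr (AtomTy "e") (AtomTy "e"))
  -- Self-application fails: in (lx:a.xx), x : a is not an arrow type.
  --   check [] (ChL "x" (AtomTy "a") (ChA (ChV "x") (ChV "x"))) == Nothing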

Curry Typing

Church's type system assigns a fixed type to every atom. And the type requirements imposed on the components of well-formed complex types preclude the formation of self-application types. Curry proposed an alternative (clearly set forth in Hindley, 1997), in which the set of terms is the same as in the general (type-free) system. Some of these terms are typable, others are not. Types are assigned to terms by a system of 'type-inference' stated over 'sequents' of the form G ↦ M : t, where G is a set of type-declarations of the form x : a (associating an atom x with a type a), and M : t pairs a term M with a type t. The antecedent G is called a 'context.' Two contexts G1 and G2 are said to be 'consistent' when their union does not contain both x : a and x : b with a ≠ b. We write G − x to denote the result of removing from G any element of the form x : a (if there is one).

Curry's type assignment system starts with an infinite set of axioms: for any variable x and type a, the sequent {x : a} ↦ x : a is an axiom. Then there are two inference rules, for application and abstraction, respectively. The application rule states that if G1 ↦ M : a → b and G2 ↦ N : a, then G1 ∪ G2 ↦ (MN) : b, if G1 ∪ G2 is consistent. The abstraction rule states that if G ↦ M : b, then G − x ↦ lx.M : a → b, if G is consistent with {x : a}.

These inference rules depend on properties of the contexts involved. But we are particularly interested in type assignments that make no particular
assumptions – that is, the empty context. For example, no particular assumptions are needed to prove that the term lx.x can be assigned any type of the form a → a: we start with the conclusion of the desired proof, which takes the form ∅ ↦ lx.x : a → a, and this sequent can be proved using the abstraction rule from the sequent x : a ↦ x : a, which is itself an axiom. To prove that the application of the identity function to itself is typable, we start with the sequent ∅ ↦ (lx.x)(lx.x) : a → a, which is provable if we can prove both ∅ ↦ lx.x : (a → a) → (a → a) and ∅ ↦ lx.x : a → a (where the empty antecedents are clearly consistent). But we've just seen how to carry out these two sub-proofs – in one of them, the variable x is associated with the type a; in the other, the variable x is associated with the type a → a. On the other hand, the term lx.xx is not typable at all. If it were, we could prove it by proving ∅ ↦ lx.xx : a → b, which is provable if it is provable that {x : a} ↦ xx : b. And this is provable if we can find consistent contexts G1 and G2, with x : a → b in G1 and x : a in G2. But this is impossible: any solution of the final requirement is inconsistent, since it requires that x be associated with distinct types. This shows that not every type-free l-term is typable.

Any term that is well-typed in Church's system is typable in Curry's system of type assignment. And to every way of typing a term in Curry's system, there is a corresponding Church-typed term. A basic difference, however, is that Curry's system allows a single term to be typed in more than one way. This is the essential property of polymorphic type systems, which have played an increasingly important role in the semantics of programming languages. For example, for important practical reasons, integers (type int) and real numbers (type float) are distinct types in many programming languages, but we would like to be able to combine integers with reals in arithmetical operations (accommodated by shifting objects of type int to corresponding objects of type float) and then to print the result (conversion again, to type string) in decimal notation – distinct from the underlying representation of either type of number in the machine language. There is a direct analogy here to what has been called 'coercion' in natural language semantics, involving shifts between mass and count interpretations of nouns or various aspectual categories. From a linguistic perspective, Curry's system – which is not rigid in the assignment of types to underlying constants but does demand overall consistency – is closer to natural language analysis than Church's system, because in natural language (roughly speaking) global consistency is more highly valued than rigid adherence to local values.
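
Curry-style type assignment is the ancestor of the inference used in ML-family languages. The following is a compact and deliberately simplified sketch under our own assumptions (fresh type variables threaded by an Int counter, substitutions as association lists): constraints are solved by unification, and the occurs check is what rejects lx.xx. It operates on the untyped Term representation from earlier:

  data T = TV Int | T :-> T
    deriving (Eq, Show)
  infixr 5 :->

  type Sub = [(Int, T)]

  appSub :: Sub -> T -> T
  appSub s (TV i)    = maybe (TV i) (appSub s) (lookup i s)
  appSub s (a :-> b) = appSub s a :-> appSub s b

  bindTV :: Int -> T -> Maybe Sub
  bindTV i t
    | t == TV i    = Just []
    | occursTV i t = Nothing        -- occurs check: blocks lx.xx
    | otherwise    = Just [(i, t)]

  occursTV :: Int -> T -> Bool
  occursTV i (TV j)    = i == j
  occursTV i (a :-> b) = occursTV i a || occursTV i b

  unify :: T -> T -> Maybe Sub
  unify (TV i) t = bindTV i t
  unify t (TV i) = bindTV i t
  unify (a :-> b) (c :-> d) = do
    s1 <- unify a c
    s2 <- unify (appSub s1 b) (appSub s1 d)
    Just (s2 ++ s1)

  -- Infer a type, threading a supply of fresh type variables.
  infer :: [(String, T)] -> Int -> Term -> Maybe (Sub, T, Int)
  infer env k (Var x)   = do t <- lookup x env; Just ([], t, k)
  infer env k (Lam x m) = do
    (s, b, k') <- infer ((x, TV k) : env) (k + 1) m
    Just (s, appSub s (TV k) :-> b, k')
  infer env k (App m n) = do
    (s1, tm, k1) <- infer env k m
    (s2, tn, k2) <- infer [(v, appSub s1 t) | (v, t) <- env] k1 n
    s3 <- unify (appSub s2 tm) (tn :-> TV k2)
    Just (s3 ++ s2 ++ s1, appSub s3 (TV k2), k2 + 1)

  -- infer [] 0 (Lam "x" (Var "x")) succeeds with type TV 0 :-> TV 0;
  -- infer [] 0 (Lam "x" (App (Var "x") (Var "x"))) == Nothing.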

Resource-sensitivity

In the constant function ly.x (where we take y and x to be distinct atoms), there is no free occurrence of y in x. In such a case, when the variable associated with an abstraction operator has no free occurrences within its scope, the abstraction is said to be 'vacuous.' In the identity function ly.y, the abstraction operator ly binds exactly one occurrence of the associated variable y within its scope. A case of this kind is said to be 'linear.' There is no standard term that applies to the case in which an abstraction operator binds more than one occurrence of the associated variable within its scope, as with the combinator S, which is expressed in the l-calculus by a term of the form lx.ly.lz.xz(yz), where the abstraction operator lz binds two occurrences of the associated variable z within its scope.

These same distinctions show up in a wide variety of contexts and data structures, from the differentiation of 'sets,' 'multisets,' and 'sequences,' to the study of 'relevance logic' and 'linear logic,' to graph theory (in the distinction between 'graphs,' which allow at most one edge between nodes, and 'multigraphs,' which allow distinct edges between a single pair of nodes), and probability theory (in the distinction between 'sampling with replacement' and 'sampling without replacement'). Interest in subsystems of l-terms or combinators weaker than the full system has grown sharply in recent years. (For further discussion, see Morrill and Carpenter, 1990; van Benthem, 1995; Moortgat, 1997; and Oehrle, 2003. Steedman's Combinatory Categorial Grammar (2000) is based on a resource-sensitive fragment of combinatory logic.) Church observed that the application (ly.z)((lx.xx)(lx.xx)) is unusual in having a b-normal form (namely, z) and allowing a non-terminating series of b-reductions (namely, the series that arises by always choosing to reduce the subterm (lx.xx)(lx.xx), which reduces to itself). Such a case, in which a term has a normal form but has a subterm with no normal form, arises only if vacuous abstraction is allowed.

Resource-sensitivity is not restricted to questions of occurrence or multiplicities (as in the differentiation of vacuous, linear, and multilinear binding just discussed), but also involves the equivalence or differentiation of structural relations, such as 'associativity' and 'order.' These properties are universal for binary operations, in the sense that any binary operation can be characterized in part by its treatment of resources. Not surprisingly, the resource-sensitive perspective that has one of its sources in the study of l-terms has many applications to linguistic analysis.
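
The occurrence-counting behind these distinctions is straightforward to compute over the Term representation used earlier (a small illustrative sketch; the names are ours):

  data Usage = Vacuous | Linear | Multiple
    deriving (Eq, Show)

  -- Free occurrences of x in M (shadowed occurrences don't count).
  countFree :: String -> Term -> Int
  countFree x (Var y)   = if x == y then 1 else 0
  countFree x (App m n) = countFree x m + countFree x n
  countFree x (Lam y m) = if y == x then 0 else countFree x m

  -- One classification per abstraction operator, outermost first.
  usages :: Term -> [Usage]
  usages (Var _)   = []
  usages (App m n) = usages m ++ usages n
  usages (Lam x m) = classify (countFree x m) : usages m
    where classify 0 = Vacuous
          classify 1 = Linear
          classify _ = Multiple

  -- usages (Lam "y" (Var "x")) == [Vacuous]   -- constant function
  -- usages (Lam "y" (Var "y")) == [Linear]    -- identity
  -- For S = lx.ly.lz.xz(yz), the lz binder classifies as Multiple.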

Linguistic Applications of the l-calculus

The l-calculus is applicable to any system in which the notions of function or operation play a central role. In the case of natural language analysis, there are two basic perspectives one may take, in view of the fact that language is a 'multi-dimensional' phenomenon in which a variety of subsystems – segmental structure, syntactic form, intonational structure, semantic and pragmatic interpretation – play interactive and mutually-constraining roles. In such a case, one may consider the applicability of the l-calculus to the individual subsystems, or examine how it can be used to regulate the interaction of these subsystems. In the exposition and examples to follow, we freely shift between these two perspectives.

The Extensional Subsystem of Montague's PTQ

A number of important questions regarding syntactic categories, semantic types, and semantic interpretation crystallized around the publication of a series of papers in the late 1960s and early 1970s by Richard Montague, which showed how a number of difficult problems in natural language semantics can be modeled (to a first approximation) in a possible-worlds setting. (Montague's linguistic work is contained in Montague (1974).) Before considering the full intensional system, we first examine its extensional subsystem, which is a Church-typed form of higher-order l-terms.

The extensional type system is built up from atomic types e ('entity') and t ('truth-value') and is closed under a binary operation ⟨·, ·⟩ (which plays the same role here as the binary operation → introduced above in the discussion of Church's type system). The language associated with these types is the smallest set containing:

1. constants Ca and variables Va for each type a;
2. the l-term lx.a of type ⟨b, a⟩, whenever x is in Vb and a is an expression of type a;
3. the application a(b) of type a, whenever a is an expression of type ⟨b, a⟩ and b is a term of type b;
4. the equality a = b of type t, whenever a and b are of the same type;
5. the standard truth-functional operations from propositional logic (¬, ∨, ∧, →, ↔), which combine with one or more expressions of type t to form an expression of type t;
6. the (higher-order) quantifiers ∃ and ∀, which combine with a variable u (of any type) and an expression f of type t to make the existential quantification ∃uf and the universal quantification ∀uf.

To interpret this language, we need a nonempty set E of entities, the set {0, 1} of truth-values (0 for false, 1 for true), and we need to specify, for each type a, the set Da of possible interpretations of expressions of type a: De = E, Dt = {0, 1}, and D⟨b,a⟩ is the set of
functions with domain Db and codomain Da. In addition to these constraints, we need a function F that assigns to each constant c of type a an interpretation in Da, and a function g that assigns to each variable u of type a an interpretation in Da. Looking back to the extensional language defined above, we can interpret it in the following way: (1) if c is a constant, its interpretation is F(c), and if u is a variable, its interpretation is g(u); (2) the abstraction lx.a of type ⟨b, a⟩ is interpreted as a function h : Db → Da from Db to Da; this function acts on an argument b in Db and associates it with that entity in Da which is the interpretation of a, differing (possibly) from its interpretation relative to F (which assigns interpretations to the constants in a) and the assignment function g (which interprets the free variables of a) only by fixing the value of the variable x in a to the argument b. (This is a model-theoretic analogue of b-reduction: whereas we think of b-reduction as manipulating symbols 'syntactically,' by replacing one symbol with another in the course of carrying out substitution, the model-theoretic analogue simply identifies semantic values.) (3) Given a of type ⟨b, a⟩, interpreted as a function a′ from Db to Da, and given b of type b with interpretation b′ in Db, the interpretation of the application a(b) is simply the application of the function a′ to the argument b′, which yields a′(b′) in Da. The remainder of the clauses are standard.

We now introduce an 'object-language,' whose elements are interpreted categorized expressions. To indicate that an expression e of the object-language is associated with interpretation i and type T, we write e:i:T. Each syntactic category C is associated with a semantic type T(C); and each object-language expression belonging to C will be interpreted as an object of the semantic type T(C). We assign to each 'lexical' object-language expression e a category and an interpretation i consistent with its category (and indicate the association between expression and interpretation by writing e:i). Each rule that combines expressions e1, . . ., ek into a complex expression E(e1, . . ., ek) also assigns an interpretation E′(e′1, . . ., e′k) based on the interpretations e′1, . . ., e′k of the syntactic components e1, . . ., ek. Finally, to make the exercise more interesting, we assume that the set of categories is 'structured': we begin with some 'atomic categories' (atomic for purposes here, at least); then we close the set of categories under a binary operation /, so that whenever A and B are categories associated with semantic types T(A) and T(B), respectively, then A/B is a category associated with the semantic type ⟨T(B), T(A)⟩. As basic categories, we choose: S ('sentence'), with T(S) = t; Nm ('name'), with T(Nm) = e; CN ('common noun'), with T(CN) = ⟨e, t⟩.
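
A shallow embedding makes this type discipline tangible: e becomes a small enumerated domain, t becomes Bool, and the types of categories like S/Nm and S/(S/Nm) become the corresponding Haskell function types. This is a toy model under our own assumptions (three entities, invented denotations), not Montague's fragment itself:

  -- e as a three-element domain, t as Bool.
  data Ent = JanE | LeeE | KimE
    deriving (Eq, Enum, Bounded, Show)

  entities :: [Ent]
  entities = [minBound .. maxBound]

  -- Lexical one-place predicates, of type <e,t> (i.e., T(S/Nm)).
  teacher, student, sneezes :: Ent -> Bool
  teacher   = (== KimE)
  student x = x == JanE || x == LeeE
  sneezes   = (== JanE)

  -- Quantifiers inhabit <<e,t>,t> (i.e., T(S/(S/Nm))).
  everyone :: (Ent -> Bool) -> Bool
  everyone p = all p entities

  every :: (Ent -> Bool) -> ((Ent -> Bool) -> Bool)
  every q p = all (\x -> not (q x) || p x) entities  -- ∀x.(Q(x) → P(x))

  -- Type-lifting a name from e to <<e,t>,t>.
  liftName :: Ent -> ((Ent -> Bool) -> Bool)
  liftName j p = p j

  -- "jan sneezes": sneezes JanE == liftName JanE sneezes == True
  -- "every teacher sneezes": every teacher sneezes == False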

We assume no lexical element of the category S, but we admit such lexical names as jan:je:Nm and lee:le:Nm (where je and le are constants of type e) and such lexical common nouns as teacher:tch⟨e,t⟩:CN and student:st⟨e,t⟩:CN. We also assume the existence of lexical elements belonging to various complex categories. For example, we may allow the lexical element sneezes:snz⟨e,t⟩:S/Nm, the lexical element everyone:lP⟨e,t⟩.∀xe.P(x):S/(S/Nm), and the lexical element every:lQ⟨e,t⟩.lP⟨e,t⟩.∀xe.(Q(x) → P(x)):(S/(S/Nm))/CN.

If a is an expression whose syntactic category is of the form A/B and b is an expression whose syntactic category is B, then a and b are compositionally compatible: we know that if it is in fact possible to combine a and b syntactically, then there is available a syntactic category (namely, A) and a compatible interpretation (a′(b′)) that can be assigned to A. But in Montague's syntactic category system (unlike other forms of Categorial Grammar), the structure of syntactic categories themselves does not determine how two compositionally compatible expressions can combine. As a consequence, it is necessary to state rules of combination for various pairs of syntactic categories. For example, an expression of the form X:P⟨e,t⟩:S/Nm and an expression of the form Y:ye:Nm can combine to yield an expression of the form Y X:P⟨e,t⟩(ye):S. (Thus, we can construct the categorized interpreted expression jan sneezes:snz⟨e,t⟩(je):S.) To make sentences with quantificational subjects, we need a similar, but different, rule: an expression of the form Q:Q⟨⟨e,t⟩,t⟩:S/(S/Nm) and an expression of the form X:P⟨e,t⟩:S/Nm can combine to form an expression of the form Q X:Q⟨⟨e,t⟩,t⟩(P⟨e,t⟩):S. Both these rules combine an expression with syntactic category of the form A/B and an expression with syntactic category B, but the two rules cannot be collapsed, since the order of combination differs: in one case, the B-category expression precedes the A/B-category expression; in the other case, it follows.

There is a possible simplification, which Montague took advantage of: for each expression of the form X:ie:Nm, there is a corresponding expression of the form X:lP⟨e,t⟩.P(ie):S/(S/Nm). (The transition from Nm to S/(S/Nm) is a special case of what is often referred to as type-lifting.) If we combine the type-lifted form jan:lP⟨e,t⟩.P(je):S/(S/Nm) with sneezes:snz⟨e,t⟩:S/Nm by an application – this time with a second-order argument S/Nm, rather than with an atomic argument – the result is jan sneezes:(lP⟨e,t⟩.P(je))(snz⟨e,t⟩):S. The first and third dimensions are the same as those we obtained earlier, on the assumption that jan was typed Nm, and a simple calculation using the interpretive rules for abstraction and application
shows that the medial term has the same value in both cases. Moreover, by treating all names as belonging to the higher category S/(S/Nm), rather than Nm, we can generalize the rule of application so that any expression of the form X:x:A/B can combine with any expression of the form Y:z:B to form an expression of the form X Y:x(z):A. For example, given the lexical assumptions above, two such applications yield an analysis of every teacher sneezes:. . .:S, whose middle term we leave for the reader to supply. The application of several b-reductions will show the equivalence of this term with the normal form ∀xe.(tch⟨e,t⟩(xe) → snz⟨e,t⟩(xe)).

To extend this analysis to transitive verbs, one must find a way to deal with both names and quantifiers in object position. This involves two questions: is it possible to transfer the syntactic category and semantic type of quantifiers in subject position to quantifiers in object position? And how is it possible to introduce the scope ambiguities which arise when quantifiers are in both subject and object position? Montague dealt with the first question in effect by appealing to a form of type-lifting. Instead of assigning transitive verbs the type (S/Nm)/Nm and the corresponding semantic type ⟨e, ⟨e, t⟩⟩, he assumed that each (extensional) transitive verb (such as respects) is associated with a corresponding constant (such as respect*⟨e,⟨e,t⟩⟩) of type ⟨e, ⟨e, t⟩⟩, but the verb itself takes a quantifier argument, and the interpretation of the quantifier argument is applied to the object argument of the verb. Thus, we have such lexical assumptions as respects:lQ.lx.Q(ly.respect*(y)(x)):(S/Nm)/(S/(S/Nm)). We can combine respects with every teacher to form the verb phrase respects every teacher of syntactic category S/Nm, whose semantic value is equivalent to the b-normal form expression lx.(∀ye.(tch(y) → respect*(y)(x))). This use of type-lifting reconciles the monadic quantifier type with non-subject syntactic positions.

But if multiple quantifiers appear in a sentence, their relative scope depends completely (on this account so far) on the order of combination: scope is inverse to the order of combination, with the earliest quantifier added having the narrowest scope and the last quantifier added the broadest scope. Montague offered an account of scope as well, an account based on the role of variables in such systems as first-order logic. Suppose we admit the lexical element she1:lP⟨e,t⟩.P(x1):S/(S/Nm) (with x1 a variable of type e). And suppose we allow a quantifier Q:Q:S/(S/Nm) to combine with a sentence sent:f:S to form the result sent[Q/she1]:Q(lx1.f):S. In the first dimension, sent[Q/she1] is defined as the result of replacing the first occurrence of she1 in sent (if there is one) with Q and replacing subsequent occurrences of she1 with appropriate pronouns; in
the second dimension, we apply the interpretation of the quantifier to the result of applying the abstraction operator lx1 to the interpretation of the sentential argument. For our purposes, the interesting case arises when the argument sentence is built up from the element she1, as is the case for some teacher respects she1:∃x(tch(x) ∧ respect*(x1)(x)):S. If we now combine this expression with the quantifier every student:lQ.∀z.(st(z) → Q(z)):S/(S/Nm), the resulting sentence has the form some teacher respects every student, and its interpretation (relative to this analysis) is equivalent to ∀z.(st(z) → ∃x(tch(x) ∧ respect*(z)(x))), with the universal quantifier taking wide scope over the existential quantifier. When the sentential argument contains multiple occurrences of she1, Montague's account provides a treatment of (sentence-internal) bound anaphora – a treatment that is rather rudimentary in terms of empirical coverage, but completely rigorous in terms of its model-theoretic foundations. As stated, this mode of combination is resource-insensitive, since it is compatible with a vacuous form of quantifying-in as well.

In this account, the form she1 is critical both for dealing with scope ambiguities and for dealing with anaphora. Its role is quite analogous to that of the individual variables in standard presentations of first-order logic (or indeed, in the standard presentation of l-terms above). In dealing with more complex sentences than the extremely simple ones considered here, more than one variable is needed. In order to ensure that there would always be enough variables, Montague admitted denumerably many of them (she1, . . ., shen, . . .), and for each distinct variable, his fragments contain a distinct rule of quantifying-in (since the quantifying-in rule must link the form of the syntactic variable and its interpretation with the abstraction operator used in the statement of the interpretation of the result).

The extensional parts of the fragments that Montague introduced are much richer than we have been able to indicate above. They also include a form of relative clauses and treatments of coordination for sentences, verb phrases, and quantifiers. The use of l-terms makes the interpretive properties of these constructions especially perspicuous. Like the quantifying-in rule, the relative clause rule depends on the variable-like terms she1, . . ., shen, . . ., and in fact there is a family of relativization rules, one for each variable, just as in the case of the quantifying-in rule. The n-th such rule combines a common noun z:z′:CN and a sentence f:f′:S to form a CN of the form z*:lxn.(z′(xn) ∧ f′):CN. In the first dimension, z* is the result of transforming occurrences of shen in f in ways that we shall gloss over. In the second dimension, the abstraction operator lxn binds an occurrence
of xn passed to z′ as an argument, as well as any occurrences of xn that might be free in f′.

The treatment of verb-phrase and quantifier coordination relies on a standard fact from lattice theory: if L is a lattice and X is any non-empty set, then the function-set L^X of all functions from X to L may be regarded as a lattice as well, with lattice operations defined pointwise. This means that if f and g are functions from X to L, we define their meet f ∧ g as that function from X to L which maps x in X to f(x) ∧L g(x) (where ∧L is the meet operator in the lattice L). The l-term representation of this function is transparent: lx.(f(x) ∧L g(x)). In Montague's application of this fact to natural language coordination, we start with the lattice of truth values – that is, the set {0, 1} whose meet operator is the conjunction ∧ and whose join operator is the disjunction ∨. In addition, we have the non-empty set E of entities, and the set of functions from E to {0, 1} is in fact the set D⟨e,t⟩, the set of possible denotations for one-place predicates. Given two one-place predicates g:g′:S/Nm and d:d′:S/Nm, we can form their conjunction g and d:lx.(g′(x) ∧ d′(x)):S/Nm and their disjunction g or d:lx.(g′(x) ∨ d′(x)):S/Nm. The extension to quantifiers is simply the application of the same technique to the function-set of all functions from one-place predicates to the lattice of truth values. The only difference in the semantic dimension – where conjunction, say, takes the form lP⟨e,t⟩.(Q1(P) ∧ Q2(P)) – is that the abstraction operator involves a higher-order type.

Montague showed that coordination and quantification interact in semantically interesting ways. For example, it is possible to derive the intuitive nonequivalence of such pairs of sentences as a man or a woman found every fish and a man found every fish or a woman found every fish. Since Montague's work, the approach he pioneered has been applied to a much broader range of coordinate structures, including a variety of forms of so-called 'nonconstituent conjunction' – expressions whose properties are at odds with standard phrase-structure accounts, but compatible with functionally-based accounts of syntactic composition, in the sense that it is not hard to say what such expressions combine with and what the syntactic result of this combination is. See Gazdar (1980), Rooth and Partee (1982), Partee and Rooth (1983), Keenan and Faltz (1985), Steedman (1985, 1990), Dowty (1988), and Oehrle (1987).

To move from this extensional account to an intensional model requires several related changes. First, we enrich the system of semantic types with a new modal type constructor ⟨s, ·⟩. (The symbol s here is not interpreted, and the type constructor is essentially a unary operator rather than a binary one.) Second, we enrich the associated language by introducing a modal
operator □ for necessity and two tense modalities W (future) and H (past). Each of these modalities combines with an expression of type t to form an expression of type t. Finally, there are two functions that deal with the intensional type constructor ⟨s, ·⟩: if a is an expression of type a, then ˆa ('up a') is an expression of type ⟨s, a⟩; and if a is an expression of type ⟨s, a⟩, then ˇa ('down a') is an expression of type a.

To interpret this language, we assume not only a set E of entities and the set {0, 1}, but also a set of possible worlds I and a linearly-ordered set of moments of time J. We define the set D⟨s,a⟩ as the set of functions with domain I × J and codomain Da. For example, an expression of type ⟨s, e⟩ will be a function that associates each world-time coordinate ⟨i, j⟩ with an individual entity in E, and an expression of type ⟨s, t⟩ will be a function that associates each world-time pair ⟨i, j⟩ with a truth value. For any constant of type a, we assume that this information is given to us, as a function that acts on world-time coordinate pairs – elements of the product I × J – and yields appropriate values at any such coordinate. We record how it is given in terms of a function F that maps constants of type a to functions from I × J to Da. Let g be an assignment function that associates any variable u of any type (type a, say) with a value in Da. (Thus, if u is a variable of type e, then g(u) will be an element of E, not a function from world-time coordinates to E.)

We now define the value an expression a takes at an arbitrary world-time coordinate ⟨i, j⟩. If a is a constant of type a, then F(a) is a function from I × J to the set of possible denotations of a, and applying F(a) to the world-time coordinates ⟨i, j⟩ yields an element of Da. The clauses for abstraction, application, equality, and the propositional and first-order operators are the same (though the quantifiers make essential use of the assignment function g and alternatives to it related to the variable involved). The clauses for the modal operators □, W, and H fix a value at one world-time coordinate relative to the value their argument takes at alternative world-time coordinates. For example, we define the value of the necessitation □f at a given world-time coordinate ⟨i, j⟩ to be true if the value of f is true at every world-time coordinate, and false otherwise; and we define the value of the 'past-tensification' Hf at a given world-time coordinate ⟨i, j⟩ to be true if there is a time j′ < j such that f is true at the coordinates ⟨i, j′⟩, and false otherwise. Finally, the value of ˆa (of type ⟨s, a⟩, where a is of type a) at a given world-time coordinate ⟨w, t⟩ is taken to be that function ha with domain I × J and codomain Da such that ha(⟨i, j⟩) is the value of a at the world-time coordinates ⟨i, j⟩. And the value of ˇa (where a is of type ⟨s, a⟩) simply applies the function associated with a to the coordinates ⟨i, j⟩ of interest.
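
The intensional regime amounts to indexing denotations by world-time pairs. The following is a minimal sketch under our own assumptions (a finite toy index space; all names invented); in such a shallow embedding an ⟨s, a⟩-type meaning simply is a function from indices to a-type values:

  type World = Int
  type Time  = Int
  type Index = (World, Time)

  -- An <s,a>-type meaning: a function from indices to a-type values.
  type Intension a = Index -> a

  worlds :: [World]
  worlds = [0 .. 2]

  times :: [Time]
  times = [0 .. 4]

  -- Necessity: true at any index iff true at every world-time coordinate.
  box :: Intension Bool -> Intension Bool
  box phi = \_ -> and [ phi (w, t) | w <- worlds, t <- times ]

  -- Past tense H: true at (w, t) iff true at some earlier time t' < t.
  pastH :: Intension Bool -> Intension Bool
  pastH phi = \(w, t) -> or [ phi (w, t') | t' <- times, t' < t ]

  -- 'down' applies an intension at the index of interest.
  down :: Intension a -> Index -> a
  down phi i = phi i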


To adapt our object language to the intensional framework, we introduce one significant change: the basic types Nm and S are still interpreted as elements of E and {0, 1}, respectively, but the atomic type CN is interpreted as ⟨⟨s, e⟩, t⟩ and, more generally, the interpretation of any functor type of the form A/B is required to belong to the type ⟨⟨s, |B|⟩, |A|⟩, where |A| and |B| are the semantic types associated with the categories A and B, respectively. For example, a one-place predicate belonging to the type S/Nm is to be interpreted as an element of type ⟨⟨s, e⟩, t⟩. And a quantifier is to be interpreted as an element of type ⟨⟨s, ⟨⟨s, e⟩, t⟩⟩, t⟩. These are not quite compatible, but Montague’s rule of application, which combines a quantifier of type S/(S/Nm) and a one-place predicate S/Nm, ensures the semantic well-formedness of the result by applying the interpretation of the quantifier to the intension of the interpretation of the argument. In other cases, Montague introduced expressions ‘syncategorematically,’ without assigning them a syntactic category and only implicitly assigning them a semantic interpretation. For example, the coordinators and and or are used to make conjunctive and disjunctive sentences (of type t), verb phrases (of category S/Nm), and quantifiers (of category S/(S/Nm)), but they have no syntactic category themselves. (And in this case, it is useful that functors which make sentences associate them with type t, rather than type ⟨s, t⟩.) Montague’s syntactic system treats intensional and extensional expressions identically. To distinguish them semantically, Montague introduced a number of ‘meaning postulates.’ One postulate ensures that proper names are interpreted as the same individual across all world-time coordinates. (This fits well with the contrast between the rigid interpretation of names and the nonrigid interpretation of descriptions in such pairs of sentences as If I were Elizabeth II, I would be a Windsor and If I were the queen of England, I would be a Windsor, where we are more likely to accept the first than the second.) The other postulates introduce assumptions which make it possible to deduce the different entailments of extensional and intensional expressions. Consider the interpretations of the extensional one-place predicate walk and the intensional one-place predicate rise. Each of them is assigned an interpretation of the semantic type ⟨⟨s, e⟩, t⟩. If w is the interpretation of walk, however, a postulate ensures that there is an element w* of the extensional type ⟨e, t⟩ with the property that w(x) holds just in case w*(ˇx) holds. In other words, w is an intensional lift of an extensional concept. Properly intensional concepts cannot be reduced to extensional concepts in this way.
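A minimal sketch of this factoring, under assumed toy types for worlds, times, and entities (the predicates wStar, walk, and rise below are invented for illustration and are not Montague's own definitions):

-- An intensional predicate factored through an extensional one,
-- after the extensionality postulate. All names are illustrative.
type World   = Int
type Time    = Int
type Entity  = String
type Sense a = (World, Time) -> a          -- the type <s,a>

-- Predicates of type <<s,e>,t>, evaluated at a coordinate:
type IntPred = Sense Entity -> (World, Time) -> Bool

-- walk is extensional: there is a wStar of (coordinate-relative)
-- type <e,t> such that walk x holds iff wStar applies to the
-- extension of x at that coordinate -- w(x) iff w*(down x).
wStar :: Entity -> (World, Time) -> Bool
wStar e (_, j) = e == "john" && j > 0      -- toy extensional content

walk :: IntPred
walk x ij = wStar (x ij) ij

-- A properly intensional predicate like 'rise' cannot be factored
-- this way: it inspects the individual concept across coordinates,
-- not just its value at the coordinate of evaluation.
rise :: IntPred
rise x (i, j) = x (i, j) /= x (i, j + 1)

main :: IO ()
main = do
  print (walk (\_ -> "john") (0, 1))       -- True
  print (rise (\(_, j) -> show j) (0, 1))  -- True: the concept changes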

For further details and analysis, see Montague’s original papers, as well as the excellent textbooks by Dowty et al. (1981) and Gamut (1991).

The object languages in the various fragments that Montague proposed are rigorously defined, but focus on relatively simple constructions and sometimes introduce devices from the study of formal languages that do not obviously fit. One possible perspective on this work draws on the history of model-theoretic consistency proofs: just as the great 19th-century constructions of models of non-Euclidean geometry prove the consistency of geometries in which the parallel postulate fails, Montague’s model-theoretic construction of linguistic fragments proves the consistency of languages with intensional constructions, quantification, a limited form of generalized coordination, and modality. At the same time, the rigorous analysis Montague offered of these properties radically improved our understanding of the interactions among these properties in their natural language setting. Montague’s functional perspective on composition makes it possible to assign functional types to a broad range of natural language expressions: for example, it is a simple step to move by abstraction from the interpretation of every teacher as λP.∀x.(tch(x) → P(x)) to an interpretation of every itself, as every : λQ.λP.∀x.(Q(x) → P(x)) : (S/(S/Nm))/CN. And by explicitly typing quantifiers (rather than treating them syncategorematically, as is the standard syntactic approach in first-order logic), it is natural to ask which elements of this type are actually instantiated in natural language expressions, paving the way for the work on generalized quantifiers by Barwise and Cooper (1981), Keenan and Stavi (1986), van Benthem (1986), Westerståhl (1988), and others in the late 1970s and 1980s, and for the connections between the monotonicity properties of quantifiers and other operators that have played an important role in the analysis of polarity-sensitive expressions. The fact that natural language proper names seem compatible with more than one type led to the investigation of a variety of forms of type-shifting (see van Benthem, 1986; Partee, 1987), leading to the exploration of systems of ‘type-inference’ and dynamic type-assignment. Finally, Montague’s exploration of the relation between a functionally-based syntax and a functionally-based semantics introduced a new paradigm for the study of grammatical composition, one in which grammatical composition can be seen as a multidimensional system of type-inference systems (differentiated by contrasting resource-management regimes), linked abstractly by a common core of shared principles (as in the Curry-Howard correspondence between proofs in intuitionistic implicational logic and λ-terms).
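Returning to the explicit typing of every mentioned above: a minimal sketch over a finite domain (the domain and the predicates teacher and smiles are assumptions made only for illustration):

-- 'every' with an explicit generalized-quantifier type, over a toy
-- finite domain.
data Entity = A | B | C deriving (Eq, Enum, Bounded, Show)

type Pred  = Entity -> Bool
type Quant = Pred -> Bool

domain :: [Entity]
domain = [minBound .. maxBound]

-- every : lQ.lP.Ax.(Q(x) -> P(x)); some : lQ.lP.Ex.(Q(x) /\ P(x))
every, some :: Pred -> Quant
every q p = and [p x | x <- domain, q x]
some  q p = or  [p x | x <- domain, q x]

teacher, smiles :: Pred
teacher = (/= C)
smiles  = (`elem` [A, B])

main :: IO ()
main = do
  print (every teacher smiles)  -- True: every teacher smiles
  print (some teacher (== C))   -- False: no teacher is C

Typing the determiner as a function in this way is exactly what makes it sensible to ask which functions of this type natural languages realize – the generalized-quantifier question pursued in the works cited above.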


As an example, consider the interaction of syntactic form and semantic interpretation involving infinitival arguments. In general, it is widely assumed that basic syntactic structure has a ‘linear’ character. From a functional perspective, this means that it can be modeled perspicuously by using a linear form of reasoning (such as the associative Lambek calculus L or its nonassociative variant NL). (In nonfunctional frameworks, the linearity of argument structure has given rise to a range of special-purpose principles: the θ-criterion of Government & Binding theory, the Completeness and Coherence principles of LFG.) On the other hand, various infinitival constructions suggest that the interpretations of syntactic arguments are not restricted to a linear resource-management regime, but can be used more liberally. This situation can be modeled directly by pairing a linear functional account of syntactic composition with a multilinear functional account of semantic composition, which we represent here by two systems of λ-terms. Consider first a verb like try. It combines with an np and an infinitive to form a sentence, a fact we represent here with the linear implicational type np ⊸ inf ⊸ s. This type does not represent how the properties of the np and inf arguments contribute to the whole. To represent these, we label this implicational type with two λ-terms. The first involves the interpretive dimension and takes the form λxₑ.λP⟨e,t⟩.try′(x, P(x)). The second is a function involving the dimension of physical form, and the variables f₁ and f₂ range over phonological or orthographical representations: we shall assume it takes the form λf₁.λf₂.(f₁ tried f₂). Note that the interpretive term is not linear: the abstraction operator λxₑ binds more than one free occurrence in its scope. The properties of the two term labels interact with the properties of the type system in the following ways: a modus ponens step in the labeled type system combines a structure that is assignable a labeled type of the form f₁ : s₁ : A ⊸ B with a structure that is assignable a labeled type of the form f₂ : s₂ : A, and the combination of the two structures is assignable the labeled type f₁(f₂) : s₁(s₂) : B; an ‘abstraction step’ in the labeled type system allows one to infer that a structure Γ is assignable a labeled type of the form λf.F : λs.S : A ⊸ B if the structure consisting of Γ together with the assumption f : s : A is assignable a labeled type of the form F : S : B. If we reach a situation in which every upward branch of the proof ends with an ‘identity axiom’ instance, where the structure on the left and the type on the right are the same, the proof is successful. Structures like Γ are built up with an operation • that is associative and commutative.
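The two-dimensional labeling can be loosely prototyped. The Haskell sketch below is not the labeled deduction system itself, only an illustration of a sign that pairs a string term with a (non-linear) semantic term; all names are my own, and logical forms are built as display strings rather than interpreted terms. The two curried argument orders anticipate the interderivability discussed next:

-- A two-dimensional sign for 'tried'.
type Str = String
type Sem = String

-- String dimension: lf1.lf2.(f1 tried f2)
triedStr :: Str -> Str -> Str
triedStr f1 f2 = f1 ++ " tried " ++ f2

-- Interpretive dimension: lx.lP.Past(try'(x, P(x))) -- note that
-- the variable x is used twice, so the term is not linear.
triedSem :: Sem -> (Sem -> Sem) -> Sem
triedSem x p = "Past(try'(" ++ x ++ ", " ++ p x ++ "))"

-- Supplying the np first or the inf first yields the same sign,
-- mirroring the interderivability of the two labeled types.
npFirst :: (Str, Sem) -> (Str, Sem -> Sem) -> (Str, Sem)
npFirst (sn, xn) (si, pn) = (triedStr sn si, triedSem xn pn)

infFirst :: (Str, Sem -> Sem) -> (Str, Sem) -> (Str, Sem)
infFirst inf np = npFirst np inf

main :: IO ()
main = do
  let john    = ("John", "j")
      toLeave = ("to leave", \x -> "leave'(" ++ x ++ ")")
  print (npFirst john toLeave)   -- ("John tried to leave","Past(try'(j, leave'(j)))")
  print (infFirst toLeave john)  -- the same pair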

In such a system, the labeled types λf₁.λf₂.(f₁ tried f₂) : λxₑ.λP⟨e,t⟩.(Past(try′(x, P(x)))) : np ⊸ inf ⊸ s and λf₂.λf₁.(f₁ tried f₂) : λP⟨e,t⟩.λxₑ.(Past(try′(x, P(x)))) : inf ⊸ np ⊸ s are interderivable. In this case, since the types np and inf are distinct, this means that we can combine the arguments of try in either order, with the same result. (It is also possible to think of the situation as one in which we can combine the arguments in a way that does not depend on the order of application.) The verb try has often been analyzed as a verb whose subject must syntactically control the (missing) subject of the infinitival argument. (On the account offered here, there is no representation of a missing subject of the infinitive: try combines directly with an infinitive.) A standard contrast (going back to the insights of Jespersen (1909–1949, 1937)) is to compare the behavior of a verb like try with a ‘raising’ verb like seem. As a first approximation, we might consider the following labeled type adequate for seemed. It has the same implicational type as tried and an analogous term in the first dimension, but differs in the interpretive dimension: λf₁.λf₂.(f₁ seemed f₂) : λxₑ.λP⟨e,t⟩.(Past(seem′(P(x)))) : np ⊸ inf ⊸ s. There is a better analysis of seemed, however, one which accounts for the scope ambiguity observable in sentences with a quantifier in subject position (Jacobson, 1990; Carpenter, 1997; Oehrle, in press). For example, the sentence Every student seems to be well-prepared can be construed to mean either that every student seems to have the property that he or she is well-prepared or that it seems that every student is well-prepared. This distinction is directly derivable in the type system described here by assuming that seemed combines with a monadic quantifier as its subject, rather than a simple noun phrase. Consider the labeled type λf.(f seems to be well-prepared) : λQ.(Pres(seem(Q(be well-prepared′)))) : ((np ⊸ s) ⊸ s) ⊸ s. This labeled type can combine with a quantifier of type Q : λP.Q(P) : (np ⊸ s) ⊸ s in this system of type-inference in two ways. On the first, the quantifier is the argument and the result (after normalizing the λ-terms) is Q seems to be well-prepared : Pres(seem(Q(be well-prepared′))). On this analysis, the quantifier has narrow scope with respect to the interpretation of seem. On the second analysis, the quantifier is the functor, and we must show that the structure labeled by seems to be well-prepared, of type ((np ⊸ s) ⊸ s) ⊸ s, yields a structure compatible with the argument of the quantifier, namely, of the type np ⊸ s. We can derive this result by an abstraction inference, if we can show that the structure built up from np and ((np ⊸ s) ⊸ s) ⊸ s yields the type s. And we can do this if we can show that the type (np ⊸ s) ⊸ s is derivable from the structure np.


This is simply the type-lifting rule, which is derivable in this system by abstraction from the structure consisting of np ⊸ s and np, which is itself an application. The interpretive term associated with np ⊸ s in this proof is λx.((λQ.(Pres(seem(Q(be well-prepared′)))))(λP.P(x))). And in the application of the interpretation of the quantifier to this term, the quantifier outscopes seem. Quantifier scope ambiguities arise in this system without special stipulation. In fact, the type system itself provides four distinct proofs that two quantifiers and a two-place predicate can combine to form a sentence: there are two ways in which the quantifiers can bind the arguments of the predicate and, for each of these two ways, two possible scopings. In natural language, the position of the quantifier in the sentence determines which argument it binds. And in the term-labeled system described here, the string terms constrain which argument the quantifier binds in exactly the same way. As a result, only the two intuitively available readings are possible. For details, see Oehrle (1994, 1995) and more recent work by Muskens (2003) and de Groote (2001).
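The two scopings can be exhibited concretely. A minimal sketch over a finite domain (the predicate found and the two-element domain are assumptions chosen only to make the readings come apart):

-- Two quantifiers over a two-place predicate: the two scopings.
data Entity = A | B deriving (Eq, Enum, Bounded, Show)

type Quant = (Entity -> Bool) -> Bool
type Pred2 = Entity -> Entity -> Bool

domain :: [Entity]
domain = [minBound .. maxBound]

every, some :: Quant
every p = and (map p domain)
some  p = or  (map p domain)

-- Subject quantifier wide: Q1 x. Q2 y. R x y
subjWide :: Quant -> Quant -> Pred2 -> Bool
subjWide q1 q2 r = q1 (\x -> q2 (\y -> r x y))

-- Object quantifier wide: Q2 y. Q1 x. R x y
objWide :: Quant -> Quant -> Pred2 -> Bool
objWide q1 q2 r = q2 (\y -> q1 (\x -> r x y))

found :: Pred2
found x y = x == y    -- each entity found only itself

main :: IO ()
main = do
  print (subjWide some every found)  -- False: no single x found every y
  print (objWide  some every found)  -- True: every y was found by some x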

Much has happened since Montague’s era. Some of the highlights include the application of Montague’s basic perspective to the study of generalized coordination (mentioned above), the investigation of branching quantifiers and plural constructions, the development of theories of discourse representation and dynamic forms of Montague Grammar, and new function-based theories of binding and anaphora. Moreover, other forms of type theory have been investigated as models for linguistic phenomena (see, for example, Chierchia, Partee, and Turner, 1989, and Ranta, 1994). In all of these cases, the various forms of the λ-calculus have played a fundamental role.

See also: Boole and Algebraic Semantics; Categorial Grammar, Semantics in; Compositionality; Discourse Representation Theory; Discourse Semantics; Dynamic Semantics; Formal Semantics; Game-theoretical Semantics; Interpreted Logical Forms; Monotonicity and Generalized Quantifiers; Montague Semantics; Natural Language Understanding, Automatic; Possible Worlds; Propositional and Predicate Logic; Quantifiers; Situation Semantics.

Bibliography

Barendregt H (1984). The lambda calculus: its syntax and semantics. Studies in Logic and the Foundations of Mathematics 103. Amsterdam: North-Holland.
Barwise J & Cooper R (1981). ‘Generalized quantifiers and natural language.’ Linguistics and Philosophy 4, 159–219.
Carpenter B (1997). Type-logical semantics. Cambridge, MA: The MIT Press.

Chierchia G, Partee B H & Turner R (eds.) (1989). Properties, types and meaning. Studies in Linguistics and Philosophy 38–39. Dordrecht: Kluwer Academic Publishers.
Church A (1940). ‘A formulation of the simple theory of types.’ Journal of Symbolic Logic 5, 56–68.
Church A (1941). The calculi of lambda-conversion. Annals of Mathematics Studies 6. Princeton: Princeton University Press.
Church A (1956). Introduction to mathematical logic, vol. 1. Princeton Mathematical Series 17. Princeton: Princeton University Press.
Curry H B (1977). Foundations of mathematical logic. New York: Dover Publications.
Curry H B & Feys R (1958). Combinatory logic. Amsterdam: North-Holland.
de Bruijn N G (1972). ‘Lambda calculus with nameless dummies: a tool for automatic formula manipulation.’ Indagationes Mathematicae 34, 381–392.
de Groote P (2001). ‘Towards Abstract Categorial Grammars.’ In Association for Computational Linguistics, 39th annual meeting and 10th conference of the European chapter, proceedings of the conference. Toulouse, France. 148–155.
Dowty D (1988). ‘Type raising, functional composition, and non-constituent conjunction.’ In Oehrle R T, Bach E & Wheeler D W (eds.) Categorial grammars and natural language structure. Studies in Linguistics and Philosophy 32. Dordrecht: D. Reidel. 153–197.
Dowty D R, Wall R E & Peters S (1981). Introduction to Montague Semantics. Synthese Language Library 11. Dordrecht: D. Reidel.
Frege G (1879). Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle: Nebert. English translation in van Heijenoort (ed.). 1–82.
Gamut L T F (1991). Logic, Language and Meaning. Chicago: The University of Chicago Press.
Gazdar G (1980). ‘A cross-categorial semantics for coordination.’ Linguistics and Philosophy 3, 407–409.
Girard J Y, Lafont Y & Taylor P (1989). Proofs and types. Cambridge Tracts in Theoretical Computer Science 7. Cambridge: Cambridge University Press.
Hindley J R (1997). Basic simple type theory. Cambridge Tracts in Theoretical Computer Science 42. Cambridge: Cambridge University Press.
Hindley J R & Seldin J P (1986). Introduction to combinators and λ-calculus. London Mathematical Society Student Texts 1. Cambridge: Cambridge University Press.
Jacobson P (1990). ‘Raising as function composition.’ Linguistics and Philosophy 13, 423–476.
Jespersen O (1909–1949/1961). A modern English grammar on historical principles (7 vols.). London: George Allen & Unwin; Copenhagen: Munksgaard.
Jespersen O (1937/1969). Analytic syntax. Transatlantic Series in Linguistics. New York: Holt, Rinehart and Winston.
Keenan E L & Stavi J (1986). ‘A semantic characterization of natural language quantifiers.’ Linguistics and Philosophy 9, 253–326.

Keenan E L & Faltz L M (1985). Boolean semantics for natural language. Synthese Language Library 23. Dordrecht: D. Reidel.
Krivine J L (1993). Lambda-calculus, types and models. Computers and Their Applications Series. Paris: Masson; London: Ellis Horwood.
Montague R (1974). Formal philosophy: selected papers of Richard Montague. Thomason R H (ed.). New Haven: Yale University Press.
Montague R (1974). ‘The proper treatment of quantification in ordinary English.’ In Thomason R H (ed.) Formal philosophy: selected papers of Richard Montague. New Haven: Yale University Press. 247–270.
Moortgat M (1997). ‘Categorial type logics.’ In van Benthem J & ter Meulen A (eds.) Handbook of logic and language. Amsterdam: Elsevier.
Morrill G & Carpenter B (1990). ‘Compositionality, implicational logic, and theories of grammar.’ Linguistics and Philosophy 13, 383–392.
Muskens R (2003). ‘Language, lambdas and logic.’ In Kruijff G-J M & Oehrle R T (eds.) Resource-sensitivity, binding and anaphora. Dordrecht: Kluwer. 23–54.
Oehrle R T (1987). ‘Boolean properties in the analysis of gapping.’ In Huck G J & Ojeda A E (eds.) Discontinuous constituency. Syntax and Semantics 20. Orlando: Academic Press. 203–240.
Oehrle R T (1994). ‘Term-labeled categorial type systems.’ Linguistics and Philosophy 17, 633–678.
Oehrle R T (1995). ‘Some 3-dimensional systems of labelled deduction.’ Bulletin of the Interest Group in Pure and Applied Logics 3.2–3.4, 29–448.
Oehrle R T (2003). ‘Resource sensitivity – a brief guide.’ In Kruijff G-J M & Oehrle R T (eds.) Resource-sensitivity, binding and anaphora. Studies in Linguistics and Philosophy. Dordrecht: Kluwer Academic Publishers. 231–255.
Oehrle R T (in press). ‘Multi-modal type-logical grammar.’ In Borsley R & Börjars K (eds.) Non-transformational syntax. Blackwell.
Partee B H (1987). ‘Noun phrase interpretation and type-shifting principles.’ In Groenendijk J et al. (eds.) Studies in discourse representation theory and the theory of generalized quantifiers. GRASS 8. Dordrecht: Foris. 115–143.

Partee B H & Rooth M (1983). ‘Generalized conjunction and type ambiguity.’ In Bäuerle R, Schwarze Ch & von Stechow A (eds.) Meaning, use and interpretation of language. Berlin: de Gruyter. 361–383.
Ranta A (1994). Type theoretical grammar. Oxford: Oxford University Press.
Rooth M & Partee B H (1982). ‘Conjunction, type ambiguity, and wide scope “or”.’ In Flickinger D, Macken M & Wiegand N (eds.) Proceedings of the first West Coast conference on formal linguistics. Stanford University, Department of Linguistics. 353–362.
Schönfinkel M (1924). ‘Über die Bausteine der mathematischen Logik.’ Mathematische Annalen 92, 305–316. English translation in van Heijenoort (ed.). 355–366.
Steedman M (1985). ‘Dependency and coordination in the grammar of Dutch and English.’ Language 61, 523–568.
Steedman M (1990). ‘Gapping as constituent coordination.’ Linguistics and Philosophy 13, 207–263.
Steedman M (2000). The syntactic process. Language, Speech, and Communication Series. Cambridge, MA: The MIT Press.
Stoy J E (1981). Denotational semantics: the Scott–Strachey approach to programming language theory. The MIT Press Series in Computer Science. Cambridge, MA: MIT Press.
van Benthem J (1986). Essays in logical semantics. Dordrecht: D. Reidel.
van Benthem J (1995). Language in action: categories, lambdas, and dynamic logic. Cambridge, MA: MIT Press.
van Heijenoort J (ed.) (1967). From Frege to Gödel: a sourcebook in mathematical logic 1879–1931. Cambridge, MA: Harvard University Press.
Westerståhl D (1988). ‘Quantifiers in formal and natural languages.’ In Gabbay D & Guenthner F (eds.) Handbook of Philosophical Logic. Dordrecht: D. Reidel. 1–13.


P

Partitives

M Koptjevskaja-Tamm, Stockholm University, Stockholm, Sweden

© 2006 Elsevier Ltd. All rights reserved.

In traditional linguistics, primarily in the Indo-Europeanistic tradition, the term ‘partitive’ is normally associated with partitive (meanings/uses of) genitives, which include (a) reference to body parts and ‘organic’ parts of objects, e.g., the roof of the house, the lion’s head; (b) reference to a set from which a subset is selected by means of various nonverbal words, e.g., the best among the Trojans, three of the boys, a section of the barbarians; (c) quantification, e.g., an amphora of wine, dozens of soldiers; and (d) reference to ‘partial objects’ of certain verbs (such as to eat, to drink, etc.), normally alternating with accusatives, primarily in Classical Greek, Gothic and Old High German, Sanskrit, and Balto-Slavic. The idea of partiality, to which the term ‘partitive’ owes its name, grows more and more vague as we proceed along this list. The last two constructions do not in fact refer to a ‘part’ in any reasonable sense, since there is no well-defined ‘whole’ to which it could relate.

The ‘partitive case’ is, together with the nominative, accusative, and genitive, one of the four main grammatical cases in the Finnic languages. It continues the proto-Uralic separative (ablative case) but has more or less lost its original uses in Finnic. Its central functions overlap with those of the partitive genitives and include marking partial objects and subjects, and complements to nominal quantifiers and to numerals under certain syntactic conditions. The functions of the ‘partitive article’ in French (and in some other Romance varieties) also include marking partial objects and subjects.

Partitive and Pseudo-Partitive Nominal Constructions

as examples of ‘partitive nominal constructions’ (PCs) (corresponding to the second and third uses of partitive genitives listed in the previous section). On closer inspection, however, we see that only in a piece of the cake and a pile of Mary’s books are we really talking of a part of something rather than an amount of some substance, as we do in a cup of tea and a pile of books. This motivates the relatively recent term ‘pseudo-partitive,’ coined by Selkirk (1977). Both partitive and pseudo-partitive nominal constructions are noun phrases consisting of two nominals, one of which is a quantifier (cup, slice, pile), while the other nominal will be called ‘‘quantified’’. Although the same quantifiers may appear in both types of NPs, their role is different. Thus, PCs involve a presupposed set of items or a presupposed entity referred to by one of the nominals (the cake, Mary’s books), and the quantifier indicates a subset or a subpart which is selected from it. In a pseudo-partitive nominal construction (PPC), the same word merely quantifies over the kind of entity (tea, books) indicated by the other nominal. Swedish, along with many other languages, makes a clear distinction between the two constructions: the quantified in PCs is marked with the preposition av, whereas PPCs merely consist of two juxtaposed nominals. (1a) en kopp av de-t a:COM cup of the-NEUT.SING god-a te-t good-DEF tea-DEF.N.SING ‘a cup of the good tea’ (1b) en kopp te a:COM cup tea ‘a cup of tea’

Nominal quantifiers, or measures, create units to be counted for those entities that either do not come in ‘natural units’ (like mass nouns) or come in ‘different units’ (cf. six bunches of carrots and two rows of trees). This distinguishes them from numeral classifiers in such languages as Vietnamese and Japanese, which actualize the semantic boundaries of a given count noun by designating its natural unit. In practice there is no sharp border between the two.


Semantically, the class of quantifier nouns is quite heterogeneous and covers at least the following major semantic subtypes:

• Conventionalized measures: a liter of milk, a kilo of apples
• Abstract quantity nouns: a large amount of apples
• Containers: a cup of tea, a pail of apples
• Fractions/parts: a slice of bread, a quarter of an hour, a large section of students
• Quantums (for mass nouns): a lump of sugar, a drop of milk
• Collections (for count nouns): a group of students, a herd of sheep
• Forms (both for mass and count nouns): a pile of sand/bricks, a bouquet of roses.

Cross-Linguistic Variation and Geographic Distribution

The mixed nature of nominal quantifiers accounts for their double similarity, both with typical nouns and with typical quantifiers, e.g., numerals, and is to a high degree responsible for the cross-linguistic variation demonstrated by PCs and PPCs (for the details, both synchronic and diachronic, cf. Koptjevskaja-Tamm, 2001).

Partitive nominal constructions tend to be formed with an overt marker associated with the quantified, where overt markers are either inflectional (case markings) or analytical (adpositions). Such markers normally originate as markers of ‘direction FROM’/‘separation’ (e.g., the ablative case and the like) and/or as possessive markers. The two grammaticalization sources agree well with two different ‘stages’ in the part-whole relations relevant for PCs: the part either still belongs to the whole (the situation often encoded by possessive constructions) or is being separated from it. In the latter case, the development of PCs most probably involves reanalysis of sentences referring to physical separation of a part from an object, such as Give me two slices from the cake. Examples (2) (PCs with the ablative case marker) and (3) (PCs with possessive markers) from Turkish illustrate both options (Kornfilt, 1996).

(2a) Ahmet [pasta-dan iki dilim] ye-di
     Ahmet cake-ABL two slice eat-PAST
     ‘Ahmet ate two slices of the cake’


(2b) Ahmet bakkal-dan iki şişe şarap çal-dı
     Ahmet grocer-ABL two bottle wine steal-PAST
     ‘Ahmet stole two bottles of wine from the grocery store’

(3) Ahmet [şarab-ın yarı-sın-ı] iç-ti
    Ahmet wine-GEN half-3SING.POSS-ACC drink-PAST
    ‘Ahmet drank half of the wine’


For PPCs, the cross-linguistically dominating technique consists in merely juxtaposing the nominal quantifier and the quantified, as in the Swedish example (1b) above. Since PCs normally involve overt markers, in these cases there will be a clear contrast between PCs and PPCs. However, in many languages, PPCs also involve a construction marker for relating the nominal quantifier and its complement to each other – either the same as or different from the one used in PCs. Although languages tend to have one standard PC and one standard PPC, they may occasionally show other, more marginal patterns. These are often restricted to certain semantic subgroups of nominal quantifiers and/or special contexts of use, as the Turkish possessive-like PC in example (3). In Swedish, a group of students can be expressed by the juxtapositional construction en grupp studenter or by two constructions involving prepositions: en grupp av studenter ‘a group of students’ or en grupp med studenter ‘a group with students.’ Av (originally an ablative preposition) suggests that the PPC arises from constructions that relate an object to the material it is made of (cf. en klänning av dyrt siden ‘a dress of expensive silk’), whereas the comitative marker med comes from constructions relating an object, e.g., a container, to an entity that accompanies it (cf. en väska med nycklar ‘a bag with keys’).

The only systematic cross-linguistic study of the domain has been carried out on the languages of Europe (Koptjevskaja-Tamm, 2001), where the following geographic distribution of the PPC types is attested. Most Slavic, Baltic, marginally German and Icelandic, and a number of Northeast Caucasian (Daghestanian) languages, as well as Irish Gaelic and Scots Gaelic, use the genitive case-marking on the quantified nominal in PPCs, as in Russian bokal vina (glass wine.GEN), ‘a glass of wine’; the same pattern was predominant among the older Indo-European languages. Finnic uses instead the partitive (i.e., originally the separative) case, as in Finnish säkki perunoita (sack potatoes.PART), ‘a sack of potatoes.’ Modern Romance languages, English, and Icelandic (and marginally Danish, Swedish, and Norwegian) mark the quantified nominal with a preposition. Overt markers in PPCs originate from different sources, primarily by extension from partitive markers (which in turn come from possessives and ablatives), from markers of material relations, and from comitatives.


Given that separative constructions and ablative markers are frequent grammaticalization sources for constructions used for both possessive and material relations (e.g., cf. Heine and Kuteva, 2002), it is not always clear which of the several possible developmental scenarios a particular language has undergone. Both the French marker de and the English marker of exemplify such problems.

The juxtapositional PPC type occurs in all the European language families, especially in two clear areas: the southern and southeastern parts of Europe, where different language families meet, and in the Germanic languages. In a number of languages (Danish, Swedish, Norwegian and, marginally, Icelandic, German, Yiddish, Dutch, Bulgarian, Macedonian, Greek) the juxtapositional type is clearly new and came to replace the more archaic genitive construction. Although the details of this development still have to be worked out, the juxtapositional strategy in these cases is clearly the final output of grammaticalization. For the majority of the languages, however, juxtaposition in PPCs appears to be an old phenomenon – at least, there is no evidence to the contrary. Juxtaposition in these cases is thus something that has hardly undergone any grammaticalization at all.

One simple explanation for the preference of juxtaposition in PPCs is that these are modeled upon, or behave like, constructions with more typical quantifiers, such as numerals, for which juxtaposition of the quantifier and the quantified is the cross-linguistically unmarked option. At a deeper level of explanation, it might be suggested that juxtaposition of the quantifier and the quantified reflects ‘weak coreferentiality’ of their referents. To use cognitive grammar terminology, they both profile virtually the same thing, a ‘replicate mass’ whose quantity and type are denoted by the quantifier and by the quantified, respectively (Langacker, 1991: 83–84).

Headedness in Pseudo-Partitive Constructions

The structure of partitive, and especially pseudo-partitive, constructions has been debated by various syntactic theories, in particular the questions of headedness and constituency (Akmajian and Lehrer, 1976; Jackendoff, 1977; Selkirk, 1977; Löbel, 1986, 2000; Battye, 1991; Delsing, 1993; Vos, 1999; Kinn, 2001). The facts are often controversial. Pseudo-partitive nominal constructions with overt markers superficially look like typical asymmetrical head-dependent structures, primarily like (possessive) noun phrases.

Thus, the quantified is marked with an inflectional or adpositional marker typical of marking dependents (see examples (4a) and (4b)). In languages with case, it is the quantifier that receives the morphological case marking appropriate to the slot filled by the whole construction, i.e., it is the ‘morphosyntactic locus’ of the construction (Zwicky, 1985; see example (4c)). These marking facts point out the quantifier as the head and the quantified as the dependent in such constructions (examples (4a) to (4c) are from Russian).

(4a) bokal vin-a
     glass wine-GEN
     ‘a glass of wine’
(4b) bokal Petr-a
     glass Peter-GEN
     ‘Peter’s glass’
(4c) smešat’ s bokal-om vin-a
     mix.INF with glass-INSTR wine-GEN
     ‘to mix with a glass of wine’


Juxtapositional PPCs work differently. In some languages, illustrated by Modern Greek in (5), both the quantifier and the quantified agree in case.

(5a) [éna kiló kafés] kostízi eptá dolária
     [one.NEUT.NOM kilo.NOM coffee.NOM] cost.PRES.3SING seven dollars.ACC
     ‘One kilo of coffee costs seven dollars’
(5b) i aksía [enós kilú kafé] íne eptá dolária
     art.FEM.NOM value.NOM [one.N.GEN kilo.GEN coffee.GEN] be.PRES.3SING seven dollars.ACC
     ‘The price of one kilo of coffee is seven dollars’

A much more frequent option in languages with morphological case consists in treating the quantified as the morphological locus (see example (6) from Turkish). In still other languages, e.g., Swedish and Bulgarian, that lack morphological case, the question of the morphological locus in PPCs does not make much sense.

(6) Bir bardak süt-e bir kaşık bal ekle-n-ir
    one glass milk-DAT one spoon honey add-PASS-AOR
    ‘To one/a glass of milk is added one/a spoon of honey’

Thus, in juxtapositional PPCs the criterion of being the morphosyntactic locus often, but not always, points out the quantified as the head. Agreement facts are also controversial: in example (7) from Swedish, the predicate may agree with either the quantifier or the quantified. Not all languages allow both options, and even those that do show a complicated system of restrictions governing the choice between the two (example from Delsing, 1993: 202).

(7) En låda äpple-n ha-r blivit stulen/stulna
    a:COM box apple-PL have-PRES become.SUP stolen.SING.COM/stolen.PL
    ‘A box of apples has been stolen’

Occasional errors of the kind en uppfriskande kopp kaffe ‘a refreshing cup of coffee’ in Swedish, where the participle ‘refreshing’ formally modifies ‘a cup’ or ‘a cup of coffee’ but semantically belongs to ‘coffee,’ provide some evidence that the quantified is interpreted as the semantic head of the whole construction (cf. the discussion of semantic heads in Croft, 2001: 254–272). One possible interpretation of the controversial evidence is that the notions of head and dependent are perhaps not applicable to such constructions.

Relations to Other Phenomena

Numerals, particularly higher numerals, develop from nominal quantifiers. The morphosyntactic properties of numeral constructions often betray their kinship with nominal PCs and PPCs (Corbett, 1978; Greenberg, 1989). A particularly clear case is illustrated by the complicated system found in the Slavic, Baltic, and Finnic languages. Special marking of partial objects and subjects mentioned earlier (cf. art. 180) is derived from PCs and PPCs with deleted quantifiers (Koptjevskaja-Tamm and Wälchli, 2001: 464–465).

See also: Classifiers and Noun Classes; Diminutives and Augmentatives; Grammatical Meaning; Hyponymy and Hyperonymy; Lexicon/Dictionary: Computational Approaches; Mass Expressions; Meronymy; Number; Numerals; Plurality.


References

Akmajian A & Lehrer A (1976). ‘NP-like quantifiers and the problem of determining the head of an NP.’ Linguistic Analysis 2(4), 395–413.
Battye A (1991). ‘Partitive and pseudo-partitive revisited: reflections on the status of ‘de’ in French.’ Journal of French Language Studies 1, 21–43.

Corbett G (1978). ‘Universals in the syntax of cardinal numerals.’ Lingua 46, 355–368.
Croft W (2001). Radical construction grammar: syntactic theory in typological perspective. Oxford: Oxford University Press.
Delsing L-O (1993). ‘The internal structure of noun phrases in the Scandinavian languages.’ Ph.D. diss., University of Lund.
Greenberg J (1989). ‘The internal and external syntax of numeral expressions.’ Belgian Journal of Linguistics 4, 105–118.
Heine B & Kuteva T (2002). World lexicon of grammaticalization. Cambridge: Cambridge University Press.
Jackendoff R (1977). X′ syntax: a study of phrase structure. Cambridge, MA: The MIT Press.
Kinn T (2001). ‘Pseudopartitives in Norwegian.’ Ph.D. diss., University of Bergen.
Koptjevskaja-Tamm M (2001). ‘‘A piece of the cake’ and ‘a cup of tea’: partitive and pseudo-partitive nominal constructions in the Circum-Baltic languages.’ In Dahl Ö & Koptjevskaja-Tamm M (eds.) The Circum-Baltic languages: typology and contact, vol. 2. Amsterdam/Philadelphia: John Benjamins. –568.
Koptjevskaja-Tamm M & Wälchli B (2001). ‘The Circum-Baltic languages: an areal-typological approach.’ In Dahl Ö & Koptjevskaja-Tamm M (eds.) The Circum-Baltic languages: typology and contact, vol. 2. Amsterdam/Philadelphia: John Benjamins. 615–750.
Kornfilt J (1996). ‘Naked partitive phrases in Turkish.’ In Hoeksema J (ed.) Studies on the syntax and semantics of partitive and related constructions. Berlin/New York: Mouton de Gruyter. 107–143.
Langacker R W (1991). Foundations of cognitive grammar. Stanford: Stanford University Press.
Löbel E (1986). Apposition und Komposition in der Quantifizierung: syntaktische, semantische und morphologische Aspekte quantifizierender Nomina im Deutschen. Tübingen: Max Niemeyer.
Löbel E (2000). ‘Q as a functional category.’ In Bhatt C, Löbel E & Schmidt C (eds.) Syntactic phrase structure phenomena. Amsterdam/Philadelphia: J. Benjamins. 133–158.
Selkirk E (1977). ‘Some remarks on noun phrase structure.’ In Culicover P, Akmajian A & Wasow T (eds.) Formal syntax. New York: Academic Press. 283–316.
Vos R (1999). ‘A grammar of partitive constructions.’ Ph.D. diss., Tilburg University.
Zwicky A (1985). ‘Heads.’ Journal of Linguistics 21, 1–29.

Perfects, Resultatives, and Experientials

J Lindstedt, University of Helsinki, Helsinki, Finland

© 2006 Elsevier Ltd. All rights reserved.

Perfects, resultatives, and experientials constitute a family of cross-linguistically identifiable grammatical categories (grams) associated with the verb. They can be located in the borderline region between tense and aspect. Both ‘resultative’ and ‘experiential’ are also used as names for certain functions of the perfect, but there are languages in which they are distinct grammatical categories with a morphological marking of their own. For clarity, the terms ‘resultative proper’ and ‘experiential proper’ can be used in such cases.



Not all forms called ‘perfects’ in the traditional grammars of various languages can be subsumed under the cross-linguistic category of ‘perfect,’ but the English present perfect, as in “she has read this book,” for instance, does qualify as a typical instance of the perfect. Perfects express the relevance of a past situation from the present point of view – ‘current relevance’ or ‘continuing relevance’ (CR), for short. In Reichenbach’s (1966: 289–290) temporal logic, the point of reference in the perfect coincides not with the point of the event but with the point of speech. This is one way of capturing the intuition that “she has read this book” tells something about the present state of affairs, although it describes a past event. A negative characterization of the perfect follows from this property: it is not a tense used in connected narratives about past events. In Weinrich’s (1964) analysis, the perfect belongs to the ‘world discussed,’ not to the ‘world narrated.’ That is why the Latin perfect, the form that originally gave its name to the category, is not a perfect at all in the present typological meaning but, rather, a perfective past tense, as it is freely used in past narratives. The term ‘perfect’ (together with its derivative ‘perfective’) thus has considerable historical ballast, and the newer term ‘anterior’ is sometimes used in its stead (as in Bybee et al., 1994). However, in Creole linguistics, ‘anterior’ may refer to any relative past tense, irrespective of whether it signals CR or not.

Because the semantics of the perfect is not so easy to define as that of the ‘past tense,’ for instance, not all linguists consider it to be a cross-linguistic category at all; in their opinion, the perfects in different languages are only linked by their common name, reflecting the vicissitudes of the scholarly history of traditional grammar. It was Dahl (1985: 129–153) who first showed that a cross-linguistic category of perfect can be identified empirically, without a preconceived definition of its semantics: the perfects of various languages, different in their peripheral uses, center around certain prototypical uses in a nonrandom fashion. On the basis of Dahl’s results, a questionnaire was developed that operationalizes the definition of this gram (Lindstedt et al., 2000; Lindstedt, 2000): a language possesses a perfect if it has a gram, associated with the verb, that is used in the translation equivalents of most of the first seven examples, illustrating different kinds of CR of past situations, but is not used in the following four examples in the questionnaire, consisting of short narratives. The perfect thus defined expresses ‘present relevance’ and is also called the ‘present perfect’ because in various languages it has formal counterparts on other temporal levels: these are the ‘past perfects’ (or ‘pluperfects’), ‘future perfects’ (or ‘futura exacta’), and even ‘past future perfects.’

(or ‘pluperfects’), ‘future perfects’ (or ‘futura exacta’), and even ‘past future perfects’. As illustrations we may take the English sentences ‘‘she had read this book,’’ ‘‘she will have read this book,’’ and ‘‘she would have read this book,’’ respectively (notice that the last of these also carries modal meanings). Although the term ‘perfect’ may refer to all of these grams, especially in studies considering the perfect to be an aspect rather than a tense, it should be noted that on nonpresent temporal levels, the notion of continuing relevance is not so crucial: These other perfects could simply be described as absolute-relative tenses that express temporal location of events relative not only to the present time but also to each other. The distinction between ‘resultatives’ and perfects was established in linguistics in the 1980s, largely owing to the important collective work edited by Nedjalkov (1988). Resultatives ‘‘signal that a state exists as a result of a past action’’ (Bybee et al., 1994: 54). The diagnostic difference between resultatives proper and perfects is that only resultatives combine with adverbs of unlimited duration, such as ‘still’ or ‘as before.’ In English, it is not possible to say ‘‘She has still gone’’ (if ‘still’ is used in its temporal meaning) – in contrast to the resultative construction ‘‘She is still gone’’ (see also Lindstedt, 2000: 366–368). The CR meaning of the perfect is obviously a generalization of the resultative, and sometimes the term ‘resultative perfect’ is used to cover both meanings. Resultatives proper have also been called ‘stative perfects.’ ‘Experientials proper’ express that a certain type of event occurred at least once in the past (Dahl 1985: 139–144). Dahl and Hedin (2000) call this meaning a ‘‘type-focusing event reference’’ – as opposed to ‘‘token-focusing event reference,’’ pertaining to a particular occurrence. The Chinese past marker -guo is a well-known example of an experiential (Mangione and Li, 1993): (1) Ta¯ chı¯-guo tia´njı¯ she eat-EXP frog ‘She has eaten frog (sometimes).’

Another well-known example is the Japanese koto ga aru construction; the functions of the Chinese and Japanese markers are, however, by no means identical (Dahl, 1985: 141). When the CR meaning of the perfect is weakened, it may develop into what is called the ‘experiential perfect.’ In English, the CR perfect and the existential perfect are formally differentiated only in rare cases like the following (cf. Comrie, 1976: 58–59):

(2) Mary has gone to Paris.
(3) Mary has been to Paris.


In the CR perfect of (2), the event of Mary’s having gone to Paris may be relevant to the present state of affairs in various ways, but typically there is at least the implicature that Mary is now absent. The experiential perfect of (3) only expresses that this type of event occurred at least once in the past; a particular event token would be referred to with “Mary went to Paris.” The semantic connection between CR and experientiality is seen with an animate agent if “certain qualities or knowledge are attributable to the agent due to past experiences” (Bybee et al., 1994: 62), but the definition of the experiential proper and the experiential perfect covers inanimate agents as well, which is why some scholars prefer the term ‘existential perfect,’ indicating an existential quantification over past points of time. Experientials proper and experiential perfects are typical of interrogative and negated sentences, but not exclusively. Their meaning is incompatible with specific time adverbials, and sometimes this restriction holds true of other kinds of perfects as well; thus, sentence (4) is ungrammatical in English:

(4) *I have woken up at 4 o’clock this morning.

However, a perfect would be possible – though not the only alternative – in Finnish and Bulgarian, for instance. This is because there exists a possible CR reading – “I woke up so early that I am now tired.” According to Dahl (1985: 137–138), Swedish occupies an intermediate position: a specific time adverbial can combine with the perfect if it is part of the information focus. The degree of incompatibility of specific time adverbials with the perfect in a particular language shows to what extent it has become a predominantly experiential form, or a kind of past indefinite tense.

Other kinds of perfect meanings mentioned in the literature include the ‘hot news perfect’ (McCawley, 1971), as in:

(5) Mary has had her baby: it’s a girl!

and the ‘perfect continuing,’ also known as the ‘perfect of persistent situation,’ as in:

the main verb (cf. Maslov 1984: 224–248), main verb þ participle meaning ‘already,’ and constructions involving verbs like ‘finish’ or ‘cast aside.’ The first two, common in European languages, are originally resultatives at the early stages of their grammaticalization; the two latter sources can be called ‘completive’ (Bybee et al., 1994: 57–61). A perfect deriving from a possessive construction may involve an auxiliary meaning ‘to have’; if this is the case, it can be called a ‘have perfect’ or, using a Latin name, a ‘habeo perfect.’ Because transitive verbs with the meaning ‘to have’ are rare outside European languages, the have perfect is a typically European areal phenomenon. A copula-based perfect is a ‘be perfect’, or a ‘sum perfect.’ Typologically, the perfect is a gram type that is frequent, that is, likely to appear in different languages; but unstable, as it often tends to be changed into something else – often a general past tense (as in most Slavic languages; Tommola, 2000) or a perfective past tense (as in many Romance languages and dialects; Squartini and Bertinetto, 2000). In a single language there can be two or three perfect-like grams that are at different stages of their grammaticalization path (cf. Graves [2000] for Macedonian). In some languages the perfect has developed evidential functions – or has even become a predominantly evidential gram, as is the case in a large area stretching from the west of the Black Sea to Central Asia (Haarmann, 1970; Dahl, 1985: 149–153; Friedman, 1986), though the perfect is not the sole diachronic source of evidentials. The evidential function most typical of perfects and resultatives is inferentiality. This is the case with the Scandinavian perfect, for instance (Haugen, 1972; Kinnander, 1973; cf. also Weinrich, 1964: 84–86 for German). Inferentiality is resultativity the other way round, as it were – from the results we infer that an event must have occurred. See also: Aspect and Aktionsart; Causatives; Event-Based Semantics; Evidentiality; Grammatical Meaning; Implicature; Tense.

(6) Mary has been waiting for him for an hour.

These two are minor types only; in many languages, the present would be the tense used in (6). As for their morphological marking, perfects, resultatives, and experientials are typically periphrastic (analytic). One important exception is the old IndoEuropean perfect (as attested in Ancient Greek and Old Indic), which is an inflectional (synthetic) form. Bybee and Dahl (1989: 67–68) list four typical diachronic sources of the perfect in the languages of the world: copula þ past participle of the main verb, possessive constructions involving a past participle of

Bibliography

Bybee J & Dahl Ö (1989). ‘The creation of tense and aspect systems in the languages of the world.’ Studies in Language 13, 51–103.
Bybee J, Perkins R & Pagliuca W (1994). The evolution of grammar: tense, aspect, and modality in the languages of the world. Chicago: University of Chicago Press.
Comrie B (1976). Aspect. Cambridge: Cambridge University Press.
Comrie B (1985). Tense. Cambridge: Cambridge University Press.

Dahl Ö (1985). Tense and aspect systems. Oxford: Basil Blackwell.
Dahl Ö (ed.) (2000). Tense and aspect in the languages of Europe. Empirical Approaches to Language Typology, EUROTYP 20–26. Berlin: Mouton de Gruyter.
Dahl Ö & Hedin E (2000). ‘Current relevance and event reference.’ In Dahl Ö (ed.). 385–401.
Friedman V A (1986). ‘Evidentiality in the Balkans: Bulgarian, Macedonian, and Albanian.’ In Chafe W & Nichols J (eds.) Evidentiality: the linguistic coding of epistemology. Advances in Discourse Processes, 20. Norwood, NJ: Ablex. 168–187.
Graves N (2000). ‘Macedonian – a language with three perfects?’ In Dahl Ö (ed.). 479–494.
Haarmann H (1970). Die indirekte Erlebnisform als grammatische Kategorie: eine eurasische Isoglosse. Veröffentlichungen der Societas Uralo-Altaica, 2. Wiesbaden: Otto Harrassowitz.
Haugen E (1972). ‘The inferential perfect in Scandinavian: a problem for contrastive linguistics.’ The Canadian Journal of Linguistics 17, 132–139.
Kinnander B (1973). ‘Perfektum i ‘sekundär’ användning.’ Nysvenska studier 53, 127–172.
Lindstedt J (2000). ‘The perfect – aspectual, temporal and evidential.’ In Dahl Ö (ed.). 365–383.
Lindstedt J et al. (2000). ‘The perfect questionnaire.’ In Dahl Ö (ed.). 800–809.

McCawley J D (1971). ‘Tense and time reference in English.’ In Fillmore C & Langendoen T (eds.) Studies in linguistic semantics. New York: Holt, Rinehart & Winston. 96–113.
McCoard R W (1978). The English perfect: tense-choice and pragmatic inferences. Amsterdam: North-Holland.
Mangione L & Li D (1993). ‘A compositional analysis of -guo and -le.’ Journal of Chinese Linguistics 21(1), 65–122.
Maslov Ju S (1984). Očerki po aspektologii. Leningrad: Izdatel’stvo Leningradskogo universiteta.
Maslov Ju S (1988). ‘Resultative, perfect, and aspect.’ In Nedjalkov V P (ed.). 63–85.
Nedjalkov V P (ed.) (1988). Typology of resultative constructions. Typological Studies in Language, 12. Amsterdam: John Benjamins.
Nedjalkov V P & Jaxontov S J (1988). ‘The typology of resultative constructions.’ In Nedjalkov V P (ed.). 3–62.
Reichenbach H (1966). Elements of symbolic logic. New York: The Free Press; London: Collier-Macmillan.
Squartini M & Bertinetto P M (2000). ‘The simple and compound past in Romance languages.’ In Dahl Ö (ed.). 403–439.
Tommola H (2000). ‘On the perfect in North Slavic.’ In Dahl Ö (ed.). 441–478.
Weinrich H (1964). Tempus: besprochene und erzählte Welt. Stuttgart: W. Kohlhammer.

Performative Clauses

K Allan, Monash University, Victoria, Australia


© 2006 Elsevier Ltd. All rights reserved.


The constative utterance, under the name so dear to philosophers, of statement, has the property of being true or false. The performative utterance, by contrast, can never be either: it has its own special job, it is used to perform an action. To issue such an utterance is to perform the action – an action, perhaps, which one scarcely could perform, at least with so much precision, in any other way. Here are some examples:

I name this ship ‘Liberté’.
I apologise.
I welcome you.
I advise you to do it.
(Austin, 1963: 22)

According to Austin, by uttering such clauses under the right conditions, the speaker performs, respectively, the acts of naming, apologizing, welcoming, and advising. There are five necessary conditions (NCs) and one sufficient condition (SC) on using performatives felicitously.

Necessary and Sufficient Conditions

Necessary Condition 1

NC1: An explicit performative clause complies with the normal grammatical rules of the language and contains a verb that names the illocutionary point of the utterance.

(1) I promise to call Jo tomorrow
(2) I’ll call Jo tomorrow

In (1), the speaker uses an explicit performative clause (in boldface) to make a promise. The speaker could also have made the promise by uttering (2), in which the promise is not explicitly spelled out in the semantics of the verb but is inferred by means demonstrated by Searle (1975), Bach and Harnish (1979), and Allan (1986).


Here is a short list of performative verb listemes (there are many more): abjure, abolish, accept, acknowledge, acquit, admonish, affirm, agree to, announce, answer, ascribe, ask, assent, assert, assess, assume, baptize, beg, bet, bid, call upon, caution, charge, christen, claim, classify, command, commiserate, compliment, concur, congratulate, conjecture, convict, counsel, declare, declare out, delegate, demand, demur, deny, describe, diagnose, disagree, dispute, donate, dub, excuse, exempt, fire, forbid, give notice, grant, guarantee, guess, hire, hypothesize, identify, implore, inform, instruct, license, notify, offer, order, pardon, plead, pray, predict, prohibit, proscribe, query, question, rank, recommend, refuse, reject, renounce, report, require, rescind, resign, sanction, say, state, submit, suggest, summon, suppose, swear, tell, testify, thank, urge, volunteer, vouch for, withdraw.

Necessary Condition 2

NC2: The performative verb must be in the present tense, because the illocutionary act is defined on the moment of utterance. Contrast the performative in (3) with the same listeme used nonperformatively in (4).

(3) I promise to take Max to a movie tomorrow
(4) I promised to take Max to a movie tomorrow

In saying I promise in (3), the speaker makes a promise; but the words I promised in (4) do not constitute the making of a promise; instead, they report that a promise was made.

Necessary Condition 3

NC3: (In English) a performative may occur in either the simple or progressive aspect. A performative verb normally occurs in the simple aspect, perhaps for the same reason that the simple aspect is normal in on-the-spot reporting of football matches, baseball games, and so on. However, there are occasions when a performative may occur in the progressive aspect, as in (5) and (7).

(5) I am requesting you (for the umpteenth time) to tell me your decision
(6) I request you (for the umpteenth time) to tell me your decision

Example (5) has the illocutionary point of a request; the grounds for claiming it to be a statement about a request are no stronger than the grounds for claiming the same about (6).

(7) That horse has won its third race in a row, and I’m betting you $100 it’ll win on Saturday

Uttered in felicitous circumstances, (7) has the illocutionary point of a bet, so the hearer can justifiably reply You’re on! thereby taking up the bet and expecting the speaker to pay up when she or he loses, or vice versa.

Necessary Condition 4

NC4: A performative clause must be declarative and realis – real, actual, factual – that is, denote an actualization of the illocutionary act. An explicit performative clause cannot be interrogative, imperative, or subjunctive. None of (8)–(10) is performative.

(8) Shall I bet $50 on the cup?
(9) Get out of here!
(10) Should I recommend her for the job?

NC4 also places constraints on the modal auxiliaries that may occur in performative clauses.

(11) I will hereby promise to visit you next time I’m in town

In (11) the modal will is used in its root meaning ‘act on one’s will, desire, want, hence insist’ on carrying out the illocutionary act named in the performative verb. Example (11) denotes an ongoing act that can be glossed ‘I will with these words make the promise to visit you next time I am in town’; so, if Max utters (11) to his aged aunt but then does not visit her the next time he is in town, his aunt can justifiably chide him with breaking his promise: But you promised to visit! Contrast the performative promise of (11) with (12), in which the modal will is used in its epistemic ‘predict’ sense and is irrealis because it denotes an unactualized event, namely the future act of promising (to take place tomorrow).

(12) Tomorrow when I see her, I will promise to visit next time I’m in town

Sufficient Condition

SC: The legalistic-sounding adverb hereby, inserted into a performative clause, marks the verb as performative provided that hereby is used with the meaning ‘in uttering this performative.’ Note that hereby cannot legitimately be inserted between will and promise in (12) – as it was in (11) – which confirms that the clause is not performative. The pattern established by will holds generally for modal auxiliaries with performative verbs that actualize the illocutionary act. The modal must be used in


its root meaning, which is realis; compare the leavetaking in (13) with the warning in (14). Example (15) is ambiguous.

(13) I must hereby take my leave of you
(14) Trespassers should hereby be warned that they will be prosecuted
(15) I can hereby authorize you to act as our agent from this moment

The root meaning of can and could is linked to the adjective cunning and north British dialect canny: 'actor knows how and has the power and ability, to do act A.' In (15), if can means 'have the power to' and hereby means 'in uttering this performative,' then (15) effects an authorization ('I have the power by the utterance of these words to authorize you . . .'). However, if I can hereby means, say, 'using this fax from the head office makes it possible for me to,' then (15) is not an authorization but a statement about a possible authorization. Examples (16)–(18) are additional examples of nonperformative clauses with modals.

(16) I might promise to visit you next time I'm in town
(17) I might hereby authorize your release
(18) I could hereby sentence you to 10 years imprisonment

The modal might is never realis, and it is obvious that (16) states the possibility that the speaker will promise without actualizing a promise. The hereby that occurs in (17) necessarily has the sense 'using this' and refers to something in context other than the performative utterance (e.g., a confession from another party); thus, (17) is nonperformative. Similarly, (18) does not pass sentence; compare it with I hereby sentence you to 10 years imprisonment. In (18), could is epistemic and irrealis, and hereby once again means 'using this.'

Necessary Condition 5

NC5: The subject of the performative clause is conditioned by the fact that the speaker is the agent for either him- or herself or another person or institution, whichever takes responsibility for enforcing the illocution described by the performative verb. This influences the form of the actor noun phrase. In all the examples so far, the actor is I. However, consider the following examples.

(19) We, the undersigned, promise to pay the balance of the amount within 10 days.
(20) We hereby authorize you to pay on our behalf a sum not exceeding $500.

(21) You are hereby authorized to pay . . .
(22) Notice is hereby given that trespassers will be prosecuted.
(23) The court permits you to stand down.

Examples (19)–(20) have we; (21)–(22) are passive voice, and the authorization is made either on the speaker’s behalf or on behalf of someone else; there is a third-person actor in (23), in which an authorized person utters the performative on behalf of the court. The verb permits is performative because it is the issuing of this utterance that actually grants the permission.

Other Issues

Explicit performatives can be negative. Example (24) performs an act of not-promising; note the scope of the negative: An act of not-promising is different from an act of promising not to do something, as in (25).

(24) I don't promise to come to your party, but I'll try to make it.
(25) I promise not to come to your party.
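The scope contrast in (24)–(25) can be made explicit in a rough logical paraphrase (the notation is purely illustrative and is not part of the original analysis); let PROMISE stand for the act of promising, s for the speaker, and p for 'the speaker comes to the party':

(24) ≈ ¬PROMISE(s, p) – the negation takes scope over the performative verb
(25) ≈ PROMISE(s, ¬p) – the negation is confined to the propositional content

What is performed in (24) is a declining-to-promise; (25) is a full-fledged promise whose content happens to be negative.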

Austin (1963, 1975) insisted on a distinction between what he called constatives, which have truth values, and performatives, which, instead of truth values, have felicity conditions. In his opinion, (26) has no truth value but is felicitous if there is a cat such that the speaker has the ability and intention to put it out, and it is infelicitous – but not false – otherwise. This contrasts with (27), which is either true if the speaker has put the cat out or false if he or she has not.

(26) I promise to put the cat out.
(27) I've put the cat out.

The claim that performatives do not have truth values was challenged from the start (Cohen, 1964; Lewis, 1970; Bach, 1975), and Austin seems to have been misled by the fact that the truth value of a performative is less communicatively significant than its illocutionary point. It is often assumed (e.g., by Gazdar, 1981) that performative clauses have only one illocutionary force: The main verb expresses the illocutionary point directly. But an analysis of (28) makes this impossible; its primary illocution – like that of every performative clause – is declarative.

(28) I promise to go there tomorrow

The primary illocution is ‘S28 (the speaker of (28)) is saying that S28 promises to go there tomorrow.’ This is not the illocutionary point of (28), however. S28 is using this primary illocution as a vehicle for a further


illocution to be read off the performative verb, namely 'S28 reflexively intends the (primary) declarative to be a reason for H28 (the hearer of (28)) to believe that S28 undertakes and intends (i.e., S28 promises) to go there tomorrow.' There is no further inference to draw, so this is the illocutionary point of (28). The speaker has no choice but to make a promise indirectly by means of a declarative; the grammar of English determines the matter. What additional evidence is there that performatives are declaratives in primary illocution as well as form? First, there is the obvious similarity between (28) and (29).

(29) I promised to go there tomorrow

Unlike (28), which is in the present tense and has the illocutionary point of a promise, (29) is past tense (which violates the definitions of performative clauses) and has the illocutionary point of a statement (or report) about a promise made in the past. The primary illocution of (29) is 'S29 is saying that S29 promised to go there tomorrow.' This is not the only parallel with (28); H29 will interpret (29) (subconsciously, and not in so many words) as 'S29 reflexively intends the declarative to be a reason for H29 to believe that S29 did undertake and intend to go there tomorrow. There is no further inference to draw, so the illocutionary point of (29) is that S29 did undertake and intend to go there tomorrow.' Note that the undertaking in both (28) and (29) remains to be fulfilled. Although S29 is not actually making the promise in (29), as S28 is in (28), nevertheless, provided all normal cooperative conditions hold, S29 is as much obliged to fulfill the promise reported in (29) as S28 is in (28)! The presumption that the primary illocution of explicit performatives is that of a declarative permits a commonsense account of the similarity and difference between (28) and (29).

Second, there is a distinction between saying F and saying that F. The former reports locutions; the latter reports statements. Imperatives and interrogatives do not make statements, but declaratives do. Compare the sentences in Table 1.

Table 1. Imperatives and interrogatives with saying F versus saying that F

                      Imperative                 Interrogative
  utterance           go!                        what's your name?
  saying F            I said go                  I said what's your name?
  saying that F       *I said that go            *I said that what's your name?
  recast declarative  I said that you must go    I said that I want to know your name

In order to be reported by saying that, the propositional content of the imperatives and interrogatives needs to be recast as the declarative sentences in Table 1. This is not the case with a performative because its primary illocution is already that of a declarative. Compare the sentences in Table 2, for which no recast declarative sentences are needed.

Table 2. Performatives with saying F versus saying that F

                 Declarative                    Performative
  utterance      the beer's cold                I promise to go there tomorrow
  saying F       I said the beer's cold         I said I promise to go there tomorrow
  saying that F  I said that the beer's cold    I said that I promise to go there tomorrow

Third, there is a set of adverbials that modify primary illocutionary acts, for example, honestly, for the last time, seriously, frankly, once and for all, in the first place, and in conclusion. Consider (30):

(30) In the first place I admit to being wrong; and secondly I promise it will never happen again.

Example (30) means 'The first thing I have to say is that I admit to being wrong; and the second thing I have to say is that I promise it will never happen again.' It is clear that secondly denotes a second act of saying, not a second act of promising; from this we may further deduce that in the first place identifies a first act of saying, not a first act of admitting. The evidence strongly supports the view that explicit performatives have the primary illocution of declarative and that the performative verb names the illocutionary point.

See also: Intention and Semantics; Lexical Conditions;

Mood, Clause Types, and Illocutionary Force; Pragmatic Determinants of What Is Said; Pragmatics and Semantics; Prosody; Speech Act Verbs; Speech Acts; Speech Acts and AI Planning Theory; Speech Acts and Grammar.

Bibliography

Allan K (1986). Linguistic meaning (vols. 1–2). London: Routledge and Kegan Paul.
Austin J L (1963). 'Performative-constative.' In Caton C E (ed.) Philosophy and ordinary language. Urbana, IL: University of Illinois Press. 22–54. (Reprinted in Searle J R (ed.) (1971). The philosophy of language. London: Oxford University Press. 1–12.)
Austin J L (1975). How to do things with words (2nd edn.). Urmson J O & Sbisà M (eds.). Oxford: Oxford University Press.
Bach K (1975). 'Performatives are statements too.' Philosophical Studies 28, 229–236. (Reprinted, slightly amended, in Bach K & Harnish R M (eds.) (1979). Linguistic communication and speech acts. Cambridge, MA: MIT Press. 203–208.)
Bach K & Harnish R M (1979). Linguistic communication and speech acts. Cambridge, MA: MIT Press.
Ballmer T T & Brennenstuhl W (1981). Speech act classification: a study in the lexical analysis of English speech activity verbs. Berlin: Springer-Verlag.
Cohen L J (1964). 'Do illocutionary forces exist?' Philosophical Quarterly 14(55), 118–137. (Reprinted in Rosenberg J & Travis C (eds.) (1971). Readings in the philosophy of language. Englewood Cliffs, NJ: Prentice-Hall. 580–599.)
Gazdar G (1981). 'Speech act assignment.' In Joshi A, Webber B L & Sag I (eds.) Elements of discourse understanding. Cambridge: Cambridge University Press. 64–83.
Hare R M (1970). 'Meaning and speech acts.' Philosophical Review 79, 3–24. (Reprinted in Hare R M (ed.) (1971). Practical inferences. London: Macmillan. 74–93.)
Jackendoff R S (1972). Semantic interpretation in generative grammar. Cambridge, MA: MIT Press.
Katz J J (1977). Propositional structure and illocutionary force. New York: Thomas Crowell.
Lewis D (1970). 'General semantics.' Synthese 22, 18–67. (Reprinted in Davidson D & Harman G (eds.) (1972). Semantics of natural language. Dordrecht: Reidel. 169–218.)
Recanati F (1987). Meaning and force: the pragmatics of performative utterances. Cambridge, UK: Cambridge University Press.
Sadock J M (1974). Toward a linguistic theory of speech acts. New York: Academic Press.
Schreiber P A (1972). 'Style disjuncts and the performative analysis.' Linguistic Inquiry 3, 321–347.
Searle J R (1969). Speech acts. London: Cambridge University Press.
Searle J R (1975). 'Indirect speech acts.' In Cole P & Morgan J L (eds.) Syntax and semantics 3: Speech acts. New York: Academic Press. 59–82. (Reprinted in Searle J R (1979). Expression and meaning: studies in the theory of speech acts. Cambridge, UK: Cambridge University Press.)
Searle J R (1979). Expression and meaning: studies in the theory of speech acts. Cambridge, UK: Cambridge University Press.
Vendler Z (1972). Res cogitans. Ithaca, NY: Cornell University Press.
Wierzbicka A (1987). English speech act verbs: a semantic dictionary. Sydney, Australia: Academic Press.

Philosophical Theories of Meaning

R M Martin, Dalhousie University, Halifax, Nova Scotia, Canada
© 2006 Elsevier Ltd. All rights reserved.

The Direct Reference Theory

It is obvious that an important fact about language is that bits of it are systematically related to things in the world. 'Referential' theories of meaning hold that the meaning of an expression is a matter, somehow, of this connection. The most immediately plausible application of this theory is to the meaning of proper names: the name 'Benedict Spinoza' is connected to the philosopher, and this fact appears to exhaust the meaning of that name. The other names – 'Benedictus de Spinoza,' 'Baruch Spinoza,' and 'Benedict d'Espinosa' – mean the same because they are connected to the same person. But even in the case of proper names, problems arise. For example, consider proper names with nonexistent references: 'Santa Claus.' If the meaning of a proper name is constituted by nothing but its relationship to the bearer of that name, then it follows that 'Santa Claus' is meaningless; but this seems wrong, because we know perfectly well what 'Santa Claus' means, and we can use it perfectly well, meaningfully.

Another example would be the two proper names applied to the planet Venus by the Ancient Greeks, who were unaware that it was the same planet that appeared sometimes in the evening sky, when they called it ‘Hesperus’ and sometimes in the morning sky, when they called it ‘Phosphorus.’ Because these two names in fact refer to one and the same object, we should count them as having exactly the same meaning. It would appear to follow that someone who knew the meaning of both names would recognize that the meaning of one was exactly the same as the meaning of the other, and therefore would be willing to apply them identically. But the Greeks, when seeing Venus in the morning sky, would apply ‘Phosphorus’ to it, but refuse to apply ‘Hesperus.’ Does it follow that they did not fully understand the meanings of those terms? That is an implausible conclusion, since these are terms of the ancient Greek language: how could the most competent speakers of that language fail to understand the meanings of two terms in that language? It looks much more plausible to say that the fact that Hesperus and Phosphorus are identical is not a consequence of the meanings of those words. So meaning is apparently not exhausted by reference. (This example and this argument were originated by Frege.)


But here is a second sort of difficulty for the reference theory. Even if it could do a plausible job of explaining the meaning of proper names, it is not at all clear what it should do with other elements of language. Proper names, after all, make up only a small part of language, and an atypical part, insofar as meaning is concerned, at that; one does not find them in most dictionaries, for example. Consider whether this sort of approach to meaning might be extended to linguistic items other than proper names. It is a short step from there to more complex, less direct ways of referring, for example, ‘the Amsterdam-born author of the Ethics.’ If this definite description gets its meaning by its reference, then since it refers to Spinoza again, it must mean the same as those other names. But a problem immediately arises here, similar to the ‘Hesperus/Phosphorus’ worry. One might understand the meaning of ‘Benedict Spinoza’ perfectly, it seems, without knowing some facts about the philosopher, for example, that he was born in Amsterdam and wrote the Ethics; and, as a result, although one understood ‘the Amsterdam-born author of the Ethics’ he or she would not know that this referred to Spinoza. A similar problem arises with the ‘Santa Claus’ worry: ‘the present king of France’ is surely meaningful, although it is without reference (Russell’s famous example). Still other linguistic elements provide greater difficulty for a reference theory. How, for example, is the meaning of the predicate ‘is wise,’ occurring, for example, in ‘Spinoza is wise,’ to be explained in terms of reference? Particular wise objects exist, to be sure – Spinoza for one. But clearly it is not helpful to think that ‘is wise’ here gets its meaning merely by referring to Spinoza again – which would add nothing – or to some other wise person – which seems irrelevant. And what if that sentence were false (but meaningful), and Spinoza were not among the wise things: what would ‘is wise’ refer to then? More likely ‘is wise’ refers to a set of things – the wise individuals (Spinoza, Socrates, Bertrand Russell, etc.). But the sentence clearly tells us more than that Spinoza belongs to the group Spinoza, Socrates, Bertrand Russell, etc. It refers not to that group, it seems, but rather to the property that is necessary for inclusion in that group: wisdom. It is controversial whether we really need to populate our universe with strange objects such as ‘properties’ and ‘universals’; but, in any case, even if they do exist, it’s not clear that ordinary predicates refer to them. For example, ‘wisdom’ may be the name of a particular thing, referred to in the sentence, ‘Wisdom appeals to Francine,’ but it is much less clear that this thing is referred to in the sentence ‘Spinoza is wise.’ A similar difficulty is posed by

common nouns, e.g., ‘philosopher.’ It does not seem that we could explain the meaning of this element in the sentence ‘Spinoza is a philosopher’ by claiming reference to a particular philosopher, the class of philosophers, or philosopher-ness. Furthermore, reference has nothing to do with grammatical structure, which one would think is an important part of the meaning of any sentence. These two sentences, ‘Jane loves John’ and ‘John loves Jane,’ make the same references (to Jane, John, and loving, perhaps) but they surely mean something very different. A sentence conveys more than a series of references. It does not merely point at Jane, John, and the property of loving; in addition, it makes the claim that Jane loves John (or vice versa).

Meaning as Truth Conditions

Perhaps a more promising way to extend the reference theory for common nouns, predicates, and other linguistic elements is to think of them as functions. Consider the analogy with arithmetic: 5, 13, 9, and so on are the names of numbers (whatever they are), but x/3 = 9 is a function from numbers to a 'truth value.' With '27' as the argument, its value is TRUE. With '16' as the argument, its value is FALSE. Its meaning consists in the systematic way in which it pairs arguments with truth values. Now consider the systematic way 'x is wise' relates arguments to truth values. Substitute the proper name of certain things (any of the wise things) and the value is TRUE. Substitute the proper name of other things and the value is FALSE. The systematic way in which arguments and values are related in this case (it seems) exhausts the meaning of 'is wise.' Philosophers have proposed similar ways to deal with other linguistic elements. For example, adverbs might be regarded as functions taking a predicate as 'argument' and yielding a predicate as 'value.' This amendment in the spirit of the direct reference theory considerably extends its power and explains the function, basically in terms of reference, of various parts of language that do not by themselves refer. Partially because some of the functions in this approach have TRUE and FALSE as values, it was proposed that these truth values be considered the referents of sentences. (This move has seemed implausible to many, however: what are these things called truth values?) In the 1930s, Alfred Tarski proposed a definition of 'truth' that some philosophers thought would be the basis of a good theory of meaning. Tarski's proposal was that what would constitute a definition of TRUE for a language L would be a complete list of statements giving the truth conditions for each of the


sentences in L. So one of these statements defining truth-in-English would be 'Snow is white' is true in English if and only if snow is white. (This may appear ludicrously trivial, because the sentence whose truth conditions are being given, and the reference to the truth condition itself, are in the same language. Of course, if you did not know what made 'Snow is white' true, this statement would not tell you. But that is not a problem with Tarski's view in particular: no statement of a theory would be informative to somebody who didn't speak the language in which the theory was stated.) Now, when we know the truth-conditions for a sentence, then do we know its meaning? Once you know, for example, what it takes to make 'Snow is white' true, then, it seems, you know what that sentence means. And what it takes, after all, is that snow be white. Obviously what one learns when one learns the meaning of a language cannot be the truth conditions of each sentence in the language, one at a time, because there are an infinite number of sentences. What is learned must be the meanings of a finite number of elements of sentences, and a finite variety of structures they go into. In the Tarskian view, then, the semantic theory of a language consists of a large but finite number of elements (words, perhaps), together with composition rules for putting them together into sentences and information sufficient for deriving the truth conditions for each of a potentially infinite number of sentences. One might object that this elaborate theory could not be what people know when they know what their language means. But perhaps providing this is not the purpose of a semantic theory (or a theory of anything). Baseball players are adept at predicting the path of a ball but only rarely are familiar with Newtonian theory of falling bodies. The idea here is attractive. If you know what sort of world would make a sentence true, then it seems that that is all it would take for you to know what that sentence means. This idea (though not particularly through Tarski's influence) was the basis of the 'logical positivist' theories of meaning and of meaningfulness. Logical positivists enjoyed pointing out that Heidegger's famous assertion "Das Nichts nichtet" ('The nothing nothings') was associated with no particular ways the world might be that would make it either true or false, and concluded that this statement, along with many others in metaphysics (e.g., McTaggart's assertion that time is unreal), was meaningless. But it seemed that baby and bathwater alike were being flushed down the drain. Coupled with a rather narrow and ferocious empiricism, this criterion ruled out as meaningless a number of

assertions that were clearly acceptable. What empirical truth conditions are there now for statements about the past, or for assertions about invisible subatomic particles? But this may be a problem more about the logical positivists’ narrow empiricism than about their theory of meaning/meaningfulness. More germane here is the problem that many perfectly meaningful sentences have no truth conditions because they’re neither true nor false: ‘Please pass the salt,’ for example.
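To make the Tarskian picture concrete, here is a minimal fragment showing how a finite lexicon plus one composition rule yields truth conditions (the notation is a standard textbook reconstruction, not Tarski's own):

⟦Spinoza⟧ = Spinoza
⟦is wise⟧ = the function f such that f(x) = TRUE iff x is wise
⟦Spinoza is wise⟧ = ⟦is wise⟧(⟦Spinoza⟧), i.e., TRUE iff Spinoza is wise

Two lexical entries and function application suffice to derive the T-sentence for 'Spinoza is wise'; the same finite machinery extends to the potentially infinite set of sentences built from the fragment's vocabulary.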

Sense and Reference

Because of the Hesperus/Phosphorus problem mentioned above, Frege rejected the idea that the meaning of an expression is the thing it refers to. So Frege distinguished the thing to which a symbol referred (in his words, the Bedeutung, the 'referent' or 'nominatum') from what he counted as the meaning (the Sinn, usually translated as the 'sense') of the symbol, expressed by the symbol. The sense of a symbol, according to Frege, corresponded to a particular way the referent was presented. It might be called the 'way of thinking about' the referent. While his theory separated meaning from reference, nevertheless it can be considered a 'mediated reference theory': senses are ways a reference would be encountered, ways of getting to things from the words that refer to them. But it is the reference of included terms, not their sense, that determines the truth value of the sentence. Frege's approach led him to a problem with sentences such as these:

(1) Fred said, "Venus is bright tonight."
(2) Fred believes he's seeing Venus.

Both sentences include examples of 'opaque context,' a part of the sentence in which substitution of a co-referring term can change the truth value of the sentence. In each of these sentences, substituting 'the second planet from the sun' for 'Venus' may make a true sentence false, or a false one true. In (1), an example of 'direct discourse,' if Fred's very words did not include 'the second planet from the sun,' then that substitution can make a true sentence into a false one. That substitution in (2) may result in a false sentence if Fred believes that Venus is the third planet from the sun. Frege's solution to this problem is to stipulate that in direct discourse – word-for-word quotation – the expression quoted refers to 'itself,' rather than to its usual referent (in this case, Venus). And in belief contexts and some other opaque contexts, expressions refer to their 'senses,' not to their usual referents. But what, exactly, are these 'senses'? First, it is clear that Frege did not intend them to be the ideas anybody


associates with words. Frege's 'senses' are objective: real facts of the language whose conventions associate senses with its symbols. One may have any sort of association with a bit of language, but the conventions of the language specify only certain of them as meaning-related. (Therefore, Lewis Carroll's Humpty Dumpty does not succeed in meaning 'There's a nice knock-down argument for you' with 'There's glory for you,' even though he insists "When I use a word it means just what I choose it to mean – neither more nor less.") But why not admit that in addition to the public languages there can be 'private' ones with idiosyncratic senses? More will be said about this later. But second, neither can 'senses' be characteristics of the things referred to: for then, whatever is a sense of 'Hesperus' would be a sense of 'Phosphorus.' Furthermore, there appear to be symbols in a language that have sense but no reference: 'the present king of France' and 'Atlantis' are examples. Reference-less terms pose a problem for Frege. Should he consider them words with sense but no reference? If so, then how can they figure in sentences with a truth-value? (This is similar to the 'Santa Claus' problem.) A promising approach seems to be to say that the sense of a term 'T' consists of those characteristics judged to be true of things that are called 'T' by competent speakers of the language. But this immediately creates a problem with proper names. If ordinary proper names have senses – associated characteristics that are the way of presenting the individual named, associated conventionally with that name by the language – then there would be corresponding definitional (hence necessary) truths about individuals referred to by proper names. But this is problematic. The sense of a name applying to one individual cannot be the sense of the name of any other individual, because senses are the way terms pick out their designata. So the characteristics associated with a name would have to constitute that individual's 'individual essence' – characteristics uniquely and necessarily true of that individual. But many philosophers have doubted that there are any such characteristics. Even if we can find characteristics that uniquely designate an individual, Kripke (1972) influentially argued that these characteristics are never necessary. Suppose, for example, that 'Aristotle' carries the sense 'Ancient Greek philosopher, student of Plato, teacher of Alexander the Great.' It would follow that this determined the referent of 'Aristotle'; so if historians discovered after all that no student of Plato's ever taught Alexander the Great, then 'Aristotle' would turn out to be a bearer-less proper name, like 'Santa Claus.' But this is not how the proper name would work. Instead, we would just decide that Aristotle did not teach

Alexander after all. Kripke argues that all sentences predicating something of a proper-named individual are contingent, so proper names do not have senses. But, of course, they are meaningful bits of language. This problem may apply even more broadly than merely to proper names. Consider the meaning of the term ‘water.’ Back before anyone knew the real properties of what made water water – i.e., its chemical constitution – competent speakers applied the term to any colorless, odorless liquid. But they were sometimes wrong, because the characteristics then used to distinguish proper and improper applications of the term, although they happened to pick out water on the whole, were not the genuinely necessary and sufficient conditions for something to be water at all. In those days, then, the real sense of the word was totally unknown and unused by any competent speaker of the language. Quine argued that Frege’s senses lack what is necessary for well-behaved theoretical objects. We have no idea, for example, of their identity conditions: is the sense of this word the same as the sense of that? More about Quine will be discussed in the final section of this article.

The Idea Theory

The theories discussed so far consider what linguistic elements mean, but other classical theories have concentrated on what people mean by bits of language. Words, Locke argued, are used as 'sensible marks' of ideas; the idea corresponding to a word is its meaning. This has a certain amount of intuitive plausibility, in that non-philosophers think of language as a way of communicating ideas that is successful when it reproduces the speaker's idea in the hearer's mind. The 'idea theory' of meaning received its fullest expression in conjunction with British Empiricist epistemology. For the classical empiricists, our ideas were copies of sense-impressions – presumably similar to the sense experiences themselves, except dimmer. These mental representations served us as the elements of thought and provided the meanings for our words. However, problems with this theory are obvious. For one thing, not every such association is relevant to meaning. For example, the word 'chocolate' makes me picture the little shop in Belgium where I bought an astoundingly impressive chocolate bar. But although one might want to say, in a sort of loose way, that that's what 'chocolate' means to me, it doesn't seem to be a real part of the word's meaning. Someone else could be completely competent in using that word without any mental pictures of Belgium. Also, there are some meaningful terms that seem to be associated with no mental imagery, for example,


‘compromise.’ The problem of the meaning of ‘unicorn’ is solvable: images of a horse’s shape and an antelope’s horn can be mentally pasted together to provide a representation; but there are other problems. My image of my cat Tabitha might picture her facing right; but I’m to use this also to identify her as the bearer of that name when she’s facing left; so the mere association of a word with an image is not enough to give that word meaning. There must also be some procedure for using that image. Common nouns (‘dog’) could stand for any and all dogs, whereas the meaning of ‘dog’ was presumably a particular image of a particular dog. More of a theoretical mechanism is needed to explain why this image stands for Fido and Rover but not for Tabitha. And other sorts of words – logical words, prepositions, conjunctions, etc. – raise problems here too: how could sensory experiences be the source of their meaning? A more recent concern about the idea theory arose from the fact that the ideas that gave bits of language their meaning were private entities, whereas the meanings of a public language were presumably public. Clearly I would learn the word ‘cat’ by hearing you use the word in the presence of cats, but not in their absence; but according to the idea theory, I would have learned it correctly when my private image matches yours – something that is impossible for either of us to check. What we can check – identical identifications of objects as cats and non-cats – does not ensure identical private imagery and (according to the idea theory) has nothing directly to do with the meaning we invest ‘cat’ with anyway. Wittgenstein’s ‘private language argument,’ deployed against the idea theory, was considered devastating by many philosophers. Very briefly put, this argument is that the meaning of a public bit of language could not be given by a supposedly necessarily private item, such as a mental representation, because this would make impossible any public check – any real check at all – on whether anyone understood the meaning of a term; and without the possibility of a check, there was no distinction between getting the meaning right and getting it wrong.

Meaning as Use

Wittgenstein's hugely influential suggestion was that we think instead of sentences as "instruments whose senses are their employments" (1953: 421). Starting in the 1930s and 1940s, philosophers began thinking of the meaning of linguistic items as their potential for particular uses by speakers and attempting to isolate and describe a variety of things that people do with words: linguistic acts, accomplished through the use

of bits of language. One clear theoretical advantage of this approach over earlier ones was its treatment of a much wider variety of linguistic function. Whereas earlier approaches tended to concentrate on information giving, now philosophers added a panoply of other uses: asking questions, giving orders, expressing approval, and so on. This clearly represented a huge improvement on the earlier narrower views, which tried to understand all the elements of language as signs – representations – of something external or internal. Austin distinguished three kinds of things one does with language: (1) the 'locutionary act,' which is the utterance (or writing) of bits of a language; (2) the 'illocutionary act,' done by means of the locutionary act, for example, reporting, announcing, predicting, admitting, requesting, ordering, proposing, promising, congratulating, thanking; and (3) the 'perlocutionary act,' done by means of the illocutionary act, for example, bringing someone to learn x, persuading, frightening, amusing, getting someone to do x, embarrassing, boring, inspiring someone. What distinguishes illocutionary acts is that they are accomplished just as soon as the hearer hears and understands what the utterer utters. It is clear that the illocutionary act is the one of these three that is relevant to the meaning of the utterance. The performance of the act of merely uttering a sentence obviously has nothing to do with its meaning. Neither does whatever perlocutionary act is performed: you might bore someone by telling her about your trip to Cleveland or by reciting 75 verses of The Faerie Queene, but the fact that both of these acts may serve to bore the hearer does not show that they are similar in meaning. But similarity in meaning is demonstrated by the fact that two different locutionary acts serve to accomplish the same illocutionary act. For example, 'Do you have the time?' and 'What time is it, please?' perform the same illocutionary act (a polite request for the time) and are thus similar in meaning. However, this approach does not deny that what the other theories concentrated on is a significant part of language. In Austin's classification, one part of many locutionary acts is an act of referring; when I say, "Aristotle was the student of Plato," I'm probably performing the illocutionary act of informing you, but I'm doing that by means of the locutionary act of referring to Aristotle and Plato. And clearly many linguistic acts include 'propositional content': one reports, announces, predicts, admits, requests, orders, proposes, promises, and so on, 'that p,' so it seems that inside this theory we would need an account of the way any of these linguistic acts correspond to actual or possible states of the world.


A recent influential theory from Donald Davidson responds to these needs by combining, in effect, speech act theory with Tarskian semantics. According to Davidson’s proposal, the list of truth conditions for each assertion in a language provides an account of the language’s semantics – at least, of the propositional content of sentences in the language: what a statement, prediction, assertion, question, etc., states, predicts, asserts, asks, etc. This explains what is shared by ‘The light is turned off,’ ‘Turn off that light,’ ‘Is the light turned off?’ and so on. But secondly, according to Davidson, a theory of meaning needs a ‘mood indicator’ – an element of the sentence that indicates the use of that sentence – e.g., as a statement, request, or question.
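Davidson's division of labor can be pictured schematically (an illustrative reconstruction, not Davidson's own notation): the meaning of an utterance factors into a mood indicator plus a truth-conditionally interpreted core.

The light is turned off.   ≈ ⟨DECLARATIVE, the light is turned off⟩
Turn off that light!       ≈ ⟨IMPERATIVE, the light is turned off⟩
Is the light turned off?   ≈ ⟨INTERROGATIVE, the light is turned off⟩

The shared second coordinate is the propositional content supplied by the Tarskian truth-conditional component; the first coordinate registers the illocutionary use to which that content is put.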

Quine’s Skepticism Quine’s skepticism about meanings was among the most important 20th-century developments in the philosophy of language. Here is one way of approaching his position. Imagine a linguist trying to translate a tribe’s language. Suppose that the tribesmen said ‘‘gavagai’’ whenever a rabbit ran past. Does ‘gavagai’ mean ‘lo, a rabbit!’? The evidence might as well be taken to show that ‘gavagai’ asserts the presence of an undetached rabbit part, a temporal slice of a rabbit, or any one of a number of other alternatives, including even ‘Am I ever hungry!’ or ‘There goes a hedgehog!’ (if the tribesmen were interested in misleading you). Quine called this the ‘indeterminacy of translation.’ But, we might object, couldn’t further observation and experimentation decide which of these alternatives is the right interpretation? No, argued Quine, there are always alternatives consistent with any amount of observation. But, we might reply (and this is a more basic objection), what that shows is that a linguist never has absolutely perfect evidence for a unique translation. This is by now a familiar (Quinian) point about theory: theory is always to some extent undetermined by observation. In any science, one can dream up alternative theories to the preferred theory that are equally consistent with all the evidence to date. But in those cases, isn’t there a right answer – a real fact of the matter – which, unfortunately, we may never be in a perfect position to determine, because our evidence must always be equivocal to some extent? At least in the case of meaning, argued Quine, the answer is no, because for Quine, linguistic behavior is all there is to language, so there are no hidden facts about meaning to discover, with linguistic behavior as evidence. So meanings are not given by ideas in the head, Fregean senses, or anything else external to this behavior.

A similar sort of position was argued for more recently by Kripke (1982). Imagine that someone – Fred – has used a word 'W' to refer to various things, A, B, and C. Now he encounters D: is that referred to by 'W'? One wants to say: if D is like A, B, and C, then Fred should go on in the same way and apply 'W' to D. But Kripke argues that there is no fact about Fred's intentions or past behavior – no fact about what he means by 'W' – that would make it correct or incorrect for him to apply 'W' to D. Neither is there, in the external world, a real sufficient (or insufficient) similarity of D to A, B, and C that makes it correct (or incorrect). The only thing that would make that application correct or incorrect is what a community of speakers using that word would happen to agree on. But does anti-realism about meaning really follow from these considerations? Suppose that Quine and Kripke are right, and all there is to language is social behavior. But maybe this does not imply that meanings are unreal. When we consider an action as social behavior, after all, we do not think of it (as Quine, in effect, did) merely as bodily movements. There are facts about the social significance of the behavior, above and beyond these movements, that give the movement its social significance. Perhaps it is these facts that would make one linguistic theory rather than another correct – that determine the meaning of the noises made by Quine's natives, and whether Fred is following the linguistic rule when he applies 'W' to D. Language is a tool we know how to use, and the real meaning of our utterances is what we know when we know how to use that tool.

See also: Aristotle and Linguistics; Compositionality; Definite and Indefinite Descriptions; Direct Reference; Evolution of Semantics; Expression Meaning vs Utterance/Speaker Meaning; Ideational Theories of Meaning; Indeterminacy; Intention and Semantics; Logic and Language; Logical and Linguistic Notation; Meaning Postulates; Modal Logic; Mood, Clause Types, and Illocutionary Force; Nominalism; Proper Names: Philosophical Aspects; Reference and Meaning, Causal Theories; Reference: Philosophical Theories; Referential versus Attributive; Rigid Designation; Sense and Reference; Speech Acts; Thought and Language; Truth Conditional Semantics and Meaning; Use Theories of Meaning.

Bibliography

Alston W P (1964). Philosophy of language. Englewood Cliffs, NJ: Prentice-Hall.
Austin J L (1962). How to do things with words. Urmson J O & Sbisà M (eds.). Cambridge, MA: Harvard University Press.
Blackburn S (1984). Spreading the word: groundings in the philosophy of language. Oxford: Clarendon Press.
Davidson D (1967). 'Truth and meaning.' Synthese 17, 304–323.
Frege G (1892). 'On sense and reference.' In Geach P & Black M (eds.) Translations from the philosophical writings of Gottlob Frege. Oxford: Basil Blackwell.
Grice H P (1957). 'Meaning.' Philosophical Review 66, 377–388.
Kripke S (1972). 'Naming and necessity.' In Davidson D & Harman G (eds.) Semantics of natural language. Dordrecht: Reidel.
Kripke S (1982). Wittgenstein on rules and private language. Cambridge, MA: Harvard University Press.
Martin R M (1987). The meaning of language. Cambridge, MA: The MIT Press.
Mill J S (1872). A system of logic, book I (8th edn.). London: Longmans.
Quine W V O (1951). 'Two dogmas of empiricism.' In From a logical point of view. Cambridge, MA: Harvard University Press.
Quine W V O (1960). Word and object. Cambridge, MA: The MIT Press.
Russell B (1905). 'On denoting.' Mind 14, 479–493.
Searle J R (1969). Speech acts. Cambridge: Cambridge University Press.
Stainton R J (1996). Philosophical perspectives on language. Peterborough, ON: Broadview Press.
Wittgenstein L (1953). Philosophical investigations. Anscombe G E M (trans.). Oxford: Basil Blackwell.

Phrastic, Neustic, Tropic: Hare's Trichotomy

K Allan, Monash University, Victoria, Australia
© 2006 Elsevier Ltd. All rights reserved.

The terms 'phrastic,' 'neustic,' and 'tropic' were introduced to the theory of speech acts by philosopher Richard M. Hare. Hare (1949) had compared pairs like the imperative in (1) with the declarative in (2):

(1) Keep to the path.
(2) You will keep to the path.

He concluded that (1) and (2) have the same 'phrastic,' but a different 'neustic,' which he characterized as follows:

(3) [Keeping to the path by you]phrastic [please]neustic
(4) [Keeping to the path by you]phrastic [yes]neustic

It is tempting to symbolize neustic 'please' by '!' and 'yes' by '⊢', but Hare (1970) gives us reason to compare (2) with (5) and (6).

(5) You will keep to the path!
(6) Will you keep to the path!(?)

Although (5) has the declarative form of (2), it has what Hare calls the same 'subscription' (illocutionary point) as (1). Example (6) also has the same illocutionary point, but expressed in the interrogative form. Hare (1970) introduced a third operator, 'tropic,' to capture the mood of the utterance, here identified with the clause type (see Mood, Clause Types, and Illocutionary Force). We can now translate (1)–(2) and (5)–(6) as follows:

(7) [Keeping to the path by you]phrastic [!tropic] [please]neustic

(8) [Keeping to the path by you]phrastic [⊢tropic] [yes]neustic
(9) [Keeping to the path by you]phrastic [⊢tropic] [please]neustic
(10) [Keeping to the path by you]phrastic [?tropic] [please]neustic

Hare writes, "I shall retain the term 'phrastic' for the part of sentences which is governed by the tropic and is common to sentences with different tropics" (1970: 21); this is what Searle (1969) calls the 'propositional content' of the speech act. A "neustic has to be present or understood before a sentence can be used to make an assertion or perform any other speech act" (Hare, 1970: 22). Obviously, the inventory of neustics needs to be vastly increased beyond 'please' and 'yes' to include the extensive number of illocutionary points to which a speaker may subscribe. As we can see from (7)–(10), to share a phrastic and a tropic will not guarantee a common neustic; nor will the sharing of a phrastic and a neustic guarantee a common tropic. These three parts of a speech act are independent. Hare (1970) uses the distinction between neustic and tropic to explain the fact that (11) makes an assertion about what time it is, but no such assertion is made by the identical sentence as it occurs in the protasis (if-clause) in (12), nor in the complement clause in (13).

(11) It is ten o'clock.
(12) If it is ten o'clock, then Jane is in bed.
(13) Max says that it is ten o'clock.

In each of (11)–(13) the clause ‘it is ten o’clock’ has the same phrastic and the same tropic, but the neustic is a property of the whole speech act.
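The point can be displayed by extending the bracket notation of (7)–(10) (an illustrative reconstruction, not Hare's own layout):

(11′) [It is ten o'clock]phrastic [⊢tropic] [yes]neustic
(12′) If [it is ten o'clock]phrastic [⊢tropic], then Jane is in bed – the neustic attaches to the conditional as a whole
(13′) Max says that [it is ten o'clock]phrastic [⊢tropic] – the neustic attaches to the report as a whole

Only in (11′) does the speaker's subscription fall on the clause itself; in (12′) and (13′) the embedded clause supplies a phrastic and a tropic but no neustic of its own, which is why no assertion about the time is made.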

See also: Assertion; Grammatical Meaning; Interrogatives; Mood, Clause Types, and Illocutionary Force; Propositions; Speech Acts and Grammar; Speech Acts.

Bibliography

Hare R M (1949). 'Imperative sentences.' Mind 58. Reprinted in Hare (1971), 1–21.
Hare R M (1970). 'Meaning and speech acts.' Philosophical Review 79, 3–24. Reprinted in Hare (1971), 74–93.
Hare R M (1971). Practical inferences. London: Macmillan.
Searle J R (1969). Speech acts. Cambridge: Cambridge University Press.

Plurality

P Lasersohn, University of Illinois at Urbana-Champaign, Urbana, IL, USA
© 2006 Elsevier Ltd. All rights reserved.

Plural expressions may be intuitively characterized as those involving reference to multiple objects. Semantic theories differ, however, in how this intuition is worked out formally. The most popular approach is to treat plural expressions as referring to some sort of 'plural object,' or group. That is, alongside individual objects such as people, tables, chairs, etc., we assume there are groups of such objects, and that plural expressions refer to these groups in much the same way as singular expressions refer to individuals. Just as singular noun phrases denote and quantify over individuals, plural noun phrases denote and quantify over groups; just as singular predicates hold true or false of individuals, plural predicates hold true or false of groups. An alternative, advanced by Schein (1993) (building on earlier logical work by George Boolos) is to regard a plural term as denoting each of several individuals directly, rather than denoting the group containing these individuals. This gives the effect of treating denotation for plural terms as a relation rather than a function; it is the denoting relation itself, rather than the denoted object, which is plural. The primary advantage of this technique is that it allows a reasonable treatment of noun phrases such as the sets that do not contain themselves, which potentially give rise to Russell's paradox in more conventional treatments. Among analyses that do regard plural expressions as referring to groups, the main options are to identify groups with sets, or with mereological sums. The latter choice is favored especially by those who regard sets as abstract, mathematical objects existing outside of space and time. As Link (1998) puts it, "If my kids turn the living room into a mess I find it hard to believe that a set has been at work." However, not

all authors share the intuition that sets of concrete objects are themselves abstract, and in any case the issue seems more philosophical than linguistic (but see the discussion of conjoined noun phrases below). No matter which approach is adopted, a central problem in the semantics of plurality is determining the range and distribution of readings available to sentences containing plural expressions. Sentence (1a), for example, is intuitively interpreted as predicating the property of being numerous of our problems collectively; no individual problem can be numerous. Sentence (1b), in contrast, requires all (or nearly all) the individual children to be asleep; the predicate applies distributively. Sentence (1c) seems ambiguous between collective and distributive readings, meaning either that the T.A.s together earned exactly $20 000, or that they each did:

(1a) Our problems are numerous.
(1b) The children are asleep.
(1c) The T.A.s earned exactly $20 000.

Such examples raise several issues: Are plural expressions authentically ambiguous between collective and distributive readings, or are both interpretations covered under a single, very general meaning? If there is an authentic ambiguity, is it a simple two-way ambiguity between collective and distributive readings, or are there other possibilities? What is the locus of the ambiguity: in the noun phrase, the predicate, or both? In favor of the view that there is an authentic ambiguity, consider a situation in which there are two T.A.s, and each of them earned exactly $10 000. In this case, sentence (1c) is true. Sentence (2) is also true in this situation:

(2) The T.A.s earned exactly $10 000.

This suggests that there are two distinct figures, both of which are the exact amount the T.A.s earned: in one sense they earned $20 000, and in another sense they earned $10 000 – but this appeal to multiple senses amounts to a claim of ambiguity.
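The two readings can be given schematic logical forms in the spirit of the group-based approaches described above (the notation is illustrative: σx.TA(x) stands for the group of the T.A.s, and Π for the relation an individual stands in to a group containing it):

collective:   earn-exactly-$20 000(σx.TA(x))
distributive: ∀y[yΠσx.TA(x) → earn-exactly-$20 000(y)]

In the scenario with two T.A.s each earning $10 000, the collective form of (1c) and the distributive form of (2) are both true, which is why the two sentences can be true together; the distributivity operator D discussed below packages the universal quantification into an optional operator on the predicate.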


An ambiguity is also suggested by patterns of anaphora, as argued by Roberts (1991). Sentence (3a) allows the continuation in (3b) if (3a) is interpreted as predicating λx∃y[piano(y) & lift(x, y)] of the group of students as a whole, but not if this predicate is understood as applying to each student separately. (A third interpretation, in which there is some piano y such that the predicate lifted y applies to the individual students, also allows the continuation in (3b), but this does not affect the argument.)

(3a) The students lifted a piano.
(3b) It was heavy.

If sentence (3a) is unambiguous, with no formal differentiation between the collective and distributive interpretations, it is hard to see how we could capture this difference in anaphoric potential. If there is an ambiguity, one may ask whether the fully collective and fully distributive readings are the only ones available, or if instead there may be 'intermediate' readings. Intermediate readings appear to be called for in examples like (4):

(4) The shoes cost $75.

This sentence is most naturally interpreted as meaning that each pair of shoes costs $75, not that each individual shoe costs that much, or that all the shoes together cost that much. Gillon (1987) argues that sentences with plural subjects have as many readings as there are minimal covers of the set denoted by the subject noun phrase, in which a cover of a set A is a set C of nonempty subsets of A such that their union, ∪C, is equal to A, and a cover of A is minimal iff it has no subsets that are also covers of A; this idea is developed further by Schwarzschild (1996) and others. Under this proposal, the pragmatic context makes a particular cover salient, and the predicate is required to hold of each element of the cover. The fully distributive and fully collective readings reemerge as special cases. However, cover-based analyses face a challenge in dealing with examples like (1c): Suppose John, Mary, and Bill are the T.A.s, and each of them earned $10 000. In this case, the predicate earned exactly $20 000 holds of each cell of the cover {{John, Mary}, {John, Bill}}, but sentence (1c) is not intuitively true in this situation.
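Gillon's definitions can be stated compactly (standard set-theoretic notation, not Gillon's own wording):

C is a cover of A iff C is a set of nonempty subsets of A and ∪C = A
C is a minimal cover of A iff C is a cover of A and no proper subset of C is a cover of A

For A = {John, Mary, Bill}, the set {{John, Mary}, {John, Bill}} is a minimal cover: its union is A, and removing either cell leaves some member of A uncovered. It is exactly this cover that generates the unwanted reading of (1c) just noted.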

1995). A standard argument for this view comes from examples like (5): (5) The students met in the bar and had a beer.

The natural interpretation is that the students met collectively but had separate beers. This reading is easily obtained if we treat the subject noun phrase as unambiguously denoting the group of students as a whole, and treat the conjunct verb phrases as predicates of groups, with the distributive predicate had a beer holding of a group iff each of its individual members had a beer; the denotation of the whole, coordinate verb phrase may then be obtained by intersection. If we try to claim that the collective/distributive ambiguity is located in the subject noun phrase, however, it seems impossible to give a consistent answer as to which reading it takes in this example. A suitable ambiguity in the predicate may be obtained by positing an implicit operator on the predicate. Link (1998) and Roberts (1987) suggest an optional operator on the verb phrase, notated ‘‘D’’, and interpreted as lPlx9y[yPx!P(y)], where P is the relation an individual stands in to the larger groups of which it forms a part. This gives a simple two-way ambiguity, depending on whether the operator is present or not; intermediate readings may be obtained using a more complex operator which quantifies over elements of a cover (Schwarzschild, 1996). Either operator is easily generalized across types to give distributive readings for non-subject argument places (Lasersohn, 1998). It should be noted that the lexical semantics of some predicates prevent them from participating in the collective/distributive ambiguity: a verb like sleep, for example, cannot apply to a group without also applying to the individual members of the group. Adding a distributivity operator in this case is redundant, and does not result in a difference in meaning. Conversely, certain predicates apply only to groups: gather, for example. Such predicates do show collective/distributive ambiguities however, when applied to arguments denoting groups of groups: (6) The tribes gathered.

Example (6) may be used to mean that each tribe gathered separately, or that all the tribes gathered together. A good deal of work has been devoted to the relation between conjunction and plurality (Link, 1998; Hoeksema, 1983; Landman, 1989a; Lasersohn, 1995; Schwarzschild, 1996; Winter, 2001) because conjoined noun phrases sometimes admit collective readings, as in (7):

(7) John and Mary lifted the piano.


Such examples suggest that coordinate noun phrases may denote groups in much the same fashion as plural noun phrases. Such sentences also admit a distributive reading, which may be obtained either through the use of a distributivity operator as with other plural noun phrases, or by reducing the coordination to propositional conjunction, via a generalized conjunction operator or conjunction reduction transformation. Conjoined noun phrases have sometimes been used to argue that semantic theory must allow reference to higher-order groups (Hoeksema, 1983; Landman, 1989a). This is easily accomplished in set theory, since sets may contain other sets as members: {a, {b, c}} ≠ {{a, b}, c} ≠ {a, b, c}, but is less straightforward if groups are modeled as mereological sums.

(8a) Blücher and Wellington and Napoleon fought against each other at Waterloo.
(8b) The cards below seven and the cards from seven up are separated.

Example (8a), from Hoeksema (1983), may be parsed in either of two ways: [[Blücher and Wellington] and Napoleon] or [Blücher and [Wellington and Napoleon]]. It is intuitively true relative to the first parse but false relative to the second, suggesting that the two parses correspond to denotations such as {{b, w}, n} and {b, {w, n}}. Likewise example (8b), from Landman (1989a), is not equivalent to The cards below 10 and the cards from 10 up are separated. But if reference to higher-order groups is disallowed, the subject noun phrases of these two sentences would seem to refer to the same group, namely the group containing all the individual cards as members. The opposing view that noun phrases need never refer to higher-order groups has been defended in detail by Schwarzschild (1996), who points out that the pragmatic context may make salient a particular division of the denotation into subgroups even if that denotation is first-order; correct truth conditions may be obtained if the semantics is sensitive to this pragmatically supplied division. A number of additional issues arise in the semantics of plurality, which can only briefly be mentioned here: Certain adverbs, such as together or separately, seem to force a collective or distributive reading, but the exact mechanism by which they do this is a matter of some dispute (Lasersohn, 1995; Schwarzschild, 1994; Moltmann, 1997). Plural expressions share a number of characteristics with mass terms, suggesting a unified analysis (Link, 1998). Plural noun phrases affect the aspectual class of predicates with which they combine, suggesting a parallel in the domain of events to the structure of groups among individuals (Krifka, 1989) – a parallel also suggested by the phenomenon of verbal plurality or 'pluractionality'

(Lasersohn, 1995). Finally, the interpretation of bare plurals, or plural noun phrases with no overt determiner, and particularly the alternation between the existential, generic, and kind-level readings, illustrated in (9), has attracted enormous attention; but much of this work belongs more properly to the study of genericity and indefiniteness than to the semantics of plurality per se (Carlson, 1980; Carlson and Pelletier, 1995).

(9a) Raccoons are stealing my corn.
(9b) Raccoons are sneaky.
(9c) Raccoons are widespread.

See also: Generic Reference; Grammatical Meaning; Mass Expressions; Number; Numerals; Quantifiers; Vagueness.

Bibliography

Carlson G (1980). Reference to kinds in English. New York: Garland Press.
Carlson G & Pelletier F J (1995). The generic book. Chicago: University of Chicago Press.
Dowty D (1986). 'Collective predicates, distributive predicates, and all.' In Marshall F (ed.) ESCOL '86: Proceedings of the Third Eastern States Conference on Linguistics. Columbus: Ohio State University. 97–115.
Gillon B (1987). 'The readings of plural noun phrases in English.' Linguistics and Philosophy 10, 199–219.
Hamm F & Hinrichs E (1998). Plurality and quantification. Dordrecht: Kluwer Academic Publishers.
Hoeksema J (1983). 'Plurality and conjunction.' In ter Meulen A (ed.) Studies in model theoretic semantics. Dordrecht: Foris Publications. 63–83.
Krifka M (1989). 'Nominal reference, temporal constitution, and quantification in event semantics.' In Bartsch R, van Benthem J & van Emde Boas P (eds.) Semantics and contextual expression. Dordrecht: Foris Publications. 75–115.
Landman F (1989a). 'Groups, I.' Linguistics and Philosophy 12, 559–605.
Landman F (1989b). 'Groups, II.' Linguistics and Philosophy 12, 723–744.
Landman F (2000). Events and plurality: the Jerusalem lectures. Dordrecht: Kluwer Academic Publishers.
Lasersohn P (1995). Plurality, conjunction and events. Dordrecht: Kluwer Academic Publishers.
Lasersohn P (1998). 'Generalized distributivity operators.' Linguistics and Philosophy 21, 83–92.
Link G (1998). Algebraic semantics in language and philosophy. Stanford, California: CSLI Publications.
Moltmann F (1997). Parts and wholes in semantics. Oxford: Oxford University Press.
Roberts C (1991). Modal subordination, anaphora and distributivity. New York: Garland Press.
Scha R (1984). 'Distributive, collective and cumulative quantification.' In Groenendijk J, Janssen T & Stokhof M (eds.) Truth, interpretation and information. Dordrecht: Foris Publications. 131–158.
Schein B (1993). Plurals and events. Cambridge, Mass.: MIT Press.
Schwarzschild R (1994). 'Plurals, presuppositions, and the sources of distributivity.' Natural Language Semantics 2, 201–248.
Schwarzschild R (1996). Pluralities. Dordrecht: Kluwer Academic Publishers.
Winter Y (2001). Flexibility principles in boolean semantics. Cambridge, Mass.: MIT Press.

Polarity Items

J Hoeksema, University of Groningen, Groningen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Words or expressions that either require or shun the presence of a negative element in their context are referred to as negative and positive polarity items (NPIs and PPIs), respectively. Familiar examples of NPIs in English are any and yet, while some and already exemplify the category of PPIs, e.g.:

(1) The cat did not find any mice.
(2) *The cat found any mice.
(3) The cat found some mice.
(4) *The cat did not find some mice.

It should be noted that (4) is acceptable when it is a denial or metalinguistic negation (‘no, you’re wrong, the cat did not find some mice’), otherwise it is odd. Many languages, perhaps all, have NPIs and PPIs, and their distribution has been the topic of a rapidly growing literature since the seminal work of Klima (1964).

Negative Polarity Items

The core research questions regarding NPIs are (1) the proper delimitation of their distribution and (2) the underlying causes of this distribution. Regarding the former question, there has been an almost continuous debate as to whether the distribution of NPIs should be viewed in syntactic, semantic, or pragmatic terms. And of course any answer to the former question will have consequences for the latter as well. Foremost among the syntactic treatments of NPIs is that of Klima (1964), where it was noted that the environments in which polarity items find themselves at home are too diverse to treat them in terms of a dependency on (surface) negation. Instead, Klima proposed a treatment whereby the presence of a morphosyntactic feature [affective] acts as the trigger of a negative polarity item. Among the environments marked as [affective] are:

• the scope of negation (including so-called n-words, expressions such as never, nothing, nobody, nowhere, no, none, neither);
• complements to negative predicates such as unpleasant, unlikely, odd, impossible, lack, refrain from and many more;
• clauses introduced by without, as if, before;
• comparative clauses;
• questions;
• antecedent clauses in conditionals;
• restrictive relative clauses modifying universal and superlative noun phrases;
• degree complements of too;
• the scope of negative quantifiers and adverbs such as few/little, seldom, rarely, barely, hardly, only, etc.

No explanation was given for why these environments acted as hosts of polarity items. The feature [affective] was simply meant to ensure that they did. An important innovation due to Klima is the association of scope with the notion of c-command (or rather, ‘in-construction-with,’ in Klima’s own terminology). An expression is in the scope of negation if all nodes directly dominating the negative operator also dominate it. It was noticed early on that the same notion could also be used for anaphoric dependencies (cf., also Progovac, 1994 for a more recent account of the similarities between anaphoric dependencies and polarity phenomena). The equation of scope with c-command, while extremely influential, is not unproblematic, though (Hoeksema, 2000). While the work of Klima was already developing in the direction of generative semantics, using some amount of semantic decomposition to simplify the grammar, work by Seuren (1974) pushed this line of research much further, by aiming at the elimination of the feature [affective]. In Seuren’s proposal, all environments could be reduced to negative clauses in a properly abstract deep structure, e.g., few children ate any popcorn can be reduced to the deep structure of not many children ate any popcorn. A problem


with this line of reasoning is that not all negative clauses are actually acceptable hosts of polarity items. Thus, Linebarger (1980) has noted that not all children ate any popcorn is ungrammatical. Hence, the presence or absence of negation by itself is not sufficient to license polarity items. Another problem with a purely syntactic approach is that polarity items may be sensitive to pragmatic factors. For example, NPIs in questions may be more acceptable when the question is viewed as rhetorical, and not as a request for information; polarity items in conditionals may be better when the conditional is viewed as a threat, and not as a promise (Lakoff, 1969). Ladusaw (1979), building on earlier work by Fauconnier (1975), and using the formal apparatus of Montague grammar, proposed to eliminate the feature [affective] by offering a semantic account of the licensing of polarity items. In particular, he noted that many of the contexts in which polarity items are acceptable have the property of downward monotonicity or implication reversal. Normally, expressions may be replaced by more general ones salva veritate: Jones is a phonologist will entail Jones is a linguist, given that phonologists are a subspecies of linguist. Now for the negative counterpart of these sentences, the direction of the entailment is reversed: Jones is not a linguist entails Jones is not a phonologist, but not vice versa. In propositional logic, this reversal is known as modus tollens: (5)

p → q
∴ ¬q → ¬p
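As a quick illustration (mine, not the article's), the validity of the contraposition pattern in (5) can be confirmed by enumerating all four truth-value assignments:

implies = lambda a, b: (not a) or b  # material conditional

# (p -> q) -> (not q -> not p) holds under every valuation
print(all(implies(implies(p, q), implies(not q, not p))
          for p in (True, False) for q in (True, False)))  # True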

However, in natural language, this reversal is not restricted to negation but is found among a wide variety of negative expressions. For example, it is unlikely that Jones is a linguist entails it is unlikely that Jones is a phonologist, and not the other way around. Here the negative element is unlikely. Similarly, few people are linguists entails few people are phonologists. (Note that if we had used the positive quantifier many, rather than its negative counterpart few, the entailment would have been in the upward direction.) In formulaic notation, letting contexts denote functions, and using 'x < y' for the relation 'x is a hyponym of y,' we can define downward-entailing contexts as follows:

(6) Downward entailment
A function f is downward entailing if and only if for all x, y such that x < y: f(y) < f(x).
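Definition (6) can be tested mechanically on a toy extensional model. In the following sketch, hyponymy is modeled as the subset relation and entailment between sentence denotations as material implication; the domain, the individuals, and the extensions are invented for illustration only:

from itertools import combinations

DOMAIN = {'ann', 'bob', 'cem', 'dot'}

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def downward_entailing(f):
    # for all x, y with x a 'hyponym' (subset) of y: f(y) must entail f(x)
    return all(f(x) or not f(y)
               for x in subsets(DOMAIN) for y in subsets(DOMAIN) if x <= y)

jones = 'ann'
is_a = lambda p: jones in p          # 'Jones is a P': upward entailing context
is_not_a = lambda p: jones not in p  # 'Jones is not a P': entailment reversed

print(downward_entailing(is_a))      # False
print(downward_entailing(is_not_a))  # True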

Ladusaw’s work, like that of Seuren, constitutes a step forward compared to the original proposals of Klima, but it runs into some of the same problems, such as the blocking effect caused by intervening

universal quantifiers noted by Linebarger and the relevance of subtle pragmatic factors. Another problem for any unified theory is the diversity of polarity items, which is at odds with the idea of a single licensing condition. To remedy this situation, Zwarts (1981, 1995; see also Van der Wouden, 1997) proposed a hierarchy of licensing conditions, corresponding to various classes of negative polarity items.

(7) Zwarts's Hierarchies
Hierarchy of contexts: negation > antiadditive > downward entailing > nonveridical
Hierarchy of expressions: superstrong > strong > weak > nonreferential

Negation is the strongest trigger, and is defined, crosslinguistically, by the laws of Boolean algebra. Not many clear-cut examples of superstrong NPIs have been identified, but Dutch mals 'tender, soft' is a plausible candidate, since it is triggered by niet 'not,' but not, for instance, by n-words:

(8) zijn opmerkingen waren niet mals
his remarks were not tender
'his remarks were harsh'

(9) *Geen van zijn opmerkingen was mals
none of his remarks were tender

Antiadditive environments are a proper subset of the downward-entailing contexts, where the following condition holds (drawn from the De Morgan laws in Boolean algebra):

(10) Anti-additivity
A function f is anti-additive iff for all x, y in its domain, f(x ∨ y) = f(x) ∧ f(y)
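Again purely as an illustration (the toy model and all names are assumptions, not from the article), condition (10) can be checked for a genuinely antiadditive context ('no student ...') against one that is merely downward entailing ('at most two students ...'):

from itertools import combinations

STUDENTS = {'ann', 'bob', 'cem', 'dot'}
DOMAIN = STUDENTS | {'eva'}

def subsets(s):
    return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def anti_additive(f):
    # De Morgan-style condition (10): f(x or y) = f(x) and f(y)
    return all(f(x | y) == (f(x) and f(y))
               for x in subsets(DOMAIN) for y in subsets(DOMAIN))

no_student = lambda p: not (STUDENTS & p)        # scope of 'no student'
at_most_two = lambda p: len(STUDENTS & p) <= 2   # downward entailing only

print(anti_additive(no_student))   # True
print(anti_additive(at_most_two))  # False: a union can exceed the bound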

Foremost among the antiadditive operators in natural language are the n-words. Among the strong NPIs identified by Zwarts is the German auch nur. This expression is triggered by negation and n-words, but not by weaker triggers such as höchstens 10 'at most 10':

(11) kein Kind hat auch nur irgendetwas gesehen
no child has even anything seen
'no child has seen anything at all'

(12) *höchstens 10 Kinder haben auch nur irgendetwas gesehen
at most 10 children have even anything seen
'at most 10 children have seen anything at all'

In this respect, auch nur differs from weaker polarity items such as German brauchen ‘need’ which are acceptable in both contexts (but not in any context that is not downward entailing):

(13) kein Kind braucht sich zu schämen
no child needs REFL to shame
'no child need be ashamed'

(14) höchstens 10 Kinder brauchen da zu sein
at most 10 children need there to be
'at most 10 children need to be present'

(15) *diese 10 Kinder brauchen da zu sein.
these 10 children need there to be

Nonveridical contexts are defined as follows:

(16) Nonveridicality
A function f is nonveridical if and only if for all propositions p: f(p) does not entail p.

Clearly, negation is nonveridical, since not(p) never entails p. The same is true for downward-entailing predicates such as impossible, deny, unlikely, etc. However, some contexts that are not downward entailing are also nonveridical, such as the scope of intensional operators like perhaps. Perhaps John is a linguist does not entail perhaps John is a phonologist (note that there is no contradiction in the sentence John is definitely not a phonologist, but perhaps he is a linguist, nonetheless), but equally clearly, this sentence is nonveridical: it does not entail that John is a linguist. The same is true for the complements of such verbs as want, hope, etc., for disjunctions (from p or q we cannot infer p). Giannakidou (1998) has identified several indefinites in modern Greek which appear in nonveridical contexts only, such as kanenas:

(17) Thelo na mu agorasis kanena vivlio
want SUBJ me buy.3SG any book
'I want you to buy me a book'

Nonveridicality of environments implies lack of existential import. It is likely that expressions such as kanenas derive their distribution from the fact that they are markers of this lack of existential import.
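The notion in (16) can likewise be given a toy possible-worlds rendering; this is an illustrative assumption of mine, not Giannakidou's formalization. Propositions are sets of worlds, entailment is the subset relation, and a context is veridical iff its output always entails its input:

from itertools import combinations

WORLDS = frozenset(range(4))
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

def veridical(f):
    # f is veridical iff f(p) entails p (is a subset of p's worlds) for every p
    return all(f(p) <= p for p in PROPS)

assertion = lambda p: p                           # plain assertion
negation = lambda p: WORLDS - p                   # 'not p'
perhaps = lambda p: WORLDS if p else frozenset()  # crude 'perhaps p': true iff p is possible

print(veridical(assertion))  # True
print(veridical(negation))   # False: nonveridical
print(veridical(perhaps))    # False: nonveridical, though not downward entailing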

Positive Polarity Items

Positive polarity items are less well studied than their negative counterparts. It has been noted that they are sensitive to the presence of negation, but other negative environments appear to have no negative effect on the acceptability of PPIs (Horn, 1989; but see Van der Wouden, 1997 for a different view). For example, both not and few license the acceptability of the NPI any, but only not antilicenses its PPI counterpart some:

(18) I don't want any/*some cheese
(19) few of us had any/some money

Moreover, as was already noted above in connection with (4), the antilicensing force of negation

disappears when negation is used to deny a prior claim or when it is used metalinguistically (Horn, 1989):

(20) I don't want some cheese, I want all cheese

The most intriguing difference between NPIs and PPIs, however, lies in their difference with respect to double negation. As was first pointed out by Baker (1970), PPIs may appear in the context of negation, provided that negation is itself embedded in a larger negative context. Compare, for example, what happens when we embed the clause in (21) in the negative context I can't believe __:

(21) *it is not already 5 o'clock
(22) I can't believe it is not already 5 o'clock

NPIs, on the other hand, usually do not distinguish between negative and double negative contexts (Hoeksema, 1986):

(23) we're not in Kansas anymore
(24) I can't believe we're not in Kansas anymore

Further study of PPIs is needed to see if they are as diverse in their distribution as NPIs.

See also: Indefinite Pronouns; Monotonicity and Generalized Quantifiers; Negation; Scope and Binding.

Bibliography

Baker C L (1970). 'Double negatives.' Linguistic Inquiry 1, 169–186.
Fauconnier G (1975). 'Polarity and the scale principle.' Proceedings of the Chicago Linguistic Society 11, 188–199.
Giannakidou A (1998). Polarity sensitivity as (non)veridical dependency. Amsterdam/Philadelphia: John Benjamins.
Hoeksema J (1986). 'Monotonicity phenomena in natural language.' Linguistic Analysis 16, 25–40.
Hoeksema J (2000). 'Negative polarity items: triggering, scope and c-command.' In Horn L R & Kato Y (eds.) Negation and polarity: syntactic and semantic perspectives. Oxford: Oxford University Press. 115–146.
Horn L R (1989). A natural history of negation. Chicago: University of Chicago Press.
Klima E S (1964). 'Negation in English.' In Fodor J A & Katz J J (eds.) The structure of language: readings in the philosophy of language. Englewood Cliffs, NJ: Prentice Hall. 246–323.
Ladusaw W A (1979). Polarity sensitivity as inherent scope relations. Dissertation, University of Texas at Austin.
Lakoff R (1969). 'Some reasons why there can't be any some–any rule.' Language 45, 608–615.
Linebarger M (1980). The grammar of negative polarity. Dissertation, Massachusetts Institute of Technology.
Progovac L (1994). Negative and positive polarity: a binding approach. Cambridge: Cambridge University Press.
Seuren P A M (1974). 'Negative's travels.' In Seuren P A M (ed.) Semantic syntax. Oxford: Oxford University Press.
Van der Wouden T (1997). Negative contexts: collocation, negative polarity, and multiple negation. London: Routledge.
Zwarts F (1981). 'Negatief polaire uitdrukkingen 1.' GLOT 4(1), 35–132.
Zwarts F (1995). 'Nonveridical contexts.' Linguistic Analysis 25, 286–312.

Politeness

B Pizziconi, University of London, SOAS, London, UK

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Despite several decades of sustained scholarly interest in the field of politeness studies, a consensual definition of the meaning of the term 'politeness,' as well as a consensus on the very nature of the phenomenon, are still top issues in the current research agenda. In ordinary, daily contexts of use, members of speech communities possess clear metalinguistic beliefs about, and are capable of, immediate and intuitive assessments of what constitutes polite versus rude, tactful versus offensive behavior. Politeness in this sense is equivalent to a normative notion of appropriateness. Such commonsense notions of politeness are traceable as products of historical developments and hence are socioculturally specific. Scholarly definitions of the term, by contrast, have been predicated for several decades on a more or less tacit attempt to extrapolate a theoretical, abstract notion of politeness, capable of transcending lay conceptualizations and being cross-culturally valid. The theoretical constructs proposed, however, have proven unsatisfactory as heuristic instruments for the analysis of empirical data. Much of the current scholarly debate is focused on taking stock of recent critiques of past dominating paradigms and epistemological premises, and on formulating new philosophical and methodological practices based on a radical reconceptualization of the notion of politeness. The point of contention is the very possibility of survival of any useful notion of politeness, when the construct is removed from a historically determined, socioculturally specific, and interactionally negotiated conceptualization of the term.

Constructs of Politeness

The 'Social Norm View'

Politeness has been an object of intellectual inquiry quite early on in both Eastern (Lewin, 1967;

Coulmas, 1992, for Japanese; Gu, 1990, for Chinese) and Western contexts (Held, 1992). In both traditions, which loosely can be defined as pre-pragmatic, observers tend to draw direct, deterministic links between linguistic realizations of politeness and the essential character of an individual, a nation, a people, or its language. Thus, the use of polite language is taken as the hallmark of the good-mannered or civil courtier in the Italian conduct writers of the 16th century (Watts, 2003: 34), or as a symbol of the qualities of modesty and respect enshrined in the Japanese language in pre-World War II nationalistic Japan. Linguistic realizations of politeness are inextricably linked to the respective culture-bound ideologies of use; accounts, which often are codified in etiquette manuals providing exegeses of the relevant social norms, display a great deal of historical relativity.

Pragmatic Approaches

Pragmatic approaches to the study of politeness begin to appear in the mid-1970s. Robin Lakoff (1973) provided pioneering work by linking Politeness (with its three rules: ‘don’t impose’; ‘give options’; ‘make the other person feel good, be friendly’) to Grice’s Cooperative Principle to explain why speakers do not always conform to maxims such as Clarity (1973: 297) (see Cooperative Principle). In a similar vein, but wider scope, Leech’s (1983) model postulates that deviations from the Gricean conversational maxims are motivated by interactional goals, and posits a parallel Politeness Principle, articulated in a number of maxims such as Tact, Generosity, Approbation, Modesty, Agreement, and Sympathy. He also envisages a number of scales: cost-benefit, authority and social distance, optionality, and indirectness, along which degrees of politeness can be measured. Different situations demand different levels of politeness because certain immediate illocutionary goals can compete with (e.g., in ordering), coincide with (e.g., in offering), conflict with (e.g., in threatening), or be indifferent to (e.g., in asserting), the long-term social goals of maintaining comity and avoiding friction. This so-called conversational maxim view of


politeness (Fraser, 1990) is concerned uniquely with scientific analyses of politeness as a general linguistic and pragmatic principle of communication, aimed at the maintenance of smooth social relations and the avoidance of conflict, but not as a locally determined system of social values (Eelen, 2001: 49, 53). Another model, proposed by Brown and Levinson in 1978, de facto set the research agenda for the following quarter of a century (the study was republished in its entirety as a monograph with the addition of a critical introduction in 1987). Like Lakoff and Leech, Brown and Levinson (1987) accept the Gricean framework, but they note a qualitative distinction between the Cooperative Principle and the politeness principles: while the former is presumed by speakers to be at work all the time, politeness needs to be ostensibly communicated (ibid.: 5). Brown and Levinson see politeness as a rational and rule-governed aspect of communication, a principled reason for deviation from efficiency (ibid.: 5) and aimed predominantly at maintaining social cohesion via the maintenance of individuals' public face (a construct inspired by Erving Goffman's notion of 'face,' but with crucial, and for some, fatal differences: see Bargiela-Chiappini, 2003, Watts, 2003) (see Face). Brown and Levinson's 'face' is construed as a double want: a want of freedom of action and freedom from impositions (this is called 'negative' face), and a want of approval and appreciation (a 'positive' face). Social interaction is seen as involving an inherent degree of threat to one's own and others' face (for example, an order may impinge on the addressee's freedom of action; an apology, by virtue of its subsuming an admission of guilt, may impinge on the speaker's want to be appreciated). However, such face threatening acts (FTA) can be avoided, or redressed by means of polite (verbal) strategies, pitched at the level needed to match the seriousness of an FTA x, calculated according to a simple formula:

Wx = P(H,S) + D(S,H) + Rx

where the Weight of a threat x is a function of the Power of Hearers over Speakers, as well as of the social Distance between Speakers and Hearers, combined with an estimation of the Ranking (of the seriousness) of a specific act x in a specific culture (see Face).
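The additive structure of the formula invites a toy calculation. The 0–5 scales and the numeric values below are invented purely for illustration, since Brown and Levinson do not fix units for P, D, or R:

def fta_weight(p_hearer_over_speaker, distance, ranking):
    # Wx = P(H,S) + D(S,H) + Rx, on invented 0-5 scales
    return p_hearer_over_speaker + distance + ranking

# A big favor asked of a familiar but more powerful addressee
# (low D, high P, high R) calls for heavily redressed, very polite phrasing.
print(fta_weight(p_hearer_over_speaker=4, distance=1, ranking=5))  # 10

# The same favor between close equals is lighter; only R keeps the weight up.
print(fta_weight(p_hearer_over_speaker=1, distance=1, ranking=5))  # 7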

Brown and Levinson compared data from three unrelated languages (English, Tamil, and Tzeltal) to show that very similar principles, in fact universal principles, are at work in superficially dissimilar realizations. The means-end reasoning that governs the choice of polite strategies, and the need to redress face threats, are supposed to be universal. The abstract notion of positive and negative aspects of face (although the content of face is held to be subject to cultural variation) is also considered to be a universal want. The comprehensiveness of the model – in addition to being the only production model of politeness to date – captured the interest of researchers in very disparate fields and working on very different languages and cultures. One could even say that the Brown and Levinsonian discourse on politeness practically 'colonized' the field (domains covered include cross-cultural comparison of speech acts, social psychology, discourse and conversation analysis, gender studies, family, courtroom, business and classroom discourse, and so on: see Dufon et al., 1994, for an extensive bibliography; Eelen, 2001: 23 ff.; Watts, 2003). Interestingly, a paper by Janney and Arndt made the point, in 1993, that despite considerable criticism of the then still dominant paradigm, the very fundamental issue of whether the universality assumption could be of use in comparative cross-cultural research went by and large unquestioned (1993: 15). The most conspicuous criticism – paradoxically, for a model aspiring to pancultural validity – was perhaps the charge of ethnocentrism: the individualistic and agentivistic conception of Brown and Levinson's 'model person' did not seem to fit 'collectivistic' patterns of social organization, whereas their notion of 'face' seemed to serve an atomistic rather than interrelated notion of self (Wierzbicka, 1985; Gu, 1990; Nyowe, 1992; Werkhofer, 1992; de Kadt, 1992; Sifianou, 1992; Mao, 1994). Going one step further, some criticized Brown and Levinson's emphasis on the 'calculable' aspects of expressive choice (and the idea that individuals can manipulate these 'volitionally'), to the expense of the socially constrained or conventionalized indexing of politeness in some linguacultures (especially, though not exclusively, those with rich honorific repertoires; Hill et al., 1986; Matsumoto, 1988, 1989; Ide, 1989; Janney and Arndt, 1993) (see Honorifics). The Gricean framework implicitly or explicitly adopted in many politeness studies has been criticized for arbitrarily presupposing the universal validity of the maxims, and for a relatively static account of inferential processes. In particular, Sperber and Wilson's (1995) Relevance Theory recently has been adopted by politeness theorists as a way to compensate for this lack of interpretative dynamism (Jary, 1998a, 1998b; Escandell-Vidal, 1998; Watts, 2003: 201) and the conversational maxims have been reinterpreted as 'sociopragmatic interactional principles' (Spencer-Oatey, 2003).


Others have lamented Brown and Levinson's exclusive focus on the speaker, as well as their reliance on decontextualized utterances and speech acts (Hymes, 1986: 78), choices that similarly detract from a discursive and interactional understanding of communicative processes (see Speech Acts).

Social Constructivist Approaches

Hymes (1986) pointed out quite early on that although Brown and Levinson's model was impressive as an illustration of the universality of politeness devices, any useful and accurate account of politeness norms would need to "place more importance on historically derived social institutions and cultural orientations" (p. 78). The scientific extrapolation of an abstract, universal concept of politeness was similarly questioned by Watts et al. (1992), who drew attention to the serious epistemological consequences of a terminological problem. According to these authors, the field had been too casual in overlooking the difference between mutually incommensurable constructs of politeness: a first-order politeness (politeness1) derived from folk and commonsense notions, and a second-order politeness (politeness2), a technical notion for use in scientific discourse. Although the latter (echoing the Vygotskyan characterization of spontaneous versus scientific concepts) can be thought to emerge from an initial verbal definition, the former emerges from action and social practice (Eelen, 2001: 33). As social practice, politeness1 is rooted in everyday interaction and socialization processes: it is expressed in instances of speech (expressive politeness), it is invoked in judgments of interactional behavior as polite or impolite behavior (classificatory politeness), and is talked about (metapragmatic politeness) (ibid.: 35). Eelen's (2001) watershed critique of politeness theories articulates this point in great detail and thus opens up promising new avenues of thought for researchers. The lack of distinction between politeness1 and politeness2 represents a serious ontological and epistemological fallacy of all previous politeness research, as it has determined the more or less implicit 'reification' of participants' viewpoint to a scientific viewpoint (the 'emic' account is seamlessly transformed into an 'etic' account). This conceptual leap fails to question the very evaluative nature of politeness1 (ibid.: 242) and thereby conceals this 'evaluative moment' from analysis. Empirical studies into commonsense ideas of politeness1 (Blum-Kulka, 1992; Ide et al., 1992) indicate that notions of politeness or impoliteness are used to characterize people's behavior judgmentally.

This evaluative practice has a psychosocial dimension: individuals position themselves in moral terms vis-à-vis others and categorize the world into the 'well-mannered,' the 'uncouth,' etc., and a more concrete everyday dimension: it enables indexing of social identities and thus group-formation: in other words, it positively creates social realities (Eelen, 2001: 237). Politeness is said to be inherently argumentative: evaluative acts are not neutral taxonomic enterprises; they exist because there is something at stake socially. Moreover, carrying out an evaluative act immediately generates social effects (ibid.: 37–38). A particularly problematic aspect of much of the theorizing about politeness is that in spite of the fact that norms are held by users to be immutable and objective (recourse to a higher, socially sanctioned reality grants moral force), and by theorists to be unanimously shared by communities, one still has to admit that the very acts of evaluation may exhibit a huge variability, and that this is hardly the exception. Capturing the qualities of evaluativity, argumentativity, and variability of polite behavior requires a paradigmatic shift in our underlying philosophical assumptions. Eelen proposes to replace what he sees as a Parsonian apparatus (exemplified by "priority of the social over the individual, normative action, social consensus, functional integration and resistance to change," p. 203) with Bourdieu's (1990, 1991) theory of social practice (a proposal followed and developed by Watts, 2003). The following are some of the important consequences of this proposal. The first is a reconceptualization of politeness as situated social action – its historicity is duly restored. Politeness is no longer an abstract concept or set of norms from which all individuals draw uniformly, but is recognized as the very object of a social dispute. Variability, resulting from the properties of evaluativity and argumentativity of politeness1, ceases to be a problem for the researcher, and instead provides evidence of the nature of the phenomenon. As a consequence, even statistically marginal behavior (problematic for traditional approaches: Eelen, 2001: 141) can be accounted for within the same framework. Second, the relation between the cultural/social and the individual is seen as less deterministic. On the one hand, the cultural is part of an individual's repertoire: it is internalized and accumulated through all past interactions experienced by an individual, thus determining the nature of that individual's habitus (or set of learned dispositions; Bourdieu, 1991). On the other hand, the cultural can be acted on – be maintained or challenged – to various extents by individuals, depending on those individuals' resources, or symbolic capital; the cultural is never an immutable entity.


This discursive understanding of politeness enables us to capture the functional orientation of politeness to actions of social inclusion or exclusion, alignment or distancing (and incidentally uncovers the fundamentally ideological nature of scientific metapragmatic talk on politeness, as one type of goal oriented social practice; see Glick, 1996: 170). Politeness ceases to be deterministically associated with specific linguistic forms or functions (another problem for past approaches): it depends on the subjective perception of the meanings of such forms and functions. Moreover, in Watts's (2003) view, behavior that abides by an individual's expectations based on 'habitus' (i.e., unmarked appropriate behavior) is not necessarily considered politeness: it is instead simply politic behavior. Politeness may thus be defined as behavior in excess of what can be expected (which can be received positively or negatively but is always argumentative), whereas impoliteness similarly is characterized as nonpolitic behavior (on the important issue of the theoretical status of impoliteness, see Eelen, 2001: 87 and Watts, 2003: 5). As sketched here, the path followed by the discourse on politeness illustrates how the struggle over the meaning and the social function of politeness is at the very centre of current theorizing. Watts adopts a rather radical position and rejects the possibility of a theory of politeness2 altogether: scientific notions of politeness (which should be nonnormative) cannot be part of a study of social interaction (normative by definition) (Watts, 2003: 11). Others, like House (2003, 2005), or O'Driscoll (1996) before her, maintain that a descriptive and explanatory framework must include universal (the first two below) and culture/language-specific levels (the last two below):

1. a fundamental biological, psychosocial level based on animal drives (coming together vs. noli me tangere)
2. a philosophical level to capture biological drives in terms of a finite number of principles, maxims, or parameters
3. an empirical descriptive level concerned with the particular (open-ended) set of norms, tendencies, or preferences
4. a linguistic level at which sociocultural phenomena have become 'crystallized' in specific language forms (either honorifics or other systemic distinctions) (adapted from House, 2003, 2005).

Future Perspectives

Although the legacy of the 'mainstream' pragmatic approaches described above is clearly still very strong

(see, for instance, Fukushima, 2000; Bayraktaroğlu and Sifianou, 2001; Hickey and Stewart, 2005; Christie, 2004), the critical thoughts introduced in the current debate on linguistic politeness promise to deliver a body of work radically different from the previous one. The future program of politeness research begins from the task of elaborating a full-fledged theoretical framework from the seminal ideas recently proposed. It must acknowledge the disputed nature of notions of politeness and explore the interactional purposes of evaluations (see, for example, Mills's 2003 study on gender, or Watts's 2003 'emergent networks'; compare also Locher's 2004 study on the uses of politeness in the exercise of power). It must articulate how norms come to be shared and how they come to be transformed; it must explore the scope and significance of variability. Relevance theory, Critical Discourse Analysis, and Bourdieuian sociology have all been proposed as promising frameworks for investigation. Empirical research that can provide methodologically reliable data for these questions must also be devised: the new paradigm would dictate that the situatedness of the very experimental context and the argumentativity of the specific practice observed are recognized as an integral part of the relevant data. Politeness consistently features in international symposia, and has, since 1998, had a meeting point on the Internet; the year 2005 will see the birth of a dedicated publication, the Journal of Politeness Research.

See also: Cooperative Principle; Face; Honorifics; Speech Acts; Context and Common Ground; Conventions in Language; Gender; Irony; Jargon; Memes; Neo-Gricean Pragmatics; Politeness Strategies as Linguistic Variables; Taboo Words; Taboo, Euphemism, and Political Correctness.

Bibliography

Bargiela-Chiappini F (2003). 'Face and politeness: new (insights) for old (concepts).' Journal of Pragmatics 35(10–11), 1453–1469.
Bayraktaroğlu A & Sifianou M (eds.) (2001). Linguistic politeness across boundaries: the case of Greek and Turkish. Amsterdam: John Benjamins.
Blum-Kulka S (1992). 'The metapragmatics of politeness in Israeli society.' In Watts R J et al. (eds.). 255–280.
Bourdieu P (1990). The logic of practice. Cambridge: Polity Press.
Bourdieu P (1991). Language and symbolic power. Cambridge: Polity Press.
Brown P & Levinson S (1987). Politeness: some universals in language usage. Cambridge: Cambridge University Press.
Christie C (ed.) (2004). 'Tension in current politeness research.' Special issue of Multilingua 23(1/2).
de Kadt E (1992). 'Politeness phenomena in South African Black English.' Pragmatics and Language Learning 3, 103–116.
Eelen G (2001). A critique of politeness theories. Manchester: St Jerome.
Escandell-Vidal V (1998). 'Politeness: a relevant issue for Relevance Theory.' Revista Alicantina de Estudios Ingleses 11, 45–57.
Fraser B (1990). 'Perspectives on politeness.' Journal of Pragmatics 14(2), 219–236.
Fukushima S (2000). Requests and culture: politeness in British English and Japanese. Bern: Peter Lang.
Glick D (1996). 'A reappraisal of Brown and Levinson's Politeness: some universals of language use, eighteen years later: review article.' Semiotica 109(1–2), 141–171.
Gu Y (1990). 'Politeness phenomena in modern Chinese.' Journal of Pragmatics 14(2), 237–257.
Held G (1992). 'Politeness in linguistic research.' In Watts R J et al. (eds.). 131–153.
Hickey L & Stewart M (eds.) (2005). Politeness in Europe. Clevedon: Multilingual Matters.
Hill B, Ide S, Ikuta S, Kawasaki A & Ogino I (1986). 'Universals of linguistic politeness: quantitative evidence from Japanese and American English.' Journal of Pragmatics 10(3), 347–371.
House J (2003). 'Misunderstanding in university encounters.' In House J, Kasper G & Ross S (eds.) Misunderstandings in social life: discourse approaches to problematic talk. London: Longman. 22–56.
House J (2005). 'Politeness in Germany: Politeness in Germany?' In Hickey & Stewart (eds.). 13–28.
Hymes D (1986). 'Discourse: scope without depth.' International Journal of the Sociology of Language 57, 49–89.
Ide S (1989). 'Formal forms and discernment: two neglected aspects of linguistic politeness.' Multilingua 8(2–3), 223–248.
Ide S, Hill B, Cames Y, Ogino T & Kawasaki A (1992). 'The concept of politeness: an empirical study of American English and Japanese.' In Watts R J et al. (eds.). 299–323.
Janney R W & Arndt H (1993). 'Universality and relativity in cross-cultural politeness research: a historical perspective.' Multilingua 12(1), 13–50.
Jary M (1998a). 'Relevance Theory and the communication of politeness.' Journal of Pragmatics 30, 1–19.
Jary M (1998b). 'Is Relevance Theory asocial?' Revista Alicantina de Estudios Ingleses 11, 157–168.
Lakoff R (1973). 'The logic of politeness; or minding your p's and q's.' Papers from the Ninth Regional Meeting of the Chicago Linguistic Society 8, 292–305.
Leech G (1983). Principles of pragmatics. London and New York: Longman.
Lewin B (1967). 'The understanding of Japanese honorifics: a historical approach.' In Yamagiwa J K (ed.) Papers of the CIC Far Eastern Language Institute. Ann Arbor: University of Michigan Press. 107–125.
Locher M (2004). Power and politeness in action – disagreements in oral communication. Berlin: Mouton de Gruyter.
Mao L (1994). 'Beyond politeness theory: "face" revisited and renewed.' Journal of Pragmatics 21(5), 451–486.
Matsumoto Y (1988). 'Re-examination of the universality of face: politeness phenomena in Japanese.' Journal of Pragmatics 12(4), 403–426.
Matsumoto Y (1989). 'Politeness and conversational universals – observations from Japanese.' Multilingua 8(2–3), 207–222.
Mills S (2003). Gender and politeness. Cambridge: Cambridge University Press.
Nyowe O G (1992). 'Linguistic politeness and sociocultural variations of the notion of face.' Journal of Pragmatics 18, 309–328.
O'Driscoll J (1996). 'About face: a defence and elaboration of universal dualism.' Journal of Pragmatics 25(1), 1–32.
Sifianou M (1992). Politeness phenomena in England and Greece. Oxford: Clarendon.
Spencer-Oatey H & Jiang W (2003). 'Explaining crosscultural pragmatic findings: moving from politeness maxims to sociopragmatic interactional principles (SIPs).' Journal of Pragmatics 35(10–11), 1633–1650.
Sperber D & Wilson D (1995). Relevance: communication and cognition. Oxford: Blackwell [1986].
Watts R J (2003). Politeness. Cambridge: Cambridge University Press.
Watts R J, Ide S & Ehlich K (1992). Politeness in language: studies in its history, theory and practice. Berlin: Mouton de Gruyter.
Werkhofer K (1992). 'Traditional and modern views: the social constitution and the power of politeness.' In Watts R J et al. (eds.). 155–199.
Wierzbicka A (1985). 'Different cultures, different languages, different speech acts: Polish vs. English.' Journal of Pragmatics 9, 145–178.

Relevant Website

http://www.lboro.ac.uk/departments/ea/politeness/index.htm


Politeness Strategies as Linguistic Variables

J Holmes, Victoria University of Wellington, Wellington, New Zealand

© 2006 Elsevier Ltd. All rights reserved.

Although linguists have had a good deal to say about it, politeness is not just a matter of language. When people say about someone, "she is very polite," they are often referring to respectful, deferential, or considerate behavior which goes well beyond the way the person talks or writes. In Japan, for example, polite behavior encompasses bowing respectfully; in Samoan culture, being polite entails ensuring you are physically lower than a person of higher status. And in formal situations, all cultures have rules for behaving appropriately and respectfully, which include ways of expressing politeness nonverbally as well as verbally. In this article, however, we will focus on linguistic politeness and the social factors that influence its use. We begin by addressing the question 'What is linguistic politeness?'

What Is Linguistic Politeness?

Language serves many purposes, and expressing linguistic politeness is only one of them. In example 1, the main function of the interaction can be described as informative or referential. The two people know each other well, and they do not engage in any overt expressions of linguistic politeness. [Note: Specialized transcription conventions are kept to the minimum in the examples in this article. + marks a pause; place overlaps between // and /. Strong STRESS is marked by capitalization.]

(1) Context: Two flatmates in their kitchen
Rose: what time's the next bus?
Jane: ten past eight I think

By contrast, in example 2 (from Holmes and Stubbe, 2003: 37), there are a number of features which can be identified as explicit politeness markers.

(2) Context: Manager in government department to Ana, a new and temporary administrative assistant replacing Hera's usual assistant.
Hera: I wondered if you wouldn't mind spending some of that time in contacting + while no-one else is around contacting the people for their interviews

Hera’s basic message is ‘set up some interviews.’ However, in this initial encounter with her new assistant, she uses various politeness devices to soften her directive: the hedged syntactic structure, I wondered if you wouldn’t mind, and the modal verb (would). Providing a reason for being so specific about when

she wants the task done could also be regarded as contributing to mitigating the directive. Hera's use of these politeness devices reflects both the lack of familiarity between the two women, and the fact that, as a 'temp,' Ana's responsibilities are not as clearly defined as they would be if she had been in the job longer. This illustrates nicely how linguistic politeness often encodes an expression of consideration for others. Linguistic politeness has generally been considered the proper concern of 'pragmatics,' the area of linguistics that accounts for how we attribute meaning to utterances in context, or "meaning in interaction" (Thomas, 1995: 23). If we adopt this approach, then politeness is a matter of specific linguistic choices from a range of available ways of saying something. Definitions of politeness abound (see, for example, Sifianou, 1992: 82–83, Eelen, 2001: 30–86), but the core of most definitions refers to linguistic politeness as a 'means of expressing consideration for others' (e.g., Holmes, 1995: 4; Thomas, 1995: 150; Watts, 2003). Note that there is no reference to people's motivations; we cannot have access to those, and arguments about one group being intrinsically more polite or altruistic than another are equally futile, as Thomas (1995: 150) points out. We can only attempt to interpret what people wish to convey on the basis of their utterances; we can never know their 'real' feelings. We can, however, note the ways in which people use language to express concern for others' needs and feelings, and the ways that their expressions are interpreted. Linguistic politeness is thus a matter of strategic interaction aimed at achieving goals such as avoiding conflict and maintaining harmonious relations with others (Kasper, 1990). Different cultures have different ways of expressing consideration for others, and the most influential work in the area of linguistic politeness, namely Brown and Levinson's Politeness Theory (1978, 1987), adopts a definition of politeness that attempts to encompass the ways politeness is expressed universally. This involves a conception of politeness that includes not only the considerate and nonimposing behavior illustrated in example 2, which they label 'negative politeness' (Brown and Levinson, 1987: 129), but also the positively friendly behavior illustrated in example 3 (see also Lakoff, 1975, 1979).

(3) Context: Small talk between workers in a plant nursery at the start of the day. Des is the manager. Ros is the plant nursery worker.
Des: be a nice day when it all warms up a bit though

Ros: yeah + it's okay
Des: so you haven't done anything all week eh you haven't done anything exciting

This is classic social talk: the content is not important; the talk is primarily social or affective in function, designed to establish rapport and maintain good collegial relationships. Brown and Levinson (1987: 101) use the term ‘positive politeness’ for such positive, outgoing behavior. Hence, their definition of politeness includes behavior which actively expresses concern for and interest in others, as well as nonimposing distancing behavior. Linguistic politeness may therefore take the form of a compliment or an expression of goodwill or camaraderie, or it may take the form of a mitigated or hedged request, or an apology for encroaching on someone’s time or space.

Politeness Theory

Brown and Levinson's definition describes linguistic politeness as a means of showing concern for people's 'face,' a term adopted from the work of Erving Goffman. Using Grice's (1975) maxims of conversational cooperation, they suggest that one reason people diverge from direct and clear communication is to protect their own face needs and take account of those of their addressees. While it is based on the everyday usages such as 'losing face' and 'saving face,' Brown and Levinson develop this concept

Figure 1 Chart of strategies: Positive politeness.

considerably further, and they analyze almost every action (including utterances) as a potential threat to someone’s face. They suggest that linguistic politeness comprises the use of interactional strategies aimed at taking account of two basic types of face needs or wants: firstly, positive face needs, or the need to be valued, liked, and admired, and to maintain a positive self-image; and secondly, negative face needs or the need not to be imposed upon, the need for relative freedom of thought and action, or for one’s own ‘space.’ As illustrated in example 2, behavior that avoids impeding or imposing on others (or avoids ‘threatening their face’) is described as evidence of ‘negative politeness,’ while sociable behavior conveying friendliness (as in example 3), or expressing admiration for an addressee is ‘positive politeness’ behavior (Brown and Levinson, 1987) (Figures 1 and 2). Adopting this approach, any utterance that could be interpreted as making a demand or intruding on another person’s autonomy may qualify as a potentially face threatening act (FTA). Even suggestions, advice, and requests may be experienced as FTAs, since they potentially impede the addressee’s freedom of action. Brown and Levinson (1987) outline a wide range of different kinds of politeness strategies, including, as positive politeness strategies, making offers, joking, and giving sympathy, and, as negative politeness strategies, hedging, apologizing, and giving deference (see figures 3 and 4 in Brown and Levinson, 1987: 102 and 131). In support of their claims for


Figure 2 Chart of strategies: Negative politeness.

the universality of their theory, they illustrate these strategies with numerous examples from three different languages: South Indian Tamil, Tzeltal, a Mayan language spoken in Mexico, and American and British English. Example 4 is an illustration from Tzeltal of the use of the negative politeness strategy of hedging, in the form of the particles naš and mak, to mitigate a FTA and thus render the utterance more polite.

(4) ha naš ya smel yo tan, mak
It's just that he's sad, I guess
[Final segment in naš is the sound at the beginning of 'ship']

Since this example, from Brown and Levinson, 1987: 149, like many others, is provided by them without contextual information, the reader has no way of assessing exactly why the utterance is interpretable as a FTA, a point I return to below. Brown and Levinson do, however, recognize the importance of three fundamental sociocultural variables in assessing the relative weight of different FTAs: firstly, the social distance (D) between the participants; secondly, the power (P) that the addressee has over the speaker; and thirdly, the ranking of the imposition (R) expressed in the utterance in the relevant culture. Moreover, they note that the way these variables contribute will differ from culture to culture. Each of these components contributes to the relative seriousness of the FTA, and thus to the assessment of the appropriate strategy or level of politeness

required to express the speaker’s intended message. So, for example, if my son wants to borrow my car, he is likely to judge that although he knows me well (low D), I have a relatively high amount of power in our relationship (compared, say, to his relationship with his brother, though perhaps not as much as is involved in his relationship with his boss), and that he is asking a big favor (high R). He is therefore likely to select a linguistically very polite way of asking this favor, as illustrated in example 5. (5) Context: Son to mother in the family living room [þ marks a very short pause] D: um mum þ um do you think um I could possibly just borrow your car þ M: [frowns] D. um just for a little while þ M. um well [frowns] D: it’s just that I need to get this book to Helen tonight

In making his request, D includes a number of negative politeness strategies in the form of mitigating devices or hedges (hesitation markers um, modal verb could and particle possibly, minimizers just, a little) as well as the positive politeness strategies of using an in-group identity marker (mum) and providing a reason for the request. If he had been asking for a lift or a loan for bus fare, the weight of the imposition would have been considerably less of a FTA in New Zealand culture. In cultures where cars are either more or less valuable, the imposition represented by such a request would be ranked differently; and


if the requestor was an equal or superior (such as a husband in some cultures), the P variable would be reduced. Thus the theory attempts to take account of contextual and social considerations and cultural contrasts. The following sketch by Harry Enfield illustrates how the factors D, P, and R interact in different ways with a consequent effect on the different politeness strategies evident in the way Kevin and his friend Perry speak to (a) Kevin's parents, (b) Perry's parents on the phone, and (c) each other.

Transcript 1: 'Kevin the Teenager'
Father: Thanks, mum +
Father: Kevin.
Kevin: What?
Father: Are you gonna thank your mum?
Kevin: Ugh! Deurgh! +
Father: Are you going to say thank you?
Kevin: I JUST BLOODY DID!
Mother: Forget it, Dave – it's not worth it.
Kevin: [sighs]
Father: Oh, this is lovely, mum [to Kevin] How's the exam today?
Kevin: Mawight.
Father: What was it today – maths?
Kevin: Ner. Nurrr. +
Father: Sorry, what did you say?
Kevin: Urgh! P-H-Y-S-I-C-S! +
Mother: Well, we can't hear you if you mumble, Kevin!
Kevin: Muh! Uh! Nuh-muh!
[the doorbell rings. Kevin goes to answer it]
Kevin: Awight, Perry?
Perry: Awight, Kev? 'ere – guess wot I dun at school today? I rubbed up against Jennifer Fisher an' me groin wen' 'ard!
Perry: Hmm?
Father: Hello, Perry!
Mother: Hello, Perry! How are you?
Perry: 'ullo, Mrs Patterson, Mr Patterson.
Mother: How are your mum and dad?
Perry: All right, thank you.
Kevin: Come on, Perry – let's go.
Father: Kevin?
Kevin: What?
Father: Where d'you think you're going with all that food?
Kevin: Bedroom. Snack.
Mother: Your dinner's on the table – come and finish it.
Kevin: Ugh! P-e-r-r-y!
Father: Well, Perry can join us. Now, come and sit down.
Kevin: [snorts] So unfair!
Mother: Now – do you want something to eat, Perry?
Perry: No thanks, Mrs Patterson.
Mother: Are you sure?
Kevin: HE JUST BLOODY SAID "NO"!

Father: Kevin. Don’t shout at your mum. Kevin: What? I didn’t say anything! What? Ugh! Ugh! Ugh! Ugh! Father: Oh, Perry – I think you’ve known us long enough not to call us Mr and Mrs Patterson any more. Kevin: Eeurgh! Father: Just call us Dave and Sheila. Kevin: (Eeurgh!) Mother: Is that OK, then? Perry: Yes, Mrs Patterson þ Mother: So – what sort of music do you like at the moment, Perry? Kevin: Eeurgh! Mother: I think Bad Boys Inc. are rather fun. Perry: Bad Boys Inc. suck, Mrs Patterson þ Mother: So – who do you like, then? Perry: We only like Snoop Doggy Dog. Mother: Oh, from Peanuts. Kevin: Muh-eeurgh! Kevin: Finished! Come on, Perry. Mother: No, no, darling-you’ve still got pudding. Kevin: Agh! I don’t want any bloody pudding! Mother: It’s Chocolate Choc Chip Chocolate Icecream with Chocolate Sauce. But you don’t have to have any if you don’t want it. Kevin: Mmmnnnnrrrr! Mother: Perry, would you, er, like – Perry: Yes please, Mrs Patterson. Please. Thank you, Mrs Patterson. Please. Thank you! [slight pause as the icecream is shared out] Father: Have you got a girlfriend yet, Perry? Perry: Munurr! Father: I remember I got my first girlfriend when I was about your age. Tracey Thornton. I remember our first snog. Outside the cinema. Kevin: Eeeeuurrgghh! Mother: I was fourteen when I had my first snog. Kevin: [whimpers] Perry: I gotta go toilet! Kevin: YOU’RE SO BLOODY EMBARRASSING! Mother: Why can’t you be a nice, polite boy – like Perry? [the telephone starts ringing] Kevin: Ugh! WHAT? WHAT’S WRONG WI’ ME? WHAT’S BLOODY WRONG WIIH’ ME EH? Kevin: Hello? Mnuh! Hello, Mrs Carter. Yes, Perry is here, yes. Oh, very well, thank you. Yes, would you like to speak to him? Please? Yes? [to Perry] Perry – it’s your mum. Perry: Eek! What? NO! I DON’T WANT TO! NO! IT’S SO UNFAIR! I HATE YOU! YOU’RE SO BLOODY EMBARRASSIN’! I HATE YOU! Perry: I gotta go now, Mrs Patterson. Fank you Father: Cheerio, Perry! Mother: Bye, Perry! Kevin: See ya Perry: Later Kevin: So you like him more than me do you? I HATE YOU! I WISH I’D NEVER BEEN BORN!

Criticisms of Brown and Levinson's Theory

While it has been hugely influential, Brown and Levinson's theory has attracted a good deal of criticism. I here mention just a few of the most frequently cited conceptual weaknesses. See Craig et al. (1986) and Coupland et al. (1988) for valuable early critiques, and Eelen (2001) and Watts (2003) for more recent, thorough reviews of criticisms. Firstly, Brown and Levinson's theory relies heavily on a version of speech act theory that assumes the sentence as its basic unit and places the speaker at the center of the analysis. Much early work that used Brown and Levinson's model adopted this focus on the utterance: e.g., some researchers asked people to judge single decontextualized utterances for degrees of politeness (e.g., Fraser, 1978). However, it is clear that FTAs are certainly not expressed only in sentences or even single utterances. Often they extend over several utterances and even over different turns of talk, as illustrated in example 5. Meaning-making is a more dynamic process than Brown and Levinson's approach allows for, and is often a matter of interactional negotiation between participants. Secondly, Brown and Levinson have been criticised for mixing up different types of data, and providing very little indication of its source or context. As they admit in the extensive introduction to their 1987 book, their data is "an unholy amalgam of naturally occurring, elicited and intuitive data" (1987: 11), and, moreover, readers have no way of knowing which is which. Most current researchers in the area of pragmatics and discourse analysis use naturally occurring recorded data, and they provide information about the context in which it was produced. Thirdly, the levels of analysis of different politeness strategies are quite different. So, for instance, negative politeness strategy 3, 'Be pessimistic,' is very much more general and can be realized in a very much wider variety of ways than strategy 9, 'Nominalize,' which identifies a very specific syntactic device that can be used for distancing the speaker from the addressee. In addition, overall, negative politeness strategies involve a more specific range of linguistic devices (e.g., hedge, nominalize, avoid the pronouns 'I' and 'you') than positive politeness strategies, which seem much more open-ended and difficult to restrict. Fourthly, context is crucial in assessing politeness, but the range of social factors which may be relevant in analyzing the weight of a FTA is much more extensive than the three Brown and Levinson identify. Factors such as the level of formality of the speech event, the presence of an audience, the degree of liking between the participants, and so on, may well affect

the weightiness of the FTA, or even the judgment about whether an utterance counts as polite at all. So, for example, as Thomas (1995: 156) points out, "if you'll be kind enough to speed up a little" appears superficially to be a very polite way of saying "hurry up," but in the context in which it was produced, addressed by a wife to her husband, it expressed intense irritation. And while Brown and Levinson's rather flexible concept R might be a means of taking account of some of these additional factors, computing R clearly requires detailed familiarity with relevant sociocultural and contextual values.

Fifthly, Brown and Levinson's theory assumes an ideal and very individualistic intentional agent (labelled a Model Person) as its starting point, and has been criticised by many researchers as culturally very restricted and even Anglo-centric in basic conception (e.g., Ide et al., 1992; Eelen, 2001; Watts, 2003). Asian and Polynesian societies, for instance, place a greater emphasis on public undertakings and social commitments (a point discussed further below), and are not interested in trying to figure out what a speaker intended by a particular speech act (e.g., Lee-Wong, 2000); but this is a basic requirement of any analysis using Brown and Levinson's framework. The implication of these criticisms is that a theory of politeness based on intention recognition cannot apply cross-culturally and universally.

Measuring Politeness
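Much of the empirical work discussed in this section presupposes Brown and Levinson's additive model of face-threat weightiness. As a point of reference (this is their standard formulation, restated here rather than quoted), the weightiness W of a face-threatening act x performed by a speaker S toward a hearer H is computed as

\[
W_x = D(S, H) + P(H, S) + R_x
\]

where D is the social distance between S and H, P is the power of H over S, and R_x is the culturally determined ranking of the imposition of the act x.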

Brown and Levinson's very specific approach to identifying ways of expressing linguistic politeness led to a spate of empirical studies which explored manifestations of politeness in a wide range of contexts and cultures and in many disciplines, including social and cognitive psychology, legal language, communication studies, gender studies, business and management studies, second language acquisition, and cross-cultural communication.

A number of these researchers attempted to use Brown and Levinson's list of strategies as a basis for quantification. Shimanoff (1977), for instance, identified 300 different politeness strategies in an attempt to compare the number of politeness strategies used by men and women in interaction. She found no sex differences, but she also found that the distinction between positive and negative politeness strategies was sometimes difficult to maintain. Moreover, there was often an unsatisfactory relationship between the number of strategies used and the intuitions of native speakers about how polite an utterance is (see also Brown, 1979). It soon became apparent that counting strategies is basically a fruitless exercise, since context is so


important for interpreting the significance of any linguistic form, and, moreover, the balance of different strategies may be crucially important in assessing the overall effect of a contribution to an interaction. In the following script for the video 'Not Like This', for example, Lorraine uses teasing humor, a positive politeness strategy, to soften her criticism of Sam's inadequate checking of the packets of soap powder.

Transcript 2: 'Not Like This' (Clip 7 from Talk that Works)
Context: Factory packing line. The supervisor, Lorraine, has noticed that Sam is not doing the required visual check on the boxes of soap powder as they come off the line, and she stops to demonstrate the correct procedure. [+ marks a pause; overlapping speech is enclosed between / and \]
1. Lor: [picks up a box and pats it]
2. you know when you check these right
3. you're supposed to look at the carton
4. to make sure it's not leaking
5. not like this [pats box and looks away]
6. Sam: oh that's that's good checking
7. Lor: /you're not going to see anything if you're like this\
8. Sam: /that's all right that's all right\ that's all right
9. Lor: oh my gosh [smiles]
10. Sam: [laughs] ++ [picks up a box and gives it a thorough shake]
11. Lor: and what's with the gloves
12. Sam: [smiling] don't want to get my hands dirty
13. Lor: don't want to ruin your manicure

It is clear that counting how many times she expresses the criticism (lines 3, 5, perhaps 7) is not only tricky but also meaningless in terms of measuring the FTA. Similarly, while it is clear that the humorous exchange between the pair functions as positive politeness, analyzing its precise relationship to the FTA and assessing the precise relative contribution of different components is very complex and ultimately pointless. In addition, many utterances are multifunctional, and assigning just one meaning to a linguistic device is thus equally misleading. We cannot know if Lorraine's question "and what's with the gloves?" (line 11) is asking for information (i.e., she is genuinely puzzled or concerned about why Sam needs to wear gloves), or if this is a preliminary tease which she follows up more explicitly in line 13.

Leech (1983) also notes that utterances are often (deliberately) ambivalent, allowing people to draw their own inferences about what is intended, and even about whether they are the intended addressee. As Thomas notes, example 6 (from Thomas, 1995: 159) is a "potentially very offensive speech act (requesting people not to steal!)," but it is expressed in an ambivalent

form, allowing readers to decide about the precise degree of pragmatic force, and whether it applies to them.

(6) Context: Notice in the Junior Common Room, Queens' College, Cambridge
These newspapers are for all the students, not the privileged few who arrive first.

Leech's Politeness Principle

There are a number of alternatives to Brown and Levinson's approach to the analysis of politeness (see Eelen, 2001: 1–29). Some share a good deal with Brown and Levinson's approach; others provide elaborations which address some of the criticisms identified above. Robin Lakoff has been labeled "the mother of modern politeness theory" (Eelen, 2001: 2), and her work (1973, 1975) predates Brown and Levinson's (1978) substantial model by several years. However, her 'rules of politeness' (Don't impose, Give options, Be friendly) have a good deal in common with Brown and Levinson's politeness strategies. Another approach, by Fraser (1990: 233), provides a very broad view of politeness: being polite is regarded as the default setting in conversational encounters, i.e., simply fulfilling the 'conversational contract.'

One of the most fully developed alternative frameworks is Geoffrey Leech's model, which was developed at about the same time as Brown and Levinson's (Leech, 1983), and which shares many of the assumptions of their approach, as well as their goal of universality, but takes a somewhat different tack in analyzing linguistic politeness. Rather than focusing on 'face needs,' Leech addressed the issue of "why people are often so indirect in conveying what they mean" (1983: 80). To answer this question (i.e., basically to account for why people do not consistently follow Grice's Cooperative Principle and adhere to his Maxims; see Neo-Gricean Pragmatics), Leech proposed a Politeness Principle (PP) and a set of maxims which he regards as paralleling Grice's Maxims. Leech's PP states: "Minimize (other things being equal) the expression of impolite beliefs. ... Maximize (other things being equal) the expression of polite beliefs" (1983: 81).

So, for example, recently my niece asked me if I liked her new shoes – bright pink plastic sandals, decorated with glitter. I thought they were ghastly, but rather than saying "I think they're awful," I replied "they look really cool." The Politeness Principle accounts for my nicely ambiguous response, which was strictly truthful but minimized the expression of my very impolite beliefs about her shoes.


Leech's set of maxims is very much larger than Grice's four, and while some are very general, others (such as the Pollyanna Principle) are "somewhat idiosyncratic," to quote Thomas (1995: 160). The more general ones are the maxims of Modesty, Tact, Generosity, Approbation, Agreement, and Sympathy. The Modesty Maxim, for example, states "Minimize the expression of praise of self; maximize the expression of dispraise of self." Obviously this maxim applies differentially in different cultures. In parts of Asia, including Japan, for instance, this maxim takes precedence over the Agreement Maxim, which states that agreement should be maximized and disagreement minimized. Hence Japanese and Indonesian students in New Zealand often reject a compliment, denying it is applicable to them, as illustrated in examples 7 and 8.

(7) Context: Two Malay students in Wellington, New Zealand. This interaction takes place in English.
S: eeee, nice stockings
R: ugh! there are so many holes already

(8) Context: Teacher to a Japanese student who is waiting outside the teacher's room in the corridor
T: what a beautiful blouse
S: [looks down and shakes her head] no no
T: but it looks lovely
S: [stays silent but continues to look down]

People from Western cultures, on the other hand, are more likely to allow the Agreement Maxim to override the Modesty Maxim, and this accounts for the greater tendency among New Zealanders to agree with a compliment, although they may downgrade it or shift the credit for the object of the praise, as illustrated in example 9.

(9) M: that's a snazzy scarf you're wearing
S: yeah it's nice isn't it my mother sent it for my birthday

Leech's maxims thus provide a way of accounting for a number of cross-cultural differences in politeness behavior, as well as in perceptions of what counts as polite in different cultures and subcultures.

The main problem with Leech's approach to the analysis of politeness, as a number of critics have pointed out (e.g., Thomas, 1995; Brown and Levinson, 1987; Fraser, 1990), is that there is no motivated way of restricting the number of maxims. This means it is difficult to falsify the theory, since any new problem can be countered by the development of yet another maxim. Thomas (1995: 168) suggests that the maxims are better treated as a series of social-psychological constraints on pragmatic choices, which differ in

their relative importance in different cultures. Adopting this approach, their open-endedness is not such a problem.

Post-Modern Approaches to Politeness

More recently a number of researchers have adopted a post-modern approach to the analysis of politeness, challenging the "transmission model of communication" (Mills, 2003: 69), and questioning the proposition that people necessarily agree on what constitutes polite behavior (e.g., Eelen, 2001; Locher, 2004; Mills, 2003; Watts, 2003). Researchers such as Brown and Levinson (1987), Leech (1983), and Thomas (1995: 204–205) support their analyses of the interactional meaning of an exchange with evidence such as the effect of an utterance on the addressee, and by referring to metalinguistic commentary and the development of the subsequent discourse. By contrast, post-modernist researchers eschew any suggestion that the meaning of an utterance can be pinned down. They emphasize the dynamic and indeterminate nature of meaning in interaction, including the expression of politeness. This approach emphasizes the subjectivity of judgements of what counts as polite behavior; meaning is co-constructed, and hence politeness is a matter of negotiation between participants. Adopting this framework, interaction is regarded as a dynamic discursive struggle, with the possibility that different participants may interpret the same interaction quite differently.

Gino Eelen (2001) led this revolution in politeness research with his in-depth critique of earlier so-called structuralist, positivist, or objective approaches to the analysis of linguistic politeness. Following Bourdieu, he provides a detailed outline of a post-modern approach which synthesizes subjective and objective approaches by focusing on social practice. Eelen makes a fundamental distinction between what he called 'first-order (im)politeness,' referring to a common-sense, folk, or lay interpretation of (im)politeness, and 'second-order (im)politeness,' referring to (im)politeness as a concept in sociolinguistic theory (2001: 30). He also uses the term 'expressive politeness' (2001: 35) to describe instances where participants explicitly aim to produce polite language: e.g., use of polite formulae such as please or I beg your pardon. These terms, or at least the distinctions they mark, have proved useful to others who have developed Eelen's approach in different ways.

Mills (2003), for instance, uses this framework to analyze the role of politeness in the construction of gendered identities in interaction. She dismisses attempts to capture generalizations and to develop a


universal theory of politeness, arguing that politeness is a contentious concept. Her approach places particular emphasis on the role of stereotypes and norms as influences on people's judgements of gender-appropriate politeness behavior. In assessing the politeness of an act you have to make a judgement of 'appropriateness' "in relation to the perceived norms of the situation, the CofP [community of practice], or the perceived norms of the society as a whole" (2003: 77; though it must be said that it is not clear how the analyst establishes such judgments, especially since post-modernists strongly critique quantitative methodology, and tend to rely on rather small data sets).

The following excerpt from the film Getting to Our Place illustrates the relevance of perceived norms in the interpretation of the degrees of politeness and impoliteness expressed in a particular interaction between two New Zealanders. The interaction involves the powerful and influential Sir Ron Trotter, Chairman of the Board planning the development of the New Zealand National Museum, and Cliff Whiting, the highly respected Maori museum CEO. The museum is to include within it a marae, a traditional Maori meeting house and surrounding area for speech-making, for which Cliff Whiting is responsible. At the beginning of the excerpt, Sir Ron Trotter is just finishing a statement of how he sees the museum marae as being a place where pakeha (non-Maori) will feel comfortable (whereas most New Zealand marae are built by and for Maori in tribal areas).

Transcript 3: Excerpt from the film Getting to Our Place
[+ marks a pause; overlapping speech is enclosed between / and \; (...) indicates an unclear word. Strong STRESS is marked by capitalization.]
RT: but comfortable and warm and + part of the place ++ for any Pakeha who er ++ part of the (...) that we talked about in the concept of we're trying to + develop
CW: there are two main fields that have to be explored and er + the one that is most important is its customary role in the first place because marae comes on and comes from + the tangata whenua who are Maori ++ /to change it\
RT: /but it's not just\ for Maori
CW: /no\
RT: you you MUST get that if it is a Maori institution and nothing more. THIS marae has failed + and they MUST get that idea
CW: /how\
RT: because
CW: /(...)\
RT [shouts]: we are bicultural + bicultural (talks about two) and if it is going to be totally Maori ++ and all + driven by Maori protocols and without regard for the life museum is a is a Pakeha concept

Many New Zealanders viewing this episode perceive Sir Ron's behavior as rude, and specifically comment on his disruptive interruption of Cliff Whiting's verbal contribution, and on the way Sir Ron Trotter raises the volume of his voice as he talks over the top of others. This assessment and interpretation of the interactional meaning of what is going on here draws on generally recognized norms for interaction in New Zealand society. In fact, analyses of cross-cultural differences in interactional patterns between Maori and Pakeha (Stubbe and Holmes, 1999) suggest that this disruption would be perceived as even more impolite by a Maori audience, since Maori discourse norms, even in casual conversation, permit one speaker to finish before another speaker makes a contribution. Stereotypical expectations and norms are thus an important contributing factor in accounting for different people's judgements of the relative politeness or impoliteness of particular interactions.

Focusing on common-sense (im)politeness (i.e., Eelen's first-order (im)politeness), Watts (2003) also pays attention to the relevance of affect and sincerity judgments in an approach which emphasizes that politeness strategies may be used strategically and manipulatively. His particular contribution to research on linguistic politeness is a distinction between what he calls 'politic' behavior, i.e., socially constrained politeness, and strategic politeness, where the speaker goes beyond what is required (2003: 4). Politic behavior is "what the participants would expect to happen in this situation, and it is therefore not polite" (2003: 258). It is 'appropriate,' 'non-salient,' and 'expectable' (2003: 256–257). Polite behavior is "behavior in excess of politic behavior" (2003: 259); it is marked behavior indicating the speaker's wish to express concern or respect for the addressee (Locher, 2004). It is moreover an area where subjective judgements become relevant, and is thus an area of dispute: "not everyone agrees about what constitutes polite language usage" (Watts, 2003: 252).

As an example, Watts argues that there are alternative possible interpretations of the following contribution from a politician in a television debate: "can I come back on Mandy's point because I think this is one aspects of TVEI which has been really underemphasised tonight." He suggests that "some commentators might assess his expression 'can I come back on Mandy's point' ... as polite behavior; others might suggest ... that, far from being genuinely polite, he is only simulating politeness and is in reality currying favour with the person he is addressing or some other person or set of persons" (2003: 3). It is interesting to note that while both Mills and Watts highlight the


indeterminacy of meaning, both researchers frequently assign interpretations quite confidently in discussing their examples.

Having outlined a number of approaches to analyzing politeness, and indicated some of their strengths and weaknesses, the final sections of this article discuss research on the interaction of linguistic politeness with different social and cultural variables.

Social Variables and Politeness

As the discussion has indicated and the examples have illustrated, the ways in which people express or negotiate politeness are obviously influenced by a range of sociocultural variables, including power relationships, degrees of solidarity, intimacy, or social distance, the level of formality of the interaction or speech event, the gender, age, ethnicity, and social class backgrounds of participants, and so on. Kasper (1997: 382–383) provides a very extensive list of data-based studies which investigate the relevance and the complexity in sociolinguistic and sociopragmatic research of Brown and Levinson's three social variables (P, D, R), each of which is a composite sociocultural construct. The core variationist literature, however, which pays careful attention to such social variables, and which adopts rigorous statistically-based quantitative measures, rarely explicitly addresses the expression of politeness per se, though some have argued, controversially, that politeness may entail using more standard speech forms (Deuchar, 1988), and research on style offers potential insights into the interaction between formality and politeness.

Classifying women and men as members of different (sub-)cultures led to some interesting insights in language and gender research (see Gender) about the relativity of particular discourse features. So, for instance, an 'interruption' might be perceived as disruptive by one group but as supportive by another; and back-channeling or 'minimal feedback' (mm, yeah, right) and certain pragmatic particles (you know, I think) function variably and complexly as markers of (im)politeness in the usage of different social groups (e.g., Holmes, 1995). Tannen (1990), Coates (1996), and Holmes (1995), for instance, identified linguistic and pragmatic features of (respectively American, British, and New Zealand) English which were widely regarded as indicators of more polite speech, and which tended to be used more frequently in certain social contexts by middle-class, professional, and majority group women than by men from similar backgrounds. Extending such analyses, which introduced a qualitative dimension to the analysis of linguistic and

pragmatic features, the work of a number of discourse analysts further explores the influence of a range of social variables on the expression of politeness. Speech act research provides a particularly rich source of insights into the diversity and complexity of different influences on politeness. See Kasper (1997) for a summary of relevant research up to the mid-1990s.

Locher (2004) examines the interaction of power and politeness in the expression of disagreements in a family, at a business meeting, and in a political interview involving President Clinton. She demonstrates that power is most usefully regarded as dynamic, relational, and contestable, and that while participants of very different statuses exercise power as well as politeness in their use of discourse in context, status tends to influence the degree of negotiability of a disagreement in an interaction, along with many other factors, including the topic's degree of controversiality, the participants' degree of familiarity with the topic, and their speaking style, cultural backgrounds, and gender. She also notes great variability in the amount and degree, and even the discursive positioning, of politeness or relational work which accompanies the exercise of power in disagreements in the interaction.

Mills (2003) also discusses the relevance of politeness as a factor in the construction of gender identity, especially in British English society. Holmes and Stubbe (2003) describe the interaction of power and politeness in a wide range of New Zealand workplaces, and Harris (2003) applies politeness theory to talk collected in British magistrates' courts, doctors' surgeries, and a police station. Researchers taking this approach highlight the complexities of spoken interaction, and the importance of taking account of the differing and intersecting influences of different social factors (e.g., age, ethnicity, social class, gender) as well as contextual factors (e.g., power and social distance relations, social and institutional role, formality of the speech event, and speech activity) in accounting for the complex ways in which politeness is expressed and interpreted in the very different situations they analyze.

Cross-Cultural Analyses of (Im)Politeness

Politeness is also conceptualized and expressed very differently in different cultures (e.g., Sifianou, 1992; Kasper and Blum-Kulka, 1993; Ting-Toomey, 1994; Scollon and Scollon, 1995; Spencer-Oatey, 2000). Nwoye (1989), for example, illustrates the strategic use of euphemisms and proverbs as means of expressing face-threatening acts politely in interactions between the Igbo of Southeastern Nigeria. Using a unified theoretical framework and methodology


involving discourse completion tasks which has subsequently been very influential and widely applied, Blum-Kulka et al. (1989) provide information on contrasting patterns in the (reported) use of politeness strategies in speech acts such as apologies and requests in a number of languages, including English, Hebrew, Canadian French, Argentinian Spanish, German, and Danish (see also House and Kasper, 1981). This approach has been applied and adapted for many different languages and with many different speech acts (e.g., Boxer, 1993; Márquez-Reiter, 2000; and many, many more).

Focusing just on Europe, Hickey and Stewart's (2004) very useful collection of papers provides information on linguistic politeness strategies in 22 European societies, ranging from Germany, Ireland, and Belgium in western Europe to Norway, Denmark, and Finland in northern Europe; Poland, Hungary, and the Czech Republic in eastern Europe; and Greece, Cyprus, and Spain in southern Europe. The papers in Bayraktaroglu and Sifianou (2001), on the other hand, focus just on Greek and Turkish but provide information on realizations of politeness in social contexts ranging from classrooms to television interviews. The role of code-switching in the expression of politeness is also relevant in cross-cultural analyses, as illustrated, for example, in a study of how London Greek-Cypriot women exploit the fact that directness is more acceptable in Greek than in English, and thus code-switch to Greek to express positive politeness in ethnically appropriate ways (Gardner-Chloros and Finnis, 2003). Greek words, phrases, and clauses are inserted in English macro-structures to soften the effect of a direct criticism, for instance, or an expression of irritation or a demand for a response is interactionally managed by shifting to Greek.

Expanding consideration to other times and cultures has led researchers to challenge some of the assumptions made about conceptions of linguistic politeness in early models. Even confining attention to English-speaking societies, there is a good deal of variation in what is included in commonsense understandings of what constitutes polite behavior. As Watts points out, understandings "range from socially 'correct' or appropriate behavior, through cultivated behavior, considerateness displayed to others, self-effacing behavior, to negative attributions such as standoffishness, haughtiness, insincerity, etc." (2003: 8–9). Nevertheless, a good deal of early research reflected a rather Western and even middle-class British English conception of politeness. These ethnocentric assumptions were often challenged by researchers from other cultures.

Researchers on Asian cultures, for instance, pointed to the importance of recognizing that in some

languages, a speaker's use of certain polite expressions (and specifically honorifics) is a matter of social convention ('discernment') or social indexing (Kasper, 1990) rather than strategic choice (e.g., Ide et al., 1992; Matsumoto, 1989; Usami, 2002; Mao, 1994). (Fukushima (2000) argues that social indexing is a sociolinguistic rather than a pragmatic matter and as such is irrelevant to the analysis of [strategic] politeness.) These researchers point out that Western conceptions of 'face' are very individualistic, and approaches to politeness based on such conceptions do not account satisfactorily for more socially based notions, such as the twin Chinese concepts of 'mien-tzu' (or 'mianzi') and 'lien' (or 'lian'). 'Mien-tzu' refers to "prestige that is accumulated by means of personal effort or clever maneuvering," and is dependent on the external environment (Hu, 1944: 45), while 'lien' is the respect assigned by one's social group on the basis of observed fulfilment of social obligations and moral integrity. Loss of 'lien' makes it impossible for a person to function properly within the community. "Lien is both a social sanction for enforcing moral standards and an internalized sanction" (Hu, 1944: 45).

This is a rather different conception of face than that used in Brown and Levinson's theory, and it influences conceptions of what is considered 'polite' as opposed to required by social sanction and sociolinguistic norms. So, for example, in some languages (e.g., Chinese, Japanese, Korean) choice of stylistic level and address forms is largely a matter of social convention or 'linguistic etiquette'; respect or deference is encoded in certain linguistic forms which are required when talking to one's elders or those of higher status, for instance. It has been argued that such sociolinguistically prescribed deference behavior must first be taken into account in assessing 'politeness' (Usami, 2002). (There is an obvious parallel here with Watts' 'polite/politic' distinction mentioned above, which was formulated in part to take account of this cross-cultural issue.) So, in assessing politeness, Chinese participants, for instance, evaluate both whether an appropriate degree of socially prescribed respect or deference has been expressed, as well as the extent to which the addressee's face needs are addressed discursively in any specific interaction (Lee-Wong, 2000; Usami, 2002). Lee-Wong shows, for instance, that sociocultural values such as 'sincerity,' 'respect,' and 'consideration' are crucially involved in a Chinese speaker's perception and conceptualization of politeness. Moreover, in such societies the discursive expression of politeness generally involves the use of avoidance and mitigation strategies (i.e., Brown and Levinson's negative politeness strategies), and even address terms are extensively used in this way.


By contrast, in communities where social relationships are not marked so formally or encoded so explicitly in the grammar or lexicon, politeness is expressed somewhat differently. Greek interactants' view of politeness, for instance, focuses around expressions of concern, consideration, friendliness, and intimacy, rather than imposition-avoidance and distance maintenance (Sifianou, 1992). Similarly, Bentahila and Davies (1989) claim that Moroccan Arabic culture places greater weight on positive politeness than does British English culture, which often functions implicitly as the unacknowledged norm in politeness research.

Overall, then, it is apparent that the area of the cross-cultural expression of linguistic politeness requires careful negotiation, as ethnocentric assumptions are an ever-present danger. Nonetheless, the burgeoning of research in this area in recent years, especially involving Asian researchers, suggests that better understandings of what is meant by linguistic politeness in different cultures are steadily being forged.

Impoliteness

Finally, a brief comment on impoliteness, which, despite being less researched, is at least as complex a matter as linguistic politeness. Watts comments that since breaches of politeness are salient, while conforming to politeness norms goes unnoticed, one would expect impoliteness to have attracted more attention than it has. He summarizes research on linguistic impoliteness in one brief paragraph (2003: 5), encompassing research on rude or even insulting behavior in a variety of communities, including middle-class white New Zealanders (Austin, 1990; see also Kuiper, 1991).

Austin draws on Relevance Theory to account for behavior perceived as intentionally rude. She introduced the useful term Face Attack Acts for what she calls "the dark side of politeness" (1990: 277), namely speech acts perceived as deliberately intended to insult the addressee. She provides a fascinating range of examples, including the following (from Austin, 1990: 282):

(10) Context: Transactional interaction between a member of the business community and a well-educated middle-class woman [the author, Paddy Austin].
A: Now that will be Miss, won't it?
B: No, Ms.
A: Oh, one of those.

Austin details the contextual inferencing which led her to interpret as an insult A’s response to the information that she preferred the title Ms.

By contrast to such a subtle and indirect instance of impoliteness, one might think that swearing at someone would always qualify as impolite behavior, but there is a range of research illustrating that swearing serves many different functions and that even when addressed to another person, it may serve a positive politeness solidarity function, rather than acting as an insult (see, for example, Daly et al., 2004).

Some researchers incorporate the analysis of impoliteness within the same theoretical framework as politeness. Indeed, Eelen (2001) and Watts (2003) use the formulation (im)politeness to signal this. Mills (2003: 124) stresses that impoliteness is not the polar opposite of politeness, but her discussion (2003: 135 ff.) suggests that impoliteness can be dealt with using the same analytical concepts as those relevant to the analysis of politeness (appropriacy, face threat, social identity, societal stereotypes, community norms). So, for example, the utterances produced in the Prime Minister's Question Time in the English House of Commons do not qualify as impolite, despite superficially appearing as if they might, because they are generally assessed as perfectly appropriate in this context (2003: 127). Thomas (1995: 157) suggests some speech acts "seem almost inherently impolite": e.g., asking someone to desist from behavior considered very offensive; in such cases the linguistic choice made will be irrelevant to politeness judgments.

Where Next?

It seems likely that exploring what counts as linguistic impoliteness will prove a challenging area for future research. Formulating a satisfactory definition of impoliteness will certainly provide a challenge for those attempting to develop adequate theoretical frameworks, as well as providing a robust testing ground for claims of universality and cross-cultural relevance, always assuming, of course, that researchers accept these as legitimate and useful goals for future research in the area of pragmatics and politeness.

Another relatively recent development is the exploration of the broader concept of 'relational practice' (Fletcher, 1999; Locher, 2004). Both solidarity-oriented, positive politeness and distance-oriented, negative politeness are fundamental components of relational practice, and it seems likely that this will be another fruitful direction for future research in the area of linguistic politeness. Finally, the use of different languages as strategic resources in balancing different social pressures is another area where insights into cross-cultural politeness seem likely to continue to emerge over the next decade.

See also: Context and Common Ground; Conventions in Language; Gender; Irony; Jargon; Memes; Neo-Gricean Pragmatics; Politeness; Semantics–Pragmatics Boundary; Taboo, Euphemism, and Political Correctness; Taboo Words.

Bibliography

Austin P (1990). 'Politeness revisited: the "dark side".' In Bell A & Holmes J (eds.) New Zealand ways of speaking English. Clevedon: Multilingual Matters. 277–293.
Bayraktaroglu A & Sifianou M (eds.) (2001). Linguistic politeness across boundaries: the case of Greek and Turkish. Amsterdam: John Benjamins.
Bentahila A & Davies E (1989). 'Culture and language use: a problem for foreign language teaching.' IRAL 27(2), 99–112.
Blum-Kulka S, House J & Kasper G (1989). Cross-cultural pragmatics: requests and apologies. Norwood, NJ: Ablex.
Boxer D (1993). Complaining and commiserating: a speech act view of solidarity in spoken American English. New York: Peter Lang.
Brown P (1979). 'Language, interaction, and sex roles in a Mayan community: a study of politeness and the position of women.' Ph.D. diss., Berkeley: University of California.
Brown P & Levinson S (1978). 'Universals in language usage: politeness phenomena.' In Goody E (ed.) Questions and politeness. Cambridge: Cambridge University Press. 56–289.
Brown P & Levinson S (1987). Universals in language usage. Cambridge: Cambridge University Press.
Coates J (1996). Woman talk: conversation between women friends. Oxford: Blackwell.
Coupland N, Grainger K & Coupland J (1988). 'Politeness in context: intergenerational issues (Review article).' Language in Society 17, 253–262.
Craig R, Tracy K & Spisak F (1986). 'The discourse of requests: assessment of a politeness approach.' Human Communication Research 12(4), 437–468.
Daly N, Holmes J, Newton J & Stubbe M (2004). 'Expletives as solidarity signals in FTAs on the factory floor.' Journal of Pragmatics 36(5), 945–964.
Deuchar M (1988). 'A pragmatic account of women's use of standard speech.' In Coates J & Cameron D (eds.) Women in their speech communities. London: Longman. 27–32.
Eelen G (2001). A critique of politeness theories. Manchester and Northampton: St. Jerome.
Fletcher J (1999). Disappearing acts: gender, power, and relational practice at work. Cambridge: MIT.
Fraser B (1978). 'Acquiring social competence in a second language.' RELC Journal 9(2), 1–21.
Fraser B (1990). 'Perspectives on politeness.' Journal of Pragmatics 14(2), 219–236.
Fukushima S (2000). Requests and culture: politeness in British English and Japanese. Bern: Peter Lang.

Gardner-Chloros P & Finnis K (2003). 'How code-switching mediates politeness: gender-related speech among London Greek-Cypriots.' Estudios de Sociolingüística 4(2), 505–532.
Grice H P (1975). 'Logic and conversation.' In Cole P & Morgan J (eds.) Syntax and semantics, 3: speech acts. New York: Academic Press. 41–58.
Harris S (2003). 'Politeness and power: making and responding to "requests" in institutional settings.' Text 21(1), 27–52.
Hickey L & Stewart M (eds.) (2004). Politeness in Europe. Clevedon, Avon: Multilingual Matters.
Holmes J (1995). Women, men, and politeness. London: Longman.
Holmes J & Stubbe M (2003). Power and politeness in the workplace: a sociolinguistic analysis of talk at work. London: Longman.
House J & Kasper G (1981). 'Politeness markers in English and German.' In Coulmas F (ed.) Conversational routine: explorations in standardized communication and prepatterned speech. The Hague: Mouton. 289–304.
Hu H (1944). 'The Chinese concepts of "face".' American Anthropologist 46, 45–64.
Ide S, Hill B, Ogino T & Kawasaki A (1992). 'The concept of politeness: an empirical study of American English and Japanese.' In Watts R, Ide S & Ehrlich K (eds.) Politeness in language: study in its history, theory, and practice. Berlin: Mouton de Gruyter. 281–297.
Kasper G (1990). 'Linguistic politeness: current research issues.' Journal of Pragmatics 14, 193–218.
Kasper G (1997). 'Linguistic etiquette.' In Coulmas F (ed.) The handbook of sociolinguistics. Oxford: Blackwell.
Kasper G & Blum-Kulka S (eds.) (1993). Interlanguage pragmatics. Oxford: Oxford University Press.
Kuiper K (1991). 'Sporting formulae in New Zealand English: two models of male solidarity.' In Cheshire J (ed.) English around the world. Cambridge: Cambridge University Press. 200–209.
Lakoff R T (1973). 'The logic of politeness; or minding your p's and q's.' Papers from the Ninth Regional Meeting of the Chicago Linguistic Society (1973), 292–305.
Lakoff R T (1975). Language and woman's place. New York: Harper.
Lakoff R T (1979). 'Stylistic strategies within a grammar of style.' In Orasanu J, Slater M K & Adler L L (eds.) Language, sex, and gender: does la différence make a difference? New York: The Annals of the New York Academy of Sciences. 53–80.
Leech G (1983). Principles of pragmatics. London: Longman.
Lee-Wong S M (2000). Politeness and face in Chinese culture. Frankfurt: Peter Lang.
Locher M (2004). Power and politeness in action: disagreements in oral communication. Berlin: Mouton de Gruyter.
Mao L R (1994). 'Beyond politeness theory: "face" revisited and renewed.' Journal of Pragmatics 21, 451–486.
Márquez-Reiter R (2000). Linguistic politeness in Britain and Uruguay. Amsterdam: John Benjamins.

Matsumoto Y (1989). 'Politeness and conversational universals: observations from Japanese.' Multilingua 8(2–3), 207–221.
Mills S (2003). Gender and politeness. Cambridge: Cambridge University Press.
Nwoye O (1989). 'Linguistic politeness in Igbo.' Multilingua 8(2–3), 249–258.
Scollon R & Scollon S W (1995). Intercultural communication. Oxford: Blackwell.
Shimanoff S (1977). 'Investigating politeness.' Discourse Across Time and Space: Southern California Occasional Papers in Linguistics 5, 213–241.
Sifianou M (1992). Politeness phenomena in England and Greece. Oxford: Clarendon.
Spencer-Oatey H (2000). Culturally speaking: managing rapport through talk across cultures. London and New York: Continuum.
Stubbe M & Holmes J (1999). 'Talking Maori or Pakeha in English: signalling identity in discourse.' In Bell A & Kuiper K (eds.) New Zealand English. Amsterdam: John Benjamins; Wellington: Victoria University Press. 249–278.

Tannen D (1990). You just don't understand: women and men in conversation. New York: William Morrow.
Thomas J (1995). Meaning in interaction: an introduction to pragmatics. London: Longman.
Ting-Toomey S (ed.) (1994). The challenge of facework: cross-cultural and interpersonal issues. Albany, NY: University of New York Press.
Usami M (2002). Discourse politeness in Japanese conversation: some implications for a universal theory of politeness. Tokyo: Hituzi Syobo.
Watts R (2003). Politeness. Cambridge: Cambridge University Press.
Watts R, Ide S & Ehrlich K (1992). 'Introduction.' In Watts R, Ide S & Ehrlich K (eds.) Politeness in language: study in its history, theory and practice. Berlin: Mouton de Gruyter. 1–17.

Polysemy and Homonymy
A Koskela and M L Murphy, University of Sussex, Brighton, UK
© 2006 Elsevier Ltd. All rights reserved.

Polysemy and homonymy both involve the association of a particular linguistic form with multiple meanings, thus giving rise to lexical ambiguity. Polysemy is rooted in a variety of semantic-pragmatic processes or relations through which meanings of words extend or shift so that a single lexical item (a polyseme) has several distinct senses. For example, language is polysemous in that it can be used to refer to the human linguistic capacity (Language evolved gradually) or to a particular grammar and lexis (Learn a new language!). The most clear-cut cases of polysemy (versus homonymy) involve systematic (or regular) polysemy (Apresjan, 1974), in which the relation between the senses is predictable in that any word of a particular semantic class potentially has the same variety of meanings. For example, words for openable coverings of apertures in built structures (She rested against the door/gate/window) are also used to refer to the aperture itself (Go through the door/gate/window). In non-systematic polysemy, the word’s two senses are semantically related, but are not part of a larger pattern, as for arm of government versus human arm. Within the literature, theoretical considerations often lead authors to use polysemy to refer only to either systematic or non-systematic polysemy, and so the term must be approached with caution.
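The predictability of systematic polysemy can be made concrete with a small sketch. The following toy Python fragment (the mini-lexicon, semantic-class labels, and rule name are all invented for illustration; it is not drawn from any published implementation) shows how a single rule stated over a semantic class yields the covering-aperture alternation for every member of that class at once:

    # Toy illustration of systematic (regular) polysemy: one rule over a
    # semantic class derives a new sense for every member of the class.
    LEXICON = {
        "door":   {"class": "APERTURE_COVERING", "senses": ["movable barrier closing an aperture"]},
        "gate":   {"class": "APERTURE_COVERING", "senses": ["movable barrier closing an outdoor aperture"]},
        "window": {"class": "APERTURE_COVERING", "senses": ["glazed panel closing an aperture"]},
        "arm":    {"class": "BODY_PART",         "senses": ["upper limb"]},
    }

    def covering_to_aperture(entry):
        """Regular polysemy rule: any covering word also denotes the aperture itself."""
        if entry["class"] == "APERTURE_COVERING":
            return [s + "; also: the aperture it closes" for s in entry["senses"]]
        return []

    for word, entry in LEXICON.items():
        derived = covering_to_aperture(entry)
        if derived:
            print(word, "->", derived)

Run, this derives the aperture sense ('Go through the door/gate/window') for door, gate, and window alike, while arm is untouched: the alternation is stated over a class, not word by word, which is what makes such polysemy systematic rather than accidental.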

Homonyms, in contrast, are distinct lexemes that happen to share the same form. They arise either accidentally through phonological change or lexical borrowing, or through some semantic or morphological drift such that a previously polysemous form is no longer perceived as being 'the same word' in all its senses. Tattoo1 'an ink drawing in the skin' and tattoo2 'a military drum signal calling soldiers back to their quarters' provide a clear example of homonymy, in that the two words derive respectively from Polynesian ta-tau and Dutch taptoe 'turn off the tap' (which was also used idiomatically to mean 'to stop'). The formal identity of homonyms can involve both the phonological and the written form, as for tattoo. Homophones need only be pronounced the same, as in tail and tale, and homographs share written form, but not necessarily phonological form, as in wind /wɪnd/ and wind /waɪnd/. Homographs that are not homophones are also called heteronyms.

Polysemy and homonymy are generally differentiated in terms of whether the meanings are related or not: while polysemous lexical items involve a number of related meanings, homonyms, as accidentally similar words, do not have any semantic relation to each other. However, as discussed below, a clear distinction between polysemous and homonymous items remains difficult to draw. Nevertheless, the need to distinguish between them remains. For lexicographers, the distinction generally determines how many entries a dictionary has – homonyms are treated as multiple entries, but all of a polyseme's senses are treated in one entry. For semanticists wishing to


discover constraints on or processes resulting in polysemy, weeding out the homonymous cases is necessary. For those wishing to model the mental representation of lexical knowledge, the issue of semantic (un)relatedness is similarly important.

Another definitional issue concerns the distinction between polysemy and vagueness, in which a lexical item has only one general sense. (Some authors refer to this as monosemy.) This raises the question of how different two usages of the same form need to be to count as distinct polysemous senses rather than different instantiations of a single underlying sense. For example, the word cousin can refer to either a male or a female, but most speakers (and linguists) would not view cousin as having distinct 'male cousin' and 'female cousin' senses. Instead, we regard it as vague with respect to gender. A number of different methodologies and criteria have been used to draw the line between polysemy and vagueness, but these are not uncontroversial. The distinction is also influenced by theoretical assumptions about the nature of lexical meaning; thus, demarcation of senses is one of the more controversial issues in lexical semantics.

Evidence Used in Differentiating Homonyms and Polysemes

The kinds of evidence used for differentiating polysemous and homonymous items can be roughly divided into two types: evidence regarding the relatedness of the meanings involved and evidence regarding any formal (morphosyntactic or phonological) differences in the linguistic form that correspond to the distinct senses.

The key principle for distinguishing polysemes and homonyms involves the relatedness of the meanings associated with the linguistic form: unrelated meanings indicate homonymy whereas related meanings imply polysemy. Relatedness can either be determined diachronically, by establishing if the senses have a common historical origin, or synchronically – and neither method is unproblematic. The simplest criterion for synchronic relatedness is whether or not the senses participate in the same semantic field (see Lexical Fields). On this criterion, metonymically related senses such as bench 'place where the judge sits' and 'the judge' are polysemous, but metaphorically related senses such as foot 'body part' and 'bed part' are not. Nevertheless, the two senses of foot here are usually considered to represent polysemy, as the more usual methodology for determining synchronic relatedness involves individuals' intuitions. Intuitive judgements of semantic relations are, however, always subjective, and it may be questioned whether people's metalinguistic reasoning about

meaning relations can reflect how lexical meaning is mentally represented. Historical evidence has in its favor the fact that etymological relations are often less equivocal; but many approaches consider such evidence irrelevant to questions of mental representation, as etymological information is not part of most speakers' competence. Furthermore, diachronic and synchronic evidence can be contradictory. In some cases, despite the fact that the different senses are etymologically related, native speakers today perceive no semantic relation. Such is the case with the classic homonym bank; the two senses 'financial institution' and 'a raised ridge of ground' can be traced back to the same proto-Germanic origin. There are also instances where at least some speakers perceive a synchronic relation in words with no shared history. For example, ear 'hearing organ' and ear (of corn) are perceived by some speakers to be metaphorically related, analogous to other body-part metaphors, but in fact they have separate sources.

Formal criteria are also used to distinguish homonymy and polysemy. One criterion, adopted widely in lexicography, is that polysemous senses should belong to the same grammatical category. Thus, noun and verb senses of words such as waltz, derived through zero derivation (conversion), might be considered distinct lexical items, and therefore homonyms. Similarly, the potential for different plural forms for the meanings of mouse, 'small rodent' and 'a computer input device' (mice and mouses, respectively), could indicate that the distinct senses reflect distinct homonymous lexemes. However, the formal criterion often conflicts with the semantic criteria. One can easily see the semantic connection between waltz (n.) and waltz (v.) as part of a pattern of regular polysemy (where any noun denoting a type of dance can also be a verb meaning 'to perform that dance'), and most people appreciate the metaphorical connection between the two types of mouse. But for some approaches to the lexicon, any morphological differences associated with different senses of the form force its treatment as a homonym, resulting in etymologically related homonyms.

The contradictions of the formal, diachronic, and synchronic semantic criteria have led some to abandon strict distinctions between homonyms, polysemes, and vague lexemes. Tuggy (1993) proposes a continuum between these categories, which relies on variable strengths of association among meanings and forms.

Theoretical Approaches to Polysemy and Homonymy

While we have so far attempted a theory-neutral definition and description of polysemy and homonymy,


variability in the interpretation of these terms is often at the heart of the contrasts among theoretical approaches. The approach is often determined by the range of phenomena considered (and vice versa) – for instance, some models define polysemy as equivalent to systematic polysemy and treat irregular cases as tantamount to homonymy.

In early generative theory, attempts were made to treat polysemy as the product of synchronic derivational processes. The lexical representation of a word would include a meaning, and further meanings could be derived via lexical rules that operate on some semantic class of words (e.g., McCawley, 1968; Leech, 1974). Leech (1974) notes that such rules are limited in their productivity, and can be seen as motivating but not predicting new senses of words.

The assumption of a basic meaning from which additional meanings are derived continues as a theme in some pragmatic approaches to polysemy. For instance, Nunberg (1978) posits that words have lexical meanings but that they can be used to refer in various ways based on a number of conventional referring functions that allow language users to effect different sense extensions. Thus, a referring function that relates producers with their products predicts the interpretation of Chomsky (primary reference is to a person) as 'the works of Chomsky' in Chomsky is hard to read. Other theorists have taken the position that word meaning is radically underspecified in the lexicon (e.g., Bierwisch, 1983) or extremely general in sense (Ruhl, 1989) and that semantic or pragmatic factors allow for more specific interpretations in context. Such an approach is Blutner's (1998) Lexical Pragmatics, where lexical meaning is highly underspecified. Meanings are enriched by a pragmatic mechanism that specifies which particular concepts the word refers to in a particular use, based on contextual factors. Such approaches erase the distinction between polysemy and vagueness.

The generative lexicon (Pustejovsky, 1995) is an approach to lexical semantics that emerges from computational linguistic work. Sense disambiguation presents a key challenge for natural language processing and drives much current polysemy research. In this theory, systematic polysemy is generated by lexical rules of composition that operate on semantic components specified in the representation of lexical items (and so it could be seen as a development from the early generative approaches). The main concern is to account for regular types of meaning alternation (such as the aperture-covering alternation of door and adjectival meaning variation, as in fast car, fast typist, fast decision, fast road, etc.) in a manner that avoids the problems of simple enumeration of word senses in the lexicon and explicates the systematic nature of meaning variation in relation to the syntagmatic linguistic environment. The meaning variation of fast, for example, is accounted for by treating the adjective as an event predicate which modifies an event that is specified as the head noun's function as part of its compositional lexical representation.
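To give a concrete sense of the disambiguation task just mentioned, here is a deliberately crude, enumeration-based baseline in the spirit of the classic Lesk dictionary-overlap heuristic. It is a toy sketch only: the two-sense entry for bank is invented for illustration, and Pustejovsky's proposal is precisely an attempt to move beyond such enumerated sense lists.

    # Toy word sense disambiguation via gloss-context word overlap
    # (a Lesk-style baseline; the sense glosses are invented).
    SENSES = {
        "bank/1": "financial institution that accepts deposits and lends money",
        "bank/2": "raised ridge of ground, especially sloping land beside a river",
    }

    def disambiguate(word_senses, context):
        """Pick the sense whose gloss shares the most words with the context."""
        context_words = set(context.lower().split())
        def overlap(gloss):
            return len(set(gloss.split()) & context_words)
        return max(word_senses, key=lambda s: overlap(word_senses[s]))

    print(disambiguate(SENSES, "she deposited the money in the bank"))    # bank/1
    print(disambiguate(SENSES, "they fished from the bank of the river"))  # bank/2

Even this trivial heuristic illustrates why sense enumeration is problematic: every new context potentially demands another listed sense, whereas generative approaches try to compute the contextual reading from a richer single entry.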

Another significant strand of polysemy research proceeds within Cognitive Linguistic approaches (see Cognitive Semantics). The general aim is to study the kinds of conceptual processes that motivate the multiple meanings of linguistic forms and how these meanings may be grounded in human experience. As it is argued that lexical categories exhibit the same kind of prototype structure as other conceptual categories, the relations of polysemous senses are usually modeled in terms of radial, family resemblance categories. In these polysemy networks, the senses are typically either directly or indirectly related to a prototypical sense through such meaning extension processes as conceptual metaphor and metonymy and image-schema transformations (Lakoff, 1987). As Cognitive Linguists also assume that all of linguistic structure, including grammar, is meaningful, some work within this approach has extended the applicability of the notions of polysemy and homonymy beyond lexical semantics to grammatical categories and constructions (Goldberg, 1995). While most other approaches dismiss homonymy as accidental and uninteresting, the relevance of diachronic processes to meaning representation in Cognitive Linguistics means that making hard and fast distinctions between homonymy, polysemy, and vagueness is not necessary (Tuggy, 1993; Geeraerts, 1993).

Polysemy and homonymy thus represent two central notions in lexical semantics, inspiring active research interest.

See also: Categorizing Percepts: Vantage Theory; Cognitive Semantics; Concepts; Dictionaries; Frame Semantics; Hyponymy and Hyperonymy; Lexical Fields; Lexical Semantics; Lexicology; Meronymy; WordNet(s).

Bibliography

Apresjan J (1974). 'Regular polysemy.' Linguistics 142, 5–33.
Bierwisch M (1983). 'Semantische und konzeptuelle Repräsentation lexikalischer Einheiten.' In Růžička R & Motsch W (eds.) Untersuchungen zur Semantik. Berlin: Akademie-Verlag.
Blutner R (1998). 'Lexical pragmatics.' Journal of Semantics 15, 115–162.
Geeraerts D (1993). 'Vagueness's puzzles, polysemy's vagaries.' Cognitive Linguistics 4, 223–272.
Goldberg A E (1995). Constructions. Chicago: University of Chicago Press.

Lakoff G (1987). Women, fire, and dangerous things. Chicago: University of Chicago Press.
Leech G (1974). Semantics. Harmondsworth: Penguin.
McCawley J D (1968). 'The role of semantics in a grammar.' In Bach E & Harms R T (eds.) Universals in linguistic theory. New York: Holt, Rinehart & Winston. 124–169.
Nunberg G (1978). The pragmatics of reference. Bloomington, IN: Indiana University Linguistics Club.

Pethő G (2001). 'What is polysemy?' In Németh E & Bibok K (eds.) Pragmatics and the flexibility of word meaning. Amsterdam: Elsevier. 175–224.
Pustejovsky J (1995). The generative lexicon. Cambridge, MA: MIT Press.
Ravin Y & Leacock C (eds.) (2000). Polysemy. Oxford: Oxford University Press.
Ruhl C (1989). On monosemy. Albany: SUNY Press.
Tuggy D (1993). 'Ambiguity, polysemy and vagueness.' Cognitive Linguistics 4, 273–290.

Possible Worlds
D Gregory, University of Sheffield, Sheffield, UK
© 2006 Elsevier Ltd. All rights reserved.

Anyone who reads contemporary philosophy will soon encounter talk of possible worlds, and anybody who delves a little deeper will quickly discover that there are various conflicting philosophical accounts of their nature. We consider those competing accounts, but it is helpful to begin with a discussion of the ends that possible worlds are commonly made to serve.

Possible worlds made their debut on the current philosophical scene through Kripke's work on the metamathematics of modal logics, but their current ubiquity probably owes more to their relationships to less esoteric matters. We commonly express claims about what is possible by using statements that appear to assert the existence of 'possibilities,' for example, or of 'ways things might have been.' So, for instance, somebody might assert, quite unexceptionably, that 'there's a possibility that the world will end in the next 10 years.' That habit has striking affinities with one of the characteristic principles concerning worlds, that what is possible at a given world w is what obtains at some world that is possible relative to w. Another central principle invoking worlds relates to necessity: what is necessary at a given world w is what holds at every world that is possible relative to w. That last principle in fact follows immediately from the previous one (and vice versa) on the assumption that worlds are complete, in the sense that each proposition is either true at a given world or false at it.

The above theses may be interpreted in numerous ways, depending upon how their talk of 'possibility,' 'necessity,' and 'possible worlds' is read. For instance, the first principle can be construed as concerning the physical possibilities at a given world – whatever is compatible with the fundamental physical nature of that world – so long as we restrict the 'worlds' there considered to those that are physically possible relative to the relevant possible world. And if we impose that restriction upon the possible worlds cited in the second principle, it can be treated as applying to physical necessities.
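The two principles can be stated compactly in the standard notation of Kripke semantics (notation not used in the article itself: ◇ and □ express possibility and necessity, and R is the relative-possibility, or accessibility, relation between worlds):

\[
\Diamond p \text{ holds at } w \iff \text{there is some world } w' \text{ with } wRw' \text{ at which } p \text{ holds}
\]
\[
\Box p \text{ holds at } w \iff p \text{ holds at every world } w' \text{ with } wRw'
\]

Given the completeness assumption mentioned above, the two are interdefinable: \(\Box p \leftrightarrow \neg\Diamond\neg p\). Reading R as physical accessibility yields the physical modalities discussed in the text.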

considered to those that are physically possible relative to the relevant possible world. And if we impose that restriction upon the possible worlds cited in the second principle, it can be treated as applying to physical necessities. Another central assumption involving possible worlds is that one among them is the actual world, at which precisely the actual truths obtain. That assumption combines with the various readings of the earlier theses relating to possibility and necessity to yield truth conditions for a wide variety of modal statements. So, for instance, it implies that it is actually physically necessary that P just in case P is physically necessary at the actual world. But the central hypothesis relating possible worlds to physical necessities implies that P is physically necessary at the actual world just in case P holds at every world that is physically possible relative to the actual world. The various theses just considered are perhaps the least contentious principles featuring possible worlds (another, more contentious but widely accepted use for worlds is in providing truth conditions for counterfactual conditionals, like ‘if tigers had 10 legs they would be cumbersome’). Elsewhere, contention is the order of the day. Thus some philosophers, like David Lewis, have claimed that possible worlds can be used to provide thoroughly nonmodal analyses of modal claims. But others, like Alvin Plantinga, have disagreed. And some philosophers – Lewis again, for instance – hope to reduce propositions to classes of worlds, whereas others prefer to follow Robert Adams and Arthur Prior in identifying possible worlds with special sorts of propositions, or with set-theoretical constructions based upon propositions. There is disagreement, then, over what it is reasonable to expect from possible worlds – that is, over what we can sensibly hope to use them for. Those differences are echoed in the varying theories of what possible worlds are. For it is commonly held that


the concept of a possible world is a functional one: possible worlds are those things that are fit to play certain roles in philosophical theorizing about modality and related matters. But if one philosopher thinks a certain range of roles should be filled by possible worlds, while another believes that worlds should serve a somewhat different range of roles, their accounts of which things are possible worlds may consequently diverge simply because the two philosophers have focused on groups of jobs that call for different occupants. At one end of the spectrum, Lewis attempted to make possible worlds perform an extraordinarily ambitious range of tasks (for a comprehensive outline of Lewis’s views on possible worlds, see Lewis, 1986). Lewis claimed that a possible world is a group of things that are all spatiotemporally related to one another and where each thing that is spatiotemporally related to something in the group is also among its occupants. So, for instance, the actual world contains precisely those things that stand in a spatiotemporal relation to you or me. Lewis used his nonmodal account of the nature of possible worlds to provide wholly nonmodal analyses of modal locutions. He also followed the common practice of using talk of possible worlds in formal semantical treatments of fragments of natural language. And he identified numerous types of entities that philosophers and nonphilosophers have posited, such as properties and propositions, with set-theoretical constructions founded on possible worlds and their inhabitants. Another important aspect of Lewis’s position is his denial that distinct possible worlds ever share any inhabitants. This led him to develop counterpart theory, according to which a statement regarding a certain individual belonging to a given possible world w – ‘Kant had a beard,’ for instance – is true in another possible world y just in case y contains a bearded entity that is sufficiently similar to Kant. Although Lewis argued with great virtuosity that his putative possible worlds would perform the various tasks that he considered and that we ought therefore to believe that his worlds exist, some of his theory’s obvious consequences are so implausible that few people have been willing to accept his conclusions. For instance, there might have been talking donkeys. So, Lewis claimed, there is a group of spatiotemporally interrelated items that includes a talking donkey. Hence, a talking donkey exists even if none actually exists – that is, a talking donkey exists even if we do not stand in any spatiotemporal relations to such a beast. Lewis placed great emphasis upon his theory’s provision of nonmodal analyses of modal locutions and argued that none of his view’s major competitors could also provide them. But some of his opponents

would distance themselves from Lewis’s reductionist aims. Indeed, a significant lacuna in Lewis’s case for his position is that he never provided a compelling account of why we should want nonmodal analyses of modal locutions. For unless one is already persuaded of the desirability of such analyses, one of the chief supposed virtues that Lewis claimed for his stance seems instead to be a mere curiosity. Following van Inwagen (1986), Lewis’s theory may be described as concretist, because his possible worlds and their inhabitants are concrete rather than abstract. Lewis’s view thus contrasted with the preponderance of accounts of possible worlds, on which possible worlds are identified with paradigmatically abstract items of some kind or another. For example, Adams (1974) identified possible worlds with special sets of propositions, where propositions were assumed to be a variety of abstract object; Plantinga (1974) identified possible worlds with a certain type of state of affairs, another putative type of abstract item; and Stalnaker (1976) identified worlds with a particular variety of ways things might have been, which he took to be a sort of property. Theorists who identify possible worlds with abstract entities – abstractionists, to use some more of van Inwagen’s terminology – need not endorse highly revisionary views concerning what concrete objects exist. That fact makes their general approach more immediately appealing than Lewis’s concretism. Of course, if abstractionists are to respect our ordinary modal opinions, they must still posit very many abstract objects; each total possible state of the world should correspond to a possible world. But we appear to accept that the abstract domain is immensely populous (we seem to believe in infinite collections of numbers, for instance), and that fact may make us jib less, perhaps irrationally, at the hefty ontologies that abstractionists require as compared to equally large concretist ontologies. Abstractionists nonetheless have some work to do if they are to make the ontological foundations of their theories credible. Russell’s discovery that the guiding principles of naive set theory are inconsistent showed that one cannot be assured that there really are abstract items answering to every intuitively appealing map of a portion of the abstract realm. Abstractionists therefore need to persuade us that the abstract things with which they identify possible worlds exist. To take a specific case, why should we believe in the abstract states of affairs that Plantinga equates with possible worlds? At the least, Plantinga should present a strong case that the conception of states of affairs that underlies his approach is consistent. Similar remarks apply to the other abstractionists mentioned earlier, Adams and Stalnaker.
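Whatever possible worlds are taken to be, the truth conditions they supply have a common shape, and it may help to set it out explicitly. The following is the standard Kripke-style symbolization of the two principles stated at the outset; the notation (a model M over a set of worlds with an accessibility relation R, read as ‘is possible relative to’) is supplied here for concreteness and is not the article’s own:

\[ M, w \models \Diamond\varphi \quad \text{iff} \quad \exists v\,(wRv \wedge M, v \models \varphi) \]
\[ M, w \models \Box\varphi \quad \text{iff} \quad \forall v\,(wRv \rightarrow M, v \models \varphi) \]

Reading R as physical accessibility yields the physical possibility and necessity discussed earlier, and the actual world is simply a distinguished world of the model at which precisely the actual truths obtain.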


As stated earlier in this article, Lewis argued that abstractionist accounts of possible worlds cannot provide nonmodal analyses of modal statements. So what can abstractionists do with their putative possible worlds? They can use them to supply truth conditions for many modal claims, for one thing, although those truth conditions may not be stated nonmodally. (On Adams’s theory, for instance, the supplied truth conditions will speak of ‘consistent’ sets of propositions). And they can provide interpretations of those philosophical discussions, which, rather than address the question of what possible worlds are, instead take possible worlds for granted and proceed to frame modal arguments and theses by speaking of them. While those tasks may seem trifling when compared to the more spectacular reductionist uses for possible worlds proposed by Lewis, they should not be dismissed – as noted at the outset, the spread of possible worlds through recent philosophy is, after all, owed to their use in precisely those ways and not to a widespread conviction that the modal ultimately boils down to the nonmodal. An apparently simple method, described clearly by Lewis at various points in his writings, has very often guided philosophical investigations into the nature of possible worlds: one lines up the various contending accounts; one then compares their costs and benefits; and one opts for the position that comes out best overall. But although that methodology looks straightforward, its correct application requires a prior determination of philosophical virtues and vices, and the latter task is not trivial. So, for instance, how are we to decide whether Lewis’s nonmodal theory of possible worlds should be preferred on that account to the modal theories frequently offered by abstractionists? And how are we to adjudicate the sometimes competing demands of commonsense modal opinion and theoretical elegance? Those and similar questions have perhaps been a little neglected by philosophers of modality, and further investigation into their answers might breathe life into a debate that has lately looked somewhat stalled. Accounts of possible worlds proliferate, but

attempts to figure out what precisely we should demand from such theories are surprisingly scarce. This would be understandable if the discussion were to manifest a high degree of well-grounded consensus on the underlying desiderata, but that condition is evidently unsatisfied: what looks like a philosophical imperative to one philosopher – the need to avoid modal primitives of one kind or another, say, or to ensure that first-order modal logic has certain inferential features – can often appear a mere fetish to another. Of course, it may be that the current debate merely reflects the fact that we cannot realistically aim for wholly satisfying answers to the sort of questions just identified; but that prognosis is a dismaying one that we should not lightly accept.

See also: Character versus Content; Concepts; Counterfactuals; Existence; Extensionality and Intensionality; Formal Semantics; Indexicality; Modal Logic; Montague Semantics; Philosophical Theories of Meaning; Temporal Logic.

Bibliography

Adams R M (1974). ‘Theories of actuality.’ Noûs 8, 211–231. [Reprinted in Loux (1979).]
Divers J (2002). Possible worlds. London: Routledge.
Forbes G (1985). The metaphysics of modality. Oxford: Clarendon Press.
Hughes G E & Cresswell M (1996). A new introduction to modal logic. London: Routledge.
Kripke S (1980). Naming and necessity. Oxford: Blackwell.
Lewis D (1986). On the plurality of worlds. Oxford: Blackwell.
Loux M J (ed.) (1979). The possible and the actual. Ithaca: Cornell University Press.
Plantinga A (1974). The nature of necessity. Oxford: Clarendon.
Prior A N & Fine K (1977). Worlds, times and selves. London: Duckworth.
Rosen G (1990). ‘Modal fictionalism.’ Mind 99, 327–354.
Stalnaker R (1976). ‘Possible worlds.’ Noûs 10, 65–75. [Reprinted in Loux (1979).]
van Inwagen P (1986). ‘Two concepts of possible worlds.’ Midwest Studies in Philosophy 11, 185–213.

Pragmatic Determinants of What Is Said

E Borg, University of Reading, Reading, UK

© 2006 Elsevier Ltd. All rights reserved.

Imagine that Jack, pointing at a book on the table, utters the sentence ‘‘That is better than Jill’s book.’’

What, we might ask, is said by his utterance? Well, since the sentence contains a context-sensitive expression – the demonstrative ‘that’ – we know that determining what is said will require an appeal to the context of utterance to provide a referent. Thus, this is one kind of pragmatic determinant of what is said


by Jack’s utterance. The sentence is also context sensitive in another, slightly less obvious, way: it is in the present tense, so an appeal to the context of utterance will also be needed to fix a time for Jack’s claim. Finally, many theorists recently have argued that the sentence is also context sensitive in a third, much less obvious, way: for to grasp what is said by Jack it seems that we also need to know what kind of relationship he envisages between Jill and her book. Does Jack intend to say that the book he is demonstrating is better than the book Jill wrote, better than the book Jill owns, better than the book Jill is reading, etc.? And what does he mean by ‘better’ here – better written, better researched, better for standing on to reach a high shelf? Surely Jack says more than merely that the book he is demonstrating is better in some respect or other than a book bearing some relationship or other to Jill, but, if so, then any such additional information can only come from consideration of the context of utterance. Furthermore – and this is where the current case differs from that of demonstratives and tense – it’s not obvious that this last type of pragmatic determinant of what is said is traceable to any overt (i.e., syntactically represented) context-sensitive element. Jack’s utterance does contain the syntactically marked possessive, but ’s certainly doesn’t appear on the list of usual suspects for context-sensitive expressions. And in many cases even this degree of syntactic representation is missing. Consider an utterance of ‘‘It’s raining,’’ where what is said is that it is raining at some location, l, or ‘‘You won’t die,’’ saying that you won’t die from that cut, or ‘‘Nietzsche is nicer,’’ meaning that Nietzsche is nicer than Heidegger. Or again, consider the speaker who produces a nonsentential utterance, e.g., pointing at a child and saying ‘‘George’s brother,’’ thereby conveying that he is George’s brother. In all these cases we have additional information contributed by the context of utterance yet which seems unmarked at the syntactic level. These types of pragmatic determinant of what is said (i.e., contextually supplied information which is semantically relevant but syntactically unmarked) have come to be known as ‘unarticulated constituents’ (UCs). Such elements are extremely interesting to philosophers of language since they seem to show that there are problems with a quite standard conception of formal semantics, according to which the route to meaning runs along exclusively syntactic trails. Now, as noted above, even in formal theories of meaning, like truth-conditional accounts (see Truth Conditional Semantics and Meaning), some appeal to a context of utterance must be made in order to cope with overtly context-sensitive expressions.
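The contrast can be made concrete with a toy model. Everything in the sketch below is invented for illustration (the Context record, the ‘character’ functions, and the flag standing in for a covert location variable); it simply shows how an overt indexical carries its own contextual instruction while ‘It’s raining’ does not:

from dataclasses import dataclass

@dataclass
class Context:
    time: str            # time of utterance
    demonstratum: str    # object the speaker is pointing at
    location: str        # place of utterance

# Overt context sensitivity: the expression itself instructs us to
# consult the context ('that' -> the demonstrated object; present
# tense -> the utterance time).
def char_that(c):
    return c.demonstratum

def char_present(c):
    return c.time

# 'It is raining': no overt expression asks for a place, yet the
# intuitive content concerns a location. The flag mimics the two
# responses discussed below: a covert variable in the syntax versus
# a location-neutral minimal content.
def rain_content(c, covert_location_variable):
    if covert_location_variable:
        return ('rain', char_present(c), c.location)
    return ('rain', char_present(c))

c = Context(time='t0', demonstratum='the book on the table', location='Reading')
print(rain_content(c, True))   # ('rain', 't0', 'Reading') - enriched
print(rain_content(c, False))  # ('rain', 't0') - minimal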

However, the thought has been that this need not contravene the essentially formal nature of our theory, since the contextual instructions can be syntactically triggered in this case (crudely, the word ‘that’ tells us to find a demonstrated object from the context of utterance). Much more problematic, then, are pragmatic determinants of what is said which apparently lack any syntactic basis, for they show that context can come to figure at the semantic level regardless of whether it is syntactically called for (thus there will be no purely syntactic route to meaning). In response to the challenge posed by UCs, advocates of formal semantics have sought to reject the idea that there are any such things as syntactically unrepresented but semantically relevant elements, claiming either that
1. every contextual element of what is said really is syntactically represented, even though this fact may not be obvious from the surface syntax of a sentence (i.e., there is more to our syntax than initially supposed), or that
2. any contextual contributions which are not syntactically marked are not semantically relevant (i.e., there is less to our semantics than initially supposed).
The first of these moves is pursued by a number of theorists who argue that, if one pays proper attention to all the information given at the syntactic level, one will find that many cases of supposedly syntax-independent semantic contribution are, in fact, syntactically required (see Stanley, 2000; Stanley and Szabó, 2000; Taylor, 2001; Recanati, 2002). Now, that there is some discrepancy between surface form and underlying syntax is a well-rehearsed point. For instance, Russell’s theory of descriptions invokes a clear distinction between surface form and logical form. Furthermore, cases of so-called syntactic ellipsis (like the contraction in ‘‘Jack likes dogs and so does Jill,’’ where the second sentence is held to have the underlying syntactic form ‘‘Jill likes dogs’’ despite its reduced verbal form) show that surface constituents may be poor indicators of syntactic elements. If this is correct, then the principle that there is more to our syntax than is apparent at first glance is independently well motivated. Given this, there certainly seem to be cases in which (1) is an appealing response to putative examples of UCs. To give an example: someone might claim that, because an utterance of ‘‘Jack plays’’ can be used in one context to say that Jack plays the trombone and in another to say that Jack plays football, there must be a syntactically unmarked, contextual contribution to the semantic content of an utterance of this kind. However, closer inspection of the syntax shows this conclusion is dubious: ‘plays’ is a transitive verb requiring a subject and an object and, although the argument place for an object can be left unfilled at the


surface level without this making the sentence ill formed, it might be argued that the additional argument place is marked in the underlying syntax of the verb. Since ‘plays’ is a transitive verb, the competent interlocutor will, on hearing ‘‘Jack plays,’’ expect to look to a context of utterance to discover what Jack plays, but this is precisely because she is sensitive to the syntactic structure of English in this case. Thus sometimes an appeal to an enriched syntax seems the best way to cope with putative UCs. However, there are also cases where (1) seems less compelling. For instance, in certain contexts, it seems that Jack can use an utterance of ‘‘The apple is red’’ to convey the proposition that the apple is red to degree n on its skin. Yet it is far from clear that the correct syntactic structure for color terms includes argument places for the shade, or precise manner of instantiation, of the color. Furthermore, the idea that syntactic structure can outstrip surface form at all (or at least in the ways required by [1]) has been rejected by some. So, the opponent of UCs might seek to supplement (1) with (2), allowing that some contextually supplied information is not marked at the syntactic level but denying that such information is semantically relevant. The advocate of this kind of response wants to accept that, at an intuitive level, the speaker who utters ‘‘The apple is red’’ (or ‘‘You won’t die,’’ etc.) clearly does convey the contextually enriched proposition (i.e., that the apple is red on the outside, or that you won’t die from that cut) but that this is an instance of speaker meaning rather than semantic content (see Semantics–Pragmatics Boundary). Now, whether this move is warranted in all cases is again something of a moot point. One objection might be that it is a mistake to seek to separate speaker intuitions about what is said and claims of semantic content to the radical degree predicted by this kind of move. So, for instance, it seems that the message recovered from an utterance of ‘‘That banana is green’’ will always be that that banana is green in some salient respect, but if this is the kind of proposition competent speakers can and do recover in communicative exchanges, shouldn’t this be what semantics seeks to capture? What role could there be for a more minimal kind of semantic content (e.g., one which treats semantic content as exhausted by the proposition that that banana is green simpliciter)? (See Borg, 2004; Cappelen and Lepore, 2005, for attempts to answer this question.) A second worry for the formal semanticist who pursues (2) is that it may undermine her claim that semantics deals with complete propositions or truth-evaluable items. For instance, take the situation where the book Jack demonstrates is better than the book Jill is reading, but worse than the book she wrote. To assess Jack’s

opening utterance as true or false in this situation, it might seem that we need first to determine what relationship Jack intended by his utterance of ‘‘Jill’s book.’’ Without this pragmatically determined aspect of what is said, it is argued, the sentence Jack produces is simply not truth evaluable; it doesn’t express a complete proposition (cf. Recanati, 2003). Clearly, then, although the formal semanticist can respond to the challenge of UCs with moves like (1) and (2), it is not at all obvious that these responses are sufficient. However, we should also be aware that there are problems to be faced on the other side, by the proponent of UCs, as well. One major worry is how to preserve our intuitive distinction between semantically relevant elements and elements which are only pragmatically relevant. Given the traditional conception of formal semantics, the answer to this question was clear: something is semantically relevant if it can be traced to the syntax of the sentence, it is pragmatically relevant otherwise. Once we admit of semantically relevant but syntactically unmarked elements, however, this way of characterizing the distinction is clearly unavailable, so the proponent of UCs owes us some other account. If Jack utters ‘‘I’ve eaten’’ then it might seem plausible to claim that the semantic content of this utterance is that he has eaten breakfast today, but he might also succeed in conveying the messages that he has eaten a cooked breakfast within the last hour, or that he is not hungry, or any number of further propositions. How do we decide which elements come to figure in the supposed semantic content of the utterance, and on what basis do we rule some conveyed messages as merely pragmatic? A further worry seems to be that, for each additional contextual contribution we introduce, this contribution is itself open to further qualification: an utterance of ‘‘That banana is green’’ might intuitively be contextually enriched to that banana is green on its skin, but why stop here? Does the speaker mean to say that that banana is green all over its skin, that that banana is mostly green on a particular patch of skin, or that that banana is a bit green on this bit of surface skin and to a depth of degree n through the skin? The problem is that, for any piece of contextual information we introduce via a UC, this piece of information is itself likely to allow for a number of different contextual qualifications or sharpenings, and we seem to lack any principled reason to disallow these further sharpenings from appearing as part of the semantic content of the utterance. Yet, without such a reason, we run the risk of being launched on a slippery slope whereby the meaning of every utterance turns out to be a proposition which is somehow ‘complete’ in every respect. Yet no such conclusion seems palatable: it undermines notions


of systematicity for meaning and makes it completely unclear how we ever come to learn and use a language given our finite cognitive resources (see Compositionality). So, it seems there are problems on both sides of the debate here and the question of how to handle pragmatic determinants of what is said remains a vexed one. The advocate of formal semantics needs to deliver an account both of overtly context-sensitive expressions – the indexicals, demonstratives, and tense markers of a natural language – and of the more covert kinds of context sensitivity discussed here. Reflecting on the multitude of ways in which a context of utterance can apparently affect issues of linguistic meaning certainly seems to show that the formal semanticist runs the risk of undervaluing the role played by pragmatic determinants in what is said. Yet it remains to be seen whether this constitutes merely a potential oversight on her part (cf. Stanley, 2000; Stanley and Szabó, 2000; Borg, 2004) or a fundamental failing of the whole approach (cf. Sperber and Wilson, 1986; Carston, 2002).

See also: Character versus Content; Compositionality; Connotation; Context and Common Ground; Conventions in Language; Cooperative Principle; Diminutives and Augmentatives; Expression Meaning vs Utterance/Speaker Meaning; Honorifics; Implicature; Irony; Performative Clauses; Pragmatics and Semantics; Semantic Change; Semantic Change, the Internet and Text Messaging; Semantics–Pragmatics Boundary; Truth Conditional Semantics and Meaning.

Bibliography

Bach K (1994). ‘Semantic slack: what is said and more.’ In Tsohatzidis S (ed.) Foundations of speech act theory: philosophical and linguistic perspectives. London: Routledge. 267–291.
Bach K (1999). ‘The semantics–pragmatics distinction: what it is and why it matters.’ In Turner K (ed.) The semantics/pragmatics interface from different points of view. Oxford/New York: Elsevier Science. 65–83.
Borg E (2004). Minimal semantics. Oxford: Oxford University Press.
Cappelen H & Lepore E (2005). Insensitive semantics: a defense of semantic minimalism and speech act pluralism. Oxford: Blackwell.
Carston R (2002). Thoughts and utterances. Oxford: Blackwell.
Elugardo R & Stainton R (2004). ‘Shorthand, syntactic ellipsis and the pragmatic determinants of what is said.’ Mind and Language 19(4), 359–471.
Perry J (1986). ‘Thought without representation.’ Proceedings of the Aristotelian Society Supplementary Volume 40, 263–283.
Recanati F (2002). ‘Unarticulated constituents.’ Linguistics and Philosophy 25, 299–345.
Recanati F (2003). Literal meaning. Cambridge: Cambridge University Press.
Sperber D & Wilson D (1986). Relevance: communication and cognition. Oxford: Blackwell.
Stanley J (2000). ‘Context and logical form.’ Linguistics and Philosophy 23, 391–434.
Stanley J & Szabó Z (2000). ‘On quantifier domain restriction.’ Mind and Language 15, 219–261.
Taylor K (2001). ‘Sex, breakfast, and descriptus interruptus.’ Synthese 128, 45–61.

Pragmatic Presupposition

C Caffi, Genoa University, Genoa, Italy

© 2006 Elsevier Ltd. All rights reserved.

Introduction

Both concepts – ‘pragmatic’ and ‘presupposition’ – can be interpreted in different ways. On the one hand, not being very remote from the intuitive, pretheoretical concept of presupposition as ‘background assumption,’ the concept of presupposition covers a wide range of heterogeneous phenomena (see Presupposition). Owing to the principle of communicative economy as balanced by the principle of clarity (Horn, 1984), in discourse much is left unsaid or taken for granted. In order to clarify the concept of presupposition, some authors have compared speech with a Gestalt

picture in which it is possible to distinguish a ground and a figure. Presuppositions are the ground; what is actually said is the figure. As in a Gestalt picture, ground and figure are simultaneous in speech; unlike the two possible representations in the Gestalt picture, speech ground and figure have a different status, for instance with respect to possible refutation. What is said, i.e., the figure, is open to objection; what is assumed, i.e., the ground, is ‘‘shielded from challenge’’ (Givón, 1982: 101). What crucially restricts the analogy is the fact that discourse is a dynamic process, whereas a picture is not. At the same time that an explicit communication is conveyed in the ongoing discourse, an intertwined level of implicit communication is unfolding: understanding a discourse requires an understanding of both. When communicating, we are constantly asked to choose


what to put in the foreground and what in the background. Discourses and texts are multilevel constructs, and presuppositions represent (at least a part of) the unsaid. On the other hand, the label ‘pragmatic’ can be used in different ways. It may refer, first, to a number of objects of study and, second, to a number of methods of analysis linked by the fact that they take into account elements of the actual context. What is called ‘pragmatic’ may in fact be an expanded semantic object (see Semantics–Pragmatics Boundary). Generally speaking, a requirement for a truly pragmatic treatment seems to be that it is concerned more with real usages of language(s) than with highly abstract models dealing with oversimplified, constructed examples. Ultimately, ‘‘what I can imply or infer, given a presupposition, depends on an active choice made in the face-to-face confrontation with my interlocutor’’ (Mey, 2001: 29). The origin of the concept of pragmatic presupposition must be sought in the recognition by philosophers of language and logicians that there are implicata of utterances that do not belong to the set of truth conditions (see Truth Conditional Semantics and Meaning). Their starting point is the awareness that there are other relations between utterances besides that of entailment. The definitions of pragmatic presupposition proposed in the 1970s have brought about a pragmatic rereading of a problem that was above all logical and that had not found adequate explanation in the available semantic theories. This rereading was basically methodological (see Semantics–Pragmatics Boundary; Pragmatics and Semantics). From Stalnaker’s definition (1970) onward, a pragmatic presupposition was no longer considered a relation between utterances but rather was considered one between a speaker and a proposition (see Propositions). This is a good starting point, but it is far from satisfactory if the label ‘pragmatic’ is meant to cover more than ‘semantic and idealized contextual features,’ i.e., if we adopt a radical pragmatic standpoint. Let us provisionally define a pragmatic presupposition as a ‘ménage à trois’ between a speaker, the framework of his/her utterance, and an addressee. From a radical pragmatic standpoint, a substantivist view of presuppositions – i.e., what the presupposition of an utterance is – is less promising than a functional, dynamic, interactional, contractual-negotiating view; the question here is how presupposition works in a communicative exchange. The pragmatic presupposition can be considered as an agreement between speakers. In this vein, Ducrot proposed a juridical definition whereby the basic

function of presuppositions is to ‘‘establish a frame for further discourse’’ (1972: 94; my translation). Presuppositions are based on a mutual, tacit agreement that has not been given before and that is constantly renewed (or, as the case may be, revoked) during interaction. Presuppositions are grounded on complicity. Having been the focus of lively discussions by linguists and philosophers of language during the 1970s, presuppositions seem now to have gone out of fashion. At the time, the ‘‘wars of presupposition’’ (Levinson, 2000: 397) were fought between partisans of a semantic vs. a pragmatic view on the phenomenon. Another war is presently being fought with regard to another – though adjacent – territory, that of the intermediate categories covering the conceptual space between presupposition and implicature (see Implicature). The fight involves the overall organization of the different layers of meaning (and their interplay); it could be labeled the ‘war of (generalized) conversational implicature,’ if the latter term were neutral, rather than being the banner of the faction representing one side of the debate, namely the post-Gricean theorists (eminently, though not exclusively, represented by Levinson [2000]). The other side counts among its representatives, first of all, the relevance theorists, as represented by Carston (2002), who advance an alternative concept, called ‘explicature,’ that is intended to bridge the notions of encoded meaning and inferred meaning (see Implicature). On the whole, the decline of the interest in presupposition as such can be understood, and partly justified, inasmuch as subtler distinctions between the different phenomena have been suggested, and a descriptively adequate typology of implicata is in the process of being built (see Sbisà, 1999b). The decline is less justified if the substitution is only terminological, that is, if the intuitive concept of presupposition is replaced by a gamut of theory-dependent concepts without offering any increase in explanatory power with respect to authentic data. In the latter case, it is not clear what advantages might result from the replacement of the umbrella term ‘presupposition’ by some other, more modish terminology. The point is, once again, the meaning we assign to the word ‘pragmatic’ and the object we make it refer to: either the rarefied world of constructed examples or the real world, where people constantly presuppose certain things in order to reach certain goals. Once it is clear that a radical pragmatic approach to the issue necessarily takes only the latter into consideration, the question is whether the term ‘presupposition’ refers to a range of heterogeneous phenomena or, on the contrary, to a particular type of implicatum, to


which other types can be added. But even before that, we should ask whether, between the established notion of semantic presupposition and the Gricean notion of implicature (which, despite its popularity, still is under scrutiny by both philosophers and linguists), there is room for a concept like pragmatic presupposition. To answer this question, we need to say a few words on how pragmatic presupposition is distinct from the two types of adjacent implicata mentioned earlier: semantic presupposition and implicature.

Relation with Semantic Presupposition

The concept of semantic presupposition is relatively clear. Its parentage is accredited: Frege, Russell, Strawson. Its lineage seems to be traceable back to Xenophanes, quoted in Aristotle’s Rhetoric (see Rhetoric, Classical), via Port-Royal and Stuart Mill. There is substantial agreement as to its definition: the presupposition of a sentence is what remains valid even if the sentence is denied; its truth is a necessary condition for a declarative sentence to have a truth value or to be used in order to make a statement. A respectable test has been devised to identify it – the negation test. There is a list of the linguistic facts (Levinson, 1983: 181–185) that trigger the phenomenon of presupposition: thirty-one have been listed, from factive verbs (e.g., know, regret; see Factivity) to change-of-state verbs (e.g., stop, arrive), to cleft sentences. The notions of semantic presupposition and conventional implicature, as identified by Grice (who exemplified the latter with therefore), can be unified to a certain degree, as both depend on a surface linguistic element that releases the presupposition. The difference between semantic presuppositions and conventional implicatures is that the latter, unlike the former, are irrelevant with respect to truth conditions (see Semantics–Pragmatics Boundary). In contrast, unlike semantic presuppositions and some conventional implicatures, pragmatic presuppositions are not directly linked to the lexicon, to the syntax, or to prosodic facts (e.g., the contrastive accent in ‘MARY has come’) but are linked to the utterance act. We should distinguish types of implicata that are recorded by dictionaries and that either are part of the semantic representation of a lexeme or are conveyed by given syntactic or prosodic structures, from implicata that are independent of both dictionary and grammar: in particular, pragmatic presuppositions are triggered by the utterance and the speech act involved. They are presuppositions neither of a lexeme nor of a proposition nor of a sentence. They are presuppositions that a speaker activates through the

utterance, the speech act, and the conversational or textual moves that a given speech act performs. Thus, pragmatic presuppositions concern not imperative sentences but orders; not declarative sentences but assertions; and so on. In other words, one ought to stress once more the distinction between syntactic and pragmatic functions: between them there is no one-to-one correspondence (see Lyons, 1977). The same linguistic structure, the same utterance, can perform different functions, convey different speech acts, and refer to different sets of presuppositions (see Mood and Modality; Pragmatics and Semantics). The acknowledgment of a level of theorizing that goes beyond the sentence is the sine qua non for the analysis of pragmatic presuppositions. Pragmatic presuppositions are related to knowledge that is not grammatical but encyclopedic, that is, knowledge that concerns our being in the world. Or, rather, they consist not in knowledge, in something that is already known, but in something that is given as such by the speaker, in something that is assumed as such and is therefore considered irrefutable (Van der Auwera, 1979). Once we have recognized the semantic nature of lexical and grammatical (syntactic and prosodic) presuppositions, we should stress the connection between semantic and pragmatic presuppositions. First, the connection is of a general nature and concerns the obvious (but by no means negligible) fact that, when we are dealing not with abstractions but with real utterances, the phenomenon of semantic presuppositions becomes one of the available means by which the speaker can change the communicative situation. If the link between, for instance, a certain lexeme and the presupposition it triggers is semantic, their analysis within an utterance and a context must face the pragmatic problem of their use and effects. Second, on closer examination, this connection is bound up with the specific pragmatic functions performed by phenomena that can be labeled as semantic presuppositions. For instance, it is important to recognize the effectiveness of semantic implicata (lexical presuppositions, conventional implicatures, presuppositions linked to syntactic constructions such as cleft sentences, etc.) both in the construction of texts and in the pragmatic strategies of manipulation of beliefs, or, to use Halliday’s categories, both in the textual function and in the interpersonal one. On the one hand, the analysis of the different types of anaphora and their functioning whenever their antecedent is not what the preceding text stated but what it presupposed, has both textual and pragmatic relevance. On the other hand, precisely because it is shielded from challenge, communication via presuppositions lends


itself to manipulatory purposes (see Sbisà, 1999a): suffice it to compare the different persuasive effects of an assertion embedded under a factive verb (see Factivity) vs. a simple assertion (e.g., ‘‘People know that solar energy is wishful thinking’’ vs. ‘‘Solar energy is wishful thinking’’). Having defined the semantic nature of the different types of presuppositional triggers, we should then recognize the role of pragmatics in the study of the production and interpretation of these potentially highly manipulatory implicata. Obviously, it is more difficult to question something that is communicated only implicitly than to attack something that is communicated openly, if only because what is implicit must be recognized before it can be attacked. This is attested by the highly polemical and aggressive value inherent in attacks on presuppositions; such moves always represent a serious threat to ‘face’ (see Face).
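The negation test mentioned above can be given a schematic rendering in a trivalent (Strawsonian) setting, in which a sentence whose presupposition fails is neither true nor false. The sketch below is a toy encoding – the function names are invented here, and the trivalent treatment is one standard option among several, not this article’s own proposal:

def regrets(did_it, feels_remorse):
    # 'X regrets having done A' presupposes that X did A (a factive
    # trigger); when the presupposition fails, there is no truth value.
    if not did_it:
        return None
    return feels_remorse

def neg(v):
    # Negation preserves presuppositions: truth-value gaps project.
    return None if v is None else (not v)

# 'Oedipus regrets having killed his father' and its denial both
# require that Oedipus killed his father:
print(neg(regrets(True, True)))   # False: denied, presupposition intact
print(regrets(False, True))       # None: presupposition failure
print(neg(regrets(False, True)))  # None: the denial inherits the failure

What the test tracks, then, is precisely the behavior that the gap encodes: the presupposed material survives denial untouched.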

Relation with Conversational Implicature

The criteria put forward by Grice to distinguish conversational implicature (see Cooperative Principle) from other implicata (i.e., calculability, nondetachability, nonconventionality, indeterminacy, cancelability) have not proven to be entirely satisfactory, even when integrated with the further criterion of ‘reinforceability’ (Horn, 1991). In particular, the criterion of cancelability, viewed as crucial by many authors, seems to be problematic (for a discussion, see Levinson, 1983). And in any case, cancelability is linked to the degree of formality of the interaction: in unplanned speech, cancellation is easily tolerated. If we are satisfied with an intuitive differentiation that nevertheless uses these criteria, we can reasonably maintain that pragmatic presuppositions are oriented, retroactively, toward a background of beliefs assumed to be shared. Implicatures, on the other hand, are oriented, proactively, toward knowledge yet to be built. Besides (at least if we prototypically think of particularized conversational implicatures, i.e., the kind that, according to Grice, strictly depends on the actual context; see Levinson, 2000: 16), such knowledge does not necessarily have to be valid beyond the real communicative situation. Thus, in order to distinguish the two types of implicata, the criteria of different conventionality (presuppositions being more conventional than implicatures) and of different validity (the latter being more general in the case of presuppositions, more contingent in the case of implicatures) are called into play. Presuppositions concern beliefs constituting the background of communication. They become the object of communication (thus losing their status of presupposition) only if something goes wrong, if the addressee does not accept them or questions them, forcing the speaker to

put his or her cards on the table. Implicatures, on the contrary, concern a ‘knowledge’ that is not yet shared and that will become such only if the addressee goes through the correct inferences while interpreting the speaker’s communicative intention (see Intention and Semantics). The distinction is thus more a matter of degree than a true dichotomy, the latter, more than the former, requiring the addressee to abandon his or her laziness – the ‘‘principle of inertia,’’ as Van der Auwera (1979) called it, i.e., the speaker’s reliance on shared beliefs – and to cooperate creatively with the discourse. With implicatures, a higher degree of cooperation and involvement is asked of the addressee (the more I am emotionally involved, the more I am willing to carry out inferential work) (see Arndt and Janney, 1987; see Cooperative Principle). Presuppositions can remain in the background of communication and even go unconsidered by the addressee without making the communication suffer. Implicatures must be calculated for communication to proceed in the direction desired by the speaker. The roles of presuppositions and implicatures, seen against the backdrop of the speaker’s expectations and the discourse design, are therefore different; the former are oriented toward the already constructed (or given as such), the latter toward the yet to be constructed or, even better, toward a ‘construction in progress’: the former concern a set of assumptions, the latter their updating. Presuppositions are more closely linked to what is actually said, to the surface structure of the utterance; implicatures are more closely linked to what is actually meant. Their respective degrees of cancelability also seem to be different: presuppositions are less cancelable than implicatures. This difference between presuppositions and implicatures with respect to the criterion of cancelability could be reformulated in terms of utterance responsibilities and commitment. With presuppositions and implicatures, the speaker is committed to different degrees – more with the former, less with the latter – with respect to his or her own implicata. Thus, a definition of pragmatic presuppositions could be formulated as ‘that which our hearer is entitled to believe on the basis of our words.’ In the case of presuppositions, this implicit commitment is stronger, and so, too, is the sanction, should the presupposition prove to be groundless. And the reason is that in the case of presuppositions, an attempt to perform a given speech act has been made: the linguistic devices have traced out, however implicitly, a detectable direction. The addressee is authorized to believe that the speaker’s speech act was founded, i.e., that his or her own presuppositions were satisfied. A character in Schnitzler’s Spiel im Morgengrauen implacably says to the second


lieutenant who has lost a huge amount of money playing cards with him and does not know how to pay his debt: ‘‘Since you sat down at the card-table you must obviously have been ready to lose.’’ Communicating is somehow like sitting down at the card table: presuppositions can be a bluff. The Gricean concept of implicature can be compared to Austin’s concept of perlocution (see Speech Acts), with which it shares the feature of nonconventionality: implicature is an actualized perlocution. From the utterance ‘My car broke down,’ I can draw a limited number of presuppositions, e.g., ‘there’s a car,’ ‘the car is mine,’ ‘the car was working before,’ but an indefinite number of implicatures, e.g., ‘Where’s the nearest garage?,’ ‘I can’t drive you to the gym,’ ‘Can you lend me some money to have it repaired?,’ ‘Bad luck haunts me.’ Finally, an interesting relationship is the symbiosis of pragmatic presuppositions and implicatures in indirect speech acts (see Speech Acts): the presuppositions of the act (preparatory conditions in particular) are stated or questioned so as to release a (generalized conversational) implicature (see Implicature).

Pragmatic Presuppositions: ‘Classical’ Definitions

Officially, at least, pragmatic presuppositions have had a short life. The available registry data on pragmatic presuppositions reveal the following: born 1970 (Stalnaker); died (and celebrated with a requiem) 1977 (Karttunen and Peters). The latter authors proposed to articulate the concept of presupposition into (a) particularized conversational implicatures (e.g., subjunctive conditionals, see Conditionals), (b) generalized conversational implicatures (e.g., verbs of judgment), (c) preparatory conditions on the felicity of the utterance, and (d) conventional implicatures (e.g., factives, even, only, but). The reader edited by Oh and Dinneen (1979) can also be seen as a post mortem commemoration. Against the backdrop of the inadequacies of the concept of semantic presupposition, Stalnaker (1970: 281) introduced the concept of pragmatic presupposition as one of the major factors of the context. Pragmatic presupposition enabled him to distinguish contexts from possible worlds; it was defined as a ‘‘propositional attitude’’ (see Context). In the same paper, Stalnaker (1970: 279) also said that the best way to look at pragmatic presuppositions was as ‘‘complex dispositions which are manifested in linguistic behavior.’’ Confirming the equivalence between pragmatic presupposition and propositional attitude (see Propositions), Stalnaker (1973: 448)

defined pragmatic presupposition in the following way: ‘‘A speaker pragmatically presupposes that B at a given moment in a conversation just in case he is disposed to act, in his linguistic behavior, as if he takes the truth of B for granted, and as if he assumes that his audience recognizes that he is doing so.’’ Stalnaker’s definition showed a tension between the definition of pragmatic presupposition on the one hand as a disposition to act and on the other as a propositional attitude: pragmatic terms and concepts, such as ‘disposition to act’ and ‘linguistic behavior,’ were used alongside semantic terms and concepts, ‘the truth of B’ in particular. In Stalnaker’s treatment (1970: 277–279), a narrow meaning of the concept of ‘pragmatic’ was associated with an extended meaning of the concept of ‘proposition,’ which was the object both of illocutionary and of propositional acts. This tension is still present and verges on a clash in Stalnaker (2002), where the most distinctive feature of the propositional attitude (which, according to Stalnaker, was constitutive of a presupposition) was defined as a ‘‘social or public attitude’’ (2002: 701). Keenan, distinguishing between logical and pragmatic notions, defined pragmatic presupposition as a relation between ‘‘utterances and their contexts’’ (1971: 51): ‘‘An utterance of a sentence pragmatically presupposes that its context is appropriate’’ (1971: 49) (see Context; Context Principle). In an almost specular opposition to Stalnaker, Keenan seemed to extend his view of pragmatics until it almost coincided with ‘‘conventions of usage,’’ as Ebert (1973: 435) remarked. At the same time, Keenan also seemed to hold a restricted view of the phenomenon of ‘presupposition,’ as exemplified by expressions that presuppose that the speaker/hearer is a man/woman (sex and relative age of the speaker/hearer), by deictic particles referring to the physical setting of the utterance, and by expressions indicating personal and status relations among participants (e.g., French ‘Tu es dégoûtant’, literally, ‘You [informal] are disgusting [male]’). Among other interesting definitions, there is the following one, offered by Levinson (1983: 205): ‘‘an utterance A pragmatically presupposes a proposition B iff A is appropriate only if B is mutually known by participants.’’ Givón’s definition (1982: 100) articulated the prerequisite of mutual knowledge more clearly: ‘‘The speaker assumes that a proposition p is familiar to the hearer, likely to be believed by the hearer, accessible to the hearer, within the reach of the hearer etc. on whatever grounds.’’ Some conclusions are already possible. In particular:


1. The two definitions of presupposition as semantic and pragmatic (Stalnaker) or as logical and discursive (Givón, 1982) are compatible (cf. Stalnaker, 1970: 279).
2. Logical presupposition is a subcase of discursive presupposition: ‘‘logical presupposition is [...] the marked sub-case of discourse backgroundedness’’ (Givón, 1984: 328).

One element occurs with particular frequency in the definitions – it is that of presupposition as ‘common ground.’ Here, the preliminary move from presupposition as propositional attitude to presupposition as shared knowledge, from the world of utterances to the world ‘en plein air,’ was Stalnaker’s (1973). Now, both the narrow definition (presupposition as propositional attitude) and the extended one (presupposition as shared belief) involve a high degree of idealization. We are entitled to ask: What is common ground? What is shared knowledge? Is there a different ‘common ground’ for different communities of speakers? And to what extent and on the basis of what kind of conjecture, even within the same community, can we speak of ‘common ground’? Stalnaker (1974) realized the Platonic flavor of this notion when he gave examples of asymmetry in ‘shared knowledge’ (such as in his imagined conversation with one’s barber). According to Stalnaker, defining presupposition in terms of common knowledge worked ‘‘[i]n normal, straightforward serious conversational contexts where the overriding purpose of the conversation is to exchange information [...] The difficulties [...] come with contexts in which other interests besides communication are being served by the conversation’’ (1974: 201). But what are the criteria that define a conversation as ‘normal,’ ‘straightforward,’ and ‘serious’? Further, are there no other interests in play beyond those of exchanging information (such as are present in every conversation)? Finally, it is worthwhile stressing that the concept of ‘common ground’ is effective only as a ‘de jure,’ not as a ‘de re,’ concept, i.e., not as something ontologically stated but as something deontically given (see Modal Logic) by the speaker (Ducrot, 1972); that is, it represents a frame of reference that the hearer is expected to accept. On the one hand, a field of anthropological, sociological, and rhetorical investigation opens up before us. On the other hand, the characterization of presupposition as shared knowledge risks being static and idealizing and contains a high amount of ideological birdlime (see Context).
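The notion of common ground also has a familiar set-theoretic gloss, associated with Stalnaker’s later work on assertion: the ‘context set’ is the set of worlds compatible with what is mutually taken for granted; an accepted assertion shrinks that set, and a proposition is presupposed just in case it already holds throughout it. The following sketch is purely illustrative (the encoding of propositions as sets of labeled worlds is the standard idealization, not something the article itself provides):

context_set = {'w1', 'w2', 'w3'}   # worlds compatible with the common ground
p = {'w1', 'w2'}                   # a proposition: the worlds where it holds

def presupposed(prop, cs):
    # prop is presupposed iff it holds throughout the context set
    return cs <= prop

def assert_accepted(prop, cs):
    # an unchallenged assertion eliminates the worlds where prop fails
    return cs & prop

print(presupposed(p, context_set))            # False: p is not yet common ground
context_set = assert_accepted(p, context_set)
print(presupposed(p, context_set))            # True: p has entered the common ground

The model makes the de jure character of common ground noted above especially vivid: which worlds belong in the context set is exactly the question of what the participants may be taken to share.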

Pragmatic Presuppositions as Felicity Conditions

At this point, a few further remarks are in order. First of all, the classic presuppositional model is semantic; even when a notion of pragmatic presupposition is invoked, what is in fact presented is no more than an analysis of some semantic phenomena. Pragmatic notions such as that of ‘utterance’ or ‘context’ (see Context; Context Principle) are invoked with the main aim of avoiding the contradictions inherent in purely semantic models. The most refined treatment of pragmatic presupposition, by Gazdar (1979), did not escape this kind of contradiction. Second, an assertive model underlies the different definitions of pragmatic presupposition. A point of view centered on truth value is lurking behind allegedly pragmatic presuppositions: the concept of proposition (see Propositions) (content of an assertion, whether true or false) is the relevant theoretical unit of measurement here. But can this theoretical construct work also as the pragmatic unit of measure? In particular, can this construct adequately deal with communicative behavior? Nothing seems to escape the tyranny of propositions, from the content of the actual utterance to mental content (which, if not presented in propositional form, assumes that shape after being embedded in a predicate like ‘know’ or ‘believe’), to a common or shared knowledge, and to the representation that a logician or a philosopher gives of that content. To what extent is the concept of proposition adequate here? Pragmatic presuppositions concern not only knowledge, whether true or false; they also concern expectations, desires, interests, claims, attitudes toward the world, fears, and so on. The exclusive use of the concept of ‘proposition’ is idealizing and in the long run misleading, especially when it gives rise to the restoration of the dimension of truth/falsehood as the only dimension of assessing an utterance, whereas this dimension is only one among many possible ones. Neither is the pragmatic level homogeneous with respect to the other levels of linguistic description, such as syntax or semantics; it triggers other questions and anxieties. Pragmatic presuppositions are not a necessary condition for the truth or the falsehood of an utterance; rather, they are necessary to guarantee the felicity of an act. Once we have abandoned the logico-semantic level of analysis, that is, once we have decided to consider the data of real communication as our main object, the fact that Oedipus has killed his father is – prior to being an entailment or a presupposition of ‘‘Oedipus regrets having killed his father’’ – a piece of shared knowledge common to a particular (Western) culture.


‘‘Perched over the pragmatic abyss’’ (Givón, 1982: 111), we feel giddy. Pragmatic presupposition could actually be God, or the autonomy of cats (of course, excepting the logicians’ cats, who, as is well known, invariably remain on their mats). But here the notion of presupposition is being spread so thinly as to run the risk of becoming useless. There is, though, a narrower and more technical meaning of pragmatic presupposition that may help us build a protective wall at the edge of the abyss: it is that of pragmatic presuppositions as the felicity conditions of an illocutionary act. Let us assume that the relevant unit for the concept of pragmatic presupposition is neither the sentence nor the utterance, but the speech act in its entirety (see Speech Acts and Grammar). Pragmatic presuppositions can be regarded as felicity conditions or, if we adopt Searle’s model, as constitutive rules of conventional speech acts (e.g., promises, requests, assertions). If a presupposition functioning as a felicity condition of the act does not hold, the act fails. (Note, incidentally, that a failure of a presupposition has different consequences with respect to a failure of an implicature; in fact, the failure of the latter has no bearing on the success or failure of the illocutionary act.) The identification of presuppositions with felicity conditions is not a new idea: ‘‘By the presuppositional aspect of a speech communication,’’ argued Fillmore (1971: 276), ‘‘I mean those conditions which must be satisfied in order for a particular illocutionary act to be effectively performed in saying particular sentences. Of course, we need not be concerned with the totality of such conditions, but only with those that can be related to facts about the linguistic structure of sentences.’’ For a pragmaticist, this definition is attractive in that it underscores the need to research systematic relations between utterance function and form. However, it cannot be accepted to the extent that felicity conditions necessarily are of a heterogeneous nature (pace Katz, 1977), in that they involve the extralinguistic world; in other words, they concern the place where language and world meet – the communicative situation.
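The felicity-conditions view lends itself to an equally schematic rendering. The sketch below follows Searle-style conditions on promising; the field names and the two failure labels – which anticipate Austin’s typology of infelicities, taken up in the next section – are illustrative inventions, not a standard formalization:

from dataclasses import dataclass

@dataclass
class PromiseSituation:
    speaker_can_do_act: bool     # preparatory: S is able to do A
    hearer_wants_act: bool       # preparatory: H would prefer S's doing A
    speaker_intends_act: bool    # sincerity: S intends to do A

def assess(s):
    # Unmet preparatory presuppositions: the act misfires and no
    # promise comes off at all.
    if not (s.speaker_can_do_act and s.hearer_wants_act):
        return 'misfire: no promise is made'
    # Unmet sincerity condition: a promise is made, but it is 'unhappy'.
    if not s.speaker_intends_act:
        return 'abuse: a promise, but an insincere one'
    return 'felicitous promise'

# Promising what one cannot deliver vs. promising insincerely:
print(assess(PromiseSituation(False, True, True)))   # misfire
print(assess(PromiseSituation(True, True, False)))   # abuse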

Toward a Pragmatic Definition of Pragmatic Presupposition

A philosophical approach to the problem of linguistic action, seen as a kind of social action, was represented by Austin’s typology of infelicities (1962). This was a step forward with respect to undifferentiated notions, such as that of ‘appropriateness,’ often used in the definition of pragmatic presupposition. Austin’s typology helped distinguish between presuppositions without which the act is simply not performed (e.g., in promising somebody something that we do not have) and presuppositions that, if unsatisfied, make the act unhappy (e.g., in promising something insincerely). There are presuppositions whose satisfaction can be seen in binary terms; in other cases, satisfaction occurs according to a scale, in the sense that the presuppositions can be more or less satisfied: the latter cases are the ones concerning the appropriateness of an act, on which it is possible to negotiate (an order, however insolent, has been given; advice, although from an unauthoritative source, has been put forward, etc.). The role of nonverbal communication in the successful performance of acts still remains largely to be investigated: e.g., if someone offers me congratulations on my promotion while displaying a sad, long face, can I still say he or she is congratulating me? In any case, the different types of infelicity remind us that linguistic action is substantially homogeneous with action in general: pragmatics must be connected to praxeology, to the study of actions as effective or ineffective, even before they can be considered true or false, appropriate or inappropriate. A decisive move in analyzing pragmatic presuppositions is that of connecting the typology of infelicities to empirical research on the functioning of linguistic and nonlinguistic interaction. For example, in conversation the typology can be related to the various mechanisms of repair or, more generally, to research on misunderstandings in dialogue (for a survey, see Dascal [ed.], 1999) and on pathological communication. To achieve this more dynamic view of pragmatic presuppositions, it is crucial to consider presuppositions not only as preconditions of the act (e.g., as done by Karttunen and Peters, 1977) but also as effects: if the presupposition is not challenged, it takes effect retroactively. If you do not react against my order, you acknowledge my power; if you follow my advice, you accept me as an expert who knows what is best for you; if you do not question my assessment, you ascribe a competence to me (see, among others, Ducrot, 1972: 96–97). The analysis of implicata still requires much theoretical and applied work. Some steps may be sketched out as follows. We can imagine the implicata, of which pragmatic presuppositions are a part, as types of commitments assumed by the speaker in different degrees and ways. The different degrees of cancelability according to which the types of implicata have been traditionally classified are related to a stronger or weaker communicative commitment: the speaker is responsible for the implicata conveyed by his or her linguistic act; if the addressee does not raise any
objection, he or she becomes co-responsible for it. A decisive step is that of leaving behind the truth-functional heritage: rather than recognizing a presupposition as true, the issue is whether or not to accept it as valid. In a pragmatic analysis of pragmatic presuppositions, it is furthermore necessary to consider the following dimensions:

1. Sequential-textual and rhetorical (Eco, 1990: 225). Presuppositional phenomena can be explained only by taking into account the cotextual sequential dimension (this was implicit, albeit in an idealized way, in Grice’s criterion of cancelability), as well as the rhetorical dimension (which was implicit in Sadock’s (1978) and Horn’s (1991) criterion of reinforceability). For a study of pragmatic presuppositions, it is necessary to move from an analysis of predicates within single sentences to the analysis of textual structures in which the presupposition is one of the effects. Presuppositions change the legal situation of speakers (Ducrot, 1972: 90ff.), that is, their rights and duties within a context that is being constructed along the way. The projection problem (see, among others, Levinson, 1983; van der Sandt, 1988) – namely, the problem of how the presuppositions of a simple sentence are or are not inherited by a complex sentence – may be reformulated in a pragmatic and textual perspective as a problem of the constraints, not only thematic, on coherence and acceptability that arise in the construction of a discourse (see Discourse Semantics).

2. Anthropological-cultural-social. Much work still has to be done on shared knowledge, on the kinds of beliefs that can be taken for granted within a given cultural and social group. Presuppositions are a way of building up such knowledge and of reinforcing it. The social relevance of this research, which might be profitably connected to work in the theory of argumentation (e.g., see Perelman and Olbrechts-Tyteca’s classical analyses [1958]), is obvious. As Goffman (1983: 27) wrote: ‘‘[T]he felicity condition behind all other felicity conditions, [is] ... Felicity’s Condition: to wit, any arrangement which leads us to judge an individual’s verbal acts to be not a manifestation of strangeness. Behind Felicity’s Condition is our sense of what it is to be sane.’’

3. Psychological. The analysis of implicit communication, even if one takes care to avoid undue psychologizing, does require psychological adequacy. An example will clarify this point: we tend to choose those topics that are at least partially shared, that enable us to be allusive and elliptical; we produce ‘exclusive’ utterances that only the addressee can understand at once. In other words, we engage the maximum of knowledge shared between ourselves and (only) the addressee.

The pragmatic analysis of presuppositions is a task that, for the most part, still has to be performed. It is a truly vertiginous enterprise, yet one that cannot be abandoned if, moving beyond the logical relations between utterances, human communication is to be considered a relevant object of study.

See also: Conditionals; Context; Context Principle; Cooperative Principle; Discourse Semantics; Face; Factivity; Implicature; Intention and Semantics; Modal Logic; Mood and Modality; Pragmatics and Semantics; Presupposition; Propositions; Rhetoric, Classical; Semantics–Pragmatics Boundary; Situation Semantics; Speech Acts; Speech Acts and Grammar; Truth Conditional Semantics and Meaning.

Bibliography

Arndt H & Janney R W (1987). InterGrammar. Berlin and New York: Mouton.
Austin J L (1962). How to do things with words. London: Oxford University Press.
Carston R (2002). Thoughts and utterances: the pragmatics of explicit communication. Oxford: Blackwell.
Dascal M (ed.) (1999). ‘Misunderstanding.’ Special issue of Journal of Pragmatics 31, 753–864.
Ducrot O (1972). Dire et ne pas dire. Paris: Hermann.
Dummett M (1973). Frege: philosophy of language. London: Duckworth.
Ebert K (1973). ‘Präsuppositionen im Sprechakt.’ In Petöfi J S & Franck D (eds.) Präsuppositionen in Philosophie und Linguistik. Frankfurt/Main: Athenäum. 421–440.
Eco U (1990). ‘Presuppositions.’ In The limits of interpretation. Bloomington: Indiana University Press. 222–262.
Fillmore C (1971). ‘Verbs of judging: an exercise in semantic description.’ In Fillmore C & Langendoen D T (eds.) Studies in linguistic semantics. New York: Holt, Rinehart and Winston. 273–290.
Garner R (1971). ‘‘‘Presupposition’’ in philosophy and linguistics.’ In Fillmore C & Langendoen D T (eds.). 23–42.
Gazdar G (1979). Pragmatics: implicature, presupposition, and logical form. New York, San Francisco, and London: Academic Press.
Givón T (1982). ‘Logic vs. pragmatics, with human language as the referee: toward an empirically viable epistemology.’ Journal of Pragmatics 6, 81–133.
Givón T (1984). Syntax: a functional-typological introduction (vol. 1). Amsterdam and Philadelphia: Benjamins.
Goffman E (1983). ‘Felicity’s condition.’ American Journal of Sociology 89(1), 1–53.

Grice H P (1975). ‘Logic and conversation.’ In Cole P & Morgan J L (eds.) Syntax and semantics, vol. 3: Speech acts. New York: Academic Press. 41–58.
Horn L (1984). ‘Towards a new taxonomy of pragmatic inference.’ In Schiffrin D (ed.) Meaning, form, and use in context. Washington, DC: Georgetown University Press. 11–42.
Horn L (1991). ‘Given as new: when redundant affirmation isn’t.’ Journal of Pragmatics 15, 313–336.
Karttunen L & Peters P S (1977). ‘Requiem for presupposition.’ In Proceedings of the third annual meeting of the Berkeley Linguistics Society. 360–371.
Karttunen L & Peters P S (1979). ‘Conventional implicature.’ In Oh C K & Dinneen D A (eds.). 1–56.
Katz J J (1977). Propositional structure and illocutionary force: a study of the contribution of sentence meaning to speech acts. Hassocks: The Harvester Press.
Keenan E L (1971). ‘Two kinds of presuppositions in natural language.’ In Fillmore C & Langendoen D T (eds.). 45–54.
Levinson S C (1983). Pragmatics. Cambridge: Cambridge University Press.
Levinson S C (2000). Presumptive meanings: the theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Lyons J (1977). Semantics. Cambridge: Cambridge University Press.
Mey J L (2001 [1993]). Pragmatics: an introduction. Oxford, UK, and Malden, MA: Blackwell.
Oh C K & Dinneen D A (eds.) (1979). Syntax and semantics: presupposition. New York, San Francisco, and London: Academic Press.

Perelman C & Olbrechts-Tyteca L (1958). Traité de l’argumentation. La nouvelle rhétorique. Paris: Presses Universitaires de France.
Sadock J M (1978). ‘On testing for conversational implicature.’ In Cole P (ed.) Syntax and semantics, vol. 9: Pragmatics. New York: Academic Press. 281–297.
Sbisà M (1999a). ‘Ideology and the persuasive use of presupposition.’ In Verschueren J (ed.) Language and ideology: selected papers from the 6th international pragmatics conference. Antwerp: International Pragmatics Association. 492–509.
Sbisà M (1999b). ‘Presupposition, implicature and context in text understanding.’ In Bouquet P et al. (eds.) Modeling and using context. Berlin: Springer. 324–338.
Searle J R (1969). Speech acts. Cambridge: Cambridge University Press.
Stalnaker R C (1970). ‘Pragmatics.’ Synthese 22, 272–289.
Stalnaker R C (1973). ‘Presuppositions.’ Journal of Philosophical Logic 2, 447–457.
Stalnaker R C (1974). ‘Pragmatic presuppositions.’ In Munitz M & Unger P K (eds.) Semantics and philosophy. New York: New York University Press. 197–213.
Stalnaker R C (2002). ‘Common ground.’ Linguistics and Philosophy 25, 701–721.
Strawson P F (1950). ‘On referring.’ Mind 59, 320–344.
Van der Auwera J (1979). ‘Pragmatic presupposition: shared beliefs in a theory of irrefutable meaning.’ In Oh C K & Dinneen D A (eds.). 249–264.
Van der Sandt R (1988). Context and presupposition. London: Croom Helm.

Pragmatics and Semantics

W Koyama, Rikkyo University, Tokyo, Japan

© 2006 Elsevier Ltd. All rights reserved.

Critical Introduction: Metatheoretical Presuppositions as Ideological Norms Constraining the Empirical Sciences of Pragmatics and Semantics

As a first approximation, pragmatics may be defined as the science of language use (parole) or the discursive functions of language, including its contextual uniqueness and variability (irregularities), whereas linguistics may be defined as the science of the abstract (decontextualizable) regularities that constitute linguistic structure (langue), including semantics as a formally encoded system of denotational meanings. This is a very general and common characterization of pragmatics and semantics that, however, may not be

universally accepted by linguists and pragmaticians; among them, there is no consensus on what those fields are about, where the boundaries lie, or even whether we can or should draw boundaries. At the same time, most scholars nonetheless seem to think that the relationship between semantics and pragmatics is a primary metatheoretical issue, having direct and profound bearings on how things are done in these disciplines; that is, the way we conceptualize the relationship constitutes primary principles, norms, or postulates at the metatheoretical level, explicitly or implicitly presupposed in the context of any scientific practices concerning language at the object level. Such metatheoretical presuppositions clearly constrain object-level activities and analyses. Here, we may think of how meta-level phenomena such as Gricean maxims and frames are indexed by object-level phenomena such as floutings and contextualization cues, respectively, and relatively strongly
constrain the interpretations of what is done in the speech event in which such phenomena occur (cf. Lucy, 1993; Mey, 2001: 173–205). Note that such metatheoretical principles are, in social-scientific terms, evaluative norms to which the individual scientists differentially orient and against which they evaluate their behaviors; in so doing, they discursively mark their distinct group identities and constitute themselves as members of particular ‘schools’ (cf. Woolard and Schieffelin, 1994; Koyama, 1999, 2003; Kroskrity, 2000). Thus, these value-laden, normative conceptualizations of the relationship between semantics and pragmatics may be seen as the metalinguistic ideologizations that index the social positions and stances of particular scholars. Below, we describe the general frameworks of such ideologizations, place them in a historical context from the perspective of critical pragmatics, and examine what language users, including semanticians and pragmaticians, do with words, that is, how they (re)create their theories, disciplines, and themselves in their sociohistorical contexts (cf. Koyama, 2000b, 2001a; Mey, 2001). Because metatheoretical conceptualizations may focus on methodological stances or the more empirical issue of how to set the boundaries of pragmatics (vis-à-vis semantics or ‘extralinguistics,’ i.e., semanticism, complementarism, and pragmaticism), we shall, for analytic purposes, deal with them separately. Starting with the former, we show the correspondences between metatheoretical conceptualizations and the sociohistorical contexts that endow them with socially interested values.

Three Methodological Stances to Pragmatics and Semantics

The Componential View

The first way to articulate the relationship concerns methodologies of, or metatheoretical stances toward, the linguistic sciences. Clearly, they can be objective (de-agentivized) or subjective (agentive), analytic (compositional) or synthetic (holistic), or various combinations thereof; but we may detect three ideal types in recent history. The first of these types, best represented by the Chomskyan approach and generally characterized by objective and analytic attitudes, is called ‘componential,’ ‘compartmental,’ or ‘modular.’ It sees the linguistic sciences as analytic pursuits dealing with decomposable objects (i.e., modules) such as phonetics, phonology, morphophonology, morphology, syntax, semantics, and pragmatics, naturalistically studied by compositional methods and described independently of the particular sociohistorical contexts
in which analysts are involved as social agents. Clearly, this metatheoretical stance, which underlies the organizational institutions and academic curricula of linguistics, is typical of the modern sciences coming out of Baconian natural philosophy in 17th-century England (cf. Bauman and Briggs, 2003). That is, it is the stance of the modern natural sciences (Naturwissenschaften), with their laws and other regularities, which try to explain empirically occurring phenomena by appealing to hypothetically idealized quasiempirical regularities (‘covering laws’) analytically abstracted from the total contexts of scientific activities, which are thereby decomposed into distinct fields, in our case semantics and pragmatics. This epistemological characterization may be supplemented with a pragmatic understanding of the social and historical processes of epistemic authorization (cf. Kroskrity, 2000: 85–138). That is, the componential stance toward language is pragmatically positioned in, and presupposingly indexes, the historical context of the post-Baconian specialized sciences and, more broadly, Durkheimian ‘organic society’ in modernity, or the age of social specialization and the division of labor. Because the truthfulness, appropriateness, and effectiveness of any actions, including scientific ones, depend on the contexts in which they take place, this implies that the componential stance and its ‘scientific’ results receive their authority primarily from the post-Baconian modernization project. This project is characterized not only by social and epistemic specialization, but also by the standardizing regularization, methodical rationalization, and experimental controlling of contextual contingencies, chance happenings, and unique events, which earlier were understood as fate, epiphanies, and other phenomena originating in the heavenly universe, and transcending the sublunar, empirically manipulable space of human agency (cf. Hacking, 1990; Owen, 1994; Koyama, 2001b). Here, we must note that standardizing regularization, Saint-Simonian technocratic specialization, and Benthamite instrumental rationalization generally characterize the modern discipline of logic (or, when applied to linguistics, of formal syntax and semantics), which came to dominate American philosophy around 1900 and whose rise is precisely due to these social forces (cf. Kuklick, 1977).

The Perspectival View

The second stance is less objectivizing and more synthetic because it emerged as a critical reaction to the crisis of the Baconian sciences and the modernization project in the early 20th century. This critical reaction may be characterized as the Husserlian phenomenological movement, genealogically going back

to Hegel’s totalistic dialectic phenomenology and Kant’s critical philosophy and encompassing some of the early structuralists, Gestalt psychologists, and other (broadly defined) neo-Kantians who emphasized the processual nature of scientific knowledge as empirically anchored in the holistic context of human beings in their worlds, with their origin located at the hic et nunc of agentive experiences (cf. Jakobson, 1973). One of the critical aspects differentiating this stance from the nomothetic one of the natural sciences is the understanding of scientists and their activities as a contextually integrated part of the phenomena analyzed by the scientists themselves, here theorized as contextualized beings. Hence, the phenomenological sciences are closer to a Geisteswissenschaft such as Weberian interpretative sociology, in that it tries to come to grips with the dialectic fact that scientists, always already involved in their social contexts, cannot ever get rid of socially contextualized perspectives (cf. Geuss, 1981; Duranti and Goodwin, 1992). Hence, the phenomenological stance toward the linguistic sciences is characterized by a perspectival and ethnomethodological (i.e., critically agentive) understanding of such sciences, in particular the understanding that language can be studied from an objectivizing or agentive perspective and that scientific findings are largely a function of the particular methodological perspectives chosen by social agents (cf. Calhoun et al., 1993; Verschueren, 1999; note the perspectivalist notion of ‘seeing-as,’ genealogically derived from Wittgenstein’s, Nietzsche’s, and Kant’s epistemological focus on form, epistemology, and methodology rather than on substance, ontology, or the Ding an sich). In this understanding, semantics studies the linguistic phenomena that emerge when scientific agents take a deagentivizing, decontextualizing perspective, whereas pragmatics is a study of linguistic phenomena seen from the perspective of ‘ordinary language users’ who nonetheless critically examine their own linguistic activities ‘from within.’ Hence, pragmatics in the phenomenological tradition is a critical science of, by, and for ordinary language users (cf. Mey, 2001). Here, we may note that the perspectival view is generally preferred by pragmaticians, presumably because ‘perspective’ is a pragmatic notion par excellence, whereas the componential view is preferred by semanticians. This suggests that analytic decomposition is a methodological action that displaces or suppresses perspectival indexicality, compartmentalizes linguistic sciences, and privileges (or generates) structural objects such as semantics, in opposition to pragmatics, which it relegates to the ‘waste-basket’ of linguistics (cf. Mey, 2001: 19–21).

The Critical Sociological View

The previously discussed methodological understanding of the sciences goes back to Kant’s transcendental critical philosophy; it was the turn-of-the-century neo-Kantians, in particular, Weber, Adorno (the Frankfurt School), and other critical sociologists, who, following Nietzsche, came to squarely understand this methodological, Kantian condition of modern sciences and its consequences, especially its epistemological relativism (cf. Geuss, 1981; Owen, 1994; Mey, 2001: 315–319). More importantly, such scientists followed the critical spirit of Kantian philosophy and launched a metacritique of methodologically critical Kantian science and its relativism, shown to reflect and partially constitute the contextual, social condition of the age in which such science had been created, viz., modernity. In this metacritical view, what is needed is a metacritical science that can relate the critical methodological instances of Kantianism to their sociohistorical context – that is, the political-economic and other kinds of pragmatic ‘reality’ that have endowed these instances with plausibility, if not verity – so as to show the critical limits of methodological relativism that are inherent in its historical and societal context. Thus, just as methodologically critical perspectivalism and its sociological metacritiques (i.e., critical pragmaticism) came out of Kantian critical philosophy, so, too, pragmatics itself and, at least in part, semantics evolved out of this philosophy. This is illustrated by the following brief excursion into the historical context in which these two fields originated. First, we note that the term ‘pragmatics’ was coined by the American pragmatist Charles Morris (1901–1979), who articulated the tripartite distinction among syntax (form), semantics (meaning), and pragmatics (context), appealing to the semiotic Pragmati(ci)sm of Charles Peirce (1839–1914), which tried to base knowledge on praxis (actions). Peirce’s Pragmati(ci)sm, characterized by the phenomenological and critical orientations of neo-Kantianism, explicitly pointed to Kant as the founder of critical sciences, primarily because Kant articulated philosophy as a critique, in particular of philosophy itself. Hence, post- or neo-Kantians advanced ‘semantics’ and ‘pragmatics’ as means to critically examine and articulate the foundations of modern epistemological and praxeological pursuits, respectively. More importantly, the contextual motivations for these critical sciences came from the historical conditions of the Enlightenment (or early modernity), which replaced the older schema in which ratio was subordinated to intellectus (emanating from the heavens to the sublunar human world, which was passively affected by it)

with a new schema in which reason (epistemology) came to preside over imagination, fancy, or even madness (aesthetics). This was part of the comprehensive historical shift that replaced the theocentric, religious universe of the late Middle Ages – in which the universal, catholic, and absolute truth resided in the heavens, transcending the empirical, earthly, contingent, relative, human universe – with the anthropocentric, secular universe of modernity, in which human beings became empirical and transcendental (i.e., autonomous) subjects, no longer relying on the once transcendental heavens for their existence. By the same token, philosophy became anthropocentric and relativistic too, as the sciences of, by, and for humans (cf. Foucault, 1973; Koyama, 2001b, 2003). This transformation followed the development of commerce in Europe from the 12th–13th centuries onward, which brought together radically diverse cultures and histories, thus eventually giving rise to (1) the relativistically skeptical philosophy of Descartes, which started with doubting; (2) the speech genre of travel journals (developing into modern novels and (Alexander) Humboldtian ‘natural histories,’ out of which emerged modern anthropology and comparative sociology); and (3) humanism (eventually turning into philology and linguistics). The relativistic social and epistemic condition of modernity is correlated with the historical transfer of transcendence from the heavenly realm to the empirical realm of humans, who thus became transcendental-empirical doublets, obliged to justify themselves by means of, precisely, such metalinguistic critiques as semantics, pragmatics, and, more generally, Kantian transcendental philosophy. Thus, modern semantics and pragmatics came out of Kantian critical philosophy and show historical characteristics of modernity, whereas philosophy (science) tries to critically examine and justify – within limits – itself because there is no longer any God (or deus ex machina) available to ground the human-made edifice (cf. Vico’s scienza nuova). Post-Kantian modern semantics carries out a metalinguistic (metasemantic) critique of scientific and nonscientific languages, concepts, and (correct and literal) referential acts (cf. logical positivism, General Semantics, and the work of Ogden and Richards), whereas pragmatics (often incorporated into the social sciences, in the tradition of Durkheim, Weber, Boas, Marx, Hegel, the brothers Humboldt, Herder, and ultimately, Kant) carries out a metalinguistic (metapragmatic) critique of scientific and nonscientific discourses and other actions in the sociohistorical context of modernity and beyond. This is what the post-Kantian demand of critical science requires us to do, by reflexively examining post-Kantian sciences themselves, including

semantics and pragmatics, and articulating their historical, cultural, social limits as shown by (1) the decontextualized, dehistoricized notion of ‘truth’ advanced by the semanticians who ethnocentrically assume their cultural stereotypes as universal concepts; (2) the modern anthropocentric, ‘phonocentric’ understanding of discourse as having its foundational origin in the hic et nunc of microsocial communication; and (3) methodological relativism (cf. Koyama, 1999, 2000b, 2001b, 2003). Thus, the third, critical-sociological, view tries to transcend modern perspectivalism and methodological relativism by showing their historical limits and advancing a transcendentally ideal total science that attempts to integrate a legion of specialized sciences and perspectives, in particular trying to overcome the division between the theoretical (conceptual, analytic, and linguistic-structural) and the pragmatic (empirical, social, and contextual) (cf. Koyama, 2000a). This approach, which underlies Pragmati(ci)sm as a project seeking to anchor theoretical knowledge in contextualized actions, and pragmatics as a project aimed at the unification of theoretical and applied linguistics, can be called ‘semiotic,’ insofar as semiotics is a science that tries to integrate decontextualized and contextualized epistemic, praxeological, and aesthetic phenomena, separately studied by specialized sciences, under the unitary umbrella of semiotic processes anchored in the pragmatics of social actions and events. In this view, it is the socially embedded discursive interaction that contextually presupposes or even creates the variety of perspectival stances assumed by social agents such as linguistic scientists and language users. Thus, this view is less agentivist than the phenomenological one in that it tries to articulate the contextual mechanisms that give rise to agentive, perspectivalized, and ideologized accounts and uses of language. Such a ‘discursive-functional’ approach was advanced by Peirce, Sapir, Whorf, Jakobson, Bateson, Hymes, and other semioticians, one of the most important frameworks being Jakobson’s (1960) model of the six functions of (linguistic) communication: the referential, emotive, conative, phatic, poetic, and ‘metalingual’ (metalinguistic, metacommunicative) functions. The critical significance of this framework is that it is a model of linguistic phenomena reflexively including itself (cf. Lucy, 1993; Mey, 2001: 173–205): Any linguistic theorizing or understanding, be it explicit or implicit, whether carried out by professional linguists or ordinary language users, can be characterized in terms of the metalinguistic (metasemantic and metapragmatic) functions of discursive interaction. In this model, semantics is characterized as the metasemantic function

of discursive interaction, that is, the system of the symbolic signs that discursive interactants believe exist prior to the discursive interactions in which those signs are presupposingly used to interpret the referential ‘meanings’ (what is said) of such interactions; on the other hand, pragmatics is equated with discursive interaction, encompassing all six functions, including the metasemantic function (to which pragmatics in the narrow sense is opposed) and the metapragmatic function, that is, the principles, maxims, norms, and other reflexive devices for textualization (framing, intertextual renvoi, and Bakhtinian voicing) that language users such as linguists use in order to regiment, interpret, and create referential and social-indexical meanings (i.e., effects) in discursive interactions. In the next section, we articulate the relationship between semantics and pragmatics in this critical view, specifically in relation to the boundary problem.
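The model just described can be summarized in tabular form; the pairing of each function with the speech-event factor it is oriented toward follows Jakobson (1960), though rendering the model as a lookup table is merely an expository convenience, not part of the original framework.

# Jakobson's (1960) six functions of communication, each oriented toward
# one factor of the speech event.
JAKOBSON_FUNCTIONS = {
    "referential": "context",    # what the message is about
    "emotive":     "addresser",  # the speaker's attitude or stance
    "conative":    "addressee",  # orientation toward the hearer (imperatives, vocatives)
    "phatic":      "contact",    # keeping the channel of communication open
    "poetic":      "message",    # the form of the message for its own sake
    "metalingual": "code",       # talk about the code itself, e.g., asking what a word means
}

On the account above, semantics corresponds to what such a table presupposes (the code), whereas pragmatics covers the whole discursive interaction in which all six functions operate.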

The Boundary Problem

As has been noted, we may articulate the relationship between pragmatics and semantics in terms of the empirical boundaries between them and between pragmatics and extralinguistics. Obviously, how we distinguish semantics from pragmatics may significantly affect how we delimit pragmatics from ‘the beyond,’ and conversely.

Semanticism

First, as to the boundary between semantics and pragmatics, we may discern three approaches: semanticism, pragmaticism, and complementarism. Semanticism envisions pragmatics as the extension of semantically encoded yet metapragmatically characterizable categories (especially moods and verba dicendi, or verbs of saying), just as traditional logic has understood pragmatics (discourse) as the extensional universe onto which intensional (i.e., metasemantically characterizable) categories are projected. Naturally, semanticism is aligned with the decontextualized, analytic methodological perspective and sees language in use from the metasemantic perspective of the denotational code (linguistic structure) presupposingly used by language users, including linguistic scientists, to interpret the referential significances of discursive interactions in which they are involved as discourse participants. Hence, this approach, typically taken by the philosophers of language such as Austin and Searle, characteristically uses the method of ‘armchair’ introspection, in which only the presupposable (and referential) aspects of communicative processes may readily emerge. This approach generally excludes the social-indexical aspects of pragmatics,

particularly as regards group identities and power relations among discourse participants and other contextual beings, inasmuch as these are created (vs. presupposed) in discursive interaction and only coincidentally, indirectly, or opaquely related to linguistic-structurally encoded symbols (cf. Lucy, 1992; Mey, 2001). In this approach, pragmatics is mostly reduced to metapragmatically characterizable types in what essentially is a semantic, denotational code; examples are mood/modality/tense categories, verbs of saying, pronouns, and other shifters (i.e., ‘denotational–indexical duplexes’; cf. Lee, 1997) or their psychological correlates, as observed in Austin’s ‘performative utterances’ and Searle’s ‘primary illocutionary forces,’ allegedly underlying the perlocutionary effects regularly created by the indirect (nonliteral) use of the tokens of such types (see Speech Acts). Hence, this approach can be characterized as denotationalist, universalist, and presuppositionalist; it is based on, and expresses, a consensualist linguistic ideology adopted by the scientific agents who, like many other language users, are inclined to see only or primarily the denotationally explicitly characterizable parts of language (cf. Lucy, 1992), while their social interest is limited to seeing language in vitro (vs. in vivo), essentially consisting of a set of universal categories commonly presupposed rather than contextually created in discursive interaction (cf. Mey, 2001). Historically, this ideology derives from the 17th-century Lockean ideas about language and communication that started to prescribe and advocate, against the rhetorical practices of Scholastic disputants and the radically egalitarian Puritans, the transparently referential (‘correct and literal’) use of language based on the ‘universal public (i.e., bourgeois) consensus’ about the proper, cooperative, rational use of language that should be presupposed by ‘anyone’ (in reality ‘Modern Standard Average Middle-Class Man’; cf. Silverstein, 1985; Bauman and Briggs, 2003).

Complementarism

Unlike semanticism, complementarism advocates the coexistence of semantics and pragmatics as ‘different but equal’ disciplines. Hence, it is compatible with both the compartmental and perspectival methodologies, as well as with the general ideologies of modern liberalism, relativism, and specialization. This may partially account for the current popularity of this approach, at least since the demise of the more ambitious projects of Generative Semantics and Searle’s semanticism in the 1970s (cf. Koyama, 2000b) (see Generative Semantics). The approach, espoused by mainstream pragmaticians such as Leech (1983) and Levinson (1983), has its roots in Grice, who set up the uniquely pragmatic Cooperative Principle to delimit

and save the symbolic realm of logico-semantics from the caprice of contextual variability and irreducible contingency (see Cooperative Principle). As this indicates, insofar as complementarism tries to construct a kind of pragmatic theory that complements the semantic theory of decontextualized signs, it is inclined to focus on pragmalinguistics, that is, the aspects of pragmatics adjacent to and compatible with semantics, such as the universal principles, maxims, and other regularities of language use that can be fairly transparently linked to propositions and other referential categories. Thus, complementarism may be characterized as a weak version of semanticism that does not try to reduce pragmatics to semantics but attempts to create a pragmatic correlate of semantic theory, as witnessed by the affinity between Searle’s semanticist doctrine of indirect speech acts and Grice’s complementarist doctrine of implicature or, more fundamentally, by the referentialist, universalist, rationalist, and consensualist characteristics shared by traditional semantics and the pragmatic theories advanced by Grice and Levinson.

From Complementarism to Pragmaticism

Nonetheless, complementarism is distinct from semanticism insofar as the former advances a uniquely pragmatic principle (or perspective). This principle (or perspective) is communication, a notion that conceptualizes discursive interaction as opposed to linguistically encoded semantic categories (even though communication is still often taken as the intentional correlate of semantic categorization). This shift from decontextualized code to contextual process is also observed in the robustly pragmatic understanding of presupposition that has emerged in the wake of Karttunen’s complementarist distinction between ‘semantic’ and ‘pragmatic presuppositions’ (cf. Mey, 2001: 27–29); in the discursive-functional model of communication, presupposition is a fundamentally pragmatic contextual process rather than an essential property of semantic types or their tokens. Discursive interactions indexically presuppose and create their (con)texts, yielding contextual appropriateness and effectiveness, both in the referential and social-indexical domains of discourse. The focus of pragmatics is thus on the discursive interactions that indexically create referential and social-indexical texts by indexically presupposing contextual variables, including but not limited to metalinguistic codes such as metapragmatic principles and metasemantic relationships (see Presupposition; Pragmatic Presupposition). Thus, complementarism may lead to the kind of pragmaticism that tries to include semantics as a part of the all-encompassing pragmatic processes of

presupposing and creative indexicality in both referential and social-indexical dimensions, involving not just ordinary language users but also linguistic scholars and including not just linguistic but also extralinguistic (sociohistorical) domains. There is, however, another, reductionist kind of pragmaticism, which is actually a variation of complementarism that narrowly limits the realm of pragmatics and exclusively deals with the part of pragmatics that correlates with semantics – the area of referentially focused pragmatic regularities. For instance, in Relevance Theory, the dialectically indexical processes of presupposing contextualization and creative textualization in discursive interactions are theorized exclusively from the referentialist perspective that tries to relate ‘what is said’ (referential text) to ‘what is meant’ (rather than to ‘what is done’), the gap between the two being bridged by the ideological assumption that discursive participants universally share the rationalistic and consensualist notions of economy, cooperation, and regular and eufunctional communication (cf. Mey, 2001: 179–181). Hence, this approach excludes the vast realm of pragmatics dealing with social conflicts in group identities and power relations, as well as historically contextualized contingent happenings, which can always defeat regularities and other contextual presuppositions and create different presuppositions and texts (cf. Auer and Luzio, 1992; Koyama, 2001a).
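As a rough illustration of this presuppose-and-create dialectic – a toy model of my own, with invented variable names, not a proposal from the literature surveyed here – an utterance can be modeled as first checking some context variables (presupposing indexicality) and then, when it passes unchallenged, writing new social-indexical values back into the context (creative indexicality):

def utter(context, presupposed, created, challenged=False):
    """Check presupposed context variables; if unchallenged, add created ones."""
    if not all(context.get(var) for var in presupposed):
        return context, "infelicitous: presupposed context does not hold"
    if not challenged:
        # The unchallenged act takes effect retroactively on the context.
        context = {**context, **created}
    return context, "act comes off"

# An order presupposes the speaker's authority and, if unresisted, ratifies it.
ctx = {"speaker_has_authority": True}
ctx, status = utter(
    ctx,
    presupposed=["speaker_has_authority"],
    created={"authority_acknowledged": True},
)
print(status, ctx)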

Historical Contextualization of the Ideologies of Pragmatics and Semantics

Thus far, we have explored how the three methodological stances are related to how we define the disciplinary boundaries and noted the correspondence between the methodological scale, stretching from componentialism to perspectivalism to critical pragmaticism, and another scale, extending from semanticism to complementarism, reductionistic pragmaticism, and total pragmaticism. Like any ideology, this configuration of metalinguistic ideologies is also embedded in the sociohistorical context; thus, we may observe a historical drift of pragmatics from componentialism and semanticism to perspectivalism and complementarism and, finally, to total and critical pragmaticism (cf. Koyama, 2001b). Again, let us start with Kant’s critical turn, which yielded two traditions: the semantic and the social scientific (i.e., pragmatic). Whereas Kant’s central concerns were ‘(re)cognition’ and ‘judgment,’ behind which the problematics of ‘meaning’ and ‘language’ were still hidden (cf. Coffa, 1991), Frege’s (1848–1925) later discovery of predicate logic replaced the Kantian notion of synthetic judgment (or value) with
analytic logic and the notions of ‘proposition’ and ‘sentence,’ further divided into ‘force’ (assertoric, interrogative, imperative, and so on) and propositional content (represented by structurally decomposable units such as the subject, predicate, and their subcategories). This led to the birth of modern semantics, as it came to be pursued by Russell (1872–1970), Carnap (1891–1970), Tarski (1902–1983), and Quine (1908–2000), as well as by the linguistic formalists of the Copenhagen and the Neo-Bloomfieldian Schools, whose impressive successes allowed the componential semantic tradition to penetrate into more empirical disciplines such as psychology and anthropology, where ‘ethnoscience’ came to flourish in the mid-20th century. Moreover, the semantic traditions of Carnap and the Neo-Bloomfieldian structural linguists, especially Harris (1909–1992), converged in Chomsky (1928–), whose generativism began to dominate linguistics in the 1960s, when the semantic tradition moved more deeply into the empirical fields, thereby giving rise to empirical semantics, as witnessed by the rise of Generative Semantics, fuzzy logic, and Rosch’s prototype semantics in the late 1960s to mid-1970s, eventually congealing into today’s cognitive linguistics. Such was the evolution of the branch of the post-Fregean semantic tradition that focused on propositional content. In contrast, the other branch, focusing on ‘force,’ was developed by the Ordinary Language Philosophers, starting with Austin, who translated Frege’s œuvre into English, to be followed by Searle, Grice, and others. This branch is usually called ‘pragmatics,’ but (as we have already seen) it is actually a kind of empirical semanticism, dealing with pragmatic matters only insofar as they can be systematically related to propositions or referential texts (‘what is said’; see Implicature; Pragmatic Determinants of What Is Said). In the early days, modern pragmatics was dominated by Speech Act Theory and by Gricean thinking: It was practiced by extending the methodological orientations of logic and structural linguistics, with their focus on denotational regularities. In those quasipragmatic theories, contingent occurrences of actual language use were still theorized as tokens of some regular Searlean type of speech act (or speech act verb) or some Gricean postulates (principle and maxims) rather than as indexical events presupposingly contextualizing other indexical events, indexical event types (e.g., speech genres, sociolects, and dialects), and symbolic structures and contextually creating certain pragmatic effects (cf. Auer and Luzio, 1992; Koyama, 2001a). In short, the notions of context and contextualization, which constitute the core of our pragmatic understanding, were yet to be given full theoretical significance, especially in comparison with the anthropological tradition,
which, since the days of Malinowski and Sapir, has focused on the (socially situated) ‘speech event’ (cf. Gumperz and Hymes, 1964; Duranti and Goodwin, 1992; Duranti, 2001; see Context). Thus, it was only in the late 1970s and 1980s, when anthropology and linguistic pragmatics began to influence one another more intimately, that the genuinely pragmatic view of ‘pragmatics’ as the idiographic, ethnographic, historiographic science of contextualization started to emerge. Then, it became clear that the (con)text indexed by language in use (‘what is done’) includes not just ‘what is meant’ (i.e., the part of the context that may be regularly related to ‘what is said’); rather, the context of language in use is primarily captured by social-indexical pragmatics dealing with ‘what is done,’ that is, with the power relations and group identities of discourse participants, involving their regionality, class, status, gender, generation, ethnicity, kinship, and other macrosociological variables, indexed and actualized in microsocial speech events to entail some concrete, pragmatic, ‘perlocutionary’ effects. The changing focus is reflected, first, in the move from the early quasipragmatic theories, which generally embraced the componential view and focused on illocutionary and other propositionally centered referential pragmatic regularities in microsocial speech events, to the next generation of more full-fledged pragmatic theories, which based themselves on the perspectival view, paying attention to the social-indexical (especially cultural) aspects of pragmatics as well. Further, the changing focus is also observed in the more recent move toward a ‘total pragmatics,’ with its critical view and its emphasis on the (micro- and macro-)contextual aspects of pragmatics. These aspects focus especially on a social-indexical pragmatics (including not just culture but also class, gender, and other sociological and historical variables) that views as its core ‘what is done’ (the pragmatic, praxis), a special subset of which is constituted by ‘what is said’ (referential practice) and by linguistic structure, including semantics (as presupposingly indexed in referential practice) (cf. Koyama, 2000a, 2000b, 2001a; Mey, 2001). Only then can semantics and pragmatics be united in their proper contextualization within the macrosocial historical matrix.

See also: Context; Cooperative Principle; Evolution of Semantics; Generative Semantics; Implicature; Interpreted Logical Forms; Irony; Metalanguage versus Object Language; Neo-Gricean Pragmatics; Nonstandard Language Use; Performative Clauses; Pragmatic Determinants of What Is Said; Pragmatic Presupposition; Presupposition; Semantics–Pragmatics Boundary; Speech Acts.

Bibliography

Auer P & Luzio A D (eds.) (1992). The contextualization of language. Amsterdam: John Benjamins.
Bauman R & Briggs C L (2003). Voices of modernity. Cambridge, UK: Cambridge University Press.
Calhoun C, LiPuma E & Postone M (eds.) (1993). Bourdieu. Chicago, IL: University of Chicago Press.
Coffa J A (1991). The semantic tradition from Kant to Carnap. Cambridge, UK: Cambridge University Press.
Duranti A (ed.) (2001). Linguistic anthropology. Oxford: Blackwell.
Duranti A & Goodwin C (eds.) (1992). Rethinking context. Cambridge, UK: Cambridge University Press.
Foucault M (1973). The order of things. New York: Vintage Books.
Geuss R (1981). The idea of a critical theory. Cambridge, UK: Cambridge University Press.
Gumperz J J & Hymes D (eds.) (1964). American Anthropologist 66(6), Part 2. Special issue on the ethnography of communication.
Hacking I (1990). The taming of chance. Cambridge, UK: Cambridge University Press.
Jakobson R (1960). ‘Concluding statement.’ In Sebeok T (ed.) Style in language. Cambridge, MA: MIT Press. 350–377.
Jakobson R (1973). Main trends in the science of language. London: George Allen & Unwin.
Koyama W (1999). ‘Critique of linguistic reason, I.’ RASK 11, 45–83.
Koyama W (2000a). ‘Critique of linguistic reason, II.’ RASK 12, 21–63.
Koyama W (2000b). ‘How to be a singular scientist of words, worlds, and other (possibly) wonderful things.’ Journal of Pragmatics 32, 651–686.

Koyama W (2001a). ‘Dialectics of dialect and dialectology.’ Journal of Pragmatics 33, 1571–1600.
Koyama W (2001b). ‘Reason, experience, and critical-historical pragmatics.’ Journal of Pragmatics 33, 1631–1636.
Koyama W (2003). ‘How to do historic pragmatics with Japanese honorifics.’ Journal of Pragmatics 35, 1507–1514.
Kroskrity P V (ed.) (2000). Regimes of language. Santa Fe, NM: School of American Research Press.
Kuklick B (1977). The rise of American philosophy. New Haven, CT: Yale University Press.
Lee B (1997). Talking heads. Durham, NC: Duke University Press.
Leech G N (1983). Principles of pragmatics. London: Longman.
Levinson S C (1983). Pragmatics. Cambridge, UK: Cambridge University Press.
Lucy J A (1992). Language diversity and thought. Cambridge, UK: Cambridge University Press.
Lucy J A (ed.) (1993). Reflexive language. Cambridge, UK: Cambridge University Press.
Mey J L (2001). Pragmatics (2nd edn.). Oxford: Blackwell.
Owen D (1994). Maturity and modernity. London: Routledge.
Silverstein M (1985). ‘Language and the culture of gender.’ In Mertz E & Parmentier R J (eds.) Semiotic mediation. Orlando, FL: Academic Press. 219–259.
Verschueren J (1999). Understanding pragmatics. London: Arnold.
Woolard K A & Schieffelin B B (1994). ‘Language ideology.’ Annual Review of Anthropology 23, 55–82.

Pre-20th Century Theories of Meaning

G Haßler, Potsdam, Germany

© 2006 Elsevier Ltd. All rights reserved.

Early Theories of Meaning

The problem of the basis on which linguistic signs mean something and can designate real objects or relations is a central subject of theories of language. The starting point of debates on this matter might be the dialog Cratylus by Plato (428/7–349/8), in which two opposite positions are contrasted: the conventionalist position (thesei), which ascribes the imposition of names to a voluntary convention between people, and the naturalist position, which supposes a natural denomination of an object depending on its properties (physei). Plato introduced

Socrates (470/69–399) as a mediator between these two positions and presented them as complementary. For Aristotle (384/3–322), names are kata syntheken: unlike the unarticulated sounds of animals, they are historically imposed. The Aristotelian concept of the sign, which corresponds to the notion of arbitrariness in later theories, implies semantic function as well as the relation between the name and the designated reality. During the following centuries, several changes brought to the fore this relation between sounds and objects and its genetic explanation and partially disregarded the question of how signs mean something. The idea that linguistic signs were not naturally motivated, but invented by voluntary imposition, found different expressions (non natura sed ad placitum, ex arbitrio, arbitrarius, willkürlich, arbitraire, arbitrary).

The explanation of the semantic function could then concentrate on the description of this voluntary imposition. For the medieval authors of modi significandi, a sound (vox) became a word by its meaning (ratio significandi). The bilateral character of the linguistic sign was expressed in this context by the opposition of significatum and quod significatur, which appeared in several variations (Thomas of Erfurt, Grammatica speculativa, I, §3: signum, vel significans; Martinus de Dacia: res designata/vox significans). The relation between both sides of the linguistic sign was presented as a problem by nominalist theories. The Logic of William of Ockham (1285?–1349?) was mainly based on semantic observations of expressions that have meanings. The signification, corresponding to the result of the previous formation of concepts, was differentiated into suppositions concerning the actual and contextual relation of a sentence. In this manner, the significatio as a potential of meaning was opposed to the suppositio, the actualized meaning. This consideration of meaning by the nominalists implied a higher degree of independence of meaning with regard to reality. The doctrine of St. Augustine (354–430) led to another conception of meaning and its relation to reality. A main tenet of the Augustinian rationalist doctrine was the merely spiritual nature of all notions and of the relations between them. The denotation of a term was regarded as a mental object that could only have a representational relation to the word and could not depend on linguistic signs and their corporeal nature. The form words obtain in the different languages was regarded as arbitrary, whereas the composition of the concept was universal and did not depend on sensations. For the rationalist thinkers, the necessity of language appeared only with communication between people, when the transmission of pure incorporeal notions was impossible. But linguistic signs met the necessities of communication in a very insufficient way because intuitive conceptions overwhelmed our mind, whereas their linguistic signs distracted from their content and slowed down the process of thinking. Although words had different forms in different languages, the ideas designated were neither Greek nor Latin but universal and independent of any sensual support. This rationalist theory limited the impact of language on cognition and, in the same way, the trustworthiness of sensory cognition. But in some authors we find the opposite perspective. Let us mention the Spaniard Luis de León (1527–1591), who wanted to achieve knowledge about the nature of religious concepts by studying the denominations of Christ in the Bible. The basis of his De los nombres de Cristo was a semantic theory that supposed a capacity of representation of the
denominated (nombrado) by the name (nombre) in cognitive processes. By the separation of the original status of language from the development of historical languages, the validity of the biblical doctrine on the origin of language was clearly established and the possibility of reflection on arbitrary signs based on human reason was opened.

Intension and Extension in Port-Royal Logic

The Grammar (1660) and the Logic (1662) of Port-Royal took up and developed semantic concepts that exercised an important influence on the further development of semantic theories. The authors of the Port-Royal Logic, Antoine Arnauld (1612–1694) and Pierre Nicole (1625–1695), treated the problem of the arbitrary sign in its complexity, regarding words not as natural (signes naturels) but as conventional signs that functioned as grounded on tradition (signes d’institution et d’établissement). But the establishment of a convention presupposed the existence of already formed ideas, which could be denominated by words chosen conventionally. The primacy of thought set prerequisites for the interpretation of arbitrariness. What could be regarded as arbitrary were not ideas themselves but only the attribution of sounds to these ideas, which would exist even without them. The meanings of the linguistic signs did not appear in the moment of their conjunction with sounds; rather, they existed as clear and reasonable ideas and were independent of their denomination. The fact that convention varied in different languages could easily prove that it had nothing to do with the nature and formation of ideas. But this conclusion, which derived from the rationalist thesis of innate ideas, was not the only restriction on the arbitrariness of signs. The authors of the Logic declared that the relation between sound and meaning was only arbitrary in the individual use of language but that it was determined by common use through communication between people. It seems remarkable that the assignment of sounds to meanings was regarded as a relation between two mental entities. The idea of the psychological nature of both sides of the linguistic sign is obviously not a product of the beginning of the 20th century. In the examination of meaning, Arnauld and Nicole distinguished the intension (compréhension) and the extension (étendue) of a sign. By intension they meant the totality of features that constituted an idea, none of which might be taken away without destroying the idea. Extension was defined as the totality of objects, notions, and subnotions to which an idea could refer. What determined the identity of a meaning was not its
reference to different objects and notions but the intension. Although it was always possible to limit the extension of a word, leaving out a feature of its intension would make it lose its identity. The reference point for the intensional definition of meaning was language as a product and not its use in communication. The meaning was obligatory in its features and was determined by convention, whereas the extension depended on the use of the word to denote either a whole class of objects or an individual. The discussion of the actualization of conventional signs in use leads to another distinction in the Port-Royal Logic. Arnauld and Nicole distinguished between a proper meaning (signification propre) and accessory ideas (idées accessoires). The latter derived from the dynamic character of meaning in communication, in which accessory ideas were evoked. For example, the sentence you have lied has the same principal (proper) meaning as you know that the contrary of what you are saying is true, but it evokes the accessory ideas of insult and contempt. Accessory ideas were not ascribed to a word by common use, but they appeared in the individual use of language. Repetition could lead to generalization of such accessory ideas and, as the result of this process, accessory ideas might be linked habitually to words, especially to synonyms. But in this case, we would have to take into account the inconsistency of such accessory ideas. The differential function of the accessory ideas was highly estimated by the authors of Port-Royal; they even demanded that accessory ideas be explained in dictionaries. This attention to the differences between words with similar meanings helped to prepare the dictionaries of synonyms that became popular in many European countries. Arnauld and Nicole also discussed oppositions and proposed a classification of them. They called ‘divisions’ such pairs of opposites as covered the whole extension of a notional field and could be related to a hyperonym (pair/impair → nombre). In cases in which the opposites were divided by a zone of indifference, they even described the field of this zone, including the antonyms in a narrow sense (Table 1). The confusion of these two types of oppositions was regarded as harmful for logical conclusions, as when a person took the negation of a term as its opposite and did not take into account the zone of indifference.

For a further differentiation, Arnauld and Nicole gave the terms for opposites shown in Table 2. The Logic of Port-Royal thus contained a comprehensive theory of different semantic oppositions and even attempts at a systematic description of the vocabulary.

Table 1  Opposites and zones of indifference

Opposites                Zone of indifference
sain/malade              indisposé, convalescent
jour/nuit                crépuscule
avarice/prodigalité      libéralité, épargne louable, générosité

Table 2  Kinds of oppositions

Name of the opposition     Examples
Termes relatifs            père/fils, maître/serviteur
Termes contraires          froid/chaud; sain/malade
Termes privatifs           vie/mort; vue/aveuglement
Termes contradictoires     voir/ne pas voir

The Recognition of the Historical and Cultural Nature of Meaning

The Recognition of the Genius of a Language

At the end of the 17th century, the notion of the genius of the language (génie de la langue) became very fashionable. It appeared in systematizations of Vaugelas's Remarques sur la langue française by Louis Du Truc (1668), Jean Menudier (1681), and Jean d'Aisy (1685) and was taken up by grammarians and philosophers as well. The 'Essay concerning human understanding' (1690) by John Locke (1632–1704) gave a new answer to the question of how thought could be influenced by language. According to Locke, linguistic signs did not represent the objects of knowledge but the ideas that the human subject created. The nominalist explanation of complex ideas led to the denial of innate ideas and to the supposition of a voluntary imposition of signs on a collection of simple ideas for which there was no pattern in reality. This rendered possible an extension of the concept of arbitrariness to the composition of meaning. Language was no longer regarded as a simple expression of a universal reason but as a system that organized thought following historical and social principles. Bernard Lamy (1640–1715), who was influenced by the Port-Royal authors but found a way to integrate their theory into the assertion of a sensual basis of human cognition, remarked on the different division of lexical areas in various languages. In his Rhetoric (La rhétorique ou l'art de parler, 1699), he explained the quantity and differentiation of words by the attention different peoples give to some fields of knowledge. In this way, he explained the existence of more than 30 words to denote 'camels' in Arabic as well as the fact that people who cultivated the sciences and arts had a developed and highly



differentiated vocabulary. Especially in the formation of diminutive and augmentative denominations, languages differed considerably: in Italian, for example, many diminutives had developed, whereas they were absent in French. Because languages used different points of view in the denomination of things and concepts, a word-by-word translation from one language into another was nearly impossible. This was an opinion largely shared by 18th-century authors. If a person looked at the motives for the formation of words, the differences between languages became even more obvious. Lamy used an old example for the argumentation on this subject: the words for 'window' in the Romance languages, which are derived from three Latin etyma. The Spanish ventana (< ventus) emphasizes the blowing of the wind, the Portuguese janela (< janua) uses a comparison with a gate, and the French fenêtre (< fenestra < Greek phainein 'to shine') uses the transmission of light. This meant that the same word, which stood for a fundamental idea, had different significations in different languages.

The Study of Metaphors

With the positive evaluation of the cognitive force of sensation and emotional speech, the problem of metaphors became more important. César Chesneau Du Marsais (1676–1756) dedicated a digression in his book Des Tropes ou des différents sens dans lesquels on peut prendre un même mot dans une même langue (1730) to the study of metaphors and idioms in various languages. He started with the difficulties of translating lexical metaphors and the lack of assistance from dictionaries. Although it was possible to find an equivalent for the proper meaning (sens propre) for which the word had originally been installed, it was often impossible to translate all the figurative meanings (sens figurés) of words. The use of words in their metaphorical meaning was not regarded as an exception. Naming an idea by a sign that was related to an associative idea was a regular case in language. In addition, the use of words in their figurative meanings could fill the gaps in vocabulary. The French word voix, for example, had, in addition to its proper meaning 'the sounds emitted by the mouth of animals, and especially by the human mouth,' several figurative senses: (1) 'inner inspiration or pangs of conscience' in the sentence le mensonge ne saurait étouffer la voix de la vérité dans le fond de nos cœurs ('the lie will not be able to suffocate the voice of truth in the depth of our hearts'), (2) 'inner sensations' in word groups such as la voix du sang ('the voice of blood') and la voix de la nature ('the voice of nature'), and (3) 'opinion, view, judgment.' Not all

of these figurative meanings of voix could be translated by the Latin word vox. In the same manner, it would not be possible to translate porter ‘to carry’ by ferre in the case of figurative uses such as porter envie, porter la parole, and se porter bien. Du Marsais criticized the usual practice in French-Latin dictionaries, which gave just a series of verbs (in the case of porter: ferre, invidere, alloqui, valere) without mentioning the specificity of their semantic qualities. If dictionaries continued in this way, they might arrive at the explanation of aqua ‘water’ by feu ‘fire,’ simply because the cry for help regarding a fire in Latin was aquas aquas and in French au feu. From a certain referential meaning of a word, it was not possible to infer its essential semantic property. As Du Marsais claimed, dictionaries should first of all give the significations propres and then explain the figurative meanings that could not be deduced from the proper one. This explanation should be given by examples and by grouping the figurative meanings according to the given state of language. But a historical explanation of figurative meanings should also be possible because all of them developed out of the proper meaning that was regarded as the original one. The manner of such a development might differ in various languages, which would lead to a complicated picture of the relations between figurative meanings of words in two languages. In his Fragment sur les causes de la parole, Du Marsais arrived at an important theoretical conclusion concerning the semantic function of words. He supposed meaning to be a systematic virtual property of language and opposed it to the realizations of words and their meanings in language use. The systematic property, called la valeur des mots by Du Marsais, was acquired by education and contact with other people. It corresponded to an abstraction of the different senses a word might have.

The Study of Synonymy

The study of synonymy in 17th- and 18th-century linguistics did not concern just supplying different means for the expression of the same idea but mainly differentiating the meanings of synonyms and determining the exact definition of the meaning of every single word. In addition to the theoretical interest that synonyms represented as a simple systematic phenomenon, practical needs contributed to the study of synonymy. The starting point was the doctrine of a verbum proprium, an appropriate word for each purpose, which should be defined by its invariable semantic properties. Defining synonyms became an aristocratic game in France in the 17th century and was disseminated throughout Europe.

738 Pre-20th Century Theories of Meaning

The most influential work was written by Gabriel Girard (1677/8–1748) (La Justesse de la langue française, ou les différentes significations des mots qui passent pour synonymes, 1718; Synonymes français, leurs significations et le choix qu'il faut en faire pour parler avec justesse, 1736). Girard declared explicitly that the language of a certain state formed a system, despite the unsystematic character of the formation of languages. In this context, he defined the valeur of a word as its correct meaning, which corresponded to the current use in the language. He started from a rationalistic position and supposed that ideas were independent of language and only had to be denominated by words. The valeur of a word consisted in the representation of the ideas that language use had related to it, and therefore it was determined by a social convention or an explicit individual imposition. But in the description of synonyms, Girard mainly regarded the differences of their values. Following his approach, synonyms were words that expressed a common idea but were distinctive in the expression of accessory ideas. The similarity of their meanings did not encompass the whole area of their significations. The accessory ideas gave a special and proper character to every synonym and determined its correct use in a certain situation. The richness of a language was therefore not only determined by a multitude of words but by the distinctions between their meanings and by precision in the expression of simple and complex ideas. In Girard's system of synonyms, only the valeur counted, and it was described by the relations of a synonym to words with similar significations. He rejected the opinion that synonyms should only help to avoid monotony by variation in sound. In the practical distinction of synonyms, Girard presented the similarities and distinctions of synonyms using the genus proximum–differentia specifica scheme of antiquity and scholasticism. He tried not to create differences that could not be observed in language use; however, in many cases he attempted to establish a logical relation between synonyms, for example, a gradation (ordinaire, commun, vulgaire, trivial), a purpose–means relation (projet – dessein), or causal relations (réformation – réforme). Girard's description of complex lexical structures showed that the term synonymy could be used in 18th-century texts for lexical fields and their structure. The words designated as synonyms could be different in their intension and extension, showed use restrictions, or entered into various oppositions. On this basis, Girard distinguished the elements of the semantic field of intellectual qualities, establishing semantic compatibilities and determining features of

meaning for every word. Apart from a substantial description of the meaning of words, Girard also supplied oppositions, such as:

esprit – bêtise
raison – folie
bon sens – sottise
jugement – étourderie
entendement – imbécilité
conception – stupidité
intelligence – incapacité
génie – ineptie

In Girard's doctrine, the value (valeur) of a word was a use-independent semantic property that could be described by its relations to other words. In the use of language, it allowed different significations to be produced. Girard's approach was widely disseminated in Europe. It was used, for example, by Johann August Eberhard (1730–1809) in his Synonymisches Handwörterbuch der deutschen Sprache für alle, die sich in dieser Sprache richtig ausdrücken wollen (1802). In Spain, synonymy became an important field of discussion, as expressed in works such as the Ensayo de los synonimos by Manuel Dendo y Avila (1756).

Condillac's and the Idéologues' Semantics

Etienne Bonnot de Condillac (1714–1780) formulated a coherent sensationalist theory of cognition by substituting for Locke's dualist explication of sensation and reflection the concept of transformed sensation (sensation transformée), which helped to explain even complex thought as being made up of simple sensations. The instrument allowing this transformation was language, to which Condillac attributed an important role in human thought. The signs of human language operated according to the principle of analogy, which corresponded to a motivated relation between signs of analogous content. Instead of signes arbitraires, and in order to emphasize the genetic character of language, Condillac used the term signes d'institution, finally preferring in his Grammar (1775) the denomination signes artificiels. An arbitrary relation existed not only between sounds and the ideas related to them but also in the composition of complex ideas. This arbitrariness was relativized by the long historical process of imposition of signs, in which a language evolved its specific shape or genius. This specificity concerned the functions of languages as analytic means, and it became important in the discussions of the Idéologues at the end of the 18th century. The linguistic ideas of the Idéologues have mainly been studied in relation to theories they

Pre-20th Century Theories of Meaning 739

took up and modified, as well as from the point of view of the continuation of their legacy by later linguistic theorists. Whether the Idéologues really were representatives of a transitional thinking that rendered possible the explication of the school grammar categories of the 19th and 20th centuries has also been discussed. They were considered, moreover, as the starting point and the background for several theoreticians, for example Wilhelm von Humboldt (1767–1835) and Ferdinand de Saussure (1857–1913), for whom language was, above all, an instrument of the articulation of thought. But the Idéologues themselves did not admit much continuity with the theory of Condillac; they even stressed their independence from all previous authors. This was an expression of a break with mechanical education.

Meaning in 19th-Century Linguistics

General Evolution

In this section we do not follow the fascinating history of reflection on meaning in general and on the meaning of words in particular in the 19th century; instead, we examine some approaches to semantic description. In the 19th century, the fascination with meaning led to a sudden increase in the number of books, treatises, and pamphlets on semantic topics, in the widest sense of the term. Books on words were widely read and shaped the popular image of semantics, thereby undermining its claims as a science. Even Bréal's Essai de sémantique (1897) was regarded as entertaining or amusing. Conversations on etymology contributed to this general soft image of semantics in a century that is now considered the advent of historical and comparative linguistics, with a focus on the discovery of sound laws. Nevertheless, during the 19th century semantics was a very productive field. It led to innovations and went through three phases or stages (see Nerlich, 1992: 3). During the first stage, questions about the origin of language (see the previous discussion of the proper signification, the Grundbedeutungen, or original meanings) were gradually replaced by the problem of a continuous evolution or transformation of language and meanings. The search for a true meaning was abandoned in favor of the search for the types, laws, or causes of semantic change. It was claimed that the meaning of a word was not given by its etymological ancestry but by its current use, and that the forgetting of etymology was an important factor in the functioning and evolution of language. During the second stage, questions about the types and causes of semantic change were slowly replaced by

reflections on the mechanism of communication, comprehension, and linguistic interaction between speakers and hearers in a particular situation or context. During the third stage, semantics merged with what we now call pragmatics; word meaning was seen as an epiphenomenon of sentence meaning. Even though these three stages did not emerge neatly separated, profound changes took place. Semantics shed its early historical ties to comparative philology to become more and more attached to other fields, such as psychology and sociology. The history of semantics in the various countries did not, in any way, unfold simultaneously. There were also major differences among influences on the development of semantics, stemming from various philosophical traditions, on the one hand, and from the various natural sciences (biology, geology, medicine, etc.), on the other. Finally, the development of semantics differed by the influence of other linguistic disciplines and fields such as rhetoric, classical philology, comparative philology, and grammar, in general, and etymology, lexicography, and phonetics, in particular. The birth of semasiology in Germany and sematology in England almost coincided. Semasiology was brought into being by Christian Karl Reisig's (1792–1829) lectures, given in the 1820s and posthumously published in 1839; sematology was founded with the publication of Benjamin Humphrey Smart's (1786–1872) treatise on sign theory, Outline of sematology (1831). In Germany, the maturation process of the discipline started with the edition of Friedrich Haase's (1808–1867) lectures on semasiology in 1874, followed by the re-edition of Reisig's work from 1881 to 1890. In England, sematology matured when, in 1857, the Philological Society of London decided to embark on a huge project: the New English Dictionary. In 1831, Smart already noted the similarity and difference between French Ideology and his sematology. Both were theories of signs, but French Ideology studied the development of ideas in isolation, whereas Smart was interested in the context and in meaning construction. At the end of the 19th century, a group of linguists adhered strictly to the doctrine of August Schleicher (1821–1868) and regarded linguistics as a natural science and language as an evolving organism. Similar metaphors were also used by Émile Littré (1801–1881), already in 1850, and by Darmesteter (1887), although in Arsène Darmesteter's (1846–1888) case the influence of psychology on his semantic thought was soon outweighed by that of biology. The psychological influence brought about a shift from an interest in the mere classifications of types or

740 Pre-20th Century Theories of Meaning

laws of semantic change to a search for the causes of semantic change. According to different versions of psychology, especially of Völkerpsychologie (psychology of peoples), developed by Heymann Steinthal (1823–1899) and Moritz Lazarus (1824–1903), on the one hand, and by Wilhelm Wundt (1832–1920), on the other, these causes were to be sought in different phenomena. Psychology became the midwife to the second and most successful kind of semantics, developed in France mainly by Michel Bréal (1832–1915). He too referred to Condillac and the conception of the sign put forward by the Idéologues, in order, however, to criticize those naturalists who saw language as a natural organism or studied the growth, life, and death of words. For Bréal, words as signs had no autonomous existence. They changed because they were signs of thought, which were created and used by the speakers of a language. In his lectures and articles, he argued, like William Dwight Whitney (1827–1894) in the United States, against the German way of doing linguistics – the search for an Ursprache (original language) and the study of language as an organism. He excluded from his criticism only Franz Bopp (1791–1867), citing his work as an example of careful methodology.

Semasiology in Germany

Between 1830 and the end of the 19th century, it is possible to detect an increasing broadening of the field of semasiology, from the atomistic study of words and the rather ill-conceived classification of types of semantic changes toward the recognition of the importance of cultural and social factors and the acknowledgment of context as a factor in semantic change. The logico-historico-classificatory tradition (Reisig; Haase; and Ferdinand Heerdegen, 1845–1930) was followed by the psychological phase (Lazarus and Steinthal) and the further development of semasiology (Max Hecht, 1857–?). Another landmark was the Völkerpsychologie of Wundt, who made possible a psychological tradition in semasiology. At the turn of the century, the time was ripe for new ideas in semantics, whether developed in response to Wundt or independently of his work (e.g., by Hermann Paul, 1846–1921; Philipp Wegener, 1848–1916; Stücklein, 19th century; and Karl-Otto Erdmann, 1858–1931).

The Development of the Sémantique in France

For Bréal, the history of a language was not just an internal history; it was intimately linked to political, intellectual, and social history, where creation and change were at any moment our own work. In his

lectures (1866–1868), he showed that the meaning or function of words could survive the alteration of form. The force countering the alterations and filling the gaps was the human mind. Darmesteter asked what the causes and laws of semantic change were. The evolution of language at all its higher levels was based on what Darmesteter called, in accordance with Bréal, the disregard of etymology. What was here described as a loss of etymological adequacy was seen in La vie des mots étudiée dans leurs significations (1887) as the gain of an adequate signification for language use.

From Sematology to Significs in England

English semantics in the 19th century was not a theoretically well-established field. It consisted of two disjointed strands of thought (Nerlich, 1992: 207): sematology and semasiology. The first type of semantics was a predominantly philosophical one that emerged from thinkers such as Locke, John Horne Tooke (1736–1812), and Dugald Stewart (1753–1828), culminating in Smart. The second type was a predominantly practical one that sprang from lexicographers and etymologists such as Samuel Johnson (1709–1784), Noah Webster (1758–1843), and Charles Richardson (1775–1865), culminating in Richard Chenevix Trench (1807–1886), James Murray, and Walter William Skeat (1835–1912). By the end of the century, Lady Victoria Welby (1837–1912) had rekindled philosophical interest in semantic questions and fostered a new approach to the problem of meaning. This approach was called significs and constituted a return to Smart's reflections on semiotics, but it lifted sematology, semasiology, and la sémantique onto a higher psychological, moral, and ethical plane.

Summary

To summarize this section (see Nerlich, 1992: 19), in Germany we can distinguish between two main approaches to semantics: an atomistic-historical attitude and a holistic-psychological one. In France, the historical sémantique sprang directly from psychological insights into language evolution. And in England, the psychological approach was marginal and the historical one dominant until the end of the century.

See also: Aristotle and Linguistics; Cognitive Semantics; Color Terms; Definition in Lexicology; Dictionaries and Encyclopedias: Relationship; Ideational Theories of Meaning; Lexical Fields; Lexical Semantics; Lexicology; Metaphor and Conceptual Blending; Metonymy; Nominalism; Onomasiology and Lexical Variation; Synonymy; Thought and Language.

Presupposition 741

Bibliography

Arnauld A & Nicole P (1965–1967). L'Art de penser. La Logique de Port-Royal (2 vols). Baron von Freytag Löringhoff B & Brekle H E (eds.). Stuttgart-Bad Cannstatt: Friedrich Frommann Verlag (Günther Holzboog).
Auroux S (1979). La sémiologie des encyclopédistes. Essai d'épistémologie historique des sciences du langage. Paris: Payot.
Auroux S & Delesalle S (1990). 'French semantics of the late nineteenth century and Lady Welby's significs.' In Schmitz W (ed.) Essays on significs: papers presented on the occasion of the 150th anniversary of the birth of Victoria Lady Welby (1837–1912). Amsterdam/Philadelphia: John Benjamins. 105–131.
Dutz K D & Schmitter P (eds.) (1983). Historiographia Semioticae: Studien zur Rekonstruktion der Theorie und Geschichte der Semiotik. Münster: Nodus Publikationen.
Gauger H-M (1973). Die Anfänge der Synonymik: Girard (1718) und Roubaud (1785). Ein Beitrag zur Geschichte der lexikalischen Semantik. Tübingen: Tübinger Beiträge zur Linguistik.
Gipper H & Schmitter P (1975). 'Sprachwissenschaft und Sprachphilosophie im Zeitalter der Romantik.' In Sebeok T A (ed.) Current trends in linguistics, vol. 13/2. The Hague: Mouton. 481–606.
Gordon W T (1982). A history of semantics. Amsterdam/Philadelphia: John Benjamins.
Haßler G (1991). Der semantische Wertbegriff in Sprachtheorien vom 18. bis zum 20. Jahrhundert. Berlin: Akademie-Verlag.
Haßler G (1999). 'Sprachtheorie der idéologues.' In Schmitter P (ed.) Geschichte der Sprachtheorie, vol. 4. Tübingen: Narr. 201–229.
Knobloch C (1987). Geschichte der psychologischen Sprachauffassung in Deutschland von 1850–1920. Tübingen: Niemeyer.
Nerlich B (1992). Semantic theories in Europe 1830–1930. Amsterdam/Philadelphia: John Benjamins.
Quadri B (1952). Aufgaben und Methoden der onomasiologischen Forschung. Eine entwicklungsgeschichtliche Darstellung. Bern: Francke.
Swiggers P (1982). 'De Girard à Saussure: Sur l'histoire du terme valeur en linguistique.' In Travaux de linguistique et de littérature, publiés par le centre de Philologie et de littératures romanes de l'Université de Strasbourg 20/1: Linguistique, Philologie, Stylistique. Paris: Klincksieck. 325–331.

Presupposition

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

Introduction

A presupposition is a semantic property of a sentence making that sentence fit for use in certain contexts and unfit for use in others. This property is partly based on the fact that if a sentence B presupposes a sentence A (B ≫ A), then B entails A (B ⊨ A): whenever B is true, A is necessarily also true, given the same situation referred to, in virtue of the meanings of B and A. Presuppositions (P-entailments) are thus a subclass of entailments. Entailments that are not presuppositional are classical or C-entailments. (1) illustrates a C-entailment (⊨c); (2a–d) illustrate P-entailments (≫):

(1) Jack has been murdered. ⊨c Jack is dead.
(2a) Jack lives in Manchester. ≫ Jack exists.
(2b) Jill has forgotten that Jack is her student. ≫ Jack is Jill's student.
(2c) Jack is divorced. ≫ Jack was once married.
(2d) Only Jack left. ≫ Jack left.

(2a) illustrates existential presupposition. (2b) exemplifies factive presuppositions (Kiparsky and

Kiparsky, 1971): the factive predicate (have forgotten) requires the truth of the that-clause. (2c) is a case of categorial presupposition, derived from the lexical meaning of the main predicate (be divorced). (2d) belongs to a remainder category, the presupposition in question being due to the particle only. There are various differences between P-entailments and C-entailments. When B ≫ A, A is somehow 'prior' to B, restricting the domain within which B is interpretable. Presuppositions present themselves specifically, whereas C-entailments are 'unguided' and thus lack the function of restricting the interpretation domain. This makes presupposition relevant for the cognitive aspects of linguistic information transfer. There is also a logical difference, especially regarding negation, the central operator in any logic. In standard logic a sentence that is false for any reason whatsoever is made true by the preposed negation. In language, however, negation is sensitive, in default uses, only to C-entailments, leaving the P-entailments intact. Suppose (2d) is false on account of a C-entailment's falsity, for example because other people left as well. Then (3a), the negation of (2d), is true and (3b) is coherent, as expected. Not so when a P-entailment of (2d) is false, as in (3c), where the presupposition 'Jack left' is denied. (3c) is incoherent because


Not only Jack left still presupposes that Jack left ('!!' indicates incoherence):

(3a) Not only Jack left.
(3b) Not only Jack left: other people left as well.
(3c) !!Not only Jack left: Jack did not leave.

This raises the question of the truth value of (3a) in cases where, say, Jack did not leave. In standard logic, the entailment from both (2d) and its negation (3a) to Jack left means that if Jack did not leave, both (3a) and (2d) are false simultaneously, which violates the Principle of Contradiction (‘a sentence and its negation cannot be true or false simultaneously’). Standard logic thus rules out falsity for Jack left, because its falsity leads to a contradiction. This makes Jack left a necessary truth. But Jack left is, if true, contingently so. Therefore, standard logic is inadequate for presuppositions. Many feel that this calls for a rejection of the Principle of the Excluded Third or PET (‘any sentence expressing a proposition is either true or false, without any values in between or outside’) and for the introduction of a third truth value into the logic of language. Others prefer to keep standard bivalent logic intact and seek a pragmatic way out in terms of conditions of use. This question is discussed in the next section.

Operational Criteria

Presuppositions are detectable ('observable') irrespective of actual token use. Like C-entailments, P-entailments can be read off isolated sentences, regardless of special context. Yet they evoke a context. (2a) requires it to be contextually given that there exists someone called 'Jack' and thus evokes such a context, asserting that he lives in Manchester; (2b) evokes a context where Jack is Jill's student, asserting that Jill has forgotten that; (2c) requires a context where Jack was once married, and asserts that the marriage has been dissolved; and (2d) requires a context where Jack left, while asserting that no one else left. This, together with the entailment criterion, provides a set of operational criteria to recognize presuppositions. First, if B ≫ A then B ⊨ A. The usual heuristic criterion for an entailment B ⊨ A is the incoherence of the juxtaposition of not(A) with B. On the whole, this suffices as a criterion. For example, (4a) does not entail, and therefore does not presuppose, (4b), because (4c) is still coherent.

(4a) Lady Fortune neighs.
(4b) Lady Fortune is a horse.
(4c) Lady Fortune is not a horse, yet she neighs.

But this criterion overkills when the entailing sentence is qualified by an epistemic possibility operator,

like English may, as in (5a), which does not entail (5b), even though (5c) is incoherent. Epistemic possibility requires compatibility of what is said to be possible with what is given in discourse or knowledge. Therefore, if B ⊨ A, then with not(A) in the knowledge base, possibly(B) results in inconsistency, although possibly(B) does not entail A. The entailment criterion can be refined, without loss of generality, by testing the (in)coherence of the juxtaposition of possibly(not(A)) with B, as in (5d). Because (5d) is coherent, (5a) ⊭ (5b):

(5a) Jack may have been murdered.
(5b) Jack is dead.
(5c) !!Jack is not dead, yet he may have been murdered.
(5d) Jack may not be dead, yet he may have been murdered (and thus be dead).

To distinguish P-entailments from C-entailments further criteria are needed. First there is the projection criterion: if B ≫ A and B stands under an entailment-canceling operator like possibly or not or believe, A survives not as a P-entailment but as a more or less strongly invited presuppositional inference (>). Generally, O(BA) > A, where 'BA' stands for 'B presupposing A' and 'O' for an entailment-canceling operator. In standard terminology, the presupposition A is projected through the operator O. The conditions under which presuppositions of embedded clauses are projected through higher operators constitute the projection problem of presupposition. Projection is typical of P-entailments, as in (6), not of C-entailments, as in (7):

(6) Jill believes that Jack is divorced > Jack was once married.
(7) Jill believes that Jack has been murdered ≯ Jack is dead.

The projection criterion is mostly used with negation as the entailment-canceling operator. Strawson (1950, 1952) held, incorrectly, that presupposition is always preserved as entailment under negation. In his view, a sentence like:

(8) The present king of France is not wise.

still presupposes, and thus entails, that there exists a king of France, who therefore, if (8) is true, must lack wisdom. Although presupposition is, in fact, normally weakened to invited inference under negation, Strawson’s ‘negation test’ became the standard test for presupposition. Provided the condition of ‘entailment’ is replaced by that of ‘at least invited inference,’ the test is sound. Then there is the discourse criterion: a discourse bit A and/but BA (with allowance for anaphoric

Presupposition 743

processes) is felt to be orderly and well planned – that is, sequential. The condition of sequentiality is used to characterize stretches of acceptable text that have their presuppositions spelled out ('ᵖ' signals sequentiality):

(9a) ᵖThere exists someone called 'Jack,' and he lives in Manchester.
(9b) ᵖJack is Jill's student, but she has forgotten that he is.
(9c) ᵖJack was married, but he is divorced.
(9d) ᵖJack left, and he is the only one who did.

C-entailments and inductive inferences behave differently. When they precede their carrier sentence the result may still be acceptable, yet there is a qualitative difference, as shown in (10a,b), where a colon after the first conjunct is more natural ('#' signals nonsequential but coherent discourse):

(10a) #Jack is dead: he has been murdered.
(10b) #Jack earns money: he has a job now.

The discourse criterion still applies through projection: A and/but O(BA) is again sequential (the entailment-canceling operators are printed in italics):

(11a) ᵖJack really exists, and Jill believes that he lives in Manchester.
(11b) ᵖJack is Jill's student, but she has probably forgotten that he is.
(11c) ᵖJack was once married, and he is not divorced.
(11d) ᵖJack left, and he is not the only one who did.

These tests reliably set off P-entailments from C-entailments.

The Logical Problem

The Threat to Bivalence

The first to see the threat posed by presuppositions to standard logic was Aristotle's contemporary Eubulides of Miletus (Kneale and Kneale, 1962: 113–117). He formulated (besides other paradoxes such as the Liar) the paradox of the Horned Man (Kneale and Kneale, 1962: 114): "What you have not lost you still have. But you have not lost your horns. So you still have horns." This paradox rests on presupposition. Read B for You have lost your horns and A for You had horns. Now B ≫ A (the predicate have lost induces the presupposition that what has been lost was once possessed). Eubulides implicitly assumed that P-entailments are preserved under negation: B ⊨ A and not(B) ⊨ A. Under PET, this would make A a logically necessary truth, which is absurd for a contingent sentence like You had horns. To avoid this, PET would have to be dropped, very much against Aristotle's wish. Although Aristotle himself was unable to show Eubulides wrong, there is a flaw in the paradox. It lies in the incorrectly assumed entailment in the first premise "What you have not lost you still have." For it is possible that a person has not lost something precisely because he never had it. The same problem was raised by Strawson (1950, 1952), but with regard to existential presuppositions. Like Eubulides, Strawson assumed full entailment of presupposition under negation and concluded that PET had to go. For him, nonfulfillment of a presupposition leads to both the carrier sentence and its negation lacking a truth value altogether. Frege (1892) had come to the same conclusion, though from a different angle. In a sentence like:

(12) The unicorn ran.

analyzed as 'Run(the unicorn)', the subject term lacks a referent in the actual world, though the existence of such a referent is presupposed. That makes it impossible to test the truth of (12): since there is no unicorn, there is no way to check whether it actually ran. Therefore, Frege (and Strawson) concluded, (12) lacks a truth value. This posed a profound problem for standard logic in that the applicability of standard logic to English would have to be made dependent on contingent conditions of existence – a restriction no logician will accept. In the effort to solve this problem two traditions developed, the Russell tradition and the Frege–Strawson tradition.

The Russell Tradition

In his famous 1905 article, Russell proposed a new analysis for sentences with definite terms, like (13a). Putting the new theory of quantification to use, he analyzed (13a) as (13b), or 'there is an individual x such that x is now king of France and x is bald, and for all individuals y, if y is now king of France, y is identical with x':

(13a) The present king of France is bald.
(13b) ∃x [KoF(x) ∧ Bald(x) ∧ ∀y [KoF(y) → x = y]]
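What this buys Russell can be checked directly. The following sketch (Python; the toy model and the names DOMAIN, KOF, and BALD are invented for illustration, not part of the source) evaluates (13b) in a model containing no king of France:

# Sketch: evaluating (13b) in a tiny invented model.
DOMAIN = {'louis', 'jack'}   # individuals in the model
KOF = set()                  # extension of 'present king of France': empty
BALD = {'louis'}             # extension of 'bald'

def truth_13b():
    # ∃x [KoF(x) ∧ Bald(x) ∧ ∀y [KoF(y) → x = y]]
    return any(
        x in KOF and x in BALD
        and all((y not in KOF) or (y == x) for y in DOMAIN)
        for x in DOMAIN
    )

print(truth_13b())  # False: (13a) comes out plainly false, not truth-valueless

With an empty extension for KoF, the existential quantification fails and (13a) comes out false rather than truth-valueless; this is how the analysis preserves bivalence.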

In order to save bivalence, Russell thus replaced the time-honored subject-predicate analysis with an analysis in which the definite description the present king of France no longer forms a constituent of the logically analyzed sentence, but is dissolved into quantifiers and propositional functions. The negation of (13a) should be (13b) preceded by the negation operator, i.e. (14a). However, Russell held, speakers often prefer, for reasons best known to themselves, to interpret The present king of France


is not bald as (14b), with internal negation over 'Bald(x)':

(14a) ¬∃x [KoF(x) ∧ Bald(x) ∧ ∀y [KoF(y) → x = y]]
(14b) ∃x [KoF(x) ∧ ¬Bald(x) ∧ ∀y [KoF(y) → x = y]]

This makes sentences like (8) ambiguous. This analysis, known as Russell's Theory of Descriptions, was quickly accepted by logicians and philosophers of language, as it saved PET. At the same time, however, it drove logicians and linguists apart, as it defies any notion of sentence structure. Moreover, the 'uniqueness clause' in (13b), ∀y [KoF(y) → x = y], saying that only one king of France exists, is meant to account for the uniqueness expressed by the definite article. In fact, however, the definite article implies no claim to uniqueness of existence, only to discourse-bound uniqueness of reference. Further, this analysis is limited to definite descriptions and is unable to account for other kinds of presupposition. Factive and categorial presuppositions, and those derived from words like all, still, or only, fall outside its coverage. An important objection is also that negation can only cancel presuppositions when it is a separate word (not a bound morpheme) and in construction with a finite verb. In all other cases, the negation fully preserves P-entailments. Thus, (3a), with not in construction with only, preserves the presupposition induced by only. Moreover, sentence-initial factive that-clauses preserve presuppositions even though the negation is constructed with the finite verb:

(15a) That Jack left surprised Jill ≫ Jack left.
(15b) That Jack left did not surprise Jill ≫ Jack left.

Likewise for cleft and pseudocleft sentences:

(16a) It was Jack who left / The one who left was Jack ≫ Someone left.
(16b) It wasn't Jack who left / The one who left wasn't Jack ≫ Someone left.

When cases like these, overlooked by the authors discussed, are taken into account and the logic is kept bivalent, the presuppositions of sentences like (2d) and (3a), (15a,b), or (16a,b) would again have to be necessary truths. The same goes for:

(17a) All men are mortal ≫ There exist men.
(17b) Not all men are mortal ≫ There exist men.

In standard Predicate Calculus, however, (17a) does not entail (and thus cannot presuppose) that there exist men, whereas (17b) does, because ‘not all F is G’ is considered equivalent with ‘some F is not G,’ which entails the existence of at least one F. Yet both (17a) and (17b) satisfy the operational criteria given

earlier. Standard Predicate Calculus thus seems to fit the presuppositional facts badly. To account for other than existential presuppositions some have proposed to change Russell's analysis into:

(18) ∃x [KoF(x)] ∧ Bald(he)

or 'there is a king of France, and he is bald'. He is now no longer a bound variable but an anaphoric pronoun. With a logical mechanism for such anaphora (as in Kamp, 1981; Groenendijk and Stokhof, 1991), this analysis can be generalized to all categories of presupposition. A sentence BA is now analyzed as A and BA, and Not(BA), though normally analyzed as A and Not(BA) with small scope not, can also, forced by discourse conditions, be analyzed as Not(A and BA), with large scope not. This analysis, which saves PET, is known as the Conjunction Analysis for presupposition. Anaphora is needed anyway, because Russell's analysis fails for cases like (19), where quantifier binding is impossible for it, which is in the scope of I hope, whereas I hope is outside the scope of I know:

(19) I know that there is a dog and I hope that it is white.

The Conjunction Analysis, however, still cannot account for the fact that (20a) is coherent but (20b) is not:

(20a) There is a dog and it is white, and there is a dog and it is not white.
(20b) !!There is a dog and it is white and it is not white.

(20a) speaks of two dogs, due to the repetition of there is a dog, but (20b) speaks of only one. Yet the Conjunction Analysis cannot make that difference, because the repetition of there is a dog makes no logical or semantic difference for it. Attempts have been made to incorporate this difference into the logic (e.g., Kamp, 1981; Heim, 1982; Groenendijk and Stokhof, 1991) by attaching a memory store to the model theory that keeps track of the elements that have so far been introduced existentially. Even then, however, the Conjunction Analysis still postulates existence for term referents whose existence is denied:

(21) Santa Claus does not exist.

The Frege–Strawson Tradition

Strawson (1950, 1952) was the first to oppose the Russell tradition. He reinstated the traditional


subject-predicate analysis and discussed only existential presuppositions. Negation is considered presupposition-preserving. Sentences with presupposition failure are considered truth-valueless. Strawson's definition of presupposition is strictly logical: B ≫ A =def B ⊨ A and Not(B) ⊨ A. This analysis requires a gapped bivalent propositional calculus (GBPC), shown in Figure 1. Insofar as truth values are assigned, GBPC preserves standard logic. Moreover, * is 'infectious': when fed into a truth function it yields *. Remarkably, GBPC limits the applicability of logic to situations where the presuppositions of the sentences involved are true. The applicability of GBPC thus varies with contingent circumstances. Wilson (1975) and Boër and Lycan (1976) side with Russell and criticize Strawson, showing examples of presupposition-canceling under negation: (22a–c) are coherent, though they require emphatic, discourse-correcting accent on not:

(22a) The present king of France is NOT bald: there is no king of France!
(22b) Jill has NOT forgotten that Jack is her student: Jack isn't her student!
(22c) Jack is NOT divorced: he never married!

For these authors, classical bivalent logic is adequate for language; P-entailments differ from C-entailments only pragmatically. There would be a point if (a) a pragmatic explanation were available, and (b) presuppositions were always canceled under negation. But neither condition is fulfilled. In fact, the presupposition-canceling 'echo' negation NOT of (22a–c) is impossible for cases that preserve P-entailments under negation:

(23a) !!NOT only Jack left: he didn't leave!
(23b) !!NOT all students protested: there weren't any students!
(23c) !!That Jack left did NOT surprise Jill: he didn't leave!
(23d) !!The one who left was NOT Jack: nobody left!

Likewise for the negation required with negative polarity items (NPIs) in assertive main clauses (NPIs are printed in italics):

Figure 1 Strawson's gapped bivalent propositional calculus (GBPC). Key: ∼, presupposition-preserving negation; T, truth; F, falsity; *, unvalued.

(24a) !!Jack does NOT mind that he is in trouble: he isn't in trouble!
(24b) !!Jack has NOT come back yet: he never went away!
(24c) !!Jill has NOT seen Jack in weeks: she doesn't exist!

This analysis is thus fatally flawed.

The Trivalent Solution

One may envisage a three-valued logic, identical to standard bivalent logic but for a distinction between two kinds of falsity, each turned into truth by a separate negation operator. Minimal falsity (F1) results when all P-entailments are true but not all C-entailments, radical falsity (F2) when one or more P-entailments are false. Correspondingly, minimal negation (∼) turns F1 into truth (T) and T into F1, leaving F2 unaffected, whereas radical negation (≈) turns F2 into T and both T and F1 into F1. In Kleene's (1938) trivalent propositional calculus, ∧ yields T only if both conjuncts are T, F1 when either conjunct is F1, and F2 otherwise. Analogously, ∨ yields T when either conjunct is T, F1 only if both conjuncts are F1, and F2 otherwise. The corresponding tables are given in Figure 2 (see Multivalued Logics, where the value F2 is named 'indefinite' or I). This logic preserves all theorems of standard logic when bivalent ¬ replaces trivalent ∼. Kleene's calculus lacks the radical negation (≈), but comes to no harm if it is added. Kleene's calculus is used by some presuppositional logicians (e.g., Blau, 1978). It is empirically problematic in that it yields F1 for 'A ∧ B' when either A or B is F2 whereas the other is F1, thus allowing presupposition failure in one conjunct while still considering the conjunction as a whole free from presupposition failure. This makes no sense in view of and as a discourse incrementer. Kleene's calculus is more suitable for vagueness phenomena, with F2 as an umbrella value for all intermediate values between T and F (Seuren et al., 2001). In Seuren's presuppositional propositional calculus TPC2 (Seuren, 1985, 2001: 333–383; Seuren et al., 2001) the operators ∧ and ∨ select, respectively, the highest and lowest of the component values (F2 > F1 > T), as shown in Figure 3. Classical negation (¬),

Figure 2 Trivalent propositional calculus (TPC1).


Figure 3 Trivalent propositional calculus (TPC2).

added for good measure, is the union of ∼ and ≈, but is taken not to occur in natural language, which has only ∼ and ≈. In TPC2, F2 for either conjunct yields F2 for 'A ∧ B', as required. TPC2 is likewise equivalent with standard bivalent logic under the operators ¬, ∧, and ∨ (Weijters, 1985). Thus, closed under (¬, ∧, ∨), standard bivalent logic is independent of the number of truth values employed, though any value 'false' beyond F1 will be vacuous. Moreover, in both generalizations with n truth values (n ≥ 2), there is, for any value i ≥ 2, a specific negation Ni turning i into T, values lower than i into F1, and leaving higher values unaffected. Thus, in TPC2, NF1 is ∼ and NF2 is ≈. Classical bivalent ¬ is the union of all specific negations. Consequently, in the standard system, ¬ is both the one specific negation allowed for and the union of all specific negations admitted. Standard logic is thus the most economical variety possible of a generalized calculus of either type.
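Because both calculi are fully determined by the definitions above, their connective tables, as tabulated in Figures 2 and 3, can be generated mechanically. Here is a minimal sketch (Python; the function and value names are illustrative assumptions, not the source's notation):

# Sketch: the TPC1 (Kleene) and TPC2 connectives as defined in the text.
# Values are ordered T < F1 < F2 (the text's F2 > F1 > T).
T, F1, F2 = 0, 1, 2
NAME = {T: 'T', F1: 'F1', F2: 'F2'}

def neg_min(v):
    # Minimal negation: F1 -> T, T -> F1, F2 unaffected.
    return {T: F1, F1: T, F2: F2}[v]

def neg_rad(v):
    # Radical negation: F2 -> T, both T and F1 -> F1.
    return {T: F1, F1: F1, F2: T}[v]

def and_tpc1(a, b):
    # Kleene: T only if both T; F1 when either conjunct is F1; F2 otherwise.
    if a == T and b == T:
        return T
    if a == F1 or b == F1:
        return F1
    return F2

def or_tpc1(a, b):
    # Kleene: T when either disjunct is T; F1 only if both are F1; F2 otherwise.
    if a == T or b == T:
        return T
    if a == F1 and b == F1:
        return F1
    return F2

def and_tpc2(a, b):
    # TPC2: conjunction selects the highest component value.
    return max(a, b)

def or_tpc2(a, b):
    # TPC2: disjunction selects the lowest component value.
    return min(a, b)

# The difference criticized in the text: presupposition failure in one
# conjunct is absorbed by TPC1 but propagated by TPC2.
assert and_tpc1(F2, F1) == F1
assert and_tpc2(F2, F1) == F2

for op, label in [(and_tpc1, 'TPC1 A∧B'), (and_tpc2, 'TPC2 A∧B')]:
    print(label)
    for a in (T, F1, F2):
        print('  '.join(NAME[op(a, b)] for b in (T, F1, F2)))

The two assertions encode the point made above: TPC1 lets a minimally false conjunct absorb a radically false one, whereas TPC2 propagates presupposition failure to the conjunction as a whole.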

The Discourse Approach

Presupposition is not defined, only restricted, by its logical properties:

(25) If B ≫ A, then B ⊨ A and ∼B ⊨ A, and ∼A/≈A ⊨ ≈B.

(25) thus specifies necessary, but not sufficient, conditions for presupposition. Were one to adopt a purely logical definition, various paradoxical consequences would follow. For example, any arbitrary sentence would presuppose any necessary truth, which would make the notion of presupposition empirically vacuous. Attempts have been made (Gazdar, 1979; Heim, 1982; Seuren, 1985, 2000) at viewing a presupposition A of a sentence BA as restricting the interpretable use of B to contexts that admit of, or already contain, the information carried by A. Such an approach creates room for an account of the discourse-correcting ‘echo’ function of presupposition-canceling (radical) NOT. Horn (1985, 1989) correctly calls NOT metalinguistic, in that it says something about the sentence in its scope – though his generalization to other metalinguistic uses of negation is less certain. Neither TPC2 nor TPC1 can account for this metalinguistic

property. This means that the argument of NOT is not a sentence but a quoted sentence. NOT('BA') says about the sentence BA that it cannot be sequentially incremented in a discourse refusing A. Sequential incrementation to a discourse D restricts D to a progressively narrower section of the universe of all possible situations U, making the increment informative. Incrementation of A, or i(A), to D restricts D to the intersection of the set of situations in which D is true and the set of situations where A is true. The set of situations in which a sentence or set of sentences X is true is the valuation space of X, or /X/. For D incremented with A we write 'D + A'. D + A is the conjunction of D and A, where D is the conjunction of all incremented sentences since the initial U. The sequentiality condition requires:

(a) for any A, /D/ is larger than /D + A/ (informativity: remember that D is restricted by A);
(b) if B ≫ A then i(A) must precede i(B) (not so when B ⊨c A).
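A minimal sketch of these definitions, under invented assumptions (the eight-situation universe, the valuation spaces, and all names below are made up for the example, not taken from the source), models incrementation as set intersection and condition (a) as a strict-subset check:

# Sketch: incrementation D + A as intersection of valuation spaces.
U = set(range(8))  # a toy universe of eight possible situations

# Hypothetical valuation spaces /X/: the situations in which X is true.
# B presupposes A, so /B/ is a subset of /A/.
VS = {
    'A': {0, 1, 2, 3},  # e.g., 'Jack was once married'
    'B': {0, 1},        # e.g., 'Jack is divorced' (B >> A)
}

def increment(D, X):
    # Return D + X, checking the informativity condition (a).
    new = D & VS[X]
    assert new < D, 'uninformative increment: /D + X/ must be smaller than /D/'
    return new

D = set(U)             # initial discourse domain: all of U
D = increment(D, 'A')  # condition (b): i(A) precedes i(B) because B >> A
D = increment(D, 'B')
print(sorted(D))       # [0, 1]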

If A has not already been incremented prior to i(BA) it is cognitively ‘slipped in,’ a process called accommodation or post hoc suppletion. A text requiring accommodation is not fully sequential, but still fully coherent. On the assumption that D, as so far developed, is true, any subsequent sentence B must be valued T or F1, because F2 for B implies that some presupposition of B, and hence D as a whole, is not true. This assumption is made possible by the Principle of Presumed Truth (PPT), which says that it must be possible for any D to be true. The assumption that D is actually true blocks the processing of a new sentence that would be valued F2 in D. For example, let D contain i(A). Now BA is blocked, because A is valued F1 (assuming that D is true). But NOT(‘B’) can be incremented and is true under PPT, as it says about B that it cannot be incremented. Therefore, /D/ must contain situations with sentences as objects. Cognitively speaking, this is perfectly plausible, because speakers are aware of the fact that they utter words and sentences. That awareness enables them to refer back to words and sentences just uttered or expected to be uttered. Words and sentences as objects are a necessary corollary of any speech utterance. This corollary underlies the free mixing of object language and metalanguage in natural language. The prohibition issued by logicians against such mixing has no ground in natural language (Seuren, 2001: 125–130). Natural language negation is, in a sense, ambiguous (depending on syntactic conditions) between


presupposition-preserving (minimal) not and presupposition-canceling (radical) NOT. Many find this unacceptable, because ambiguities are normally language specific, whereas this ambiguity would appear to be universal. Yet the obvious question of what would be the overarching single meaning of the negation in all its uses has not, so far, been answered. Similar problems occur with other logical operators, especially with and, or, and if, as the following examples show:

(26a) Do as I say and you will be a rich man.
(26b) Don't come nearer, or you'll be a dead man.
(26c) That's awful, or should I say 'dreadful'?
(26d) Let's go home, or do you have a better idea?
(26e) If you're tired, I have a spare bed.

In the case of not and the other logical operators, speech act factors as well as factors of metalinguistic use play an important role. Unfortunately, the grammar and semantics of both speech acts and metalinguistic use are still largely unexplored. Horn (1985) pointed out that English not is often used metalinguistically, as in:

(27a) Not Lizzy, if you please, but the Queen is wearing a funny hat.
(27b) She wasn't happy, she was ecstatic!

And he classifies radical NOT with the other metalinguistic cases. However, as pointed out in Seuren (2001: 345–347), NOT, though metalinguistic, differs from the other cases in that it can only occur in construction with the finite verb (the 'canonical position'), whereas the other metalinguistic negations can occupy any position normal not can occur in. (28a) is coherent, with a canonically placed NOT, but (28b,c), likewise with NOT, are incoherent, as NOT is in a noncanonical position:

(28a) He did NOT only lose $500. He only lost $20.
(28b) !!NOT only did he lose $500. He only lost $20.
(28c) !!He NOT only lost $500. He only lost $20.

The question of the overall meaning description of the logical operators, in terms of which their strictly logical meaning would find a place, defines a research project of considerable magnitude – a project that has so far not been undertaken in a coordinated way.

The Structural Source of Presuppositions

The source of at least three of the four types of presupposition distinguished earlier lies in the satisfaction conditions of the main predicate of the carrier sentence. The satisfaction conditions of an n-ary predicate Pn are the conditions that must be satisfied by any n-tuple of objects for Pn to yield truth. Thus, for the unary predicate white the conditions must

specify when any object can truthfully be called 'white'. For the binary predicate wash they must specify when it can truthfully be said of any pair of objects that 'i washes j'. A distinction is made between two kinds of lexical conditions, preconditions and update conditions. When a precondition is not fulfilled, the sentence is radically false; failure of an update condition yields minimal falsity. Fulfillment of all conditions gives truth. The satisfaction conditions of a predicate Pn are specified according to the schema ([[Pn]] is the extension of Pn):

(29) [[Pn]] = { ⟨i₁, . . ., iₙ⟩ : . . . (preconditions) . . . | . . . (update conditions) . . . }

or: 'the extension of Pn is the set of all n-tuples of objects such that . . . (preconditions) . . . and . . . (update conditions) . . .'.

The satisfaction conditions of the predicate bald, for example, may be specified as follows (without claiming lexicographical adequacy):

(30) [[Bald]] = {i : i is normally covered, in prototypical places, with hair, fur, or pile; or i is a tire and normally covered with tread | the normal covering is absent}

This caters for categorial presuppositions. Factive presuppositions are derived by the precondition that the factive clause must be true. Existential presuppositions are derivable from the precondition that a specific term t of a predicate Pn refers to an object existing in the real world. Pn is then extensional with respect to t. Talk about, for example, is extensional with respect to its subject term, but not with respect to its object term, because one can talk about things that do not exist. The satisfaction conditions of talk about will thus be as in (31), where the asterisk on j indicates that talk about is nonextensional with respect to its object term:

(31) [[talk about]] = { ⟨i, j*⟩ : . . . (preconditions) . . . | . . . (update conditions) . . . }
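To make the three-way valuation concrete, here is a minimal sketch (Python; the evaluate helper and the radically simplified bald conditions are illustrative assumptions, not the source's lexicon): a failed precondition yields F2, a failed update condition yields F1, and full satisfaction yields T.

# Sketch: evaluating a predicate from preconditions and update conditions.
def evaluate(preconditions, update_conditions, arg):
    if not all(p(arg) for p in preconditions):
        return 'F2'  # radical falsity: a presupposition fails
    if not all(u(arg) for u in update_conditions):
        return 'F1'  # minimal falsity: presuppositions hold, assertion fails
    return 'T'

# (30), radically simplified: precondition 'i is normally covered with
# hair (etc.)'; update condition 'the normal covering is absent'.
bald_pre = [lambda i: i.get('normally_covered', False)]
bald_upd = [lambda i: not i.get('covering_present', True)]

man = {'normally_covered': True, 'covering_present': False}
number = {}  # a number is not normally covered with anything

print(evaluate(bald_pre, bald_upd, man))     # T
print(evaluate(bald_pre, bald_upd, number))  # F2: categorial presupposition fails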

The predicate exist lacks any preconditions and is to be specified as nonextensional with respect to its subject term:

(32) [[exist]] = { i* | i is an object in the actual world}

A definite subject of the verb exist must be represented somewhere in D, normally in some intensional subdomain, e.g., the subdomain of things that Jack keeps talking about, as in:

(33) The man that Jack keeps talking about really exists.


The incremental effect of (33) is that the representation of the thing that is said to exist is moved up to the truth domain of D. This analysis requires the assumption of virtual objects (see Virtual Objects). The remainder category of presuppositions, induced by words like only or still, or by contrastive accent or (pseudo)cleft constructions, looks as if it cannot be thus derived. The choice here is either to derive them by ad hoc rules or to adopt a syntactic analysis in terms of which these words and accents figure as (abstract) predicates at the level of semantic representation taken as input to the incrementation procedure.

See also: Cognitive Semantics; Connectives in Text; Connotation; Discourse Domain; Discourse Semantics; Factivity; Formal Semantics; Inference: Abduction, Induction, Deduction; Lexical Conditions; Multivalued Logics; Negation; Nonmonotonic Inference; Polarity Items; Pragmatic Presupposition; Pragmatics and Semantics; Projection Problem; Virtual Objects.

Bibliography

Blau U (1978). Die dreiwertige Logik der Sprache: Ihre Syntax, Semantik und Anwendung in der Sprachanalyse. Berlin & New York: De Gruyter.
Boër S & Lycan W (1976). The myth of semantic presupposition. Bloomington: Indiana University Linguistics Club.
Frege G (1892). ‘Über Sinn und Bedeutung.’ Zeitschrift für Philosophie und philosophische Kritik 100, 25–50.
Gazdar G (1979). Pragmatics: implicature, presupposition, and logical form. New York: Academic Press.
Geach P T (1972). Logic matters. Oxford: Blackwell.
Groenendijk J & Stokhof M (1991). ‘Dynamic predicate logic.’ Linguistics and Philosophy 14, 39–100.
Heim I (1982). ‘The semantics of definite and indefinite noun phrases.’ Ph.D. diss., University of Massachusetts at Amherst.
Horn L R (1985). ‘Metalinguistic negation and pragmatic ambiguity.’ Language 61, 121–174.
Horn L R (1989). A natural history of negation. Chicago: University of Chicago Press.
Kamp H (1981). ‘A theory of truth and semantic representation.’ In Groenendijk J, Janssen T & Stokhof M (eds.) Formal methods in the study of language 1. Amsterdam: Mathematisch Centrum. 277–322.
Kiparsky P & Kiparsky C (1971). ‘Fact.’ In Steinberg D & Jakobovits L (eds.) Semantics: an interdisciplinary reader in philosophy, linguistics, and psychology. Cambridge: Cambridge University Press. 345–369.
Kleene S (1938). ‘On notation for ordinal numbers.’ Journal of Symbolic Logic 3, 150–155.
Kneale W & Kneale M (1962). The development of logic. Oxford: Oxford University Press.
Russell B (1905). ‘On denoting.’ Mind 14, 479–493.
Seuren P A M (1985). Discourse semantics. Oxford: Blackwell.
Seuren P A M (2000). ‘Presupposition, negation and trivalence.’ Journal of Linguistics 36, 261–297.
Seuren P A M (2001). A view of language. Oxford: Oxford University Press.
Seuren P A M, Capretta V & Geuvers H (2001). ‘The logic and mathematics of occasion sentences.’ Linguistics & Philosophy 24, 531–595.
Strawson P F (1950). ‘On referring.’ Mind 59, 320–344.
Strawson P F (1952). Introduction to logical theory. London: Methuen.
Weijters A (1985). ‘Presuppositional propositional calculi.’ Appendix to Seuren (1985).
Wilson D (1975). Presuppositions and non-truth-conditional semantics. London & New York: Academic Press.

Projection Problem for Presupposition

P A M Seuren, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

© 2006 Elsevier Ltd. All rights reserved.

The projection problem was considered the central problem in presupposition theory during the early 1970s. Later it was realized that it is automatically accounted for in terms of a wider theory of presupposition and discourse semantics. The projection problem is posed by the behavior of presuppositions of embedded clauses. The property of presuppositions to be sometimes preserved through embeddings, albeit often in a weakened form, is called projection. The projection problem consists of formulating the conditions under which the presuppositions of an embedded clause (a) are kept as presuppositions of the superordinate structure, or (b) remain as an invited inference that can be overruled by context, or (c) are canceled.

Let sentence B presuppose sentence A (B ≫ A); then B also semantically entails A (B ⊨ A). Now, when B_A (B presupposing A) is embedded in a larger sentence C, then either C ⊨ B_A or C ⊭ B_A. When C ⊨ B_A, then C ≫ A.
That is, when B_A is entailed (or presupposed) by its embedding clause C, then C inherits the presuppositions of B_A. For example, (1a) ≫ (1b) and (1c) ⊨ (1a); therefore, (1c) ≫ (1b):

(1a) Susan got her money back.
(1b) Susan had lost her money.
(1c) Susan managed to get her money back.

The operator and is the only exception. When a conjunction is of the form ‘A and B_A’ (the first conjunct expresses the presupposition of the second), B_A is entailed, yet A is not presupposed. Thus, although (1a) ≫ (1b) and (2) ⊨ (1a), (2) does not presuppose (1b), though it is still so that (2) ⊨ (1b):

(2) Susan had lost her money and she got it back.

Entailment is thus ‘stronger’ than presupposition in the sense that entailments are properties of sets or series of sentences, whereas presuppositions are properties of single sentences. When C ⊭ B_A (and hence C does not presuppose B_A), then, in all cases but one, C ⊭ A (and hence C does not presuppose A). The one exception is negation: (3a) ≫ (3b) and (3c) ≫ (3b), even though, obviously, (3c) ⊭ (3a):

(3a) Only Jim laughed.
(3b) Jim laughed.
(3c) Not only Jim laughed.

This answers question (a), except for the behavior of and and not. When entailment is lost, A often remains an invited inference of C (C > A). When C > A, the suggestion is that A is true if C is true, but the inference can be overruled by contextual factors. Moreover, when C > A, A followed by C makes for an orderly bit of discourse, just as when C ≫ A. Presuppositions thus differ from other entailments, which are never kept as invited inferences across nonentailing embeddings. Thus, (4a) ≫ (4b) and (4c) ⊭ (4a), but (4c) > (4b). This inference is overruled in (4d), which says that Harry wrongly believes that he has a son, and that this (nonexistent) son lives in Kentucky:

(4a) Harry’s son lives in Kentucky.
(4b) Harry has a son.
(4c) Harry believes that his son lives in Kentucky.
(4d) Harry has no son, but he believes that he has one and that this son lives in Kentucky.

In the case of if, the antecedent clause may express the presupposition of the consequent clause. In such cases, the presupposition is canceled, as in (5a). Similarly for or: one disjunct may cancel the presupposition of the other, as in (5b):

(5a) If Harry has a son, his son lives in Kentucky.
(5b) Either Harry has no son, or his son lives in Kentucky.

Not is again problematic in that it normally lets presuppositions project as invited inferences, but occasionally as full presuppositions, as in (3c), and occasionally also cancels them altogether, as in (6), which contains the positive polarity item hardly (positive polarity items only allow for a higher not if some presupposition is canceled):

(6) Bob does NOT hardly feed his dog: he doesn’t even have one.

This gives an idea of what is involved in questions (b) and (c). None of the proposals made in the literature has been able to offer a principled account of the projection properties of presupposition. The first proposal was made by Langendoen and Savin (1971), who proposed that presuppositions are always maintained as such through embeddings, which is observationally inadequate.

A more sophisticated theory was proposed by Karttunen in various publications. Observing that projection properties depend on the embedding operator, he distinguished between plugs, holes, and filters. Plugs are operators that always cancel presuppositions and invited inferences. Holes are operators that always let them through, either as presuppositions or as inferences (e.g., believe). Filters sometimes let them through and sometimes do not (e.g., not, if, or). He did not succeed, however, in formulating adequate conditions for the three classes, in particular the filters. It is now generally agreed that, though Karttunen’s work focused attention on these phenomena, it was too taxonomic and failed to provide a satisfactory solution.
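
To see how such a taxonomy computes projections, here is a rough sketch (my own construction, not Karttunen's formalization: the operator classification and the sentence representation are invented for the example, and of the three filters only if is implemented):

```python
# Presuppositions are projected bottom-up through embedding operators:
# plugs cancel them, holes pass them on, filters pass them on conditionally.
PLUGS = {"say"}     # e.g., verbs of saying; the classification is
                    # illustrative; unlisted operators (e.g., 'believe')
                    # are treated as holes

def contents(expr):
    """All atomic contents of expr: ('atom', content, presups) or (op, subs)."""
    if expr[0] == "atom":
        return {expr[1]}
    return set().union(*(contents(s) for s in expr[1]))

def presups(expr):
    """The presuppositions that expr projects."""
    if expr[0] == "atom":
        return set(expr[2])
    op, subs = expr
    if op in PLUGS:
        return set()
    if op == "if":
        ante, cons = subs
        # filter: a presupposition of the consequent that the antecedent
        # itself expresses is filtered out, as in (5a)
        return presups(ante) | (presups(cons) - contents(ante))
    return set().union(*(presups(s) for s in subs))   # holes

# (5a) 'If Harry has a son, his son lives in Kentucky'
ante = ("atom", "harry-has-son", [])
cons = ("atom", "son-lives-in-kentucky", ["harry-has-son"])
print(presups(("if", [ante, cons])))     # -> set(): canceled
print(presups(("believe", [cons])))      # -> {'harry-has-son'}: let through
```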


The second main approach is Gazdar (1979). Here, presuppositions are brought together with entailments and implicatures into one system of hierarchically ordered cancellation conditions. The notion of entailment is classical, and so is the logic administering it. In principle, all implicatures and presuppositions are deemed to ‘survive’ through embeddings, unless there is a conflict, in which case selective canceling (‘filtering’) takes place. Implicatures and presuppositions of the smallest possible sentential structures are spelled, respectively, ‘im-plicatures’ and ‘pre-suppositions.’ Only when they have made it to the surface, through all embeddings, is the spelling ‘implicature’ and ‘presupposition’ (without hyphen) used.

Im-plicatures are scalar or clausal. Scalar im-plicatures are of the kind familiar in pragmatics: an expression e occupying a position on a semantic scale s generates the scalar im-plicature ‘not stronger than e’. Thus, Some men died has the scalar im-plicature K(not all men died), read as ‘for all the speaker knows, not all men died.’ Clausal im-plicatures are generated by sentences A containing as a subpart some clause B such that A entails neither B nor ¬B. The im-plicature is then of the form P(B) ∧ P(¬B), with ‘P’ standing for the epistemic possibility operator (Gazdar, 1979: 58–59). Thus, Nob thinks that Bob is brave clausally im-plicates P(Bob is brave) ∧ P(Bob is not brave). A pre-supposition is an im-plicature that is also semantically entailed. A presupposition, as a property of a possibly complex sentence, may or may not be entailed.

The filtering mechanism works as follows. Given a sentence A, an inventory is made of its eventual entailments (E), of its accumulated im-plicatures (I), and of its accumulated pre-suppositions (P). If E contains contrary entailments, A is uninterpretable. If some e ∈ E is incompatible with any i ∈ I or p ∈ P, i or p is canceled and A remains interpretable in all contexts compatible with E. If some i ∈ I is incompatible with some p ∈ P, p is canceled and i remains. Mutually incompatible im-plicatures or pre-suppositions cancel each other. Entailments thus take precedence over im-plicatures, and im-plicatures over pre-suppositions. For example, ‘if A then B’ generates the im-plicature P(A) ∧ P(not-A) ∧ P(B) ∧ P(not-B), all four conjuncts being admissible knowledge states. If B pre-supposes A, the pre-supposition K(A) is canceled by the incompatible im-plicature P(not-A).

Gazdar was among the first to stress the relevance of presupposition for an incremental theory of discourse semantics. Given a context (i.e., a set of propositions) C, a newly presented sentence A is incremented to C, thus creating a new context C′ for a following sentence. Eventual presuppositions (including invited inferences) are incremented to C prior to their carrier sentences (Gazdar, 1979: 132). Incremented propositions are considered linked by and. When C ∧ A is inconsistent, A is uninterpretable in C. When C is incompatible with some i ∈ I or p ∈ P (A ⊭ p), then i or p is filtered out and A loses that implicature or pre-supposition in C. This incremental aspect of Gazdar’s theory is an extension of the filtering mechanism to any context C, as the same results are obtained by conjoining A with (the sentences expressing) the propositions in C.
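
The cancellation hierarchy can be rendered as a toy procedure. The sketch below is my own simplification, not Gazdar's formal system: propositions are plain strings, and the incompatibility test covers only the two clashes needed here (x vs. not-x, and K(x) vs. P(not-x)):

```python
def neg(p):
    return p[4:] if p.startswith("not-") else "not-" + p

def incompatible(a, b):
    if a == neg(b):                      # x clashes with not-x
        return True
    for x, y in ((a, b), (b, a)):        # K(x) clashes with P(not-x)
        if x.startswith("K(") and y.startswith("P(") and x[2:-1] == neg(y[2:-1]):
            return True
    return False

def filter_inventory(E, I, P):
    """Entailments cancel incompatible im-plicatures and pre-suppositions;
    surviving im-plicatures cancel incompatible pre-suppositions."""
    I = [i for i in I if not any(incompatible(i, e) for e in E)]
    P = [p for p in P if not any(incompatible(p, e) for e in E)
                      and not any(incompatible(p, i) for i in I)]
    return I, P

# (5a) 'If Harry has a son, his son lives in Kentucky': the consequent
# pre-supposes K(A); the conditional im-plicates P(A), P(not-A), P(B),
# P(not-B). P(not-A) cancels K(A), so the pre-supposition does not survive.
A, B = "harry-has-son", "son-lives-in-kentucky"
I = [f"P({A})", f"P(not-{A})", f"P({B})", f"P(not-{B})"]
print(filter_inventory(E=[], I=I, P=[f"K({A})"]))   # -> (I unchanged, [])
```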

A more unifying view is obtained when the principles of Maximal Unity (MaU) and Minimal Size (MiS) are assumed for Ds. MaU entails a maximal leveling of information through the subdomains of a discourse domain D. MiS entails that what has been incremented must not be doubted again. MaU ensures that, besides what is explicitly incremented in a subdomain Dn, Dn also contains all information previously stored in higher domains, including the commitment domain D0, provided Dn remains consistent. This downward percolation allows the use of discourse addresses and increments from higher domains in lower domains. The counterpart of downward percolation is the upward percolation of presuppositions from lower to higher domains, unless stopped by inconsistency with either explicitly stored information or available background (scenario) knowledge. Thus, in (4c) MaU generates the invited inferences that Harry has a son and that there is a place called ‘Kentucky.’ The invited inference that Harry has a son is blocked in (4d) because the higher domain says that he has no son.

Some subdomains are subject to the requirement that they be themselves incrementable to their superordinate domain. The subdomain created by epistemic may, for example, requires that what is said to be possible is a proper potential increment to D0, and must thus be consistent with D0, but not already contained in it. Likewise for the discourse-splitter or and the hedger if. ‘A or B’ is incremented as two alternative subdomains ‘A’ and ‘not-A and B.’ Both alternatives must be incrementable to D0. This condition automatically blocks the projection of (4b), ‘Harry has a son,’ from the disjunction (5b): if (4b) were added to D0, the first alternative would not be incrementable. Analogously for conditionals, as in (5a): there is no invited inference that Harry has a son because, if there were, it could not be doubted again by if in virtue of MiS.

And is primarily a discourse-incrementer; its being a truth function is derived from that (which explains why and does not like to stand under negation). A ∧ B_A does not presuppose (or invite the inference) A, because if it did, A would have to be incremented twice, which would violate MiS. Not(B_A) normally preserves the presupposition A as an invited inference, because B_A under not must have the right papers for the current D. Yet not is special. First, it is not allowed over positive polarity items and is required by negative polarity items. Then, it preserves the full entailing presupposition in all cases where it occurs in ‘noncanonical’ position (for English, not in construction with the finite verb, as in (3c) above) and also when it stands over clefts or pseudoclefts, and when a factive that-clause stands in front position, as in (7), which fully presupposes that Janet’s brother was arrested:

(7) That her brother was arrested did not surprise Janet.
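
The blocking of upward percolation seen in (4c)/(4d) can be made concrete with a minimal sketch (my own; reducing a discourse domain to a bare set of accepted propositions is a simplifying assumption):

```python
def neg(p):
    return p[4:] if p.startswith("not-") else "not-" + p

def percolate_up(higher_domain, subdomain_presups):
    """Presuppositions of a subdomain that survive as invited inferences
    of the higher domain (percolation is blocked by inconsistency)."""
    return [p for p in subdomain_presups if neg(p) not in higher_domain]

# (4c) 'Harry believes that his son lives in Kentucky': the belief
# subdomain presupposes that Harry has a son; nothing blocks it.
print(percolate_up(set(), ["harry-has-son"]))                  # projects
# (4d) 'Harry has no son, but he believes ...': the higher domain holds
# the negation, so the invited inference is blocked.
print(percolate_up({"not-harry-has-son"}, ["harry-has-son"]))  # -> []
```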

There is also a presupposition-canceling metalinguistic NOT, which says of its argument sentence B_A that it does not fit into D because A clashes with D, as in (6). This NOT requires heavy accent, must stand in canonical position, and allows for positive polarity items, like hardly. How not and NOT relate to each other is still a matter of debate.


Gazdar’s analysis can thus be perfected and be seen to follow from a postulated mechanism of discourse incrementation.

See also: Discourse Domain; Discourse Semantics; Formal Semantics; Inference: Abduction, Induction, Deduction; Multivalued Logics; Negation; Nonmonotonic Inference; Polarity Items; Presupposition.

Bibliography

Gazdar G (1979). Pragmatics: implicature, presupposition, and logical form. New York: Academic Press.
Karttunen L (1971). ‘Some observations on factivity.’ Papers in Linguistics 4, 55–69.
Karttunen L (1973). ‘Presuppositions of compound sentences.’ Linguistic Inquiry 4, 169–193.
Karttunen L (1974). ‘Presupposition and linguistic context.’ Theoretical Linguistics 1, 181–194.
Karttunen L & Peters S (1979). ‘Conventional implicature.’ In Oh C-K & Dinneen D A (eds.) Presupposition. Syntax and Semantics 11. New York: Academic Press. 1–56.
Langendoen D T & Savin H (1971). ‘The projection problem for presuppositions.’ In Fillmore C & Langendoen D T (eds.) Studies in linguistic semantics. New York: Holt. 55–60.

Pronouns

A Saxena, Uppsala University, Uppsala, Sweden

© 2006 Elsevier Ltd. All rights reserved.

According to the traditional definition (from the Web version of Webster’s 1913 dictionary), a pronoun is “[a] word used instead of a noun or name, to avoid the repetition of it. The personal pronouns in English are I, thou or you, he, she, it, we, ye, and they.”

Pronouns are a special case of the more general linguistic category ‘proforms,’ i.e., (semantically empty or nearly so) function words that replace (lexical content-bearing) syntactic units of a particular category. In the case of pronouns, the units in question are NPs or modifiers of the head noun in an NP, i.e., adjective phrases, quantifiers, or determiners. Even the most prototypical personal pronouns can be said to fit this definition if we say that the replaced NPs are the names of the speaker(s) or addressee(s), etc. (as implied in the Webster definition), or possibly expressions like ‘the speaker(s),’ etc. Since Latin grammar – whence the term ‘pronoun’ (pronomen) originates – categorizes nouns and adjectives as special cases of ‘names’ (nomina), it is not surprising that ‘pronouns’ and ‘promodifiers,’ too, are considered to form one part of speech.

Hence, pronouns form a quite heterogeneous class of words. The most prototypical members of this class are the personal pronouns, among which are normally counted also possessive (corresponding to English my ~ mine, her ~ hers, etc.), reflexive (e.g., themselves), and reciprocal (e.g., each other) pronouns. Depending on the language and grammatical tradition, at least the following other kinds of pronouns are also commonly recognized:

• demonstrative pronouns, which identify or specify a noun. The number of spatial distinctions made varies among languages, but at least ‘here’ (near speaker) and ‘there’ (away from speaker) is commonly found, yielding the English opposition this ~ that (see Demonstratives).
• relative pronouns, which form relative clauses, i.e., clauses serving as modifiers in the NP.
• interrogative pronouns (‘wh-items’), used to ask questions (see Interrogatives).
• indefinite pronouns, which form a very heterogeneous group, prototypically encompassing items expressing indefinite reference (such as English something, anything, nothing), but commonly also taken to include quantifiers such as many, every, generic pronouns such as German man ‘one, you (impersonal),’ and relational expressions such as other, same (see Indefinite Pronouns).

Sometimes the corresponding proadverbs (or ‘pronominal adverbs’) are grouped together with pronouns in grammars, i.e., interrogative pronouns such as English who and what are considered as being of a kind with interrogative adverbs like where, when, and how.

Pronouns are function words, which means that they constitute a closed word class with a small set of members. In many languages, the text frequency of pronouns is high, and consistent with this frequency, pronouns tend to have a fairly simple phonological structure, normally one or two syllables with simple consonants or consonant clusters, although longer and more complicated pronouns do exist, as do word-formation patterns restricted to pronouns (and pronominal adverbs), such as Swedish/English någonting/something ~ anything ~ ingenting/nothing, or multi-word pronouns, like Swedish den här ‘this’ ~ den där ‘that’; Swedish/Polish vad som helst/byle co ‘anything (whatsoever)’ ~ vem som helst/byle kto ‘anybody (whosoever).’


The prototypical function of pronouns is a ‘signpost-like’ one: a pronoun serves to identify referents, not by describing or naming them, but by ‘pointing them out,’ as it were. There are two kinds of identification often distinguished in the literature, ‘deictic’ and ‘phoric.’ A deictic pronoun (or a deictic usage of a pronoun) refers directly to something in the world: if I say He did it! pointing at a man standing next to me, I am using the personal pronoun he deictically. A phoric (usage of a) pronoun, on the other hand, provides an index to some part of the discourse in which it occurs; it ‘corefers’ rather than refers (hence this indexing mechanism is often called ‘coreference’ in the literature). The phoric uses of pronouns are demonstrative and personal pronouns used to refer to complete predications (as in He admitted that he did it. This means that I am innocent.), and reference to lexical NPs elsewhere in the discourse: reflexive and reciprocal pronouns used intrasententially, and, most prototypically, ‘anaphoric’ (the pronoun comes after the lexical NP, called the antecedent) and ‘cataphoric’ (the pronoun anticipates the lexical NP) reference.

Whereas a full NP will pick out its referent by describing it lexically (e.g., the house on the hill), a phoric pronoun will provide as much structural information as is needed and allowed by the grammar of the language, so that the listener can pick out the intended NP in the discourse. As a consequence, phoric pronouns will often be capable of richer inflectional expression than nouns. That this capability should be so seems quite natural in the case of agreement features, such as gender/noun class, number, and occasionally posture and spatial position and movement. Thus, in the Swedish sentence Tallriken föll i golvet och den gick sönder ‘The plate fell to the floor and it broke,’ the phoric pronoun den ‘it’ is nonneuter singular, and given only this sentence as context, it can only refer to the likewise nonneuter singular NP tallriken ‘the plate,’ but not to the neuter singular NP golvet ‘the floor,’ which would have required the use of the phoric pronoun det ‘it (neuter sg)’ instead. We should note at this point that lexical NPs, too, are often used anaphorically; especially common is the anaphoric use of a hyperonym (more general term) of the antecedent NP, a mechanism that provides a natural starting point for the grammaticalization of general nouns into pronouns (see below) (see Anaphora, Cataphora, Exophora, Logophoricity).

However, pronouns tend to express other grammatical features as well as express them to a higher degree than nouns. Thus, in the continental Scandinavian languages Danish, Norwegian-Bokmål (Norwegian, Bokmål), Norwegian-Nynorsk (Norwegian, Nynorsk), and Swedish, in English, in French, and in Macedonian, and many other languages, personal pronouns are inflected for case (subject, direct object, dative), a category completely lacking in nouns in those languages. This tendency is less easily explained by the need for discourse-internal reference, but may be a natural stage on the way toward the grammaticalization of phoric pronouns as agreement affixes on verbs. It has been noted, for example, that in some languages, such as Tuscarora (an Iroquoian language spoken in North America), sometimes called pronominal argument languages, lexical NPs – neither expressing case nor showing syntactic function by word order – behave like loose adjuncts to clauses, rather than full syntactic arguments of the verb, a role usurped, as it were, by agreement morphemes (prefixes in Tuscarora), which express case and coindex any lexical NPs by agreement features.

In some languages, subject pronouns express the category of ‘switch reference,’ where the form of the subject pronoun shows if the subject of the current (dependent) clause is the same as in the preceding (main) clause (SS) or a different one (DS), as in the following two examples from Diyari (Dieri) (Australia):

(1) nhulu    nganthi   pardaka-rna  warrayi,  thanali   thayi-lha
    he-ERG   meat-ABS  bring-PART   AUX       they-ERG  eat-SS
    ‘He brought the meat for them (him and others) to eat’

(2) nhulu    nganthi   pardaka-rna  warrayi,  thanali   thayi-rnanthu
    he-ERG   meat-ABS  bring-PART   AUX       they-ERG  eat-DS
    ‘He brought the meat for them (others) to eat’
    (From Wiesemann, 1986: 458)

Further, pronouns – both deictic and phoric – often express such features as distance (from speaker and/or addressee), visibility, and politeness/social distance (between speaker and addressee, and between speaker and persons mentioned). Compared to nouns and adjectives, pronouns tend to have more irregular and often highly characteristic inflectional patterns.

How many persons personal pronoun systems express varies between languages. A very common system has three persons (first, second, third) in two numbers (singular and plural), for a total of six person/number slots in the paradigm. Often, a dual number is found in addition to singular and plural, increasing the number of person/number combinations to nine.


Some personal pronoun systems have trial number, and a quadral number exists at least in Sursurunga, an Austronesian language spoken in Papua New Guinea. In nonsingular nonthird-person forms, the distinction between inclusive and exclusive forms is often encountered, as in Huallaga Quechua (Quechua, Huánuco, Huallaga) noqakuna ‘we, excluding you, the addressee(s)’ ~ noqanchi: ‘we, including you, the addressee(s).’ One of the more elaborate person systems described is that of Ghomala’ (Ghomálá’), a Bantu language spoken in Cameroon, evidencing free subject forms of personal pronouns encoding the following persons and combinations of persons:

1sg, 1sg+2sg, 1sg+3sg, 1pl+2sg, 1pl+2pl, 1pl+3sg, 1pl+3pl, 1pl, 2sg, 2sg+3sg, 2pl+3sg, 2pl+3pl, 2pl, 3sg, 3sg+3sg, 3pl, 1sg+2sg+3sg, 1pl+2sg+3sg, 1pl+2pl+3sg, 1pl+2pl+3pl

It is not always easy to formulate necessary and sufficient criteria for distinguishing pronouns from other linguistic categories. Pronouns share many characteristics both with nouns and with agreement clitics and affixes. This feature becomes especially clear in crosslinguistic and typological studies, where the same function that is accomplished with pronouns in one language utilizes some other mechanism in another language. If verbs in some language show rich agreement morphology (encompassing one or more core arguments of the verb), pronouns may be used mostly for emphasis. Where agreement morphology is missing, as in Swedish, pronouns may be required even with argumentless verbs (analogous to expletive it in English): Det blåser ‘It blows (= there is a wind/the wind is blowing)’; cf. the Finnish version of the same utterance, Tuulee ‘Blows,’ with no pronoun at all, only third-person singular agreement morphology on the verb. Similarly, a prepositional phrase containing a pronoun NP in English may instead be expressed with an adposition–possessive affix combination in some other language, e.g., Hungarian neked (nek-ed ‘DATIVE-2SG’) ‘to you,’ cf. [az] embernek (ember-nek ‘man-DATIVE’) ‘to [the] man’ and embered (ember-ed ‘man-2SG’) ‘your man.’ This difference means that the text frequency of pronouns (at least phoric pronouns) will vary widely among languages, depending on structural factors such as the ones just cited.

Further, linguists have noted that core argument pronouns seem to appear with different frequencies, depending on their grammatical function.

One proposed generalization concerns the form of core argument NPs in languages, stating that lexical A(gents) – i.e., typically subjects in transitive clauses – will tend to be avoided, i.e., A will tend to be expressed by a pronoun or nothing (zero anaphora). This generalization means that there should be a tendency for overuse of pronouns (or only agreement morphology) in the A role across languages. For example, examining a traditional narrative in Kinnauri (a Tibeto-Burman language spoken in northern India), comprising a total of 420 clauses containing altogether 293 A arguments, we find that only about one-fifth, or 19%, of the A arguments are in the form of lexical NPs; the remainder are expressed by zero anaphora (73%) or pronouns (8%).

Students of grammaticalization have noted a frequently occurring grammaticalization pathway from nouns over pronouns and clitics to agreement morphology, as well as a partial pathway from demonstrative pronoun over third-person independent pronoun, clitic pronoun, and finally agreement suffix. However, it should be mentioned here that there are other sources for agreement morphology as well, and not only pronouns. The noun component of indefinite or polite/honorific personal pronouns is often still discernible, e.g., -thing, -body in English indefinite pronouns, Swedish/German generic man (