
Truth, Meaning and the Analysis of Natural Language

By

Paolo Casalegno

Edited by Pasquale Frascolla, Diego Marconi and Elisa Paganini

Truth, Meaning and the Analysis of Natural Language, by Paolo Casalegno

Edited by Pasquale Frascolla, Diego Marconi, Elisa Paganini

This book first published 2013

Cambridge Scholars Publishing
12 Back Chapman Street, Newcastle upon Tyne, NE6 2XX, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2013 by Paolo Casalegno

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-4438-4948-0
ISBN (13): 978-1-4438-4948-7

TABLE OF CONTENTS

Preface
Chapter I. Approaches to Quantification
Chapter II. Only: Association With Focus in Event Semantics (with Andrea Bonomi)
Chapter III. Three Remarks on Truth and Reference
Chapter IV. Modal Properties of Truth
Chapter V. Quinean Inscrutability vs. Total Inscrutability
Chapter VI. The Referential and the Logical Component in Fodor's Semantics
Chapter VII. Logical Concepts and Logical Inferences
Chapter VIII. Truth and Truthfulness Attributions
Chapter IX. Reasons to Believe and Assertion
Bibliography
Paolo Casalegno's Publications
About the Editors

PREFACE

Paolo Casalegno (Torino, September 27, 1952 - Milano, April 12, 2009) was one of the best European philosophers working within the analytic tradition. He was well known in the analytic community, where many remember the clarity, efficacy, and originality of his presentations at conferences and workshops, his inexhaustible argumentative verve, and his uncommon ability to single out a philosophical view's essential point and, often, fatal weakness. However, with the exception of a few articles in international journals such as dialectica and the Proceedings of the Aristotelian Society, much of his research work was published in Italian and dispersed in minor journals, collective volumes, and Festschriften for colleagues and friends (Paolo was never particularly picky about where and how his papers were printed). Therein lies the main motivation for the present volume: we hope it will contribute to better and more widespread knowledge of a philosopher we regard as one of the ablest and most profound in European philosophy of the last decades.

Casalegno graduated from the Scuola Normale of Pisa in 1975 and started his academic career as a logician. His much later textbook Teoria degli insiemi (Set theory, with Mauro Mariani, 2004) shows that he never lost interest in the discipline; indeed, logic was the permanent background of his philosophical research, which was mostly in the areas of philosophy of language (including formal semantics), the theory of truth, and, in his later years, epistemology. His work in formal semantics is here represented by two papers, "Approaches to Quantification" and "Only: Association with Focus in Event Semantics". The first is a very clear and thorough account of work in the theory of generalized quantifiers of the late Eighties. It includes several original hints, e.g.
on the possibility of extending Hans Kamp's treatment of "donkey sentences" beyond the case of indefinite anaphoric antecedents and, particularly, on the weakness of attempts at getting rid of Jaakko Hintikka's ramified quantifiers by ultimately reducing the relevant cases to special cases of the collective reading of the quantifier. "Only: Association with Focus in Event Semantics" (written with Andrea Bonomi) exemplifies Casalegno's skill in analyzing natural language with the tools of formal semantics. The problem is the analysis of sentences containing phrases of the form "Only[...]", where the brackets


indicate focus: e.g. "John only [kissed Mary]" as distinct from "John only [kissed] Mary". After showing that the extant theories (proposed by Mats Rooth, Arnim von Stechow and J.A.G. Groenendijk) were inadequate in that they could not satisfactorily deal with several structures of the form "Only[NP]", Casalegno and Bonomi put forward an original theory within the framework of event semantics and showed that it applied to a large number of structures, including all counterexamples to previous theories; moreover, it could be successfully extended to sentences with multiple focus, such as "John only introduced [Bill] to [Sue]", and to "Only when" structures such as "Only [when John comes in] Mary goes out". However, Casalegno's central philosophical preoccupation was with the notions of truth and reference. His deep-seated beliefs about them are synthetically expressed at the beginning of "Three Remarks on Truth and Reference". On the one hand, he believed that both truth and reference were puzzling, obscure notions: "The idea that words correspond to bits of reality and that, in virtue of this correlation between words and things, statements have well-defined truth conditions is [...] notoriously hard to make clear and precise. The traditional explanations of how language can attach to the world are inadequate, and general philosophical considerations seem to indicate that no adequate explanation exists [...] our image of the relation between language and reality is illusory and misleading". On the other hand, he also thought that we cannot do without truth or reference: "Not only are these two notions [...] integral to our pre-theoretical image of how language works, but they also seem to be indispensable when we come to theorise". 
One side of this "Nec tecum, nec sine te" attitude was reflected both in constant criticism of semantic naturalism and its attempts at accounting for the relation between language and reality (in "Three Remarks on Truth and Reference", in the last section of "The Modal Properties of Truth", and more fully in "The Referential and the Logical Component in Fodor's Semantics") and in his support and reinforcement of Quine's indeterminacy arguments ("Quinean inscrutability vs. total inscrutability"). The other side was visible in his later attempt at showing why the notion of truth is useful not just in semantic theorizing but in the context of human life ("Truth and Truthfulness Attributions"). Moreover, Casalegno was extremely suspicious of any view that he saw as compromising truth or reference with epistemological notions: while this is obvious in the papers he devoted to the discussion of Michael Dummett's philosophy of language (not included in the present collection), there is a trace of it in "Reasons to Believe and Assertion", the only article in epistemology he ever published. Indeed, there is no doubt that the


notions of truth and reference that Casalegno regarded as both puzzling and indispensable were the realistic notions. In the "Three Remarks on Truth and Reference" paper, Casalegno does (predictably) three things: first, he produces a very simple and clever a priori argument against the possibility of naturalizing reference. Secondly, he argues against the claim that the semantic notions, being theoretical notions (i.e., notions that are involved in the explanation but not in the description of empirical data), are justified by their success in accounting for the speakers' semantic intuitions. Casalegno argues that the semantic notions can only do their explanatory job by being supplemented with our pretheoretical intuitions about truth. Thirdly, he criticizes Donald Davidson's claim that while theories of truth presuppose a pretheoretical notion of truth, the notion of reference is satisfactorily accounted for once we are aware of its role in the characterization of truth. Against this, Casalegno insists that here, too, we need a pretheoretical notion of reference to check a truth theory's T-sentences. In a way, all three remarks make the same point: both justifications and "reductions" of truth and reference fail in that they presuppose the intuitive notions of truth and reference. In "The Modal Properties of Truth" Casalegno showed that even Tarskian truth definitions do not fully capture the intuitive notion of truth, not because such definitions are not intensionally adequate (they are), but because the Tarskian notion of truth alone does not suffice to make sense of the idea that languages are characterized by their semantic rules: to achieve that result we must independently appeal to the intuitive notion of truth (see pp. 101-104). 
Thus he agreed with Hilary Putnam's argument against Carnap in Representation and Reality (1988) to the extent that Putnam's point was "to show that the intuitive notion of truth cannot be replaced everywhere with a predicate defined by Tarski’s method – and moreover that that method does not provide us with an exhaustive theoretical counterpart of the intuitive notion of truth"; but he disagreed with him if Putnam's argument was intended to prove the point by proving the intensional inadequacy of Tarskian definitions. Even though a Tarskian truth definition does not fully explain what it is for a language to be identified by its semantic rules, languages are identified by their semantic rules, hence biconditionals like "'Peter is happy' is true if and only if Peter is happy" are not just true but necessarily true. Consequently, Tarskian truth definitions are not just extensionally adequate but they are intensionally adequate as well. Another way one might think of grounding the semantic notions is by having linguistic and non-linguistic behavior determine them: this is the view Quine intended to undermine by the thought experiment of radical


translation and the arguments arising from it. In "Quinean Inscrutability vs. Total Inscrutability" Casalegno agreed with the gist of Quine's indeterminacy arguments, though not with the rationale that is often invoked to explain how they work. In his view, the inscrutability of reference does not depend on certain properties being necessarily coinstantiated (whenever there is a rabbit there is an undetached rabbit part, a rabbit stage, etc.) but rather on Quine's not imposing any constraint on how a translation manual translates individual words, except for the single requirement that stimulus-meaning of observation sentences be preserved. "Now—Casalegno observes—it turns out that this is compatible with any way of translating words, no matter how crazy" (p.135). Consequently, as Casalegno shows in great detail, inscrutability of reference is total, not restricted to gavagai-like cases; and it remains total even if one adds more demanding constraints on which translation manuals are admissible. In "The Referential and the Logical Component in Fodor's Semantics" Casalegno defends inscrutability against Jerry Fodor's claim that reference is, indeed, scrutable. In The Elm and the Expert (1994) Fodor had tried to show that it is possible to determine whether by the words "square" and "triangle" a speaker means square and triangle or part of a square and part of a triangle, by asking the speaker questions that involve logical notions such as conjunction. The meaning the speaker assigns to the logical words expressing such notions (e.g. "and") can be determined, in turn, by finding out which argument forms he accepts, i.e. by observing the speaker's inferential dispositions. Casalegno attacks Fodor's argument on three counts: first, even if we could be sure that the speaker means conjunction by "and" we could not choose between the two hypotheses (square vs. 
part of a square, etc.); secondly, we could not determine by Fodor's method whether the speaker does mean conjunction by "and"; thirdly, even if we could choose between the two hypotheses inscrutability would still be there, for the kind of reasons Casalegno had spelled out in the "Quinean Inscrutability vs. Total Inscrutability" paper. The gist of Casalegno's refutation of Fodor's argument hinges on a controversial claim: that inference patterns do not fix the meanings of the logical words. A speaker may accept standard inference schemata for a connective '*' while assigning non-standard extensions to—say—'F * G'. In the paper on Fodor's semantics, Casalegno proved his point by offering counterexamples to the meaning fixation thesis. However, he also remarked that the fact that the meaning of the logical symbols is not fixed by the set of valid inferences "would deserve to be discussed much more thoroughly than I can do here" (pp. 153-154). Such a thorough discussion is


exactly what he provided in "Logical Concepts and Logical Inferences". There he intended to show that, contrary to what some philosophers such as Paul Boghossian and Christopher Peacocke have claimed, it is not the case that a subject knows the meaning of a logical constant if and only if she accepts a certain set of logical rules of inference. First of all, being disposed to use a logical constant according to certain inference rules is not sufficient to know its meaning. According to Casalegno, if this were the case then it would follow that a speaker is entitled to assert a logically complex sentence only if the sentence logically follows from some finite set of logically simple sentences the speaker accepts. But this is clearly false, as Casalegno shows by a few simple counterexamples. Rather, "our capacity to describe situations by means of logically complex sentences is a primitive capacity which does not appear to be reducible to anything else and certainly cannot be reduced to the mere readiness to perform logically correct inferences" (p. 176). Casalegno was also sceptical about the other direction of the biconditional, i.e. he doubted that semantic competence about logical words ("or", "every", etc.) required acceptance of a certain set of inference rules. In each single case, we may have legitimate doubts that acceptance of a certain rule—say, Modus Tollens—is necessary for competence about a logical idiom ("if...then..."); but aside from that, the main difficulty is that we have not been told what kind of data are supposed to settle the issue. No doubt, from a normally competent speaker we expect a certain amount of inferential ability; this does not entail, however, that we have a sufficiently precise notion of "rule acceptance of which is necessary to possess a given logical concept".
In "Truth and Truthfulness Attributions"—the last paper Casalegno devoted to the semantic notions, truth and reference—he did something entirely different: by an elegant thought experiment he showed why it is good for us to have the (intuitive) notion of truth. Availability of the truth predicate, he argued, significantly increases our capacity for acquiring true beliefs, and this is an end in itself. Notably, the truth predicate cannot be surrogated in this function by other predicates such as "verified", or by "truthful" or "reliable" as predicated of a speaker, or by full-fledged justifications of an assertion. Underscoring the truth predicate's unique utility was Casalegno's last explicit tribute to those puzzling words, "is true". However, even in his very last paper, "Reasons to Believe and Assertion", he meant to show that in laying down the constitutive norm for assertion one cannot replace a truth-involving notion (knowledge) by non-truth-involving notions such as rationality or reasonableness. I.e., he wanted to show that Timothy


Williamson's knowledge rule, "One must: assert that p only if one knows that p", cannot be satisfactorily replaced by either Igor Douven's rationality rule, "X must: assert that p only if it is rational for X to believe that p", or by Jennifer Lackey's reasonableness rule, "X must: assert that p only if it is reasonable for X to believe that p". Casalegno's criticism is based on careful analysis of alleged counterexamples to the knowledge rule: eventually, he didn't claim to have proved Williamson right but only to have shown his proposal to be better than the competition.

We hope this short introduction will be helpful in giving the reader some idea of the breadth and import of Paolo Casalegno's work, and of the centrality of the topics he dealt with for analytic philosophy. To get a feel for the subtlety, originality and efficacy of his arguments, there is no alternative to reading the papers themselves.

CHAPTER I

APPROACHES TO QUANTIFICATION*

1. Two recent volumes of papers–Generalized Quantifiers (Gärdenfors 1987) and Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers (Groenendijk, de Jongh & Stokhof 1987)1–allow one to form a fairly precise idea of the way in which the topic of quantification in natural language is addressed nowadays within logical semantics. Given its evident importance, this topic has naturally always received some attention, but, in these last few years, studies in the field have blossomed luxuriantly. The stimulus here has been the publication of a number of important works, notable for their richness in original and fruitful ideas. The pages to follow can be read as a sort of introduction to these two recent volumes. I will illustrate some of the interesting problems that arise in the domain of natural language quantification, together with some of the theoretical approaches that have been proposed—starting with the "theory of generalized quantifiers" ably promoted by Jon Barwise and Robin Cooper, which features already in the titles of the two collections of papers. The picture that I sketch here will not be complete and will not be very thorough in its details, but it should be able to serve as a rough guide for initial purposes, or so I hope.

2. The theory of generalized quantifiers2 is based on a very simple idea that is not in the least bit new. It is the idea that Frege expressed when he said that quantifiers are "second order concepts," and, translated into extensionalist terms, it comes out as: quantifiers denote sets of sets. This

* I thank Maria Zinanni and Claudio Saccon for their generous aid in the preparation of this text. Eva Picardi and Ernesto Napoli read and commented on an earlier version of this work; my gratitude goes to them as well.
1 From this point on I will refer to these two volumes as GQ and SDRTTGQ.
2 The best introduction to the topic remains Barwise-Cooper 1981. For a synthetic presentation of the theory's basic concepts, see Sandri 1983 and the first chapter of van Benthem 1986. For a broad overview that takes into consideration numerous developments and applications, see Westerståhl 1986.


point might be obscured somewhat by the fact that elementary logic texts treat the symbols ∀ and ∃ syncategorematically, and do not endow them with any semantic values of their own. However, a moment's reflection suffices for it to become completely evident. To assert the truth of ∀x M–given a universe of discourse U and relative to a pre-established interpretation of the basic symbols–is to state that every element of U satisfies M, or in other words that the set [[M]] of elements of U satisfying M coincides with U itself, or in other words that [[M]] ∈ {U}; and to assert the truth of ∃x M is to assert that some element of U satisfies M, or in other words that [[M]] is not empty, or in other words that [[M]] ∈ {X | X ⊆ U & X ≠ ∅}. In light of these obvious considerations, we can decide (departing from the letter, but certainly not from the spirit of the presentation in standard texts) to assign semantic values to ∀ and ∃ themselves and to posit that [[∀]] = {U} and [[∃]] = {X | X ⊆ U & X ≠ ∅}. Following which, the truth conditions of sentences of the form Qx M with Q = ∀ or Q = ∃ can be expressed in a unitary way via the following clause:

(1) Qx M is true (given the universe of discourse U and relative to the pre-established interpretation) if and only if [[M]] ∈ [[Q]].

Now, once it has been established that the universal and existential quantifiers can be seen as denoting sets of sets, the question naturally arises if the notion of a quantifier can be generalized–that is, if one can introduce other expressions denoting sets of sets that would be of interest to the logician. The first to pose this question–with essentially logico-mathematical concerns–was Andrzej Mostowski in the fifties,3 who launched a research program that would subsequently undergo important developments.
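To see clause (1) at work, here is a minimal Python sketch of quantifiers as sets of sets; the four-element universe and the extensions fed to the truth test are illustrative choices of mine, not drawn from the text:

```python
from itertools import chain, combinations

# A toy universe of discourse U (an illustrative choice, not from the text).
U = frozenset({"a", "b", "c", "d"})

def powerset(s):
    """All subsets of s, as frozensets (hashable, so they can live in sets)."""
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# [[forall]] = {U}: the only admitted extension is the whole universe.
FORALL = {U}
# [[exists]] = {X | X is a subset of U and X is nonempty}.
EXISTS = {X for X in powerset(U) if X}

def holds(Q, extension):
    """Clause (1): 'Qx M' is true iff [[M]] is a member of [[Q]]."""
    return frozenset(extension) in Q

print(holds(FORALL, {"a", "b", "c", "d"}))  # True: every element satisfies M
print(holds(FORALL, {"a", "b"}))            # False
print(holds(EXISTS, {"a"}))                 # True
print(holds(EXISTS, set()))                 # False
```

Frozensets are used because Python's mutable `set` is unhashable and so cannot itself be a member of a set of sets.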
Barwise and Cooper, for their part, stress that the generalized notion of a quantifier is not only of interest to mathematics but also relevant for the logical analysis of natural language: according to these two researchers, all the expressions that linguists call N(oun)P(hrase)s deserve to be treated as generalized quantifiers. In fact, this is not a new idea either–it figures already in Montague 1973. The originality of Barwise and Cooper's work lies in the way in which they exploit this idea. But before entering into the heart of the matter, a couple of clarifying remarks are in order. It is sometimes said that the symbols ∀ and ∃ roughly correspond to the words every and some. Now, every and some are not NPs: they give rise to an NP only once they are combined with an N(oun) (every man,

3 Cf. Mostowski 1957.


some book). The equation "quantifiers = NPs" might thus seem odd; for an instant, one might even entertain the idea of replacing it with the equation "quantifiers = DET(erminer)s" (where DET is the precise syntactic category that the words some and every belong to). So let us try to elucidate this point. The fact is that it is only in a loose sense that ∀ and ∃ "correspond" to the words every and some. It would be more accurate to say that they correspond to those NPs of the form every N and some N in which N denotes the universe of discourse–for example, every thing and some thing. Indeed, if we articulate our analysis of natural language in such a way as to have [[every thing]] = {U} and [[some thing]] = {X | X ⊆ U & X ≠ ∅}, and if we stipulate moreover that for sentences of the form NP V(erb)P(hrase) with NP = every thing or NP = some thing the following clause applies (in obvious analogy with (1)):

(2) NP VP is true if and only if [[VP]] ∈ [[NP]],

we arrive at the correct truth conditions for the sentences under consideration, abstracting away from some exceptions.4 But how should we treat an NP of the form every N or some N when [[N]] ≠ U? As is well known, one of Frege's intuitions was that anything that can be expressed with NPs of this kind can equally be expressed by using the terms every thing and some thing (or by using ∀ and ∃, if we prefer a transcription into symbols). For example,

(3) every man is mortal

is equivalent to every thing, if it is a man, is mortal, and

(4) some book is boring

is equivalent to some thing is a boring book. However, it is in no way essential to proceed by way of these paraphrases. Nothing prevents us from treating every man and some book as quantifiers in their own right and imagining that [[every man]] = {X | X ⊆ U & [[man]] ⊆ X} and [[some book]] = {X | X ⊆ U & [[book]] ∩ X ≠ ∅}, in which case (3) and (4) are true if and only if [[mortal]] ∈ [[every man]] and [[boring]] ∈ [[some book]] respectively. In general, we can posit that [[every N]] = {X | X ⊆ U & [[N]] ⊆ X} and [[some N]] = {X | X ⊆ U & [[N]] ∩ X ≠ ∅};

4 I have in mind here cases where the subject NP falls within the scope of an NP contained within the VP, as well as the cases discussed in the two sections to follow.


once we do this, we find that the truth conditions of a sentence NP VP with NP = every N or NP = some N are just as clause (2) predicts. At this point, the reader should be able to imagine how this kind of analysis could be extended further to NPs containing DETs different from every and some. Take for example the DETs no, five, most. Given the above discussion, it shouldn't come as much of a surprise that we can assign the following values to NPs containing these DETs: [[no N]] = {X | X ⊆ U & [[N]] ∩ X = ∅}, [[five N]] = {X | X ⊆ U & |[[N]] ∩ X| = 5}, [[most N]] = {X | X ⊆ U & |[[N]] ∩ X| > |[[N]] − X|}.5 Naturally, what justifies these choices is the fact that the truth conditions of sentences containing the NPs in question can then be derived correctly again from the schema in (2). Now let us ask: what semantic value ought we to attribute to a DET D if we want to obtain the denotation of an NP of form D N compositionally on the basis of [[N]] and of [[D]]? The answer is obvious: since [[N]] is a subset of U and since [[NP]] is a set of subsets of U, [[D]] must be a function that maps each subset of U to some set of subsets of U.6 For example, [[every]] will be the function that maps any Y ⊆ U to the set {X | X ⊆ U & Y ⊆ X}, [[some]] will be the function that maps any Y ⊆ U to the set {X | X ⊆ U & Y ∩ X ≠ ∅}, [[most]] will be the function that maps any Y ⊆ U to the set {X | X ⊆ U & |Y ∩ X| > |Y − X|}, etcetera. In general, [[D]] will be defined in such a way that, for any N, [[D N]] = [[D]]([[N]]).

One last remark before we proceed to more substantial issues. At first glance it might seem that proper names have to wiggle out somehow from the quantificational treatment that NPs are subject to: after all, we are used to thinking of proper names as denoting not sets of sets, but rather mere individuals. But here an old trick comes to our aid: an individual can be identified with the set of all the sets of which it is an element. Thus, Emily Dickinson can be viewed as denoting the set of sets containing Emily Dickinson–and in that case, parallel with what we have seen, we can arrive at the correct truth conditions of a sentence like Emily Dickinson is well-known by saying that the sentence is true if and only if [[well-known]] ∈ [[Emily Dickinson]].
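The compositional picture of DETs as functions from subsets of U to sets of subsets of U, together with clause (2), can be sketched in a few lines of Python; the six-element universe and the noun/verb extensions are invented for illustration and are not the author's examples:

```python
from itertools import chain, combinations

U = frozenset(range(6))  # an illustrative six-element universe

def powerset(s):
    """All subsets of s, as frozensets."""
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

# Each DET maps a noun extension Y (a subset of U) to a set of subsets of U.
def every(Y): return {X for X in powerset(U) if Y <= X}
def some(Y):  return {X for X in powerset(U) if Y & X}
def no(Y):    return {X for X in powerset(U) if not (Y & X)}
def most(Y):  return {X for X in powerset(U) if len(Y & X) > len(Y - X)}

def true_sentence(NP, VP):
    """Clause (2): 'NP VP' is true iff [[VP]] is a member of [[NP]]."""
    return frozenset(VP) in NP

man, mortal = frozenset({0, 1, 2}), frozenset({0, 1, 2, 3})
dog, faithful = frozenset({3, 4, 5}), frozenset({3, 4})

print(true_sentence(every(man), mortal))   # True: man is a subset of mortal
print(true_sentence(no(man), {4, 5}))      # True: no man is in {4, 5}
print(true_sentence(most(dog), faithful))  # True: 2 of 3 dogs are faithful
```

Here [[D N]] = [[D]]([[N]]) falls out directly: `every(man)` is itself the NP denotation, ready to be tested against any VP extension.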

5 I use the notation |Y| to indicate the cardinality of the set Y.
6 It would actually be more accurate to say that every DET is associated with a functional F that applies to any structure and yields a function from P(U) to P(P(U)). But in the text I will try to keep to a level of maximal simplicity, at the cost of some approximation.
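The type-raising trick for proper names can likewise be made concrete; in this sketch the individuals and the predicate extension are my own illustrative assumptions:

```python
from itertools import chain, combinations

U = frozenset({"emily", "walt", "herman"})  # illustrative individuals

def powerset(s):
    """All subsets of s, as frozensets."""
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def lift(a):
    """Montague-style lifting: an individual denotes the set of all
    sets (of individuals) of which it is an element."""
    return {X for X in powerset(U) if a in X}

well_known = frozenset({"emily", "walt"})

# 'Emily Dickinson is well-known' comes out true iff
# [[well-known]] is a member of [[Emily Dickinson]]:
print(well_known in lift("emily"))   # True
print(well_known in lift("herman"))  # False
```

The lifted name thus has the same semantic type as every other NP, which is exactly what Montague's uniformity demand requires.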


And we come at last to the crucial question: is there really some profit to be made by viewing NPs as quantifiers? Does this point of view genuinely lead to a better understanding of the logical structure of natural language? Going by Montague 1973 only, one would be tempted to give a frank “no.” Montague’s prevailing concern was with compositionality, understood in a certain austere way: for Montague, expressions of the same syntactic category are to be matched at the semantic level with entities of the same type. It was in order to keep to this principle that Montague resorted to the trick we mentioned just above, which raises the type of the entities denoted by proper names. As for the remaining NPs (and here the only ones included in the English fragment that Montague studied were NPs of the form every N and a(n) N, together with definite descriptions), the choice to treat them as sets of sets7 seems like nothing more than a statement of the obvious. Well, as I mentioned earlier, the essence of Barwise and Cooper’s originality is that, in an apparently obvious idea, they perceived surprising new possibilities for development. Barwise and Cooper limit their attention to simple linguistic constructions –for instance, they put aside issues concerning intensional contexts. But to compensate, rather than restricting themselves to a narrow class of NPs as Montague did, they undertake a systematic investigation of all of the NPs admitted by natural language. This enlarged perspective allows them to identify large-scale regularities that, surprisingly, turn out to be describable (and perhaps even explainable) once reference is made to structural properties of the set-theoretic objects associated with NPs and DETs. 
More precisely: Barwise and Cooper show how, once DET and NP denotations are characterized in the way we have seen, it is possible to (I) formulate restrictive conditions that it seems that the DETs and NPs of any natural language must satisfy ("semantic universals"); (II) describe linguistically significant classes of DETs and NPs (that is, classes of DETs and NPs that behave in a uniform way with respect to various linguistic phenomena–and the concern here is not only with behavior specific to individual languages but also with behavior across the range of possible human languages). I will illustrate point (I)8 with an example that is very simple but not devoid of interest. We say that a DET D is conservative when, for any X, Y ⊆ U, Y ∈ [[D]](X) if and only if X ∩ Y ∈ [[D]](X). To get a sense of

7 In reality, as Montague wishes to deal with intensional contexts as well, he is forced to treat NPs as denoting properties of properties rather than sets of sets. But we can abstract away from this further complication here.
8 For a presentation and critical discussion of some of the semantic universals proposed by Barwise and Cooper, see Delfitto 1986.


this definition, think about the equivalence of sentences like Every man is mortal and Every man is a mortal man, No actor is shy and No actor is a shy actor, Most dogs are faithful and Most dogs are faithful dogs: these equivalences testify to the conservativity of every, no and most. One of the semantic universals proposed by Barwise and Cooper is the following: in any natural language, the DETs are all conservative. The reader can consider those languages known to him, and verify to what extent they conform to this condition, running through the various DETs in the inventory of each. It isn't easy to find counterexamples to the conservativity universal. One might think about only: Only Japanese tourists visit the Louvre is not equivalent to Only Japanese tourists are Japanese tourists who visit the Louvre. But it is doubtful that only is a DET, for reasons we will not go into here.9 I won't dwell further on the topic of semantic universals, in part because I don't wish to go into overly technical details, in part because it seems to me that this aspect of Barwise and Cooper's work is significant more for its methodological implications than for the intrinsic interest of the specific hypotheses presented. However, I would like to mention that, among the articles in GQ and SDRTTGQ that talk about semantic universals, Keenan 1987a merits particular attention. Many of the universals proposed by Barwise and Cooper concern simple DETs, that is, those DETs that are not built out of other DETs.10 (A very elementary example: in any natural language, the simple DETs are not trivial. That is, we do not find any DET D for which we would say that [[D]] maps any Y ⊆ U to ∅, nor do we find any DET D for which we would say that [[D]] maps any Y ⊆ U to the set of all subsets of U. For these kinds of DETs, the truth of D N VP would never depend on the N and VP chosen: in the first case, D N VP would invariably be false, and in the second case it would invariably be true.)
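Over a small finite universe, the conservativity universal can be checked by brute force. The sketch below is my own illustration: it verifies conservativity for every, no and most, and shows that a hypothetical "only"-style determiner (true of a VP extension just when it is included in the noun extension) would fail the test, in line with the Japanese-tourists example:

```python
from itertools import chain, combinations

U = frozenset({"a", "b", "c"})  # a small universe keeps the search space tiny

def powerset(s):
    """All subsets of s, as frozensets."""
    xs = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def every(Y): return {X for X in powerset(U) if Y <= X}
def no(Y):    return {X for X in powerset(U) if not (Y & X)}
def most(Y):  return {X for X in powerset(U) if len(Y & X) > len(Y - X)}
# A hypothetical 'only'-like DET: 'only N VP' true iff [[VP]] is a subset of [[N]].
def only(Y):  return {X for X in powerset(U) if X <= Y}

def conservative(det):
    """D is conservative iff for every noun set N and VP set X:
    X is in [[D]](N) exactly when N-intersect-X is in [[D]](N)."""
    return all((X in det(N)) == ((N & X) in det(N))
               for N in powerset(U) for X in powerset(U))

print(conservative(every))  # True
print(conservative(no))     # True
print(conservative(most))   # True
print(conservative(only))   # False, e.g. N = {"a"}, X = {"b"}
```

The failure for `only` arises because membership of X in `only(N)` depends on elements of X outside N, which is precisely what conservativity forbids.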
Now, Keenan in his article sets out a very general picture in which these conditions find a natural place. Keenan distinguishes between categories that are lexically free and those that are not. In the case of lexically free categories, there is no principled distinction to be made between denotations that simple elements of the

[9] The so-called Keenan-Stavi Theorem–one of the most well-known contributions to the theory of generalized quantifiers–relates to the notion of conservativity (cf. Keenan and Stavi 1986). Keenan and Stavi take their result to show that, at least in a certain sense, all conservative DETs are expressible in natural language. For a concise presentation of the theorem and for a justified invitation not to interpret it as being stronger than it is, see van Benthem 1986, pp. 10-11.
[10] For example, all, three and four are simple DETs, while not all and three or four are not.

Approaches to Quantification


category can have, on the one hand, and denotations that complex elements of the category can have, on the other. By contrast, in the case of categories that are not lexically free, simple elements can have only some of the denotations available to complex elements. Against this background, the category of DETs and the category of NPs would be seen as categories that are not lexically free, and this would justify the possibility of formulating semantic universals for these categories of the kind we have touched on. Keenan moreover attempts to explain why certain categories are lexically free and others are not: here we find the thesis that the lexically free categories are the "small" ones, those matched with a small set of potential denotations. I am not sure that Keenan's reasoning coheres perfectly, but the attempt deserves appreciation. I pass now to the second of the two points I mentioned above: the use of the conceptual apparatus of the theory of generalized quantifiers in order to characterize linguistically significant classes of NPs. Here a particularly effective example,[11] one that I think is worth some discussion, concerns sentences of the form there is/are NP. It is well known that, in contexts of this kind, certain NPs are admissible and others are not. Thus, the sentences

(5) there is a boy / are some (ten, less than ten, more than ten, several, an odd number of) boys in the garden

are fully acceptable, but the same is not true of the following sentences:

(6) there is every (the) boy / are all (most, one half of the) boys in the garden.

The question is: is there a way to distinguish the two groups of NPs, other than by pure and simple enumeration? Before Barwise and Cooper, linguists had attempted to answer this question with the suggestion that "indefinite" NPs may appear in there-sentences but "definite" NPs may not. The trouble was that these terms were often used without any satisfactory definition. G. Milsark wrote in this regard:

The notion "definite" [is] a notion whose status in linguistic theory is anything but clear. The term has been used for generations in the pedagogy and scholarly description of many of the Indo-European languages, but within that tradition it is usually used only in discussing the overt formal contrast between "definite" and "indefinite" determiners such as English the and a/an. In philosophical logic, the related notion "definite description" has a similarly narrow scope, referring really only to singular nominal expressions introduced by the and their logical equivalents. Linguists, on the other hand, tend to take the term "definite noun phrase" in a broader sense […] As far as I know, the only coherent motivation that has ever been given for the inclusion of [...] NP types under the rubric "definite" has concerned similarity of distribution. [...] Clearly, more than this is needed, however. (Milsark 1977, p. 5)

[11] Not the only one: another nice example involves the use of the notion of monotonicity to characterize the distribution in English of words like any and ever. Moreover, this idea antedates Barwise and Cooper's work: cf. Ladusaw 1979.

In the 1977 article from which this passage is taken, Milsark strives to remedy the situation. He attempts to characterize the NPs that can occur in there-sentences on the basis of explicit criteria that can be applied with some generality. But his analysis–which is conducted entirely in informal terms–cannot be said to be fully satisfying either. Despite his good intentions and despite the insightfulness of some of his observations, Milsark's discussion leaves various points shrouded in obscurity. Things finally change with Barwise and Cooper. Using very modest formal equipment, they address the issue with perfect clarity and complete rigor. Let us say that a DET D is positive strong if, for every universe of discourse U and every X ⊆ U, X ∈ [[D]](X), and that it is negative strong if, for every universe of discourse U and every X ⊆ U, X ∉ [[D]](X). A DET is strong if it is positive or negative strong, and is weak if it is not strong. Now, Barwise and Cooper's conjecture is that sentences of the form there is/are NP admit only NPs with weak DETs. For a first test of the adequacy of this proposal, consider again examples (5) and (6): a moment's reflection suffices to convince oneself that indeed the DETs in (5) are weak, while the DETs in (6) are strong. As for why these two classes differ in their behavior, Barwise and Cooper propose the following. A sentence of the form there is/are D N is interpreted as expressing that U ∈ [[D N]] (where U is, as usual, the universe of discourse). But it is easy to show that, in the case where D is strong, U ∈ [[D N]] is either a tautology or a contradiction:[12] the corresponding sentence with there therefore has no informational value, and this, according to Barwise and Cooper, is the reason why it sounds unnatural.

[12] By conservativity, U ∈ [[D]](X) iff X ∩ U ∈ [[D]](X) iff X ∈ [[D]](X). Therefore, if D is positive strong, that is if X ∈ [[D]](X) is always true, then U ∈ [[D]](X) is also always true; if instead D is negative strong, that is if X ∈ [[D]](X) is always false, then U ∈ [[D]](X) is also always false.
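The weak/strong classification can likewise be tested mechanically on finite universes. The sketch below is my own, not part of the text: a DET is modeled as a Boolean function from a noun-set and a VP-set, and D is positive (negative) strong when D(X)(X) is invariably true (false).

```python
from itertools import chain, combinations

def subsets(universe):
    s = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def classify(det, universe):
    """Positive strong: X in [[D]](X) for every X; negative strong:
    X not in [[D]](X) for every X; weak: neither."""
    values = {det(x, x) for x in subsets(universe)}
    if values == {True}:
        return "positive strong"
    if values == {False}:
        return "negative strong"
    return "weak"

some    = lambda n, vp: len(n & vp) >= 1                 # admissible in (5)
every   = lambda n, vp: n <= vp                          # excluded, as in (6)
neither = lambda n, vp: len(n) == 2 and not (n & vp)

U = frozenset(range(3))
```

Here some comes out weak (some(X)(X) fails when X is empty), every comes out positive strong, and neither negative strong, matching the predicted contrast between (5) and (6).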


Barwise and Cooper's solution is so simple that it could even give the impression of triviality: one might think that, if this is the kind of result that we get out of the theory of generalized quantifiers, then the theory can't be of much interest. But it would be wrong to reason in this way. Natural language semantics aims to be an empirical discipline. What makes a theoretical proposal in this field interesting, then, is not its degree of formal complexity but rather its capacity to account for facts of natural language. If, by adopting a certain point of view and by making use of a certain technical apparatus, we find that we can explain the facts of language in a simple way, then we can conclude that this point of view and these technical resources were well chosen. Now, to say that Barwise and Cooper's solution is simple, clear and precise is certainly not to say that it is perfect, or that it leaves no room for improvement. The advantage of having a hypothesis that is formulated in simple, clear and precise terms is that one can perceive its implications more easily and see how to test it. It is unsurprising therefore that some researchers have returned to the problem of there-sentences convinced that they could do better than Barwise and Cooper. This is the case for Johnsen 1987 in particular. According to Johnsen, an NP can occur in sentences of the form there is/are NP[13] only if it contains an intersective DET, that is, a DET D such that, for any X, Y ⊆ U, to say that Y ∈ [[D]](X) is to say that Y ∈ [[D]](X ∩ Y). Keenan 1987b's analysis is still more interesting. Among other things, Keenan strongly–and not implausibly–criticizes Barwise and Cooper's idea (adopted by Johnsen as well) that the unacceptability of certain sentences can be explained on the basis of the triviality of their informational content. A detailed discussion of these developments would certainly be worthwhile but here I will not go beyond this brief mention of their existence.

3.
For all its merits, the kind of analysis defended by Barwise and Cooper 1981 is far from solving all the problems related to quantification in natural language. Further tools are needed, for example, to treat so-called donkey-sentences, that is, sentences like

(7) every farmer who owns a donkey beats it[14]

[13] To be precise, Johnsen considers not only there-sentences of this kind, but also sentences like there arrived some men at the airport. Higginbotham 1987 also concerns himself with sentences of this sort.
[14] Examples of this kind were first brought up by Geach 1962.


where the relevant interpretation is one on which the pronoun it takes a donkey as its antecedent. That a sentence of this kind should create difficulty might puzzle those who view the logical analysis of language as an enterprise that consists in taking complete sentences and paraphrasing them with symbolic terms (an exercise that is not always trivial but that is of modest theoretical relevance). After all, any student who has a modicum of familiarity with logical notation is capable of proposing the following translation for (7) after a bit of thought.

(8) ∀x∀y ((farmer(x) & donkey(y) & own(x, y)) → beat(x, y)).

Where is the problem, then? The problem arises because the goal of logical semantics is not to analyze individual sentences considered in isolation from all other sentences, but rather to derive the logical forms of sentences using general principles that can be applied in a systematic way. It is from this standpoint that (7) poses difficulty. If we try to analyze (7) while keeping to the Fregean recipe for the translation of sentences containing NPs of the form every N and an N, we obtain something like

(9) ∀x ((farmer(x) & ∃y (donkey(y) & own(x, y))) → beat(x, z))

which is certainly not equivalent to (8) and which is glaringly inadequate. Notice in particular that in (7) (on the interpretation that interests us) the pronoun it is anaphorically related to the NP a donkey, but in (9) the variable z–which ought to correspond to the pronoun–is free. In a moment of scant lucidity, one might think about improving (9) by replacing z with an occurrence of y, in the illusion that the variable corresponding to the pronoun in (7) would then end up bound by the existential quantifier; but this move would obviously be futile, as this new occurrence of y would remain outside the scope of ∃.
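The truth conditions that (8) assigns to (7) are easy to model-check on a small domain. The sketch below is only an illustration of (8)'s import (the toy model and all the names in it are mine): it quantifies over all farmer/donkey pairs related by ownership.

```python
def true_by_8(farmers, donkeys, owns, beats):
    """Truth of (8): for every x and y, if x is a farmer, y a donkey,
    and x owns y, then x beats y."""
    return all((x, y) in beats
               for x in farmers for y in donkeys
               if (x, y) in owns)

# A toy model with hypothetical individuals:
farmers = {"pedro", "juan"}
donkeys = {"d1", "d2"}
owns = {("pedro", "d1"), ("juan", "d2")}
beats = {("pedro", "d1")}   # juan spares his donkey, so (8) is false here
```

Adding ("juan", "d2") to beats makes (8) true. The difficulty the text raises, of course, is not evaluating (8) but deriving it from (7) by systematic principles.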
The problem that we are faced with is thus the problem of identifying general principles for arriving at a sentence's logical form that can do two things at once: on the one hand, in "normal" cases, they should yield results equivalent to those we get out of the Fregean recipe for translating NPs with every and a in terms of ∀ and ∃; on the other hand, they should also justify the attribution of a logical form like (8) to a sentence like (7). And the theoretical framework presented in § 2 does not seem to be of any help to us here.[15]

[15] Donkey-sentences also constitute a problem for the binding theory as formulated within Chomskian linguistics. Here the difficulty arises from the fact that the pronoun in a donkey-sentence is not c-commanded by its antecedent at the level of S-structure. For a recent discussion of this issue, see Reinhart 1987.


An ingenious attempt at a solution was made by Hans Kamp.[16] Kamp's solution is situated within an approach to semantics that is original in many ways and unquestionably deserving of attention: this approach is the Discourse Representation Theory referred to in the title of one of the two volumes that we are concerned with in these pages. In Kamp's theory, the ascription of truth conditions to sentences (which are considered not only in isolation but also in the context of sequences that form "discourses") is mediated by the construction of "discourse representations" (DRs), or, more generally, by the construction of "discourse representation structures" (DRSs). Kamp claims that the procedures for constructing DRs mirror the mental processes that take place in an individual who is actually interpreting a discourse. The sentences of a discourse get taken into consideration one after the other, in the order in which they are presented. Moreover, the treatment of each individual sentence is not compositional: while traditionally one seeks to determine a sentence's truth conditions on the basis of the semantic values assigned to the expressions that make it up, in the case of DR construction one starts from the complete sentence and one breaks it down progressively into parts. This process gradually brings out the informational content of the sentence in the form of clauses that, taken all together, furnish a sort of picture of a possible state of affairs. If this picture happens to be compatible with reality, the sentence that served as the starting point is true; otherwise, it is false. By way of example, let us consider a discourse of the simplest kind, one that consists of a single sentence:

(10) a boy loves a girl

One of the salient characteristics of the generation procedure for DRs is that every occurrence of an NP of the form an N entails the introduction of a corresponding "discourse referent." In practice, one can think of a discourse referent as a free variable.
Since (10) contains two occurrences of NPs of the relevant form, a DR for (10) will have to contain two discourse referents, say x and y. Beyond this, a DR like this will have to contain the following clauses (which result from the process that breaks (10) down into informational content and which are not further reducible):

[16] Cf. Kamp 1981. A related analysis of donkey-sentences was proposed by Irene Heim in Heim 1982. Unfortunately this text has remained unpublished, but see Heim 1983. [Translator's note. The dissertation by Irene Heim that Casalegno refers to here was subsequently published as: Heim, I. 1988, The Semantics of Definite and Indefinite Noun Phrases, Garland Publishing (Outstanding Dissertations in Linguistics Series), New York.]


boy(x), girl(y), love(x, y).

These three clauses taken together depict, so to speak, a possible state of affairs. In technical terms, the compatibility of this possible state of affairs with reality (and thus the truth of (10)) can be identified with the existence of a function f from {x, y} to the universe of discourse such that f(x) and f(y) satisfy the conditions expressed by the clauses under consideration. At this point one might object that all this is quite uninteresting. One might very well have the impression that the foregoing is just a pointlessly laborious way of saying that (10) is true if and only if the open formula boy(x) & girl(y) & love(x, y) is satisfiable–and in that case, why not simply say that (10) can be translated as ∃x∃y (boy(x) & girl(y) & love(x, y))? In fact, the decision to treat NPs of the form an N as "corresponding to" free variables has its advantages. A first advantage concerns the possibility of analyzing, within the framework of Kamp's theory, not only individual sentences but also–as we said above–sequences of more than one sentence structured so as to make up a discourse. Let's suppose that (10) is the first sentence of a discourse that continues with

(11) she hates him

The most natural reading is the one on which the pronouns she and him are interpreted as referring respectively, by way of anaphora, to the girl and the boy mentioned in (10). Now, it is very easy to capture this idea in terms of DRs: all one needs to do is to extend the DR previously constructed for (10) by adding the further clause hate(y, x). If instead one represents (11) by means of a closed formula in which the variables are existentially quantified, it becomes more difficult to account for the discourse constituted by (10) and (11) (presumably one would represent the entire discourse by means of a single closed formula that would not contain as a subformula the formula used to represent (10) in isolation).
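The truth condition just stated (the existence of a suitable function f from the discourse referents to the universe) can be sketched directly. This is my own minimal rendering, not Kamp's formalism: clauses are predicate/variable pairs, predicates are given by their extensions, and a DR is true when some assignment satisfies every clause.

```python
from itertools import product

def dr_true(referents, clauses, universe, extensions):
    """A DR is true iff some f: referents -> universe satisfies all clauses."""
    refs = sorted(referents)
    for values in product(sorted(universe), repeat=len(refs)):
        f = dict(zip(refs, values))
        if all(tuple(f[v] for v in vs) in extensions[p] for p, vs in clauses):
            return True
    return False

# Hypothetical toy model:
universe = {"bill", "mary"}
extensions = {"boy": {("bill",)}, "girl": {("mary",)},
              "love": {("bill", "mary")}, "hate": {("mary", "bill")}}

dr_10 = [("boy", ("x",)), ("girl", ("y",)), ("love", ("x", "y"))]
# extending the same DR for (11) "she hates him": just add hate(y, x)
dr_11 = dr_10 + [("hate", ("y", "x"))]
```

Both dr_10 and dr_11 come out true in this model; note that (11) is handled simply by adding one clause to the DR already built for (10), which is exactly the point made above.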
A second advantage of the treatment of NPs with the indefinite article in terms of free variables is that this allows us to solve the problem we started from, namely that of sentences like (7). But to see how, one needs to know something first about the way in which NPs of the form every N are treated in Kamp's theory. If a discourse contains a sentence with an NP of this form, the analysis provides for the construction not merely of a single DR, but rather of a DRS–that is to say, a set of suitably ordered DRs. More specifically, when one has a sentence of the form every N VP, one has to introduce two distinct DRs, call them DR1 and DR2, where one corresponds to N and the other to VP. DR1 and DR2 are conceived in such a way that the starting sentence is true if and only if every assignment of values to the discourse referents contained in


DR1 that satisfies DR1 is extendable to some assignment of values to the discourse referents in DR2 that satisfies DR2. An elementary example: in the DRS associated with

(12) every farmer is happy

DR1 and DR2 will consist of the clauses farmer(x) and happy(x), respectively. Now for (7). In the case of (7), DR1 is constructed in the following way: we begin by generating the clauses farmer(x) and own a donkey(x); subsequently, in accordance with the idea that NPs introduced by the indefinite article have to be treated in terms of free variables, the second of these two clauses gets broken down into donkey(y) and own(x, y). As for DR2, it will be constituted of the single clause beat(x, y)–what is essential here is that the discourse referent introduced into DR2 that corresponds to the pronoun it (y in the case at hand) is the same as the discourse referent introduced into DR1 that corresponds to the NP a donkey. Now, let us ask with regard to a DRS constructed in this way: when will it be the case that every valuation of DR1's discourse referents satisfying DR1 can be extended to a valuation of DR2's referents that satisfies DR2? Since in this case DR2 doesn't contain any discourse referent that isn't already in DR1, we can reformulate this question as follows: when will it be the case that every valuation satisfying DR1 satisfies DR2 as well? The answer is obvious: this will happen precisely when (8) is true. But (8), as we pointed out right at the outset, constitutes an adequate translation of (7). Kamp's analysis of (7) therefore arrives at the right result. This presentation was rather rough–and too much so, of course, to allow the reader to appreciate all the merits of Discourse Representation Theory. But on reading the articles collected in SDRTTGQ and GQ, one can easily see how influential Kamp's ideas have been.
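The universal truth condition for (7)'s DRS can be checked in the same brute-force way. Again, this is a sketch of my own devising rather than Kamp's official definitions: since DR2 introduces no referent beyond those of DR1, truth amounts to every assignment satisfying DR1 also satisfying DR2.

```python
from itertools import product

def satisfies(f, clauses, extensions):
    return all(tuple(f[v] for v in vs) in extensions[p] for p, vs in clauses)

def drs_every_true(referents, dr1, dr2, universe, extensions):
    """True iff every assignment satisfying DR1 also satisfies DR2
    (DR2 is assumed to add no new discourse referents)."""
    refs = sorted(referents)
    return all(satisfies(dict(zip(refs, values)), dr2, extensions)
               for values in product(sorted(universe), repeat=len(refs))
               if satisfies(dict(zip(refs, values)), dr1, extensions))

# DRS for (7): DR1 = {farmer(x), donkey(y), own(x, y)}, DR2 = {beat(x, y)}
dr1 = [("farmer", ("x",)), ("donkey", ("y",)), ("own", ("x", "y"))]
dr2 = [("beat", ("x", "y"))]
universe = {"pedro", "d1"}   # hypothetical toy model
extensions = {"farmer": {("pedro",)}, "donkey": {("d1",)},
              "own": {("pedro", "d1")}, "beat": {("pedro", "d1")}}
```

On this model the DRS condition agrees with translation (8): both are true here, and both become false if the extension of beat is emptied.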
Even if only two authors–Ewan Klein 1987 and Henk Zeevat 1987–choose to work directly within Kamp's framework, there seems to be a general conviction that this theory furnishes an adequate solution to the problem posed by donkey-sentences (and other problems as well). There are some attempts at reformulation, among which the most interesting is unquestionably Barwise 1987. Barwise's goal in this article is to show how one can refashion what Kamp has done in a way that respects the principle of compositionality, without recourse to DRs and without departing very far from the spirit of the "theory of generalized quantifiers".[17]

[17] As the problem of donkey-sentences falls outside the framework outlined by Barwise and Cooper, the reader might wonder if this fact constitutes a "rebuttal" to the theory of generalized quantifiers. It is worth emphasizing that it does not. The existence of donkey-sentences does show (as do the facts we will take up in § 4) that the procedure that Barwise and Cooper consider for evaluating sentences is applicable only within a limited domain–and that, if we assume that an expression's denotation has to include everything relevant for determining the truth conditions of all the sentences in which the expression appears, then we probably have to conclude that strictly speaking NPs do not "denote" sets of subsets of the universe of discourse. But this ultimately has little importance. The fact remains that, by associating NPs with sets of sets in the way we have seen and by then introducing simple criteria for classifying these set-theoretic objects, Barwise and Cooper manage to account for significant regularities within individual languages and even across all natural languages; this is proof enough that their point of view is valuable and fruitful. This said, it should be added that in any event, when one looks at the current literature on quantification, one finds it rather fragmentary: the theoretical approaches to quantification that exist are heterogeneous and cannot directly be compared with one another (this is the case precisely for Barwise and Cooper's theory and Discourse Representation Theory), and one perceives the lack of a single uniform theoretical framework capable of integrating these different approaches. The exercise in formalization that Barwise undertakes in the work cited just above can be seen as an attempted step towards achieving just such an integration.

While the article deserves a detailed examination, I will abstain from this in order to avoid an overly technical discussion. One piece of advice to the reader: before taking a stab at Barwise's text, it might be useful to read Mats Rooth's contribution (Rooth 1987) by way of introduction. Rooth presents a formalism that is analogous but much simpler. One issue that is not addressed adequately either in SDRTTGQ or in GQ, and which however ought to be discussed thoroughly, is the issue of how to extend Kamp's theory to NPs that contain DETs other than every and a. Obviously, every and a are not the only DETs that can appear in donkey-sentences; extending the theory is therefore an inescapable duty. But performing this task requires untangling some knotty issues. I think it is worth dedicating a few words to this. A preliminary problem first. It is well known that NPs introduced by certain DETs can't serve as antecedents for a donkey-anaphor. This is the case for NPs of the form every N: notice, for example, that in

(13) a gourmet who knows every neighborhood restaurant visits it regularly

the pronoun it cannot take every neighborhood restaurant as its antecedent (and in fact Kamp's theory correctly excludes this possibility). But what general distinction can one draw between those NPs that can serve as


antecedents to pronouns in donkey-sentences and those that can't? Even if researchers have by no means neglected this question, I do not see that a persuasive answer has been given as yet. It is a widespread belief[18] that the class of possible antecedents to donkey anaphors coincides with the class of indefinite NPs–those which can occur in there-sentences and of which, as we have seen, the theory of generalized quantifiers seeks to provide a rigorous characterization. But is this really the case? I have my doubts. It seems to me, for instance, that one can find donkey anaphora with an antecedent of the form all the N, and an NP of this kind is certainly no more indefinite than an NP of the form every N. As evidence, consider the contrast between (13) and the sentence obtained from (13) by substituting all the neighborhood restaurants for every neighborhood restaurant and by putting the pronoun in the plural form: unlike (13), this new sentence can be read perfectly well as a donkey-sentence. Or take the sentence

(14) every man who seduces all the women he meets treats them with little respect.

Here too, it seems to me that nothing prevents us from interpreting the sentence in donkey style. (Naturally, if what I am saying is correct–that is, if the DETs every and all behave differently with respect to the phenomenon of donkey anaphora–the consequences of this go beyond the simple fact that the class of NPs capable of serving as donkey anaphor antecedents cannot be identified with the class of indefinites. It also follows that it is impossible to characterize this class using the criteria available within the theory of generalized quantifiers: after all, from the point of view of that theory, there is no difference whatsoever between every and all!)
I will not dwell further on this point because I am not able to propose any hypothesis about what the right distinction is, and a few more rounds of the example-counterexample game would probably be as inconclusive as they would be easy to play. I am convinced that the issue requires further study, however. Apart from this preliminary problem, there are some more specific problems that one runs into when one sets about extending Kamp's theory. We have seen that Kamp's analysis of a sentence like (7) is based on two assumptions: (I) NPs of the form an N do not have, so to speak, any quantificational value of their own–in the construction of a DRS, every occurrence of an NP of this kind leads to the introduction of a free variable, and it is only on the basis of the context that one can establish whether in the end this variable will be interpreted as if it were bound by an

[18] See for example Higginbotham 1987 and Reinhart 1987.


existential quantifier or in some other way instead; (II) by contrast, NPs of the form every N have a quantificational value of their own–it is as though every were associated with a universal quantifier of a kind that is sometimes called "unselective," that is, a quantifier that can act on several variables at once, including any around that correspond to NPs of the form a N. Now, it isn't clear how to extend (I) and (II) to the analysis of donkey-sentences in which we find DETs other than every and a. Consider

(15) every farmer who owns at least five donkeys beats them.

The temptation here is to say, on analogy with (I), that the NP at least five donkeys does not have any quantificational value of its own. And at first sight it seems reasonable to go about constructing the DRS for (15) by introducing a clause like at least five donkeys(x) to correspond to this NP, where x is a variable over sets or "groups" of individuals.[19] The trouble is that there also exist sentences like

(16) at least five farmers who own a donkey beat it.

In this case it seems impossible not to attribute a quantificational value of its own to the NP with the DET at least five. We thus face a dilemma: do NPs with the DET at least five have a quantificational value of their own, or don't they? At this point, one could look for a way out by hypothesizing that NPs containing certain DETs behave on some occasions like true quantifiers and on other occasions in another way. In fact, an idea of this kind is suggested in GQ by Sebastian Löbner 1987 and by Jan Lønning 1987[20] (for reasons not directly connected with the problem of donkey-sentences). However, I am rather skeptical about the actual workability of this solution. Löbner's and Lønning's arguments are not very persuasive, and on top of this they do not offer an adequate formal implementation of their proposal.
Worse still, there are complicated examples of donkey-sentences (like if more than five farmers who own a donkey beat it, Pedro reports them to the Animal Protection Agency) in which the reasons for and against the attribution of a quantificational value to a single occurrence of an NP seem to cancel each other out (or, better, sum together). A further difficulty concerns sentences of the following kind:

[19] The introduction of groups of individuals into the universe of discourse seems to be necessary in any event in order to account for the so-called "collective reading" of quantifiers–see § 4.
[20] The same idea already appears in Milsark 1977.


(17) almost all farmers who own a donkey beat it.

If we enthusiastically extend to (17) the strategy summarized in point (II) above, we interpret this sentence as meaning: for almost all pairs ⟨x, y⟩ such that x is a farmer and y is a donkey owned by x, x beats y. But obviously this paraphrase is incorrect. What (17) expresses is that almost all the farmers owning donkeys beat all the donkeys that they own. This difficulty has been pointed out various times in the literature on the subject; but it is more challenging than one might think to find an acceptable solution. See Reinhart 1986 on this matter. Reinhart treats sentences like (17) in a satisfactory way, but her analysis applied to an example like no farmer who owns a donkey beats it generates the following paraphrase: no farmer who owns donkeys beats all the donkeys that he owns! I don't mean to say that these problems are unsolvable, or even that solving them will necessarily turn out to be complicated. They are simply problems that one has to bear in mind. Discourse Representation Theory as Kamp formulated it does not yet provide a complete and definitive treatment of all the difficulties connected with donkey-sentences. It is only a point of departure–though one whose qualities are beyond question.

4.

Donkey-sentences do not constitute the only case for which the technical and conceptual apparatus of § 2 proves inadequate. Though for different reasons, the so-called "collective use" and "branching use" of quantifiers also call for some enhancements to the theoretical framework outlined in Barwise and Cooper 1981. Let us have a quick look at this. The collective use is exemplified by sentences of the following kind:

(18) the conspirators assembled at the stroke of midnight
(19) some demonstrators surrounded the police car.

It is easy to see in what ways these sentences are special.
(18) and (19) assert that a certain action was undertaken not by single individuals, but rather by a group of individuals acting, in a word, collectively: the group of conspirators in one case, a group made up of some demonstrators in the other. Compare (18) and (19) with

(20) the conspirators grew whiskers
(21) some demonstrators were injured.

The truth conditions of (20) and (21) come out correctly if we attribute to the DET the the same denotation that we attributed to every in § 2, and if we assume that [[some]] is the function that associates any Y ⊆ U with the


set {X | X ⊆ U & |Y ∩ X| ≥ 2}. But obviously, when it comes to (18) and (19), this sort of treatment for the and some will not work: (18) is not equivalent to every conspirator assembled at the stroke of midnight (assuming this sentence means something), and (19) is not equivalent to two or more demonstrators each surrounded the police car. Collective quantification is the concern of Godehard Link 1987 in his contribution to GQ. In this article, Link brings up–together with a wealth of applications–an analysis that he developed in earlier work.[21] In order to account for the collective use of quantifiers, Link suggests using structures whose universe of discourse U contains two sorts of individuals: "ordinary" or "atomic" individuals and "sum" or "group" individuals. U must moreover be structured by a binary operation ⊕ whose intuitive sense is as follows: if x and y are both atoms, then x ⊕ y is the sum of x and y, or in other words the group formed by x and y; if x and y are both groups, then x ⊕ y is the group obtained by merging x and y; and finally, if x and y are such that one of them is an atom and the other a group, x ⊕ y is the group obtained by adding the atom on to the group. The ordered pair ⟨U, ⊕⟩ must constitute a complete semilattice. Let ≤ be the order on U induced by ⊕: if x is an atom and y is a group, then x ≤ y means that x is among the constituents of y; if instead x and y are both groups, then x ≤ y means that y either coincides with x or extends it. In a structure of this kind, predicates are interpreted in the usual way, that is, as subsets of U; and, importantly, a subset of U can serve as the interpretation of a predicate even if it has elements that are not atoms. Consider now examples (18) and (19). (18) says something about a group made up of all the conspirators: to be precise, it says that this group assembled at the stroke of midnight.
But in a structure à la Link the group of all the conspirators is represented by sup([[conspirators]]): therefore, (18) counts as true in a structure if and only if in that structure sup([[conspirators]]) ∈ [[assemble at the stroke of midnight]]. (19) asserts the existence of a group of demonstrators that satisfies certain conditions: namely, it is a group that is made up of at least two individuals and that surrounded the police car. We can thus say that (19) is true in a structure if and only if in that structure there exist a group x and two distinct atoms y, z such that y ≤ x, z ≤ x and x ∈ [[surround the police car]]. These analyses lead us to the following denotations for the and some: for any Y ⊆ U,

[[the]](Y) = {X | X ⊆ U & sup(Y) ∈ X}

and

[[some]](Y) = {X | X ⊆ U & ∃x∃y∃z(∀u(u* ≤ x → u ∈ Y) & y ≠ z & y* ≤ x & z* ≤ x & x ∈ X)}

[21] See in particular Link 1983.
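In outline, a Link-style structure can be mimicked in Python by taking atoms to be singleton frozensets and groups to be larger ones, with ⊕ rendered as union and ≤ as inclusion. The following is a minimal sketch, not Link's own formalization; the situation, the names, and the toy extensions are all invented for illustration:

```python
from itertools import chain, combinations

def groups(atoms):
    """All non-empty frozensets over a set of atoms: the individuals of a
    toy Link-style structure, with x (+) y as union and x <= y as inclusion."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(atoms), n) for n in range(1, len(atoms) + 1))]

def sup(ext):
    """Supremum (under the join, i.e. union) of a predicate extension."""
    return frozenset().union(*ext)

# A toy situation (extensions invented for illustration):
conspirators = {frozenset({"a"}), frozenset({"b"}), frozenset({"c"})}
assembled    = {frozenset({"a", "b", "c"})}     # a collective predicate

# (18): sup([[conspirators]]) is in [[assemble at the stroke of midnight]]
truth_18 = sup(conspirators) in assembled

# (19): some group x with at least two distinct atomic parts, all of them
# demonstrators, is in [[surround the police car]]
demonstrators = {frozenset({"d1"}), frozenset({"d2"}), frozenset({"d3"})}
surrounded    = {frozenset({"d1", "d2"})}

truth_19 = any(x in surrounded and len(x) >= 2
               for x in groups(sup(demonstrators)))

print(truth_18, truth_19)   # True True
```

Identifying ⊕ with union makes the structure a complete semilattice automatically, with ≤ falling out as the subset relation.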

Approaches to Quantification


(where v* ≤ w means: “v is an atom and v ≤ w”). And from this point on, we follow the course charted in §2: an NP of the form D N, with D = the or D = some, will have as its denotation [[D]]([[N]]), and the truth conditions of sentences like (18) and (19) will come out as conforming to schema (2). In conclusion: the treatment of collective quantification proposed by Link is fully compatible with the idea that NPs denote sets of sets and that DETs denote functions from sets to sets of sets; the big innovation, as we have seen, lies in the use of structures that are more complicated than usual. One last observation before passing on to a different matter. Link maintains that in sentences like (20) and (21) the conspirators and some demonstrators have the same semantic value that they have in sentences like (18) and (19). However, there is an apparent difficulty here: as Link points out, the predicates grow whiskers and be injured are “distributive” predicates–that is, predicates whose extension is made up exclusively of atoms (what could it mean to say of a group of individuals that it grew whiskers collectively or that it was injured collectively?). Therefore, it would be wrong to say that (20) is true if and only if sup([[conspirators]]) ∈ [[grow whiskers]] or that (21) is true if and only if there exist a group x and two distinct atoms y, z such that y ≤ x, z ≤ x and x ∈ [[be injured]]. To get around this difficulty, Link introduces the operator *: given any predicate P, *P is a new predicate whose extension includes not only the extension of P but also all those groups whose atomic parts are in the extension of P. So for example [[*grow whiskers]] contains all the individuals who grow whiskers along with all the groups made up of individuals who grow whiskers, and [[*be injured]] contains all the individuals who are injured along with all the groups made up of individuals who are injured.
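Under the toy encoding above (groups as frozensets, ⊕ as union), the * operator amounts to closing a predicate's extension under ⊕. A sketch, with the extension again invented for illustration:

```python
def star(ext):
    """*P: the closure of [[P]] under the join (here, set union), so that a
    group belongs to [[*P]] whenever all of its atomic parts belong to [[P]]."""
    closed = set(ext)
    changed = True
    while changed:
        changed = False
        for x in list(closed):
            for y in list(closed):
                if x | y not in closed:
                    closed.add(x | y)
                    changed = True
    return closed

# A distributive predicate: only atoms grow whiskers.
grow_whiskers = {frozenset({"a"}), frozenset({"b"}), frozenset({"c"})}
all_conspirators = frozenset({"a", "b", "c"})

# The group of all the conspirators is not in the unstarred extension,
# but it is in the starred one:
print(all_conspirators in grow_whiskers)        # False
print(all_conspirators in star(grow_whiskers))  # True
```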
It now becomes possible to analyze (20) and (21) in the following way: we attribute to the conspirators and some demonstrators the same denotations that they have in (18) and (19) but we resort to a variant of schema (2) in which we replace the condition [[VP]] ∈ [[NP]] by [[*VP]] ∈ [[NP]].

We now come to branching quantification. This rather complex topic found itself at the center of a lively debate in the sixties, due to the original and controversial theses of Jaakko Hintikka.[22] I won’t go into the details, but will simply recall that there was disagreement over the status of sentences like some relative of each villager and some relative of each townsman hate each other and some book by every author is referred to in some essay by every critic: some researchers maintained that sentences of this kind could be translated adequately into a first order language; Hintikka, by contrast, considered these sentences to exemplify a kind of quantification different from the usual kind, a kind of quantification called branching quantification.

[22] Cf. Hintikka 1973 and the other texts reprinted in Saarinen 1979.

This debate reached at least a tentative resolution with the publication of an important article by Barwise 1979.[23] The sentences that Hintikka had used in support of his position were somewhat complicated and problematic, and Barwise claimed that there are examples that show more convincingly that natural language contains branching quantification. Consider the following sentences:

(22)

a. more than half of the boys danced with more than half of the girls
b. fewer than five boys danced with fewer than ten girls
c. four boys danced with five girls.

Each of these three sentences admits an interpretation that is, so to speak, unsurprising, where the second NP is within the scope of the first or vice versa.[24] But Barwise maintains (rather plausibly, it seems) that the three sentences also admit an interpretation of another kind–a branching interpretation, whose content can be specified as follows:

(23)

a. there exist a set X and a set Y such that (i) X contains more than half of the boys, (ii) Y contains more than half of the girls, (iii) if x ∈ X and y ∈ Y, then x danced with y.
b. there exist a set X and a set Y such that (i) X contains fewer than five boys, (ii) Y contains fewer than ten girls, (iii) if x danced with y, then x ∈ X and y ∈ Y.
c. there exist a set X and a set Y such that (i) X contains exactly four boys, (ii) Y contains exactly five girls, (iii) if x

[23] To this date, this article by Barwise constitutes the best introduction to the theme of branching quantification in natural language. As for the logico-mathematical side, Mundici 1985 offers a useful overview.

[24] The interpretation on which the first NP is outscoped by the second is perhaps less natural, but I don’t think it should be considered as excluded. In order to account for it, one has to complicate slightly the picture sketched in §2. [Translator’s note. The sentences Casalegno considers in the original text are the Italian sentences Più della metà dei ragazzi ha ballato con più della metà delle ragazze, Meno di cinque ragazzi hanno ballato con meno di dieci ragazze, and Quattro ragazzi hanno ballato con cinque ragazze.]


∈ X and y ∈ Y, then x danced with y, (iv) if x danced with y, then x ∈ X and y ∈ Y.[25]

It is clear what relation there is between the branching interpretation of (22a, b, c) and their “normal” interpretations (where one of the two NPs is in the scope of the other): neither of the two normal interpretations entails the branching interpretation, while the branching interpretation entails both of the normal interpretations. At this point the reader will ask himself what the general formula is for determining the truth conditions of sentences that admit the branching interpretation. Barwise’s own article doesn’t provide an answer to this question–Barwise only discusses some significant classes of examples. However, there is an answer in another of the articles included in GQ, namely, Westerståhl 1987. I will not illustrate Westerståhl’s proposal in detail: it is rather complex and perhaps not to be regarded as definitive. Instead, I would like to draw on this work to introduce a problem that seems to me particularly interesting and to which it seems to me worthwhile dedicating the remainder of this section. It is the problem that Westerståhl alludes to at the very end of his article, where he mentions that some researchers take the branching interpretation of quantifiers to be reducible to the collective interpretation (Westerståhl notes that he himself dissents from this position). While I do not find any explicit formulation of this thesis in the literature on the topic, it is not so hard to reconstruct the terms of the discussion. If we wanted to maintain that the branching interpretation reduces to the collective interpretation, we could proceed in two stages, so to speak. We could first try to show that the branching interpretation is reducible to what I will call, for lack of a better term, the “independent interpretation”; and we could then go on to claim that the independent interpretation has no true status of its own, but dissolves into the collective interpretation.
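These relations between the readings can be illustrated (though of course not proved) on small finite models. The sketch below checks the branching reading (23c) of (22c) against one scoped reading, spelled out on an “exactly” construal; that construal, and the model-checking functions and data, are simplifying assumptions made for illustration:

```python
from itertools import combinations

boys = ["b1", "b2", "b3", "b4", "b5"]
girls = ["g1", "g2", "g3", "g4", "g5", "g6"]

def branching(danced):
    """(23c): for some four boys X and five girls Y,
    every pair in X x Y danced and no other pair did."""
    return any({(x, y) for x in X for y in Y} == danced
               for X in combinations(boys, 4)
               for Y in combinations(girls, 5))

def subject_wide(danced):
    """One scoped reading, on an 'exactly' construal: exactly four boys
    danced, and each of them danced with exactly five girls."""
    dancers = {x for (x, _) in danced}
    return (len(dancers) == 4 and
            all(len({y for (x2, y) in danced if x2 == x}) == 5
                for x in dancers))

# A model where the branching reading (and with it the scoped one) holds:
D1 = {(x, y) for x in boys[:4] for y in girls[:5]}
print(branching(D1), subject_wide(D1))   # True True

# A model where the scoped reading holds but the branching one fails:
# the four dancers are spread over six girls, so no 4 x 5 rectangle fits.
D2 = {(b, g)
      for b, gs in [("b1", girls[:5]), ("b2", girls[1:6]),
                    ("b3", girls[:5]), ("b4", girls[1:6])]
      for g in gs}
print(branching(D2), subject_wide(D2))   # False True
```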
To clarify this sketch, we can again consider the examples in (22). So as not to overly complicate matters, let us concentrate on (22c). Let us imagine a situation in which four boys (call them a1, …, a4) and five girls (call them b1, …, b5) have danced; let us imagine moreover that the dancing couples made up of one boy and one girl are the couples (ai, bi) where i goes from 1 to 4, as well as the couples (a4, b3) and (a4, b5). Let us now ask what truth value (22c) has in this situation, which we will call “Situation S”. Taking into account the interpretations we have considered until now, the answer is simple: in Situation S, (22c) is false both if it is interpreted with one NP in the scope of the other and if it is interpreted as in (23c). Yet it is clear that the sentence in question can also be used in a way that makes it true in Situation S, and this means that it must admit an interpretation that we have not yet taken into account. What is the content of this new interpretation? One possibility is that it corresponds to the following paraphrase:

(24) there exist a set X and a set Y such that (i) X contains exactly four elements, (ii) Y contains exactly five elements, (iii) the elements of X are boys, (iv) the elements of Y are girls, (v) for every x ∈ X there exists a y ∈ Y such that x danced with y, (vi) for every y ∈ Y there exists an x ∈ X such that x danced with y.

This is the kind of interpretation that I alluded to above when I spoke of an “independent interpretation”.[26] It is easy to see that, interpreted in this way, (22c) comes out as true in Situation S, as desired. Once we admit the existence of the independent interpretation, however, the suspicion might arise that, on balance, the attribution of interpretation (23c) to (22c) is superfluous. Obviously, any situation that verifies (23c) verifies (24) at the same time. One might be tempted to conclude from this that the existence of the branching interpretation is in fact an illusion that arises from narrow-mindedly considering only some of the situations that verify the independent interpretation: one might conjecture that all the apparent cases of branching quantification in natural language will turn out on closer examination to be cases of the independent interpretation.

[25] To be honest, Barwise 1979 does not consider the branching reading of sentences like (22c), but only that of sentences like (22a) and (22b) (in technical terms, he considers the branching reading of sentences whose NPs are monotone increasing or monotone decreasing). The first person to explicitly bring up the fact that a sentence like (22c) can be associated with the interpretation (23c) was van Benthem, it seems.
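The contrast between (23c) and (24) in Situation S can be checked mechanically. A sketch, with the dancing couples hard-coded as in the text:

```python
from itertools import combinations

boys = ["a1", "a2", "a3", "a4"]
girls = ["b1", "b2", "b3", "b4", "b5"]

# Situation S: the couples (ai, bi) for i = 1..4, plus (a4, b3) and (a4, b5).
S = {(f"a{i}", f"b{i}") for i in range(1, 5)} | {("a4", "b3"), ("a4", "b5")}

def branching_23c(danced):
    """(23c): danced coincides with X x Y for some four boys X, five girls Y."""
    return any({(x, y) for x in X for y in Y} == danced
               for X in combinations(boys, 4)
               for Y in combinations(girls, 5))

def independent_24(danced):
    """(24): for some four boys X and five girls Y, every x in X danced
    with some y in Y, and every y in Y danced with some x in X."""
    return any(all(any((x, y) in danced for y in Y) for x in X) and
               all(any((x, y) in danced for x in X) for y in Y)
               for X in combinations(boys, 4)
               for Y in combinations(girls, 5))

print(branching_23c(S), independent_24(S))   # False True
```

As the text says, Situation S verifies the independent paraphrase (24) while falsifying the branching reading (23c).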
This is what I had in mind when I said that one might propose that the branching interpretation reduces to the independent interpretation. Let us now try to understand what route one could take in order to try to eliminate the independent interpretation in favor of the collective interpretation. An attempt of this kind is made by Jan Lønning 1987, so we can refer to this work. Lønning essentially adopts Link’s theoretical framework, in which the independent interpretation of (22c) can be reformulated as in (25) below. The difference with respect to (24) is that here the sets are replaced by groups of individuals:

(25) ∃x∃y(four boys(x) & five girls(y) & ∀u(u* ≤ x → ∃z(z* ≤ y & danced with(u, z))) & ∀z(z* ≤ y → ∃u(u* ≤ x & danced with(u, z)))).

But Lønning claims that it would be out of place to assign these truth conditions to (22c). In his view, the interpretation of (22c) that makes it true in Situation S has, so to speak, a less complex, less articulated structure. Here, concretely, is what Lønning proposes:

(26) ∃x∃y(four boys(x) & five girls(y) & danced with(x, y))

According to Lønning, (22c) can be used to assert that the relation expressed by dance with holds between two groups of individuals, regardless of the way in which this same relation connects the members of one group and the members of the other. Stated differently: (22c) is able to provide a correct description of situations like Situation S simply because these situations can be conceived of as events in which a group of four boys and a group of five girls are collectively involved. It is in this sense that Lønning eliminates (or at least attempts to eliminate) the independent interpretation in favor of the collective interpretation. What reasons are there for preferring (26) to (25)? To begin with, Lønning holds that one ought to aim for maximal simplicity in the logical analysis of natural language, and (26) is obviously simpler than (25). Lønning emphasizes that, once we have introduced the tools that we need in any event in order to account for the collective use of quantifiers, the assignment of interpretation (26) to (22c) doesn’t create any problems, while (25) would bring with it further complications. A second argument is the following. Suppose that there is a group x of four boys and a group y of five girls, that every boy in x danced with some girl in y, that every girl in y danced with some boy in x, but that, nonetheless, the performances of the couples in question are so unconnected with each other as to prevent them from being viewed as constituting together a single event.

[26] Cf. Carlson 1982 and Schein 1987.
In this kind of situation, says Lønning, the use of (22c) would be inappropriate even though the truth conditions of (25) are satisfied. This difficulty does not arise with the paraphrase in (26): one can assume that two groups x and y are in the extension of dance with only if x and y are involved in a single event. There is then a third argument adduced by Lønning in support of his proposal. Consider

(27) three boys ate four cakes.


If we attribute to (22c) the interpretation in (25), then we should attribute to (27) the interpretation

(28) ∃x∃y(three boys(x) & four cakes(y) & ∀u(u* ≤ x → ∃z(z* ≤ y & ate(u, z))) & ∀z(z* ≤ y → ∃u(u* ≤ x & ate(u, z)))).

At first glance, it might seem that there is no important difference between the two examples. The fact is, however, that dance with and eat express relations of a different nature. If Giuseppe and Stefano danced with Daniela, then Giuseppe danced with Daniela and Stefano danced with Daniela. But if Giuseppe and Stefano ate, one forkful at a time, the cake that was on the table, we can certainly not infer that Giuseppe ate the cake that was on the table and that Stefano ate the cake that was on the table. Well, imagine now a situation in which Giuseppe, Stefano and Mario ate cakes t1, t2 and t3 respectively, and in which moreover Giuseppe and Stefano together ate a fourth cake t4. In a case of this kind, observes Lønning, sentence (27) can be considered true while (28) is false, given that there is no boy of whom we can say that he individually ate cake t4. Therefore, whoever is not satisfied with an analysis of (27) along the lines of (26) will have to either modify (28) or assume that, beyond the interpretation represented by (28), there is yet another interpretation of (27), one that makes (27) true in the situation just described. But, according to Lønning, both these choices involve intolerable complications. With the (brief but I hope faithful) presentation of these arguments of Lønning’s, we have reached the end of the path that we wished to explore: we have seen how one can deny the existence of a distinct branching reading for quantifiers by privileging the role of the independent reading; and we have seen–precisely by considering Lønning–how one can call into question the autonomous existence of the independent reading itself and reduce it to a special case of the collective reading. The moment has now come for an assessment.
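The cake scenario, too, can be verified on a toy model: with groups encoded as frozensets and ≤ as inclusion (an encoding assumed for illustration, with eat distributive on its second argument only), (28) comes out false while the weakened condition of (30) below comes out true.

```python
from itertools import chain, combinations

def parts(group):
    """All non-empty subgroups v <= group."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(group), n) for n in range(1, len(group) + 1))]

x = frozenset({"Giuseppe", "Stefano", "Mario"})   # the group of three boys
y = frozenset({"t1", "t2", "t3", "t4"})           # the group of four cakes

# Who ate what: the eater may be a group, the eaten cake is an atom.
ate = {(frozenset({"Giuseppe"}), "t1"),
       (frozenset({"Stefano"}), "t2"),
       (frozenset({"Mario"}), "t3"),
       (frozenset({"Giuseppe", "Stefano"}), "t4")}   # eaten jointly

# (28): every boy individually ate some cake, and every cake was eaten
# by some boy individually.  Fails: no single boy ate t4.
truth_28 = (all(any((frozenset({u}), z) in ate for z in y) for u in x) and
            all(any((frozenset({u}), z) in ate for u in x) for z in y))

# (30): every boy is part of some subgroup of x that ate some cake, and
# every cake was eaten by some subgroup of x.
truth_30 = (all(any(u in v and (v, z) in ate
                    for v in parts(x) for z in y)
                for u in x) and
            all(any((v, z) in ate for v in parts(x)) for z in y))

print(truth_28, truth_30)   # False True
```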
Is the chain of reasoning that I have tried to reconstruct really so solid? I think not–in my judgment, the matter requires further consideration. Let us start from the first point: the reduction of the branching reading to the independent reading. There is no doubt that many of the alleged examples of branching quantification that one finds in the literature can be reinterpreted as examples of the independent reading. But there are also sentences for which the branching reading can be spotted at a glance. While (22c) is a dubious case, the same is certainly not true for


(29) each of four boys danced with each of five girls.[27]

It is an indisputable fact that we can attribute to (29) the truth conditions specified in (23c), and so, if (23c) constitutes a typical case of branching quantification, it must be concluded that branching quantification is truly present in natural language after all. At this point, it becomes rather irrelevant to determine whether (22c) admits the interpretation corresponding to (23c) over and above the interpretation corresponding to (24) (or to (25)). And now the second point: the reduction of the independent reading to the collective reading. As I said, Lønning is tenaciously attached to the idea that semantic analysis ought to be conducted in the simplest possible terms, and this is one of the fundamental reasons for the fact that he considers it worthwhile to deny the autonomy of the independent reading. But on reflection, this is quite a fragile reason. The degree of complexity of semantic analysis cannot be fixed a priori: it can depend only on the degree of complexity of the linguistic facts to be explained. Now, it is my impression that Lønning, in the name of simplicity, allows himself to be swayed into providing an arbitrarily simplified picture of the facts. Here is an example that I hope serves to clarify what I have in mind. Consider our earlier observation about dance with: if Giuseppe and Stefano danced with Daniela, then Giuseppe danced with Daniela and Stefano danced with Daniela. The most natural way to account for this (while staying within a Linkian framework) is to say that the relation denoted by dance with is a relation between “atomic” individuals. Extending the terminology that Link uses for unary predicates, we could say that dance with is a “distributive” binary predicate. But this way of proceeding is obviously incompatible with Lønning’s approach.
For a formula like (26) to make sense, the relation dance with must be able to connect “groups” of atomic individuals as well, which precisely excludes the possibility of treating dance with as a distributive predicate. By contrast, the distributivity of this predicate fits perfectly with the attribution of interpretation (25) to (22c). These remarks, however, do not yet furnish a response to the specific arguments adduced by Lønning to demonstrate the inadequacy of (25). Recall that one of these arguments is based on the assumption that a sentence like (22c) can be used appropriately only to describe a portion of reality that can be thought of as a single event. This is a correct intuition, I think, but in the present context it has no significance. The fact is that any portion of reality can be conceived of as constituting a single event once one takes the right point of view. Instead of saying that (22c) can be used appropriately only if a certain portion of reality constitutes a single event, I would say rather that the appropriate use of (22c) has the effect of carving out a certain portion of reality as a single event. While I am aware that this issue requires more extensive discussion, I won’t elaborate here. It remains to examine the argument of Lønning’s that turns on example (27). Lønning is certainly right when he points out the impossibility of analyzing (27) in a way that corresponds to (28). At the same time, all it takes to obtain an adequate analysis is a minimal modification of (28):

(30) ∃x∃y(three boys(x) & four cakes(y) & ∀u(u* ≤ x → ∃v∃z(u* ≤ v & v ≤ x & z* ≤ y & ate(v, z))) & ∀z(z* ≤ y → ∃v(v ≤ x & ate(v, z)))).[28]

It is easy to verify that, in the situation described above that makes (28) false, (30) comes out as true. Notice that, if we do not adopt an analysis of this kind but instead fall back on the analysis proposed by Lønning, we find ourselves with a problem for eat analogous to the problem that we brought up earlier for dance with. If Giuseppe ate the apple cake and the cherry cake, then Giuseppe ate the apple cake and Giuseppe ate the cherry cake. It seems then that the relation expressed by eat holds between x and y only if y is an atomic individual: eat is, so to speak, distributive on its second argument. But adopting an analysis à la Lønning would prevent us from following this intuition and would force us to say that the extension of eat includes pairs ⟨x, y⟩ where x and y are both groups. But, accepting that (25) and (30) are adequate analyses of (22c) and (27), how precisely do we arrive at these interpretations? (25) and (30) are different from one another, as we have seen, and it is possible that other examples will require still different analyses.

[27] [Translator’s note. The Italian example that Casalegno uses here is Quattro ragazzi hanno ballato ciascuno con ciascuna di cinque ragazze.]
One idea (valid at least for the simplest cases) might be the following. Link defines the operator * only for unary predicates; but it is not unreasonable to extend the domain of application of *, stipulating that, if R is a binary relation, [[*R]] is the set of pairs ⟨x, y⟩ with the following properties: (i) for every u* ≤ x there exist a v ≤ x and a w ≤ y such that u* ≤ v and ⟨v, w⟩ ∈ [[R]]; (ii) for every u* ≤ y there exist a v ≤ x and a w ≤ y such that u* ≤ w and ⟨v, w⟩ ∈ [[R]]. Once we stipulate this, we can assume that the independent interpretations of (22c) and (27) correspond to ∃x∃y(four boys(x) & five girls(y) & *danced with(x, y)) and ∃x∃y(three boys(x) & four cakes(y) & *ate(x, y)). If we compare these analyses with those suggested by Lønning, at first glance the difference looks negligible; but if dance with and eat are interpreted in the way we considered above (that is, as a distributive predicate and as a predicate distributive on its second argument, respectively), these two paraphrases turn out to be exactly equivalent to (25) and (30). When we proceed to analyze other sentences analogously, we find that we always obtain an acceptable interpretation.

I do not intend these slight and hasty remarks to be taken as a solution to the problem of the independent interpretation of quantifiers. The goal was just to show how, once one starts to dedicate some thought to the topic, one finds it altogether natural to enrich and articulate the decidedly reductive picture proposed by Lønning. Before concluding, I would like to touch on an aspect of an issue that Lønning completely ignores: cases of independent quantification in sentences that contain more than two NPs. Consider for example

(31) three detectives solved two cases for two well-known agencies.[29]

(31) can be understood in such a way that each of the three NPs is independent of the other two; or in such a way that three detectives and two well-known agencies are independent of each other and two cases is in the scope of three detectives; or in such a way that three detectives and two well-known agencies are independent of each other and two cases is in the scope of two well-known agencies; and so forth. It does not seem to me that the approach outlined by Lønning offers any useful starting point for the treatment of sentences like (31). But I have to add that I do not know any truly satisfying treatment of examples of this kind.[30] This could turn out in the future to be one of the more interesting domains for the development of the study of quantification in natural language.

(Translated by Orin Percus)

[28] This is in essence the independent interpretation of (27) as Carlson 1982 and Schein 1987 conceive it.

[29] [Translator’s note. Casalegno’s original example is Tre detectives hanno risolto due casi per conto di due agenzie molto note. In my own judgment, the English counterpart does not lend itself as readily to the same variety of readings. At the same time, some of the less accessible interpretations can be brought out by the judicious insertion of each.]

[30] The credit for having drawn attention to sentences like (31) goes to B. Schein. At the same time, Schein’s own analysis of these sentences seems flawed to me. For example, the complex technical apparatus deployed in Schein 1987 does not account–it seems to me–for the second of the three interpretations of (31) mentioned in the text.

CHAPTER II

ONLY: ASSOCIATION WITH FOCUS IN EVENT SEMANTICS

(WRITTEN WITH ANDREA BONOMI)

…For a child knows that logic and meaning are only nothing nothing screening… — F. Pessoa

1. Introduction

Only is a typical example of a word requiring association with a focus. The interpretation of a sentence containing only cannot be determined unless only has been associated with a focused expression, and in general, different choices of the focused expression correspond to different interpretations of the sentence. Consider sentence (1):

(1) John only kissed Mary.

Here the focused expression can be Mary, kissed Mary, or kissed. These three possibilities can be represented as follows:

(2) John only kissed [Mary]F.

We wish to express our gratitude to Arnim von Stechow, who has wisely alternated words of encouragement and useful criticisms. We are indebted to Gennaro Chierchia, Irene Heim, Angelika Kratzer, and Manfred Krifka for their comments and suggestions. Previous versions of this paper were presented at a workshop on reference held in Padua in June 1991, and at a workshop on focus which was part of the activities of the Third European Summer School in Logic, Language and Information (Saarbrücken, August 1991); we thank the participants for their reactions.


(3) John only [kissed Mary]F.
(4) John only [kissed]F Mary.[1]

(2), (3), and (4) correspond to interpretations (5), (6), and (7) respectively:

(5) John kissed Mary and nobody else.
(6) John did nothing but kiss Mary.
(7) John did nothing to Mary but kiss her.

The nonequivalence of these three interpretations is obvious. Notice that the truth conditions expressed by (5), (6), and (7) can have little plausibility if certain restrictions determined by the context are not taken into account. For instance, when we use (1) in the sense of (5), what we are likely to mean is not that Mary is the only person ever kissed by John, but rather that John did not kiss anybody else in a certain situation, or in a certain restricted range of situations, specified by the context. As to (6) and (7), it is clear that, literally taken, they are always false. Consider (6): even assuming that we are confining our attention to what John did on a given occasion, it is clear that he must have done something else besides kissing Mary. Nevertheless, if John did nothing relevant besides kissing Mary, the use of (1) is perfectly appropriate. Of course, what counts as relevant depends on the context. Similar remarks apply to (7). We have nothing interesting to say here on the role played by the context in the interpretation of the sentences containing only. Our aim is simply to explain how paraphrases such as (5), (6), and (7) can be systematically correlated with structures such as (2), (3), and (4). In the next sections we shall present an analysis of only in terms of event semantics. We find this analysis simple and natural, and it seems to us that, without events, no equally good analysis would be possible. Some motivation for the use of events is provided in this introduction.[2]

[1] There is a sharp difference in the way (2) and (3) on the one hand and (4) on the other are pronounced: in (2) and (3) the main stress is on Mary, whereas in (4) it is on kissed.

[2] To a large extent, the discussion contained in the present section was prompted by some penetrating remarks of Irene Heim and Angelika Kratzer.

The first thing to be emphasized is that the class of expressions with which only can be associated is quite large: it includes not only proper names, transitive verbs, and verbal phrases (as is shown by (2)-(4)), but also complex noun phrases, determiners, common nouns, adverbs, etc. Now, some of these expressions raise serious difficulties for the analyses of only proposed so far. The most blatant case perhaps is that of NPs: as far as we know, a fully adequate treatment of the association of only with focused NPs has never appeared in the literature. So we shall proceed as follows. We shall discuss the topic of NPs at some length, trying to make the difficulties explicit and examining different possible lines of approach. It will turn out that if we restrict our attention to expressions of the form ‘only [α]F’, where α is a NP, then the difficulties can be overcome (at least to a large extent) without any recourse to events. But obviously expressions of this form cannot be treated independently of other kinds of occurrence of only, and we shall try to convince the reader that the need for events arises at this point. To introduce the problem concerning NPs, let us consider the analysis of only developed by Rooth (1985) in the framework of his well-known theory of focus (but most of what we are going to say also applies to the analysis in terms of structured meanings proposed by von Stechow (1988, 1991) and Krifka (1991); see also Kratzer (1991) and Rooth (1992)). Rooth’s analysis works nicely when the focused expression is a proper name, but its extension to other NPs is not so obvious. Consider a simple sentence like (8):

(8) Only [John]F cried.

The analysis proposed by Rooth is more or less the following:

(9) For every a belonging to the set of alternatives determined by [John]F, a satisfies ‘cried(x)’ if and only if a = [[John]] (where [[John]] is the denotation of John).

The set of alternatives determined by [John]F is the set of objects whose type is the type of [[John]].
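When the alternatives are individuals, condition (9) has a simple extensional rendering: every alternative satisfies cried exactly when it is John. A toy sketch, with the domain and extensions invented for illustration:

```python
# Toy domain of individual alternatives to [John]F, and the set of criers.
domain = {"John", "Mary", "Bill"}
cried = {"John"}

# (9): for every alternative a, a satisfies 'cried(x)' iff a = [[John]].
only_john_cried = all((a in cried) == (a == "John") for a in domain)
print(only_john_cried)   # True

# If Mary also cried, (8) comes out false, as expected:
cried_2 = {"John", "Mary"}
print(all((a in cried_2) == (a == "John") for a in domain))   # False
```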
Thus, if proper names are taken to denote individuals, (9) amounts to saying that an individual a satisfies ‘cried(x)’ if and only if a is John, which is an obviously correct way of expressing the truth conditions of (8). So far so good. But now consider (10):

(10) Only [two boys]F cried.

An analysis of (10) similar to (9) would be:


(11) For every Q belonging to the set of alternatives determined by [two boys]F, Q satisfies ‘X(cried)’ if and only if Q = two boys . Here the set of alternatives determined by [two boys]F is the set of objects whose type is the type of two boys , i.e. (assuming that we are treating two boys as a generalized quantifier) the set of sets of sets of individuals. But such an analysis of (10) would be unacceptable: the condition stated in (11), far from capturing the content of (10), can never be fulfilled.3 So the cases in which only is associated with complex NPs represent a serious difficulty for Rooth’s analysis.4 Is there any way out? If we are willing to introduce suitable modifications in Rooth’s analysis, can the difficulty be eliminated? Instead of tackling these questions directly, we prefer to consider a slightly different problem: is it possible to define an operator O mapping sets of sets of individuals into sets of sets of individuals such that for every NP Ƚ, only [Ƚ]F can be identified with O( Ƚ )? An observation due to Groenendijk and Stokhof (1990) implies a negative answer. 5 The observation is that, at first sight, only appears to be “nonfunctional”: there are pairs of NPs Ƚ and Ⱦ such that Ƚ and Ⱦ seem to have the same denotation, and nevertheless the denotations of ‘only [Ƚ]F’ and ‘only [Ⱦ]F’ are different. We call this problem the “nonfunctionality puzzle.” Let us illustrate it for the pair of NPs a boy and one or more boys. It is commonly assumed that a boy = one or more boys = {X | X is a set of individuals 3

3 Suppose ⟦two boys⟧ satisfies ‘X(cried)’, i.e. Two boys cried is true. Then One or more boys cried is true as well, which means that ‘X(cried)’ is also satisfied by ⟦one or more boys⟧. Since ⟦one or more boys⟧ is a set of sets of individuals distinct from ⟦two boys⟧, we are forced to conclude that (11) does not hold. (This argument is taken from von Stechow (1988).)

4 It is perhaps worth pointing out that to eliminate the difficulty, it is not enough to say that the set of alternatives must always be regarded as suitably restricted by the context. Take the following example: Mary doesn’t like reading. She has read only [two detective stories]F. Suppose the context in which this is said makes it clear that we are interested only in the books read by Mary. This means that the set of relevant alternatives does not contain the quantifiers denoted by NPs such as John’s address in the telephone directory or the opening instructions on a can of beans. But the quantifiers denoted, say, by two or three detective stories, a few detective stories, or two novels by Agatha Christie are certainly not ruled out, and this suffices to raise the difficulty illustrated in the text.

5 Actually, the problem discussed by Groenendijk and Stokhof is not that of only, but that of the exhaustiveness condition expressed by some answers (see section 4 below). The two problems are closely related, however; so we take the liberty, here and in the following, of reconstructing their line of reasoning as referring to only.

Only: Association With Focus in Event Semantics


containing at least one boy}. Therefore, no matter how we choose the operator O, we have O(⟦a boy⟧) = O(⟦one or more boys⟧), and if we use O to specify the denotation of ‘only [α]F’ when α is a NP, we are forced to assign the same denotation to only [a boy]F and to only [one or more boys]F. But this is wrong, as the following examples illustrate:

(12) Only [a boy]F cried.

(13) Only [one or more boys]F cried.

These sentences clearly have different meanings: (12) entails that the boy who cried is unique, whereas (13) does not. A similar difficulty arises with pairs of NPs such as two boys and two or more boys, three boys and three or more boys, etc. As a natural reaction to the nonfunctionality puzzle, one can question the assumption that the two NPs of each problematic pair have the same denotation. The trouble is that to assign them denotations which are distinct yet reasonable is not so easy. For instance, we could distinguish between a boy and one or more boys by treating the former as if it were synonymous with exactly one boy, but this would clearly be a mistake.

According to Groenendijk and Stokhof, the puzzle can be solved if we assume (as many people have done) that a proper semantic treatment of NPs requires groups.6 The idea is roughly the following. We extend the universe of discourse by including in it not only ordinary individuals but also every group of individuals. A group consisting of exactly one individual is identified with the individual in question. A predicate denotes a subset of the universe of discourse closed under union of groups. A NP denotes a set of sets of groups. For example, as the denotations of a boy and one or more boys we can now take {X | X contains a (group consisting of exactly one) boy} and {X | X contains a group consisting of one or more boys}, respectively. This choice can be justified as follows. A boy and one or more boys are interchangeable when the predicate is distributive, not when it is collective. (A boy cried has the same truth conditions as One or more boys cried, but A boy lifted the stone

6 This is the central idea of Link’s algebraic semantics (see Link 1983). The reader should bear in mind that in this paper, we use the term “group” in the sense in which Link uses “plural individual” or “sum”. The word “group” has been employed by other authors to mean something else (see, for instance, Landman 1989 and Schwarzschild 1992). The distinction between “sums” and “groups” (in the more specialized sense of the word) is also relevant to the analysis of only, but discussion of this point would lead us too far.


can be false even when One or more boys lifted the stone is true.) Now, if quantifiers are conceived of as sets of sets of individuals, no account of collectivity is possible: all we can do is restrict ourselves to distributive contexts, take note that in those contexts a boy and one or more boys are interchangeable, and assign them the same denotation. With groups things are different. Let us suppose that a boy and one or more boys denote the sets of sets of groups specified above; then the fact that the denotation of a distributive predicate such as cried cannot contain a group unless it also contains its individual members explains why A boy cried and One or more boys cried are equivalent, and the fact that the denotation of a collective predicate such as lifted the stone can in fact contain a group without containing all its individual members explains why A boy lifted the stone and One or more boys lifted the stone are not equivalent. So now we have different denotations for a boy on the one hand and for one or more boys on the other. Is this sufficient to account for the difference between only [a boy]F and only [one or more boys]F? Can we find an operator O from sets of sets of groups into sets of sets of groups such that ⟦only [a boy]F⟧ = O(⟦a boy⟧) and ⟦only [one or more boys]F⟧ = O(⟦one or more boys⟧)? Groenendijk and Stokhof’s suggestion is that we take O = EXH, where EXH(Q) = {X | X ∈ Q and there is no proper subset Y of X such that Y ∈ Q} for every quantifier Q. At first sight, this suggestion is correct. EXH(⟦a boy⟧) only contains singletons of boys; therefore, if (12) means that the denotation of cried belongs to EXH(⟦a boy⟧), (12) cannot be true unless the boy who cried is unique. At the same time, if (13) means that the denotation of cried belongs to EXH(⟦one or more boys⟧), (13) can be true even if the boys who cried are more than one, for EXH(⟦one or more boys⟧) contains singletons of groups of boys.
The fact that EXH(⟦a boy⟧) is properly included in EXH(⟦one or more boys⟧) seems to explain why (12) entails (13) but not conversely. A moment’s reflection shows, however, that this solution to the nonfunctionality puzzle does not work. The reason is that the truth conditions assigned to sentences like (13) are not the right ones. Let us imagine a situation in which two boys—say, John and Peter—cried, and nobody else did. In such a situation (13) is intuitively true, but Groenendijk and Stokhof’s analysis in terms of EXH makes it false. The analysis says that (13) is to be counted true if and only if ⟦cried⟧ is the singleton of a group of one or more boys, but in the situation we are imagining, ⟦cried⟧ contains three different elements: John, Peter, and the group made up of John and Peter. So EXH is not what we need. A better choice would be an operator O defined as follows. For every set of sets of groups Q, let Q+ = {g | {g} ∈ Q} and


(14) O(Q) = {X | there is an h ∈ Q+ such that h ∈ X and g is a subgroup of h for every g ∈ X}

If we now take ⟦only [a boy]F⟧ = O(⟦a boy⟧) and ⟦only [one or more boys]F⟧ = O(⟦one or more boys⟧), the difficulty faced by Groenendijk and Stokhof’s proposal is avoided. Let us check that this is so. O(⟦a boy⟧) is easily seen to coincide with EXH(⟦a boy⟧), for O(⟦a boy⟧) = {X | there is a (group consisting of exactly one) boy h such that h ∈ X and g is a subgroup of h for every g ∈ X} = {{h} | h is (a group consisting of exactly one) boy}. On the other hand, we have O(⟦one or more boys⟧) = {X | there is a group h of one or more boys such that h ∈ X and g is a subgroup of h for every g ∈ X}. Now, what happens in the situation described above, i.e. in the situation in which John and Peter cried and nobody else did? As we have already said, in such a circumstance ⟦cried⟧ contains John, Peter, and the group whose members are John and Peter. Let h be the group of John and Peter. We have that (i) h is a group of two boys; (ii) h ∈ ⟦cried⟧; (iii) g is a subgroup of h for every g ∈ ⟦cried⟧ (since the only elements of ⟦cried⟧ other than h are (the groups consisting of) John and Peter). It follows that ⟦cried⟧ is contained in O(⟦one or more boys⟧), and if O(⟦one or more boys⟧) is identified with ⟦only [one or more boys]F⟧, (13) turns out to be true, as required. It is easy to see that the use of O also provides a satisfactory treatment of the other problematic pairs mentioned in the formulation of the nonfunctionality puzzle: for example, if we let ⟦two boys⟧ = {X | X contains a group of exactly two boys} and ⟦two or more boys⟧ = {X | X contains a group of two or more boys}, we are then entitled to identify ⟦only [two boys]F⟧ and ⟦only [two or more boys]F⟧ with O(⟦two boys⟧) and O(⟦two or more boys⟧), respectively. Does an analysis of ‘only [α]F’ in terms of O work for every NP α? The reader can check by herself that for many NPs, such an analysis is indeed appropriate.
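The contrast between EXH and the operator O of (14) can be checked mechanically. The following is a small sketch of our own, not part of the paper: a toy universe of three individuals, with groups modeled as nonempty frozensets and quantifiers as sets of sets of groups; the names and helper functions are assumptions made purely for illustration.

```python
from itertools import combinations

INDIVIDUALS = {"john", "peter", "mary"}
BOYS = {"john", "peter"}

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Groups are nonempty sets of individuals; an individual is identified
# with the group whose only member is that individual.
GROUPS = [g for g in powerset(INDIVIDUALS) if g]
# A quantifier denotes a set of sets of groups.
SETS_OF_GROUPS = [frozenset(x) for x in powerset(GROUPS)]

def is_boy_group(g):
    return bool(g) and g <= BOYS

# ⟦a boy⟧ = {X | X contains a (group consisting of exactly one) boy}
A_BOY = {X for X in SETS_OF_GROUPS
         if any(is_boy_group(g) and len(g) == 1 for g in X)}
# ⟦one or more boys⟧ = {X | X contains a group of one or more boys}
ONE_OR_MORE_BOYS = {X for X in SETS_OF_GROUPS
                    if any(is_boy_group(g) for g in X)}

def EXH(Q):
    # Groenendijk and Stokhof's suggestion: keep the minimal elements of Q.
    return {X for X in Q if not any(Y < X for Y in Q)}

def O14(Q):
    # The operator O of (14), with Q+ = {g | {g} is in Q}.
    Q_plus = {g for g in GROUPS if frozenset({g}) in Q}
    return {X for X in SETS_OF_GROUPS
            if any(h in X and all(g <= h for g in X) for h in Q_plus)}

# Situation: John and Peter cried, and nobody else did. The denotation of
# a distributive predicate is closed under union of groups.
john, peter = frozenset({"john"}), frozenset({"peter"})
cried = frozenset({john, peter, john | peter})

assert cried not in EXH(ONE_OR_MORE_BOYS)   # EXH wrongly makes (13) false
assert cried in O14(ONE_OR_MORE_BOYS)       # O makes (13) true, as required
assert cried not in O14(A_BOY)              # (12) correctly comes out false
```

With two criers, ⟦cried⟧ has a proper subset {John} that still belongs to ⟦one or more boys⟧, so EXH rejects it; O accepts it via the group of John and Peter, exactly as in the text.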
There are two difficulties, however; as we shall see, the first can easily be overcome, whereas the second is slightly more embarrassing. The first difficulty arises with NPs such as every boy. As the denotation of every boy we can take {X | X contains every boy} (this is obviously different from {X | X contains the group of all boys}, which can be taken as the denotation of the boys; one of the advantages of the approach in terms of groups is that it enables us to distinguish between every boy and the boys and to explain why the former is compatible only with distributive predicates). Now, what about only [every boy]F? Some speakers find sentences such as Only [every boy]F cried a little unnatural, but whatever the explanation of this fact might be, there is no doubt that


only can be associated with NPs whose determiner is every, and we must account for this case, too.7 Unfortunately, since ⟦every boy⟧ = {X | X contains every boy}, we have O(⟦every boy⟧) = {X | there is an h such that {h} contains every boy, h ∈ X, and g is a subgroup of h for every g ∈ X} = {X | there exists exactly one boy h, and X = {h}}, which means, of course, that we cannot identify ⟦only [every boy]F⟧ with O(⟦every boy⟧). This is the difficulty. We can overcome it, however, by slightly modifying the definition of O. For every set of sets of groups Q, let Q# = {sup(X) | X ∈ Q and there is no proper subset Y of X such that Y ∈ Q} (where sup(X) is the supremum of X, i.e. the union of the groups in X; the supremum of the empty set is the empty group). We can now redefine O as follows: for every set of sets of groups Q,

(15) O(Q) = {X | either Q# contains the empty group and X is the empty set, or Q# does not contain the empty group, X ∈ Q, and there is an h ∈ Q# such that g is a subgroup of h for every g ∈ X}

Let us compute O(⟦every boy⟧) according to this definition of O. To begin with, we have ⟦every boy⟧# = {h | either there are no boys and h is the empty group, or there are boys and h is the group of all boys}. Therefore,

7 Obviously, only [every boy]F must not be confused with only [every]F boy: the latter is indeed unacceptable. The naturalness of the association of only with a focused NP of the form ‘every α’ seems to vary with the context: for instance, even the speakers who do not like sentences such as Only [every boy]F cried find (i) perfectly all right.

(i) John only introduced [every priest]F to [a nun]F.

The only NPs of the form ‘every α’ incompatible with only are everything, everybody, and the like. Why only cannot be associated with certain NPs (and, more generally, with certain expressions of other categories) is an interesting problem on which we have little to say. We think that there is no uniform explanation. Presumably, we cannot say Only [everybody]F jumped because an event in which everybody jumped is a top element (relative to event inclusion) in the class of events in which somebody jumped, and we cannot say Only [every]F boy jumped because an event in which every boy jumped is a top element in the class of events in which boys jumped; only can never be used in sentences describing events which turn out to be such top elements. Only is also banned from sentences describing events that are bottom elements in classes of events of the kind just mentioned. This explains the unacceptability of only [nothing]F, only [nobody]F, etc. But other cases cannot be accounted for along the same lines. For instance, why is only [at least two boys]F so bad, whereas only [two or more boys]F is acceptable? (Maybe the explanation has to do with the fact that at least is itself a focus operator.)


O(⟦every boy⟧) = {X | either there are no boys and X is the empty set, or there are boys, X contains every boy, and every group in X is a group of boys}. The identification of ⟦only [every boy]F⟧ with O(⟦every boy⟧) is now possible. (It is also easy to see that (15) works for all those cases for which (14) was already adequate: notice that if Q contains singletons, Q# = Q+.)

The second—and perhaps more serious—of the two difficulties mentioned above concerns expressions of the form ‘only [α]F’ where α is a NP such as less than ten boys or ten boys at most. Let us consider, for example, only [less than ten boys]F. If less than ten boys has to denote a set of sets of groups, the most natural choice seems to be {X | for every g, if g is a group of boys and g ∈ X, then g has less than ten members}. The problem is that the denotation of only [less than ten boys]F is something completely different from the set of sets of groups we obtain by applying O to {X | for every g, if g is a group of boys and g ∈ X, then g has less than ten members}, whether we define O as in (14) or as in (15). Should we try a third definition? Unfortunately, no definition would be appropriate. Here is why. Consider the NPs less than seventy Miss Worlds and less than seventy Fields medalists. Since the Miss Worlds and the winners of the Fields medal are actually less than seventy, we have {X | for every g, if g is a group of Miss Worlds and g ∈ X, then g has less than seventy members} = {X | X is a set of groups} = {X | for every g, if g is a group of winners of the Fields medal and g ∈ X, then g has less than seventy members}. So the denotation of the two NPs is the same. It follows that no matter how O is defined, the application of O to the denotation of less than seventy Miss Worlds gives the same result as the application of O to the denotation of less than seventy Fields medalists.
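The collapse of the two denotations can be verified directly. The sketch below is our own toy model, not the paper's: we scale the bound down from "less than seventy" to "less than two" and use invented names, keeping the monotone decreasing denotations exactly as just defined.

```python
from itertools import combinations

PEOPLE = {"ann", "bea", "cora"}
MISS_WORLDS = frozenset({"ann"})        # exactly one of each: fewer than
FIELDS_MEDALISTS = frozenset({"bea"})   # the numerical bound "two"

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

GROUPS = [g for g in powerset(PEOPLE) if g]
SETS_OF_GROUPS = [frozenset(x) for x in powerset(GROUPS)]

# ⟦less than two P⟧ = {X | every group of Ps in X has fewer than two members}
def less_than_two(P):
    return {X for X in SETS_OF_GROUPS
            if all(len(g) < 2 for g in X if g <= P)}

d1 = less_than_two(MISS_WORLDS)
d2 = less_than_two(FIELDS_MEDALISTS)

# Since each extension falls below the bound, the defining condition is
# vacuously satisfied by every set of groups: the two denotations are
# literally the same object, so ANY operator O must map them alike.
assert d1 == d2 == set(SETS_OF_GROUPS)
```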
Therefore, no matter how O is defined, O cannot provide a suitable denotation for only [less than seventy Miss Worlds]F and only [less than seventy Fields medalists]F, since only [less than seventy Miss Worlds]F and only [less than seventy Fields medalists]F are clearly not interchangeable. (The meaning of Only [less than seventy Miss Worlds]F have such an exceptional IQ is clearly distinct from that of Only [less than seventy Fields medalists]F have such an exceptional IQ.) This is, of course, a new instance of the nonfunctionality puzzle. Is there any solution? We could decide that the NPs in question are not really monotonically decreasing, and that less than n entails at least one. Then the denotations assigned to less than seventy Miss Worlds and to less than seventy Fields medalists would be distinct, and the difficulty would no longer arise. But we are not sure that such a move would not be ad hoc.

Let us take stock. We have seen that if NPs are conceived of as


expressions denoting sets of sets of individuals, it is impossible to find an operator O such that for every NP α, ⟦only [α]F⟧ = O(⟦α⟧). On the other hand, if we follow Groenendijk and Stokhof’s suggestion and exploit the fact that the semantic treatment of NPs can involve groups, we can come close to a positive solution of the problem. If O is defined as in (15), O provides a satisfactory analysis of ‘only [α]F’ for a large class of NPs α. The only remaining difficulty is that concerning NPs like less than ten boys or ten boys at most.

We can now turn to the question: why do we think that a good analysis of only requires events? We have said that one of the inadequacies of the analyses proposed so far is their inability to account for all the cases in which only is associated with a focused NP. But the preceding discussion shows that a reasonable treatment of expressions of the form ‘only [α]F’, where α is a NP, can be achieved simply by adding groups to the universe of discourse. So far at least, events seem unnecessary. Why should we draw them into the picture, then? (It will be shown in the next section that events enable us to deal with NPs like less than ten boys without renouncing the assumption that they are monotonically decreasing; but this could be considered too modest an advantage to justify the use of events.) The answer is that the analysis of ‘only [α]F’, where α is a NP, is not an end in itself. What we want is a uniform explanation of how only can be associated with expressions of different categories (including NPs, of course), but as far as we can see, without the resources of event semantics such a uniform explanation would be very difficult to attain: we would be forced to treat different categories in different ways, and this would be highly unsatisfactory. To make this point clearer, let us go back to Rooth’s theory.
The analysis of ‘only [α]F’, where α is a NP, in terms of the operator O defined in (15) can be translated quite easily into a clause à la Rooth. For example, one can assume that the set of alternatives determined by a focused NP consists of those sets of sets of groups Q such that for some group g, Q = {X | X contains g}. Then one can say that a sentence of the form ‘Only [α]F β’, where α is a NP and β is a VP, is true if and only if

(16) Either ⟦α⟧# contains the empty group and ⟦β⟧ is the empty set, or ⟦α⟧# does not contain the empty group, ⟦β⟧ ∈ ⟦α⟧, and there is an h ∈ ⟦α⟧# such that for every Q belonging to the set of alternatives determined by [α]F, if ⟦β⟧ ∈ Q and g ∈ Q#, then g is a subgroup of h.

((16) is “à la Rooth” in the sense that the truth conditions of ‘Only [α]F β’ are specified in terms of ⟦α⟧, ⟦β⟧ and the set of alternatives determined by


[α]F.) So it might seem that to remedy the inadequacy of Rooth’s analysis concerning the association of only with NPs, all we have to do is admit groups into the universe of discourse and add (16) (plus something else, perhaps) to the clauses already formulated by Rooth. Such a move would indeed enable us to assign correct truth conditions to sentences not covered by Rooth’s original analysis (for an important qualification, see below). The trouble is, however, that (16) and the clauses provided by Rooth have very little in common; it is impossible to see them as different instances of the same general schema (unless, of course, the schema is formulated in a completely ad hoc way). And, if we are unwilling to enrich our ontology any further, the difficulty is insurmountable: no matter how many adjustments we make, we shall always end up with a list of heterogeneous and partially unrelated clauses that certainly could not be regarded as giving us the meaning of only. (The approach in terms of structured meanings raises exactly the same problem.) On the other hand, if we employ event semantics—or, more exactly, a version of event semantics making use of groups—a uniform representation of the truth conditions of all sentences containing only becomes possible. As will be shown, a sentence with only can always be interpreted, irrespective of the category of the focus with which only is associated, as stating that every event of a certain kind is included in an event of another kind. (Such a representation in terms of events could also be a first step toward a unified account of the “nonscalar” and “scalar” readings of only. The issue of scalarity is beyond the scope of the present paper, but see the remarks in section 4.) One last observation about Rooth’s theory.
What has been said so far might give the impression that if we are not interested in conceptual coherence and perspicuity—if we are just looking for a formal machinery assigning correct truth conditions to sentences—then the extension of Rooth’s approach to cases in which only is associated with NPs other than proper names only requires the inclusion of groups into the universe of discourse. But this is not true. In the preceding discussion, we have confined our attention to expressions of the form ‘only [α]F’ where α is a NP, and the occurrence of only in such expressions can indeed be treated by means of clauses like (16). We should bear in mind, however, that only is not always contiguous to the focused NP with which it is associated. For example, instead of saying John kissed only [Mary]F, we can say John only kissed [Mary]F (this was sentence (2)). Or, instead of saying Mary kissed only [the boy scouts]F, we can say (17):


(17) Mary only kissed [the boy scouts]F. Now, it can be proved that there is absolutely no way of accounting for a sentence like (17) in the framework of Rooth’s theory. Groups are of no help here. Let us see why. For an analysis of (17) in the style of Rooth to be possible, we should be able to define the extension of only kissed [the boy scouts]F in terms of the intension of kissed the boy scouts and of the set of alternatives determined by kissed [the boy scouts]F. But it turns out that such a definition does not exist. The actual proof is a bit tedious, but the idea behind it is extremely simple. Let us suppose we are working with models which are like the usual ones except that the domain of individuals is replaced by a domain of groups. What is the set of alternatives determined by kissed [the boy scouts]F? The answer depends on what one has chosen as the set of alternatives determined by [the boy scouts]F. If the latter is specified as in (16), then the former will be the set whose elements are the intensions of expressions of the type ‘kissed A’, where A denotes in every possible world the set of sets of groups containing a certain group g. (There are other possible options, but they only require slight changes in the argument.) Now, let us consider a model containing in particular two worlds v and w with the following properties: (i) the only difference between v and w is that in v the boy scouts are John and David, whereas in w the boy scouts are John, David, and Peter; (ii) in both worlds, the individuals kissed by Mary are John, David, and Peter. It follows that Mary belongs to the extension of only kissed [the boy scouts]F in w but not in v. On the other hand, no condition expressed in terms of the intension of kissed the boy scouts and of the relevant set of alternatives can discriminate between v and w. 
The reason is that the intension of kissed the boy scouts and the set of alternatives are the same in v and in w, and that the information about the worlds contained in these two objects does not make any difference between v and w (on the one hand, the extension of kissed the boy scouts is the same in v and w, and on the other, for every x and y, x kissed y in v if and only if x kissed y in w). The rest of the present paper is organized as follows. In the next section we present our analysis in terms of events for a fragment of language in which only occurs only in expressions of the form ‘only [Ƚ]F’ where Ƚ is a NP. In doing so, of course, we do not disavow what we have been saying about the need for a unified and systematic treatment of all contexts in which only can occur. We just think that by starting off with this special case, we can make it easier for the reader to grasp certain basic aspects of our approach. The extension of our analysis to a wider


range of situations is performed in section 3, which is the core of our work. Finally, in section 4 we sketch some possible developments. Our exposition will be as neutral as possible, in the sense that we will not discuss the possibility of integrating our analysis of only in terms of events into current theories of focus; this could be the topic of another paper.
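The two-world argument above can be replayed in miniature. The sketch below is our own construction, using the individuals of the example: the kiss relation, and hence the VP intension and the alternatives built from it, are identical in v and w, yet a direct computation of the truth conditions of only kissed [the boy scouts]F distinguishes the two worlds.

```python
from itertools import combinations

def groups(people):
    # All nonempty groups (frozensets) formed from a set of individuals.
    return [frozenset(c) for r in range(1, len(people) + 1)
            for c in combinations(sorted(people), r)]

# Worlds v and w differ only in who the boy scouts are.
BOY_SCOUTS = {"v": frozenset({"john", "david"}),
              "w": frozenset({"john", "david", "peter"})}

# In both worlds Mary kissed John, David, and Peter (and, the relation
# being closed under union of groups, every group formed from them).
def kiss(world):
    return {("mary", g) for g in groups({"john", "david", "peter"})}

def kissed_the_boy_scouts(world):
    return {x for (x, g) in kiss(world) if g == BOY_SCOUTS[world]}

def only_kissed_the_boy_scouts(world):
    # Direct truth conditions: x kissed the boy scouts, and every group
    # x kissed is a subgroup of the boy scouts.
    return {x for x in kissed_the_boy_scouts(world)
            if all(g <= BOY_SCOUTS[world]
                   for (y, g) in kiss(world) if y == x)}

# The VP extension (hence its intension on {v, w}) is constant...
assert kissed_the_boy_scouts("v") == kissed_the_boy_scouts("w") == {"mary"}
# ...and so is the kiss relation from which the alternatives are built...
assert kiss("v") == kiss("w")
# ...yet Mary only kissed the boy scouts in w, not in v.
assert only_kissed_the_boy_scouts("v") == set()
assert only_kissed_the_boy_scouts("w") == {"mary"}
```

Since the intension and the set of alternatives are the same objects at both worlds, no function of those two inputs can separate v from w, which is the point of the proof sketched in the text.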

2. NPs in Focus

Let us begin by sketching the main principles of our event semantics, which is similar to the “algebraic” version of event semantics of Krifka (1989).8 We use models whose domain contains two sorts of elements: events9 and objects. The set of events and the set of objects are structured as complete Boolean algebras.10 By ⊔E and ⊓E we denote the join and the meet of the algebra of events, by ⊔O and ⊓O the join and the meet of the algebra of objects. ⊆E and ⊆O are the “natural” partial orderings of the two structures, 0E and 0O their bottom elements (the null event and the null object). The assumption that any set of events has a supremum means that given any set of events, the events in the set can be seen as the constituent parts of a larger event. (So our events are very abstract constructs; for instance, no spatio-temporal unity is presupposed.) The set of objects is conceived of as containing not only ordinary individuals, which correspond to atoms of the Boolean algebra,11 but also groups of individuals. Given two groups x and y, their join x ⊔O y is nothing else but their “union” or “sum”, i.e. the group whose members are the members of x plus the members of y. More generally, the supremum of a set X of groups is the union or sum of the groups in X, i.e. the group whose members are exactly those contained in some element of X. As in section 1, we ignore the distinction between an individual and the group whose only member is that individual. Events are related to objects by

8 The idea of extending to events the algebraic approach to the semantics of NPs proposed in Link (1981) goes back to Bach (1986). For further developments see Link (1987) and Krifka (1989).

9 Like Link (1987) and Krifka (1989), we use the word ‘event’ in a very wide sense, ignoring finer classifications (such as the distinction among events proper, processes, and states).

10 Recall that a Boolean algebra is complete if every set of elements of the algebra has a supremum.

11 Recall that an element x of a Boolean algebra is an atom if x is not the bottom element and, for every element y such that y ≤ x, either y is the bottom element or y = x.


“thematic relations”: we shall consider in particular the thematic relations of “agent” and “patient”. We assume that our models satisfy the following conditions:

(I) Thematic relations are partial functions from events to objects; thus, if an event has an agent, the agent is unique, and if an event has a patient, the patient is unique. In those cases in which we might be tempted to say that each of several individuals is an agent of a certain event e, we must say instead that the (unique) agent of e is the group formed by all those individuals. Similarly for patients.

(II) Let X be a nonempty set of events with an agent and let e be the supremum of X. Then the agent of e is the supremum of {x | x is an object and x is the agent of f for some f ∈ X}. Similarly for patients.

(III) 0O can be neither the agent nor the patient of an event, unless the event in question is 0E.

Starting from the basic types t (the type of truth values), o (the type of objects), and e (the type of events), we can construct complex types by applying the following rule: if σ and τ are types, then (σ, τ) is a type. The formal language we shall make use of contains expressions of every type. We shall employ x, y,… as variables of type o, and e, f,… as variables of type e. We can now illustrate the analysis of only to be developed in the following pages. Let us go back to the simplest example introduced so far, i.e. (8): Only [John]F cried. Our starting point is the observation that the content of (8) can be paraphrased as follows: John cried, and every event of crying is included in an event of crying whose agent is John. If we translate this paraphrase into the language of event semantics, the result is (18):

(18) ∃e[cried'(e) & AG(e, John')] & ∀f[cried'(f) → ∃g[cried'(g) & AG(g, John') & f ⊆E g]]

where cried' is a constant of type (e, t) denoting a set of events other than 0E (intuitively, the events of crying), John' is a constant of type o denoting an atom in the Boolean algebra of objects (intuitively, (the group whose only member is) John), and AG denotes the thematic relation of agent.
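As a sanity check, (18) can be evaluated in a toy finite model. In the sketch below (our own construction, not the paper's formalism), each atomic crying event is identified with its crier, an arbitrary crying event is a nonempty set of atoms, ⊆E is set inclusion, and, in accordance with condition (II), the agent of a sum of events is the sum of their agents.

```python
from itertools import combinations

def only_john_cried(criers):
    # Events of crying: all nonempty sums of atomic crying events,
    # each atom identified with its crier.
    cried = [frozenset(s) for r in range(1, len(criers) + 1)
             for s in combinations(sorted(criers), r)]

    def agent(e):
        # Condition (II): the agent of a sum is the sum of the agents.
        return frozenset(e)

    john = frozenset({"john"})
    # (18), first conjunct: there is an event of crying whose agent is John
    first = any(agent(e) == john for e in cried)
    # second conjunct: every event of crying is included in an event of
    # crying whose agent is John
    second = all(any(agent(g) == john and f <= g for g in cried)
                 for f in cried)
    return first and second

assert only_john_cried({"john"}) is True            # only John cried
assert only_john_cried({"john", "peter"}) is False  # Peter cried too
assert only_john_cried({"peter"}) is False          # John did not cry
```

The function returns true exactly when John is the sole crier, matching the paraphrase that (18) is meant to capture.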


If the reader is not yet convinced that (18) is a correct way of representing the content of (8), here is an explicit argument. Everybody would presumably agree12 that (8) can be reasonably paraphrased as follows: for every x, x cried if and only if x is John; or, to put it in slightly different terms: for every x, x is the agent of an event of crying if and only if x is John. In symbols,

(19) ∀x[∃f[cried'(f) & AG(f, x)] ↔ x = John']

Now, (18) and (19) are easily seen to be equivalent. Every event of crying has an agent (this is ensured by a suitable meaning postulate). Therefore, by the conditions (II) and (III) introduced above, an event of crying f is included in an event whose agent is John if and only if the agent of f is John. It follows that the second conjunct of (18) is equivalent to ∀x[∃f[cried'(f) & AG(f, x)] → x = John']. On the other hand, it is obvious that the first conjunct of (18) is equivalent to ∀x[x = John' → ∃f[cried'(f) & AG(f, x)]]. The advantage of (18) over (19) is, as we shall see, that it exemplifies a form of representation which can be extended to every sentence containing only, irrespective of the kind of expression with which only is associated. (Moreover, we shall argue that this form of representation is a good starting-point for a unified account of the nonscalar and scalar readings of only, see section 4.) In this section we try to convince the reader that the form of representation exemplified by (18) is appropriate for any sentence containing an expression of the form ‘only [α]F’ where α is a NP. We shall now sketch a compositional procedure for translating into logical forms the sentences of a small fragment of language.
Since for the time being our attention is confined to occurrences of only of the sort just described, we can assume that in the fragment the occurrences of only are introduced by a syntactic rule which changes a NP α not containing only into a new NP ‘only [α]F’. The translation of an I(ntransitive)V(erb) will be an expression of type (o, (e, t)). The translation of a T(ransitive)V(erb) will be an expression of type (o, (o, (e, t))). The translation of a NP will be an expression of type ((o, (e, t)), (e, t)). The translation of a S(entence) will be obtained in two steps (as usual in event semantics): we shall first map the S into an expression of type (e, t), the so-called “intermediate”

12 At least, if all the subtleties concerning the presupposition/assertion distinction are left aside. In the present paper, to keep the overall picture as simple as possible, we shall ignore this aspect of the matter.


translation; then we shall take the existential closure of the intermediate translation as the “official” translation of the S.13 Instead of giving a fully detailed description of the translation algorithm, we shall now illustrate it by means of a series of examples. To begin with, we want to explain how the translation algorithm can be applied to (8). As a preliminary, let us consider the corresponding sentence without only.

EXAMPLE 1

We want to translate (20):

(20) John cried.

Let us suppose we have a syntactic rule S1 which combines a NP and an IV into a S. We can assume that (20) has been obtained by means of S1. The translations of John and cried are

λFλe F(John')(e)

and

λxλe[cried'(e) & AG(e, x)]

respectively. (F is a variable of type (o, (e, t)), John' is a constant of type o which denotes an individual, i.e. an atom in the Boolean algebra of objects, cried' is a constant of type (e, t).) We now assume that the translation rule corresponding to S1 is a rule which says that we must perform a functional application of the translation of the NP to the translation of the IV. Let us call this rule T1. In the present case the application of T1 gives

λFλe[F(John')(e)](λxλe[cried'(e) & AG(e, x)])

which is equivalent, by λ-conversion, to

(21) λe[cried'(e) & AG(e, John')]

More precisely: if A is the expression of type (e, t) associated with the S, the official translation of the S will be eA(e) (in the present section we always use the term “existential closure” in this sense).
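The two steps just illustrated — functional application (rule T1) followed by existential closure — are effectively computable, and can be mimicked over a small finite model. The following Python sketch is our own illustrative construction (the model and names such as `cried_tr` are assumptions, not part of the fragment): the translation of cried is a function from objects to sets of events, the translation of John applies its IV argument to John', and the official translation existentially quantifies over events.

```python
# Toy model: a finite set of events, each with an agent (hypothetical data).
events = {"e1": "John", "e2": "Mary"}   # event -> agent
cried = {"e1", "e2"}                    # events in the denotation of cried'

# Translation of "cried", type (o, (e, t)): from an object to a set of events.
cried_tr = lambda x: lambda e: e in cried and events[e] == x

# Translation of "John", type ((o, (e, t)), (e, t)): applies its IV argument to John'.
john_tr = lambda F: lambda e: F("John")(e)

# Rule T1: functional application gives the intermediate translation,
# i.e. the analogue of (21): λe[cried'(e) & AG(e, John')].
intermediate = john_tr(cried_tr)

# Official translation: existential closure over the event variable.
official = any(intermediate(e) for e in events)
print(official)  # True: there is an event of crying whose agent is John
```

The same two-step recipe is reused, unchanged, in all the examples that follow; only the NP translations vary.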

Only: Association With Focus in Event Semantics


(21) is the intermediate translation of (20). As explained above, we obtain the final translation by performing an existential closure. What we get in this way is (equivalent to)

∃e[cried'(e) & AG(e, John')]

In words: there was an event of crying whose agent was John.

EXAMPLE 2

Let us now revert to (8):

(8) Only [John]F cried.

We assume that (8) is obtained from only [John]F and cried by means of S1, and that only [John]F is obtained from John by means of the rule—call it SO—which turns a NP α into the NP only [α]F. We now come to the crucial point: the formulation of the translation rule corresponding to SO. We call it TO.

TO: Suppose β is a NP obtained by means of SO from a NP α, and let A be the translation of α. Then the translation of β will be O(A), where O is the operator of type (((o, (e, t)), (e, t)), ((o, (e, t)), (e, t))) defined as follows:

O =df λQλFλe[Q(F)(e) & ∀f[∃x F(x)(f) → ∃g[Q(F)(g) & f ⊑E g]]]

Here Q is a variable of type ((o, (e, t)), (e, t)). The translation of only [John]F obtained by applying TO is

λQλFλe[Q(F)(e) & ∀f[∃x F(x)(f) → ∃g[Q(F)(g) & f ⊑E g]]](λFλe F(John')(e))

which is equivalent to

λFλe[F(John')(e) & ∀f[∃x F(x)(f) → ∃g[F(John')(g) & f ⊑E g]]]

We can now apply T1 and obtain the intermediate translation of (8), which turns out to be equivalent to

λe[[cried'(e) & AG(e, John')] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[cried'(g) & AG(g, John') & f ⊑E g]]]

So the final translation is (22):

(22) ∃e[[cried'(e) & AG(e, John')] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[cried'(g) & AG(g, John') & f ⊑E g]]]

(22) is not quite the same as (18), which we have seen to be a reasonable way of representing the content of (8). But the differences between the two formulas are inessential; notice in particular that, since every event of crying has an agent, ∃x[cried'(f) & AG(f, x)] is equivalent to cried'(f).

We must now take up NPs other than proper names and show that if the translation of a NP α is chosen in a sensible way, then the translation of ‘only [α]F’ provided by our rule TO is always correct. We start from the NPs a boy and one or more boys (this was one of the pairs of NPs which gave rise to the nonfunctionality puzzle discussed in section 1).

EXAMPLE 3

We want to translate (23):

(23) A boy cried.

This sentence is obtained by means of S1 from a boy and cried. As the translation of a boy we take the following:

λFλe∃x[A-BOY(x) & F(x)(e)]

Here A-BOY(x) means that x is a group containing only one individual member, and that member is a boy.[14] It would be easy to derive this translation from the translation of boy (a constant of type (o, t)) and the translation of a (a suitable expression of type ((o, t), ((o, (e, t)), (e, t)))). But for the moment we are not interested in such a derivation. Let us apply T1: the intermediate translation of (23) turns out to be the following (modulo λ-conversion, of course):

(24) λe∃x[A-BOY(x) & cried'(e) & AG(e, x)]

[14] More precisely, A-BOY(x) stands for the following formula: [boy'(x) & x ≠ 0O & ∀y[[y ⊑O x & y ≠ x] → y = 0O]], i.e., x belongs to the set of objects denoted by boy', and x is an atom of the Boolean algebra of objects.

The final translation is obtained from (24) by existential closure:

(25) ∃e∃x[A-BOY(x) & cried'(e) & AG(e, x)]

In words: there was a crying whose agent was a boy.

EXAMPLE 4

Let us now consider (26):

(26) One or more boys cried.

Our translation of one or more boys is the following:

λFλe∃x[ONE-OR-MORE-BOYS(x) & F(x)(e)]

Here ONE-OR-MORE-BOYS(x) means that x is a group whose members are boys.[15] So the intermediate translation of (26) we obtain by applying T1 is (27):

(27) λe∃x[ONE-OR-MORE-BOYS(x) & cried'(e) & AG(e, x)]

Finally, the existential closure of (27) is (28):

(28) ∃e∃x[ONE-OR-MORE-BOYS(x) & cried'(e) & AG(e, x)]

[15] Instead of ONE-OR-MORE-BOYS(x), we could simply write boy'(x); we write ONE-OR-MORE-BOYS(x) to remind the reader that the formula in question is the result of combining the translation of one or more with the translation of boy.

A point to be emphasized is that in our semantics (28) is equivalent to (25). In other words, the truth conditions we have assigned to (26) coincide with the truth conditions we have assigned to (23). Let us explain why (28) and (25) are equivalent. Obviously the set of events denoted by (24) is included in the set of events denoted by (27): this suffices to conclude that the existential closure of (24), i.e. (25), entails the existential closure of (27), i.e. (28). The entailment in the opposite direction is justified as follows. Since cry is a distributive verb, we must have a meaning postulate saying that, if x is a group which is the agent of an event of crying and y is an individual member of the group x, then there is an event of crying whose agent is y. Given this meaning postulate, it is easy to see that (28) entails (25), as required. (Obviously, the meaning postulate we have just appealed to is nothing else but a translation into our version of event semantics of the characterization of the distributivity of cry in terms of groups: if a group of persons cried, then each individual member of the group cried.)

EXAMPLE 5

We now want to verify that the treatment of a boy and one or more boys described in the previous examples together with the translation rule TO provides a solution to the nonfunctionality puzzle. To begin with, let us consider the following sentence:

(29) Only [a boy]F cried.

(29) is derived from only [a boy]F and cried by an application of S1; only [a boy]F is obtained by applying SO to a boy. It is easy to see that the translation of only [a boy]F provided by TO is equivalent to

λFλe[∃x[A-BOY(x) & F(x)(e)] & ∀f[∃x F(x)(f) → ∃g[∃x[A-BOY(x) & F(x)(g)] & f ⊑E g]]]

If we now apply the translation of only [a boy]F to the translation of cried and perform the required λ-conversions, we obtain

λe[∃x[A-BOY(x) & cried'(e) & AG(e, x)] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[∃x[A-BOY(x) & cried'(g) & AG(g, x)] & f ⊑E g]]]

whose existential closure is equivalent to (30):

(30) ∃e[∃x[A-BOY(x) & cried'(e) & AG(e, x)] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[∃x[A-BOY(x) & cried'(g) & AG(g, x)] & f ⊑E g]]]

To convince the reader of the adequacy of (30), we show that in our models for event semantics (30) is true if and only if (31) is true:

(31) ∃y[A-BOY(y) & ∀x[∃f[cried'(f) & AG(f, x)] ↔ x = y]]

(31) is an obviously correct way of expressing the content of (29). It can be read as follows: there is a boy y such that, for every x, x is the agent of an event of crying if and only if x = y. It is clear that ∃e∃x[A-BOY(x) & cried'(e) & AG(e, x)] is equivalent to ∃y[A-BOY(y) & ∀x[x = y → ∃f[cried'(f) & AG(f, x)]]]; so all we have to do to show that (30) is equivalent to (31) is to prove the equivalence of the subformula of (30) introduced by the universal quantifier and

∃y[A-BOY(y) & ∀x[∃f[cried'(f) & AG(f, x)] → x = y]]

By condition (II) formulated above, to say that every event of crying is included in an event of crying whose agent is a boy is the same as saying that the agent of every event of crying is a boy. To complete the proof, it suffices to verify that the agent of every event of crying is a boy if and only if there is a unique boy y such that the agent of every event of crying is y. The entailment from right to left is trivial. To prove the entailment in the other direction, let us suppose that there are two events of crying f' and f'' with agents y' and y'' respectively, y' ≠ y''. By condition (II), the agent of f = f' ⊔E f'' is y = y' ⊔O y'', and since y is a group of more than one member, A-BOY(y) does not hold. But f is an event of crying, because the sum of two events of crying is again an event of crying; so there is an event of crying whose agent is not a boy.

One last remark. In the preceding argument, we have used the fact that the sum of two events of crying is an event of crying. As a matter of fact, we assume a meaning postulate which says that the set of events E denoted by cried' (like any other set of events corresponding to a simple verb) satisfies the following condition: if F is a nonempty set of events not containing 0E, then the supremum of F belongs to E if and only if F is a subset of E.[16] This meaning postulate will be exploited again and again in the following pages (usually without being explicitly mentioned).

[16] Some versions of event semantics make use of principles which are (or seem to be) incompatible with this meaning postulate, so a brief comment is in order. The postulate consists of two parts. The first part says that the set of events denoted by a verb is “cumulative”, i.e. it contains the supremum of each of its nonempty subsets (this is all we need in the present section). Now, cumulativity is a rather intuitive notion; besides, it has an independent motivation in the role it can play in the explanation of linguistic facts which have nothing to do with those discussed in the present paper (see Krifka 1989). The second part of our meaning postulate says that if the supremum of a set F of events not containing 0E belongs to the set of events E denoted by a verb, then F is a subset of E. This can be reformulated as follows: if e ∈ E, f ⊑E e and f ≠ 0E, then f ∈ E. The reader might have the impression that such an assumption is unjustified. Suppose John cleared the table, and one of the several things he did to clear the table was to remove a book. So we have two events: John’s clearing the table, and John’s removing the book. Call these two events e and f respectively. Since f is, in a sense, “part” of e, one might be tempted to conclude that f ⊑E e, and since f is not an event of clearing whereas e is, one might claim that this is a counterexample to the second half of our meaning postulate. (Notice that this could also be used as a counterexample to our condition (II): the patient of f is the book and the patient of e is the table, but although f is part of e, the book is not part of the table.) The answer to an objection of this kind is very simple: the relation holding between the two events described above is not the same relation as ⊑E. Instead, it is the relation of “lumping” investigated by Kratzer (1989). Irene Heim and Angelika Kratzer have pointed out to us that this relation too is relevant to the analysis of only: if an event e lumps an event f, then e and f cannot both belong to the range of alternatives to be taken into account in evaluating the uniqueness condition expressed by a given occurrence of only. For instance, if removing the book was one of the things John did to clear the table, then we cannot say that John only [cleared the table]F is false because John, besides clearing the table, also removed the book.

EXAMPLE 6

Let us now consider (32):

(32) Only [one or more boys]F cried.

Once again the sentence is obtained by means of S1 and the NP is obtained by means of SO. The translation of only [one or more boys]F given by TO is equivalent to

λFλe[∃x[ONE-OR-MORE-BOYS(x) & F(x)(e)] & ∀f[∃x F(x)(f) → ∃g[∃x[ONE-OR-MORE-BOYS(x) & F(x)(g)] & f ⊑E g]]]

As a consequence, the intermediate translation of (32) obtained by applying T1 is equivalent to

λe[∃x[ONE-OR-MORE-BOYS(x) & cried'(e) & AG(e, x)] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[∃x[ONE-OR-MORE-BOYS(x) & cried'(g) & AG(g, x)] & f ⊑E g]]]

We can now perform the existential closure and obtain (33):

(33) ∃e[∃x[ONE-OR-MORE-BOYS(x) & cried'(e) & AG(e, x)] & ∀f[∃x[cried'(f) & AG(f, x)] → ∃g[∃x[ONE-OR-MORE-BOYS(x) & cried'(g) & AG(g, x)] & f ⊑E g]]]

The reader can verify by herself that (33) is equivalent to (34):

(34) ∃y[ONE-OR-MORE-BOYS(y) & ∃f[cried'(f) & AG(f, y)] & ∀x[∃f[cried'(f) & AG(f, x)] → x ⊑O y]]

(In words: there is a group y consisting of one or more boys which is the agent of an event of crying, and every agent of an event of crying is included in y.) Clearly, (34), and therefore (33), correctly represents the content of (32). In particular, (33) is not equivalent to (30) (our translation of Only [a boy]F cried): the nonfunctionality puzzle for the pair consisting of a boy and one or more boys has been solved.

It should be clear that this solution to the nonfunctionality puzzle is based on the strategy described in section 1 without any reference to events. The puzzle arises from the fact that a boy and one or more boys are often treated as having the same denotation. But these two NPs are interchangeable only in distributive contexts; we are entitled to assign them different denotations provided we can show that when the context is distributive, this difference is neutralized. In section 1 we discussed the possibility of distinguishing between a boy and one or more boys by saying that ‘A boy α’ is true if a boy is in the extension of the predicate α, whereas ‘One or more boys α’ is true if the extension of α contains a group of one or more boys. The idea behind the analysis developed in Examples 3–6 is basically the same; the only novelty is that the relevant groups are now conceived of not as elements of the extensions of the VPs, but as agents of the events specified by the VPs.

It is easy to see that our treatment of expressions of the form ‘only [α]F’ for α = John, a boy, or one or more boys can be extended to many other NPs α; for instance, to NPs such as John and Mary, John or Mary, boys, some boys, two (three, . . .) boys, two (three, . . .) or more boys, the boys, one half of the boys, many boys, most boys, an odd number of boys, between ten and twenty boys, etc. The reader can easily guess how these NPs are dealt with in our event semantics, and in each case she can easily verify that the application of TO gives a correct result.
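Since TO, T1 and existential closure are all effectively computable, the divergence between (30) and (33) can also be checked mechanically in a small finite model. The following Python sketch is our own toy construction, not the paper's formalism: events are identified with the groups of their criers (so AG(e) = e), event sum is set union, ⊑E is inclusion, and cried' denotes all sums of atomic cryings, so cumulativity holds by construction.

```python
from itertools import combinations

BOYS = {"john", "bill"}  # hypothetical model data

def model(criers):
    """All sums of the atomic crying events: an event is the set of its criers."""
    atoms = list(criers)
    return [frozenset(c) for r in range(1, len(atoms) + 1)
            for c in combinations(atoms, r)]

def a_boy(x):               # A-BOY(x): x is an atomic group whose member is a boy
    return len(x) == 1 and x <= BOYS

def one_or_more_boys(x):    # ONE-OR-MORE-BOYS(x): a (possibly plural) group of boys
    return len(x) >= 1 and x <= BOYS

def only_cried(criers, P):
    """Schema shared by (30) and (33):
       ∃e[P(AG(e)) & ∀f[cried'(f) → ∃g[P(AG(g)) & f ⊑E g]]],
       with AG(e) = e and ⊑E = set inclusion in this toy model."""
    E = model(criers)
    return any(P(e) and
               all(any(P(g) and f <= g for g in E) for f in E)
               for e in E)

# One boy cried: both readings true.
assert only_cried({"john"}, a_boy)
assert only_cried({"john"}, one_or_more_boys)
# Two boys cried: (30) is false but (33) is true — the two NPs come apart.
assert not only_cried({"john", "bill"}, a_boy)
assert only_cried({"john", "bill"}, one_or_more_boys)
# A girl cried as well: both readings false.
assert not only_cried({"john", "mary"}, one_or_more_boys)
```

The middle pair of assertions is exactly the nonfunctionality contrast: in a model where two boys (and nobody else) cried, Only [a boy]F cried comes out false while Only [one or more boys]F cried comes out true.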
We shall not discuss all these examples in detail. Instead we shall examine the two cases which in section 1 were seen to require special care: the case of NPs like every boy and that of NPs like less than ten boys.

EXAMPLE 7

We know that every boy is incompatible with a nondistributive reading of the predicate and that this is one of the main differences between every boy and the boys. Suppose the team of the boys won a chess tournament against the team of the girls. Then sentence (35) would be true, but (36) could be false:

(35) The boys won.

(36) Every boy won.

For instance, (36) would be false if John (one of the boys) had lost all his games. It follows that

∃e∃x[SUPO(x, boy') & won'(e) & AG(e, x)]

(In words: the supremum of the set of the groups of boys, i.e. the group containing all the boys, is the agent of an event of winning) is acceptable only as a translation of (35), not as a translation of (36). The actual choice of a translation for (36) requires a little reflection. One might try something like (37):

(37) ∃e∀x[A-BOY(x) → ∃f[f ⊑E e & AG(f, x) & won'(f)]]

To obtain (37) as the translation of (36), it would obviously suffice to translate every boy as (38):

(38) λFλe∀x[A-BOY(x) → ∃f[f ⊑E e & F(x)(f)]]

Now, it is unquestionable that (37) expresses the truth conditions of (36) correctly. (37) says that there is an event e such that for every boy x, x is the agent of an event of winning included in e; so (37) cannot be true unless each of the boys has gained his own individual victory. In spite of its prima facie plausibility, however, this way of translating (36) would not be adequate. Consider the expression we obtain when the leftmost existential quantifier of (37) is replaced by a lambda: this expression (which would be the intermediate translation of (36) if (37) were its official translation) denotes the set of events which include, for every boy x, a victory of x. The problem is that an event which includes, for every boy x, a victory of x, can include much else besides. So the treatment of (36) would contrast with the treatment of other sentences, for instance of a sentence like John won, because the set denoted by the intermediate translation of John won contains only events consisting of a victory of John, not events properly including such a victory. And it is not hard to see that such a lack of conceptual coherence would have immediate repercussions on the question which is our main concern here. Suppose we translate only [every boy]F by applying TO to (38). Then the translation of, say, (39)

(39) Only [every boy]F won

turns out to be paraphrasable as follows: every boy won, and every event of winning is included in an event g such that for every boy x, there is a victory of x included in g. But this is true if and only if every boy won; so (36) and (39) are assigned the same truth conditions, which is obviously absurd.

To get a correct analysis, (36) must be understood as saying that there is an event e which includes a victory of x for each boy x, and nothing else. Here is a way of writing this:

(40) ∃e∃φ[∀x[A-BOY(x) → [won'(φ(x)) & AG(φ(x), x)]] & SUPE(e, λf∃x[A-BOY(x) & f = φ(x)])]

In (40) φ is a variable of type (o, e), i.e. a variable for functions mapping objects into events. So (40) means the following: there are an event e and a function φ such that φ maps every boy x into a victory of x, and e is the sum of all those victories. Clearly, to obtain (40) as the translation of (36), all we have to do is translate every boy as (41):

(41) λFλe∃φ[∀x[A-BOY(x) → F(x)(φ(x))] & SUPE(e, λf∃x[A-BOY(x) & f = φ(x)])]

EXAMPLE 8

If we take (41) as the translation of every boy, TO provides us with a satisfactory translation of only [every boy]F. The translation in question turns out to be (equivalent to) (42):

(42) λFλe[∃φ[∀x[A-BOY(x) → F(x)(φ(x))] & SUPE(e, λf∃x[A-BOY(x) & f = φ(x)])] & ∀f[∃x F(x)(f) → ∃g[∃φ[∀x[A-BOY(x) → F(x)(φ(x))] & SUPE(g, λf∃x[A-BOY(x) & f = φ(x)])] & f ⊑E g]]]

To verify that (42) is indeed a satisfactory way of translating only [every boy]F, let us use it to translate (39): Only [every boy]F won. What we obtain via T1 and existential closure is (equivalent to) (43):

(43) ∃e[∃φ[∀x[A-BOY(x) → [won'(φ(x)) & AG(φ(x), x)]] & SUPE(e, λf∃x[A-BOY(x) & f = φ(x)])] & ∀f[∃x[won'(f) & AG(f, x)] → ∃g[∃φ[∀x[A-BOY(x) → [won'(φ(x)) & AG(φ(x), x)]] & SUPE(g, λf∃x[A-BOY(x) & f = φ(x)])] & f ⊑E g]]]

In other words, there is at least one event which is the sum of the events associated with boys by a function φ mapping each boy x into a victory of x, and every event of winning is included in an event of this kind. By now, the reader should be able to see immediately that such a paraphrase is nothing else but a roundabout way of expressing the content of (39); in any case, she can proceed as in the previous examples, and verify that (43) is equivalent to some other formula whose adequacy as a representation of the content of (39) is more evident to her.

EXAMPLE 9

Let us consider the following:

(44) Less than ten boys cried.

What (44) means is that there is no event of crying whose agent is a group of ten or more boys. Now, this is a negative statement, and the treatment of negation in event semantics is a notoriously difficult matter. How can we reduce a statement of the form “there is no event such that…” to a statement of the form “there is an event such that…”? A solution to this problem is sketched in Krifka (1989). Krifka would translate (44) more or less as follows:

(45) ∃e[e = 1E & ¬∃f[f ⊑E e & ∃x[TEN-BOYS(x) & cried'(f) & AG(f, x)]]]

where 1E denotes the top element in the Boolean algebra of events, and TEN-BOYS(x) means that x is a group of exactly ten boys. Such a translation is certainly adequate, in the sense that it expresses the truth conditions of (44) correctly. Nevertheless, we cannot adopt it here. To be able to obtain (45) as the translation of (44), we should translate less than ten boys as

λFλe[e = 1E & ¬∃f[f ⊑E e & ∃x[TEN-BOYS(x) & F(x)(f)]]]

and it is not hard to see that this translation of less than ten boys together with rule TO would produce an unacceptable translation of only [less than ten boys]F. So we need something else. Our translation of (44) will be (46):

(46) ∃e[[¬∃f∃x[boy'(x) & cried'(f) & AG(f, x)] & e = 0E] ∨ [∃x[<10-BOYS(x) & cried'(e) & AG(e, x)] & ∀f[[∃x[boy'(x) & cried'(f) & AG(f, x)] & e ⊑E f] → e = f]]]

(where <10-BOYS(x) is an abbreviation of the formula which says that either x is 0O or x is a group of boys whose cardinality is between one and nine). (46) can be read as follows: there is an event e such that either (i) no boy cried and e = 0E, or (ii) e is an event of crying whose agent is a group of less than ten boys, and e is “maximal” among the events of crying whose agent is a group of boys (i.e. no event of crying whose agent is a group of boys is “larger” or “more comprehensive” than e).

One of the drawbacks of (46) is its length. To shorten it a little, we introduce an abbreviation. Let α be an expression of type e, and let β be an expression of type (e, t): we shall use MAX(α, β) as an abbreviation of ∀f[[β(f) & α ⊑E f] → α = f]. So (46) can be rewritten as follows:

(47) ∃e[[¬∃f∃x[boy'(x) & cried'(f) & AG(f, x)] & e = 0E] ∨ [∃x[<10-BOYS(x) & cried'(e) & AG(e, x)] & MAX(e, λf∃x[boy'(x) & cried'(f) & AG(f, x)])]]
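The maximality trick behind (46)/(47) can likewise be tested in a finite model. The Python sketch below is again our own toy construction (with `LIMIT = 2` standing in for ten so the model stays small): events are the sums of atomic cryings plus the null event 0E, agents are the groups of criers, and `MAX` implements the abbreviation ∀f[[β(f) & e ⊑E f] → e = f] with the quantifier restricted to the finite event domain.

```python
from itertools import combinations

BOYS = {"john", "bill", "tom"}  # hypothetical model data
LIMIT = 2                        # scaled-down stand-in for "ten"

def events(criers):
    """All sums of atomic crying events, plus the null event 0E (empty set)."""
    atoms = list(criers)
    sums = [frozenset(c) for r in range(1, len(atoms) + 1)
            for c in combinations(atoms, r)]
    return [frozenset()] + sums

def boys_cry(f):    # ∃x[boy'(x) & cried'(f) & AG(f, x)]: a crying by a group of boys
    return len(f) >= 1 and f <= BOYS

def MAX(e, beta, E):  # MAX(e, β): ∀f[[β(f) & e ⊑E f] → e = f]
    return all(not (beta(f) and e <= f) or e == f for f in E)

def less_than_limit_boys_cried(criers):
    """Truth conditions of (47), with LIMIT in place of ten."""
    E = events(criers)
    no_boy_cried = not any(boys_cry(f) for f in E)
    def lt_boys(x):   # <LIMIT-BOYS(x): a group of fewer than LIMIT boys
        return x <= BOYS and len(x) < LIMIT
    return any((no_boy_cried and e == frozenset()) or
               (len(e) >= 1 and lt_boys(e) and MAX(e, boys_cry, E))
               for e in E)

assert less_than_limit_boys_cried(set())                 # no boy cried: true
assert less_than_limit_boys_cried({"john"})              # one boy: true
assert not less_than_limit_boys_cried({"john", "bill"})  # two boys: false
assert less_than_limit_boys_cried({"john", "mary"})      # one boy plus a girl: true
```

Note how the two disjuncts of (47) are both exercised: the empty-criers case is made true by the 0E disjunct, while the mixed case is made true by a maximal boy-crying event of cardinality one.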