Columbia School linguistics in the 21st century 9789027262332, 9027262330


English Pages 310 [321] Year 2019





studies in functional and structural linguistics

Columbia School Linguistics in the 21st Century

edited by Nancy Stern, Ricardo Otheguy, Wallis Reid and Jaseleen Sackler

John Benjamins Publishing Company


Columbia School Linguistics in the 21st Century

Studies in Functional and Structural Linguistics (SFSL) issn 1385-7916 Taking the broadest and most general definitions of the terms functional and structural, this series aims to present linguistic and interdisciplinary research that relates language structure – at any level of analysis from phonology to discourse – to broader functional considerations, whether cognitive, communicative, pragmatic or sociocultural. Preference will be given to studies that focus on data from actual discourse, whether speech, writing or other nonvocal medium. The series was formerly known as Linguistic & Literary Studies in Eastern Europe (LLSEE).

For an overview of all books published in this series, please see

Founding Editor
John Odmark

Honorary Editor
Eva Hajičová, Charles University

General Editors
Yishai Tobin, Ben-Gurion University of the Negev
Bob de Jonge, Groningen University

Editorial Board
Alexandra Y. Aikhenvald, La Trobe University
Joan L. Bybee, University of New Mexico
Ellen Contini-Morava, University of Virginia
Nicholas Evans, University of Melbourne
Victor A. Friedman, University of Chicago
Anatoly Liberman, University of Minnesota
James A. Matisoff, University of California, Berkeley
Jim Miller, Emeritus, University of Edinburgh
Marianne Mithun, University of California, Santa Barbara
Lawrence J. Raphael, CUNY and Adelphi University
Olga Mišeska Tomić, Leiden University
Olga T. Yokoyama, UCLA

Volume 77

Columbia School Linguistics in the 21st Century
Edited by Nancy Stern, Ricardo Otheguy, Wallis Reid and Jaseleen Sackler

Columbia School Linguistics in the 21st Century

Edited by

Nancy Stern The City College of New York and Graduate Center, City University of New York (CUNY)

Ricardo Otheguy The Graduate Center, City University of New York (CUNY)

Wallis Reid Rutgers University

Jaseleen Sackler Columbia School Linguistic Society

John Benjamins Publishing Company Amsterdam / Philadelphia



The paper used in this publication meets the minimum requirements of the American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ansi z39.48-1984.

doi 10.1075/sfsl.77

Cataloging-in-Publication Data available from Library of Congress:
lccn 2019012336 (print) / 2019017712 (e-book)
isbn 978 90 272 0341 0 (Hb)
isbn 978 90 272 6233 2 (e-book)

© 2019 – John Benjamins B.V. No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher. John Benjamins Publishing Company ·

Table of contents

Acknowledgements vii

Introduction: Columbia School linguistics in the functional-cognitive space of the 21st century
Nancy Stern 1

Using big data to support meaning hypotheses for some and any
Nadav Sabar 33

The object of explanation for linguistics: Diver’s radical proposal for the foundations of linguistic theory
Wallis Reid 73

The relevance of relevance in linguistic analysis: Spanish simple past tenses
Bob de Jonge 105

La estabilización del tema discursivo: Estrategia de uso de los llamados pronombres personales sujetos en condiciones de correferencia en español Berenice Darwich


Aproximación al significado de la forma española QUE dentro de la Escuela de Columbia Eduardo Ho-Fernández


El “juego” intraparadigmático: Una mirada al uso actual de los clíticos en Buenos Aires Angelita Martínez


Re-visitando significados: Las formas del llamado ‘futuro’ en español Angelita Martínez and Verónica N. Mailhes


Being polite in Argentina Elisabeth Mauder and Angelita Martínez




A comparative study of the restrictive markings of Mandarin jiù, cái, and zhǐ Xuehua Xiang


Evolutionary Phonology as human behavior Juliette Blevins


Name index


Subject index



Acknowledgements

The editors express their appreciation to the Schoff Fund at the University Seminars at Columbia University for their help in publication. The ideas presented here have benefitted from discussions in the University Seminar on Columbia School Linguistics, as well as at a conference that was co-sponsored by University Seminars and the Columbia School Linguistic Society. We would also like to thank Alice Newton and Pamela Guardia for their steady support. We are grateful to Nataly Shahaf and Billur Avlar, and the staff at Columbia’s Faculty House, for all their operational assistance, and to Stephany Betances, Roxana Risco, and Lauren Whitty, who also contributed to the preparation of this volume.

Introduction Columbia School linguistics in the functional-cognitive space of the 21st century Nancy Stern

The City College of New York and Graduate Center, City University of New York (CUNY)

Keywords: Columbia School, Cognitive linguistics, functional-cognitive space, meaning, monosemy

1. Context and overview

This volume is the fifth in a series of books that have grown out of conferences on Columbia School linguistics, the theoretical framework established by the late William Diver and Erica García and their students at Columbia University in the 1960s and actively pursued since. (For full theoretical statements, see Davis, 2006; Diver, 1995; Huffman, 2001; for a bibliography of Columbia School work, see www.) The present volume moves the framework forward by revisiting existing hypotheses, presenting innovative analyses, and offering clarification of theoretical issues, all a reflection of the maturity of the Columbia School approach (henceforth CS).1

1. In its early years, the CS framework was known as Form-Content analysis.

The first volume in this series, Meaning as Explanation, edited by Ellen Contini-Morava and Barbara Sussman Goldberg, was published in 1995. In the introduction, Contini-Morava offers a cogent description of CS linguistics, highlighting the differences between it and generative grammar, the predominant theoretical paradigm at that time. But times change. Since Contini-Morava’s writing, the theories that alongside CS occupy what Butler and Gonzalvez-García (2005, 2014) call the functional-cognitive space have continued to grow and contribute to our understanding of language. Accordingly, in the second in the series of CS volumes (Reid, Otheguy, & Stern, 2002), the introduction by Wallis Reid included a brief section noting points of contact with Cognitive linguistics, a theme continued in the introduction (Kirsner, 2004) to the third volume (Contini-Morava, Kirsner, & Rodríguez-Bachiller, 2004), intended by its editors as a full dialogue between the two approaches. Continuing that trend, we offer here an introduction to CS linguistics that recognizes its similarities with Cognitive linguistics, and highlights some defining differences from that framework. This introduction will range over a wide array of topics, including the role of communication in linguistic analysis, the nature of meaning, the status of traditional constructs, the nature of explanation and of data itself, inference, the role of metaphor, and more.

A comparison between CS and Cognitive linguistics is quite fitting at this time, as that important theoretical paradigm is now, and has been since the 1990s, well established (Nerlich & Clarke, 2007). Cognitive linguistics itself comprises a great many practitioners and, in some cases, disparate viewpoints (see Geeraerts & Cuyckens, 2007; Evans & Green, 2006; and Croft & Cruse, 2004 for overviews). Consequently, this introduction cannot exhaustively cover the extensive bibliography of that framework, but can only situate CS in comparison to it in terms of general principles and some illustrative analyses.

CS linguistics, along with other functional/cognitive theories, is thriving in the 21st century. A major milestone for CS was the publication in 2012 of nearly 600 pages of previously difficult-to-find and unpublished manuscripts of William Diver’s original work (Huffman & Davis, 2012). The 2000s have also seen the publication of three compiled volumes: the two we have mentioned (Reid et al., 2002; Contini-Morava et al., 2004) as well as Davis, Gorup, and Stern (2006).
In addition, two book-length CS works have appeared, one offering an analysis of a grammatical problem in Italian (Davis, 2017) and the other studying a lexical problem in English (Sabar, 2018).

2. The role of communication in the formulation of theory

Despite some assertions to the contrary (e.g., Chomsky, 2002, pp. 76ff; Chomsky, 2012, pp. 12ff), the function of enabling communication is widely recognized by laymen and linguists alike as central to language. In CS, the centrality of the role of communication guides the formulation of grammatical hypotheses and constrains the form these hypotheses can take. Diver put it in these terms:

It is no novelty to associate language with communication. The critical point, however, is not whether language is used for communication (or the extent to which it is used for purposes other than communication) but whether its very design and structure are directly motivated by the act of communication. (Diver, 1975/2012, p. 47)


Systems of communication as diverse as Morse code, musical notation, and traffic lights are known to be systems of signs; that is, communication is effected through the use of signals paired with meanings. It follows then from the communicative orientation that an operational hypothesis regarding linguistic analysis is that the basic structural unit of language is the sign, that is, the pairing of a linguistic form (a signal) and its meaning. The conception of linguistic structure in terms of signs is reminiscent of Saussure (1916).2

Speakers and writers can be assumed to be deploying signs to help them communicate the messages they wish to express, and in so doing can be observed to be producing non-random patterns (also referred to as asymmetries) in the sounds of speech or the marks of writing. It is the task of CS linguists to account for these observable patterns (of sounds or written marks) by advancing testable hypotheses regarding the exact nature of signs (forms and their meanings) that speakers or writers appear to be using in their production of spoken and written language.

While Cognitive linguists also acknowledge that the communicative function is important for language, Geeraerts and Cuyckens (2007, p. 5), in their introduction to an overview of Cognitive linguistics, state that “the primary function of language is categorization.” By contrast, CS linguists see communication as the organizing principle of language, a point that we will clarify below, and which is further developed in Wallis Reid’s paper in this volume.

A second, widely recognized fact, referred to as the ‘human factor,’ is that language is used by human beings with specific characteristics, abilities, and limitations (Diver, 1975/2012). This postulate also guides, and constrains, the CS conception of grammar. Cognitive linguistics shares with CS the idea that language is an instance of human behavior and is therefore expected to share characteristics that are found in other aspects of human activity and cognition. In general, it is a focus on commonalities between language and cognition that is at the center of Cognitive linguistics.

These two axioms, that the structure of grammar is shaped by its communicative function and by the traits of its human users, are the two major orientations that delimit CS linguistics. An orientation is something known prior to, and independently of, the study of a range of observations, which tells us what types of constructs the analyst can, and cannot, assume to be components of description and explanation.

2. For analysis and comparison between Saussurean and Diverian concepts of the sign, see Davis (2004a) and Reid (2006).




Thus, in the CS approach, grammars are at their core semiotic, consisting of large structured inventories of signs. These signs are found in the form of lexical stems as well as grammatical forms such as morphological affixes and word orders. Cognitive linguistics also represents a semiotic approach, in that it sees grammar as composed of form-meaning correspondences (Goldberg, 1995; Langacker, 1987). In describing Cognitive linguistics, Taylor (2007) notes its “commitment to a symbolic view of language” and contrasts it with theories that “treat grammatical constructions as meaningless” (p. 567).

An additional commonality between the two approaches is of considerable theoretical significance. Neither CS linguistics nor Cognitive linguistics addresses utterances in terms of sentential truth conditions. Rather, both approaches see grammar as a matter of conceptualization, and consider the full import of what Fillmore (1985) calls ‘understanding’, which Croft & Cruse (2004, p. 8) describe as “the full, rich understanding that a speaker intends to convey in a text and that a hearer constructs for that text.” This ‘understanding’ is virtually equivalent to what CS calls ‘messages’.

These similarities between CS and Cognitive linguistics led Langacker (2004, p. 56) to say that he regards the differences between CS and Cognitive Grammar as being “less fundamental and less important than they might appear to be on superficial examination. This has the consequence … that I consider CS to be part of Cognitive linguistics ….” However, the present introduction, and any careful comparison of these two frameworks, requires that we go beyond the question of equivalence or category membership. For there still exist many significant and fundamental theoretical and analytical characteristics that are unique to CS and to which we now turn.

3. Signal, meaning and message

The title of the second volume in this series, Signal, Meaning, and Message (Reid et al., 2002), highlights a central feature of CS linguistics, namely the distinction that is made between the semantic contributions of linguistic forms on the one hand, and the actual results of communication on the other. More so than in other approaches, and in a way quite different from the familiar semantic-pragmatic distinction, a sharp line is drawn in CS between the meanings posited for grammatical and lexical signs on the one hand, and the inferred interpretations that result from the deployment of these signs in acts of language use on the other.

The CS distinction between meanings in the grammar and contextually derived messages in the communication is quite different from the approach taken within Cognitive linguistics, a usage-based model of language where “it does not make sense to draw a sharp distinction between what is traditionally called ‘competence’


and ‘performance,’ since performance is itself part of a speaker’s competence” (Barlow & Kemmer, 2000, p. xi). On this account, CS does line up closer to generative grammar than to Cognitive linguistics (Boogaart & Foolen, 2015, pp. 216–217). Albeit with vastly different conceptions of the general form of the linguistic system – as well as different conceptions of the role of the guiding orientations, the functions of language, and of the role of truth conditions and sentential meaning – CS is akin to generative grammar in distinguishing between underlying system and overt use (Huffman, 2012, p. 9), where the underlying system is the inventory of lexical and grammatical meanings, and use is the deployment of these meanings in acts of communication.

This distinction is captured with the terms introduced above, namely meaning and message. In CS, meaning is a technical term, referring to the constant semantic contribution made, each time they appear, by lexical items, grammatical formatives, and certain facts of word order. By contrast, messages are the result of the deployment of linguistic units, along with vast amounts of non-linguistic information, within complex interpersonal and sociocultural contexts. That is, messages (what Fillmore called ‘understanding’) are the ongoing interpretive results of the use of lexical and grammatical resources.3

We can illustrate this CS distinction between meaning and message with the lexical item eat, which we will assume, for current illustrative purposes, always has a stable meaning related to ingestion. With this single, invariant meaning, the form can be interpreted in many different ways, depending on context: eating a sandwich (taking bites of something most often held in the hands), eating a steak (using knife and fork), eating carrots (crunching), eating soup (no chewing) or eating one’s words (no physical ingestion). There are many other parameters along which the messages associated with eating may vary: one may eat quickly or slowly; standing up, sitting down, or even lying down; when one is hungry or not; a lot or a little; and many more. This wide range of interpretations is part of the messages that may be communicated, even though we can assume that in all these examples, the lexical meaning of eat remains constant.4

Because of the complexity and multi-dimensionality of communications, messages are not understood as singular, discrete, or even identifiable in their entirety. It is therefore often more felicitous to speak in terms of message elements, fragments, or parameters, and perhaps even better to speak in terms of messaging, the

3. A third category, which is different from both meaning and message, is the scene, which is the objective or referential reality that in some theories determines the truth value of utterances. CS rejects this referential reality as a basis for linguistic analysis (Diver, 1975/2012, pp. 48–49).
4. This sketch of a description for eat is for illustrative purposes only; for a complete and validated CS lexical analysis see Sabar’s (2018) study of look, see, seem, and appear.




ongoing ever-shifting results of the deployment of linguistic resources in acts of communication.

4. Monosemy

As we have said, CS linguists aim to explain observed distributions by postulating signs, that is, by offering testable hypotheses about signals with invariant meanings. Given a finite number of signs and an infinite number of possible communications, each meaning must be used for a wide variety of communicative purposes (as in the case of eat in Section 3). CS analyses begin with what Ruhl (1989) calls a monosemic bias: researchers’ initial efforts are directed toward identifying a form and a single meaning for it, a meaning whose sparse semantic contribution can be interpreted in multiple ways, leading to contextually conditioned, inferentially mediated message elements. Meanings are thus stable features of the linguistic system, whereas messages are unique, momentary, complex and multifaceted, and always context-dependent.

The authors of Saussure’s Cours remarked on the puzzle that one can adopt a fashion and adopt a child, yet they proposed that adopt is nevertheless the same sign in both instances (Saussure, 1916, p. 108). Diver would have taken the analysis further; he would have pointed out that there are many other things to be adopted besides fashion and children (such as views, methods, rules, etc.) and would have aimed to focus on the similarities among these uses to hypothesize a semantic constant that adopt invariantly contributes to the many different messages involved in its use.

The theoretical conceptualization of meanings and messages within CS bears restating, with emphasis on the question of what is the object of linguistic analysis. As we have seen, a meaning (together with its signal) is a linguistic unit hypothesized by the analyst for the purpose of explaining the observed non-random distribution of sounds in speech or marks in writing. The nonrandom distributions are the facts to be explained, while the meanings (and their signals) are the testable hypotheses that explain them. Messaging is what speakers are assumed to be doing when producing speech or writing, but messages are not the object of study. That is, it is important to forestall misunderstanding and to stress that messages are not what’s being explained. The object of linguistic analysis is not the messages themselves, but rather, it is the distribution of sounds or marks generated in acts of messaging. Messages serve as the basis for the testing of meaning hypotheses, which are put forward in order to explain the distribution of forms (or word order). The meaning must help in some way to communicate the message; however, the CS analyst has no analytical responsibility for messages beyond that. In his paper in this volume, Wallis Reid resolves this apparent paradox, and explains that while the


role of communication is central in CS, the framework does not provide a theory of communication. That is, CS offers a theory of observed distributional patterns (individual sounds and groups of sounds, or written marks) in speech or writing. The theory assumes that the patterns, which are referred to as asymmetries, are the result of speakers’ and hearers’ acts of communication. It is those observed patterns, generated by language use, and not the ongoing communications themselves, that are the object of study. In the examples seen earlier, the object of study would be the distribution of the lexical item (eat, or adopt), not the content of communications involving what is eaten or adopted, when, where, how, why, etc.

In proposing signs, the CS linguist usually starts with the tentative assumption that, in most cases, the available morphemic analysis is accurate; that is, the linguist begins with the expectation that the signal has been correctly identified. In practical terms, the linguist is often investigating, not so much the non-random distribution of sounds or marks, but of tentatively identified signals. What remains, then, is the very large task of discovering the meaning, and in doing so, re-checking the correctness of the signal. Grammar is, as Diver (1975/2012, p. 54) put it, “the attempt to understand the nonrandom distribution of the signals,” an attempt that sometimes involves confirming, and sometimes disconfirming, their preliminary identity.

For instance, in this volume Angelita Martínez and Verónica Mailhes start with the initial assumption that the morphemic analysis of Spanish synthetic and periphrastic futures (e.g., cantaré, voy a cantar, both ‘I will sing’) has isolated the signals correctly. While they confirm the identity of the signals themselves as analyzed in the preliminary morphemic parsing, they find that previous work has gotten the meanings wrong (because the distribution of these signals is not as the traditional meaning analysis predicts). Instead, they propose a meaning hypothesis pertaining to the speaker’s control or lack of control over the event, which they demonstrate accounts for the forms’ distributions.

In his analysis of any and some in this volume, Nadav Sabar concludes that the any in he does not have any candy is the same as the any in anyone and anything (he has not seen anyone/anything). Sabar must determine whether the morphemic parsing, and in this case the spelling, has gotten things right, a question that can only be answered by finding and testing a meaning for any. In short, CS analyses are hypotheses about both signals and meanings, even though in practical terms the signal often is, at least provisionally, available in a preliminary morphemic analysis.

The monosemic approach of CS (monosemic in that it begins with the expectation that each of the linguistic signals hypothesized by the linguist has a synchronically stable single meaning) is diametrically different from that of Cognitive linguistics, where a polysemous view of meaning is adopted. Reid (2004, p. 93) offers a CS perspective on these different approaches:




Linguistic tradition has a three-way typology for describing the outcome of the semantic analysis of a lexical or grammatical form. The analysis may posit a single meaning for the form, as one would do for the English word barn, ‘a large wooden structure on farms for animals.’ Charles Ruhl (1989) has dubbed this monosemy. Alternatively the analysis may treat the form as two or more independent linguistic units and posit distinct and unrelated meanings for each. The traditional name for this is homonymy, and a classic example is the two English words seal the animal and seal the stamp. Finally, the analysis may treat a single form as a single linguistic unit and posit a series of semantically related meanings, or senses for it. John Taylor (1995, p. 103) cites the word neck, meaning both ‘neck of a human body’ and ‘neck of a bottle’ as a clear example of what is traditionally called ‘polysemy’.

In contrast to the monosemic bias of CS, Cognitive linguistics expects polysemy, in lexical items and grammatical constructions alike.5 Goldberg (1995, pp. 31–32) notes that:

Constructions are typically associated with a family of closely related senses rather than a single, fixed abstract sense. Given the fact that no strict division between syntax and lexicon is assumed, this polysemy is expected, since morphological polysemy has been shown to be the norm in study after study (Wittgenstein 1953; Austin 1940; Bolinger 1968; Rosch 1973; Rosch et al. 1976; Fillmore 1976, 1982; Lakoff 1977, 1987; Haiman 1978; Brugman 1981, 1988; Lindner 1981; Sweetser 1990; Emanatian 1990).

So while Cognitive linguists place the set of related interpretations within the linguistic system itself, CS places it outside, in the realm of language use. The various interpretations are then, in Cognitive linguistics, the content of the form (of the word, the affix, the construction). But not so in CS, where the content of the form is only the sparse and constant semantic contribution (the meaning) that it makes to the interpretations. In Cognitive linguistics, the interpretations are the multiple senses that constitute the forms’ polysemous content. In CS, however, these senses do not form an enumerable set, but instead are the ad-hoc, inferentially derived, and contextually conditioned messaging associated with the use of a form.6

The CS distinction between meaning and message is not the familiar distinction between semantics and pragmatics. In the usage of many linguists, semantics is the study of the meaning of sentences, and pragmatics the study of their use.

5. Goldberg (1995, pp. 9–21) points to theoretical advantages (in relation to lexicosemantic accounts), as well as evidence from sentence processing and child language acquisition, in support of the polysemic approach.
6. From a methodological standpoint, polysemy may be more difficult to falsify than a monosemic analysis, as additional senses can be added without challenging the original analysis.


That is, a sentence is seen as a linguistic object with a literal meaning that is then pragmatically enhanced. By contrast, in CS there is no call to study the meaning of sentences because, in the CS view, sentences are not linguistic objects, and utterances themselves have no literal meaning. Meaning resides only in the constitutive elements of an utterance, and none of the contextual inferences of an utterance is seen as given directly by the linguistic forms themselves. This then eliminates the need to distinguish a basic semantics from a derived pragmatics.

5. Rejection of the a priori constructs and categories of the sentence

An important feature of CS linguistics is the rejection of traditional constructs handed down from Greek philosophy and logic (see Diver, Davis, & Reid, 2012, pp. 430–437; García, 2009; Otheguy, 2002). Diver has demonstrated, and experience has shown, that the postulation of signs is only successful when it does not begin with the familiar categories of the sentence such as parts of speech, subjects and objects, subject-verb agreement, and even the cornerstone construct of the tradition, the sentence itself.

Without the familiar guideposts to linguistic structure provided by the tradition, analysts must approach the units invoked in previous studies with skepticism, a skepticism often confirmed when the categories are shown to interfere with the effort to advance successful hypotheses about the identity and content of signs. For instance, Reid (1991) finds that the category ‘question’ is not part of the grammar of English, as the signals that contribute to the formulation of questions are the same as those used for other communicative purposes (e.g., is it true that he knew? cf.: but not only is it true that he knew; does he want to sell the house? cf.: still less does he want to sell the house). Similarly, Davis (2017) shows that the distribution of Italian si can be accounted for by rejecting the categories of ‘reflexive’ and ‘impersonal’; Stern (2001, 2004, 2006) shows that the distribution of the English -self forms can best be understood when we set aside the category ‘reflexive’. And Reid (1991, 2011) shows that only by dispensing with ‘subject-verb agreement’ is it possible to come to an understanding of the distribution of the suffix -s (and -es) in English.

Inherited traditional categories can be admitted to CS analyses only if they fit the distributional facts. Accordingly, CS work by Contini-Morava on Swahili (2002) and by Reid on Spanish (2018) concludes that the traditional category of gender (but not the category of agreement) is in fact a category of these languages, even if it is in need of considerable revision and elaboration. These CS linguists did not set out to analyze gender, but rather, they found that the distribution of sounds (or written marks) in Swahili and Spanish could not be accounted for without postulating word classes. The point is simply that CS linguists do not begin with traditional




categories; they only accept into analysis those constructs that can be shown to be exclusively associated with actual facts of morphological or word-order distribution in the languages for which they’re postulated.

Scholars working within Cognitive linguistics such as Croft (2014) have made similar observations, noting the non-universality of the categories handed down from the grammatical tradition. Haspelmath (2010, p. 664) writes:

Each language has its own categories, and to describe a language, a linguist must create a set of descriptive categories for it …. These categories are often similar across languages, but the similarities and differences between languages cannot be captured by equating categories across languages.

Nevertheless, it should be noted that in practice many Cognitive analyses do lean on the familiar constructs of the tradition. Goldberg (1995, 2006) describes utterances in terms of sentential categories such as Subject, Object1 and Object2, and also in terms of constructions she calls Topicalization, Indefinite determiner, and Plural. None of these categories result from empirical observations or testing; rather, Goldberg has assumed, prior to her analysis, that they exist. Langacker’s (2004) analysis of double subject constructions refers not only to the notion of (double) subject, but also to higher-level clause, active sentences, and future tense, all of which are constructs that he has found useful, but that have not been postulated as the result of empirical analysis. If the traditional notions are not linguistic objects, and are not legitimate accounts of linguistic facts, what are they? From his earliest work, Diver characterized the categories of the tradition as descriptions of types of communications (Diver, Davis, & Reid, 2012). If, instead of undertaking an analysis of linguistic forms, we analyze the messages associated with sentences as propositions, we may indeed find subjects, reflexives, predicates, direct objects, and more. Crucially, however, these units match neither the morphology nor the word order of actual languages (even for the languages they were created to describe). And Otheguy (2002) argues persuasively that we find these units when we observe language use only because we are looking for them. In fact, CS analysts have shown that the traditional categories may actually provide a mischaracterization, as they classify messages in predetermined ways (based on traditional grammar) rather than on the myriad ways messages might be described. For instance, Huffman (1997, p. 
138) shows that grammarians undertaking the study of French lui are misguided by descriptions of the message in terms of ‘dative of the advantaged’ and ‘dative of the disadvantaged’. Similarly, what Davis (2004b, 2018) aims to show is much more than that Italian si is not a reflexive or an impersonal in the grammar. Davis shows that impersonality or reflexivity (which others might propose as strategies of use for si) are not even

Introduction: Columbia School in the functional-cognitive space

accurate characterizations of the communication. In a striking demonstration, Davis (2017, pp. 12–20) shows that when utterances that contain Italian si are translated into English by bilinguals, the resulting messages have nothing in them that anyone would characterize as impersonal or reflexive. (In fact, the tradition would call them 'intransitives.') A translation, quite obviously, aims to capture elements of the original message, and yet no element of reflexivity or impersonality is found in actually documented translations of si, casting considerable doubt not only on the role of impersonal and reflexive as units of the language, but even on their role as categories of the communication. After stating that impersonal is not a category of the language, Davis puts it this way:

But the category impersonal is also not even a very good category for talking about the kinds of messages that Italian writers convey, not even very good, that is, for talking about interpretation. In Columbia School terms, impersonal is not even a 'message parameter' … To think of a certain passage of discourse as containing an impersonal reference is largely to miss the writer's point. The actual message has a finer grain than that…. (Davis, 2017, p. 36)

For this reason of empirical failure, then, as well as out of principles of theoretical coherence, CS analyses begin not with grammatical categories, but with the observation of distributional facts and the postulation of language-specific signs.7 Several papers in this volume illustrate an approach that dispenses with the categories of the tradition. Eduardo Ho-Fernández offers an analysis of the Spanish form que in all its uses, without first segregating the specific syntactic or grammatical functions that have been handed down from traditional grammar. Ho-Fernández does not ask, "What is the relative pronoun in Spanish?" or "What are the interrogative pronouns in Spanish?" Instead, he looks with fresh eyes at the form que itself, to determine whether it is a signal and, if it is, to discover a meaning that will account for its distribution. Similarly, Xuehua Xiang brings a CS perspective to the analysis of Mandarin jiù (就), cái (才), and zhǐ (只), and identifies invariant meanings for these three forms in their preverbal positions. (She postpones for future study a full analysis of jiù in all its uses.) And Nadav Sabar's analysis of any and some considers attested uses of these forms without reference to pre-established logical categories such as the a priori notion of quantification on which previous analyses have relied. Sabar compares his own analysis to that of Langacker (2017), whose goal is to determine the semantic conceptualization of these forms that leads to the effects of existential and universal quantification.

7. It may also be the case that one will discover inductively that similar meanings recur in many languages.




As we have seen, CS denies the relevance not only of the categories of the sentence, but also of the sentence itself. Many papers in this volume explicitly reject sentential boundaries as the scope of linguistic analysis. Bob de Jonge's paper on the two Spanish forms that the tradition calls the preterite and the imperfect (Spanish pretérito indefinido and pretérito imperfecto) validates a previous CS hypothesis that relates the meanings of these forms to the discourse status of information.8 Such a discovery, and the literary analysis it allows, would not be possible with sentence-level data. The paper therefore illustrates the importance of looking beyond the boundaries of clauses and sentences to understand linguistic structure. Berenice Darwich's study of the appearance of pronouns in positions of continuing reference (that is, non-switch reference) also depends on discourse-level data. (In fact, the variationist observations regarding switch reference are themselves the product of analysis that goes beyond the level of the sentence, but that is another story.) Similarly, Xuehua Xiang, in her investigation of the Mandarin particles that we have mentioned, looks beyond the boundaries of sentences to understand the communicative import of these forms.

6. Description and explanation

The history of linguistics, according to Roy Harris (1988, p. ix), is the history of "conflicting views as to how we should set about the analysis of language," a position that informed the work of Saussure (1916) and is echoed in contemporary accounts such as Agha (2007) and Pennycook (2018). In this context, Diver was explicit regarding his view of the goal of grammatical analyses: to provide an explanatory theory of the acoustic asymmetry of the sounds of speech or the marks of writing. As we have noted, Reid's contribution to the present volume unpacks this notion.
In CS and Cognitive linguistics, theory becomes explanatory when it is connected to independently known factors. Within Cognitive linguistics, similarities between language and cognition are central. Langacker (1991/2002, p. 24) describes the goal of linguistic analysis as “an accurate characterization of linguistic knowledge as an accurate characterization of human cognition,” and Goldberg (2006, p. 12) states, “cognitive and functional linguists tend to seek out generalizations that apply beyond language whenever these can be justified; a goal is to posit as little that is specific to language as possible.”

8. There are other labels used for these verb forms as well. For example, the pretérito indefinido is sometimes called the pretérito perfecto simple. However, de Jonge relies on terms that do not carry unwanted connotations regarding the forms' meanings.


CS analyses offer an explanation for linguistic phenomena by positing signals and meanings that account for speakers’ (and writers’) deployment of linguistic forms. And here too, a title of one of the volumes in this series, Meaning as Explanation (Contini-Morava & Goldberg, 1995), sums this up: meanings (invariant semantic contributions) are posited to explain the asymmetries of speech and writing. That is, “[m]eaning is the explanation, not the object of explanation” (Davis, 2017, p. 240). Diver believed the linguist’s theory of language ought to be parallel to the molecular theory of the chemist, the atomic theory of the physicist or the planetary theory of the astronomer: “[A] theory is a claim that there exists some internal, unobservable factor of which the observable phenomena are external manifestations” (Diver, 1969/2012, p. 158).9 In CS, the communicative and human orientations constitute the known factors that earn hypotheses about signs their explanatory power. The communicative orientation not only motivates the sign as the basic structural unit, but also entails that the hypothesized signs themselves can be tested against the communicated messages to which they contribute. Similarly, when it comes to phonology, the purpose of CS analysis is to explain the distributions of phonological units within signals, also drawing on independently known orientations, in this case human physiological and psychological characteristics. In this volume, Juliette Blevins notes that Diver (1974, 1979) was ahead of his time in formulating a phonology that considered both phonetic and communicative factors. Comparing CS Phonology with the theory of Evolutionary Phonology (Blevins, 2004, 2005, 2006, 2008, 2009, 2014), Blevins notes that both theories share an interest in accounting for sound patterns on the basis of articulation, perception and cognition. 
Nevertheless, she offers a cogent critique of CS work on phonology, covering not only Diver but also Tobin (1997, 2011) and Dekker and De Jonge (2006), observing that CS phonology has not integrated more recent empirical and typological work in phonetics.

9. This tenet is in clear opposition to the premises of generative linguistics. For Chomsky, explanation is believed to be in the link between generality of description and innate language-specific features of the human mind, known as Universal Grammar (UG). The trouble, of course, is that UG is unrelated to anything we independently know to be true, either about language, communication, or cognition. The formal descriptions of the generativists thus link the observations to an unknown, not to a factor that can anchor an explanation.



7. The data of linguistic analysis

Hopper (2007, p. 239) provocatively notes that the primary theoretical division in the field of linguistics "does not divide autonomous grammarians from all the rest, but rather segregates those who work with made-up introspective data from those who use a database of living samples of spontaneous language." CS falls squarely in the camp of those who study samples of spontaneous language, and in fact, takes that dataset quite seriously as the object of explanation, aiming, as we have seen, to account for what has been said (and written). CS analysts rely on neither judgments nor intuitions about what people can say or might say; instead, the analyst's responsibility is to explain the occurrence of forms in attested written or transcribed texts. Hopper further asks, in a challenge to the grammaticality judgments of generative grammar:

Is it a question of what people are able to do under optimal conditions of isolation and setting and education and literacy, or is it rather what people habitually say under ordinary everyday conditions? If, as I think, the investigation of language includes the crucial question of why grammar is the way that it is, it would seem that a study of how people habitually talk with one another must be given priority over what they could say in idealized circumstances. (Hopper, 2007, p. 239)

Here again there is some divergence worth noting between CS and Cognitive linguistics. In the CS view, the signal-meaning pairs of language are viewed as communicative resources that speakers deploy – sometimes in routinized ways, and at other times, in surprising and completely novel combinations. The analyst is responsible for all this data, and in fact, CS linguists have found that what seem like unexpected or unusual uses are often quite enlightening (see Davis, 2017; Reid, 1991, 2011; Stern, 2004, 2006, 2019, for examples). The use of attested data is, of course, not limited to CS linguists. Many scholars deal with naturalistic linguistic data. The sociolinguistic research of William Labov and his followers has demonstrated the importance for linguistics, as for any other human science, of dealing with what people actually do rather than with their intuitions about what they think they do. Longitudinal studies of child language acquisition have long been based on attested data (e.g., Brown, 1973; Bloom, 1970), and the important stream of research by Joan Bybee and her followers is another example of the successful use of speech or writing as data (e.g., Bybee, 2010, 2007, 2006, 2001). In addition, the field of corpus linguistics, which addresses attested language use, has grown because of the availability of large-scale sets of naturalistic data (Biber, Conrad, & Reppen, 1998).10

10. It is worth noting that many practitioners see corpus linguistics as a methodology, rather than as a theoretical framework (McEnery & Wilson, 2001, p. 3).


Nevertheless, constructed sentences continue to form the dataset for most grammatical analyses, even among functionalists. Nuyts (2007, p. 553), himself a Cognitive linguist, observes that Cognitive analyses "draw on the linguistic practices which have become established since the generative revolution in the fifties and sixties, namely, to use artificial examples or natural examples which have occasionally or accidentally been picked from written or spoken discourse." The type, size, and general nature of the texts used in CS analyses have varied considerably and continue to change as different forms of data become available.11 Some CS analyses are based on individual texts or sets of texts. For instance, one of the earliest papers on CS, 'The System of Relevance of the Homeric Verb' (Diver, 1969/2012), is based on an analysis of three passages from the Iliad, while Huffman's (1997) book on French lui and le is based on the analysis of a set of 40 contemporary French novels. In this volume, Bob de Jonge's analysis of Spanish tenses applies a CS grammatical analysis to a single short story. CS scholars choose texts for analysis in different ways; for example, Angelita Martínez and Verónica Mailhes study two newspapers during a two-week period. Other CS linguists now work with large-scale corpora. Nadav Sabar continues his innovative use of the Corpus of Contemporary American English (COCA, Davies, 2008-) and Xuehua Xiang collects Mandarin data from both Academia Sinica Version 4.0 and Gigaword2all. In all cases, the CS analyst is responsible for accounting for the distribution of attested forms. An important feature of CS research is that qualitative analyses of individual examples often lead to predictions pertaining to quantitative data, which are used to test signal-meaning hypotheses. Quantitative analysis can be conducted on corpora of various sizes, from individual texts of various lengths, to groups of texts, to large-scale electronic corpora.

11. An important point to make here is that naturalistic data is not an end in itself. CS studies the natural production of speakers and writers not to discover, arrange, or quantify distributional patterns in the data, but to explain these regularities by means of the postulation of signs. More precisely: attested data are to be explained by competence or system, where the competence, or the structure of the linguistic system itself, is the inventory of signs.

Through such predictions, previously unknown facts of distribution may be discovered. In this way, CS analyses produce new data, and more importantly, these data are accounted for on the basis of the signal-meaning hypotheses that generated the prediction. This point bears repeating: quantitative analyses are not exercises in descriptive linguistics; rather, they aim to confirm, or disconfirm, predictions that follow from analytical claims. Several papers in the present volume include quantitative data in support of hypotheses formulated on the basis of tokens found in the examination of texts. Nadav Sabar's meaning hypotheses for any and some account for a set of individual examples that he analyzes in detail. This qualitative analysis generates predictions



that are then tested on COCA. That is, Sabar's quantitative predictions are made on the basis of prior qualitative analyses of attested examples, in which he observes that certain other forms that co-occur with some or any provide support for his meaning hypothesis. (For example, he supports his meaning hypothesis by showing that while others co-occurs with both some and any, it is found much more frequently with some, a previously unknown distribution that is explained by his proposed meaning.) He tests predictions that such co-occurrence patterns appear regularly in the COCA corpus, and that they represent a strategy for using the meaning of some. The logic of Sabar's quantitative procedure bridges the gap between qualitative and quantitative analysis, and spells out precisely how quantitative validation offers support for a meaning hypothesis.12 The paper by Xuehua Xiang follows the same methodology to test the meanings she proposes for the Chinese forms jiù, cái, and zhǐ. Elisabeth Mauder and Angelita Martínez also test hypotheses using quantitative data; they draw from the CREA database of the Spanish Academy (Corpus Referencial del Español Actual), which consists of a collection of written and spoken sources from multiple Spanish-speaking countries.

8. Inference: The link between meaning and message

There is broad agreement among all linguistic theories that much more is communicated than is encoded in language. Langacker (2004, p. 44) notes: "what we understand from an expression – on the basis of context, implicature, or interpretive strategies – is often (if not always) more extensive than anything we could reasonably identify with its linguistic meaning, even in a broad sense." It is this fact that forms the traditional distinction between semantics and pragmatics, a distinction that, as we have seen, does not apply to analyses in CS.
In the conceptualization of the gap between meaning and message, the difference between CS and other theories is significant enough to look at here as a separate issue, even at the cost of some repetition of points made above. In the CS view, language does not establish or create meaning at the sentence or construction level. CS meanings (in Saussure’s terms, signifiés) are understood to be ‘instrumental,’ that is, meanings are tools that speakers deploy to achieve their communicative goals. Analytical experience has shown that the meanings of linguistic forms are but sparse hints. Language users must rely on local and global contexts, social setting, common sense, and life experiences to infer messages that are always

12. Davis (2004b), in a paper titled ‘Revisiting the gap between meaning and message,’ takes issue with the logic used by Sabar.


under-determined by the semantic input.13 Thus, utterances do not have interpretations that are derived from semantics alone; instead, the only way that hearers can understand any utterance at all is through substantial processes of inference. As noted above, these processes of inference, and resulting messages, are not seen as part of the linguistic system itself; they are merely part of the use of language. This CS conception of monosemic linguistic signs that serve as hints to communicated messages is quite different from the embrace of polysemy that is a hallmark of Cognitive linguistics (Geeraerts & Cuyckens, 2007, p. 5). Cognitive linguistics proposes meanings for lexical items and for constructions whose "semantics … can best be represented as a category of related meanings" (Goldberg, 1995, p. 33). Bybee (1998) notes, "When the same pattern of inferences occurs frequently with a particular grammatical construction, those inferences can become part of the meaning of the construction" (p. 266). In the Cognitive view, a form has a number of related but distinct senses, which in CS would be analyzed as results of the interpretation of the invariant meaning of the form in different contexts. Also incorporated into linguistic meaning in Cognitive linguistics is an encyclopedic view of the world: as "language is a system for the categorization of the world, there is no need to postulate a systemic or structural level of linguistic meaning that is different from the level where world knowledge is associated with linguistic forms" (Geeraerts & Cuyckens, 2007, p. 5). By contrast, in CS this encyclopedic type of knowledge is not part of the linguistic system, but instead, is always marshaled in the interpretation of utterances, as language users derive rich and complex interpretations from sparse linguistic meanings. Thus, the differing approaches to linguistic meaning are paired with quite different views of the role of inference in the interpretation of utterances.
Diver explains the CS approach this way:

The general picture of human language is that of a particular kind of instrument of communication, an imprecise code by means of which precise messages can be transmitted through the exercise of human ingenuity. The code and the ingenuity must be kept clearly separate; most of the difficulties encountered in the various schools of linguistic analysis result, simply, from attempts to build the ingenuity into the structure of language itself. (Diver, 1995/2012, p. 445)

Langacker (2004, p. 45) notes the many points of similarity between CS and Cognitive Grammar, and observes that "[the] abstract, monosemous meanings of CS are comparable to the highest level schemas posited in [Cognitive Grammar]. Both frameworks recognize that, in actual use, expressions are understood in

13. We saw this process in the brief consideration of the meaning of the word eat. For a full-length and highly accessible explication of this notion, see Ruhl (1989).



specific ways determined by context, communicative strategies, and interpretive abilities”.14 His critique of CS centers on the gap between meaning and message: “The place where disagreement sets in is when we ask how much of our total understanding of expressions – how much of the overall message – constitutes linguistic meaning” (2004, p. 44).15 Huffman (2012, p. 12) sums up Langacker’s position: One of Langacker’s main objections to the CS approach is that CS has eschewed polysemy and refused to build into its grammatical hypotheses aspects of interpretation that go beyond the posited meanings themselves.

The place of interpretations of utterances in both theories is considered in the following section.

9. Communicative strategies

In a cogent observation, Langacker notes that "a pivotal question is whether CS does in fact accept the learning and conventionalization of certain communicative strategies and interpretations" (2004, p. 45). He further notes that within CS, there is some disagreement on this point. CS scholars including Contini-Morava (1995), Diver (1975/2012, 1995), Huffman (1997), Kirsner (1989), and Reid (1995, 2011) use in their analyses the concept of communicative strategies, or strategies of use, to describe the conventionalized ways in which language users deploy the signals and meanings of their linguistic system. The concept of strategies captures the fact that speakers routinely exploit the meanings of their linguistic system in specific ways. Contini-Morava (1995) explains:

14. A ‘schema’ is defined by Tuggy (2007) as “a subordinate concept, one which specifies the basic outline common to several, or many, more specific concepts” (p. 83). This may appear to be in contradiction to the quote from Goldberg (1995) in Section 4, that “Constructions are typically associated with a family of closely related senses rather than a single, fixed abstract sense.” However, Goldberg recognizes that constructions lie on a continuum from the most idiosyncratic to the most general. Idiosyncratic constructions are (more) specific to certain lexical items, such as kick the bucket, compared with general constructions such as NP V NP, which may correspond to what Langacker calls schemas. 15. Reid (personal communication) points out that Langacker’s criticism of CS may stem from an unstated acceptance of a compositional view of sentence meaning, in which multiple senses are posited for every lexical and grammatical form. The sum of the operative senses of all the forms that comprise an utterance will then be its interpretation (i.e., its ‘meaning’). This view also reflects the assumption that speakers can introspect about the interpretation of individual words in an utterance i.e., that it makes sense to speak of the interpretation of a single word.


A ‘strategy’ is a routinized exploitation of a given meaning, so that it is regularly used to suggest/infer a particular type of message. A postulation of such conventionalized patterns of inference is justified by appeal to the human preference for habit or routine…. (p. 519)

To illustrate the notion of strategies we can consider a simple example, that of French pronouns. As in many other languages, there are distinct pronouns that signal second person in the singular and second person in the plural (tu and vous). But the distribution of these forms does not line up as neatly as their meanings would suggest, as vous is frequently used to refer to singular referents. This is due to a well-known communicative strategy, in which plural pronouns may be used for singular referents when the speaker wishes to express politeness and respect. While vous can be analyzed as having a fixed, invariant meaning of second person/plural, there is a communicative strategy in which this meaning is conventionally used to refer respectfully to singular referents.16 In using the meaning 'more than one' of vous to refer to a single person, speakers are verbally lowering their eyes, avoiding direct verbal contact. The effect of respect comes from the choice not to single out the interlocutor with the meaning 'one' of tu and thereby to name her or him directly, but rather to refer to the interlocutor indirectly, as a member of a larger group. Although strategies represent conventionalized uses, they are not explanations; that is, they do not replace the need to state how the meaning of a sign contributes to the communication of messages on individual occasions. The fact that signs are used in conventionalized ways is still consistent with the fact that sign use is ultimately contingent, variable, and always dependent on the complex array of speakers' communicative goals and the vast multifarious contexts in which they are used. Kirsner (1989) argues that the notion of strategy allows one to understand how the inferential gap between meaning and message is bridged.
In contrast, Davis (2004b, 2017) argues that CS analyses do not need the notion of strategy, since it suggests, contrary to fact, that there are systematic or algorithmic ways in which speakers relate particular message effects to linguistic meanings. That is, Davis holds that in every instance of use there are so many considerations leading to the selection and interpretation of linguistic meanings, and so many varying and subtle elements of messages, that these considerations can never be even partially formulaic:

It is probably fair to say that messages cannot be categorized. Every example in authentic discourse is unique; practically every combination of linguistic meanings is novel. Even when the same construction is used over and over, its effects in various situations – or even in the same situation – vary widely. (Davis, 2017, p. 37)

16. Brown and Levinson (1978/1987) cite the use of plural pronouns for singular referents for the purpose of showing deference or distance in more than 20 languages, including French, Spanish, Italian, Russian, Greek, Yiddish, Hungarian, Quechua, Tamil, Hindi, and more.


More fully, Davis’s position (2017, pp. 226–28) is that the CS strategy is ‘theoretically extraneous,’ ‘methodologically untenable,’ rooted in ‘a failed theory of language,’ and based on ‘the wrong messages’. By contrast, Reid has argued that strategies serve as the basis for quantitative predictions. In this volume, Nadav Sabar and Xuehua Xiang, following Reid, appeal to quantitative predictions that depend on their recognition of certain strategies of use for the forms they study. Several other papers in the present volume explore the concept of strategies of use. Two papers, one by Elizabeth Mauder and Angelita Martínez, and the other by Angelita Martínez, return to the distribution of Spanish le/la and lo. They revisit the work of García and Otheguy (1983), which accounted for the unusual distribution of these forms in Ecuador by positing not only meanings for these signals, but also strategies of use. Like García and Otheguy, Mauder and Martínez illustrate that speakers may have the same grammatical system (that is, forms may have the same meanings), but their distribution may be different on account of the ways in which speakers deploy those meanings; due, that is, to identifiable differences of strategies. 10. Grammar and lexicon While neither Cognitive linguistics nor CS describes a strict division between grammar and lexicon for analytical purposes, both theories have recognized a continuum between these two types of linguistic units. Historically, Cognitive linguistics has attended to both grammatical and lexical analysis, while CS has focused on grammatical forms. However, this is not without exceptions (e.g., de Jonge, 1993; Tobin, 1995; Sabar, 2018). 
Diver’s view, in the early days of CS, was that it would be more analytically fruitful to begin by identifying the contribution of grammar, which would then lead to a more accurate analysis of the contribution of the lexicon:17 For example, in He left the house windowless, there is a natural temptation to suppose that the semantic contribution of the lexical unit left includes a causative component, that the factor of becoming windowless is to be attributed to the verb, the part of speech that, by definition, indicates an action. But when we discover

17. Although he did not carry out lexical analysis, Diver (1995, pp. 98–99) speculated that lexical meanings were like links in a chain: “successful use of any one link in a lexical chain does not seem to depend on a comprehensive grasp of the chain as a whole. If you learn just one link you can use that link ‘correctly.’ A later encounter with another link seems to have no effect on the use of the first one….” This approach is similar to Cognitive analyses involving prototypes and networks of related senses. Modern CS lexical analyses (e.g., Sabar, 2018) adopt a monosemic approach.


that there is a grammatical meaning signalled by the order house windowless, as opposed to windowless house, it becomes apparent that the causative effect is to be attributed to that meaning, not to the verb. (1995, p. 97)

Sabar (2018) supports the grammar-lexicon distinction, but has argued that in the CS framework a grammar-lexicon continuum is untenable. He notes that grammatical forms are defined as being part of a closed system of signs, while lexical forms have meanings that are stated independently of other signs. There can be no continuum because a form cannot be only partially a member of a closed system: a form is either part of a closed system (in which case it is grammatical) or it is not (in which case it is lexical). Sabar’s (2018) analysis of English look, seem, and appear illustrates that the tools of CS linguistics can be profitably applied to the analysis of lexicon.

11. The role of metaphor

As noted earlier, Cognitive linguistics aims to study the ways in which elements of human cognition are reflected in language. In their seminal work Metaphors We Live By, Lakoff and Johnson (1980) illustrated that metaphor is not merely “a device of the poetic imagination [or] the rhetorical flourish” – it is, instead, “pervasive” in the way human beings perceive and conceptualize our world (p. 3). Lakoff and Johnson showed that our basic ideas (e.g., about emotions, time, spatial organization) as well as our more abstract conceptual structures (e.g., notions such as the mind, nations, theories, and arguments) are all structured by our understanding of them as similar to something else. To give just one brief example, Lakoff and Johnson describe the conceptualization of time as a commodity with value (encompassing the view that time is a limited resource), and show that our vocabulary about time reflects this metaphorical conceptualization: time is money; we spend time; invest time; budget our time; profitably use time; use up time; have enough time; run out of time; and say, thank you for your time (pp. 8–9).
This metaphorical conceptualization characterizes not only our ways of thinking about the notion of time, but also the language itself.18 Thus, Cognitive linguistics treats metaphor not as a peripheral aspect of language (and of cognition in general), but rather as an essential part, “an inherent and fundamental aspect of semantic and grammatical structure” (Langacker, 1987,

18. A literary or poetic metaphor draws attention to itself as a literary device by virtue of the fact that it is literally not true, e.g., You are my sunshine. A cognitive metaphor, by contrast, passes unnoticed because it springs from conventionalized conceptualizations that are an integral part of our thinking.


Nancy Stern

p. 110). In Evans and Green’s (2006) words, “metaphor is not simply a stylistic feature of language, … thought itself is fundamentally metaphorical in nature” (p. 286), and so Cognitive analyses do not generally make a distinction between literal and figurative language. Indeed, some Cognitive linguists (Gibbs, 1994; Lakoff, 1986; Ariel, 2002a, 2002b) have argued against the distinction between literal and non-literal meaning, though they continue to distinguish between encoded and interpreted meaning, referring instead to ‘salient’ (or encoded) interpretations as “the meaning that is psychologically salient irrespective of contextual appropriateness” (Coulson & Oakley, 2005, p. 3).

Before addressing the role of metaphor in CS analyses, it is important to note once again that in CS, neither sentences nor utterances are understood to have literal or encoded meanings. Linguistic meanings are properties of grammatical morphemes, lexical items, and some word orders, and none of the contextually derived interpretations of an utterance is privileged as basic or central – that is, as the literal sense. Metaphor, though, as part of the human factor, is readily acknowledged to be a regular part of the inferential process that language users bring to bear in deploying and interpreting linguistic meanings. Sabar (2018) appeals to the concept of metaphor to account for the distribution of the form look, and Contini-Morava (2002) shows that the meanings of Swahili noun-class prefixes are exploited metaphorically.

Some specifics may be helpful here. Reid (2004) appeals to metaphor to account for the distribution of the forms in, at, and on. While limitations of space do not permit a full description or explication of that proposal here, a brief overview can illustrate the role of conceptual metaphor in a CS analysis.
Reid’s hypothesized meanings for in, at, and on pertain to dimensionality in space: in signals a three-dimensional location, on signals a location on a line or a plane, and at signals a point-like location.19 Like other CS meanings, these meanings are signaled, by hypothesis, every time the forms are used. Sometimes the meanings are used to communicate messages regarding spatial relations: we were in New York, or on 5th Avenue, or at the corner of Amsterdam Avenue and 116th Street, where New York is conceptualized as a three-dimensional space, 5th Avenue is seen as a line, and a corner is a point in space. But while the meanings are invariant, the interpretations of these forms in use range widely. Reid argues that these spatially defined meanings may also be used to refer, metaphorically, to messages related to time: we met in the daytime, or at noon, where periods of time are conceptualized as types of space. That is, daytime is conceptualized as a three-dimensional container, while noon is conceptualized as a single point. Further, being on time reflects a conceptualization of time as a fixed spatial path (following from the meaning of on as pertaining to a line or a plane), shown in the following uses: we’re on schedule, the train leaves on the hour, the ship was on course.20 Again, we do not present the full analysis here or spell out in detail the inferential paths between the hypothesized meanings and the interpretations of the forms in use (messages), but merely offer this brief example to illustrate the role of metaphor in a CS analysis.

In all cases, metaphor is seen as something that speakers appeal to when they use language (signals and their meanings); metaphor is a feature of the way language users think, but it is not, in CS, hypothesized to be part of the linguistic system itself. Thus, neither CS nor Cognitive linguistics draws an analytical distinction between literal and figurative language, and metaphor is an important tool for linguistic analysis in both frameworks. In Cognitive linguistics, metaphor is seen as an integral part of cognition and therefore of language. In CS, metaphor is recognized as part of the realm of language use, and it helps to account for the ways in which people use their linguistic resources – in accordance with the human factor orientation, which includes general cognitive principles and facets of human behavior.

19. Tobin (1990) and Huffman (personal communication) have argued that the meaning of in (and, for Huffman, of at and on as well) is more abstract than Reid’s hypothesis regarding dimensionality.

12. Language-specific analyses

CS analyses focus on individual languages, not on crosslinguistic generalizations. And because the antecedently given categories of traditional grammar are avoided, the linguistic facts themselves suggest the analytical categories for each language. CS does not search for linguistic universals, because the analyst aims instead to discover language-specific meanings and categories. Diver (1995, p. 92) insists on “the need to begin from the inside and work out, and of the importance of not beginning the analysis of one language from the point of view of another.”21 To repeat just one example among the ones already mentioned, the traditional universalist category of reflexive has been shown by CS scholars to provide an unsatisfactory account of the distribution of forms in different languages; by hypothesizing language-specific signal-meaning pairs (see Diver, 1986, 1992/2012; García, 1975; Gorup, 2006; Stern, 2004, 2006), analysts have shown that the different distributions of the forms labeled reflexive can be explained. Huffman (1997) observes:

20. Reid (2004) notes that the conceptualization of time as movement along a line was discussed as far back as Whorf (1956, p. 45) and developed extensively by Langacker (1991/2002, p. 243).

21. In CS, language universals would emerge on the basis of successful analyses in different languages, not via a search for them at the outset.



[I]t takes an act of will power to refocus one’s attention at this late date on the low-level phenomenon of language-specific distributions of grammatical forms … Whatever the achievements of modern linguistics, the fact remains that difficulties with some of its most basic assumptions have not been resolved. (p. 3)

However, and especially in light of recent controversies regarding the nature of languages (cf. MacSwan, 2017; Otheguy, García, & Reid, 2015, 2018), it is important to note that Diver’s conception of the goal of linguistic analysis does not depend on the notion or enumeration of individual languages. The asymmetry of speech sounds (or writing marks) that is the object of study does not depend on a definition of an individual language, or even of an individual idiolect. Rather, hypotheses in CS pertain to whatever corpus is under analysis.

13. Named languages

CS analyses follow the rest of the field in the convention of referring to named languages (e.g., English, Spanish, Mandarin), thereby inadvertently contributing to the reification of what are clearly idealizations rather than discrete, demarcated, or definable entities. Many scholars (e.g., Harris, 1980, 1981; Reagan, 2004; Makoni & Pennycook, 2005) have pointed out that the notion of named languages is a social construction and that, in a strictly linguistic sense, there is no such thing as English, Spanish, Mandarin, Russian, Québecois, etc. Instead, what we call a language is “ultimately a collection of idiolects which have been determined to belong together for what are ultimately non- and extra-linguistic reasons” (Reagan, 2004, p. 46). But even our definition of idiolect is not of a discrete or bounded system. In accordance with Harris’s (1990, p. 45) observation that “linguistics does not need to postulate the existence of languages as part of its theoretical apparatus,” we adopt here the definition of idiolect in Otheguy, García & Reid (2018, p. 289): “the system that underlies what a person actually speaks, [which] consists of ordered and categorized lexical and grammatical features.”

Thus, CS linguistics does not depend on the notion of named languages, as analyses are conducted on attested data: if hypotheses explain the occurrence of forms in the chosen data, they are successful; there is no metaphysical appeal to an abstract entity of a named language (or dialect, or even idiolect) (cf. Davis, 2017, pp. 241–42). In this volume, contributors refer to named languages such as Spanish, Mandarin, and English. Strictly speaking, however, the analyses pertain to the data the authors have examined. The language names that have been applied to such collections are convenient socio-cultural labels, but no claim is made here about the ontological status of named languages, in spite of the convenient (though admittedly confusing) use of such terms.

14. Conclusion

The papers in this volume continue the exploration of how much we can understand about language by studying it as a communicative device whose structure consists of meaning-bearing signs, and by investigating how speakers use their linguistic resources to communicate a wide and infinitely varied range of messages. All the grammatical analyses in the present volume incorporate the key features of CS analysis that Davis (2006, pp. 9–10) observes go back to Diver’s seminal analysis of Homeric Greek (1969/2012):

• Treat observable sound (or writing) as the phenomenon to be explained
• Adopt communication, with the meaningful signal, as an orienting principle
• Adopt the human factor as another orientation
• Divide the overall task into two parts, phonology and grammar/lexicon
• Distinguish between signaled meaning and inferred message
• Employ authentic discourse as the source of data
• Correlate the appearance of signals with other features of the text
• Use understandings of messages to validate the contribution of hypothesized meanings to those messages

The papers in this volume advance these ideas by applying them to new data, and they illustrate the value of strictly applying clearly spelled-out theoretical principles in the execution of linguistic analysis. This work moves forward not only the theoretical framework of CS, but also the cognitive-functionalist enterprise, and therefore the field of linguistics itself.

Acknowledgements

I am grateful to Ricardo Otheguy and Wallis Reid for their thoughtful input and meticulous review of multiple versions of this introduction; their suggestions have made immeasurable improvements to its clarity, precision, and comprehensiveness. I would also like to thank Bob Kirsner, Joseph Davis, Andrew McCormick, Lauren Whitty, Roxana Risco, Silvia Ramírez Gelbes, Eduardo Ho-Fernández, Ruben Mazzei, and Stephany Betances for their careful reading and helpful suggestions. I am grateful for every correction, question, and exhortation to explain more clearly.


References

Agha, Asif. (2007). The object called “language” and the subject of linguistics. Journal of English Linguistics, 35(3), 217–235.
Ariel, Mira. (2002a). The demise of a unique concept of literal meaning. Journal of Pragmatics, 34(4), 361–402. (Special issue: Literal, minimal and salient meanings.)
Ariel, Mira. (2002b). Privileged interactional interpretations. Journal of Pragmatics, 34(8), 1003–1044.
Austin, John L. (1940/1961). The meaning of a word. Reprinted in Philosophical papers. Oxford University Press, 1961.
Barlow, Michael, & Kemmer, Suzanne (Eds.). (2000). Usage-based models of language. Stanford, CA: CSLI Publications.
Biber, Douglas, Conrad, Susan, & Reppen, Randi. (1998). Corpus linguistics: Investigating language structure and use. Cambridge University Press.
Blevins, Juliette. (2004). Evolutionary phonology: The emergence of sound patterns. Cambridge: Cambridge University Press.
Blevins, Juliette. (2005). The role of phonological predictability in sound change: Privileged reduction in Oceanic reduplicated substrings. Oceanic Linguistics, 44, 455–464.
Blevins, Juliette. (2006). A theoretical synopsis of Evolutionary Phonology. Theoretical Linguistics, 32, 117–165.
Blevins, Juliette. (2008). Consonant epenthesis: Natural and unnatural histories. In J. Good (Ed.), Language universals and language change (pp. 79–107). Oxford University Press.
Blevins, Juliette. (2009). Structure-preserving sound change: A look at unstressed vowel syncope in Austronesian. In A. Adelaar, & A. Pawley (Eds.), Austronesian historical linguistics and culture history: A festschrift for Bob Blust (pp. 33–49). Canberra: Pacific Linguistics.
Blevins, Juliette. (2014). Evolutionary Phonology: A holistic approach to sound change typology. In P. Honeybone, & J. Salmons (Eds.), Handbook of historical phonology (pp. 485–500). Oxford University Press.
Bloom, Lois. (1970). Language development: Form and function in emerging grammars. Cambridge, MA: MIT Press.
Bolinger, Dwight. (1968). Entailment and the meaning of structures. Glossa, 2, 119–127.
Boogaart, Ronny, & Foolen, Ad. (2015). Discussion of Robert S. Kirsner, Qualitative-quantitative analyses of Dutch and Afrikaans grammar and lexicon. Nederlandse Taalkunde, 20(2), 215–217.
Brown, Penelope, & Levinson, Stephen C. (1978/1987). Politeness: Some universals in language usage. Cambridge University Press.
Brown, Roger. (1973). A first language: The early stages. London: George Allen & Unwin.
Brugman, Claudia M. (1981/1988). The story of ‘over’: Polysemy, semantics, and the structure of the lexicon. Master’s thesis, University of California, Berkeley. New York: Garland, 1988.
Brugman, Claudia M. (1988). The syntax and semantics of ‘have’ and its complements. Ph.D. dissertation, University of California, Berkeley.
Butler, Christopher S., & Gonzálvez-García, Francisco. (2005). Situating FDG in functional-cognitive space: An initial study. In J. Lachlan Mackenzie, & M. de los Ángeles Gómez-González (Eds.), Studies in Functional Discourse Grammar [Linguistic Insights: Studies in Language and Communication 26] (pp. 109–158). Bern: Peter Lang.
Butler, Christopher S., & Gonzálvez-García, Francisco. (2014). Exploring functional-cognitive space. Amsterdam/Philadelphia: John Benjamins.
Bybee, Joan. (1998). A functionalist approach to grammar and its evolution. Evolution of Communication, 2, 249–278.
Bybee, Joan. (2001). Phonology and language use. Cambridge University Press.
Bybee, Joan. (2006). From usage to grammar: The mind’s response to repetition. Language, 82, 711–733.
Bybee, Joan. (2007). Frequency of use and the organization of language. Oxford University Press.
Bybee, Joan. (2010). Language, usage and cognition. Cambridge University Press.
Chomsky, Noam. (2002). On nature and language (A. Belletti, & L. Rizzi, Eds.). Cambridge University Press.
Chomsky, Noam. (2012). The science of language: Interviews with James McGilvray. Cambridge University Press.
Contini-Morava, Ellen. (1995). Introduction. In E. Contini-Morava, & B. Sussman Goldberg (Eds.), Meaning as explanation: Advances in linguistic sign theory (pp. 1–39). Berlin: Mouton de Gruyter.
Contini-Morava, Ellen. (2002). (What) do noun class markers mean? In W. Reid, R. Otheguy, & N. Stern (Eds.), Signal, meaning, and message: Perspectives on sign-based linguistics (pp. 3–64). Amsterdam/Philadelphia: John Benjamins.
Contini-Morava, Ellen, & Sussman Goldberg, Barbara (Eds.). (1995). Meaning as explanation: Advances in linguistic sign theory. Berlin/New York: Mouton de Gruyter.
Contini-Morava, Ellen, Kirsner, Robert S., & Rodríguez-Bachiller, Betsy (Eds.). (2004). Cognitive and communicative approaches to linguistic analysis. Amsterdam/Philadelphia: John Benjamins.
Coulson, Seana, & Oakley, Todd. (2005). Blending and coded meaning: Literal and figurative meaning in cognitive semantics. Journal of Pragmatics, 37, 1510–1536.
Croft, William. (2014). Comparing categories and constructions crosslinguistically (again): The diversity of ditransitives [Review article on Studies in ditransitive constructions: A comparative handbook, ed. A. Malchukov, M. Haspelmath, & B. Comrie]. Linguistic Typology, 18, 533–551.
Croft, William, & Cruse, D. Alan. (2004). Cognitive linguistics. Cambridge University Press.
Davies, Mark. (2008–). The Corpus of Contemporary American English (COCA): 560 million words, 1990–present. Available online at
Davis, Joseph. (2004a). The linguistics of William Diver and the linguistics of Ferdinand de Saussure. In G. Hassler, & G. Volkmann (Eds.), History of linguistics in texts and concepts (Vol. I, pp. 307–326). Münster: Nodus.
Davis, Joseph. (2004b). Revisiting the gap between meaning and message. In E. Contini-Morava, & B. Sussman Goldberg (Eds.), Meaning as explanation: Advances in linguistic sign theory (pp. 155–174). Berlin/New York: Mouton de Gruyter.
Davis, Joseph. (2006). Introduction: Consistency and change in Columbia School linguistics. In J. Davis, R. J. Gorup, & N. Stern (Eds.), Advances in functional linguistics: Columbia School beyond its origins (pp. 1–15). Amsterdam/Philadelphia: John Benjamins.
Davis, Joseph. (2017). The substance and value of Italian si. Amsterdam/Philadelphia: John Benjamins.
Davis, Joseph, Gorup, Radmila, & Stern, Nancy (Eds.). (2006). Advances in functional linguistics: Columbia School beyond its origins. Amsterdam/Philadelphia: John Benjamins.
Dekker, Adriaan, & de Jonge, Bob. (2006). Phonology as human behavior: The case of Peninsular Spanish. In J. Davis, R. J. Gorup, & N. Stern (Eds.), Advances in functional linguistics: Columbia School beyond its origins (pp. 131–141). Amsterdam/Philadelphia: John Benjamins.
de Jonge, Bob. (1993). The existence of synonyms in a language: Two forms but one, or rather two, meanings? Linguistics, 31, 521–538.
Diver, William. (1969/2012). The System of Relevance of the Homeric verb. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 135–160). Leiden/Boston: Brill.
Diver, William. (1974). Substance and value in linguistic analysis. Semiotext(e), 1(2), 11–30.
Diver, William. (1975/2012). The nature of linguistic meaning. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 47–64). Leiden/Boston: Brill.
Diver, William. (1979). Phonology as human behavior. In D. Aaronson, & R. Rieber (Eds.), Psycholinguistic research: Implications and applications (pp. 161–182). Hillsdale, NJ: Lawrence Erlbaum Associates.
Diver, William. (1986/2012). Latin se. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 279–289). Leiden/Boston: Brill.
Diver, William. (1992/2012). The Latin demonstratives. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 265–277). Leiden/Boston: Brill.
Diver, William. (1995). Theory. In E. Contini-Morava, & B. Sussman Goldberg (Eds.), Meaning as explanation: Advances in linguistic sign theory (pp. 43–114). Berlin/New York: Mouton de Gruyter.
Diver, William, Davis, Joseph, & Reid, Wallis. (2012). Traditional grammar and its legacy in twentieth-century linguistics. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 371–443). Leiden/Boston: Brill.
Emanatian, Michele. (1990). The Chagga consecutive construction. In J. Hutchison, & V. Manfredi (Eds.), Current approaches to African linguistics (Vol. 7, pp. 193–207). Dordrecht: Foris Publications.
Evans, Vyvyan, & Green, Melanie. (2006). Cognitive linguistics: An introduction. Great Britain: Routledge.
Fillmore, Charles J. (1976). Frame semantics and the nature of language. In S. Harnad, H. Steklis, & J. Lancaster (Eds.), Origins and evolution of language and speech (pp. 20–32). New York: New York Academy of Sciences.
Fillmore, Charles J. (1982). Frame semantics. In Linguistic Society of Korea (Ed.), Linguistics in the morning calm (pp. 111–138). Seoul: Hanshin.
Fillmore, Charles J. (1985). Frames and the semantics of understanding. Quaderni di Semantica, 6, 222–254.
García, Erica C. (1975). The role of theory in linguistic analysis: The Spanish pronoun system. Amsterdam/Oxford: North-Holland.
García, Erica C. (2009). The motivated syntax of arbitrary signs: Cognitive constraints on Spanish clitic clustering. Amsterdam/Philadelphia: John Benjamins.
García, Erica C., & Otheguy, Ricardo L. (1983). Being polite in Ecuador: Strategy reversal under language contact. Lingua, 61(2–3), 103–132.
Geeraerts, Dirk, & Cuyckens, Hubert (Eds.). (2007). The Oxford handbook of Cognitive Linguistics. Oxford University Press.
Geeraerts, Dirk, & Cuyckens, Hubert. (2007). Introducing Cognitive Linguistics. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of Cognitive Linguistics (pp. 3–21). Oxford University Press.
Gibbs, Raymond W., Jr. (1994). The poetics of mind. Cambridge University Press.
Goldberg, Adele E. (1995). Constructions: A Construction Grammar approach to argument structure. University of Chicago Press.
Goldberg, Adele E. (2006). Constructions at work: The nature of generalization in language. Oxford University Press.
Gorup, Radmila. (2006). Se without deixis. In J. Davis, R. J. Gorup, & N. Stern (Eds.), Advances in functional linguistics: Columbia School beyond its origins (pp. 195–209). Amsterdam/Philadelphia: John Benjamins.
Haiman, John. (1978). A study in polysemy. Studies in Language, 2(1), 1–34.
Harris, Roy. (1980). The language-makers. London: Duckworth.
Harris, Roy. (1981). The language myth. London: Duckworth.
Harris, Roy. (1988). Language, Saussure and Wittgenstein: How to play games with words. London/New York: Routledge.
Harris, Roy. (1990). On redefining linguistics. In H. Davis, & T. Taylor (Eds.), Redefining linguistics (pp. 18–52). London: Routledge.
Haspelmath, Martin. (2010). Comparative concepts and descriptive categories in crosslinguistic studies. Language, 86(3), 663–687.
Hopper, Paul. (2007). Linguistics and micro-rhetoric: A twenty-first century encounter. Journal of English Linguistics, 35(3), 236–252.
Huffman, Alan. (1997). The categories of grammar: French lui and le. Amsterdam/Philadelphia: John Benjamins.
Huffman, Alan. (2001). The linguistics of William Diver and the Columbia School. Word, 52, 29–68.
Huffman, Alan. (2012). Introduction: The enduring legacy of William Diver. In A. Huffman, & J. Davis (Eds.), Language: Communication and human behavior: The linguistic essays of William Diver (pp. 1–20). Leiden/Boston: Brill.
Huffman, Alan, & Davis, Joseph (Eds.). (2012). Language: Communication and human behavior: The linguistic essays of William Diver. Leiden/Boston: Brill.
Kirsner, Robert S. (1989). Does sign-oriented linguistics have a future? On the falsifiability of theoretical constructs. In Y. Tobin (Ed.), From sign to text: A semiotic view of communication (pp. 161–178). Amsterdam/Philadelphia: John Benjamins.
Kirsner, Robert S. (2004). Introduction: On paradigms, analyses, and dialogue. In E. Contini-Morava, R. S. Kirsner, & B. Rodríguez-Bachiller (Eds.), Cognitive and communicative approaches to linguistic analysis (pp. 1–18). Amsterdam/Philadelphia: John Benjamins.
Lakoff, George. (1977). Linguistic gestalts. CLS, 13, 225–235.
Lakoff, George. (1986). The meanings of literal. Metaphor and Symbolic Activity, 1, 291–296.
Lakoff, George. (1987). Women, fire, and dangerous things: What categories reveal about the mind. University of Chicago Press.
Lakoff, George, & Johnson, Mark. (1980). Metaphors we live by. University of Chicago Press.
Langacker, Ronald W. (1987). Foundations of cognitive grammar: Theoretical prerequisites (Vol. 1). Stanford, CA: Stanford University Press.
Langacker, Ronald W. (1991/2002). Concept, image, symbol: The cognitive basis of grammar (2nd ed.). Berlin: Mouton de Gruyter.
Langacker, Ronald W. (2004). Form, meaning and behavior: The Cognitive Grammar analysis of double subject constructions. In E. Contini-Morava, R. S. Kirsner, & B. Rodríguez-Bachiller (Eds.), Cognitive and communicative approaches to linguistic analysis (pp. 21–60). Amsterdam/Philadelphia: John Benjamins.
Langacker, Ronald W. (2017). Ten lectures on the basics of cognitive grammar. Leiden/Boston: Brill.
Lindner, Susan. (1981). A lexico-semantic analysis of verb-particle constructions with up and out. Ph.D. dissertation, University of California, San Diego.
MacSwan, Jeff. (2017). A multilingual perspective on translanguaging. American Educational Research Journal, 54(1), 167–201.
Makoni, Sinfree, & Pennycook, Alastair. (2005). Disinventing and (re)constituting languages. Critical Inquiry in Language Studies: An International Journal, 2(3), 137–156.
McEnery, Anthony M., & Wilson, Andrew. (2001). Corpus linguistics: An introduction. Edinburgh University Press.
Nerlich, Brigitte, & Clarke, David D. (2007). Cognitive Linguistics and the history of linguistics. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of Cognitive Linguistics (pp. 589–607). Oxford University Press.
Nuyts, Jan. (2007). Cognitive Linguistics and Functional Linguistics. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of Cognitive Linguistics (pp. 543–565). Oxford University Press.
Otheguy, Ricardo. (2002). Saussurean anti-nomenclaturism in grammatical analysis. In W. Reid, R. Otheguy, & N. Stern (Eds.), Signal, meaning, and message: Perspectives on sign-based linguistics (pp. 373–403). Amsterdam/Philadelphia: John Benjamins.
Otheguy, Ricardo, García, Ofelia, & Reid, Wallis. (2015). Clarifying translanguaging and deconstructing named languages: A perspective from linguistics. Applied Linguistics Review, 6(3), 281–307.
Otheguy, Ricardo, García, Ofelia, & Reid, Wallis. (2018). A translanguaging view of the linguistic system of bilinguals. Applied Linguistics Review, 1–27.
Pennycook, Alastair. (2018). Posthumanist applied linguistics. London/New York: Routledge.
Reagan, Timothy. (2004). Objectification, positivism and language studies: A reconsideration. Critical Inquiry in Language Studies: An International Journal, 1(1), 41–60.
Reid, Wallis. (1991). Verb and noun number in English: A functional explanation. London: Longman.
Reid, Wallis. (1995). Quantitative analysis in Columbia School theory. In E. Contini-Morava, & B. Sussman Goldberg (Eds.), Meaning as explanation: Advances in linguistic sign theory (pp. 115–152). Berlin/New York: Mouton de Gruyter.
Reid, Wallis. (2002). Introduction: Sign-based linguistics. In W. Reid, R. Otheguy, & N. Stern (Eds.), Signal, meaning, and message: Perspectives on sign-based linguistics (pp. ix–xxi). Amsterdam/Philadelphia: John Benjamins.
Reid, Wallis. (2004). Monosemy, homonymy and polysemy. In E. Contini-Morava, R. S. Kirsner, & B. Rodríguez-Bachiller (Eds.), Cognitive and communicative approaches to linguistic analysis (pp. 93–129). Amsterdam/Philadelphia: John Benjamins.
Reid, Wallis. (2006). Columbia School and Saussure’s langue. In J. Davis, R. J. Gorup, & N. Stern (Eds.), Advances in functional linguistics: Columbia School beyond its origins (pp. 17–39). Amsterdam/Philadelphia: John Benjamins.
Reid, Wallis. (2011). The communicative function of English verb number. Natural Language & Linguistic Theory, 29(4), 1087–1146.
Reid, Wallis. (2018). The justification of linguistic categories. In N. Shin, & D. Erker (Eds.), Questioning theoretical primitives in linguistic inquiry (Papers in honor of Ricardo Otheguy) (pp. 91–132). Amsterdam/Philadelphia: John Benjamins.
Reid, Wallis, Otheguy, Ricardo, & Stern, Nancy (Eds.). (2002). Signal, meaning, and message: Perspectives on sign-based linguistics. Amsterdam/Philadelphia: John Benjamins.
Rosch, Eleanor. (1973). Natural categories. Cognitive Psychology, 4, 328–350.
Rosch, Eleanor, Mervis, Carolyn, Gray, Wayne, Johnson, David, & Boyes-Braem, Penny. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Ruhl, Charles. (1989). On monosemy: A study in linguistic semantics. Albany: State University of New York Press.
Sabar, Nadav. (2018). Lexical meaning as a testable hypothesis: The case of English look, see, seem and appear. Amsterdam/Philadelphia: John Benjamins.
Saussure, Ferdinand de. (1916). Cours de linguistique générale (C. Bally, & A. Séchehaye, Eds., with A. Riedlinger). Translated by Wade Baskin as Course in general linguistics (New York: The Philosophical Library, 1959; McGraw-Hill, 1966), and by Roy Harris as Course in general linguistics (La Salle, IL: Open Court, 1986).
Stern, Nancy. (2001). The meaning and use of English -self pronouns. Ph.D. dissertation, Graduate Center, The City University of New York.
Stern, Nancy. (2004). The semantic unity of reflexive, emphatic, and other -self pronouns. American Speech, 79(3), 270–280.
Stern, Nancy. (2006). Tell me about yourself: A unified account of English -self pronouns. In J. Davis, R. J. Gorup, & N. Stern (Eds.), Advances in functional linguistics: Columbia School beyond its origins (pp. 177–194). Amsterdam/Philadelphia: John Benjamins.
Stern, Nancy. (2019). Ourself and themself: Grammar as expressive choice. Lingua.
Sweetser, Eve. (1990). From etymology to pragmatics. Cambridge University Press.
Taylor, John R. (1995). Linguistic categorization: Prototypes in linguistic theory (2nd ed.). Oxford: Clarendon Press.
Taylor, Talbot. (2007). Cognitive Linguistics and autonomous linguistics. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of Cognitive Linguistics (pp. 566–588). Oxford University Press.
Tobin, Yishai. (1990). Semiotics and linguistics. London: Longman.
Tobin, Yishai. (1995). Only vs. just: Semantic integrality revisited. In E. Contini-Morava, & B. Sussman Goldberg (Eds.), Meaning as explanation: Advances in linguistic sign theory (pp. 323–359). Berlin/New York: Mouton de Gruyter.
Tobin, Yishai. (1997). Phonology as human behavior: Theoretical implications and clinical applications. Durham, NC/London: Duke University Press.
Tobin, Yishai. (2011). Phonology as human behavior from an evolutionary point of view. In B. de Jonge, & Y. Tobin (Eds.), Linguistic theory and empirical evidence (pp. 169–195). Amsterdam/Philadelphia: John Benjamins.
Tuggy, David. (2007). Schematicity. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of Cognitive Linguistics (pp. 82–116). Oxford University Press.
Whorf, Benjamin L. (1956). Language, thought, and reality: Selected writings of Benjamin Lee Whorf (J. B. Carroll, Ed.). Cambridge, MA: MIT Press.
Wittgenstein, Ludwig. (1953). Philosophical investigations. New York: Macmillan.

Keywords: Columbia School, Cognitive Linguistics, functional-cognitive space, meaning, monosemy

Using big data to support meaning hypotheses for some and any

Nadav Sabar

Hebrew University of Jerusalem

This paper offers an original treatment of the grammatical forms some and any. Rather than treating these forms as logical quantifiers, it argues that each sign constitutes an expressive device whose invariant meaning fully accounts for its distribution in English texts. A unique methodology that relies on qualitative analyses to produce large-scale quantitative predictions is laid out in detail. First, qualitative analyses of attested examples are shown to feature – alongside some or any – particular other forms that, by hypothesis, contribute a message element similar to the one contributed by the sign under analysis. Then, quantitative predictions regarding the regularity of these co-occurrences are tested in the Corpus of Contemporary American English. This methodology has led to the discovery of numerous distributional peculiarities that are noted here – and explained – for the first time.

Keywords: Columbia School, Corpus of Contemporary American English, qualitative and quantitative analysis, logical quantifiers

1. Introduction

The problem addressed in this paper concerns the distributions of some and any in naturally occurring linguistic data. Why do speakers choose to utter some on one occasion and any on another? Why, for example, do people sometimes talk of that special someone yet hardly, if ever, talk of that special anyone? Or, why is the New York City public safety slogan If you see something, say something rather than If you see anything, say anything? This paper proposes meaning hypotheses for some and any that explain each of these signs’ attested distributions in terms of speakers’ expressive choices. In addition to explaining why these signs are chosen on particular occasions, the meaning hypotheses presented in this paper have also led to the discovery of numerous novel and otherwise unexpected distributional facts that have been confirmed in a massive corpus of English texts, the Corpus of Contemporary American English (Davies, 2008–). These newly discovered distributional patterns



involve not categorical uses and non-uses of some and any, but rather, statistical tendencies regarding co-occurrence favorings. Some of these tendencies – such as the fact that some rather than any favors co-occurrence with a following which, despite the fact that both some and any co-occur with which many times – have never before been noted, let alone explained, by any other account. As will be explicated below, these predictions of co-occurrence favorings are made on the basis of careful analyses of attested examples, in which we find that certain forms (like which) that co-occur with some or any provide qualitative support for the meaning hypothesis we propose. We will then argue that the same rationale that has been offered for the choice of either some or any on the individual occasion is not an ad hoc explanation for that individual example, but rather, is characteristic of many examples throughout the corpus.

2. Approaches to the problem

The forms some and any have been studied extensively by analysts in the fields of formal semantics and Cognitive Linguistics, both assuming in advance of the analysis that these forms are representations of logical quantifiers. Section 2.1 explicates how the present approach differs from formal analyses, particularly of any. A comparison with the Cognitive analysis, which has more in common with our own, will be provided in Section 7, following the full presentation of our own hypotheses and analyses. Briefly, however, one crucial difference between the Cognitive approach and ours is that the Cognitive analysis offers a description of the distributions of some and any whereas we offer an explanation. This difference is important because a description can only capture facts that are already known prior to the analysis, whereas our testable hypotheses have revealed new facts and have produced predictions that provide new knowledge about the distributions of some and any in a massive corpus.

2.1 Logic-based approach

In the formal tradition, semanticists work with a model that takes the construct ‘meaning of a sentence’ as primary. The meaning of a sentence is constructed compositionally from its component parts, whose semantic values are determined on the basis of their contribution to the truth conditions of the sentence containing them. In the case of some and any, the two forms appear to make one and the same contribution to the truth conditions of their sentences, both posited to represent existential quantification (Ladusaw, 1979).1 Consider, for example, the sentences below.

(1) John met someone yesterday.

(2) John did not meet anyone yesterday.

The proposition expressed by (1) is said to be that ‘there is a person John met yesterday’; the proposition expressed by (2) is that ‘it is not the case that there is a person John met yesterday’. The contribution of some in (1) and of any in (2) to the truth-conditions of each sentence is then said to be the same, ‘there is a person’. The analytical problem in this approach has centered primarily on providing an account for the distribution of any, where distribution is understood in terms of the logical-sentential environments permitting occurrence of this form. Thus, beginning with Klima (1964) it has been noted that any is grammatical in the scope of negation (as in Example (2)) yet not in the context of affirmative statements, leading to its classification as a ‘Negative Polarity Item’. Since Klima, and especially following Ladusaw (1979), a vast literature has emerged tackling the problem of the ‘licensing conditions’ of any, that is, the logical-sentential environments that render the form grammatical (see Giannakidou, 2011 for a survey). However, even if the licensing problem were fully settled, there would still remain the question – outside the purview of the licensing approach – of why, in actual speech events, we find speakers sometimes using some in environments posited to license any. In such cases, if some actually occurs on a particular occasion, the formal approach has no means of explaining why a speaker chose to produce some rather than any. Consider the attested example below featuring some in the scope of negation.

(3) When Yvonne lived in Italy, where it seems like the whole country is married, people always wanted to know about her personal life. I remember her telling me that every time she’d come back from a great vacation, the first question from married friends was, “Did you meet anybody?” It was as if the whole point of going on vacation was to meet someone. That she had a great time and saw something new and interesting didn’t matter. The entire vacation was cancelled or a flop because she didn’t meet someone.

Formal hypotheses concerning the licensing conditions of any are not in the business of explaining why the writer here chose some. That any would be deemed acceptable (she didn’t meet anyone) is all a licensing hypothesis would care to predict.2

1. Ladusaw (1979) also recognizes a ‘Free-Choice’ any which is argued to represent universal quantification. As will be explicated in Section 7, Langacker (2017) likewise views any as representing universal quantification.

2.2 Sign-based approach

In contrast to the formal approach, the semiotic approach taken here is concerned precisely with explaining why speakers deploy one form over another in attested acts of communication. Guided by the overarching assumption that the structure of language is best revealed when it is taken to be an instrument of communication, the problem of this paper is thus construed in terms of human speech (and writing) behavior; that is, we seek to explain why speakers utter some or any on each particular occasion. The solution is given in terms of hypothesized meanings, where each meaning is posited as a unitary invariant semantic value of the signs some and any. These meanings consistently motivate a speaker’s choice to utter one signal or the other. Following the analytical tradition of Columbia School linguistics (henceforth, CS), linguistic meaning is characterized not as compositional but rather as instrumental (Huffman, 1997); that is, the meanings of signs serve as guides that hint toward the speaker’s intended message. A sharp distinction is maintained between meaning and message; that is, between, on the one hand, that which is part of the linguistic code – the invariant meaning that consistently accompanies a corresponding linguistic signal – and, on the other, the interpretation of the code – the ongoing subjective experience of communicated messages. Meanings are here seen as merely sparse notional fragments that do not encode messages, but provide prompts from which message elements may be suggested and communicative intents can be inferred (Contini-Morava, 1995; Diver, 1995/2012; Diver, 1990/2012; Huffman, 2001; Otheguy, 2002; Reid, 1991). As we will see, in explaining a speaker’s choice to use the meaning of some or of any on a particular occasion the analyst offers some characterization of the ongoing message that accompanies the text where the relevant signal occurs. 
The analyst then argues for a connection or fit between the hypothesized meaning of either some or any and the message elements interpreted in the text. Thus, an appeal to the message is required as part of the explanation regarding why one meaning rather than another is chosen on a particular occasion. Though interpretation of the ongoing message is ultimately subjective, the analyst ought to point to whatever available contextual evidence would support the proposed characterization. These pieces of contextual evidence provide objective support in the sense that they are found in the examples independently of the meaning hypotheses or of the analyst’s own apprehensions of the message. In the present work, some of these pieces of contextual evidence that will be found in individual examples will be taken up and used for generating large-scale quantitative predictions of co-occurrence favorings. The confirmation of these predictions will further help to reduce the possible impression of ad hoc subjective judgment regarding the analyst’s characterization of the ongoing message in a particular example (cf. Diver, 1995, Section 3.4.4).

2. Further note in this example that both Did you meet anybody? – which is found in the text – and Did you meet somebody? may be deemed acceptable. Again, a formal hypothesis is not in the business of explaining why any is the form of choice in this case.

3. The meaning hypotheses

The statement of the meaning hypotheses for some and any comes in the form of a CS grammatical system of signs. A grammatical system is made up of a semantic substance, within which any number of signs may occur that divide that substance. The value assigned to each sign within that substance constitutes the hypothesized meaning of that sign. The meaning hypotheses for some and any involve a grammatical system whose semantic substance we have labeled Restrictiveness of Applicable Domain. The meaning hypothesis for some is summarized in shorthand formulation as restricted, while the meaning of any is summarized as unrestricted. In a bit more detail, when some is used, the suggestion is that limits, internal divisions, particular specifications or boundaries apply. The meaning of some is therefore fit for the communication of messages that involve a partitioning of the relevant domain. When any is used, by contrast, the contribution to the ongoing message is that no boundaries, limits or divisions are at issue, suggesting therefore no exclusions within the relevant domain.
(This does not say that the applicable domain has no divisions or boundaries on the scene; but the meaning of any suggests that these divisions – whatever they might be in the referential reality – are not relevant to the message being communicated.) Note that the hypothesized meanings make no specification regarding the nature of the domain. The meanings – restricted Domain of Application and unrestricted Domain of Application – will be applicable to whatever domain is made available by the context. In many cases, the domain will be suggested by words appearing in close proximity to some or any; for instance, in I’ve read it in some magazine the domain would probably be understood as that of magazines. The meaning hypotheses are summarized in Figure 1.3

3. See Lewis (1986, pp. 33–35) for a similar proposal regarding the meanings of some and any.



Semantic substance                       Meaning         Signal
Restrictiveness of Applicable Domain     restricted      some
                                         unrestricted    any

Figure 1.  Meaning hypotheses for some and any
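As a reading aid only (this rendering is my illustration, not part of the CS formalism), the grammatical system summarized in Figure 1 can be expressed as a small lookup: one semantic substance divided exhaustively by two sign–meaning pairs.

```python
# Illustrative only: the Figure 1 grammatical system rendered as data.
# The substance is divided exhaustively by the two sign-meaning pairs.
SYSTEM = {
    "substance": "Restrictiveness of Applicable Domain",
    "meanings": {"some": "restricted", "any": "unrestricted"},
}

def meaning_of(signal):
    """Return the hypothesized invariant meaning of a signal."""
    return SYSTEM["meanings"][signal]

print(meaning_of("some"))  # restricted
print(meaning_of("any"))   # unrestricted
```

The point of the rendering is that each signal maps to exactly one invariant meaning; there is no context-sensitive branching in the code, just as the hypothesis posits no polysemy in the signs.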

Let’s consider the attested examples below (the first of which was mentioned in the first paragraph of this paper) to illustrate how the hypothesized meanings contribute to the communicated messages.

(4) If you see something, say something.  (MTA)

(5) No parking any time.  (street sign)

In Example (4), restricted Domain of Application is the meaning of choice because the message involves particular specifications as to the things that people might see, as well as particular specifications as to the things that people should then say. The meaning restricted suggests a partitioning of the domain of things, here implying that only suspicious-looking or potentially dangerous things are at issue. In this case, the context at large is responsible for the inference regarding the nature of the restriction (that is, to suspicious-looking things), but that a restriction is intended at all – that only certain kinds of things are at issue – is contributed, by hypothesis, by the meaning of some. Note that any might also have appeared in this example (If you see anything, say anything), but that choice would not have contributed as effectively to the intended message; the use of any would have suggested that people should call no matter what they see. Turning to Example (5), here unrestricted Domain of Application is the meaning of choice because the message involves no exclusions with respect to the times of day. The meaning unrestricted suggests that no internal divisions apply; hence no time of day is excluded. Returning briefly to Example (3), repeated below as (6), we can now understand why the writer chose some rather than any.

(6) When Yvonne lived in Italy, where it seems like the whole country is married, people always wanted to know about her personal life. I remember her telling me that every time she’d come back from a great vacation, the first question from married friends was, “Did you meet anybody?” It was as if the whole point of going on vacation was to meet someone. That she had a great time and saw something new and interesting didn’t matter. The entire vacation was cancelled or a flop because she didn’t meet someone.

restricted Domain of Application is the meaning of choice because the message involves particular specifications regarding people Yvonne did not meet. Much as in Example (4), here again the context implies the nature of the restriction: only persons who may qualify as potential life-partners are at issue.

4. Methodology

We now turn to investigate the distributions of some and any through qualitative analyses that will lead to quantitative predictions. These predictions – all based on analyses of the exploitations of these signs’ meanings in individual examples – will be tested in the Corpus of Contemporary American English, henceforth, COCA (Davies, 2008–). As mentioned, the quantitative predictions presented in this paper involve co-occurrence favorings. For example, in supporting the meaning restricted Domain of Application we will predict a favoring in the corpus of the co-occurrence of some and others. The prediction will be made on the basis of a qualitative analysis of an attested example featuring a sequence of some… others… (Example (7) below). The analysis of that example will argue that the use of others – being in part suggestive of a message element of partitioning – provides qualitative support for the hypothesized meaning restricted. That is, the presence of others will offer independent evidence that a message element closely related to that contributed by the meaning restricted indeed accompanies the use of some in that example. Then, in order to provide quantitative support, it will be predicted that, even though sequences of some… others… and any… others… both occur in the corpus, the co-occurrence of some… others… should be favored. The rationale for the prediction is that speakers intending to contribute to the ongoing communication a message element that involves a notion of partitioning will sometimes be motivated by that communicative goal in making the choice to use both others and some (rather than any).
Confirmation of the prediction will indirectly support the meaning hypothesis because it would argue that a notion that, according to the hypothesis, is contributed by the meaning of some has motivated its choice on multiple occasions (particularly, some of those occasions in the corpus in which some is followed by others). This rationale for quantitative predictions will now be developed in greater detail.4

In order to understand the methodological procedure leading from qualitative analyses to quantitative predictions, we turn now to an explication of the term communicative strategy. Following Reid (1995), a communicative strategy is a reason proposed by the analyst to explain why a speaker chooses a particular meaning.5 The choice is made on the speaker’s assumption that the sign’s meaning will contribute to some feature or aspect of the ongoing message;6 that contribution of the meaning to the ongoing message constitutes the reason for uttering the meaning’s corresponding signal.7 To take some as an example, the claim here is that a speaker’s choice to use this sign is motivated by the contribution of its meaning – restricted Domain of Application – to messages that involve a partitioning of the applicable domain. One prominent way to tell whether this message element indeed figures as a part of the communicated message where some occurs is by looking to the linguistic context surrounding the use of some and checking whether there are other forms that might likewise contribute to or are made in response to a similar notional parameter. Consider the sequence some… others… in Example (7):

(7) Some Feds [Federal workers] are held up as national heroes while others are considered a national joke.  (ABC Nightline: Income Tax)

4. CS quantitative predictions have been offered that do not involve a co-occurrence favoring of the sign under analysis with some particular other form in the text. For example, Diver (1995, pp. 106–7) features a count regarding whether the Latin pronouns is and hic favor reference to concrete or abstract objects – concrete or abstract reference not being marked by any one consistent linguistic manifestation. In both our analysis and in Diver’s, however, the count is made on the basis of a correlation between an occurrence of the hypothesized sign and a notional parameter that is argued to constitute a reason for choosing the sign’s meaning. In Diver’s analysis, reference to a concrete object provides partial independent evidence that the proposed notional parameter (ease of locating the referent) sometimes motivates the choice of the meaning of is (less Deixis) rather than hic (more Deixis). In the same way, the presence of others provides partial independent evidence that a notion of partitioning motivates the choice of the meaning of some rather than any.

5. Reid’s precise definition for a communicative strategy – “A communicative strategy is a principle of choice between meanings in a grammatical system as their semantic opposition applies to a specific notional parameter of the message” (1995, p. 142) – applies specifically to meanings in a grammatical system. Following the work of Sabar (2018), however, the term has been applied to the choice of lexical meanings as well. The definition as stated here is essentially the same as that of Reid, only it allows for the term to apply equally to both grammatical and lexical signs. It is further noted that while this definition is taken from Reid, the notion of a communicative strategy is attributed to Diver and appears in several of his early writings – including The Elements of a Science of Language (1990/2012) and The Nature of Linguistic Meaning (1975/2012) – as well as in his last work Theory (Diver, 1995, Section 3.4.4).

6. The contribution of the meaning to the ongoing message may be a notional fragment that is very similar or closely related to the one specified by the hypothesized meaning, or it may be something quite different from it. In the former case the reason for the choice of the meaning is called a “direct strategy” and in the latter case it is called an “indirect strategy” (see Diver, 1975/2012, The Nature of Linguistic Meaning). In the case of an indirect strategy the analyst must point to the inferential path leading from the meaning to the relevant aspect of the message. In this paper, however, we deal mostly with direct strategies. Thus the reason for choosing the meaning of some is to contribute to a message fragment that is similar or very closely related to that which is specified by its meaning, restricted Domain of Application.

7. Note that the analyst argues that a signal occurs where it does because of its meaning; the meaning, that is, is the reason for the choice of the sign. But then, an analysis is incomplete unless the analyst can explain why the speaker or writer chose that meaning as opposed to another meaning, e.g., why the speaker chose the meaning restricted Domain of Application rather than unrestricted Domain of Application. The reason for the choice of the meaning is its contribution to or expected effect on the ongoing message – that reason is what we have called a communicative strategy.

While I am not proposing a hypothesis for the meaning of the form others, we may note the Google dictionary definition: ‘used to refer to a person or thing that is different or distinct from one already mentioned or known about’. In the present example, the use of others suggests a partitioning, referring to Federal workers different from the ones mentioned before. This message element is also contributed by the hypothesized meaning of some – restricted – which similarly suggests a partitioning, and is used to refer to the Federal workers not covered by others.8 The two forms, then, partially overlap in their communicative effects, together contributing to the same message element that involves a partition of Federal workers. In other words, the choice to use some and the choice to use others were each in its turn motivated – at least in part – by the suggestion of the same notion in the ongoing message. When in the course of the analysis of a particular example it is proposed that each of two forms is chosen to contribute to the same message element – as is the case here with others and some – then the generality of this claim may be tested through a count; that is, we may predict that the co-occurrence of some and others will be favored in the corpus. The rationale for the prediction, to repeat, is that speakers intending to contribute a notion of partitioning to the ongoing message will often be guided by that communicative goal in making the choice to use both others and some together.9

Of course, having the absolute number of times that others and some co-occur in the corpus will not tell us whether this co-occurrence is to be considered as favored. Rather, it must be shown that others favors co-occurrence with some relative to some other word. But what other word may serve as an appropriate control for making the prediction regarding the favoring of others to some? The only criterion for the control is that it will be a form whose contributions to the ongoing message are always distinct from what is contributed by the meaning restricted (see Reid, 1995; Sabar, 2018, Chapter 3). For our purposes any will serve as the control term for all predictions that support the meaning hypothesis for some; this is because any – whose meaning, by hypothesis, is unrestricted – is unlikely to ever contribute to a message element that involves a partitioning as do others and some.10

Summing up so far, the prediction is that others will favor co-occurrence with some rather than with any. This is because in the qualitative analysis of Example (7) we have supported the meaning restricted through an appeal to the presence of others, which likewise contributes to a message element of partitioning, whereas any, by hypothesis, is never used for contributing to such an effect on the message. Much as any will serve as a control term in supporting the meaning hypothesis for some, some in turn will serve as a control term in making predictions that support the meaning hypothesis for any. The two signs, to repeat, can each serve as a control term for the other because each is chosen for reasons that are different from the reasons motivating the choice of the other. We now address the fact that any itself also co-occurs many times in close proximity to others, as demonstrated in Example (8) below. When any co-occurs with others, however, the choice of each form must be guided by a different communicative goal; that is, each form is chosen to produce a different effect on the message (unlike the co-occurrence of others and some, where the contribution of each of the two forms is to the same or related element of the message).

8. It may appear there’s a methodological problem in supporting the meaning hypothesis for some by appealing to the dictionary definition of another word for which there is no hypothesis. Note, however, that this analysis does not depend on a meaning hypothesis for others; to support the hypothesis for some it is sufficient to appeal to the effect others has on the communicated message – this effect is captured by the dictionary definition. Indeed, Reid (1991, p. 304) explains that what is required for a prediction of a co-occurrence favoring is “an identity of communicative goal, not an identity of the semantic means employed to achieve that goal. [That is because] distinct semantic means can contribute to the same communicative end.”

9. Reid (1991) refers to this phenomenon as textual resonance: “[A]ny given feature of the message will typically play a part in determining the choice of more than one linguistic sign. … Textual resonance is created when two meanings jointly contribute to the communication of the same feature of the message (i.e., are in harmony).” (1991, p. 302).

(8) [B]ecause of the U.S.’s influence and position, what America says and what America does really matter. Moreover, I realize just how proud Americans are of their traditions, passion for liberty and freedom, and open society. So, it always is difficult for any of us when we come to discover that others do not all see us in the way in which we see ourselves.  (USA Today)

10. Since this paper is concerned with the distributions of some and any it is of interest to use any as the control in testing the meaning hypothesis for some. As noted, however, the control may be any word whose contribution to the ongoing message does not seem to overlap with the contribution made by the meaning of some, restricted Domain of Application. For that matter, then, just as we may predict that others will favor co-occurrence with some rather than any, we may equally predict that others will favor some rather than, say, table, or hugging, etc. The prediction that others favors co-occurrence with some rather than table has been tested and was confirmed, though the results are not shown here.


In this example, the speaker chose the meaning unrestricted Domain of Application to suggest that no American is excluded. Others, however, is chosen to suggest a notion of partitioning – separating Americans from non-Americans. There is, therefore, no shared reason that motivates the speaker to utter these two forms together. Importantly, therefore, there is no reason to expect that others should favor co-occurrence with any in the corpus. Again, this does not preclude the two forms from co-occurring sometimes, as they do in (8); it is just that when they do co-occur then the speaker is choosing each form to produce different communicative effects. It is predicted, then, that others will favor co-occurrence with some rather than with any. This is how the prediction is tested. First, the total number of occurrences in COCA is collected for each of the following favored and disfavored sequences (Table 1).

Table 1.  Total COCA occurrences of others with some and with any

              Sequence                        Occurrences
Favored       some [up to 2 slots] others     1920
Disfavored    any [up to 2 slots] others       626
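As an informal illustration of what such a windowed search does (this sketch is my own; it does not reproduce COCA's actual query engine, tokenization, or syntax, and the sample sentence is invented), a target word is counted whenever the predictive term follows it within a fixed number of intervening tokens:

```python
import re

def window_count(text, target, predictor, max_slots=2):
    """Count occurrences of `target` followed by `predictor` with at most
    `max_slots` intervening tokens (words or punctuation marks), mirroring
    searches like "some [up to 2 slots] others"."""
    # crude tokenizer: words and individual punctuation marks
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    count = 0
    for i, tok in enumerate(tokens):
        if tok == target:
            # predictor may appear 1 to max_slots + 1 positions ahead
            if predictor in tokens[i + 1 : i + 2 + max_slots]:
                count += 1
    return count

# Hypothetical sample text, not a COCA result:
sample = "Some are heroes. I have tried some of the others already."
print(window_count(sample, "some", "others"))  # 1: only the second some qualifies
```

As the paper notes, proximity alone does not guarantee that the two forms respond to the same aspect of the message; a count like this captures co-occurrence only, and the favoring argument rests on comparing it against a control term.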

The material in square brackets indicates that any sequence of a length between zero and two forms – including either words or punctuation marks – may intervene between some/any and the predictive term others. These searches thus indicate how many times others co-occurs with some and with any, respectively. The decision to allot up to two intervening slots helps ensure that others and some are indeed used in response to the same aspect of the message. Still, it is noted that the prediction has been tested with both fewer and more intervening slots (that is, with zero, one, two, three, etc. – up to eight intervening slots, which is what COCA allows), and has been confirmed every time. Below we will show only the results for testing with two intervening slots.

Before turning to the next step in the prediction we note that mere proximity cannot guarantee that others and some both respond to the same aspect of the message. For instance, the search may yield a result such as let’s have some drinks, the others will join later… In such an example, others and some occur in close proximity and each form may well suggest a message element of partitioning, but a qualitative analysis of this example probably could not appeal to others to support the meaning of some because it seems each form has been chosen in response to a different aspect of the communication. This is unlike the case in Example (7) where both forms respond to the same aspect of the communication, that is, both respond to a message that concerns the partitioning of Federal workers. In the example above, however,

each form seems to apply to a different thing (some to drinks, others to people). Still, if there were no association between some and others, that is, if any time these two forms co-occurred then each form were chosen for reasons unrelated to the other, then the results of the count ought to reveal no particular favoring of others toward some, in comparison to any.

Now in addition to the searches described in Table 1, two more counts are required to test the prediction. That is, it is necessary to have the total number of occurrences of some and the total number of occurrences of any. From these numbers we then eliminate all the cases where others is simultaneously present (at a distance of up to two intervening slots), thus providing a baseline for the frequency of some relative to any irrespective of any impact that others may have on their distribution. The final step is to compare this baseline frequency to the frequency of some relative to any in the presence of others. The prediction is that – in the presence of others – the frequency of some relative to any will be greater than it is in the baseline. Figure 2 provides a visual aid demonstrating the impact we predict that the presence of others will have on the frequency of some relative to any.

[Figure 2 is a bar chart with two panels: ‘others present (prediction)’, comparing some… others with any… others, and ‘others absent (baseline)’, comparing some with any.]

Figure 2.  The predicted impact of others on the frequency of some relative to any

The right side (titled ‘others absent’) shows that without others to impact their distribution, some and any occur at approximately the same rate. (Note that having the predictive term and control term occur at approximately the same rate in the baseline condition is not necessary for making the prediction, as will become evident when we look at Table 2 below.)11 The purpose of Figure 2 is just to illustrate that we expect the presence of others to be associated with an increase in the frequency of some relative to any, as shown under ‘others present’. Table 2 now presents the results for this count.

11. Indeed, some and any do not actually occur at the same rate in the baseline. Figure 2 shows them occurring at the same rate simply for ease of explicating the argument.


Table 2.  Total COCA occurrences of some and any in the presence and absence of others

          others present          others absent
          N        %              N            %
some      1,920    75               874,579    66
any         626    25               451,586    34
Total     2,546   100             1,326,165   100
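The comparison that Table 2 reports can be recomputed directly from its raw counts. The snippet below is an arithmetic check on the published figures, not a new corpus query; all numbers are copied from the table.

```python
# Counts from Table 2 (COCA): occurrences of some and any with and
# without others within two intervening slots.
present = {"some": 1920, "any": 626}
absent = {"some": 874_579, "any": 451_586}

def share_of_some(counts):
    """Frequency of some relative to any, as a proportion."""
    return counts["some"] / (counts["some"] + counts["any"])

baseline = share_of_some(absent)   # others absent
observed = share_of_some(present)  # others present

print(f"baseline (others absent): {baseline:.0%}")  # 66%
print(f"others present:           {observed:.0%}")  # 75%
# The predicted favoring holds if the share of some rises above the baseline:
print("favoring confirmed:", observed > baseline)   # True
```

The rise from 66% to 75% is exactly the shift the prediction calls for: in the presence of others, the frequency of some relative to any is greater than in the baseline.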