
A Theory of Literary Explication

A Theory of Literary Explication: Specifying a Relativistic Foundation in Epistemic Probability, Cognitive Science, and Second-Order Logic

by

Kenneth B. Newell

"There is no Archimedean point of absolute certainty left to which to attach our knowledge of the world; all we have is an elastic net of probability connections floating in open space."
—Hans Reichenbach

A Theory of Literary Explication: Specifying a Relativistic Foundation in Epistemic Probability, Cognitive Science, and Second-Order Logic, by Kenneth B. Newell

This book first published 2011

Cambridge Scholars Publishing
12 Back Chapman Street, Newcastle upon Tyne, NE6 2XX, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2011 by Kenneth B. Newell

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-4438-3147-6, ISBN (13): 978-1-4438-3147-5

TABLE OF CONTENTS

Acknowledgements
A Note on Documentation
Preface

Part I: A Defense of Explication

Chapter One: Explication and Interpretation
Chapter Two: Theory and Practice
Chapter Three: Reasoned Argument
Chapter Four: Second-Order Relative Objective Epistemic Probability (SOROEP)
    4.1 Physical vs. Epistemic Probability
    4.2 The Question of Necessary Consensus: Believability/Justifiability/Inferability, Evidentialism, and Well-Foundedness
    4.3 Relativity between Probabilities
    4.4 Other Applications of SOROEP
    4.5 A Foundation of Rationalism
    4.6 Physical vs. Epistemic Probability Again
    4.7 Formal Measure Theory and Fuzzy Measure
    4.8 Competition between Readings
    4.9 “More-or-Less as Probable”
    4.10 A Three-Part Problem
    4.11 Criteria Other than SOROEP
Chapter Five: Evidence and Hypothesis
    5.1 A Good Hermeneutic Circle
    5.2 Implicit (or Tacit) Procedural Knowledge
    5.3 Repeated Reciprocal Adjustment
    5.4 A Reading as a Working Hypothesis

Part II: Consensus of SOROEP Judgments

Chapter Six: General Considerations
    6.1 Consensus Despite Human Variability
    6.2 Consensus Despite Human Fallibility
    6.3 Consensus Across Disciplines: A Product of Evolution
Chapter Seven: Modularity in Speech Comprehension and Reading
Chapter Eight: SOROEP in Speech Comprehension and Reading
    8.1 In Adults Parsing Language
    8.2 In Children Acquiring Language
    8.3 Seven Theories Implying an Innate Ability to Make SOROEP Judgments
Chapter Nine: SOROEP in the Brain
    9.1 A Modular vs. a Central System
    9.2 Connectionism and the Training of a Connectionist Network
    9.3 Connectionism vs. Innateness and Modularity in Language Acquisition
    9.4 Determining Neuron-Firing in the Brain
Chapter Ten: Other Theories and Related Conditions
    10.1 Other Conditions Suggesting an Innate Ability to Make SOROEP Judgments
    10.2 Innate Ability to Make Physical-Relative-Probability Judgments
Chapter Eleven: Implications

Part III: SOROEP and Foundations

Chapter Twelve: Rationalism, Empiricism, and Coherentism
Chapter Thirteen: Three “Quasifoundational” Concepts
    13.1 Kant’s “Transcendental” Principles
    13.2 Implicit (or Tacit) Procedural Knowledge
    13.3 Second-Order Logic
Chapter Fourteen: SOROEP Judgments and Internal-Representation Judgments
Afterword: A Supplement on Justification
Appendix: Evidence and Hypotheses among Probability Types
Substantive Notes
Source Citations
Bibliography
About the Author

ACKNOWLEDGEMENTS

This book is dedicated to my wife and fellow scholar Rosalie. Besides generally being a blessing in my life, she, along with Beverly O’Neill, Marsha Kinder, and now departed relatives and friends—Ruth Skolnik, Libby Sheklow, Morris Sheklow, Genee Fadiman, Bill Fadiman, Beverle Houston, and Joan Hugo—provided generous support or advice that forwarded the writing of the book. Jackie Elam, Karla Shippey, and publisher’s anonymous reviewers also gave good advice. Ross Scimeca, Katherina Bell, Charlotte Crockett, and Brian Hartlet gave valuable library assistance, Edward Ipp gave valuable technical assistance, and Carol Koulicourdi, Amanda Millar, and Soucin Yip-Sou deftly shepherded the book through the publishing process. Manuel Schonhorn has provided enduring friendship, discussion, and collegiality. So too, while they lived, did L.S. Dembo, Bernard A. Block, and William D. Wolf. Hyman Kleinman and William York Tindall were inspiring teachers, and William H. Marshall was both inspiring teacher and friend. I regret only that this statement of my gratitude to all of them cannot be read by all of them.

A NOTE ON DOCUMENTATION

As is common in scholarly works, this book contains many notes and some lengthy notes—both source citations and substantive (or discursive) notes—and so a documentation style has been chosen to allow the reader easily to decide the extent to which she wishes her reading of notes to interrupt her reading of the main text: interruption by only one kind of note or the other or neither or both kinds. Consequently, all notes will be endnotes, those that are only source citations will be designated by the traditional superscript arabic numerals, but those that are either substantive notes or source citations accompanied by substantive notes will be designated by nontraditional superscript italicized capital letters of the alphabet. By this means, when the reader comes upon either superscript symbol while reading the main text, she will know without interrupting her reading which kind of endnote awaits her attention and will be able to choose whether or not she wants to attend to it at that point.

PREFACE

A reader of literature is really a reader-explicator since, to try to understand what she is reading, she must explicate it, and, to do that, she must often choose between different possible explications of the literary work. In this book I use current multidisciplinary research and theory to show that choosing between explications (as distinguished from interpretations) can be based on a special kind of probability—on what I will describe below as “second-order relative objective epistemic probability” (hereafter abbreviated as the acronym “SOROEP”). This probability, in turn, rests not only on two of the currently disparaged but major traditional philosophical foundations (rationalism and empiricism) but also on two nontraditional foundations—a cognitive rather than a philosophical one (implicit or tacit procedural knowledge, which is based in the adaptive unconscious) and a relativistic rather than an absolute one (second-order logic, which is a relativistic logic). (A relativistic foundation, it should be noted, represents a middle way between the possibility of an absolute philosophical foundation and the impossibility of any kind of philosophical foundation.A) Consequently, there is more than justification for stating, in respect to a literary work, either that explication X is more probable than explication Y or that X is more-or-less as probable as Y. Either statement is a statement of SOROEP. But, before describing in Chapter 4 the concept of SOROEP and my claim that it supports the practice of explication (though not necessarily interpretation), it is necessary to establish in the first three chapters that explication can be distinguished from interpretation (or other kinds of interpretation), that such explication is still needed in literary studies, and that reasoned argument, which supports explication and the use of SOROEP, is still a viable practice. Admittedly, no explication of a literary work can be shown to be the “true,” “correct,” or “author's” readingB (if one exists). And no explication can be shown to be even a “probable” reading.C Nonetheless, the practice of explication is more than justified because of eight interrelated reasons in aggregate: (1) because reasoned argument based on evidence can produce a relative-probability judgment about a reading—a judgment that it is either more probable than an alternative reading or more-or-less as probable as an alternative one; (2) because that judgment, while allowing
the work an unlimited number of readings, provides a way to judge among them and thereby constitutes a middle-way compromise between the two current but widely unaccepted extreme views of interpretation—the view that the work has only an unlimited number of equally acceptable though different readings (i.e., the work has no unequally acceptable readings), which are all misreadings anyway, and the opposing view that it has only one acceptable reading, which can be discovered; (3) because that judgment is based on the above-mentioned four foundations—rationalism, empiricism, implicit (or tacit) procedural knowledge, and second-order logic; (4) because that judgment is similar to an estimation of the extent of difference between the mental images of two different external objects—an estimation that is an example of implicit (or tacit) knowledge; (5) because that judgment is also a relativistic example of “transcendental” principles first described by Kant; (6) because on that judgment there can be a consensus (which may range from majority agreement to unanimity) among those explicating the work in accordance with relative probability; (7) because this consensus may be due to an evolved, uniform, and probably innate1 ability in the healthy, adult human brain to form relative-probability judgments and to form them in the practice of activities (like reading and explicating) that are not uniform and innate; and (8) because that consensus can occur even under (or despite) the assumed condition that both the amount of evidence pertinent to the work and the number of possible explications of the work are infinite. It should be noted that, in the first reason, the main verb is not produces or should produce but can produce because the action designated by the predicate is neither merely descriptive of how readers use reasoned argument nor merely prescriptive (normative) of how they should use it. The action is partly both—another middle way, this time between the descriptive and the prescriptive (normative). To use philosopher Paul Thagard's term, the action is “biscriptive.”2 Reasoned argument based on evidence produces a relative-probability judgment about a reading only when the reader chooses to use such reasoned argument for that purpose. She is free to choose otherwise. In the sixth and eighth reasons the main verb is also a can verb because the cognitive conditions (discussed in Part II) relevant to the production of relative-probability judgments do not guarantee consensus among the judgments. They only foster it. The sixth and seventh reasons also present a middle way between extremes because those explicating a literary work in accordance with relative probability can achieve consensus rather than, on the one hand, complete disagreement with one another (which would support the
extreme view that the work has only a limitless number of equally acceptable though different readings) and, on the other hand, nothing less than complete agreement with one another (which would support the extreme view that the work has only one acceptable reading). Explicators of the work can at least achieve consensus perhaps because of the above-mentioned human brain’s ability to form SOROEP judgments, but explicators can at most achieve consensus (rather than consistent unanimity) because that ability does not guarantee agreement but only fosters it. There is one additional respect in which this book presents a middle way between extremes, but in this case there are three extreme views: that literary explication has an as-yet-undiscovered absolute foundation that justifies a reading of a work to be judged more acceptable than another; that literary explication has no such foundation and justification; and that, if literary explication has no such foundation and justification that traditionally it was supposed to have in rationalism and empiricism, then it needs no foundation or justification. The first extreme view is implicit in critical works with a modern approach, the second in works with a postmodern approach, and the third in works reacting against a postmodern approach. My middle-way view is that, when literary explication is based on SOROEP, it has a relativistic foundation and that this is enough of a foundation to more than justify some reading(s) of a literary work to be judged more acceptable than others. In the course of justifying explication and specifically explication based on relative-probability judgments, I use material not only from critical theory and hermeneutics (as might be expected) but also from probability theory, philosophy of science, second-order logic, and four fields of cognitive science (linguistics, epistemology, neuropsychology, and artificial intelligence); moreover, I touch upon textual criticism, legal theory, measure theory, fuzzy logic, animal learning behavior, developmental psychology, evolutionary epistemology, and neurobiology. Here my purpose is to show from a wide range of disciplines (most of them never before applied to literary explication) how other researchers' theories and ideas are relevant to a justification of explication. In this way I hope to show that explication based on SOROEP judgments is more than justified even in this postmodern era—an era in which the amount of literary interpretation has gradually but greatly declined and, where still practiced, is practiced for the sake of “Theory” and Cultural Studies and, like them, for the sake of politics. Against this postmodern tendency there were initially only occasional reactions in print from academics; but since the late ’90s such reactions have appeared with
some regularity, and a middle-way viewpoint may be growing. It has come to be felt that one should attend “not only to the most fashionable intellectual ideas but also to competing traditions . . . that, precisely because they are less radical, ultimately may be more progressive.”3 There have been calls for a return to literature4 as well as to literary criticism and interpretation.D The aesthetic in literature has re-appreciated in value,E and in the philosophy of aesthetics intentionalism has reappeared.F If it spreads, “the death of the author” may be succeeded by his resurrection as “the hypothetical author” or as the actual author whose intention is supported either by the text5 or by its function as a work of art,6 or as the actual author with whom, regardless of his intention, the reader is engaged by way of or through the mediation of the text.7 In poetry New Formalism has appeared, and now a “new formalism” is appearing in critical theory.8 However, a viable way to return to literary criticism is not to retreat to a restrictively formalist New Criticism but, taking advantage of the philosophical and cognitive justifications described herein, to practice explication based on SOROEP judgments. Such a practice is a way to respond to the current call in PMLA for ideas on the topic “Literary Criticism for the Twenty-First Century”—for ideas on how to “remobilize” the field. After the last few decades “we may have entered a moment of reconstruction or regeneration in which we seek other forms of literary-theoretical knowledge. . . . Are there current approaches that have not yet been fully developed, that would richly repay attention?”9 To practice explication based on SOROEP judgments (it should be noted) does not require a revolutionary or even an original kind of explicatory practice. Ever since some readings of a literary work were first preferred to other readings, some readers have consciously or unconsciously preferred the “more probable” ones.G And some readers who have preferred the readings that they felt were “more inclusive” or “more coherent” or both have, in many cases, unknowingly approximated preferring the “more probable” ones, since “more inclusive” or “more coherent” or their combination can, in many cases, approximate “more probable” even though not being equivalent to it. (Only when readers prefer the readings they feel to be, for example, “richer” or more “interesting,” “topical,” or “relevant” are these readings less likely to be the “more probable” ones.) Therefore, the purpose of this book is not to show how to make a more probable reading of a literary work or even how to judge which are the more probable readings—for, if indeed readers do choose the SOROEP criterion, they already know instinctively how to use it in judging between readings since that knowledge is part of the implicit (or tacit) procedural knowledge based in their adaptive unconscious.
Instead, the purpose of this book is to show merely but crucially that using the SOROEP criterion in judging between readings has more than a philosophical and cognitive justification—it has a relativistic foundation.

[T]he only thing that is really desirable without a reason for being so, is to render ideas and things reasonable. One cannot well demand a reason for reasonableness itself.
—Charles Sanders Peirce, 1900

The term reason has almost always been used to cover an area far larger than is covered by logic. Plato and Aristotle used reason—both logos and nous, I'm told—to refer to the capacity to discover sound first principles, to make assumptions, or to formulate alternative hypotheses, as well as to the capacity to test those principles or hypotheses dialectically and to construct chains of argument from them logically.
—Wayne C. Booth, 1970

. . . Leibniz . . . conceived of Probability as a branch of Logic . . . [and so introduced the] enquiries of the philosopher into those processes of human faculty which, by determining reasonable preference, guide our choice. . . .
—John Maynard Keynes, 1921

[On] the question . . . what to do about the “bottomless pit” phenomenon . . . , with our concern about the lack of a Foundation . . . “acting on the probabilities” is the only rational thing to do, and . . . one ought to do the rational thing even in unrepeatable situations.
—Hilary Putnam, 1987

It cannot be denied that a probable interpretation can be made where a certain one is not possible, but this would be too difficult to put into rules since a rational theory of probability has not yet been sufficiently developed. . . . It is no wonder then that the theory of interpretation has been attacked in its most difficult chapter and that it has not been easy to come away from this.
—Johann Martin Chladenius, 1742, translated from the German

An interpretation must not only be probable, but more probable than another. There are criteria of relative superiority which may easily be derived from the logic of subjective probability.
—Paul Ricoeur, 1971, translated from the French10

PART I: A DEFENSE OF EXPLICATION

CHAPTER ONE
EXPLICATION AND INTERPRETATION

The term explication is used loosely here, but it is used intentionally rather than interpretation. Although any difference between the terms can reduce to a difference in degree rather than kind, the distinction between them or between the degrees of criticism they represent is nonetheless used and may not be expendable.A Even a leading deconstructionist, who espouses the Nietzschian idea that “reading is . . . the importation of meaning into a text which has no meaning ‘in itself,’” uses a traditional etymological distinction between explicate and interpret— the one “in the sense of unfold, unravel, or unweave,” the other “in the sense of . . . tease . . . for multiple meanings or implications.”B There are also other distinctions. If a work contains any “crux”—i.e., not just an unsolved problem or baffling passage but one “upon which interpretation of the rest of the work depends”1—an attempt to solve it is an explication rather than an interpretation. Explication may also be distinguished from two other kinds of interpretation even while being situated between them—between establishment of an “authoritative” text involving textual or bibliographical “interpretation”2 and critical interpretation of the then established text. Here, explication is capable of mediating between the two and, since different from both in degree rather than in kind, forms a continuum between them. Similar to such mediation, explication has also been considered a “negotiation” between the first two of the three objectives of interpretation: “(1) the author’s intention—what someone meant by writing the text to be interpreted, (2) the literal meaning—what the text says given the individual meanings of words and the composed meaning of sentences, and (3) the representative content—what the text as a whole means—in the sense of what it represents.”3 Minus a mediating/negotiating function, explication has been distinguished from another two kinds of interpretation: explanation and exploration. In explication the reader reconstructs “authorial meaning” through “objective interpretation” of “communicative . . . authorial signals”; in explanation the reader deconstructively explains “informative
. . . textual symptoms” of what the text conceals, and in exploration the reader participatorily experiences “disclosures” that “fuse” her cultural context with the author’s.4 Alternatively, another two kinds of interpretation from which explication has been distinguished are translation-into-a-theoretical-language and intervention. In explication the reader glosses the words and the cultural context of the text and then initially guesses but eventually solves the “enigma” of the text by “drawing inferences from hints” therein about the author’s “constructed intention”; in translation-into-a-theoretical-language the reader emphasizes language and the text rather than the “intention of meaning as source of the text”; and in intervention the reader intervenes by “construct[ing]” meaning to promote “affect.”5 Explication has been considered an interpretive activity like “construing the import of a remark in dialogue and explaining a datum in science.” Unlike a merely “decoding” activity (such as understanding the syntax of the simplest sentence or reading a thermometer), which is an “invariant or rule-governed translation” of a datum, an activity like explication is a “process of inference to the best explanation of the type familiar from the philosophy of science”C and important in the philosophy of realism.6 Explication has also been considered a prose paraphrase (especially in FranceD) or a prose translation or “an equivalent . . . only of the conceptual portion” of the “total meaning,” which is “a synthesis of conceptual and attitudinal meanings,”7 or a statement having the same reference though not the same meaning as a text in the sense that Sir Walter Scott and the author of Waverley have the same reference though not the same meaning,E or a reading of a text “for a knowledge of each part and for the relation of these parts to the whole.”F Any of the above definitions will serve for the purpose of the present work even though none of them are wholly satisfactory because of possible implications. For instance, the last definition may imply a reading confined within the limits of the text by New Critical insistence on the self-sufficiency of the text. But, if extratextual material (whether published or unpublished, public or private) such as an earlier draft, comparable text, authorial statement of intention, or historical or biographical information seemed relevant to understanding a text, explication as it is used here would take that material into consideration as evidence for a particular explication. Especially in France, explication preceded the New Criticism, had and still has a validity independent of it, and so need not be considered as having perished with the latter's demise. Just as that more generalized activity known as “close reading” is still viable,8 so is explication. The term construe is often used as a synonym for explicate, and, if its use as a noun were not so awkward from unfamiliarity, the term would be
as suitable in the present work as explication. After all, the range of its definitions allows it to be, like explication, a kind of interpretation yet distinguished from interpretation and, because of its association with syntax and grammar, at a more basic level than interpretation (or, if you prefer, at the most basic level of interpretation): construe . . . To apply the rules of syntax to (a sentence or clause) so as to exhibit the structure, arrangement, or connection of, or to discover the sense; to explain the construction of; to interpret; also, to translate, esp. orally. Also, Gram., now less commonly, to construct. . . . To put a construction upon; to explain the sense or intention of; spec., as disting. from interpret; to discover and apply the meaning and intention of with reference to a particular state of affairs; to interpret; understand; also, to deduce or infer by construction.G

Like explication, construe examines the relation of parts to the whole by examining “structure, arrangement, or connection”; construe too can be considered translation; and, like explication as distinguished above from explanation and exploration, it is differentiated from deconstruction by being associated not only with “construction” but also with authorial meaning (“sense or intention”). Most of these terms and distinctions are used, for example, by I.A. Richards to describe the difficulty of readers in making out the plain sense of poetry. . . . [They] fail to understand it, . . . to make out its prose sense, its plain overt meaning as a set of ordinary, intelligible, English sentences, taken quite apart from any further poetic significance. And equally, they misapprehend its feeling, its tone, and its intention. They would travesty it in a paraphrase. They fail to construe it just as a schoolboy fails to construe a piece of Caesar.9

CHAPTER TWO
THEORY AND PRACTICE

At first glance the above material may seem more suitable to a work of the '40s or '50s. And indeed, the present work does owe much to the doctrines of those decades. Nevertheless, it was written in and after the '90s under a familiarity with contemporary hermeneutics. Admittedly, hermeneutics must remain of paramount interest to the close reader of literature, if only because all critical practice illustrates theory whether or not the practitioner names the theory or even admits that one is operative. However, theory without practice is equally bad. Though practice without theory is blind, theory without practice is empty.1 As philosopher Richard Rorty succinctly put it, “[d]isengagement from practice produces theoretical hallucinations”;2 and so, much theory “seems arid and unreal, out of phase with concrete issues in critical practice and pedagogy, and out of touch with human needs and interests.”3 Even worse, theory can be misused “to stand outside practice in order to govern practice from without.”4 Therefore, a practice is still necessary in which it and theory mutually guide each other: “[J]ust as theory can clarify and reform interpretive practice, it itself can be enlightened and reshaped by that practice.”5 Theory can guide practice by being used to “regulate the kinds of evidence that literary scholarship provides”—i.e., “not to replace or dismantle historical evidence but to guide our use of it to help us sort out issues of relevance, priority, and persuasiveness” and so promote “problem-solving rather than field-defining.”6 And practice can guide theory because it “enables a certain testing of theoretical positions in the detailed terms of practical criticism” and, conversely, “encourages bringing into theoretical reflection the assumptions that often lie unexamined in traditional forms of practical criticism.”7 Besides, to “a high degree the form of . . . theory is practice. However esoteric, most theorists begin with a text, and they arrive at their generalizations not through direct statement but through teasing them out of that text.”8 By this procedure a theory “grows fortuitously out of encountering the peculiar contingencies of peculiar texts,” “allows itself to be modified by new bombardments of the same,” and thereby avoids being “embraced
from the start as a highly systematized fait accompli to which the perceived actualities of literary experience must bend.”9 And while theory “always depends on a background of entrenched interpretive practices that initially get it going and continue to orient it,” it is, in turn, “judged pragmatically by its fruits in the practice that it can help reshape and sustain.”10 But even where practice is considered only a means to an end that is theory, no one . . .—not Derrida or Foucault or Greenblatt or whoever—can do any work with literature at all without first performing acts of interpretation (something must be understood as something before it can be talked about). Interpretation, however unimportant ultimately to the critic, is the necessary basis of larger speculation, to the extent that that speculation is concerned at all with literature and what it reveals about, say, history, mind, language, politics, or culture.11

Indeed, through ever larger speculation, interpretive literary theory has become interfused with all those other branches of study, making its concepts “oceanic” and therefore almost unmanageable. However, they can still “become limited and manageable in interpretive practice.”12 The necessity of practice is also part of the hermeneutic philosophy of Hans-Georg Gadamer. Although, in his philosophy, every interpretation is “prejudiced,” it is not false per se: it is only “mediated by the prejudices of its time,” is an effect of history, and “betrays the marks of its birth.” But the “true” prejudice can be discriminated from the false only through practice. “The true prejudice is the one borne out by interpretation itself. What distinguishes the true from the false interpretation is not a principle but a process, for to historical beings truth is disclosed in the historical process of interpreting.”13 Explicating traditional texts anew is also necessary for their own sake and not just to test or “tease out” theory, make its concepts manageable, or ascertain “true” interpretive “prejudice.” In opposition to this view, contemporary theorists have from time to time and for different reasons called for a moratorium on new readings,A and the best of those reasons may seem to be that, after a theory is teased out of a text, the theory spawns countless similar readings of countless other texts: Ever since the New Critical discovery that almost any work of literature could be read as a complex of paradoxes and ironies, critical “methodology,” whatever other purposes it has served, has been an instrument for generating new “readings”—and thus new publications. The recent discovery that every text can be reinterpreted as a commentary on its own textual problematics or as a self-consuming artifact ensures that the production of new readings will not cease even though explication of many authors and works seems to have reached the point of saturation.14

But usually that point is not reached, for one has only “to immerse oneself in historical materials . . . to discover how little, on a given issue, has been settled once and for all.”15 There is always a need for a more clarifying reading regardless of the theory behind it and of the number of other readings available. According to Timothy Peltason,

Although many current theorists complain that we are flooded by explications and that criticism must find itself a new job, I do not find in the course of my own reading and teaching any overabundance of helpful guides to the poems and novels that I am puzzled by. . . . In my own study . . . , it is the companionship and guidance of such readings that I have most often missed.16

* * * * * *

Of course, any interpretation theory or practice may seem irrelevant to more recently favored theories—to the New Historicism, for example, which may be favored precisely because “theoretical reflection has not been able to devise clear, indisputable procedures for producing correct interpretations. This problem, it is thought, can be bypassed, if not resolved, by turning to history.”17 However, instead of bypassing the problem, the New Historicism merely translates it into other terms—interprets historical context which interprets literature.

The New Historicism is yet another fundamentally literary form of enquiry in which the verbal icon of the new critics is replaced with a cultural manifold to which the . . . exegetical skills of the literary critic are applied. . . . Greenblatt reads contexts as if they were metaphysical poems, and his method shares with New Criticism or deconstruction a hermeneutical license that has long been claimed by literary critics.18

In other words, “history does not tell us what the text is, because we decide what history is, and then put history into the text, rather than the other way around. . . . From this perspective, the new historicists’ contextualization is just another form of interpretation.”19 The case is the same with Cultural Studies. If “the literary text emerges in a space and with an effectivity provided by the larger culture,” it would seem that focusing on the larger culture is the way to focus on it. But . . . you cannot focus on the background array of social practices, on the “whole intertextual system of relations” within which everything is interdependent (“heteroglot”) and nothing free-standing, without turning it into an object which is itself in need of the kind of explanation it supposedly provides.20

In other words, that background array of social practices becomes an object that must itself be interpreted. Interpretive practice is inevitable in another sense too. If we are teachers of literature, we teach future readers or interpreters or consumers, not, as most other disciplines do, future scientists or practitioners or producers. People—students and teachers—want to go on enjoying literature qua literature, no matter how problematic that qua is rendered by deconstruction, feminism, historicism, reader-response, or other theories. They have no answer to the point . . . that we have little need any more, as a scientific or intellectual profession, for repeated readings of classic texts. But such readings are what hermeneutics produces—and what it wants to produce.21

CHAPTER THREE
REASONED ARGUMENT

What also shows the legacy of the '40s and '50s in the present work is the principle that reasoned argument should support readings. Whenever this principle appeared sporadically in the '70s and '80s, it was seen as part of a “growing reactionary movement in the academy to recover the ideals of logic, reason, and determinate meaning and to repudiate the radicalism of the sixties and seventies.”1 Of course, such a reactionary movement never developed in theory and criticism. Besides, reasoned argument does not become discreditable because it can be used in an unpopular cause.2 Nor does it become discreditable because current theory views logic and reason as merely persuasion and rhetoric—for this view is contrary to views in cognitive linguistics, cognitive psychology, evolutionary epistemology, and neurobiology. There, conscious reason (including logic) is considered a product of biological evolution. By contrast, persuasion and rhetoric are later and cultural developments born of human speech, which evolved dependently from conscious reason according to studies based on recent findings in cognitive neuroscience, archeology, and genetics.3 Of course, alternatively, speech may have evolved merely after conscious reason4 or concurrently with it5 or independently of it,6 but these possibilities are now less likely. Therefore, biologically evolved conscious reason could not be merely the separate, culturally developed persuasion and rhetoric. But even where it is still maintained that language is the source and cause of conceptual structure and thus of rational thought, it is only the quintessential elements in language that are so considered—syntax and “the core materials from which syntax is made” (thematic and structural relations). These are the only language elements that conceptual structure “expresses”: What conceptual structure does not express . . . includes the linear ordering of words into sentences and the morphophonemic shapes that linguistic concepts must take on if they are going to be used communicatively between two individuals.

In other words, conceptual structure contains just those elements that are universal (Language-with-a-big-L) and excludes just those elements that are language particular.7

Therefore, conceptual structure, the basis of rational thought, could not express or include persuasion and rhetoric, two elements that are language particular. Of course, one might still reject this conclusion. After all, there is much evidence that language influences perception and cognition,8 and if one were reductively to equate (1) influence with equivalence, (2) the influencing language elements with persuasion and rhetoric, and (3) influenced perception and cognition with logic and reason, then one could still view logic and reason as merely persuasion and rhetoric. One might also claim that, even if speech, persuasion, and rhetoric evolved dependently from conscious reason, their subsequent influence upon it has so overwhelmed it that now it is merely persuasion and rhetoric. However, most (though not all) specialists on current interrelations between language and thought are committed either to such a weak conception of the dependence of thought on language or to such a weak conception of the independence of thought from language—i.e., either to the conception that “language is itself the medium for some thoughts and is partly constitutive of those thoughts” or to the conception that “language facilitates or augments some forms of thought”—that it “becomes unclear quite which of the two doctrines” the specialists “intended themselves to be committed to.”A That leaves in an equally weak condition the conception that persuasive or rhetorical language is itself the medium for some thoughts and is partly constitutive of them. But, even if one still viewed logic and reason as merely persuasion and rhetoric, viewed thus and used in argumentation, they would still make possible the “forms of sociality” that include the peaceful resolution of conflicts, meaningful social criticism, higher education, and even self-transformation. Argumentation is the practice of a very tenuous hope that people can settle their conflicts nonviolently, that they can act differently from the way they otherwise would because they can open themselves to the dialogues that arguments are. In the process of developing this ability, a great deal more is accomplished, for this dialogue which is argumentation is finally indistinguishable from learning itself, indistinguishable from the practice of inquiry.9

It is not surprising, then, that both logic and reason are still used in all critical disciplines and seem to be still necessary to them. (Even in mathematics, plausible reasoning must supplement the deductive logic that might be supposed to constitute the discipline completely.10) Specifically,
logic and reason seem to be still necessary to theory and criticism and so must, with consistency, be accepted as necessary to interpretation too as long as interpretation is judged to be related to theory and criticism. Indeed, in contemporary rhetorical hermeneutics, reasoned argument about interpretations is accepted if not actually welcomed, for it is the raw material of that theory. Under it (according to Stanley Fish), we are not without rules or texts or standards or “shared points of departure and common notions of how to read.” We have everything that we always had—texts, standards, norms, criteria of judgment, critical histories, etc. We can convince others that they are wrong, argue that one interpretation is better than another, cite evidence in support of the interpretations we prefer, etc.; it is just that we do all those things within a set of institutional assumptions that can themselves become the objects of dispute.11

Or, as expressed by more recent theorists as diverse as K.M. Newton and Steven Mailloux,

The concept of “truth” or “validity” in interpretation should . . . be replaced by the concept of power. . . . The literary interpreter is engaged in a power struggle with other interpretations. . . . [But] power is achieved by the same methods that are used in the search for truth or validity: arguments founded on rationality, logic, the use of evidence, and so on. The only difference is the awareness that in the area of literary criticism these are usually the strongest weapons in the struggle for power.12

To recognize the rhetorical politics of every interpretation, is not to avoid taking a position. Taking a position, making an interpretation, cannot be avoided. Moreover, such historical contingency does not disable interpretive argument, because it is truly the only ground it can have. We are always arguing at particular moments in specific places to certain audiences. Our beliefs and commitments are no less real because they are historical, and the same holds for our interpretations.13

And even beyond such rhetorical theory that accepts reasoned argument about interpretations conditionally is other theory that accepts it unconditionally—not only deconstruction in the original, Derridean mode14 but also theory based on work in Anglo-American analytic philosophy. For example, James L. Battersby uses studies in the philosophy of mind, category formation, and interpretation and mental representation to show that “nothing in recent literary theory (in either its hermeneutic or its cultural/historical mode) has rendered obsolete, invalid, or second-rate inquiries into authors, interpretation, intentionality, determinate meaning, [or] objective value judgments” or has subverted
“the importance, indeed the indispensability, of practical reasoning . . . to the making and understanding of literary texts.”B

CHAPTER FOUR
SECOND-ORDER RELATIVE OBJECTIVE EPISTEMIC PROBABILITY (SOROEP)

If, then, it is assumed that explication can be distinguished from interpretation (or other kinds of interpretation) and is still needed in literary studies and that reasoned argument, which supports explication, is still a viable practice, the concept of second-order relative objective epistemic probability (SOROEP) can now be introduced along with the claim that, together with reasoned argument, SOROEP supports the practice of explication (though not necessarily interpretation) to the extent of providing a relativistic foundation for it.

4.1: Physical vs. Epistemic Probability

According to many (though not all) probability theorists, there are basically two kinds of probability—quantitative and qualitative. Quantitative probability, first associated with Pascal, is sometimes also called factual, stochastic, aleatory, outside, extensional, or frequentist (frequency) probabilityA but in the present work will usually be called by one of its common names, physical probability. It is numerical and mathematical and is concerned with statistics and the laws of chance processes inherent in nature and in some experimental devices (a flipped coin, a thrown pair of dice, a dealt hand of playing cards). On these chance processes everyone has the same information, and so physical probability is independent of knowledge, which can be different for different people. The second kind, qualitative probability, was first associated with Bacon and will hereafter be called by one of its common names, epistemic probability, although it is sometimes also called plausibility, credibility, inside, intensional, inductive, conditional, or logical probability, or, as in the last epigraph preceding Part I, subjectiveB probability. It is concerned with non-numerical degrees of justification or rational belief—qualitative measures of the extent to which propositions having no numerical character are justified or deserve belief on the
strength of evidence—and so is dependent on knowledge, which can be different for different people.1 Here, the “probability of interest lies within the cognitive system of the person making the judgment of probability rather than in the behavior of numerous objective events.”2 Admittedly, epistemic probability is sometimes indicated by “a number that represents, albeit with a ludicrous affectation of precision, the degree to which we are certain of something, or alternatively, the degree to which we believe it or the degree to which our evidence supports it.”3 But, since “there may be no sensible way to assign real numbers to degrees of confidence,”4 epistemic probability will be treated in the present work as non-numerical and representative only of the degree to which the evidence supports something. And here it should be noted that epistemic probability is sometimes also called relative probability because it is relative to its supporting evidence and that this relativity is a first-order relativity.
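
The contrast may be pictured schematically, in a notation the book itself does not use (the symbols R, E, and Pr below are introduced only for illustration). A physical probability is a number fixed by the chance process itself, for example

    \Pr(\text{a fair die shows } 6) = \tfrac{1}{6},

whereas an epistemic probability is relative to a body of evidence and, as treated in this work, receives no such number: for a reading or hypothesis R and a body of evidence E, the expression

    \Pr(R \mid E)

stands only for the qualitative degree to which E justifies (or could justify) R, so that two readers holding different evidence may, without error, stand in different epistemic relations to the same R.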

4.2: The Question of Necessary Consensus: Believability/Justifiability/Inferability, Evidentialism, and Well-Foundedness

Given that epistemic probability is concerned with degrees of justification or rational belief on the strength of evidence, the question arises whether the validity of such probability depends on consensus (which may range from majority agreement to unanimity) of people’s probability judgments. The answer to this question is that, under the following conditions, the validity of epistemic probability would be theoretically not dependent on consensus of probability judgments or on consensus of the evidence-based beliefs underlying those judgmentsC or on the beliefs themselves or even on belief at all: those conditions are if epistemic probability were described as concerned with degrees of believability rather than rational belief and of justifiability rather than justificationD—i.e., if (unlike mere belief, which is potentially different in all human beings) rational belief were understood as potentially universal in human beings5 and therefore as grounds for the universal abstraction believability, and if justification were here understood as justifiability—as not a human action (a verbal act of defense) but a circumstance or a relationship between two circumstances not necessarily involving a human being and so not necessarily variable because of human variability. Among probability theorists, philosophers of science, and epistemologists there are precedents for acknowledging the reality of the universal abstractions believability and justifiability. L. Jonathan Cohen acknowledges the reality of believability although he uses instead the
abstract terms credibility (“belief-worthiness”) and inferability.6 John Maynard Keynes uses inferrible in distinguishing hypothetical from assertoric inference;7 and since, in the logical theory of probability, probability is located in the “Platonic world of abstract ideas,”8 rational belief which underlies that probability must be located in that world too as the abstract idea believability. According to Richard Johns, logical probability “is not anthropocentric” but “is defined on objective states of affairs rather than human thoughts”—on “epistemic states” (e.g., believability) rather than beliefs. Or, if anthropocentricity is insisted upon, logical probability can be defined on “the epistemic states of a perfect, infinite intellect, a mind of unlimited capacity that infallibly draws all and only valid inferences. This mind is an embodiment of all logical truths,” and the contents of the beliefs “of this being are (or are indistinguishable from) states of affairs themselves.”9 Karl R. Popper too acknowledges the reality of abstractions like believability. He establishes a basis for his propensity theory of physical probability by positing a reality behind such “dispositional” terms as breakable, soluble, and red—where the ability to break or dissolve or “reflect a certain kind of light” lies in the thing alone. But he also posits an equally real dispositional ability that extends from the thing to its observers: Red-looking too is dispositional in that it “describes the disposition of a thing to make onlookers agree that it looks red.”10 Likewise, believable, credible, inferable and justifiable describe the disposition of a thing—e.g., a hypothesis—to make those considering it agree that it is believable, credible, inferable, and justifiable. According to Peter Achinstein, the dispositional concept “reasonableness of belief” (i.e., believability) is an abstraction independent of human agency and thereby makes possible what he calls “objective epistemic probability.”11 So too would the dispositional equivalents credibility, inferability, and justifiability make it possible. As for acknowledging that justifiability can be independent of human agency, Alvin I. Goldman assumes it in his analysis of justified belief,12 Paul K. Moser acknowledges it by differentiating between “justifiability” and “justifiedness,”13 and Laurence BonJour defends justifiability as “a priori justification.”E Given the above understanding that the justification supporting epistemic probability can instead be justifiability—i.e., can be a circumstance or a relationship between two circumstances not necessarily involving a human being and so not necessarily variable because of human variability—the epistemic probability used in literary explication would then be concerned with degrees of justification of different readings by different bodies of evidence or with degrees of the potential for different bodies of evidence to justify different readings. The validity of the
probability used in explicating a literary work would then not depend on a consensus of those who have explicated the work by using that probability. The condition described in the preceding paragraph is an example of “evidentialism”—the view that “the epistemic justification of an attitude [in this case, an explication] depends only on evidence” and not “upon the cognitive capacities of people [explicators], or upon the[ir] cognitive processes or information-gathering practices that led to the attitude.”14 This view would raise the justification of an explication to the status of well-foundedness, but it should be noted that this term is not equivalent to having an absolute foundation, despite the verbal similarity.15 However, well-foundedness has an effect similar to that of having a relativistic foundation based on epistemic probability because well-foundedness has the same effect as what is necessary to a relativistic foundation based on epistemic probability—the treatment of rational belief and justification as, respectively, believability and justifiability independent of human agency. That is, the well-foundedness of the belief is like the justification of believability or the justifiability of a belief in being independent of human agency and so invulnerable to the possible human defects to which the mere justification of the belief is vulnerable. Examples of such possible human defects are that the evidence supporting the belief may be “overridden by other evidence the person has”16 or that the belief, though correct, may not have been “properly arrived at”17 and may be based on something other than justifying evidence18—i.e., the belief may be held for “bad reasons” or “dogmatically, or from wishful thinking, or on some other epistemically faulty basis” and so be only “accidentally correct”19 or correct “due simply to lucky guesswork.”20 Incidentally, in the above Preface to this work, variant expressions for SOROEP-based explication were said to be “more than” justified. And now that evidentialism as well as a relativistic foundation based on epistemic probability can be seen as raising justification to the status of well-foundedness, those statements in the Preface can be understood in an additional way—not only that those variant expressions for SOROEP-based explication are more than sufficiently justified but also that their justification is raised to the status of well-foundedness.

4.3: Relativity between Probabilities

At this point, another problem seems to arise, but it can be evaded. When probability is applied to a reading of a literary work, one usually (and, I claim, correctly) assumes that the probability of a reading (or its degree of justification or potential justification) cannot be determined, cannot be given a numerical or quantitative measure (the one exception

being that a reading based on no evidence at all has a zero probability). Indeed, a reading cannot even be called “probable.” However, by means of reasoned argument based on evidence, a reading can be compared with or considered relative to another reading and can sometimes be estimated to be more probable than that reading,F even though by how much cannot be determined, cannot be given a numerical or quantitative measure.G This relativity between the probabilities of the two readings is made possible by the existence of gradation (or degree) in evidence and in justification based on evidence.21 But most important of all to emphasize is that this relativity between the probabilities is a second-order relativity because it is between probabilities each of which is relative to its own supporting evidence.H Strictly speaking, then, the kind of probability normally used in deciding between readings of a literary work might justly be called by a phrase that builds on Achinstein’s phrase “objective epistemic probability” quoted in the preceding section: either “second-order relative objective epistemic probability” (designated as “SOROEP”) or “objective epistemic probability that relates two first-order relative objective epistemic probabilities” (designated as “ROEP1 - ROEP2”). However, both expanded phrases and the latter designation are too prolix and awkward for repeated use. Consequently, in the present work, the simpler designation “SOROEP” is being used instead.
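
The logical shape of such a second-order comparison can be put in a short, purely illustrative sketch. In the Python fragment below, the class names, the three-valued verdict, and the sample evidence items are my own inventions, not the author's: each reading carries only its own body of evidence, no number is ever attached to either reading, and, apart from the limiting zero-evidence case, the verdict must be supplied by reasoned argument rather than computed.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    MORE_PROBABLE = "more probable"
    LESS_PROBABLE = "less probable"
    UNDETERMINED = "more-or-less as probable"   # no numerical margin is ever available


@dataclass(frozen=True)
class Reading:
    label: str
    evidence: frozenset   # the items of textual evidence this reading is adjusted to


def compare(a: Reading, b: Reading, argued_verdict: Verdict = Verdict.UNDETERMINED) -> Verdict:
    """Second-order comparison: each reading's (unquantifiable) probability is
    relative to its own evidence, and only the relation between the two is judged,
    ordinally.  The one determinate case is a reading resting on no evidence at all,
    which has zero probability and so loses to any evidenced rival."""
    if not a.evidence and b.evidence:
        return Verdict.LESS_PROBABLE
    if a.evidence and not b.evidence:
        return Verdict.MORE_PROBABLE
    # Otherwise the verdict is supplied by reasoned argument, not computed here.
    return argued_verdict


ironic = Reading("ironic reading", frozenset({"tonal shift in stanza 3", "title allusion"}))
bare = Reading("unevidenced reading", frozenset())
print(compare(ironic, bare).value)   # -> "more probable"
```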

4.4: Other Applications of SOROEP

Although not by either of those phrase names or designations, SOROEP has been shown to be relevant in other fields. It has been considered part of the scientific method, for the method can evaluate which one of “a pair of hypotheses . . . is more rational on the basis of the evidence provided”22 since “there is a rough consensus about how to order” the hypotheses “in degrees of probability.”23 In textual criticism SOROEP is relevant to establishing an “authoritative” text of a work.24 In linguistic semantics a SOROEP statement is called a preference rule—“a statement in probabilistic form of the relative strength of two or more items for interpretation relative to some property or properties.”25 And since “linguistic meanings are probabilistic, not deterministic, in nature . . . , a language can be characterized in terms of . . . preference-rule systems in which specific lexical items, for example, will have relatively preferred and relatively dispreferred semantic interpretations.” This also applies to literary language and to a literary work as a whole, for they are not “exempt from the same probabilistic preference-based processes of meaning reconstruction at work in other modes of language use. Literary analysis . . . entails assigning probabilistic weightings to candidate

interpretations, not separating the wheat of ‘readings’ from the chaff of ‘misreadings.’”26 Of course, in accord with the distinction made above between explication and interpretation, such “probabilistic weightings” are relevant more to the former than to the latter.I Explication is, in general, a more restrictive or less free procedure than interpretation, and probability provides a way to give form to or objectify that restrictiveness. Moreover, when that restrictiveness includes the requirement that explication attempt J to approach disinterestedness or other such related unattainable ideals, probability provides a way to give form to or objectify that attempt and so enables explication to approach that ideal more easily. The use of SOROEP to choose among readings is not new, especially among intentionalists. Although E.D. Hirsch, Jr., believes it to be not qualitative but a vague and non-numerical form of quantitative (“frequency”) probability,27 he and William Irwin believe that it validates which reading best approaches the author's intention realized in a literary work.28 P.D. Juhl implies the same by calling a better reading the “more likely” one,29 and so does Wendell V. Harris when he calls one meaning not “as likely as” another and another meaning the “most likely” or “most probable intended meaning.”K But, anti-intentionalists too believe that SOROEP is essential in validating interpretations—for example, phenomenologist Paul Ricoeur: As concerns the procedures of validation by which we test our guesses, I agree with Hirsch that they are closer to a logic of probability than to a logic of empirical verification. To show that an interpretation is more probable in the light of what is known is something other than showing that a conclusion is true. In this sense, validation is . . . an argumentative discipline comparable to the juridical procedures of legal interpretation. It is a logic of uncertainty and of qualitative probability.L

4.5: A Foundation of Rationalism

It should be admitted that any description of epistemic probability can be considered philosophically “trivial” or “uninteresting” since it is expressed in terms of other epistemic concepts (like justification, belief, evidence, and strength or weight of evidence) that can be considered to have no empirical foundation. Any such description therefore would be “account[ing] for the obscure in terms of the equally obscure”30 and so, rather than justifying a concept, would only be stating its interrelationship with other concepts. However, even if epistemic probability were describable only in terms of other epistemic concepts without empirical

foundation, it would not then become ill-founded—for even “deductive logic, which no one thinks ill-founded,” has no empirical foundation, “no epistemic anchorage . . . outside logic,” but “floats” or “rests on intuition.”31 Similarly, at least two key concepts on which epistemic probability rests—justification and gradation (or degree) in justification—are not empirically founded on “one's sensory (and introspective) experience of the world” but rest on “reason or pure thought alone.”32 Those two key concepts are “[b]asic epistemic norms” and, “like moral norms (and logical norms),” are themselves “justified not by being deduced from more fundamental norms (an obvious impossibility), but by their ability to sort specific, individual normative intuitions and other relevant data into the right barrels in an economical and illuminating way.”33 Besides, epistemic is like physical probability insofar as it may not need a supporting philosophical foundation. Indeed, when explicators use SOROEP as the ultimate criterion to favor a particular reading (as they still do even in this postmodern era34), they use SOROEP independently of any supporting philosophical foundation just as mathematicians use physical probability independently of any such foundation in their own field. The following passage could as well be describing conditions in explication as in mathematics: [N]o merely theoretical or philosophical discourse could bring down or eliminate an established mathematical practice unless some alternative means are suggested for doing what has been getting done. Hence, we need have little fear of finding that probability theory rests on shaky foundations, or even none at all; for the response of the practicing mathematician will surely be a shrug of the shoulders and continued use of probability theory. In fact, many mathematicians and philosophers already take the view that probability neither has nor needs any philosophical foundations—it is an uninterpreted part of formal measure theory which we use whenever we find it convenient, informative, practical. Such people get on very well in probability theory, and have even written quite good textbooks, despite their professed inability to explain what they are talking about.35

4.6: Physical vs. Epistemic Probability Again

But it is necessary to return here to emphasizing the distinction between physical and epistemic probability. Because physical probability is conventionally expressed in percentages (though it can be expressed otherwise—e.g., three-to-oneM instead of 75%), it suggests that percentages of a whole are being expressed; and if the probabilities used in

explication were mistakenly thought to be of the physical kind, they would suggest that a “more probable” reading is nearer to that whole than another reading is and that the whole must represent a reading which is complete, perfect, and ideal or which expresses, in Hirsch's terms, the “true” and “correct meaning,” the “author's meaning”—in other words, a reading whose possibility is indeterminable and disputed. However, when the probabilities used in explication are correctly thought to be of the epistemic kind, they suggest only that a “more probable” reading indicates a higher degree of justification or belief supported by evidence than another reading does. And if the term high suggests reference to a base “level” that the degree of justification of a reading is above, that base level represents zero degree of justification of a reading because that reading isN not based on evidence—in other words, a reading which is possible. Another way to express this relationship is that the “more probable” of two readings is more distant from a zero-probability reading or (if one feels awkward about using the adjective probable without reference to truth or falsity rather than the customary adverb probably modifying true or false) that the preferred of two readings is less probably false. These distinctions between physical and epistemic probability mean that one need not be an intentionalist and believe in the “true,” “correct,” or “author's meaning” in order to have justification for believing, on the basis of evidence, one explication to be “more probable” than another.

4.7: Formal Measure Theory and Fuzzy Measure

When physical and epistemic probability are viewed as part of formal measure theory, the distinction between them is, first, like the distinction between analytic and synthetic geometry (or analytic and synthetic classical physics). In analytic geometry, beginning with Descartes and Fermat, the magnitudes of continua such as lengths, areas, and regions are compared by comparing abstract numbers (i.e., absolute values) assigned to those magnitudes; whereas in synthetic geometry, beginning with Euclid, the magnitudes of those continua are compared by comparing physical, non-numerical ratios (i.e., relativistic values, like “longer,” “shorter,” “larger,” “smaller”) of those magnitudes. Secondly, the distinction between physical and epistemic probability is like the distinction between the extensive measurement of directly measurable properties (such as length and weight) and the intensive measurement of indirectly measurable properties (such as temperature and hardness). A property is measurable only indirectly when it can be measured only by measuring a directly measurable, correlated property—for example, measuring temperature by measuring the length of a column of mercury in

a thermometer. As for the distinction between extensive and intensive measurement, the former “is accomplished by counting concatenations and is supposed to make such statements as ‘The length of a is n times greater than the length of b’ meaningful,” whereas the latter “does not proceed by counting concatenations and is supposed to make only such statements as ‘The temperature of a is greater than the temperature of b’ meaningful.”36 The distinction between physical and epistemic probability is also like the distinction in fuzzy logic between physical probability and fuzzy measure (except that fuzzy measure is numerical whereas epistemic probability is not): Fuzzy measures assess how much the evidence proves a fact. . . . [I]f a patient were suffering from an unknown lung ailment, a physician might deem it either bronchitis, emphysema, pneumonia, or a cold and assign the following fuzzy measures: bronchitis, 0.75; emphysema, 0.45; pneumonia, 0.30; and cold, 0.10. In probability, such figures would have to add up to 1, no matter how much or how little proof there was. In fuzzy measures, they don't. Rather, each measure starts at zero and works up.37

The epistemic probability of a literary reading also “starts at zero and works up” (although to an unknowable and non-numerical extent). And, just as the “whole” represented by the numeral 1 need not exist as the sum of fuzzy measures, so the “whole” represented by the complete, perfect, ideal, true, correct, or author's reading need not exist in literary explication (the possibility of such a reading being indeterminable and disputed).O
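
The arithmetic of the quoted diagnostic example can be verified in a few lines. The snippet below simply reuses the four figures from the passage; the normalizing step at the end is my own addition, meant only to show what a probability assignment over the same alternatives would force.

```python
# Fuzzy measures from the quoted example: each assesses on its own how strongly
# the evidence supports a diagnosis, so the figures need not sum to 1.
fuzzy = {"bronchitis": 0.75, "emphysema": 0.45, "pneumonia": 0.30, "cold": 0.10}
print(sum(fuzzy.values()))             # about 1.6 -- no presupposed "whole"

# A probability assignment over the same four alternatives would be forced to
# exhaust a whole of 1, however weak the evidence:
total = sum(fuzzy.values())
prob = {d: m / total for d, m in fuzzy.items()}
print(round(sum(prob.values()), 10))   # 1.0 by construction
```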

4.8: Competition between Readings

Moreover, a literary reading is like any natural-science concept (not just a medical diagnosis) insofar as the concept is evaluated by comparison with competing concepts rather than by its approximation to a “whole”: The “‘fitness’ of any particular concept is not a quantity to be measured on some absolute fixed scale; particularly not some absolute scale telling only of approximation to objective reality. Solving a problem better than others might be a cause for success.”38 Such “success would be measured by growth in knowledge” whether of the literary work or of the natural world in the sense that that knowledge moves away from a condition in which we have lots and lots of particular statements, many of which seem incompatible with one another, to a situation in which we get more and more statements about regularities, most of which are compatible with one another. This movement is not to be thought of as an attempt to move toward a predetermined goal, but as movement away from a primitive condition. The further we move in this

direction, the greater the fit of our knowledge to the world we are living in [or to the literary work we are explicating—during that time a parallel “world” we are “living in”]. . . . All this shows that no concept of truth need be set up as a primary concept.39

And if all explanations are really only metaphors (as some cognitive linguists believe), the search for “true” explanations is really “a search for metaphors of progressively more sophisticated functional applicability to what we have come to know as the world [or the literary work].”40 As a result, the supposed role of reason—to find the “truth” about the world (or the literary work)—seems gone, for, without the concept of truth, there is no such role for reason. But this does not mean that reason is eliminated from the knowledge-acquisition process. It is merely transformed from a positive faculty which finds the truth into a negative, critical faculty which eliminates errors and, above all, those errors which consist in the presence of false theories and of theories which, though not false, explain less than other theories. Rational intellectual behaviour is now seen as error elimination rather than as truth-finding ability. In this negative and critical role, it is an essential part of the growth of knowledge.41

Here, then, is another reason (besides those in the preceding chapter) why the use of reason to select among readings is still justified.

4.9: “More-or-Less as Probable”

Instead of being more probable than an alternative reading, a reading can be more-or-less as probable as an alternative reading.P However, this phrase—“more-or-less as probable as”42—unfortunately suggests that the probabilities of the readings are nearly or even approximately equal or have “no significant difference” between them.43 And suggesting this may imply wrongly that the difference between the probabilities is measurable (in this case, almost zero).Q On the other hand, phrasing which completely avoids suggesting that tendency toward equality—like “the reading is somewhere from more to less probable than the alternative reading”—describes an indeterminate relationship between their probabilities and thus a state of ignorance about any probability relationship between them. As a compromise, there is phrasing that both avoids describing indeterminacy and only minimally suggests the tendency toward equality—e.g., the statement that “the reading is somewhere from not appreciably more to not appreciably less probable than the alternative reading.” In any case, such a statement is too unwieldy for repeated use—unlike “more-or-less as

probable as”—and so the latter can be used under the assumption that it serves as a convenient equivalent of the former. The above problem and compromise solution are given a graphic representation by Peter Achinstein. If the two probabilities were represented as points, their superimposition or their separation would be unwantedly measurable. Instead, each probability is represented as an imprecise “interval”—as a line of uncertain length “smeared out” lengthwise at both ends44 so that one cannot tell where the two gradually fading ends of each line finally disappear. And so, if one attempts superimposition of the lines, one cannot tell whether, in a lengthwise direction, they overlap completely or partly and by how much. They overlap “more or less.” The phrase “more or less” is used also in fuzzy logic where it is a standard modifier for expanding the range of a set.45 And in natural science, analogous phrasing is used: the formula for Heisenberg's uncertainty principle, for instance, is sometimes expressed using the verb phrase “is of the order of” or “has a value not very different from.”R So too in moral philosophy, where the logical space of comparability between two goods is sometimes said to include not only the positive value relations better than, worse than, and equally as good as but also on a par with.46 By the same token, one might say in literary explication that a reading “more-or-less as probable as” an alternative reading has a probability “of the order of” or “not very different from” or “on a par with” the probability of the alternative reading.S
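
Achinstein's graphic representation is not numerical, but the three-way outcome that the overlapping or separated intervals stand in for can be mimicked in a small sketch. The endpoint numbers below are placeholders of my own (in explication no such numbers exist); only the qualitative verdicts matter.

```python
from typing import NamedTuple


class Interval(NamedTuple):
    """A rough, 'smeared' probability interval; because its ends are imprecise,
    nothing finer than separation versus overlap is ever read off it."""
    low: float
    high: float


def relate(a: Interval, b: Interval) -> str:
    if a.low > b.high:
        return "more probable"
    if a.high < b.low:
        return "less probable"
    return "more-or-less as probable"    # the smeared intervals overlap by some amount


print(relate(Interval(0.4, 0.7), Interval(0.3, 0.6)))   # more-or-less as probable
print(relate(Interval(0.6, 0.8), Interval(0.1, 0.3)))   # more probable
```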

4.10: A Three-Part Problem

To use SOROEP in literary explication is not merely to use another and a common form of logic (which probability isT) for choosing among readings but also to address “one of the most pressing current problems facing literary critics” (a problem having three parts): “how to allow a variety of conflicting interpretations for a given text without abandoning all possibility of controlling the range of those interpretations, or judging among them.”47 A noticeable number of critics now sympathize with this view of the three-part problem by expressing in print their disagreement with both of the extreme opposing views in hermeneutics—that a work has only a limitless number of equally acceptable though different readings (which are necessarily misreadings) and that it has only one acceptable reading (which is ultimately discoverable).U In 1983 the William Riley Parker Prize for the Outstanding Article in PMLA that year was won by an article that presented “a theory of limited pluralism . . . to chart a middle way between” the two extremes.48 The award testified to the recognition

by at least MLA officers and some members that a “middle way” theory was needed and that, by implication, it was needed because a “middle way” practice was believed in. Two parts of the above three-part problem are addressed by the use of SOROEP. Showing that a reading is more-or-less as probable as an alternative reading implies that a work has more than one acceptable reading—i.e., “allow[s] a variety of conflicting interpretations for a given text.” And showing that a reading is more probable than an alternative reading suggests (though it does not determine) that the alternative “less probable” reading is either unacceptable or less acceptable merely by being less probable.V (It suggests even that a future reading still more probable may cause the current more probable reading to become unacceptable or less acceptable.) A more probable reading therefore suggests that not all possible interpretations are equally acceptable and so provides for the “possibility of . . . judging among them.” However, showing that a reading is more probable than an alternative reading does not suggest (let alone determine) that the number of equally acceptable readings is limited—i.e., that there is the “possibility of controlling the range of . . . interpretations.”W As a result, the condition in which SOROEP is used is similar to what epistemologist Richard Feldman calls the “total evidence” condition of “evidentialism” (sec. 4.2 above): For it to be true that a person’s evidence supports a proposition, it must be that the person’s total evidence, on balance, supports that proposition. It is possible to have some evidence that supports a proposition and some evidence that supports the denial of that proposition [or supports a different proposition]. If these two bodies of evidence are equally weighty, and the person has no other relevant evidence, then the person’s total evidence is neutral and suspending judgment about the proposition is the justified attitude. If one portion of the evidence is stronger than the other, then the corresponding attitude is the justified one. In all cases, it is the total evidence that determines which attitude is the justified one.49
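
Feldman's condition amounts to a three-way decision rule, which the toy sketch below renders with an ordinal stand-in for the balance of the total evidence. The stand-in and the names are mine; nothing numerical is implied by the passage itself.

```python
from enum import Enum


class Attitude(Enum):
    BELIEF = "believe the proposition"
    DISBELIEF = "believe its denial (or the rival proposition)"
    SUSPENSION = "suspend judgment"


def justified_attitude(balance_of_total_evidence: int) -> Attitude:
    """Positive if the evidence favoring the proposition is on balance weightier,
    negative if the opposing evidence is weightier, zero if the two bodies of
    evidence are equally weighty."""
    if balance_of_total_evidence > 0:
        return Attitude.BELIEF
    if balance_of_total_evidence < 0:
        return Attitude.DISBELIEF
    return Attitude.SUSPENSION


print(justified_attitude(0).value)   # -> "suspend judgment"
```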

The use of SOROEP is also similar to what Russian-literature critic Vladimir E. Alexandrov maintains in his book Limits to Interpretation: that, since a literary work “can motivate a specific range of divergent and even contradictory interpretations of varying degrees of plausibility,” one should determine which interpretations are plausible (more-or-less as plausible as each other?) and which are not plausible, and to do this, one should “verbalize differing degrees of . . . plausibility, which is, of course, difficult to do. . . . Nevertheless, because particular meanings are not always simply right or wrong, . . . it is important to try to articulate whenever possible the extent” of their differing degrees of plausibility.50

In addition, SOROEP can justify (though by different means) what aesthetics philosopher Annette Barnes aims to account for in her book On Interpretation: “how critical practice both tolerates a plurality of sometimes incompatible interpretations of artworks and nevertheless allows that confrontation and significant defeat may take place between critics.”51 In other words, the use of SOROEP incorporates as complements rather than contraries both “critical pluralism,” which “holds that admissible interpretations are equally preferable,” and “multiplism,” which “allows that admissible interpretations may be unequally preferable.”52 But pluralism and multiplism can be complements rather than contraries anyway, for “sensible” pluralists can be multiplists as well: There are degrees of pluralism, and some degrees of it are more plausible than others. . . . [S]ensible pluralists . . . may believe that lots of interpretations of a given work are better, or worse, than others. To be a pluralist you need believe only that it is possible for there to be a work of which there are two good interpretations, but where neither is better than the other.53

4.11: Criteria Other than SOROEP

Instead of epistemic probability, the ultimate criterion for judging among explications is sometimes identified as inclusiveness or coherence or their combination,X and each one or their combination may yield the most probable reading when applied to a particular literary text.Y But inclusiveness and/or coherence may not always do so when applied to a different text.54 On the other hand, in cognitive science, a recently developed computer program named ECHO can choose from among competitive scientific hypotheses the one of greatest “explanatory coherence” by weighting and combining criteria such as inclusiveness, simplicity (parsimony of necessary supporting assumptions), degree of higher-order explanations (explanations that explain lower-order explanations that, in turn, explain still lower-order explanations that, in turn, explain . . . data), and nearness (in kind) of supporting analogies.55 Since ECHO has (according to some reports) shown promise in simulating human explanatory reasoning,56 it should be noted that ECHO's inference to the most explanatorily coherent hypothesis is very similar to “Inference to the Best Explanation” (in philosophy) and to “abduction” (in artificial intelligence),57 that these are equivalent to the most plausible or probable inference58 or “Inference to the Likeliest Potential Explanation,”59 and that consequently ECHO's process may be approximating the human one of using SOROEP—not, it should be emphasized, of using physical relative probability.60 However, even though ECHO can be “viewed as an

intuitively appealing and computationally efficient approximation to probabilistic reasoning,”61 it remains to be seen whether ECHO can ever be developed to simulate a reader's mental operation of applying SOROEP to competitive readings of a literary text. Epistemic probability, then, may not always be reducible or translatable into other criteria like inclusiveness, coherence, simplicity, and their combinations and so may not be expendable. A similar situation exists in civil-law standards of proof. Normally, to win a civil case, a plaintiff must have on his side “the preponderance of evidence” (the most common phrasing in America) or “the balance of probability” (the most common in Britain). But, like inclusiveness, preponderance may vary from one case to another, for it “must be preponderance of probative force, rather than preponderance of mere volume.” Therefore, “proof on the preponderance of evidence” is usually “interpreted as requiring proof that a proposition is probably true (i.e., more probably true than false).”62 Here too, epistemic probability may not be expendable.
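
By way of illustration only, the kind of weighting and combining of criteria ascribed to ECHO above can be sketched as a toy scoring function. The sketch is my own simplification (the actual program settles a network of coherence constraints rather than summing weights), and the criterion weights and hypothesis scores are invented.

```python
# Hypothetical weights for the criteria named above; the values are invented.
CRITERIA_WEIGHTS = {"inclusiveness": 0.4, "simplicity": 0.2,
                    "higher_order": 0.2, "analogy": 0.2}


def coherence_score(scores: dict) -> float:
    """Combine per-criterion scores (each between 0 and 1) into one weighted figure."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)


hypotheses = {
    "H1": {"inclusiveness": 0.9, "simplicity": 0.5, "higher_order": 0.6, "analogy": 0.4},
    "H2": {"inclusiveness": 0.7, "simplicity": 0.9, "higher_order": 0.3, "analogy": 0.5},
}
best = max(hypotheses, key=lambda h: coherence_score(hypotheses[h]))
print(best, round(coherence_score(hypotheses[best]), 2))   # H1 0.66
```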

CHAPTER FIVE
EVIDENCE AND HYPOTHESIS

5.1: A Good Hermeneutic Circle

Also unexpendable is the use of evidence in support of reason and SOROEP. Admittedly, evidence has its limitations. To be considered “evidence,” any datum either must have originally suggested to the explicator the particular tentative reading (i.e., a hypothesisA) that the explicator adopts or else must be “made into” evidence by that reading-hypothesis (whereas that datum may not be by a different one). In either case, the explicator must first recognize—i.e., interpret—that datum as evidence before it can be evidence in an interpretation process. But this condition need not be considered a paralyzing hermeneutic circle; it means only that the explicator must adjust and readjust data and hypothesis to each other repeatedly in order to obtain a working hypothesis—i.e., a reading either more probable than readings less adjusted to the same data (or adjusted to less or less reliable or less important data) or more-or-less as probable as the most probable reading(s) adjusted to different data. In other words, each adjustment between data and hypothesis is the result of a SOROEP judgment. According to Leo Spitzer, to adjust data and hypothesis to each other repeatedly is to work from the surface to the “inward life-center” of the work of art: first observing details about the superficial appearance of the particular work (and the “ideas” expressed . . .); then, grouping these details and seeking to integrate them into a creative principle which may have been present in the soul of the artist; and, finally, making the return trip to all the other groups of observations in order to find whether the “inward form” one has tentatively constructed gives an account of the whole. The scholar will surely be able to state, after three or four of these “fro voyages,” whether he has found the life-giving center.

This repeated, reciprocal movement or circle

is not a vicious one; on the contrary, it is the basic operation in the humanities, the Zirkel im Verstehen as Dilthey has termed the discovery, made by the Romantic scholar and theologian Schleiermacher, that cognizance in philology is reached not only by the gradual progression from one detail to another detail, but by the anticipation or divination of the whole—because “the detail can be understood only by the whole and any explanation of detail presupposes the understanding of the whole.” Our to-and-fro voyage from certain outward details to the inner center and back again to other series of details is only an application of the principle of the “philological circle.”B

According to linguist Richard Harland, a reader unconsciously uses this process of repeated reciprocal adjustment when interpreting syntagmatic combinations of words: The reader keeps “the evidence of the parts and the assumption of the whole all up in the air together, juggling between them until everything can be made to slot home simultaneously. This is the art of creative hypothesis, of leaping ahead to hunches and checking back afterwards against the data.” But there are times when the reader becomes conscious of this process: when trying to interpret “exceptional” syntagmatic combinations—conceptual paradoxes like “billion-ton pebble” and “primordial supplementation.”1 By the same token, the reader uses the process unconsciously when explicating a literary work “from scratch,” but she becomes conscious of the process when she tries to explicate a conceptual paradox and when she tries to decide between two different readings of the work.
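
The to-and-fro procedure described in this section has the shape of a simple iterative loop, sketched below. The helper names propose, adjust, and is_improvement are placeholders for the explicator's own (largely implicit) procedures; nothing in the sketch computes a probability, and the cap on rounds reflects only the point that the circle is curtailed rather than endless.

```python
def working_hypothesis(data, propose, adjust, is_improvement, max_rounds=10):
    """Repeatedly re-adjust data and hypothesis to each other until no round yields
    a reading judged (by a SOROEP comparison) more probable than the current one;
    then adopt the current reading as the working hypothesis."""
    hypothesis = propose(data)
    for _ in range(max_rounds):                                # curtailed, not endless
        evidence = [d for d in data if adjust(hypothesis, d)]  # data "made into" evidence
        revised = propose(evidence) if evidence else hypothesis
        if not is_improvement(revised, hypothesis):
            return hypothesis                                  # no more probable reading found
        hypothesis = revised
    return hypothesis
```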

5.2: Implicit (or Tacit) Procedural Knowledge

The unconscious “building” of an explication “from scratch” by the process of repeated reciprocal adjustment is due to what is called “implicit (or tacit) procedural knowledge”—unconscious knowledge about how to perform various conscious cognitive activities such as understanding and producing language, solving problems, making decisions, reasoning, developing interpretive categories, drawing inferences, and playing sophisticated games like chess. Because implicit procedural knowledge is unconscious, one can display it only by using it; one cannot describe it. For instance, in understanding language, children successfully internalize all the grammatical rules that identify all and only the acceptable grammatical forms, whereas no linguist has ever been able to describe all those rules to make a complete grammar of the language.2 Similarly, in explicating a literary work, the reader has long since internalized specific knowledge of (1) how to infer what unconsciously selected data may act as evidence supporting a reading not preconceived, (2) how to infer a

reading (i.e., derive a hypothesis or inference) from evidence,C (3) how to reciprocally adjust data and a reading to each other, and (if she wants to), (4) how to decide between two different readings on the basis of SOROEP.D The reader is not conscious of the knowledge how she does these things (even though she is conscious that she is in the process of doing them), and neither she nor anyone else can describe that knowledge. Of course, she may be able to provide a reason (actually, rationalize), for example, why one reading is more probable than another or more-or-less as probable as another, but she cannot describe any of the above internalized specific knowledge underlying that reason. “[P]eople have implicit awareness of various probability principles, and deploy them spontaneously in favorable circumstances,”3 but they cannot know consciously the system governing that deployment. That knowledge is inaccessible—not (as it seemed to the hermeneutist Johann Martin Chladenius in 1742) because “a rational theory of [epistemic] probability has not yet been sufficiently developed” (see the fifth epigraph preceding the beginning of Part I) but because that knowledge is implicit procedural knowledge. In the implicit system, knowledge about how to use consciously used probabilities is unconscious since the probabilities are “embodied,” whereas, by contrast, in the explicit system there is not only conscious use of non-numerical probabilities but also both conscious “extensive use of numerals to express probabilities”4 and conscious knowledge about how to use those numerals to express them. The implicit system serves the explicit one: “implicit cognitive processes . . . deliver content for conscious analytic processing and shape its direction in line with the relevance principle”—the principle that what is most relevant in the current context is “what is most probable or plausible.”5 Moreover, the implicit system is primitive in evolutionary terms; it evolved long ago into an adaptive functional system and so has not changed since then. By contrast, “explicit cognition is newer, and less fixed by evolutionary processes. Consequently, while people differ widely in terms of explicit cognitive abilities, they do not differ very much in terms of implicit cognitive processes. Implicit cognition is a process in which we all share, and share alike.”6 That is one reason (others will be presented in Part II) why consensus is possible among readers judging which of two readings is more probable. Of course, implicit cognition does not insure that these readers will reach consensus, for a SOROEP judgment depends partly on implicit procedures that can select differently for different readers—for example, selecting data (mentioned above) that may act as evidence supporting a reading.E In contrast to one reader’s unconscious, another’s might select different data that becomes different evidence supporting a

different reading. Nonetheless, a SOROEP judgment depends also on kinds of implicit knowledge that “we all . . . share alike,” that are uniform among readers (like knowledge, mentioned above, of how to infer a reading from evidence and how to reciprocally adjust data and reading to each other); and so the implicit system fosters consensus among readers even if it does not insure it. As a result, for example, when subjects in experiments are presented with a particular reading and asked to select from a body of mixed data evidence to support the reading, they show not uniformity but consensus not only in selecting evidence but also in judging how strongly an item of evidence supports the reading.7 Implicit knowledge may be considered part of a larger inclusive cognition now commonly called “System 1”; explicit knowledge may be considered part of a larger inclusive “System 2” cognition; and the general description of the two inclusive systems is similar to that of the two included kinds of knowledge: The two systems have different evolutionary histories. System 1 . . . is the ancient system. . . . It is really a bundle of systems that most theorists regarded as implicit, meaning that only the final products of such a process register in consciousness, and they may stimulate actions without any conscious reflection. System 2, in contrast, is evolutionarily recent and arguably unique to humans. . . . System 2 function relates to general measures of cognitive ability such as IQ, whereas system 1 function does not. . . . [S]ystem 2 allows us to engage in abstract reasoning and hypothetical thinking.8

By contrast, System 1 is responsible for reasoning that is not abstract and for probabilities that are “considered as summary statistics computed over . . . implicitly acquired world knowledge.”9 (These probabilities would also translate into epistemic probabilities.10) There is “recent supporting evidence of a neuropsychological nature for this theory” of two distinct cognitive systems. When resolving conflicts between a belief (in System 1) and logic (in System 2), “the response that dominates correlates with distinct areas of brain activity.”11 An interesting parallel to implicit procedural knowledge is implicit procedural memory, and here the above-described differences between implicit procedural knowledge and explicit knowledge parallel the differences between implicit procedural memory and explicit memory. The testing of severe, incurable amnesiacs—particularly Clive Wearing, a noted English musician-musicologist—has shown that there is a conscious memory of events (episodic [or explicit] memory) and an unconscious memory for procedures—and that such procedural memory is unimpaired in amnesia. . . . Episodic or explicit memory . . . develops

relatively late in childhood and is dependent on a complex brain system . . . that is compromised in severe amnesiacs and all but obliterated in Clive. The basis of procedural or implicit memory . . . involves larger and more primitive parts of the brain . . . [whose] size and variety . . . guarantee the robustness of procedural memory. . . . Some [procedural memories] may be present even before birth (fetal horses, for example, may gallop in the womb). Much of the early motor development of the child . . . start[s] to develop long before the child can call on any explicit or episodic memories.

If Clive is asked how he does the same daily procedures that he did before being stricken by amnesia, “he cannot say, but he does them. Whatever involves a sequence or pattern of action, he does fluently, unhesitatingly,” especially sight-reading, singing, playing, and conducting music and learning new music.12 In this respect, Clive and a normal reader of a verbal text are comparable. When the reader unconsciously develops an understanding of the verbal text, her understanding it in one way rather than another is part of her implicit procedural knowledge just as Clive’s preserved understanding of music is part of his implicit procedural memory and knowledge. Like a verbal text, a piece of music will “teach one about its structure and secrets.” And like reading a verbal text, [l]istening to music is not a passive process but intensely active, involving a stream of inferences, hypotheses. . . . We can grasp a new piece—how it is constructed . . . because one has knowledge, largely implicit, of musical “rules” (how a cadence must resolve, for instance) and a familiarity with particular musical conventions (the form of a sonata, or the repetition of a theme).13

These “rules” and conventions are comparable in a verbal text to syntax and usage respectively.

5.3: Repeated Reciprocal Adjustment

If it is assumed that, in the process of repeated reciprocal adjustment, the number of data (and therefore amount of evidence) and the number of possible hypotheses are infinite, it may be thought that adjustment between them would have to be done an infinite number of times and not just “repeatedly.” But that theoretically would produce the infinitely superior reading (the best if not the complete, perfect, and ideal reading), whereas what is needed is merely a reading more-or-less as probable as or more probable than other readings. At a certain point in the process, having adjusted and readjusted a finite number of data and hypotheses to each other repeatedly, the explicator could and would curtail the process and

adopt such a reading (or readings) until such time as a newly conceived hypothesis or newly discovered or realized evidence would urge a recommencement of the process.F By such repeated reciprocal adjustment, working hypotheses have been obtained continually over the years in literary explication—just as, by that process, working hypotheses have been obtained continually over the centuries in natural science:G [T]he logic of science is circular interpretation, reinterpretation, and self-correction of data in terms of theory, theory in terms of data. Such a view of science is by no means new: it is to be found in all essentials in those fathers of inductive science, Francis Bacon and Isaac Newton. . . . [T]he logic of science . . . is virtuously rather than viciously circular.14

Or, in terms more up to date, “[w]e have, as it were, a spiraling set of nested feedback systems, as advances in theory respond to feedback and in turn enrich the observation language to make new corrections possible.”H Incidentally, this spiral process (it should be noted) is not undermined by the assumption that the observations, the data, the “facts themselves are theory laden.” Even if there is “no representation of facts without the observation language, and no observation language is just ‘given’ as theory-free,” yet we do generally have an observation language rich enough to express observations not assimilable to current theory. Moreover, we also have the ability to recognize experiences that are not well described in the observation language. We can thus talk of feedback even though our language may not be able to formulate the range of such feedback. Observation language and even perception are theory laden, yet they are rich enough to contain feedback that contradicts the theory.I

This applies to literary explication also. Even if it is assumed that, before a datum is interpreted as evidence in an explication process, the datum, the language describing it, and the recognition or perception of it are “theory laden” (i.e., influenced or determined by the current hypothesis or hypothetical reading), yet that datum, language, and recognition or perception are rich enough to contain feedback that is not assimilable to the hypothesis or that contradicts it.

5.4: A Reading as a Working Hypothesis

By definition, a hypothesis can never be certain. Still, that a reading based on evidence can never be certain—never be more than a working hypothesis, supportable but unprovable—means only that it is limited, not

that it is irrelevant or meaningless. Though limited, it can still be helpful and clarifying.J And it can supply a basis for a full interpretation even as an “authoritative” text (which is itself only a working text) can supply a basis for a full interpretation. Therefore, although literary criticism should (as Jonathan Culler said) advance “understanding of the conventions and operations of an institution, a mode of discourse,” it should also (pace Culler) produce “yet another interpretation of King Lear”15—one based on explications more-or-less as probable as other explications or, with luck, more probable. As philosopher Hans Reichenbach noted and the title page of this book quoted, we have “no Archimedean point of absolute certainty left to which to attach our knowledge of the world”; however, we have instead “an elastic net of probability connections floating in open space.”16 In other words we have not an absolute foundation but a relativistic one; and that is enough of a foundation to make possible “various discursive formations” such as working hypotheses and SOROEP: [I]n a world where the ultimate grounds of reality are not available to us even as we live them out—in our world as opposed to the world as seen by God—the facts and values and opportunities for action delivered to us by various discursive formations are not second-hand, are not illusions, are not hegemonic impositions, but are, first of all, the best we have, and, second, more often than not adequate to the job.17

PART II: CONSENSUS OF SOROEP JUDGMENTS

CHAPTER SIX
GENERAL CONSIDERATIONS

6.1: Consensus Despite Human Variability

In section 4.2, it was shown that, theoretically, the validity of the probability used in explicating a literary work would not depend on consensus of those who have explicated the work by using that probability. It follows then that, likewise, the validity of the SOROEP between two explications would theoretically not depend on consensus of those who have explicated the work by using SOROEP. However, consensus is reassuring to have regardless, and it so increases confidence in a SOROEP judgment that consensus may be regarded as necessary practically even if it is not theoretically. But if it is regarded as necessary practically, the question then arises whether it can be obtained—whether it is inevitable, probable, or even possible. Most literary critics of the postmodern era would respond that consensus is impossible. According to Cultural Studies theorist David R. Shumway, those critics learned from the “experience of several decades of academic literary interpretation” not only that “consensus about the meaning of a literary work could not be produced” but also that “Theory” explained and justified “the failure to achieve consensus” and produced “even a broader range of disparate readings.” As a result, the critics became interpretive relativists. However, Shumway then points out that when they “practice in their specialties, . . . they do not behave like relativists. They argue that their interpretations . . . are better or truer than rivals,” and “they present their own as having a compelling claim to which the profession ought to attend.”1 If their readings are explications rather than interpretations (as differentiated in Chapter 1 above), the profession can judge (and reach consensus in judging) whether the readings are “better or truer than rivals,” for such judgment is based on forms of knowledge that are implicit in the members of the profession and that (as Shumway adds) “involve the possibility of a high degree of consensus.”2 Moreover, consensus on a SOROEP judgment can be obtained because, “both for individual readers and for reading collectivities, literary

interpretation involves probabilistic reasoning—judgments to the effect that interpretation X can be given a higher probability weighting than interpretation Y.”3 “Given that individuals make the same direct judgments . . . and given that the situation is one in which probabilities can be compared, then they will order the probabilities in the same way.”4 And consensus can be obtained despite both the human variability and the human fallibility that is assumed to pervade the making of SOROEP judgments. The variability is real, but it is controlled by cognitive conditions which will be discussed in subsequent chapters. Here, an example will show merely that the variability is controlled and that consensus results. According to a German study performed in the early 1980s and reported by mathematician Keith Devlin, when German subjects were shown a sentence containing a noun modified by the German adjective for few or several and were asked to estimate the number that the adjective represented, their estimates varied with the size of the few or several objects named, with the size of nearby objects named, with the “activity” status of the few or several objects, and with the size of the “frame” through which the objects were observed. For instance, the median estimates for several crumbs, several paperclips, several pills, several children, several cars, and several mountains were 9.69, 8.15, 7.27, 5.75, 5.50, and 5.27 respectively. In the sentence “In front of the hut a few people were standing,” the median estimate of a few people increased from 4.55 to 5.33, then to 6.34, and then to 6.69 when hut was replaced by house, then city hall, and then building respectively. In the city hall variant of the sentence, the estimate decreased from 6.34 to 5.14 when standing was replaced by working. And in the sentence “Out of the window one can see a few people,” the estimate decreased from 5.86 to 4.76 when Out of the window was replaced by Through the peephole. But the important point here is that, although “the numbers the respondents gave varied from context to context, within each context the responses were remarkably consistent from person to person.”5

6.2: Consensus Despite Human Fallibility

As for the human fallibility that is assumed to pervade the making of SOROEP judgments, it is only apparent. A notable psychological experiment that supposedly shows this fallibility does not really do so although it does show that, when human subjects are faced with making a particular kind of relative-probability judgment, most of them violate a basic principle of probability. In this experiment the subjects are first asked to read a description like the following: “Linda is 31 years old,

single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.” Then the subjects are asked which of two statements is more probable—“Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” Most of the subjects incorrectly choose the latter statement and do so even when the statements are rephrased in an attempt to prevent the wrong choice. This choice violates the basic probability principle known as the conjunction rule: that the conjunction of two events cannot be more probable than one of those events alone. Linda's being a bank teller and something else as well cannot be more probable than her being a bank teller.6 However, the conjunction rule is a principle of physical probability, not of epistemic probability, for it does not depend on evidence, the initial description of Linda. It is an independent logical or mathematical axiom: X + Y cannot be more probable than X. Yet the experiment is set up as if it were an exercise in epistemic probability: The subjects are first given the description of Linda and then asked to choose between two different statements about her. Because of this unintentionally “deceptive” epistemic setup, the subjects naturally assume the description to be relevant to one statement or the other.A So influential is the initial description upon their judgment that results similar to those in the Linda experiment occur even in an experiment where the initial description is not stated outright but is only implicit in each of the two statements. In this experiment the subjects are asked to decide which is more probable— ”President Reagan will provide federal support for unwed mothers” or “President Reagan will provide federal support for unwed mothers and cut federal support to local governments.”7 The unstated description that the subjects would infer would be something like “President Reagan is a conservative Republican and so is more likely to reduce or deny federal support for projects other than federal ones.” The subjects also naturally assume that the statements will be representative of real-world choices in truly being alternatives, and so the subjects interpret the statements (despite the phrasing) as “Linda is a bank teller” versus “Linda is active in the feminist movement” and “President Reagan will provide federal support for unwed mothers” versus “President Reagan will cut federal support to local governments.” Indeed, far from showing that humans are fallible in making relative-probability judgments, the two experiments show that the human reaction to answer questions and solve problems by making SOROEP judgments between alternatives is so ingrained as to be practically instinctive.B Indeed, according to cognitive scientist David R. Olson, that reaction is found even in pre-school children. When they are shown a group of five animals composed of three ducks and two rabbits

and are asked: “Are there more ducks or more animals?” they tend to reply “More ducks ’cause there’s only two rabbits.” When asked to repeat the question they sometimes reply, “You said ‘Are there more ducks or rabbits?’” That, of course, is not what was asked.8 This ingrained reaction is an example of the way in which human rationality in its evolutionary adaptive sense appropriate to achieving practical goals efficiently and (in general) reliably is distinct from and, in many cases (including the present one), contrary to human rationality in the normative sense appropriate to making logical decisions and social judgments.C An “inductive rule like the conjunction rule for independent events from classical probability theory” is “not likely on the list of rules that were the output of our original reasoning capacity,”9 which evolved only to the degree of insuring our species survival. Therefore, human rationality in its evolutionary adaptive sense may sometimes produce decisions and judgments that are theoretically incorrect; nevertheless, they are practical in that they insure (even if they sometimes needlessly overinsure) continued advantage and even survival. Having caught a glimpse of stripes moving through the jungle, the animal that hangs around pondering the frequency of tigers in this part of the jungle is much more likely to end up dead than one that treats stripes as “oncoming tiger” and flees accordingly. As world events run on time-scales that are not of our making, the evidence pointing to the precipitateness of human judgement—far from defining fatal flaws—reveals, on the contrary, a very effective system for exploiting what we know at the time when it can be of most use. We can live with the resulting false positives just as we can live with our failures to be dead certain.10
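
The conjunction rule that the Linda experiment turns on can be stated, and checked by brute force, in a few lines. The toy sample space and the uniform weighting below are mine and purely illustrative; the rule itself holds for any distribution whatever.

```python
from itertools import product

# Toy sample space: every combination of the two traits, weighted uniformly.
people = [{"teller": t, "feminist": f} for t, f in product([True, False], repeat=2)]


def prob(pred):
    """Probability of a predicate under the uniform distribution over the toy space."""
    return sum(pred(p) for p in people) / len(people)


p_teller = prob(lambda p: p["teller"])
p_both = prob(lambda p: p["teller"] and p["feminist"])
assert p_both <= p_teller          # the conjunction can never be the more probable event
print(p_teller, p_both)            # 0.5 0.25
```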

Indeed, even when speed is not essential, judgments that are practical even if theoretically incorrect are evoked by problems that are concrete or natural (i.e., customary). According to psychologist Seymour Epstein, the experiential system of information processing that evolved in the human unconscious interprets problems in terms of past experience and so fosters appropriate responses to concrete or natural (i.e., customary) problems but not to abstract or unnatural ones. The system can be misled by abstract problems requiring unnatural interpretations. Since the Linda problem has information on personality characteristics as well as behavior, the natural interpretation is that matching behaviors to personality is required whereas the unnatural interpretation is that the problem is statistical.11 This natural-unnatural criterion is relevant to the Linda problem in other ways too. According to psychologist Ranald R. Macdonald, it would be natural for the tested subjects to feel that the test statement about Linda becomes

“more believable as it develops” from her being a bank teller to her also being active in the feminist movement, “despite the fact that there is necessarily more to believe.” Otherwise, it would seem that “the shorter any explanation is the more believable it is.” Moreover, “in natural language questions are always motivated”—i.e., “they are only asked when there is some reason to expect a positive answer.” Therefore, the question “Is Linda a bank teller?” is “preposterous if the occupation is randomly chosen.” The subjects are reacting to each question about Linda “as if it were one that might be naturally asked,”12 given the original description.

6.3: Consensus Across Disciplines: A Product of Evolution

In most fields of study, consensus of probability judgments based on evidence has been regarded as necessary practically but also as readily obtainable—in the field of metaphysics, for instance. There, the “self-sufficiency of a world theory and its independence of any one man's judgment are based on a qualified application of multiplicative [i.e., consensual] corroboration superimposed upon judgment of the adequacy or degree of structural [i.e., evidential] corroboration.” As a result, metaphysicians “come to agree . . . under certain circumstances about the structural agreement of fact with fact” and, in particular, “come to essential agreement about the shortcomings of a world theory, once the claims of dogmatism have been set aside.”13 In natural science too, consensus is regarded as necessary practically and as obtainable in order to confirm interpretive hypotheses which are considered probability judgments supported by evidence. Confirmation of such hypotheses can be obtained under one or other of two conditions. First, under the right circumstances enough evidence will make the probability of a true hypothesis converge toward unity. . . . Second, it may be assumed that in regard to experience of the natural world, most competent investigators at any given time share prior beliefs about what hypotheses are possibly true, and there is a rough consensus about how to order them in degrees of probability. . . . [This] de facto consensus . . . may be assumed in many elementary cases of “coping with the world” because of the common experience of human beings with the basic circumstances of survival. Such consensus has been found to carry us quite far in justification of much more elaborate hypotheses than are directly needed for survival, and success in applying these is called the progress of science.

To be sure, the “right circumstances” necessary to satisfy the first condition “are not often realized,” and the consensus that is the second
condition “can sometimes be only culture-wide, and can sometimes break down even within a culture when issues of social, political, or religious significance are at stake.” Nonetheless, [i]n the natural sciences, at least at a low level of theory and in local domains, objectivity is eventually arrived at—in other words, either the circumstances of the first condition are approximately realized, or consensus reduces to those human-wide assumptions that may be supposed to be objective products of evolution. But it is clear that this is much more difficult to achieve in the interpretive sciences.14

Indeed, satisfaction of the first condition is impossible to achieve in the interpretive science of literary explication (as Part 1 has tried to show). However, in respect to the second condition, our shared ability to read artificial signs which constitute literary writing may have evolved over the millennia from the shared ability to read natural signs (the weather, animal footprints, edible wild plants, etc.) which determined survival. The consensus in reading the natural signs may then have carried us “quite far in justification of much more elaborate hypotheses than are directly needed for survival”—for instance, in justification of hypothetical readings of literary works. And success in applying these hypothetical readings, though “much more difficult to achieve” than in natural science, has been considered progress in the understanding of literary works, at least until the advent of the postmodern era. By the above evolutionary view of reading and interpreting, consensus would be due not just to “nurture” but to “nature.” And consensus would be possible not just among adult readers in the same “interpretive community”15 or “cultural climate”16 but among adult readers proficient in understanding a common language whether or not it was their first language and they had crossed cultures17—for, even if adult readers have different first languages and no common language, they show no “lasting cognitive differences.”18 And even if adult readers interpret in accordance with different theories of interpretation, they have the same recourse that scientists have who interpret nature under different “paradigms”: The stimuli that impinge upon them are the same. So is their general neural apparatus, however differently programmed. Furthermore, except in a small, if all-important, area of experience even their neural programming must be very nearly the same, for they share a history, except the immediate past. As a result, both their everyday and most of their scientific world and language are shared. Given that much in common, they should be able to find out a great deal about how they differ. . . . [Each should be able] to translate the other’s theory and its consequences into his own language and simultaneously to describe in his
language the world to which that theory applies. . . . Since translation, if pursued, allows the participants . . . to experience vicariously something of the merits and defects of each other’s point of view, it is a potent tool both for persuasion and for conversion.19

Nor would consensus be impossible even between readers having apparently different standards of evidence: The “very deep-seated disagreements which have encouraged the idea that standards of evidence are culture-relative—or, in the intra-scientific form of the variability thesis, paradigm-relative—may be explicable . . . as lying . . . in a complex mesh of further disagreements in background beliefs, rather than in any deep divergence of standards of evidence.” Actually, there is an “underlying commonality of criteria of evidence.” This may not be apparent because, as a result of evolution, human “perceptual judgments are, though not infallible, involuntary” and “natural” and so have become the universal basis for criticizing “epistemic practices and norms.” But this condition means that the “underlying commonality of criteria of evidence” is only “ masked, . . . not erased, by the perspectival character of specific assessments of justification.”20 Besides, “relativism about standards requires what there cannot be, a position beyond all standards.”21 Similarly, in artificial intelligence, a “cognitive agent” (computer program or robot) that is constructed or just imagined so as to simulate or duplicate the human activity of interpreting a text can attribute the possibility of readers’ different interpretation procedures to differences in their belief systems and so can assume that readers use the same interpretation procedure. An “interpretive principle, such as ‘In Japanese poetry, the mention of cherry blossoms means that the season is spring,’ can be viewed [not] as part of the interpretation process—as something we do when we interpret”—but “as a belief that is accessed by the interpretation process—as something we use when we interpret.” Individual differences can then be accommodated. Even if we assume that “different people have different interpretation procedures, that they use radically different means to comprehend language . . . , we can factor out the differences, call them differences in belief,” and assume a common interpretation procedure that “need not be indexed by who is doing the interpreting or how they choose to do it on a specific occasion. This is already encoded” in the belief system.22
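The point about interpretation procedures and belief systems can be made concrete with a minimal sketch (hypothetical, in Python, and not drawn from the artificial-intelligence work cited above): two readers share one and the same interpretation procedure, and any difference in their readings is located entirely in the store of beliefs that the procedure accesses.

```python
# A hypothetical sketch of the claim above: one shared interpretation
# procedure; readers differ only in the beliefs the procedure consults.

def interpret(text, beliefs):
    """One shared interpretation procedure: apply whichever stored
    interpretive beliefs (cue -> significance) the text triggers."""
    return {cue: significance
            for cue, significance in beliefs.items()
            if cue in text}

# Two readers with different belief stores (illustrative entries only).
reader_a_beliefs = {"cherry blossoms": "the season is spring"}
reader_b_beliefs = {"cherry blossoms": "the season is spring",
                    "cuckoo": "the season is summer"}

poem = "cherry blossoms drift past the sound of a cuckoo"

print(interpret(poem, reader_a_beliefs))  # one inference drawn
print(interpret(poem, reader_b_beliefs))  # a further inference, same procedure
```

The procedure itself is never indexed by who is reading; the individual differences are "encoded" wholly in the belief dictionaries.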

Overall, then, in respect to the non-numerical “kinds of statements of logical [i.e., epistemic] probability, and especially to comparative equalities and inequalities, . . . the number and variety of the assessments of logical probability which men are able to come to agreement about are simply overwhelming.”23

CHAPTER SEVEN
MODULARITY IN SPEECH COMPREHENSION AND READING

The same neural apparatus and similar neural programming among reading adults can be inferred from cognitive science as well as philosophy of science. So too can the above evolutionary view of reading and interpreting. Psycholinguist Steven Pinker might specifically have included the reading and interpretation of literature when he suggested “that most . . . human 'cultural' practices (competitive sports, narrative literature, landscape design, ballet) . . . are clever technologies we have invented to exercise and stimulate mental modules that were originally designed for specific adaptive functions.”1 According to psychologist John C. Marshall, the core mechanism used in reading any modern orthography evolved very early in the development of Homo sapiens. This mechanism interprets the conceptual significance of two-dimensional signs . . . [and] two-dimensional representation suffices for recognition and interpretation. The importance of recognizing two-dimensional signs arises from their role in tracking. A hunting community that could not interpret the marks left by animals would not survive for very long. It is thus not unreasonable to suppose that such abilities can be selected by neo-Darwinian means.2

The concept that reading specifically developed from and is now made possible by a “mental module” or “core mechanism” in the brain is based on two theories that are, at present, widely accepted. The first theory, introduced by Noam Chomsky in 1959 and developed in his subsequent works and those of colleagues, is that children's learning to understand speech merely by hearing it constantly before the age of six is due to a computational brain module (also called a language-acquisition device, mental organ, neural system/structure/mechanism, psychological faculty, or language instinct) that is uniform and innate in themA and so guarantees their consensus in understanding that speech (unless, of course, they suffer from mental or cognitive disorders that impair that understanding). The
second theory, elaborated by philosopher-psycholinguist Jerry A. Fodor in 1983, is that the ability to understand speech is one of many modules in the brain rather than part of the central system of general intelligence.3 To cognitive scientists John Tooby and Leda Cosmides, those modules were formed by natural selection, and, because they “are present in all human minds, much of what they construct is the same for all people, from whatever culture.” The cognitive “representations produced by these universal mechanisms thereby constitute the foundation of [not only] our shared reality and our ability to communicate”4 but also (I claim) the possibility of consensus in our SOROEP judgments, for consensus is more likely if the ability to make SOROEP judgments is innate or at least “innately guided” (though consensus is still possible otherwise), and innateness or innate guidedness is more likely if the ability to make SOROEP judgments is modular to some degree (though innateness or innate guidedness is still possible otherwise).B Of course, not only modular theory but also Chomsky's instinctual theory of speech comprehension is disputed, and currently opponents of Chomsky's theory are increasing in number.5 But, whereas Chomsky's theory is an example of “special nativism” (i.e., the language-acquisition device in the brain contains grammatical mechanisms), the theories of his opponents are examples not of “non-nativism” but of “general nativism”— i.e., “the innate knowledge required for language acquisition is more general in nature and does not include actual grammatical categories, principles, or strategies.”6 In other words, although Chomsky's opponents deny that the ability to learn to understand speech is innate, they grant that, instead, some predisposition for developing that ability is innate.C Yet, this innate predisposition would equally account for children’s consensus in understanding speech.D (In “dynamic systems” theory, even the predisposition would not be innate but would develop with such “a strong probability”7 that children's consensus would be equally accounted for.) And such consensus, which is a fact, is the important point here. What is less important is the question at what stage innateness becomes less dominant than cultural acquisition in the chain of causation from brain structure to a particular behavioral trait. In a variant version of modular theory called “connectionism,” “parallel distributed processing,” or “neural network” modeling, innateness loses predominance at an earlier stage in that chain than in standard modular theory: Instead of the module being fully genetically determined (“hard-wired”), it is only genetically predetermined (“prewired” or “soft-wired”), allowing its neural structure to adapt to the cultural environment and later re-adapt (“rewire”) to a changing cultural environment8—a good example of the “transaction” between nature and nurture instead of their opposition (as in the traditional
“versus” debate).9 In any case, some cognitive scientists feel that debate should be no longer about and actually “is no longer about whether a particular trait is learned or innate—which is basically a nonsense question—but about how variable a trait is and where its variability, or invariability, derives from.”10 That is one reason why the question of modularity (which insures invariability in a trait) has become important and why it will be discussed here in relation not only to speech comprehension but also to reading and the use of SOROEP in reading. Moreover, some cognitive scientists feel that the debate now is also about “the extent to which the mechanisms that allow us to learn language—which themselves presumably are innate—are specialized for language”; for, if they are not exclusively specialized for it, they probably would—like the predispositions listed in note C above—also support abilities “used more widely throughout mental life” (such as the abilities “to learn abstract rules, . . . to collect statistics,” and presumably to make SOROEP judgments), and the language instinct would then be understood as only one “particular built-in way of acquiring new information.”E That is one reason why a discussion of speech comprehension is also relevant to reading and SOROEP judgments in reading. Admittedly, most cognitive scientists believe that, unlike the ability to understand speech, the ability to read is not a biological instinct or module, since written language did not exist during human evolution when instincts were developing and since children do not spontaneously and automatically learn to read (as they learn to understand speech) but must be taught. Nonetheless, there have appeared a number of theories on how reading (which begins at about age five) not only builds on the by-then developed understanding of speech11 but also taps into and uses the speech-comprehension module12 or, further, uses a number of interacting modules.13 According to one of these theories, a child's learning to read requires “painful concentration, speaking the words out loud, [and] creeping through a text like a tortoise” because the child is trying “to open up and ‘look inside’ an encapsulated cognitive process, to make explicit the operations of the little computer [or module] in the head that identifies speech sounds, so that the child can connect the sounds with the shapes of the letters on the page.”14 There are also theories that reading is gradually made modular (“modularized”) by the process of childhood learning15 or is modularized in practice if not in principle (i.e., quasi-modularized) by “learned automaticity” resulting from the combination of explicit teaching and repeated experience.F According to one of these theories—Marshall's (quoted above)—the module or combination of modules (“core mechanism”) used in reading evolved not for reading but for recognition and interpretation of natural two-dimensional signs and, in order to
become operational for reading, must be “triggered” in the child after age four by the experience of specific environmental stimulation—i.e., teaching.16 Indeed, reading comprehension has been found to be highly correlated not only with speech comprehension but also with comprehension of viewed pictures and films—i.e., “two-dimensional signs”; and, as fMRI experiments show, all three activities activate “common brain regions.”17 The correlations here are fully triangular, for “understanding and believing what is said to you is just one more level of natural-sign reading on the same level as ordinary perception.”18 If the mechanism used in reading taps into and uses the speech-comprehension module, it begins to do so at a time when the module is fully operational for learning to understand heard speech (until about age six) and for producing consensus in learning it, even though from age six to about fifteen the module is decreasingly operational in this respect. And because the module both produces consensus in children's learning to understand heard speech and shares with the reading mechanism the characteristic of being qualitatively the same in all human beings, a question is bound to arise about the degree to which that consensus outlasts children’s learning to understand heard speech. To phrase the question more specifically, let us first assume a case in which a speaker reads a written text aloud repeatedly to an adult listener who has not read it, until the listener feels that he or she basically understands it. In this special case (which will be called “stage 4” below), only the listener's speech-comprehension module is used in understanding the text. Now, assume also that children's understanding of heard speech after they have learned it is a stage away (stage 2) from their learning it (stage 1), that adults' understanding of heard speech is a stage further away (stage 3), that the above-described understanding by adults of a written text by their hearing it read aloud repeatedly is a stage still further away (stage 4), and that adult readers' understanding of the written text by reading it themselves is a stage furthest away of all (stage 5). The question, then, becomes this: Is the consensus that the speech-comprehension module produces in stage 1 still operative (though decreasingly so) in each successive stage, and, if not in stage 5, is it at least associatively influential there? Although the question may be unanswerable in the present state of cognitive-science research, the relationship between reading and the innate instinct for understanding heard speech provides a basis for entertaining the hypothesis of such extended consensus. Of course, as the phrasing of the above question implied, the hypothesis of such extended consensus must be qualified by a few admissions: consensus found in stage 2 would be less than that found in stage 1 because of the greater complexity and subtlety of normal speech
compared with “Motherese” (which, spoken only to infants and young children, “is slower, more exaggerated in pitch, more directed to the here and now, and more grammatical”19); consensus found in stage 3 would be less than that found in stage 2 because of the greater complexity and subtlety of speech between adults compared with speech between adult and older child; and consensus found in stages 4 and 5 would be still less because of the difference between speech and written language. Indeed, these last two lessenings cause among adult listeners and readers the ambiguities that sometimes make it difficult to achieve any consensus in basically understanding a sentence. But, compared with human beings, computers are even worse off in this respect. “They find ambiguities that are quite legitimate, as far as English grammar is concerned, but that would never occur to a sane person.” One computer found that the adage Time flies like an arrow had five different meanings or syntactic patterns:

Time proceeds as quickly as an arrow proceeds. (the intended meaning)
Measure the speed of flies in the same way that you measure the speed of an arrow.
Measure the speed of flies in the same way that an arrow measures the speed of flies.
Measure the speed of flies that resemble an arrow.
Flies of a particular kind, time-flies, are fond of an arrow.20
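This computational profusion is easy to reproduce. The sketch below is a hypothetical illustration, not the program referred to above: it uses the NLTK toolkit and a deliberately tiny invented grammar, and it recovers several of the competing syntactic patterns for the adage.

```python
# A minimal sketch: an invented miniature grammar that licenses several
# syntactic structures for "time flies like an arrow". Requires nltk.
import nltk

grammar = nltk.CFG.fromstring("""
S  -> NP VP | VP
NP -> N | N N | Det N
VP -> V PP | V NP | V NP PP
PP -> P NP
Det -> 'an'
N  -> 'time' | 'flies' | 'arrow'
V  -> 'time' | 'flies' | 'like'
P  -> 'like'
""")

parser = nltk.ChartParser(grammar)
tokens = "time flies like an arrow".split()

for i, tree in enumerate(parser.parse(tokens), start=1):
    print(f"Reading {i}:")
    tree.pretty_print()    # each tree is one grammatically legitimate pattern
```

Which of these legitimate structures a human reader even notices is, on the account developed here, part of the unconscious filtering discussed in the surrounding text.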

Humans are better off than computers because they unconsciously eliminate many of the ambiguities (and so come nearer to consensus). This process is made possible by their System 1 cognition21 that includes the evolutionarily adaptive implicit (or tacit) procedural knowledge of how to use epistemic probability.22

CHAPTER EIGHT
SOROEP IN SPEECH COMPREHENSION AND READING

8.1: In Adults Parsing Language

The way humans unconsciously eliminate many of the ambiguities of language (and so come nearer to consensus) is by using SOROEP unawares. This is especially shown by the words likely, gamble, and bet (or syntactic variations of them) in Steven Pinker’s cognitive analysis of the listening or reading procedure of humans: they “home in on the sensible analysis of a sentence,” on the one hand, by doing a “breadth-first” search for the interpretation of each word, “entertaining, however briefly, several entries for an ambiguous word, even unlikely ones,” which “are somehow filtered out before they reach consciousness.”A On the other hand, humans do a “depth-first” search for the syntactic pattern or “tree” of a phrase, clause, or sentence. The human brain “somehow gambles at each step about the alternative most likely to be true and then plows ahead with that single interpretation as far as possible.” If the brain comes across words that cannot be fitted into the tree, it backtrack[s] and start[s] over with a different tree. . . . The depth-first strategy gambles that a tree that has fit the words so far will continue to fit new ones, and thereby saves memory space by keeping only that tree in mind, at the cost of having to start over if it bet on the wrong horse. . . . A depth-first parser must use some criterion to pick one tree (or a small number) and run with it—ideally the tree most likely to be correct.B

But the only criterion for picking the tree (or the meaning of an ambiguous word) “most likely to be correct” is SOROEP.C And the evidence determining the SOROEP-based choice might include the possible meanings of each word and phrase in the sentence heard or read to that point, the context to that point, the nature of the occasion, the speaker or author, and generally, unless the sentence refers to a fantastic or mythical world, the real one. “The human parser does seem [to need and] to use at
least a bit of knowledge about what tends to happen in the world.”D The evidence determining the SOROEP-based choice might also include standards of communication that normally can be assumed in speakers (or authors), for they set themselves certain standards, of truthfulness, informativeness, comprehensibility, and so on, and only try to communicate information that meets the standards set. As long as speakers systematically observe the standards, and hearers systematically expect them to, a whole range of linguistically possible interpretations for any given utterance can be inferentially dismissed, and the task of communication and comprehension becomes accordingly easier.1
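The depth-first gamble described above can be caricatured in a few lines of code. The sketch below is hypothetical and is not Pinker's model: at each word the parser commits to the single most probable analysis and backtracks only when that commitment cannot absorb the next word; the probability table and the well-formedness test are invented for the example.

```python
# A hypothetical caricature of the "depth-first" strategy described above:
# commit to the most probable analysis at each step, backtrack on failure.

def depth_first_parse(words, options, fits, partial=()):
    """Return the first complete analysis found by always trying the most
    probable option for the next word before any less probable alternative."""
    if len(partial) == len(words):
        return partial                              # every word has been fitted
    word = words[len(partial)]
    # the gamble: rank this word's candidate analyses by probability
    for choice, prob in sorted(options[word], key=lambda c: -c[1]):
        if fits(partial, choice):                   # does it fit the tree so far?
            result = depth_first_parse(words, options, fits,
                                       partial + (choice,))
            if result is not None:
                return result                       # the gamble paid off
        # otherwise backtrack and try the next-most-probable choice
    return None

# Invented example: category probabilities and a toy well-formedness test.
words = ["time", "flies"]
options = {"time":  [("NOUN", 0.7), ("VERB", 0.3)],
           "flies": [("VERB", 0.6), ("NOUN", 0.4)]}
fits = lambda partial, choice: choice == ("NOUN" if not partial else "VERB")
print(depth_first_parse(words, options, fits))      # -> ('NOUN', 'VERB')
```

The only work the probability table does here is the work the text assigns to SOROEP: ordering the alternatives so that the most probable one is tried first.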

Of course, deciding a question of lexical or syntactic ambiguity may seem much different from deciding a question about the most preferable explication of an ambiguous literary work. However, both questions are decided2 by inferring which reading is the most probable or, in other terms, by using inference processes to “disambiguate” the ambiguity and so selecting which inference is the most “plausible”: [I]nference processes come into play at the lexical level in disambiguating word senses, at the syntactic level [in disambiguating sentence senses] . . . , at the pragmatic level in connecting story events, and at all three levels in understanding any text. . . . [A]t the pragmatic level . . . , [t]hese processes rely on the application of semantic and episodic memory structures to input text in order to “fill in the gaps.” For example, while reading the text, “John had not worked in months. He grabbed his gun and went to the bank,” it is the pragmatic inference process that allows the understander to conclude that John intended to rob the bank. . . . [However,] John might well have been a bank guard who had just recovered from a lengthy illness and was preparing for his first day back on the job. This interpretation explains the explicit text events, yet somehow does not seem as plausible as the original explanation. Thus, as is the case with processing at the lexical [and syntactic] level, the primary problem in pragmatic inference processing is disambiguation: the evaluation of competing inferences and subsequent selection of the one which best explains the events portrayed in the text.E
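The “evaluation of competing inferences” in the passage just quoted can be given schematic form. In the hypothetical sketch below, each candidate explanation of the John sentence receives an invented prior plausibility and an invented degree to which it explains the stated events; the explanation with the higher product is selected, in the spirit of a SOROEP comparison. The numbers have no empirical standing.

```python
# A hypothetical sketch of choosing between competing pragmatic inferences:
# score = prior plausibility of the scenario x how well it explains the text.
# All numbers are invented for illustration.

explanations = {
    "John intends to rob the bank":
        {"prior": 0.05, "explains_evidence": 0.90},
    "John is a recovered bank guard returning to work":
        {"prior": 0.02, "explains_evidence": 0.60},
}

def score(e):
    return e["prior"] * e["explains_evidence"]

for name, e in explanations.items():
    print(f"{name}: {score(e):.3f}")

best = max(explanations, key=lambda name: score(explanations[name]))
print("Selected inference:", best)   # the reading that best explains the events
```

Both candidates explain the explicit text events; the comparison merely makes explicit why one "somehow does not seem as plausible" as the other.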

It is “arguable that the development of inferential abilities is similar in relevant respects to that of linguistic abilities”—one respect being that “[g]rammars and inferential abilities stabilise after a learning period and remain unchanged from one utterance or inference to the next.”3 It is also
arguable that the development of reasoning abilities in general (which include inferential abilities) is likewise similar to that of linguistic abilities, for, in both linguistic and reasoning abilities, there is spontaneous and largely unconscious processing of an open-ended class of inputs; people are able to understand endlessly many sentences and to draw inferences from endlessly many premises. Also, in both cases, people are able to make spontaneous intuitive judgments about an effectively infinite class of cases—judgments about grammaticality, ambiguity, etc. in the case of linguistics, and judgments about validity, probability, etc. in the case of reasoning.4

Therefore, it is also arguable that linguistic, inferential, and reasoning abilities when stabilized after a learning period are similarly uniform and consensus-producing, and that the ability to make SOROEP-based selections between competing inferences shares in these similarities.

8.2: In Children Acquiring Language

SOROEP seems to be functional in solving language-interpretation problems not only of the adult but also of the child, for the “same ambiguity that bedevils language parsing in the adult bedevils language acquisition in the child.”5 According to philosopher Hilary Putnam, in Chomsky’s instinctual theory of speech comprehension the child has a “built in” function that assigns weights to all the transformational grammars possessing the linguistic universals characteristic of all human natural languages. Each of these grammars is assigned a weight according to the degree of its compatibility with the language that the child hears, and on the basis of this “plausibility ordering” the highest-weighted or most plausible grammar is “selected” as the grammar that the child will instinctively learn and use.6 The weights of the grammars are the “evidence,” and the relative plausibility is unconscious (or pre-conscious) SOROEP. Then (according to Steven Pinker) in order to begin to learn the selected grammar, the child must unconsciously learn the syntactic relationships of the words and phrases in sentences. And since these relationships are expressed by grammatical categories like noun, verb, and adjective, the child must unconsciously assign each word in a sentence to its proper category. To do so, the child must make a number of “guesses” about what is most “likely”—guesses that may be unconscious judgments using SOROEP based on evidence: [S]ince the meanings of parents’ sentences are usually guessable in context, the child could use the meanings to help set up the right phrase
structure. Imagine that a parent says The big dog ate ice cream. If the child has previously learned the individual words big, dog, ate, and ice cream, he or she can guess their categories. . . . In turn, nouns and verbs must belong to noun phrases and verb phrases, so the child can posit one for each of these words. And if there is a big dog around, the child can guess that the and big modify dog, and connect them properly inside the noun phrase. . . . If the child knows that the dog just ate ice cream, he or she can also guess that ice cream and dog are role-players for the verb eat. Dog is a special kind of role-player, because it is the causal agent of the action and the topic of the sentence; hence it is likely to be the subject of the sentence.7

From one sentence in context, then, the child can learn two phrases and three syntactic relationships. And here too the human parser needs and uses “at least a bit of knowledge about what tends to happen in the world,” although the child's “bit of knowledge” and “world” are minuscule compared with an adult's.F The above view of syntax learning as a “guessing” process happens to be quoted from the work of someone (Pinker) who believes language to be a biological instinct. But it should be noted that such a view does not depend on that belief, for the view is also held by others who believe the opposite. For example, according to neurobiologist Terrence W. Deacon, “[c]hildren's minds need not innately embody language structures” because “languages embody the predispositions of children’s minds”: Languages . . . evolve. . . . [They] are under powerful selection pressure to fit children's likely guesses, because children are the vehicle by which a language gets reproduced. Languages have had to adapt to children's spontaneous assumptions about communication, learning, social interaction, and even symbolic reference. . . . [Languages] have been shaped by a sort of cultural equivalent to natural selection, in which children's special learning predispositions have shaped language to fit.

Therefore, “children appear preadapted to guess the rules of syntax correctly, precisely because languages evolve so as to embody in their syntax the most frequently guessed patterns.”8 These patterns “embody the predispositions of children’s minds,” and, over the generations, those predispositions have been (according to one theory) to prefer the patterns that most benefit oral communication—hearers understanding speakers and speakers understanding hearers’ responses.9 Developmental psycholinguist Melissa Bowerman is another example of a researcher who does not believe language to be a biological instinct but finds children's “guessing” ability to be involved in the learning of syntax and, indeed, of language as a whole: Children are capable of
building language-specific categories by observing the distribution of forms in adult speech and making inferences about the categorization principles that might underlie this distribution. . . . [Children have the] ability to scan the input for clues to categorization . . . [and] to make sensible guesses about what the needed grouping principles might be.10

SOROEP is functional as well at the preceding stage of learning language—when the child is trying to figure out the meaning of each of the individual words big, dog, ate, and ice cream encountered not all together in a single sentence but at separate times in separate contexts not necessarily sentences. Then the child is unconsciously guided by “probabilistic biases” that “provide good first guesses as to a word's meaning”G and are due to “innate structural specification” in the brain.11 (And even where it is maintained that probabilistic biases are not innate, a predisposition for them would be considered innate—e.g., the “competition of word forms for semantic space”12 or children's abilities both to learn “what words do” and to form “conceptions of those aspects of the world that they find interesting.”13) The biases are probabilistic inasmuch as, depending on the condition (which varies), one bias rather than another is more likely to help the child match a novel name (or word) with the more likely thing (or meaning). For example, if the child does not already know a name for an object, the “whole object” bias is more likely to help and so becomes dominant and causes the child to apply a novel name associated with the object to the whole object rather than to any part or attribute of it. But if the child already knows a name for the object, the “mutual exclusivity” bias is more likely to help and so becomes dominant and causes the child to apply a novel name associated with the object to a part or attribute of it rather than to the whole object. These biases along with others, which become dominant under other conditions, “may be ordered into a hierarchy such that one bias overrides another.”14 The brain may be considered to be making a SOROEP-based “decision” when it compares the applicable condition with each condition associated with a particular bias and, on the basis of that comparison (the “evidence”), “decides” which bias to make the overriding one. (Incidentally, that which probabilistic biases accomplish—allow a child to learn a novel word in context—an automated model called Latent Semantic Analysis can also accomplish although by different means, for the model can accomplish the first step of becoming familiar with a novel word: it can indicate which other word in the context of a phrase, sentence, paragraph, essay, or more the novel word is most similar to in meaning. The model works according to physical probability, since it expresses in terms of “relative frequency” the degree of similarity between the meanings of any two words in the same context. To find that degree of
similarity, the model first scans a general encyclopedia and calculates the relative frequency with which the two words jointly occur in all the encyclopedia entries that contain either or both words.15 The resultant degree of similarity in terms of relative frequency indicates “how likely or unlikely the two words are to be used in the same context.”16 Moreover, the same kind of calculation can be used to express the degree to which the meaning of a word in a context is similar to the meaning of the whole context,17 thereby indicating how semantically “appropriate” the use of the word is in that context.) Probabilistic biases also provide “good first guesses” in at least two systems even more basic than language: sight and consciousness. In respect to sight, early in the development of the visual system in the mammalian brain, “internally generated spontaneous activity sculpts circuits on the basis of the brain's 'best guess' at the initial configuration of connections necessary for function and survival.”18 Even so, after birth We do not see objects as such; we see shapes, surfaces, contours, and boundaries, presenting themselves in different illumination or contexts, changing perspective with their movement or ours. From this complex, shifting visual chaos, we have to extract invariants that allow us to infer or hypothesize objecthood.

This inferring or hypothesizing is accomplished by the inferotemporal cortex, which evolved for object and general visual recognition.19 As the “shape analyzer” in the brain, the inferotemporal cortex makes the most probable or best “guess” about the identity and nature of a seen three-dimensional object from the two-dimensional image that the object projects on the retina.H Moreover, when “the brain is confronted with several possible solutions . . . , it must first ascertain what the possible solutions are and decide which is the most likely.” For instance, when confronted with an unfinished pattern, the “brain tries to make sense of this by ‘finishing it off’ in the most plausible way, and interprets the pattern” in this way. “There are of course other interpretations that the brain could give in this instance . . . , but they are far less plausible.” And here, besides the “more probable” option, the “more-or-less as probable” option is also available: “True ambiguity results when no single solution is more likely than other solutions, leaving the brain with the only option left, of treating them all as equally likely and giving each a place on the conscious stage, one at a time, so that we are only conscious of one of the interpretations at any given time.”20 Lastly, it should be noted that the brain makes the most probable or “best guess” about not only visual phenomena but phenomena detected by any of the five senses or even falsely produced by them. The most common example of this false
production is physical feelings in phantom limbs after loss of the real ones, although the senses can produce false feelings in any part of the body when the brain’s “best guess” about the condition of that body part is wrong.21 In respect to consciousness, by means of it we as organisms are given an adaptive “advantage of being able to manipulate inside our heads possible courses of action and their consequences” and then to choose (i.e., make a “best guess” or SOROEP judgment) “on the basis of those conscious calculations what is best to do” rather than unthinkingly perform one or more of the actions on a trial-and-error basis and so risk punishment or damage by some of the consequences.22
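That adaptive advantage can also be put in schematic form. The sketch below is a hypothetical illustration only: candidate actions are evaluated against an internal model of their imagined consequences, and the best is chosen without any action being performed, in contrast to discovering the outcomes by costly trial and error.

```python
# A hypothetical sketch of the advantage described above: simulate each
# candidate action inside an internal model and choose the best imagined
# outcome, instead of acting first and risking the consequences.
# The actions and payoffs are invented for illustration.

internal_model = {            # imagined consequence of each action -> payoff
    "cross the thin ice": -10,   # imagined: the ice breaks
    "walk the long way":    2,   # imagined: slow but safe
    "wait for a thaw":      1,
}

best_action = max(internal_model, key=internal_model.get)
print("Chosen without trial and error:", best_action)
```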

8.3: Seven Theories Implying an Innate Ability to Make SOROEP Judgments

If, indeed, children make SOROEP judgments in learning to understand heard speech, it remains for cognitive science to discover whether the ability to make SOROEP judgments is innate (or at least qualitatively the same) in all children, partly responsible for the consensus in their learning to understand heard speech, qualitatively the same in them after that learning period and through adulthood, and so partly responsible for what consensus there is among adults in their basic understanding of both heard speech and read text. Indeed, affirmative answers to most of these questions may already be implied in at least seven current theories—three in developmental psychology, one in linguistics, and three in philosophy: (1) According to the “theory theory of mind” described by developmental psychologists Alison Gopnik and Andrew N. Meltzoff, human cognition is due to the innate ability to theorize about our experiences, and cognitive development is due to the innate ability to revise and change our theories. From birth onward—whether as infant, child, or adult—we experience more and different phenomena which act as new “evidence” causing us unconsciously to revise or change our theories about phenomena past and present. On the one hand, infants universally “start with . . . the same theorizing capacities” and “have innate capacities for theory change”; on the other hand, when adult scientists theorize, they “employ cognitive processes that are first seen in very young children.” And though the system of theory formation and revision is “at work most dramatically in childhood and in our endeavors as scientists, . . . there is no reason to suppose that it is not equally responsible for much of our everyday cognition.” The “conceptual structures of infancy and childhood, of ordinary cognition, and of science are largely continuous.” In all these stages humans possess the same “largely unconscious theorizing devices, designed by evolution for rapid, powerful, and flexible learning, and
exploiting logical regularities to that end.” Therefore, the “basic cognitive abilities involved in scientific theorizing—such as the ability to make inductions and deductions, the ability to consider evidence, the ability to entertain hypotheses, and other general logical capacities” (including, presumably, the ability to make SOROEP judgments)—are not “gradually developed over the course of childhood” but “are available from the very start of development.”I Moreover, the continuity between childhood theorizing and the adult scientific kind insures that, when conditions for the latter are like those for the former, a consensus occurs among scientists’ theories as it does among children’s: Children the world over . . . may converge on the same representations because the crucial evidence is universally the same and so are children’s theorizing capacities. . . . [As for scientists,] when the assumption of common initial theories and common patterns of evidence, presented in the same sequence, does hold, scientists, like children, do converge on a common account of the world. . . . [W]orking independently [scientists] converge on similar accounts at similar times, . . . [on] evolutionary theory or the calculus or the structure of DNA (to take some famous examples) . . . because similar minds approaching similar problems are presented with similar patterns of evidence.23

Indeed, in 1922 it was found that, over the course of Western history, each of 148 major scientific discoveries had been made nearly simultaneously but independently by more than one person—by not only two but often more, even up to nine people working independently of each other.24 (2) Like the “theory theory of mind” a comparable theory also assumes that children “have an innate core or skeletal notion of scientific ideas,” but this theory differs in that the innate core includes not a theorizing capacity but a sense of “cause-and-effect relations—an adaptive specialization” that gives children a “preparedness” for acquiring knowledge and developing understanding. Such acquisition and development “can take the form of ‘enrichment’ in children’s knowledge rather than [as in the theory theory] a radical conceptual change that breaks with previous naive theoretical convictions that are incompatible or incommensurable with a mature understanding.”25 That “enrichment” process may be simulated by a computer model called the Adaptive Strategy Choice Model (ASCM). The model illustrates how problem-solving experience generates changes in children’s cognitive abilities from age 4 through adulthood, but the model also simulates competencies that “are hypothesized to be . . . present from birth”—“basic properties of the human information-processing system.” These competencies are simulated in the model by “basic procedures for choosing [problem-solving] strategies, for collecting data on the outcomes
they generate, . . . for projecting their future usefulness,” and, in the process, for using relative probability that simulates SOROEP: Each time a strategy is used to solve a problem, the experience yields information regarding the strategy, the problem, and their interaction. This information is preserved in a database on each strategy’s speed and accuracy for solving problems in general, problems with particular features, and specific problems. When a problem is encountered, the database for each strategy is used to compute a strength for that strategy. These strengths are the model’s way of projecting how well the strategy is likely to do on that problem. The likelihood of any given strategy being chosen is [“proportional to” and so] determined by the strength of that strategy relative to the strengths of alternative strategies. Each problem-solving experience changes the database of the strategy that was used, and thus changes the probability of the strategy being chosen in the future.26
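The choice rule described in the quotation, in which a strategy's probability of being selected is proportional to its strength relative to the strengths of the alternatives, can be reconstructed schematically. The sketch below is a hypothetical reconstruction, not the published ASCM code; the strategies, strengths, and update rule are invented for illustration.

```python
import random

# A hypothetical reconstruction of the choice rule described above: each
# strategy's chance of being picked is proportional to its relative strength,
# and every problem-solving experience updates the strength database.

strengths = {"count all": 2.0,
             "count on from larger addend": 5.0,
             "retrieve from memory": 3.0}

def choose_strategy(strengths):
    """Pick a strategy with probability proportional to its relative strength."""
    total = sum(strengths.values())
    r = random.uniform(0, total)
    running = 0.0
    for name, strength in strengths.items():
        running += strength
        if r <= running:
            return name
    return name  # fallback for floating-point edge cases

def record_outcome(strengths, strategy, speed, accuracy):
    """Crude stand-in for the database update after each problem is solved."""
    strengths[strategy] += accuracy / speed   # faster, more accurate -> stronger

chosen = choose_strategy(strengths)
record_outcome(strengths, chosen, speed=4.0, accuracy=1.0)
print(chosen, strengths)
```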

(3) Even from birth, all children (except autistic ones) have a “theory of mind,” the ability to learn to “mentalise” or “mind-read” others—i.e., to “imagine or represent states of mind that . . . others might hold.”27 One theory about what makes that ability possible is the mind’s theorizing capacity in the above “theory theory of mind.”28 Another theory is that mind-reading (as well as its possibly prerequisite theorizing capacity) is partly made possible by executive functioning—“the ability to inhibit irrelevant thoughts and/or to work with more than two thoughts at a time.”29 And according to a third theory, mind-reading is made possible mainly by directly “empathizing with” or “simulating” the mind of another.30 Taking this theory further, a fourth theory posits that, in order to “mind-read, or to try to imagine the world from someone else’s different perspective, one has to switch from one’s own primary representations (what one takes to be true of the world) to someone else’s representation (what they take to be true of the world, even if this could be untrue).”31 This “switch” is possible because one’s mind can make out of one’s primary representation a copy or second-order representation and can change it in any way that one can imagine so that it is no longer true of the world as one knows it. And a specific way in which one’s mind can change that copy is to make it the same as someone else’s representation32 and so enable one to mind-read that person. “Arguably,” mind-reading is “impossible without such an ability to switch between our primary and our second-order representations.”33 And a fifth theory posits that mind-reading may be made possible by a system of “mirror” neurons—a system in which the neuron areas activated in a person’s brain by his seeing or hearing another person’s action are the same as (i.e., are “mirroring”) the corresponding neuron areas activated in the brain of the person performing the action. This fifth theory is an effect of the discovery that, unlike
normal children, children who are autistic—i.e., incapable of normal mindreading—have defects in their mirror-neuron systems and that these defects may cause some of the children’s autistic symptoms.34 Moreover, the mirror-neuron system may be the brain mechanism that makes possible mind-reading by means of “empathy/simulation” or “switching” from one’s own primary representation to a second-order representation of someone else’s primary representation. But, whichever one (or more) of those five theories is the right one, what has at least been established is that language is necessary for a full development of mind-reading ability.35 And, in turn, mind-reading ability partly makes possible children’s first beginnings to acquire language (assigning a heard word to an object)36 and partly makes possible what consensus there is among adults in basically understanding not only problematic (ambiguous and non-literal) heard speech37 but also an author’s “mind” or mental state manifested in a read text.J Moreover, a “metacommunicative” module dedicated to verbal comprehension through recognition of the communicator’s intention “might have evolved as a specialisation of a more general mind-reading module.”38 And such a metacommunicative module might be related to a mirror-neuron system in which the neuron areas activated in a person’s brain by his hearing or reading another person’s communication may be “mirroring” the corresponding neuron areas activated in the communicating person’s brain by his performing the action of communicating, especially since the communicator’s intention is within his communication and the hearer’s or reader’s mirror-neuron system is “involved with understanding why the action is being done, its intention.”39 Therefore, the meaning of the communication might be the same for both persons, and the hearer/reader might understand the speaker/author without their agreeing previously, for example, on arbitrary symbols.40 This communication system might have evolved from a primitive prelanguage protosign system in which the early human pointed with the hand or arm and pantomimed with the face or body: “The protosign communication system has a great asset: its semantics is neither arbitrarily imposed nor derives from an improbable agreement among individuals. It is inherent to the gestures that are used to communicate. This is not so, or at least is not apparent, for speech” or even primitive nonverbal speech gestures like roars, grunts, and wordless songs. However, at some point in evolution, “hand/body gestures and . . . primitive speech gestures were intrinsically linked. . . . [I]ntrinsically known messages (hand gestures) were transferred to an opaque gestural system, . . . the orolaryngeal system,” without losing their “intrinsic (nonarbitrary) meaning,” with the result that, now, “hand/arm and speech gestures must . . . have, at least
partially, a common neural substrate.” Current experiments verify this linkage by showing that “right-hand motor excitability increases during reading and spontaneous speech.” Since the “effect is limited to the left hemisphere” and “word articulation recruits motor cortex bilaterally,” the excitability increase would be due not to word articulation but to “coactivation of the right hand motor area with the language network.”41 By this evolutionary means, the protosign communication system with its semantics of intrinsic nonarbitrary meaning and probable agreement among individuals could have evolved into a mirror-neuron communication system with the same semantics. And so, either the metacommunicative module or an innate mirror-neuron system of communication or both might then be responsible for what consensus there is among adults in their understanding of heard speech and read text. However, there is much dispute over whether, in humans, general mind-reading ability as well as its possibly prerequisite theorizing capacity is modular and innate,42 although innateness seems likely not only because some animals (e.g., jays and perhaps dogs, parrots, apes, elephants, and dolphins43) show some evidence of a limited mind-reading ability but also because mind-reading considered as a state of mind containing (i.e., imagining or representing) another’s state of mind, is a recursion—“a constituent that contains a constituent of the same kind”—and recursion may have been innately essential to the origin and development of human thought.44 (A metacommunicative module, system of second-order representation, and system of mirror neurons are also recursions.) But if it were finally to be established that mind-reading is accomplished only by empathy/simulation, a system of mirror neurons, second-order representation, and/or a metacommunicative module, the resultant direct reading of an author’s mind as manifested in a read text would make supernumerary the reader’s ability to make SOROEP judgments to choose between possible readings of the text (assuming that the read authorial mind as manifested in the read text is described so as to avoid the traditional objections of anti-intentionalistsK). Still, mindreading may instead or as well be made possible by theorizing capacity, executive functioning, or some as-yet unnoted cognitive condition,L and so mind-reading may instead or as well be made possible by cognitive conditions that include the ability to make SOROEP judgments. Indeed, such a circumstance may be implied unintentionally and indirectly in observations by cognitive-science philosophers Shaun Nichols and Stephen P. Stich. In considering the results of the Linda probability experiment (described above in section 6.2), they “suspect” that people using their mind-reading ability “can often predict the inference” that the subjects taking the Linda test will make, since it is a “perfectly natural inference”
for the subjects to make and “since just about everyone feels the tug” of that inference even though, as a SOROEP judgment, it is incorrect.45 Therefore, mind-reading people can often predict others’ SOROEP judgments perhaps because mind-reading ability is made possible by cognitive conditions that include the ability to make SOROEP judgments. (4) The innate theorizing capacity described above in the “theory theory of mind” is also manifested in what linguist William O'Grady believes to be children's innate ability not only to generalize and infer but also to formulate grammatical rules—or rather grammatical “hypotheses” that they think are rules, the rules themselves not being innate. That ability comes from “the learning module,” one of the several interacting modules constituting the “general nativist” device that insures the acquisition of language. By means of the learning module, children formulate grammatical hypotheses or reformulate them to conform to their experience with language, until ultimately they follow its grammar. They make no change in their grammatical concepts “without a triggering stimulus in experience” (i.e., novel “evidence”); but, given that stimulus, they draw on their grammatical concepts to formulate the “most conservative” and “restrictive hypotheses consistent with experience.”46 Such hypotheses would approximate and be expressible as SOROEP-based grammatical hypotheses consistent with the evidence. And children continue to use this innate ability until well after the age of ten, even though the changes after age six may be too subtle to be noticed.47 In the activity of reading too they are thought to be “generating hypotheses,” and not only about grammar but also “about the meaning of the pattern of symbols” on the page.48 (5) According to philosopher L. Jonathan Cohen, “reasons for supposing the innateness of inductive (Baconian) [i.e., epistemic] mechanisms” are the “widespread human tendency to make judgements of inductive probability in appropriate contexts,” experimental demonstration that this tendency is the “normal intuitive mode of judging the probabilities of individual events,” and the infant's ability to “learn the syntax and semantics of its native language in such a relatively short period.”49 (6) According to philosopher Peter Carruthers, the principles of abductive inference (i.e., Inference to the Best Explanation) are innate.50 And since a version of that inference—Inference to the Likeliest (i.e., most probable) Potential Explanation—is equivalent to SOROEP judgment,51 the ability to make SOROEP judgments may also be innate. (7) According to philosopher Robert Hanna, all rational humans “possess a cognitive faculty that is innately set up for representing logic.” This faculty “contains a single universal ‘protologic’” that is “distinct in
structure from all classical and neoclassical logical systems” but “is used for the construction of all logical systems.”52 Since probability is a common form of logic, the innate protological faculty may make possible the ability to make probability judgments and specifically SOROEP judgments.

CHAPTER NINE
SOROEP IN THE BRAIN

9.1: A Modular vs. a Central System

Despite the evidence reviewed in the preceding chapter, modularists in particular could still assume that the ability to make SOROEP judgments is not innate in children. Even if the ability to understand speech is assumed to be modular and innate rather than part of the central system of general intelligence, and even if the ability to read were conceded to be partly or multi-modular or (quasi-)modularized and at least “innately guided,” the ability to make SOROEP judgments could still be thought part of the central system and not modular or innate to any degree. Indeed, a description of that central system by one cognitive psychologist sympathetic to modular theory is very like a description of SOROEP judgments: The central processor has access to the information from the different modules, can compare the various inputs with one another, and can draw on this wealth of data flexibly in order to make decisions. . . . The comparisons effected by the central processor allow individuals to make the best hypotheses of what the world is like.1

However, there are many alternative reasons why the ability to make SOROEP judgments need not be part of the general-purpose processor that is the central system. Here are fifteen such reasons: (1) Some cognitive processes assumed to be in the central system may actually be fully or multi-modular or (quasi-)modularized.2 They may be multi-modular by resulting either from a combination of computational modules or from a computational module “deploying” (accessing and processing) non-computational “Chomskian modules”—“systems of mental representation—bodies of mentally represented knowledge or information—such as a grammar or a theory.”3
(2) Some innate modules (called “horizontal”) may themselves be general-purpose processors operating across two or more cognitive domains,4 resulting in so-called “soft” or “weak modularity.”5 (3) As a child develops, the brain may become more modular, but then, as the child develops further, the information in separate modules may become more accessible, resulting in a “cognitive fluidity” that produces general intelligence.6 (4) All cognitive processes may be modular and a central system unnecessary and nonexistent.7 (5) The central system is too inefficient and evolutionarily improbable to exist at all. It is “highly error-prone,” and domain-general learning implies extremely slow and inefficient processes . . . . More importantly (from an evolutionary perspective), even if we allow that domain-general, higher-level cognitive processes confer definite reproductive advantages, we are presented with the . . . problem . . . how to model the specific processes by which such an advantageous computational system evolved. . . . [E]volution proceeds in a shortsighted, piecemeal fashion—it does not produce massively complex designs out of the blue.8

(6) All cognitive processes assumed to be in the central system may actually be quasi-modular since central cognition itself may be quasi-modular in structure. . . . Quasimodules would differ from full modules in having conceptual (rather than perceptual or motor) inputs and outputs. And they may differ markedly in the degree to which their processes, and principles of operation, are accessible to the rest of the system. But they would still be . . . at least partly impervious to changes in background belief.

The periphery of this type of mind would be “made up of a quite highly modular set of input and output modules (including language),” and the center would be “structured out of a number of quasi-modular component subsystems, in such a way that its operation” would be “subserved by a variety of special-purpose conceptual processors. Thus . . . there may well be systems designed to deal with . . . inferences to the best explanation”9 and so with SOROEP judgment, since it is equivalent to a version of those inferences: inference to the likeliest (i.e., most probable) potential explanation.10 (7) A central system may be unnecessary because of the language faculty. As “almost everyone now agrees . . . the language faculty is a distinct input and output module of the mind, and . . . would need to have access to the outputs of any other . . . conceptual belief or desire forming
modules, in order that those contents should be expressible in speech.” Consequently, the language faculty may serve “as the organ of intermodular communication, making it possible for us to combine contents across modular domains” and thereby use “theoretical reason,” abductive inference (inference to the best explanation),11 and so SOROEP judgments. (8) “[D]ifferent normative accounts of probability” may be “enshrined in different modules of the mind tied to specific tasks.”12 (9) In dual-process theory, System 1 cognition includes both epistemic probability and modularity;13 therefore, the ability to make SOROEP judgments may be located in the modular system of the brain. (10) According to philosopher Murray Clarke, natural selection in living beings has selected for “means-end reasoning modules” that provide the beings with spatial guidance by means of implicit abductive inference. Without the use of thought, these modules enable humans to find their way after and despite spatial reorientation and enable ants, rats, and migratory birds to use dead reckoning in traveling from and back to their nests.14 Since these modules provide the ability to make such abductive inferences, they may also provide the ability to make SOROEP judgments. (11) Since a SOROEP judgment is also the outcome of chains of inference and since (according to cognitive scientist Dan Sperber) most if not all conceptual modules are inferential devices . . . [and] the output of one conceptual module can serve as input to another one, . . . chains of inference can take a conceptual premise from one module to the next and therefore integrate the contribution of each in some final conclusion. A holistic effect need not be the outcome of a holistic procedure.15

(12) Since the ability to make SOROEP judgments depends on the ability to make abductive inferences, it is problematic how the former ability could be within a modular framework, for one cannot reconcile a local notion of computation [modularity] with what seems to be the holism of . . . abductive inference [which] seems to be able to draw on the entire corpus of one’s prior epistemic commitments. . . . [A]bductive inference or “inference to the best explanation” is global. But the massive modularity hypothesis posits only local . . . computational processors. Hence, the massive modularity theorists cannot explain, inter alia, abduction.16

However, one such theorist offers a solution: Abductive inference can be “realized in cycles of modular activity, utilizing a variety of learned rules and heuristic processes,” for, after language evolved, the ability to make
abductive inferences may have developed from an ability that is “explicable within a modular framework”—the ability to determine which one of differing verbal testimonies or discourse interpretations should be believed: [F]rom the outset of language-use people would have used language to manipulate and deceive as well as to inform. And in that case consumers of testimony would have needed, from the very start, to be discriminating about what testimony to accept.17

(13) The mind may have not one but three layers of modules—the first layer that receives input from the senses, the second layer that is “a complex network of first-order conceptual modules of all kinds, and then [as the third layer] a second-order metarepresentational module” that “processes concepts of concepts and representations of representations.” If you are trying, for instance, to decide between two competitive concepts about (or explications of) a particular poem, you have information in two modes: The two competitive concepts about the poem are handled by first-order conceptual modules, and the concept of comparing the two competitive concepts is handled by the second-order metarepresentational module. This module knows nothing about poems, “but it may know something about semantic relationships among representations; it may have some ability to evaluate the validity of an inference, the evidential value of some information, the relative plausibility of two contradictory beliefs.”A In other words, this module may enable you to make a SOROEP judgment about the two competitive concepts. The cognitive scientist—Dan Sperber—who proposed the above theory of three-layer modularity is a social anthropologist. But a neuropsychologist—Elkhonon Goldberg—working independently (in Russia rather than America) and from the opposite direction (outward from the brain to the mind), proposed a similar theory of three layers and located the third layer in the frontal lobes of the neocortex. Like Sperber’s third layer, Goldberg’s cannot do the work of the first two but instead coordinates their products: The frontal lobes do not have the specific knowledge or expertise for all the necessary challenges facing the organism. What they have, however, is the ability to “find” the areas of the brain in possession of this knowledge and expertise for any specific challenge, and to string them together in complex configurations according to the need.

As a result, “the frontal lobe control is ‘global,’ coordinating and constraining the activities of a vast array of neural structures.” Indeed,
Goldberg also divided the neocortex by itself into three layers similar in function to Sperber’s, although he called the division a “simplistic” but “heuristically powerful” “didactic ploy” to explain the organization of the neocortex: The first layer consists of “primary sensory projection areas,” the second layer consists of cortical areas “involved in more complex information processing” (but each of them “still linked to a particular modality”), and the third layer consists of cortical regions that “appear at the latest stages of the evolution of the brain,” are “presumed to be central to the most complex aspects of information processing,” and “are not linked to any single modality” but “integrate the inputs coming from many modalities.”18 Like Goldberg, cognitive neuroscientist Stanislas Dehaene locates the third layer in the prefrontal cortex (or frontal lobes of the neocortex) but goes further than Goldberg in describing the layer: It provides a global “‘neuronal workspace’ whose main function is to assemble, confront, recombine, and synthesize knowledge”—indeed, to make possible “conscious thinking” and “the testing of new ideas.” It thereby “allows our behavior to be guided by any combination of information from past or present experience” and “provides a space for internal deliberation fed by a whole set of perceptions and memories”; moreover, in our evolutionary past, it was “closely related to both the emergence of reflective consciousness and to the human competence for cultural invention.”19 Unlike Sperber, both Goldberg and Dehaene do not consider the nature of the third layer to be modular, but that difference does not affect the point that for all three of them a central system of general intelligence is unnecessary. A similar three-layer arrangement is currently used in robotics. There, the third layer is (like Goldberg’s) a “deliberative layer of nonmodular mechanisms for planning and world-modeling,” whereas the second layer is “a reactive layer that contains a multitude of . . . modules that enable it” to perform “real-time activities”—for example, “respond[ing] rapidly to environmental contingencies” like avoiding obstacles. In “large measure because of this, the reasoning mechanisms within the deliberative layer of the system can be ‘decoupled’ from real-time activities . . . and instead deployed to generate solutions to complex, informationally intensive, decision-making tasks.”20 (14) It is “a mistake to suppose . . . that the only two possibilities are either a completely unconstrained, general-purpose learner or a heavily modular learner pre-equipped with large bodies of domain-specific knowledge. . . . [O]ther types of constraints in addition to (or instead of)” modules “might, for instance, be . . . developmental or architectural” constraints21 that might include the ability to make SOROEP judgments.

(15) According to the theory of “connectionism” (see Chapter 7 above), “the possibility of regrowth and restructuring of the neuronal groups during the life of an individual brain . . . allows the groups themselves, locally and adaptively, to accomplish the coordination that previously had to be attributed to that mysterious and unlocatable central processor.”22 Connectionists, therefore, can do without a central system. Meanwhile, attempts to reconcile connectionism with standard versions of modularity have been made and probably will continue.23 For example, implicit “System 1” cognition, with which epistemic probability is involved (see section 5.2 above), is an “ancient system that relies on associative learning through distributed neural networks and may also reflect the operation of innate modules.”24 Consequently, it may yet be viewed widely that making SOROEP judgments does not need a central processor but is modular, modularized, quasi-modularized, or otherwise constrained (and innate or innately guided) to some degree.

9.2: Connectionism and the Training of a Connectionist Network

The view that making SOROEP judgments does not need a central processor is particularly encouraged, first, by a connectionist “picture of cognition in which particular pieces of evidence or input are compared and weighted to form a single network.”25 In that network “[p]ossible interpretations compete, and less plausible interpretations are inhibited by more plausible ones, by a process of adjustment,”26 so that the “most plausible interpretation of the evidence wins.”27 Also encouraging the above view is the thesis that relative-probability judgments based on evidential reasoning can be encoded into a computer if they are encoded as a connectionist network.28 Even more encouraging is the actuality of the reverse process—that, when an artificial or simulated neural network is built and “trained” in accordance with the connectionist model, the network can produce data that “can be interpreted as implicit estimates of probabilities.”29 The network shows an ability to use what would be tantamount either to physical relative probability (since it consists of two or more values that add up to 1) or to SOROEP in a human brain—what might be called quasi-SOROEP, since, occurring in a computer rather than a brain, it cannot be “epistemic” literally. First, the network is trained to make decisive, all-or-nothing choices between two or more alternatives on the basis of typical and unproblematic data which is not numerical or, if numerical, is not
manipulated mathematically. But when afterwards the network is fed atypical and problematic (though still non-numerical or non-mathematical) data, it can make “graded” choices. Also it can “behave sensibly” and make “reasonable” choices when fed data that includes errors, inconsistencies, and contradictions. It can make quasi-intelligent "guesses" and "inductive inferences" and can “generalize” from a collection of objects each of which has a slightly different set of features. As a result the network can describe a “typical” object (having a set of the most common or frequent features) although that typical object is not identical to any object in the collection, and the network can make a “default assignment”—i.e., when one of many features of an object is unknown, the network can find which feature among all the other objects is most probably also the unknown feature of the first object,30 since “the more similar two things are in respects that we know about, the more likely they are to be similar in respects that we do not.”31 The above “graded,” “sensible,” and “reasonable” choices, the "guesses," "inferences," “generalized” object, and “default assignment” all result from the operation of either physical relative probability or quasi-SOROEP. This operation may be understood more clearly in an example—of a network making “graded” choices or, more specifically, “deciding,” say, whether a sonar echo has bounced off an undersea explosive mine or a rock32—in other words, a network deciding between two different “interpretations.” But first the network must be trained by being fed fifty different known mine echoes one at a time and “told” that they are mine echoes, then fifty different known rock echoes and told that they are rock echoes. (The large number of echoes is necessary since they vary widely in character even within each type.) With each input echo, the network analyzes and stores the electrical-frequency profile of the echo and registers an output reading for it, but the computer operator then adjusts that reading so that each successive reading of a successive echo will approach nearer to a reading of certainty (1 and 0)—certainty that the echo is of one type (1) and not the other (0). When the network has been fed all the echoes several thousand times and its readings approach certainty without being adjusted, its training is complete. It now has “life experience” or, in its case, “interpretation experience.” And so it can be tested on mine or rock echoes different from any in the training set and not be told whether they are mine or rock echoes. For each unidentified echo, the network analyzes its electrical-frequency profile, compares it with the set of profiles stored previously (the resulting comparison is the “evidence”), and registers an output reading—but one not limited to indicating merely certainty that the echo is of one type and not the other (1 and 0). The output reading can also indicate that the echo is more like one
type than the other (e.g., .6 and .4) or as much like one type as the other (e.g., .5 and .5), thereby showing a graded choice. In other words, the reading can indicate that one “interpretation” is more probable than the other or as probable as the other. Such a “trained” and “experienced” network can interpret faster and more accurately than a trained and experienced sonar operator can. The operator too uses SOROEP (here not “quasi-”) to make graded choices in interpreting echoes, though his SOROEP works on heard echoes rather than their analyzed electrical frequencies. On the other hand, the computer networks that are trained and experienced in interpreting not sonar echoes but heard speech or read text cannot handle the complicated discourse that a human being so trained and experienced—i.e., a literate adult—can handle.33 And here too, the literate adult makes graded choices in interpreting discourse, just as the sonar operator and the computer network do in interpreting sonar echoes (and just as the computer network does in interpreting discourseB). Within the human brain (according to linguist Ray Jackendoff), “instead of neural inputs and outputs having only the values 0 and 1,” they could have the equivalent of “continuously graded values.”34 Such gradation is detectable in the way that the mind individuates and groups seen objects and categorizes objects and actions.35 Hypothetically, the result is “a projected structure that represents the highest degree of overall preference” among alternative possibilities. Or, stated more specifically, either a projected “structure is judged the most highly preferred, or most coherent, or most salient, or most stable” structure or, in “the case of structural ambiguity, more than one possible structure attains sufficient salience to be projectable.”36 This is much the same as one interpretation being either more probable than others or as probable as others. In this instance, then, the working of the brain seems to be approximated by computer neural networks. In “dynamic systems” theory too, which incorporates connectionism, gradation expressed as either physical relative probability or quasi-SOROEP is central. Neural inputs and outputs in the living brain can have only continuously changing graded relative values greater than the minimum (0 and 1) but less than the maximum (1 and 0).37
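The training-and-testing cycle described above for the mine-and-rock network can be suggested in a brief sketch. The code below is only a schematic illustration: the “echo profiles,” the single-unit network, and the learning rate are invented for the example and are far simpler than the cited sonar network. It shows the same shape of procedure, however: outputs are nudged toward certainty (1 and 0) on labeled training echoes, and unlabeled echoes afterwards receive graded readings rather than all-or-nothing verdicts.

    import math
    import random

    # Hypothetical stand-ins for analyzed electrical-frequency profiles:
    # each "echo" is a short list of numbers, and each training echo is labeled.
    def make_echo(kind):
        base = [0.8, 0.2, 0.6] if kind == "mine" else [0.3, 0.7, 0.4]
        return [x + random.gauss(0, 0.1) for x in base], kind

    training_set = ([make_echo("mine") for _ in range(50)] +
                    [make_echo("rock") for _ in range(50)])

    weights = [0.0, 0.0, 0.0]
    bias = 0.0

    def reading(echo):
        # Output between 0 and 1: near 1 means "mine," near 0 means "rock."
        s = bias + sum(w * x for w, x in zip(weights, echo))
        return 1 / (1 + math.exp(-s))

    # Training: each pass nudges the reading toward certainty
    # (1 for known mine echoes, 0 for known rock echoes).
    for _ in range(2000):
        echo, kind = random.choice(training_set)
        target = 1.0 if kind == "mine" else 0.0
        error = target - reading(echo)
        for i, x in enumerate(echo):
            weights[i] += 0.1 * error * x
        bias += 0.1 * error

    # A new, unlabeled echo now gets a graded reading (e.g., .6 and .4)
    # rather than an all-or-nothing verdict.
    test_echo, _ = make_echo("mine")
    p_mine = reading(test_echo)
    print(round(p_mine, 2), round(1 - p_mine, 2))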

9.3: Connectionism vs. Innateness and Modularity in Language Acquisition

Connectionism is one of the reasons why opponents of Chomsky’s instinctual theory of speech comprehension are currently increasing in number (as mentioned above near the beginning of Chapter 7). Neural networks have been trained to produce grammatically intricate but correct
sentences without being fed any grammar rules initially—i.e., without being given “instinctual” or “innate” rules.38 And, in the course of their training, the networks exhibit the same successive behavior (choosing first correct, then mistaken, then correct past-tense forms of common irregular verbs) that children exhibit as they learn to speak.39 These results may be considered the “response” of connectionism to the first of the following three beliefs of its critics: that, in order for neural networks to duplicate children’s learning a first language, the networks must (1) learn the correct grammar of the language induced from only limited samples of it, (2) reject all incorrect hypothetical grammars that could equally be induced, and (3) have a built-in bias that would be equivalent to the children’s innate bias for their first-heard language.40 In response to the second belief, “a learning rule . . . decreases probability of trying [language] constructions that had not proved useful” because they “did not match the adult grammar,” and so when a child’s early guesses at constructions do not match that grammar, they “eventually wither away . . . from lack of use.”41 In response to the third belief, there is “a principled way for the [network] program, and presumably for the child,” to choose “the best grammar, rather than the first one that fits the data.”42 On the other hand, unlike children, neural networks often become “confused” in forming the past tense of uncommon regular verbs; and the networks cannot, by themselves, satisfy other, more technical requirements for the humanlike learning of language.43 Consequently, although networks have been “trained to capture various aspects of grammar and other language and cognitive functions . . . , none of these models has yet attained a significant fraction of language learning or understanding.”44 Moreover, “[r]ecent advances in linguistic theory have led to an increase in the amount of linguistic knowledge that is hypothesized to be innately specified.”C Therefore, unless or until connectionism is further developed (an ongoing process45) so as to overcome the above problems, Chomsky’s instinctual theory will still prevail. But even if ultimately it were to be established that the theory is untenable, the use of SOROEP in learning to comprehend heard speech (and then in comprehending heard speech and read text) would still be credible not only because such use of SOROEP does not depend on that theory (though it is made more credible by it) but also because such use would still be possible under the condition of “general nativism,” because SOROEP may be part of the innate predispositions or biases theorized by Deacon, Bowerman, Maratsos, O’Grady, Woodward and Markman, and Gopnik and Meltzoff (see sections 8.2 and 8.3), because SOROEP is involved in resolving ambiguities of word meaning and syntax, and because quasi-SOROEP is a part of connectionism and “dynamic systems” theory.
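The “learning rule” quoted above, under which constructions that do not match the adult grammar “wither away . . . from lack of use,” can also be given a schematic form. The sketch below is an invented illustration, not the cited model: each candidate construction carries a weight, a construction is tried with probability proportional to its weight, and a mismatch with adult input shrinks that weight.

    import random

    # Hypothetical candidate past-tense forms for "go" (illustrative only).
    weights = {"went": 1.0, "goed": 1.0, "wented": 1.0}
    adult_form = "went"

    def try_construction():
        # A construction is tried with probability proportional to its weight.
        total = sum(weights.values())
        r = random.uniform(0, total)
        for form, w in weights.items():
            r -= w
            if r <= 0:
                return form
        return form

    for _ in range(500):
        form = try_construction()
        if form == adult_form:      # the guess matches what the child hears
            weights[form] *= 1.05
        else:                       # unmatched guesses "wither away"
            weights[form] *= 0.95

    # The forms that never match end up with far lower weight than "went."
    print({form: round(w, 3) for form, w in weights.items()})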

9.4: Determining Neuron-Firing in the Brain

One may go even further. If ultimately it were to be established that all higher cognitive functions are handled by a complex central processor instead of by many different modules,46 the brain would still be using a function prerequisite to though more elementary than quasi-SOROEP. This function is the “choosing” of one action (not interpretation) rather than another on the basis of the balance of “evidence” and is operative in the brain at a level correspondingly prerequisite to though more elementary than modules: it is operative in the firing or non-firing of each individual neuron receiving input from other neurons. At any time during the life of a brain, whether or not a particular neuron fires and, if so, how strongly depends on the balance between the properties of the connecting excitatory neurons signaling it to fire and the properties of the connecting inhibitory neurons signaling it not to fire. These properties include the number of signaling neurons of each type, the strength of the signal from each neuron, and the “size,” “weight,” or “importance” of the connection with each neuron.47 These properties constitute the input “evidence.” The particular neuron sums it up for each type and “balances out” or compares the sums in order to make a graded choice whether or not to fire and, if so, how strongly. If it chooses to fire, the “neuron’s rate of firing (that is, strength of judgment) would be based on some weighted function of the excitatory and inhibitory inputs”—that is, on “relative strength.”48 In using an analogy to illustrate this non-epistemic condition, linguist James Paul Gee unintentionally makes the condition epistemic: It is as if several friends are yelling a message to me (say, the message “Do it!”). Each friend is yelling with a certain degree of strength (is yelling more or less loudly) depending upon how excited he or she is. Further, each friend has a different connection to me, is more or less important to me in terms of how much I listen to him or her. This is my friend’s “weight.” The contribution each friend makes to my decision as to what to do (how excited to get, and, thus, how loudly I will yell to other friends) is just the product of how loudly that friend is yelling times how important (weighty) that friend is to me. I decide whether to yell at all to other friends, and how loudly I will yell if I do yell, by summing the contributions of all my friends. If this sum is high enough I yell (if it is not high enough, I don’t yell), and the larger the sum, the louder I yell. It is also possible for me to have friends who specialize in yelling not “Do it!” but “Don’t do it!” If enough of these sorts of friends are yelling to me—or some very weighty ones are—they can drown out the friends yelling “Do it!” and inhibit my yelling at all.49

Although it is not Gee’s purpose, this analogy can suggest how the concrete and low-level neuronal “choice” whether or not to act based on the balance of “evidence” can provide the ground not only for the abstract and high-level human choice whether or not to act based on the balance of evidence but also for the even higher-level human choice (based on the balance of evidence) between one interpretation and another. This relationship between the low-level neuronal “choice” and the higher-level human choices is especially implied in “dynamic systems” theory. There, the dynamics of central cognitive processes [e.g., choosing between interpretations] are nothing more than aggregate dynamics of low-level neural processes, redescribed in higher-level . . . terms. . . . Dynamical systems theory provides a framework for understanding these level relationships and the emergence of macroscopic order and complexity from microscopic behavior.50

But even independently of dynamic-systems theory these level relationships are recognized: When a “neuron totals up all the excitatory and inhibitory signals it receives from the other neurons that converge upon it and then carries out an appropriate course of action based on that calculation,” the neuron is acting according to “principles of neuronal integration . . . likely to underlie some of the brain’s higher cognitive decision making as well. Each perception and thought we have . . . is the outcome of a vast multitude of basically similar neural calculations.”51
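Stated as a bare computation, this “balance of evidence” at the level of a single neuron is a signed, weighted sum compared with a threshold. The following sketch is only a toy illustration with invented numbers (echoing Gee’s loud and quiet friends), not a model of real neural parameters.

    # Each input is (signal_strength, connection_weight); excitatory weights
    # are positive, inhibitory weights negative (Gee's two kinds of friends).
    inputs = [
        (0.9,  0.8),   # strongly excitatory, "weighty" connection
        (0.4,  0.3),   # weakly excitatory
        (0.7, -0.6),   # inhibitory ("Don't do it!")
    ]
    threshold = 0.2

    # The "balance of evidence": the sum of strength times weight over all inputs.
    drive = sum(strength * weight for strength, weight in inputs)

    if drive > threshold:
        # Graded choice: the firing rate (how loudly to "yell") scales with
        # how far the summed evidence exceeds the threshold.
        print("fire, rate =", round(drive - threshold, 2))
    else:
        print("stay silent")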

CHAPTER TEN

OTHER THEORIES AND RELATED CONDITIONS

10.1: Other Conditions Suggesting an Innate Ability to Make SOROEP Judgments

There are at least six factual conditions suggesting that the ability to make SOROEP judgments may be innate: (1) Children as young as age four show that ability.1 (2) They also show the closely related ability to make judgments based on epistemic probability versus possibility.2 (3) Infants as young as sixteen weeks, without yet knowing numbers or how to count, discriminate between groups of different small numbers of items (i.e., possess the concepts more and less) and recognize numerical equivalenceA—predispositions for the ability to make SOROEP judgments. (4) Although members of an indigenous tribe in the remote Brazilian forest lack words for numbers beyond five, they can tell, among groups of items differently numbering much more than five, which group has the most items, which group has the next most, and so on down, as well as which groups have approximately the same number of items.3 They can do this, even without the ability to do exact arithmetic calculation—just as others can do it without that ability (normal infants, adults with brain lesions deteriorating memory, and many animal species)—because they all share with modern adults a primitive, language-independent ability to mentally represent numerical quantity.4 Modern adults use this ability unconsciously if asked “whether 9 is closer to 10 or to 5” or “whether 53 plus 68 is closer to 120 or to 150,”5 or if asked the reverse of these questions: whether 10 or 5 is closer to 9 or whether 120 or 150 is closer to 53 plus 68. Moreover, this mental-comparison ability “is not restricted to the number sense—e.g., it works in much the same way when people make size comparisons between animals based on names.” Since the ability “is apparent for far more than just numerical comparisons, . . . it may reveal
something more general about mental representation.”6 Therefore, the above two reversed numerical questions (which are in the form “whether number X or number Y is closer to Z, a number or sum of other numbers”) may be counterparts to being asked to make a SOROEP judgment—for instance, whether explication X or explication Y is closer to literary work Z. (5) Among modern adults the fully developed extranumerical versions of the above numerical abilities are universal:B the abilities to make comparisons, make “differential valuations” of things compared, and either order them as “continua . . . with gradations between them” or apply to them the logical notion of equivalence.7 (6) The ability of adults to reason “conjecturally” (another predisposition for the ability to make SOROEP judgments) is also universal.8 (Moreover, their ability to reason in general is thought to be, more likely than not, innate and modular.9) But the question whether the ability to make SOROEP judgments is innate may depend on whether the abstract concept of SOROEP itself is innate. And though at present this is not determinable, linguist Ray Jackendoff believes that what is surely innate is the unconscious mental combinatorial system for constructing such abstract concepts—a system that he calls, by analogy with language, the Universal Grammar of concepts. The Grammar constructs concepts in the following way. When we locate objects (or, what is more to the purpose here, when we compare their sizesC), we do so in terms not of absolute space but of figures placed against or “organized” on a limited background—a “visual field.” The Universal Grammar of concepts uses this “figure-ground organization of the visual field” to “conceptually organize” the extended world of spatial objects, and so “the organization of our thought parallels the figure-ground organization.” Then, by “conceptual parallelism,” the Universal Grammar extends the organization of the world of spatial objects to the world of nonspatial concepts, the world of abstract thought.10 In order to illustrate such conceptual parallelism and extension between size comparison of objects and the abstract concept of SOROEP, it is first necessary to introduce a pair of specific objects as an example. Such a pair of objects has been conveniently provided by E.D. Hirsch Jr. as an analogue to a relative-probability judgment that is imprecise but nonetheless accurate: We “can easily and correctly judge that one pile of sand is larger than another without being able to estimate the precise number of grains in each pile or even the relative proportion of one pile to another.”11 According to Jackendoff, the figure-ground organization can be extended to abstract concepts because the “characteristics that [both
spatial and abstract] things can be considered to have fall into broad families or ‘semantic fields,’” and across these fields our innate mental “conceptual system allows us to build complex concepts,” thereby resulting in parallelism and extension.12 For example, one semantic field for piles of sand is relative size or volume, and one semantic field for explications is SOROEP. Across these two fields we can build the “complex” concept that, whether referring to sandpiles or to explications, one thing is greater than another or more-or-less as great. Consequently, just as one sandpile can be seen as volumetrically greater—i.e., more sizable—than another sandpile or more-or-less as sizable, so can one explication be conceived of as probabilistically greater—i.e., more probable—than another explication or more-or-less as probable. Of course, it may be thought that, because of the extreme abstractness of SOROEP and of probability relations in general, the process of conceptual parallelism and extension would not work in their case. Indeed, because of extreme abstractness, the domain of logical relations, which includes probability relations, has often been regarded as a field isolated from human conceptualization, to be studied by purely mathematical techniques. On the other hand, analysis through cross-field generalization reveals that this domain . . . has formal parallels to a very concrete semantic field having to do with pushing objects around in space.13

Jackendoff specifies that, in discussing “an innate basis for concepts,” he does not “necessarily mean that any particular concepts are innate: rather what’s innate—what human nature gives us—are the building blocks14 from which the infinite variety of possible concepts can be constructed.” Yet, he also does specify that certain abstract concepts are themselves innate—for example, the concept of possession or ownership. And the three reasons he offers for the innateness of this concept can also be reasons why the concept of SOROEP may be innate: First, the formal analysis of combinatorial systems . . . shows that possessional concepts cannot be derived from spatial concepts alone without adding something else [—“the fundamental relation of possession and the fundamental notion of a right” that “have to be available innately in the Conceptual Well-Formedness Rules” of the mind]. Second, the way evolution typically works is by innovating some weird little eccentric device, not by innovating big general-purpose systems. So the evolution of an eccentric domain of possession is on evolutionary grounds more plausible than that of a totally mushy general-purpose problem solver, which is in any event likely to be inadequate to the task of learning the notion of possession. Third, the notion of possession is tremendously
fundamental to human culture—this abstraction and its inferences constitute one of a very small number of major issues around which each culture constructs its equivalent of a legal system.D

Similarly, the fundamental notions of comparison and measurement upon which the concept of SOROEP depends have to be available innately in the Rules of the mind; the evolution of an eccentric domain for the concept is more plausible on evolutionary grounds than a general-purpose problem solver; and the concept is fundamental to human culture and its legal systems. Linguist Eve Sweetser also believes that SOROEP is a semantic extension of comparing objects. But the extension can come from comparing not merely the sizes but any characteristics of different objects, and not merely of objects but also of more abstract things—”entities” and even “situations.” The comparison is expressed as extent of “likeness,” and likeness at an abstract mental level is referred to in terms of physical likeness. In fact, if you say to me “John and Mary are alike,” I cannot tell without further data at what level you are comparing them. Further, our Pavlovian reflexes tell us that we can reason from similar situations to probable similar results. In earlier English usage, it was possible to say “He is like to die,” meaning what we would now say as “He is likely to die.” If a person’s appearance and situation resemble those of a person about to die, then (so far as we can tell) that person is more likely to die than someone whose appearance and situation are different. Thus physical resemblance and probable future fate are interconnected phenomena, at least in our folk understanding. (Compare modern English usages such as “It looks like Joe will be going to New York” vs “It looks like it’s stormy out right now.”)

Perhaps as a result, words for physical similarity (like) in Irish, Welsh, and English, as Sweetser notes, came to mean probability (likely). But she also finds that this “historical connection between the lexicon of physical similarity and that of probability (the like-likely link-up)” is only one of many “meaningshifts which fit neatly” into a larger systematic structure of semantic changes and interconnected semantic domains crossing many languages.15 And if this structure is innate, so is the extension of physical comparison to the abstract concept of SOROEP.

10.2: Innate Ability to Make Physical-Relative-Probability Judgments

Emphasis in Chapter 4 on the differences between physical probability and SOROEP—to show how only the latter applies to explication of a unique literary work—should not obscure the close cognitive relationship between the two kinds of probability.E Indeed, in the previous section, the SOROEP judgments discussed there can as readily be considered physical-relative-probability judgments occurring preconsciously in infancy or subconsciously thereafter; and in this section the preconscious or subconscious physical-relative-probability judgments to be discussed can as readily be considered SOROEP judgments. But, whether or not the ability to make judgments based on SOROEP is innate in humans, the ability to make choices based on physical relative probability is apparently innate in them.16 This is suggested by the results of experiments with infants. In one experiment, infants as young as two months were, first, habituated “to sequences of discrete visual stimuli [e.g., colors] whose ordering followed a statistically predictable pattern.” Then, when they were shown “the familiar pattern alternating with a novel sequence of identical stimulus components,” the infants showed an ability to detect the difference between the familiar and the novel sequence. “These results provide support for the likelihood of domain general statistical learning in infancy,”17 consequently of infants’ ability to differentiate between familiar and novel auditory sequences too, and so of an explanation for their subsequent ability to separate into individual words a continuous sequence of heard syllables spoken by others.18 Infants in their first year also begin to use physical relative probability preconsciously for a different purpose. Using probability frequencies, they compare what are called “cue validities” in order to classify or categorize every newly encountered object—i.e., to decide whether the object is an example of a concept by considering each feature of the object, the frequency with which that feature accompanies that concept, and . . . the infrequency with which the feature accompanies other concepts. For example, the feature of flying makes it likely that an object is a bird in proportion to the frequency with which flying is found in birds and in proportion to the infrequency with which flying is found in other things. . . . [Thus,] ability to fly is a highly valid cue for an object’s being a bird.
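A minimal sketch may make this cue-validity comparison concrete (the summing procedure it relies on is spelled out in the next paragraph); the features and numbers below are invented for illustration, not taken from the studies cited.

    # Invented cue validities: how strongly each feature points to each concept
    # (frequent with that concept, infrequent with its rivals).
    cue_validity = {
        "bird":      {"flies": 0.9, "has_feathers": 0.95, "barks": 0.0},
        "dog":       {"flies": 0.0, "has_feathers": 0.0,  "barks": 0.9},
        "butterfly": {"flies": 0.6, "has_feathers": 0.0,  "barks": 0.0},
    }

    def classify(observed_features):
        # Sum the cue validities of the object's features for each concept and
        # view the object as a member of whichever concept achieves the highest sum.
        totals = {concept: sum(validities.get(f, 0.0) for f in observed_features)
                  for concept, validities in cue_validity.items()}
        return max(totals, key=totals.get), totals

    print(classify(["flies", "has_feathers"]))    # classified as "bird"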

In practice, children consider “a large number of features of a newly encountered object, sum . . . the cue validities of the features of that object for different concepts, and view . . . the object as a member of whichever concept achieves the highest sum.”19 But this is only one of many ways in
which infants’ preconscious (and adults’ unconscious) use of physical-relative-probability frequencies helps them make choices in perception, cognition, and language.20 Indeed, the number and variety of those ways are enough to suggest that “some aspects of early development may turn out to be best characterized as resulting from innately biased statistical learning mechanisms.”21 Especially relevant here is a way in which adults’ unconscious use of physical-relative-probability frequencies helps them make one kind of choice in cognition and their conscious use of SOROEP helps them make another kind of choice in cognition. This way is shown by the results of a “cue validities” experiment conducted by psychologists Lawrence W. Barsalou and Daniel R. Sewell. In this experiment, adult subjects generated a “graded structure” for given members of a concept or category—i.e., the subjects ranked each category member in order of how good an example or how typical it was of the category. For instance, given the category bird, most American college students would rank robin as very typical of the category, pigeon as moderately typical, ostrich as atypical, and (continuing in that direction) would rank butterfly as a moderate nonmember of the category and chair as an extreme nonmember. (Of course, in the experiment, the subjects were tested on examples less simple and obvious than this one.) In the first phase of the experiment, a group of undergraduates and a group of faculty members generated graded structures from their own viewpoints. As a result, the correlation between the two groups’ graded structures turned out to be only .23, “indicating that different populations of subjects may perceive very different graded structures in the same categories.” Then, in the second phase of the experiment, other undergraduates generated graded structures from what they believed to be the faculty viewpoint, other faculty members did the same from what they believed to be the undergraduate viewpoint, and graduates did the same from what they believed to be, first, the undergraduate viewpoint and, then, the faculty viewpoint. Considering the substantial difference indicated by the .23 correlation, Barsalou and Sewell were “surprised” by the results of the second phase: The undergraduates’ graded structures generated from what they believed to be the faculty viewpoint were “identical” to the faculty members’ structures generated from their own viewpoint; the faculty members’ structures generated from what they believed to be the undergraduate viewpoint were “very close” to the undergraduates’ structures generated from their own viewpoint; and the graduates were “perfect” at taking the viewpoints of both faculty and undergraduates. Although specific individuals from these different populations did not take each other’s viewpoint with any consistency (the correlations between them ranged from about .30 to .60), the results
showed that, on the average, different populations can be “very accurate” at taking each other’s viewpoint.22 The results suggest further that, whereas different populations may not attain consensus when their judgments are based on their own differing viewpoints (i.e., on their unconscious use of physical-relative-probability frequencies), they can attain consensus when their judgments are based on what they try to make (and believe to be) the same viewpoint—i.e., on their conscious use of SOROEP.

* * * * * *

Although noteworthy in cognitive science, the ability to make preconscious (and then unconscious) choices based on physical relative probability is even more important in the field of evolutionary epistemology. There the ability is considered to be a result of biological evolution (i.e., evolution of cognition) and an innate part of the preconscious cognitive apparatus in both animals and humans. But the ability is also considered to be half the reason why cognition evolved in animals and humans in the first place: In any species cognition evolved because the individual organism alternated continuously between “experience” of its environment and “expectation” (also called “prejudgment,” “extrapolation,” “prediction,” “hypothesis,” “forecast,” and “prognosis”) of what its next experience would be if it made a certain choice based on past experience; and this “expectation” was based on the individual organism’s preconscious sense of physical relative probability.23 Both the “experience” and the “expectation” tendencies of each generation were passed on to succeeding generations, whose “experience” and “expectation” tendencies not only reciprocally reinforced and enlarged each other continuously but also were reinforced and enlarged by the “experience” of all preceding generations. And so cognition in the species evolved. In humans the preconscious sense of physical relative probability evolved into an additional conscious sense of it as well as into a conscious sense of SOROEP.F

* * * * * *

There is an even more comprehensive theory in cognitive psychology about unconscious physical probability—not just that it is part of cognition or even a partial cause of its evolution but that it is the basis of all cognition: “that our brains represent the outside world numerically, that there exist subjective probabilities in our heads for everything under the sun, and that the very nature of cognition is computation.”24 By this view, although epistemic and conscious physical probability remain distinct from
each other, they both become manifestations of this unconscious and innate physical probability that is equivalent to cognition. Still, it seems to go against normal understanding to accept the idea that an epistemic-probability judgment completely based on obviously nonnumerical evidence is nonetheless based on numerical probabilities (“invisible frequencies”) within the brain. One may insist that the most valuable evidence we have does not, in fact, consist of frequencies—for example, books we read, people we talk to, and—most importantly—our own internal thinking. Yet, by and large, our basic reasoning patterns follow the standard rules of probability calculus remarkably well. . . . For example, the patterns of reasoning involved in assessing relative likelihoods of familiar events, thinking under hypothetical assumptions, judging the relevance of information sources, processing causal relationships, and combining contextual and stimulus clues in perceptual tasks . . . show a remarkable agreement with the rules of probability calculus. There is probably an evolutionary reason for this fortunate agreement. . . . I believe . . . that the internal experience of the human mind is so rich with frequencies, albeit invisible ones, that only a calculus tailored after frequencies can manage the outcomes of this experience. These invisible frequencies involve track records of mental processes, past usages of concepts and strategies, and vast populations of synthetic scenarios. Perhaps even the firing activities of neurons is relevant here.25

Such “invisible frequencies” may be reflected in the automated Latent Semantic Analysis model discussed above (near the end of section 8.2). And similarly its authors believe that the mechanism of their model may embody “a possible theory about all human knowledge acquisition” and be “a homologue of an important underlying mechanism of human cognition in general.”26 Such “invisible frequencies” or the unconscious SOROEP they support may also be the unacknowledged basis of human consciousness (not just of cognition) as it is analyzed by linguist Ray Jackendoff and by neurophysiologist William H. Calvin. In Jackendoff’s theory, mental structures must be constructed “to produce behavior, knowledge, and experience,” but the processor that determines the process of construction does not arbitrarily choose among the possibilities and then go on from there. . . . Rather, it constructs all reasonable possibilities and runs them in parallel, eventually selecting a single most plausible or most stable structure as more constraints become available, and inhibiting the other structures.27

Calvin’s theory is different, but his analysis is similar: Except when a person is selecting among alternatives consciously, every thought and idea (but especially every choice and decision, inference and conclusion, realization and insight) results from the brain’s unconsciously generating (“brainstorming”) a lot of random thought-and-idea possibilities, then unconsciously selecting the “best” or most “reasonable,” “logical,” “insightful,” “foresightful,” or “probable” one from among them, and then allowing the person to be conscious of having considered only that one. In this process the brain “works like a greatly speeded-up version of biological evolution (or of our immune system)” in that it “evolve[s] an idea, using variation-then-selection, in much the same way that biology evolves a new species using Darwin’s natural selection to edit random genetic variations and so shape new body styles.” The brain “shapes up new thoughts in milliseconds rather than new species in millennia, using innocuous remembered environments rather than the noxious real-life ones.”G And it may be that those innocuous environments are remembered unconsciously in the form of “invisible frequencies.”

* * * * * *

To summarize, then, the last four chapters have described many cognitive conditions relevant to the question of consensus. Several of those conditions are factual: relative-probability and probability-versus-possibility judgments in children, universal “conjectural” reasoning, prearithmetic number discrimination in infants, greater number-discrimination ability in an indigenous tribe (and other humans and animals) than its limited arithmetic language allows, and universal predispositions in adults for the ability to use SOROEP. The remaining cognitive conditions are theoretical rather than factual: “cue validities,” Latent Semantic Analysis, SOROEP as a concept in an innate Universal Grammar of concepts and in a structure of interconnected semantic domains crossing many languages, SOROEP judgments coming from a somewhat modular or modularized source in the brain and additionally from a neural network either in a connectionist system or (as gradation) in a “dynamic system,” SOROEP judgments being a higher-level extension of neural-network choices and neuron-firing determinations in the brain, children’s learning to read being partly affected by the earlier consensus in their learning to understand speech, parallel results of the brain’s probabilistic “guessing” (providing the right inference to resolve language ambiguity, insuring the right development and function of ocular vision, choosing the safest courses of action made possible by consciousness), physical probability being one of the two motive forces behind the evolution of cognition and (as “invisible
frequencies” in the brain) being the basis of human cognition and of its quasi-Darwinian process of variation-then-selection, and, lastly, the following conditions being innate—human reasoning ability, theorizing capacity, mind-reading ability, inductive mechanisms, abductive-inference principles, and a protologic faculty. Of course, there are other relevant cognitive conditions that I have missed because the multidisciplinary field of cognitive science is already too vast and burgeoning for one researcher (especially one not professionally in the field) to cover. But even when only those above-stated conditions are considered together, they strongly indicate the possibility (though they cannot prove) that the ability to make SOROEP judgments is innate in human beings and that this innateness fosters consensus in such judgments.

CHAPTER ELEVEN

IMPLICATIONS

The possibility that the ability to make SOROEP judgments is innate and so can foster consensus is reinforced by the likelihood of language (or of a predisposition to language) as a biological instinct. According to Steven Pinker, such an instinct would be evidence that there is some universal and fixed structure not just in the human brain but in “human nature,” even though “[m]odern intellectual life is suffused with a relativism that denies that there is such” a structure.1 For Jerry Fodor this relativism is represented in cognitive science by the “interactionist” idea that perceptual processes are not merely “arbitrarily sensitive to the organism’s beliefs and desires” but are “saturated” or “comprehensively determined” by cognition—an idea related to the idea in the philosophy of science that theories saturate observations, to the idea in anthropology that culture saturates values, to the idea in sociology that class affiliations saturate “epistemic commitments,” and to the idea in linguistics that syntax saturates metaphysics2—i.e., “that people’s thoughts are determined by the categories made available by their language.”3 And all these ideas are related to the idea in literary hermeneutics that all the above saturant conditions (one’s beliefs and desires, theories, culture, class affiliations, and categories made available by one’s language) saturate not only an interpretation of a literary work but even an explication of it, the SOROEP judgments behind the explication, and the choice of evidence supporting the judgments. But, as one believer in cultural saturation recently admitted, we have “pressed the cultural case too hard, ignoring evidence of constant or ‘natural’ features in the human experience.”4 There is much evidence against the ideas that cognition saturates perception5 and that language saturates cognition and culture.6 Admittedly, there is also much evidence that cognition influences the language expressing perception7 and that language influences perception and cognition.8 But influence is not saturation.9 On the other hand, with his theory of completely encapsulated modules, Fodor is at the other extreme. He believes that, if the human brain has
modular structure and perceptual processes are modular, they are distinct from cognitive processes, not “arbitrarily sensitive to the organism’s beliefs and desires,” and so dependably “inferential.”10 Indeed, Fodorian modularity can have a far-reaching implication for the specific perceptual process of a listener’s understanding heard speech: For Fodor, a sentence perception module that delivers the speaker’s message verbatim, undistorted by the listener’s biases and expectations, is emblematic of a universally structured human mind, the same in all places and times, that would allow people to agree on what is just and true as a matter of objective reality rather than of taste, custom, and self-interest.11

Indeed, much of this Fodorian implication is warranted even under conditions of non-Fodorian “soft” modularity, which can be “soft” in four different ways: (1) it can be as described in chapter 7, note A; (2) perceptual modules, being only semi-encapsulated, can be “sensitive to the organism’s beliefs and desires” without being saturated by them; (3) cognition can “engage . . . early [perceptual] processing sites to participate in cognitive tasks” and can do so without affecting “perceptual processing itself, owing to the considerable delay that results from the time it takes top-down information to reenter those early sites”; and (4) the modules are only “semi-hard-wired”—i.e., they have “no fixed neural architecture . . . since perceptual learning can modify the patterns of connectivity.”12 But even under these conditions, perceptual processes (given normal circumstances) are more dependably inferential than not, and consequently much of the above Fodorian implication is still warranted. Therefore, if that implication were adapted to literary explication, it could be said that a reader’s sentence-perception module can “approximate” an author’s message despite the reader’s biases and expectations and so be emblematic of a universally structured part of the human mindA that would allow readers to agree (i.e., reach consensus) on the SOROEP relation between two readings as a matter of objective reality rather than of taste, custom, and self-interest. Whether or not the human brain has such a sentence-perception module, the Artificial Intelligence community has long attempted to build one in a computer. One such attempt is being made by the Laboratory for International Fuzzy Engineering Research in Yokohama. Applying fuzzy logic, neural networks, and binary-brain-structure analogies to computer engineering, the chief architect of the project, Michio Sugeno, hopes that the computer, when finished, will be able to do “simultaneous translation,” make a “summary of a novel or article,” or, when given a photograph, “explain the meaning of it in natural language.”12 These activities, it should be noted, are similar or equivalent to language explication.

An alternative project—one that is less ambitious and more restricted in scope—is to develop a computer program that can select the most “probable” or “explanatorily coherent” explication from among already existent, man-made, competitive explications. Such an alternative project may someday be realized by continued improvement of some promising computer program—for example, the PROBEX program developed by Swedish cognitive scientist Peter Juslin and associates13 or the ECHO program (mentioned above in section 4.11) developed by philosopher Paul Thagard and associates or a similarly purposed program (named PEIRCEIGTT) developed by philosopher John R. Josephson and associates.14 And even though developers of such an alternative project would have a more restricted aim than the group led by Sugeno, what has been said of the latter group could also be said of the others—that, if they “succeed, come close, or even make it halfway, they will have cracked one of the great barriers in history.”15 But each group will have done something more though incidentally. Since all computers of any one model represent “a universally structured . . . mind” (“the same in all places and times”) for that model, the Sugeno group and the alternative-project groups may be said to be aiming at, respectively, language explication and explication choice by one kind of universally structured mind (a computer’s). And in proportion to the degree to which any of the groups achieves its aim, it will also make more likely the idea that the kind of mind which has already achieved language explication and explication choice to the highest degree (the human mind) and with the most complex kind of language (poetry) is also, in part, a universally structured mind for that “model” (the human species).
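The settling process behind such programs can be suggested in miniature. The sketch below is not a reimplementation of any of the programs just mentioned; it is only a generic constraint-network illustration in their spirit: two competing explications are linked positively to the evidence they explain and negatively to each other, activations are adjusted repeatedly until the network settles, and the explication with the higher final activation counts as the more “coherent” or “probable” one.

    # Two competing explications (H1, H2) and three pieces of textual evidence.
    # Positive links join an explication to evidence it explains; a negative
    # link joins the rival explications. All values are invented.
    links = {
        ("H1", "E1"): 0.4,
        ("H2", "E2"): 0.4, ("H2", "E3"): 0.4,
        ("H1", "H2"): -0.6,                    # the rivals inhibit each other
    }
    activation = {"H1": 0.01, "H2": 0.01, "E1": 1.0, "E2": 1.0, "E3": 1.0}

    def neighbors(unit):
        for (a, b), w in links.items():
            if a == unit:
                yield b, w
            elif b == unit:
                yield a, w

    # Repeatedly move each explication's activation toward the weighted sum of
    # its neighbors' activations; the evidence units stay clamped at 1.
    for _ in range(200):
        for unit in ("H1", "H2"):
            net = sum(w * activation[other] for other, w in neighbors(unit))
            activation[unit] = max(-1.0, min(1.0, 0.9 * activation[unit] + 0.1 * net))

    # H2 settles higher than H1 here because it explains more of the evidence.
    print({unit: round(a, 2) for unit, a in activation.items()})

The kinship with the connectionist competition described in section 9.2, where less plausible interpretations are inhibited by more plausible ones, is deliberate.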

PART III: SOROEP AND FOUNDATIONS

CHAPTER TWELVE

RATIONALISM, EMPIRICISM, AND COHERENTISM

Despite the prominence of the term foundation in the subtitle of this work, the term has been used sparingly in the text. Uncustomarily modified by the adjective relativistic, foundation was used in the Preface to posit a philosophical support for SOROEP, at the end of section 4.2 to show a possible relation to well-foundedness, and at the end of section 5.4 to characterize the “floating” “net of probability connections” that includes SOROEP and exists despite the absence of any “point of absolute certainty . . . to which to attach our knowledge of the world.” Additionally, in section 4.5, a major traditional philosophical foundation—“reason or pure thought”—was said to support justification and gradation (or degree) in justification—concepts on which SOROEP rests. And now a description of other kinds of foundation supporting SOROEP can be added, inasmuch as seven prerequisite topics have since been briefly described: second-order relative probability between two readings versus absolute (and zero) probability of a reading (sec. 4.3); the use of probability by explicators and mathematicians regardless of the question of a foundation for it (end of sec. 4.5); implicit (or tacit) procedural knowledge (sec. 5.2); the training of a neural network (sec. 9.2); the firing of brain neurons (sec. 9.4); the Universal Grammar of concepts inferred by Ray Jackendoff (sec. 10.1); and the structure of interconnected semantic domains inferred by Eve Sweetser (sec. 10.1). Another major traditional philosophical foundation is empiricism—i.e., sensory experience and “observation” of that experience.1 Admittedly, like deductive logic, justification and gradation (or degree) in justification have no empirical foundation, only a rationalistic one. But SOROEP has something of both—a rationalistic foundation by resting on justification and gradation (or degree) in justification, and an empirical foundation by resting on visual sensory experience, on “observation.” Because of the Universal Grammar of concepts inferred by Jackendoff, observation of the relative size of comparative objects on the background of a visual field

parallels and extends to judgment of the relative probability of comparative abstract concepts. And because of the structure of interconnected semantic domains inferred by Sweetser, words expressing abstract likelihood (i.e., probability) in several languages grew from words depending on “observation”—words expressing visually observed likeness: “He looks (or is) like someone about to die” or “It looks like he is about to die” was once expressed as “He is like to die,” which became “He is likely to die” or, in terms of SOROEP, “He is more likely to die than to live.” In both Jackendoff’s and Sweetser’s analyses, SOROEP would therefore be what philosopher Jesse J. Prinz calls an “innate perceptual concept” accepted within the philosophy of empiricism.2 An empirical foundation for SOROEP is thereby in evidence. After the beginning of the twentieth century, an alternative to foundationalism began to be posited by some philosophers. And coherentism, as it is called, is still a viable alternative for some of them.3 In coherentism, the conditions of empirical justification and knowledge can be expressed in terms of the complex, holistic relations that obtain among what we believe or accept, and . . . these relations are ultimately reciprocal or cyclical in some sense. The idea of mutual support or the reciprocity of justification-conferring relations is a denial of the primary tenet of foundationalism, viz., that justification is transmitted from one belief (or small set of beliefs) to another belief in a strongly directional or linear manner, and that all such lines of transmission find their source in one or another form of basic belief, a belief characterized by a kind of epistemic priority, in that its own justification is not, in turn, transmitted to it from other beliefs. In short, then, the coherence theory claims that no empirical beliefs enjoy epistemic priority, and all rely for their justification on their connection to, or membership in, the body of other things believed or accepted.4

The contrast between these two epistemological theories has also been stated more colorfully: For the foundationalist, knowledge of the actual, and even of the probable, requires a foundation of certainty. For the coherentist, knowledge is not a Baconian brick wall, with block supporting block upon a solid foundation; but rather a spider’s web in which each item of knowledge is a node linked to others by thin strands of evidential connection, each alone weak, but all together collectively adequate to create a strong structure.

Moreover, every item of knowledge “coheres” not only with other items in the network but also “with the data of experience.”5 In this latter respect,

then, coherentism uses an empirical foundation, indeed becomes partly foundational,6 and so is really “foundherentism”—the name coined by epistemologist Susan Haack for her position that justified beliefs not only need to cohere with each other but also need an empirical foundation in experience.7 And metaphorical analogues for a coherence network—the above-mentioned spider’s web, a geodesic dome, a tepee frame of upright sticks leaning against each other at their top ends8—also use an empirical foundation: the ground to support either them or what they must be attached to.A Of course, the ground need not be the firm, solid one of absolute foundationalism. Instead, it may be the yielding, swampy ground in philosopher-of-science Karl R. Popper’s metaphorical analogue for the support of objective science by a relativistic foundation: Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or “given” base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.9

But whether the empirical “data of experience” is regarded as a ground—firm or yielding—or as a foundation—absolute or relativistic—a coherence network of beliefs must rest on or cohere to it. In regard to justifying the use of SOROEP in literary explication, the following chain of reasoning shows that, like relativistic foundations, the coherence of coherentism justifies the use of SOROEP (and not merely because coherentism is also relativistic by nature):
(1) Absolute and relative epistemic probability are alike in being based on both of the two traditional foundations.B
(2) Absolute epistemic probability is incorporated into both foundationalism and coherentism.10
(3) Therefore, relative epistemic probability must alike be so incorporated, especially because coherentism partly depends on the empirical “data of experience” and so is really “foundherentism.”
(4) This leads to the inference that, with relative epistemic probability (or SOROEP) incorporated into coherentism, the coherence of coherentism justifies the use of SOROEP.
This justification is also shown by the following alternate chain of reasoning:

(1) A literary work may contain many different or overlapping coherence networks, and each one provides a different reading of the work.
(2) In any of the coherence networks, all the “candidate” beliefs cohere with the evidence (other beliefs in the network and the “data of experience”).
(3) All the beliefs in a network are “more or less plausible”11—i.e., more-or-less as probable as each other—and any of them can be dislodged by ones that are “more plausible.”12
(4) This relative plausibility or probability exists not only within a coherence network in a literary work but also between two different coherence networks in that work.
(5) Therefore, each of the two networks provides a different first-order-relative-probability reading of the work, and the two resultant readings can justifiably be compared as to relative plausibility—a condition of second-order relative probability or SOROEP.
(6) Consequently, the coherence of coherentism justifies the use of SOROEP in literary explication.
Lastly, the coherence of coherentism justifies the use of SOROEP even if, in the comparison of the two readings, the difference between the degrees to which each is coherent may not be the same as their relative probability (see the beginning of section 4.11 above), since the beliefs that best cohere together may not be individually the most probable beliefs.
Unlike foundationalism, coherentism is not familiar outside the scholarly philosophical community, and so, although it has received a fair share of criticism by philosophers,13 it has escaped the widespread disparagement that foundationalism has received outside that community. Outside it, disbelief in foundationalism is ascendant because, according to one philosopher, the term “has come to be used with disreputable looseness in . . . literary circles.”14 There, it is believed that no foundation of any kind exists, can exist, or should exist. And even within the philosophical community, disbelief in foundationalism is common because, “in the most historically prominent forms of the doctrine,” its “core has been obscured by other elements that are not at all required for their being cases of foundationalism.”15 Consequently, although many philosophers continue to believe in it,16 the traditional rationalistic and empirical kinds of foundation are no longer widely accepted. Moreover, since in a coherentist network every belief coheres with the “data of experience” and thereby becomes partly foundational, coherentism becomes only as acceptable (or unacceptable, depending on your viewpoint) as foundationalism. And coherentism cannot do without the “data of experience”—for without it as the “ground” to rely on, the

metaphorical analogues of a coherentist network (the spider’s web, geodesic dome, and tepee frame) as well as Popper’s analogue for the relativistic foundation of objective science (the building erected on piles driven down into a swamp) would be left impossibly “floating in empty space”17 and so would no longer be appropriate. However, this does not apply to SOROEP, for though it partly rests on rationalistic and empirical foundations, it does not need them in order to exist. As the title-page epigraph noted, the “elastic net of probability connections”—the metaphorical analogue that includes SOROEP—floats in empty space anyway. A “point of absolute certainty . . . to which to attach our knowledge of the world” is not needed. SOROEP can rely on other concepts that, as the next chapter will show, can serve as relativistic foundations. That is one reason why the justification of SOROEP by relativistic foundations rather than only by coherence has been and will continue to be the primary basis of discussion in this work. There is also another reason. The possibility of a foundation being relativistic and still being a foundation is not normally considered; therefore, most of the times when the term foundation is mentioned in this work, it needs to be modified by that adjective. By this means, a main purpose of the work is advanced: to emphasize the relativity feature essential in the epistemological theory justifying a particular type of literary explication. By contrast, the coherence of coherentism, being relativistic by nature, does not need that adjective modifying it, and so the purposed emphasis on relativity would be absent if coherentism were the primary basis of discussion in this work.

CHAPTER THIRTEEN
THREE “QUASIFOUNDATIONAL” CONCEPTS

Recently, three different concepts have been cited as providing a middle-way viewpoint between foundationalism and antifoundationalism inasmuch as the concepts prevent a condition of antifoundationalism without requiring a condition of absolute philosophical foundationalism. Concerning the first concept, Kant’s “transcendental” principles—they make possible both rationalistic and empirical foundations (assuming that these foundations exist) and so, not being dependent on them, would not be affected by disbelief in their existence. The second concept, implicit (or tacit) procedural knowledge, was described above in section 5.2. It is an unconscious knowledge not dependent on conscious knowledge and so not based on the conscious perception of experience, an empirical foundation. Therefore, disbelief in the foundational basis of that perception would not affect implicit procedural knowledge. Concerning the third concept, second-order logic—it is possible to have a foundation without foundationalism in the logic of mathematics, specifically in second-order logic.1 Unlike the first two concepts, second-order logic is “quasifoundational” by being relativistically foundational.

13.1: Kant’s “Transcendental” Principles
According to Patrick Colm Hogan, Kant’s “transcendental” principles are still valid. They represent, a priori, the universal condition that allows things to be objects of cognition2—i.e., allows us to objectify things and so think and reason about them and become familiar with them. Such principles are “imposed by the very nature of reason itself.”3 To Hogan, we could not think or even act without them. The “most ordinary interpretive or inferential acts would be baffled by an inadjudicable proliferation of alternative hypotheses,” and we would be unable to make not only analyses and arguments but even the simplest observations. Consequently, we are compelled to accept such principles even though we cannot show them to be true, cannot prove their validity without presupposing it.4

According to Hogan, the most obvious transcendental principles are the laws of logic (for example, the principle of noncontradiction that disallows a sentence from being “both true and false in the same sense at the same time”), but a less obvious one—or one “akin to” and having the “status” of such a principle—is the criterion of comparative simplicity: the criterion that relativistically favors whichever one of two alternative hypotheses accounts either “for (roughly) the same data with fewer theoretical principles” or “for more data with (roughly) the same number of theoretical principles.”A But Hogan implies that comparative simplicity cannot then approximate comparative probability because he considers probability to mean not epistemic but only physical probability—“a statistical notion that appears to have no clear interpretation in this context”5 even if it were possible to see “how probability could be calculated in this context.”6 However, comparative simplicity is a measure of the same order as SOROEP and, though not the same as SOROEP, can often approximate it and yield the same result. Consequently, like comparative simplicity, SOROEP too could be “akin to” and have the “status” of a transcendental principle.
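Stated procedurally, the criterion of comparative simplicity is easy to mechanize, and doing so makes its second-order, relativistic character visible: it never scores a hypothesis absolutely but only against a rival. The sketch below is purely illustrative; the hypothesis names, the counts of data and principles, and the tolerance used for “roughly the same” are all invented.

```python
# Illustrative comparison of two hypotheses by the criterion of comparative
# simplicity described above (names and numbers are invented for the example).

def simpler(h1, h2, tolerance=1):
    """Return the favored hypothesis, or None if the criterion is silent.

    Each hypothesis is a dict with 'data_accounted_for' (a count) and
    'principles' (a count of the theoretical principles it requires).
    """
    same_data = abs(h1["data_accounted_for"] - h2["data_accounted_for"]) <= tolerance
    same_principles = abs(h1["principles"] - h2["principles"]) <= tolerance

    if same_data and h1["principles"] != h2["principles"]:
        return min(h1, h2, key=lambda h: h["principles"])          # fewer principles
    if same_principles and h1["data_accounted_for"] != h2["data_accounted_for"]:
        return max(h1, h2, key=lambda h: h["data_accounted_for"])  # more data
    return None  # the criterion does not decide this pair

reading_1 = {"name": "reading_1", "data_accounted_for": 12, "principles": 3}
reading_2 = {"name": "reading_2", "data_accounted_for": 12, "principles": 5}
favored = simpler(reading_1, reading_2)
print(favored["name"] if favored else "undecided")   # -> reading_1
```

Like a SOROEP judgment, the function returns only a comparative verdict between the two candidates; it assigns no absolute measure to either.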

13.2: Implicit (or Tacit) Procedural Knowledge
Although implicit procedural knowledge is not dependent on conscious explicit knowledge, the reverse is not true. Implicit knowledge is an “axis” that “anchors” explicit knowledge and so needs no other foundation or “other justification than that it vindicates itself in the lives of . . . human beings, even when and as they discuss whether knowledge is possible at all.”7 Therefore, if the human ability to make SOROEP judgments is innate, or even if only development of the implicit procedural knowledge underlying such judgments is innate, then SOROEP has a foundation, albeit a cognitive one.

13.3: Second-Order Logic
Like implicit procedural knowledge, second-order logic too “vindicates itself in the lives of . . . human beings”—that is, in the lives of mathematicians. Whereas foundationalism implies an absolute theory supporting mathematics, a “[f]oundation without foundationalism is an account of mathematical practice, a codification of mathematical concepts and theories that reflects the mathematician’s informal view of her subject matter” and “reveals connections between theories.”B But “theories” are relationships between things, “connections between theories” are relationships between relationships between things, and, whereas first-

order logic cannot express such second-order relationships, second-order logic can: First-order logic is all about saying that things have properties and bear relationships to other things: . . . “there is some thing that stands in the relation of being-larger-than to every other thing.” The language of second-order logic allows us to quantify [i.e., use quantifiers like some and every] over not just things themselves but also over the properties that things have and the relationships that things bear to one another. In other words, in second-order logic we can write sentences containing such expressions as . . . “there are some relationships that.” In second-order logic (but not first-order logic) it is possible to express such propositions as . . . “every constitutional relationship that holds between the US President and Senate also holds between the British Prime Minister and the House of Commons.”8
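The quoted constitutional proposition can be mimicked in a few lines of code by quantifying over relations themselves, that is, by iterating over a collection of relation-predicates rather than over individuals. The relations and the “facts” below are invented stand-ins chosen only to display the shape of the quantification (“every relation that holds between the first pair also holds between the second pair”); no constitutional accuracy is intended.

```python
# Quantifying over relations (a second-order pattern): check that every
# relation in a given collection that holds between one pair of things also
# holds between another pair. The relations and facts are invented examples.

facts = {
    "can_dismiss":   {("President", "Senate")},                   # hypothetical
    "is_checked_by": {("President", "Senate"),
                      ("Prime Minister", "House of Commons")},
    "addresses":     {("President", "Senate"),
                      ("Prime Minister", "House of Commons")},
}

def holds(relation, x, y):
    return (x, y) in facts[relation]

def every_relation_transfers(pair1, pair2, relations=facts):
    # "every relation R that holds between pair1 also holds between pair2"
    return all(holds(r, *pair2) for r in relations if holds(r, *pair1))

print(every_relation_transfers(("President", "Senate"),
                               ("Prime Minister", "House of Commons")))
# -> False here, because the invented "can_dismiss" relation does not transfer.
```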

Second-order logic is therefore more comprehensive than first-order logic and “provides a better model of mathematical practice just because it does not possess the typical first-order properties.”9 Those properties are essential in rationalistic foundationalism, but, since second-order logic does not possess them, it would not be affected by disbelief in that foundationalism. Therefore, second-order logic provides a foundation without foundationalism and thus an alternative to both the view that mathematics must have an absolute theoretical foundation and the opposite view that it neither has nor needs any foundation. Likewise, second-order logic can provide an alternative (if one is desired) to the view in mathematics and philosophy that specifically probability, which is a form of logic, “neither has nor needs any philosophical foundation”10 and to the view in literary theory that literary explication (whether or not through the use of SOROEP) neither has nor needs any philosophical foundation. But, further, second-order logic can provide a philosophical foundation (albeit a relativistic one) specifically for the use of SOROEP in the practice of explication because SOROEP has a second-order nature like that of second-order logic. This likeness can be shown by comparing a description of a SOROEP statement with a description of the proposition illustrating second-order logic in the above quotation. A SOROEP statement is not merely about an unspecified (because indeterminable) probability relationship between two things (e.g., an explication and a body of evidence) and about an unspecified probability relationship between two alternative things (e.g., an alternative explication and body of evidence) but also about the probability relationship between the two relationships—a probability relationship specified either as difference (signified by “more probable than”) or as approximate sameness (signified by “more-or-less as probable as”). In like manner, the proposition

illustrating second-order logic in the above quotation is not merely about every constitutional relationship (unspecified) between the President and Senate and about every constitutional relationship (unspecified) between the Prime Minister and House of Commons but also about the constitutional relationship between every pair of the respective relationships—a constitutional relationship specified as sameness (signified by “also”). Thus, a SOROEP statement has a second-order nature like that of second-order logic. Moreover, just as a SOROEP statement incorporates a second-order relativity (see sec. 4.3 above), so does a proposition in second-order logic. The above-quoted proposition communicates only relative knowledge about each constitutional relationship between the President and Senate—that each relationship is the same as a respective constitutional relationship between the Prime Minister and House of Commons. And the proposition communicates (circularly) only the same relative knowledge about that respective constitutional relationship between the Prime Minister and House of Commons—that it is the same as the respective aforementioned one between the President and Senate. The likeness between a SOROEP statement and a statement in second-order logic is also discernible when the above description of a SOROEP statement is specifically about written-language explication and is expressed in the following different way: Except for explications not based on any evidence at all and so not inferable from any evidence, it can be stated that every explication of a work either is (A) more strongly inferable from some body of evidence than some other explication is inferable from some identical, partly different, or wholly different body of evidence or is (B) more-or-less as strongly inferable from some body of evidence as some other explication is inferable from some identical, partly different, or wholly different body of evidence. Although this statement contains seven quantifiers (every and some), its section B is an alternative to its section A, and so the statement has, in effect, only four quantifiers (which are italicized). But since “it takes [only] four quantifiers to reach a sentence that cannot be translated into a first-order language,”11 the statement is in a second-order language. Other examples can help show the distinction: “Every writer likes his first book almost as much as every critic dislikes the latest book he reviewed” versus “Every writer likes a book of his almost as much as every critic dislikes some book he has reviewed.” Or “The bestselling book by every author is referred to in the longest essay by every critic” versus “Some book by every author is referred to in some essay by every critic.” The first and third examples, with only two quantifiers each, are in a first-order language; the second and fourth examples, with four

quantifiers each (“a” having the indefiniteness of “some”), are in a second-order language.12 Lastly, the relationship between SOROEP and second-order logic is manifest both in the quasi-SOROEP function that determines neuron firing in an artificial or real neural network and in the “balancing” function that determines neuron firing in the brain (see sec. 9.2 and 9.4 above). These functions “can be formulated only by means of second-order logic”—i.e., “by means of quantifying over properties of neurons and/or relationships between neurons.”13

CHAPTER FOURTEEN
SOROEP JUDGMENTS AND INTERNAL-REPRESENTATION JUDGMENTS

Not the second-order logic of SOROEP judgments but their second-order nature together with the implicit procedural knowledge underlying them can be shown through the work of cognitive psychologist Roger N. Shepard and associates on the internal representation of external objects. The human subjects they test seem unable to tell us anything significant about the structure of an individual mental image as such. What they can, however, tell us about is the relations between that internal representation and other internal representations . . . . [W]e can readily assess within ourselves the degree of functional relation between any two [internal representations] by a simple, direct judgment of subjective similarity. Moreover, we can do this even though . . . we may be unable to communicate anything about the absolute nature of either of the two representations taken separately. (Thus, we easily report that orange is more similar to red than to blue without being able to say anything significant . . . about the unique subjective experience of the color orange itself.)1

As a result, an “approximate parallelism” or “structural resemblance” or “isomorphism” exists not in the first-order relation between (a) an individual object, and (b) its corresponding internal representation—but in the second-order relation between (a) the relations among alternative external objects, and (b) the relations among their corresponding internal representations. Thus, . . . the internal representation for a square . . . should . . . have a closer functional relation to the internal representation for a rectangle than to that, say, for a green flash or the taste of persimmon.2
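Shepard’s point, that the informative structure lies in the relations among representations rather than in any single representation, can be sketched with invented feature vectors; anticipating the square, rectangle, rhombus, and ellipse comparisons developed just below, the pattern of pairwise similarities among the toy representations is what would parallel the pattern of relations among the external shapes. The features and numbers are assumptions made only for illustration.

```python
# Toy illustration of Shepard's second-order isomorphism: similarity among
# internal representations (here, invented feature vectors) mirrors the
# relations among the external shapes. Features: [four right angles,
# four equal sides, straight sides, closed curve] -- purely illustrative.

import math

representations = {
    "square":    [1, 1, 1, 0],
    "rectangle": [1, 0, 1, 0],
    "rhombus":   [0, 1, 1, 0],
    "ellipse":   [0, 0, 0, 1],
}

def similarity(a, b):
    """Cosine similarity between two representation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

r = representations
print(similarity(r["square"], r["rectangle"]))  # high: shares right angles, straight sides
print(similarity(r["square"], r["rhombus"]))    # about as high: shares equal sides, straight sides
print(similarity(r["square"], r["ellipse"]))    # low: shares no listed feature
```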

Or, in a human subject comparing only different geometric figures, the internal representation for the square should have a closer functional relation to the internal representation for the rectangle than to that, say, for

an ellipse.A Likewise, the internal representation for the square should have about as close a functional relation to the internal representation for the rectangle as to that, say, for a rhombus (also called a lozenge or diamond), since the rectangle and not the rhombus has four right angles like the square but the rhombus and not the rectangle has four equal sides like the square.B Since the above second-order isomorphism exists not only with objects but also with concepts,3 it can be stated analogously that there is an approximate parallelism or structural resemblance or isomorphism not in the first-order (probability) relation between (a) a particular body of evidence pertaining to a literary work and (b) an explication (“internal representation”) of that body of evidence but in the second-order (probability) relation between (a) the relation between two variant bodies of evidence pertaining to the work and (b) the (probability) relation between their corresponding explications. Therefore, just as the internal representation for the rectangle is functionally closer to that for the square than that for the ellipse is, so can a reading be probabilistically closer to zero than another reading is. Or, if the metaphor of closeness is changed to a metaphor of distance, the other reading can be more distant from zero probability than the first reading is and consequently more probable than the first reading. Likewise, just as the internal representation for the rectangle is about as functionally close to that for the square as that for the rhombus is, so can a reading be about as probabilistically close to zero as another reading is. Or, in terms of metaphorical distance, one reading can be about as distant from zero probability as the other reading is and consequently more-or-less as probable as the other reading. In essence, human estimation of the relative probability between two different readings of a literary work is similar to human estimation of the extent of similarity (or difference) between the internal representations of two different external objects. Moreover, the latter estimation is an example of “primitive, internal assessments of similarity . . . (implicit or even unconscious though they may often be) that mediate every response we make to any situation that is not exactly identical to one confronted before”4—in other words, it is an example of implicit (or tacit) knowledge. In this respect also the two estimations are similar. Thus, besides the latter estimation, human estimation of the relative probability between two different readings of a literary work is also an example of implicit (or tacit) procedural knowledge, therefore may be related to the process of empathizing or indwelling5 that (along with “switching” from one’s own primary representation to a second-order representation of someone else’s primary representation) may be made possible by the activation of one’s mirror neurons, and so may account for

“mind-reading” someone else’s thoughts or intentions—e.g., those embodied in a text by its author.6 * * * * * * In summary, then, literary explication can have a relativistic philosophical foundation in SOROEP, which rests on such implicit abilities as the inference of evidence from data, the use of evidence to develop justification, and the comparison of justifications. In turn, such abilities rest not only on a traditional though disputed philosophical foundation—reason or rationalism—but also on a cognitive one—implicit (or tacit) procedural knowledge. Moreover, SOROEP rests as well on another traditional though disputed philosophical foundation— empiricism—not only because the observed organization of the visual field of objects cognitively parallels and extends to the organization of abstract concepts but also because of the evidence of semantic changes produced by visual sensory experience of the external world. And, further, SOROEP is a relativistic example of Kant’s “transcendental” principles; it is similar to an example of implicit (or tacit) knowledge—the extent of difference between the mental images of two different external objects; and it either rests on a relativistic philosophical foundation—second-order logic—or is an example of it, as shown by characteristics common to both SOROEP and second-order logic: each is a form of logic, each is relativistic and its relativity is of the second order, statements of each can contain four quantifiers, and each can account for neural-network choices in a computer or brain and for neuron-firing determinations in a brain. As a result of all this, SOROEP and literary explication based on SOROEP rest directly or indirectly on several relativistic philosophical foundations as well as a cognitive one. But since these foundations are all interrelated (and could undoubtedly be interrelated in other ways more technically complex and comprehensive than they have been in the present work), I consider them over-all as one aggregate foundation and so have used foundation only in the singular in the title and Preface of this work. One last note. I realize that, despite all the research, theories, evidence, and arguments presented in this work, the postmodern reader may still be skeptical that readers using SOROEP to explicate literature can be free enough from extraneous influences and considerations to assess disinterestedly both the first-order relativity of a reading to its evidence and the second-order relativity of different readings to each other. Of relevance here is David H. Richter’s reprinting in 2000 of Helen Vendler’s 1980 appeal to MLA members to teach literature disinterestedly by freeing themselves, when teaching, from the extraneous considerations of theory

and politics—i.e., by reverting to an earlier “attitude of entire receptivity and plasticity and innocence before the text” so that “intense engagement” with it results in “self-forgetfulness.”7 However, Richter reprinted Vendler’s appeal in order to take issue with it: “there isn’t any way to avoid politics merely by willfully ignoring the social and political implications of literary texts. Evading the politics of literature is only another political way of reading.” But Richter then goes on to state that disinterestedness before the text can indeed be “recaptured,” though not by reverting to an earlier attitude and ignoring theory, but by mastering it and so transcending it: “The process of reading is not regressive but transcendent. When we have fallen into theory, we can recapture our transparency of response to the text not by ignoring theory but by mastering it.”8 However, whether “transparency of response to the text”— i.e., disinterestedness—is approached by means of ignoring theory or mastering and transcending it, the means is of secondary importance as long as disinterestedness is approached. And it can be when SOROEP is used, since probability provides a way to give form to or objectify the attempt to approach disinterestedness.9

AFTERWORD
A SUPPLEMENT ON JUSTIFICATION

In this work I tried to avoid the controversial, epistemologically absolute terms meaning, truth, knowledge, and their syntactic variants (unless, of course, I was quoting or referring to others’ use of them), for I consider them to be unnecessary in order adequately to describe and justify SOROEP-based explication. By contrast, I necessarily had to use other controversial terms such as innate, justification, belief, inference, evidence, and their syntactic variants. And if justification of any such term I used is considered vulnerable to the condition of infinite regress, my recourse would be to fall back on SOROEP: to claim that the concept represented by the term I used is justified by being, not true or known in the epistemologically absolute sense, but more probable than competitive concepts (even as a belief is justified if it has “a better chance of being true than its competitors”1). Even the term justification is justified in this way, for the term can be used (as epistemologist Richard Fumerton uses the term “P”) “in such a way that one has justification for believing P only if . . . the justification makes P more likely than not to be true”2—i.e., makes justification more probable than its competitive concept, unjustification. It would then appear that I would have to claim additionally that that relative probability is itself justified by being more probable than concepts competitive with that relative probability (e.g., absolute probability and certainty as well as other relative probabilities), that this second relative probability is, in turn, justified by being more probable than competitive concepts, and that “all the way down” it would be relative probabilities, each one different from the others and so non-repeating (thus avoiding “ a form of coherentism—infinitely long circles”3) and resulting in a regressive “infinite ancestral justification chain” whose “possibility . . . has had its defenders.”4 (After all, though infinite, the chain is a recursion— “a constituent that contains a constituent of the same kind,”5 and this second constituent contains a constituent of the same kind, and this third constituent . . . and so on to infinity.) Among those defenders has been epistemologist Peter Klein. But, besides Klein’s ways of defusing the

problem of such a chain,6 there could be at least three other ways of defusing it in the case of a chain of relative probabilities: (1) Just as traditionally a justification chain can be ended at a step that represents an absolute foundation,7 the relative-probability justification chain would be ended even at the second step down by the relativistic foundation that the second relative-probability judgment represents.A (2) That relativistic foundation is related to “modest” foundations in the following way: the relativistic foundation sustains SOROEP-based explication, which is equivalent to “Inference to the Likeliest [i.e., most probable] Potential Explanation,” which is a version of “Inference to the Best Explanation,”8 which is a standard philosophical-argument form that establishes and validates “modest” empirical foundations.9 (3) With each step down in the infinite justification chain, the relative probability being more probable in that step than competitive concepts would serve, in effect, to increase the degree to which the first-step concept (the one represented by the controversial term I used) is more probable than its competitive concepts—until, after an almost infinite number of steps down, that degree would approach certainty (i.e., complete justification) as a limit and so approximate it although theoretically never reach it. This third approach to complete justification in the chain may be mathematically or visually illustrated by any one of four recursive analogues (or what in epistemology is called “thought experiments”): odds raised to the nth power, a graph extending to infinity, a theoretically almost-infinite physical system, and an infinitely recursive image.
[1] Odds Raised to the Nth Power. With each step down in the chain, the relative probability in that step can be, for the sake of illustration, converted to percentages written as decimals, expressed as odds, and given the highest mathematical odds rounded off (for convenience) to the nearest whole percent—i.e., .99 to .01. (Of course, the relative probability in each step is, in fact, different from that in every other step, but here, merely for a quicker illustration of results, each step will be given a relative probability of .99 to .01.) The .99 and .01 in the first step would each be multiplied respectively by the .99 and .01 in the second step, and each product would be made proportionate to 1—the sum of the two products—by being divided by that sum, resulting in an effectual (as well as further-recursive and third-order) relative probability or odds of .99²/(.99² + .01²) to .01²/(.99² + .01²), or .999898 to .000102. Then this process of multiplication and proportionating to 1 would be repeated for the .99 and .01 in the third step, resulting in an effectual (as well as still-further-recursive and fourth-order) relative probability or odds of .99³/(.99³ + .01³) to .01³/(.99³ + .01³), or .999999 to .000001. Then the process would be repeated for each further step until, after an almost infinite number of steps, the effectual relative probability or odds would approach 1 as a limit and so approximate it although theoretically never reach it. If this illustration had truly shown the decimal odds in each step to be different from those in every other step, the gradual approach of the odds to 1 as a limit might not have been apparent as early as by the third step down, and so many more repetitions of the process of multiplication and proportionating might have been necessary before the gradual approach of the odds to 1 as a limit became apparent.
[2] A Graph Extending to Infinity. The graph can be made, first, by converting the relative probabilities in the chain to ranges of percentages written as decimals and then, in order to show these ranges graphically, by translating them into a range of x-coordinates along the horizontal axis of a graph. First, the conversion: Because the concept represented in the first step of the chain by a controversial term I used and the concept of relative probability in all steps thereafter are each more probable than competitive concepts, the relative probability of either concept would be equivalent to the range of percentages (for convenience, rounded off to the nearest percent) from .51 to .99; whereas if the relative probability of either concept were less probable than a competitive concept, that relative probability would be equivalent to the range from .01 to .49; if it were as probable as competitive concepts, it would be equivalent to .50; if it were instead a certainty, to 1; and if instead a competitive concept were certain, to 0. Next, the percentages can be translated into x-coordinates by means of the equation x = a greater percentage (e.g., .51) minus its complementary lesser percentage (.49—i.e., 1 - .51). Therefore, x = p - (1 - p) = p - 1 + p = 2p - 1. As a result, the percentage range from .51 to .99 would be translated into the x-coordinate range from just above 0 to just below +1, whereas the percentage range from .01 to .49 would be translated into the x-coordinate range from just above -1 to just below 0, .50 would be translated into the x-coordinate 0, 1 into the x-coordinate +1, and 0 into the x-coordinate -1.
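The arithmetic of analogue [1] and the x = 2p - 1 translation just given can be checked with a short computation before the graph itself is described. The sketch below assumes nothing beyond that arithmetic, with the .99-to-.01 odds used above as its only input; exact fractions are used so that the claim that the effectual odds approach 1 without ever reaching it can be tested literally rather than being obscured by rounding.

```python
# Recomputing the effectual odds from analogue [1] and the x = 2p - 1
# translation from analogue [2]. Exact fractions let the "approaches 1 but
# never reaches it" claim be checked literally.
from fractions import Fraction

p, q = Fraction(99, 100), Fraction(1, 100)   # the .99-to-.01 odds of each step

for n in (2, 3, 10, 100):
    effectual = p**n / (p**n + q**n)         # odds after n steps, proportionated to 1
    x = 2 * effectual - 1                    # analogue [2]'s x-coordinate
    print(n, float(effectual), float(x), effectual < 1)

# n=2 gives 0.999898..., n=3 gives 0.999999..., matching the figures above;
# for every n the exact value remains strictly less than 1 (the last column),
# so x approaches +1 only as a limit.
```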

As for the vertical y-axis of the graph, each y-coordinate would represent a particular step in the chain from 0 to infinity (∞); and even though going from a step in the chain to a step justifying that step is visualized and described as a step “down,” the chain would be shown here extending upward instead so as to conform to the traditional upward extension of the vertical y-axis. The graph would visually represent the equation y = x²/(1 - x²) or, transposed, x = ±√(y/(1 + y)) and would be a parabola-like curve (though not a parabola) extending symmetrically from a vertex of x,y-coordinates 0,0 upward infinitely on the left toward x,y-coordinates -1,∞ and on the right toward +1,∞. That is, x = ±1 would be the limits or “asymptotes” that the curve would approach but reach only at infinity (just as a hyperbola curve does, though this curve is not a hyperbola). At the outset the first-step concept could be assumed to be, at worst, only minimally more probable than its competitive concepts (i.e., .51 to .49) and so is represented by the coordinates x = just slightly more than 0 and y = +1; and in each successive step (represented by an increasing y-coordinate upward and to the right of the vertex) the relative probability also could be assumed to be, at worst, only minimally more probable than its competitive concepts. But, with each successive step, the relative probability being more probable in that step than competitive concepts would serve to cause the first-step concept to be, in effect, increasingly more probable than its competitive concepts (i.e., x would become, in effect, increasingly more than 0, though still less than +1), until, when the number of steps (or the y-coordinate) was almost infinity, the relative probability being more probable in those steps than its competitive concepts would serve to cause the first-step concept to be, in effect, almost certain (i.e., x would become, in effect, almost +1). Of course, the probability of the first-step concept would theoretically never reach +1 until the chain reached infinity, but before the chain reached it, that probability would at least approximate +1—i.e., approximate certainty or complete justification.
[3] A Theoretically Almost-Infinite Physical System. For such a system, assume that a plumbing assembly of tanks connected by pipes is built so that its bottom row of tanks rests on the earth but that its topmost tank (Tank A) is as far above the earth as gravitational attraction still allows water in Tank A to flow back toward the earth through piping and lower rows of tanks. All the tanks in the assembly are the same shape and size, but two pipes of different widths (diameters) extend downward from the bottom of Tank A.

Now, let me return for a moment to the justification chain, where the concept represented in the first step by a controversial term I used and the concept of relative probability in all steps thereafter are each more probable than competitive concepts. As was assumed in the above graph analogy, each concept at the outset could be assumed to be, at worst, only minimally more probable than its competitive concepts—i.e., .51 to .49. Let the same ratio be used for the ratio between the two pipes in this physical-system analogy. Therefore, one pipe is 51 mm. in width and extends from the bottom of Tank A down into the top of Tank B1; the other pipe is 49 mm. in width and extends from the bottom of Tank A down into the top of Tank B2, which is alongside Tank B1. Then, from the bottom of Tank B1, two pipes of the same unequal widths extend downward, the wider pipe into the top of Tank C1 and the narrower pipe into the top of Tank C2, which is alongside Tank C1. And from the bottom of Tank B2, two pipes of the same unequal widths extend downward, the wider pipe into the top of Tank C3, which is alongside Tank C2, and the narrower pipe into the top of Tank C4, which is alongside Tank C3. Then, assume that the same procedure continues and leads to the next lower row of tanks (D1 to D8), then to the next lower row of tanks (E1 to E16), and so on, to a final row of tanks that rest on the earth. In this final row the odd-numbered tanks (those fed by the wider pipes) are sealed shut at the bottom, and the even-numbered tanks (those fed by the narrower pipes) are open at the bottom. At the outset all the water in the system is in Tank A but then is allowed to flow freely into the tanks below it even to the bottom row. With each successive row of tanks that the water enters—i.e. (in terms of the chain), with each successive step—an increasing amount of the water would enter the totality of odd-numbered tanks in the row, so that, when all the water flows into the bottom-row tanks, most of it would be in the odd-numbered tanks and so would still be in the system because of their sealed-shut bottoms. In other words, with each successive row (or each successive step in the chain), the increasing amount of the water entering odd-numbered tanks would, in the end, cause the system to save more of its water—i.e., to be more “justified.” By contrast, with each successive row, a decreasing amount of the water would enter the totality of even-numbered tanks in the row, so that when all the water flows into the bottom-row tanks, the small amount that would flow into even-numbered tanks would be lost to the system because of their open bottoms and would sink into the earth. Now, assume even more theoretically—indeed, fantastically—that this plumbing assembly is built to a height approaching infinity and that, even

at that height, the gravitational attraction of the earth is enough to draw the water in the topmost tank down through the pipes and lower tanks toward the earth. In this case, the amount of the water finally entering bottom-row odd-numbered tanks would be almost all the water that flowed out of the topmost tank, and the amount of the water finally entering bottom-row even-numbered tanks would be infinitesimal. Of course, those odd-numbered tanks would not contain absolutely all the water that flowed out of the topmost tank, and those even-numbered tanks would contain more than absolutely none of that water (at least before their open bottoms allowed that water to sink into the earth), but these absolute conditions as well as the complete saving of the water in the system—i.e., its complete justification—would at least be approximated.
[4] An Infinitely Recursive Image. A well-known image that shows an infinite number of recursions and self-replications is the camera shot near the end of Orson Welles’s film Citizen Kane when the aged Kane played by Welles slowly walks through a hallway in which the two parallel walls are covered by full-length mirrors facing each other. As a result, an infinite succession of receding and diminishing sideview images of Kane is shown reflected in an infinite succession of receding and diminishing reflected mirrors. And, with the reflected Kane images receding toward infinity and becoming progressively smaller—i.e. (in terms of the chain), with each successive step—the diminishing images would serve to cause the original figure of Kane to seem effectually or symbolically diminishing as well, until, when one of the images (or number of steps) should almost reach infinity, the image would be almost gone and would serve to cause the original figure of Kane to seem effectually or symbolically almost gone as well. Of course, the image would theoretically not be completely gone until a reflected mirror bearing it reached infinity, but before one reached it, complete disappearance of the image would at least be approximated, complete disappearance of the figure would at least be symbolized, and (in terms of the chain) complete justification would at least be approximated.
Here an objection might be raised that the above last three analogues are not adequate analogues for the described infinitely regressive justification chain because, in each analogue, the first-step concept (the y-coordinate equal to 1, the topmost tank, the Kane figure) is the same in kind as the repetitive concept in all subsequent steps (the y-coordinates from 2 to ∞, the other tanks, the Kane images), whereas the concept represented in the first step of the justification chain is different in kind from the repetitive concept (relative probability) in all subsequent steps. However, I do not think the objection valid. First, since y is an unknown

quantity in an equation and a coordinate of a graph curve, it is free to represent, in the same equation and on the same graph, not only numbers of different kinds (e.g., rational and irrational numbers) but also an infinite number of numbers that are all different from one another and so are nonrepeating. Second, the topmost tank is different in kind from all the other tanks because, at the outset, it contains all the water in the system and all the other tanks contain none; moreover, each horizontal row of the other tanks is different from each of the other rows because each row consists of a different number of tanks and so is non-repeating. Third, the figure of Kane is different in kind from all its reflected images, and since they are different in size and containment from one another, they are non-repeating. Each of the four analogues parallels the effects of the justification chain—that, as the chain lengthens, the degree to which the first-step concept is more probable than its competitive concepts increases, although it can never reach certainty (i.e., complete justification) as a limit. This is the same result that Peter Klein found, though by different means, concerning warrant (which is less than certainty but more than wellfoundedness10): “Warrant increases . . . because we are getting further from the questioned proposition. . . . Warrant, and with it rational credibility, increases as the series lengthens” even though theoretically “the matter is never completely settled.”11 Incidentally, I have wondered whether there might be an alternative to the injunction that each step (or link) in the justification chain must be nonrepeating lest a form of coherentism be in the chain. Couldn’t repetitive steps be viewed instead not as “cohering” with one another as in coherentism but as “supporting” and “being supported by” one another as in foundationalism? In the case involving relative probabilities, all steps after the first one could then be considered repetitions of the same relative probability and not be problematical. The chain would appear to be finite and on its way toward a basic justification and an absolute foundation, even though it would really turn out to be infinite and on its way toward only approaching and approximating but never reaching a basic justification, thus signifying only a relativistic foundation. The chain would be a foundationalist process but would reach an infinitist result. Metaphorically, the analogue of the “Baconian brick wall, with block supporting block upon a solid foundation”12 would be replaced by Karl Popper’s analogue for the relativistic foundation of objective science: a building erected on piles driven down into a bottomless but supporting swamp.13 But even if, after all, repetitions of the same relative probability are indeed viewed as a form of coherentism in the chain, wouldn’t the chain be only partly coherentist and (as discussed above) still be partly

foundationalist and infinitist? If so, the chain would exemplify all three of these theories of justification. And would that suggest the possibility of a “unified theory” of justification? Lastly, it should not be surprising that the problem with the infinite-regress chain can be defused in the matter of justification since the problem with the chain has been defused in other matters—for instance, in “the idea that thoughts are internal representations.” In that idea it would seem that a little man called a homunculus or demon must be in the head to look at internal representations, and “the little man would require an even littler man to look at the representations inside him, and so on, ad infinitum.” But this does not lead to an infinite regress because an internal representation is not a lifelike photograph of the world, and the homunculus that “looks at it” is not a miniaturized copy of the entire system, requiring its entire intelligence. . . . Instead, a representation is a set of symbols corresponding to aspects of the world, and each homunculus [in a set of homunculi] is required only to react in a few circumscribed ways to some of the symbols, a feat far simpler than what the system as a whole does. The intelligence of the system emerges from the activities of the not-so-intelligent mechanical demons inside it. . . . [I]f representations are read . . . by [a set of stupid] demons . . . , and the demons have [each a set of] smaller (and stupider) demons inside them, eventually you have to . . . replace the [set of] smallest and stupidest demons with machines—in the case of people and animals, machines built from neurons: neural networks.14

APPENDIX
EVIDENCE AND HYPOTHESES AMONG PROBABILITY TYPES

[Title outline: “Evidence and Hypotheses among Probability Types” (the overview chart referred to below as “the title outline on the page opposite”; not reproduced here).]

Even under (or despite) the assumption that the amount of evidence and the number of possible hypotheses are infinite, the explication process using SOROEP can still be shown to be feasible. This can be done by using a kind of extrapolation and comparison to show a continuum (rather than merely the difference) between physical and epistemic probability.A This, in turn, can be done through the following seven-step process: extending into the area of physical probability the notion of evidence as it is considered in epistemic probability, dividing physical probability into two types which may be called empirical and actuarial, making some evidentiary and hypothesis-relevant observations about (A) an empirical type of probability situation, comparing these observations with parallel ones about (B) an actuarial type of probability situation, comparing these latter observations with parallel ones about (C) a mixed actuarial-epistemic type of probability situation, comparing these latter observations with parallel ones about (D) an epistemic type of probability situation, and comparing these latter observations with parallel ones about (E) another situation of the epistemic type, a poetry-explication situation. (For a preliminary overview of this continuum and of the parallelism between the observations, see the title outline on the page opposite.)
(A) EMPIRICAL PROBABILITY
A common empirical-probability situation is the tossing of an irregular coin repeatedly so as to discover the probability of its landing on one face rather than the other.
(1) UNENDING APPROACH. Theoretically, if the coin and the person or machine tossing it never wore out, the coin could be tossed an infinite number of times. With each toss the calculated probability of the result of tossing the coin becomes more accurate, but that probability cannot be considered perfectly accurate until the coin is tossed an infinite number of times—an impossible feat in actuality. Therefore, with each toss, the actual probability can draw toward though never be identical with a theoretical perfectly accurate probability; or, restated in general terms, the obtainable answer can approach though never be identical with a theoretical perfectly accurate answer.
(2) INFINITE “EVIDENCE.” All characteristics of the coin (and of the person or machine tossing it) which affect the toss results to a degree from infinitesimal to appreciable may be considered “evidence.” Examples of such “evidence” are the relative sizes, shapes, and thicknesses of the two designs in relief on the two coin faces and the relative degrees to which the

two perpendicular edges bordering the coin faces are worn down. The number of these characteristics and so the amount of evidence is infinite.
(3) NUMBERLESS “HYPOTHESES.” The probability calculated from the toss results may be considered a “hypothesis” about or “reading” of the tossing property of the coin. Since the probability calculated after any toss is different from the probability calculated after the preceding toss and since the number of toss results can theoretically be infinite, the number of probability answers and thus of possible “hypotheses” about the tossing property of the coin can theoretically be infinite.
(4) PRACTICAL CURTAILMENT. Because of the above circumstances, after a certain number of tosses the experimenter ceases tossing, for she considers that the probability obtained thus far is accurate enough for her purpose and she assumes that more tossing will not alter the already obtained probability enough to justify more tossing. In this assumption, of course, she may be wrong, but her cessation of tossing does not mean that the probability obtained thus far is inaccurate past the point of usability and that all her work was wasted effort. Her cessation means only that the probability obtained is less accurate than it might have been. Besides, the more tosses she has done, the more probable it is that her assumption is correct.
(B) ACTUARIAL PROBABILITY
A common actuarial-probability situation is the consulting of statistics on (say) the lifespans of many now-dead cigarette smokers so as to discover the probable lifespan of one particular live cigarette smoker. The desired answer, this smoker’s life expectancy, is not found (as with the coin) by using an “empirical” method directly upon him to obtain results into the future. Instead, the answer is found indirectly—by substituting other (dead) smokers for him, obtaining results about them (their lifespans) from the past, averaging these results, and applying the average to him.
(1a) UNENDING APPROACH BY ADDITION. Of course, as with the coin tossing, results from the future rather than the past could be used: other live rather than dead smokers could be substituted for the object smoker and, when they died, the average of their lifespans would be applied to him. And, indeed, theoretically, as with the coin tossing, lifespan figures could be obtained from an infinite number of smokers over an infinite time into the future, and each figure would make the average indicate more accurately the object smoker’s probable lifespan, although the average could never indicate it with perfect accuracy since the average

of the “other” smokers could never be perfectly identical with the object smoker. However, in actuality, the object smoker would die during that future time and his death would make irrelevant all the collection of other smokers’ lifespan figures, for, now, the desired answer (the object smoker’s lifespan) would be known directly and, unlike the tossing property of the coin, with perfect accuracy. Consequently, at any time before his death, his probable lifespan must be found by obtaining lifespan figures on a finite number of already dead smokers. (1b) UNENDING APPROACH BY SUBDIVISION. Lifespan figures on a finite number of dead smokers are obtained from the narrowest subgroup of such smokers that also fits the object smoker and for which figures are obtainable.1 (For the purpose of this exposition, let us assume that lifespan figures are obtainable on smokers in any subgroup that the actuary doing this work thinks is relevant and wants to use.) In other words, the smokers are divided repeatedly into complementary subgroups, one of which fits the object smoker; and the average lifespan of smokers in a fitting subgroup indicates his probable lifespan more accurately than does the average lifespan of the smokers in the preceding wider subgroup before division. For example, if the object smoker is male, the average lifespan of the male smokers indicates his probable lifespan more accurately than does the average lifespan of all the smokers, male and female. And if, in addition, the object smoker has high blood pressure, the average lifespan of the male smokers with high blood pressure indicates his probable lifespan still more accurately than does the average lifespan of all the male smokers regardless of blood pressure. And so on, with each additional subdivision—male smokers with high blood pressure and low cholesterol levels, male smokers with high blood pressure, low cholesterol levels, and heart-trouble histories, etc.—their average lifespan more nearly approaches the probable lifespan of the object smoker, who is a male with high blood pressure, low cholesterol level, heart-trouble history, etc. Moreover, a narrower subgroup fitting the object smoker is formed not only by adding another characteristic to the description of the preceding subgroup but also by narrowing the range of a characteristic. For example, male smokers with high blood pressure, low cholesterol level, and heart-trouble history are narrowed to male smokers with blood pressure of 180-over-100, cholesterol level of 170, and history of angina pectoris. (Of course, beforehand, the object smoker would have been thoroughly examined and found to have these specific characteristics.) (1c) LIMITS OF APPROACH. Theoretically, if each of the relevant factors in this situation were infinite—the number of the smokers, the

114

Appendix

information available on their lifespans, and the information available on their and the object smoker’s characteristics (the number of these characteristics is actually infinite)—the smokers could be divided into complementary subgroups an infinite number of times according to the infinite number of characteristics. At that point a subgroup would be reached where the average lifespan indicated the object smoker’s probable lifespan with perfect accuracy.B However, the circumstance is only theoretical and so impossible. Since each of the relevant factors in the situation (except the number of characteristics) is actually finite, eventually after repeated subdivisions the number of smokers in a fitting subgroup would reach a point where they would be too few to yield a statistically significant number of lifespan figures from which a statistically significant average could be calculated.2 Therefore, up to that point, with each subdivision the other smokers’ average lifespan can draw toward though never be identical with a theoretical perfectly accurate probable lifespan for the object smoker; or, restated in the same general terms as were used in the coin-tossing situation, the obtainable answer can approach though never be identical with a theoretical perfectly accurate answer. (2) INFINITE “EVIDENCE.” All characteristics in the successive subgroups fitting the object smoker—e.g., the blood pressure of 180-over100, cholesterol level of 170, history of angina pectoris, etc.—may be considered “evidence.” Since the number of these characteristics is infinite, the amount of “evidence” is infinite. (3) NUMBERLESS “HYPOTHESES.” The average lifespan of the smokers in each successive subgroup fitting the object smoker may be considered a “hypothesis” about or “reading” of the object smoker’s probable lifespan. Since the number of such possible subgroups can theoretically be infinite, the number of lifespan averages and thus of possible “hypotheses” about the object smoker’s probable lifespan can theoretically be infinite. (4) PRACTICAL CURTAILMENT. After a certain number of subdivisions, the actuary ceases to subdivide, either because the smokers in any subsequent fitting subgroup would be too few to be statistically significant or because she assumes that no further characteristics affect lifespan detectablyC or because she considers the lifespan average obtained thus far to be accurate enough for her purpose and assumes that, although further characteristics may affect lifespan detectably, further subdivision would not alter the average enough to justify more subdividing. In these two assumptions, of course, she may be wrong. There may be in some

Evidence and Hypotheses among Probability Types

115

smokers an x characteristic that determines their lifespans more than any other characteristic; but that determinism may as yet be unknown to her and to medical science. Indeed, at the time, x and lifespan may seem completely unrelated to each other. Consequently, although lifespan figures on smokers with x characteristic (which the object smoker also has) are obtainable, she does not use them. But this omission does not mean that the probable lifespan which she calculated for the object smoker is inaccurate past the point of usability and that all her work was wasted effort. The omission means only that the calculated probable lifespan is less accurate than it might have been. Besides, the more subdivisions she has done, the more probable it is that her assumptions are correct, although that probability is lower than in the coin-tossing situation. (C) MIXED ACTUARIAL AND EPISTEMIC PROBABILITIES Ascertaining the probable lifespan of one particular live smoker can be an example as well of a situation in which actuarial and epistemic probabilities are mixed.D But, in this situation, the lifespans of other (dead) smokers with characteristics fitting the object smoker are not merely averaged, as they were in the preceding situation. Instead, the actuary now is a physician too and judges the extent to which, separately and in combination, characteristics fitting the object smoker have been found to affect other smokers’ lifespans—judgments that involve much more than merely calculating the other smokers’ average lifespan. Then, using these judgments as guides, the actuary-physician judges the probable extent to which the characteristics have affected and will affect the object smoker’s lifespan and, from that, estimates his probable lifespan. (1c) LIMITS OF APPROACH. Theoretically, if, as in the preceding situation, each of the relevant factors in this mixed-probabilities situation were infinite—including, in this situation, the actuary-physician’s ability to weight and integrate the infinite amount of information availableE—the result would be similar to the result in the preceding situation: the object smoker’s probable lifespan would be indicated with perfect accuracy.F However, as in the preceding situation, the supposition is only theoretical and so impossible. Since each of the relevant factors (except the number of characteristics) is actually finite, the actuary-physician’s estimate can draw toward though never be identical with a theoretical perfectly accurate probable lifespan for the object smoker; or, restated in the same general terms as in the preceding situations, the obtainable answer can approach though never be identical with a theoretical perfectly accurate answer.
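The subdivision process of (1b) through (4) can likewise be sketched. In the toy example below (the records, the field names, and the minimum subgroup size of 30 are all invented for illustration), the reference class is narrowed one characteristic at a time, and subdividing stops as soon as the fitting subgroup would become too small to yield a statistically significant average.

```python
# Minimal sketch of (1b)-(4): narrow the reference class of dead smokers one
# characteristic at a time and curtail when the fitting subgroup gets too small.
records = [
    # (sex, high_bp, low_chol, lifespan) -- invented figures, for illustration only
    ("M", True, True, 68), ("M", True, False, 71), ("M", False, True, 76),
    ("F", True, True, 73), ("M", True, True, 66), ("F", False, False, 80),
    # ...in practice, thousands of such records
]

object_smoker = {"sex": "M", "high_bp": True, "low_chol": True}
subdivisions = [
    ("sex",      lambda r: r[0] == object_smoker["sex"]),
    ("high_bp",  lambda r: r[1] == object_smoker["high_bp"]),
    ("low_chol", lambda r: r[2] == object_smoker["low_chol"]),
]

MIN_SUBGROUP = 30                   # fewer cases than this is not statistically significant
subgroup = records
for label, fits in subdivisions:
    narrower = [r for r in subgroup if fits(r)]
    if len(narrower) < MIN_SUBGROUP:
        break                       # practical curtailment: stop subdividing
    subgroup = narrower             # a narrower subgroup fitting the object smoker

average = sum(r[3] for r in subgroup) / len(subgroup)
print(f"probable lifespan from {len(subgroup)} comparable smokers: {average:.1f} years")
```

Each completed subdivision replaces one “hypothesis” (the previous average) with a narrower and usually more accurate one, and the cutoff at MIN_SUBGROUP stands in for the point where a further average would no longer be statistically significant.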

(C) MIXED ACTUARIAL AND EPISTEMIC PROBABILITIES

Ascertaining the probable lifespan of one particular live smoker can be an example as well of a situation in which actuarial and epistemic probabilities are mixed.D But, in this situation, the lifespans of other (dead) smokers with characteristics fitting the object smoker are not merely averaged, as they were in the preceding situation. Instead, the actuary now is a physician too and judges the extent to which, separately and in combination, characteristics fitting the object smoker have been found to affect other smokers’ lifespans—judgments that involve much more than merely calculating the other smokers’ average lifespan. Then, using these judgments as guides, the actuary-physician judges the probable extent to which the characteristics have affected and will affect the object smoker’s lifespan and, from that, estimates his probable lifespan.

(1c) LIMITS OF APPROACH. Theoretically, if, as in the preceding situation, each of the relevant factors in this mixed-probabilities situation were infinite—including, in this situation, the actuary-physician’s ability to weight and integrate the infinite amount of information availableE—the result would be similar to the result in the preceding situation: the object smoker’s probable lifespan would be indicated with perfect accuracy.F However, as in the preceding situation, the supposition is only theoretical and so impossible. Since each of the relevant factors (except the number of characteristics) is actually finite, the actuary-physician’s estimate can draw toward though never be identical with a theoretical perfectly accurate probable lifespan for the object smoker; or, restated in the same general terms as in the preceding situations, the obtainable answer can approach though never be identical with a theoretical perfectly accurate answer.

(2) INFINITE EVIDENCE. All characteristics involved in the actuary-physician’s judgments and estimate—again e.g., the blood pressure of 180-over-100, cholesterol level of 170, history of angina pectoris, etc.—are evidence. Since the number of characteristics that can be involved is infinite, the amount of possible evidence is infinite.

(3) NUMBERLESS HYPOTHESES. The actuary-physician’s estimate is a hypothesis about or “reading” of the object smoker’s probable lifespan. Since that estimate can vary with the characteristics involved, the judgments made, and the particular actuary-physician choosing the characteristics and making the judgments and the estimate, the number of possible estimates and thus of possible hypotheses about the object smoker’s probable lifespan is infinite.

(4) PRACTICAL CURTAILMENT. Because of the above circumstances, the actuary-physician at some point proceeds no further. She may consider that her estimate is the result of weighting and integrating a large amount of information and that to try to weight and integrate more would make the process too complex to allow an estimate to be derived from it. Or, as in the preceding situation, she may assume that no further characteristics affect lifespan detectably, or she may consider her estimate to be accurate enough for her purpose and assume that, although further characteristics may affect lifespan detectably, working with more characteristics and lifespan figures would not alter her estimate enough to justify the work. As in the preceding situation, her assumptions may be wrong but, if so, will only cause her estimate to be less accurate than it might have been. Besides, the more characteristics and lifespan figures she has worked with, the more probable it is that her assumptions are correct, although that probability is lower than in the preceding situation.
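A crude way to picture how the mixed situation differs from plain averaging (again purely illustrative: the baseline figure and the judged adjustments are invented, not medical estimates) is a sketch in which separately judged effects of the characteristics, and of their combination, are integrated into a single estimate rather than read off one subgroup’s average.

```python
# Illustrative only: the actuary-physician integrates judged effects of each
# characteristic (and of the combination) instead of taking one subgroup's average.
baseline_lifespan = 79.0                   # invented population figure
judged_effects = {                         # judged adjustments, in years (invented)
    "cigarette smoking":           -8.0,
    "blood pressure 180-over-100": -4.5,
    "cholesterol level 170":       +0.5,
    "history of angina pectoris":  -3.0,
}
combination_adjustment = -1.0              # judged extra effect of the characteristics together

estimate = baseline_lifespan + sum(judged_effects.values()) + combination_adjustment
print(f"estimated probable lifespan for the object smoker: {estimate:.1f} years")
```

The arithmetic here is trivial; what the sketch is meant to suggest is that the judged adjustments themselves, unlike a subgroup average, are epistemic and would vary from one actuary-physician to another.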

(D) EPISTEMIC PROBABILITY

A common epistemic-probability situation is the deciding on a verdict in (say) a criminal lawcourt case.

(1-3) UNENDING APPROACH, INFINITE EVIDENCE, AND NUMBERLESS HYPOTHESES. Here, evidence relevant to the case can be infinite. In trying to decide on the defendant’s guilt or innocence, the panel of jurors (or judge or panel of judges, depending on the judicial system) considers, weighs, and integrates external evidence—the “facts” in the case. But especially when this evidence alone is not sufficient to elicit a unanimous verdict from the jurors, they can also consider, weigh, and integrate internal evidence—that which constitutes the defendant’s “character” and perhaps that which constitutes the “character” of each of the other persons involved in the crime. The jurors can try to understand the character of the particular person by considering his verbal and nonverbal behavior as noted by them, recounted by witnesses, or recorded in documents, and by considering the events and circumstances that, throughout his life, may have formed his character—an amount of information that can be infinite. Moreover, although the verdict can be one of only two alternatives (“guilty” or “not guilty”), it is an abbreviated or summary expression of the jury’s hypothesis or “reading” of the case; and the jury’s possible hypotheses can be infinite in number since each can be based on both external and internal evidence: what occurred in the performance of the crime and what of relevance occurred before and after it—occurrences that can be mental, emotional, and psychological as well as behavioral in the persons involved and consequently that can be infinite in number. Therefore, the jury’s hypothesis can draw toward though never be identical with a theoretical hypothesis encompassing the potentially infinite number of such occurrences; or, restated in the same general terms as in the three preceding situations, the obtainable answer can approach though never be identical with a perfectly accurate answer.

(4) PRACTICAL CURTAILMENT. Because of the above circumstances, the jurors cease deliberating at some point and render their verdict. As in the two preceding situations, they assume that no further evidence will basically change their hypothesis about the case. Of course, they may be wrong, and, if they are, the wrong assumption may cause their hypothesis to be not merely (as in the two preceding situations) less accurate than it might have been but so inaccurate as to lead to a wrong verdict. However, the more evidence they have considered, weighed, and integrated, the more probable it is that their assumption is correct, although that probability is lower than in the preceding situation.

(E) EPISTEMIC PROBABILITY again

Another epistemic-probability situation is the deciding on an explication of a poem.

(1-3) UNENDING APPROACH, INFINITE EVIDENCE, AND NUMBERLESS HYPOTHESES. Here, evidence relevant to the poem can be infinite. In trying to decide on the explication, the reader considers, weighs, and integrates external evidence—the language of the poem. But especially when this evidence alone is not sufficient to elicit a clear decision from the reader, she can also consider, weigh, and integrate internal evidence—that which constitutes the author’s “character.”G The reader can try to understand his character by considering his verbal and nonverbal behavior as noted elsewhere than in the poem and by considering the events and circumstances that, throughout his life, may have formed his character—an amount of information that can be infinite. Moreover, a reader’s possible explications of the poem can be infinite in number since each can be based on both external and internal evidence: what occurs in the poem and what occurred in the process of conceiving and writing it—occurrences that can be mental, emotional, and psychological as well as behavioral in the author and consequently that can be infinite in number. Therefore, the reader’s explication can draw toward though never be identical with a theoretical explication encompassing the potentially infinite number of such occurrences; or, restated in the same general terms as in the preceding situations, the obtainable answer can approach though never be identical with a theoretical perfectly accurate answer.

(4) PRACTICAL CURTAILMENT. Because of the above circumstances, the reader ceases analyzing at some point. As in the three preceding situations, she assumes that no further evidence will basically change her reading of the poem. Of course, she may be wrong, and, if she is, her wrong assumption may cause her reading to be not merely (as in the two smoking situations) less accurate than it might have been but (as in the lawcourt situation) so inaccurate as to be basically wrong. However, the more evidence she has considered, weighed, and integrated, the more probable it is that her assumption is correct, although that probability, as in the lawcourt situation, is lower than in the mixed actuarial-epistemic situation preceding it.
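Because the probability at work in the lawcourt and poem situations is comparative rather than numerical, any sketch of it should deliver only an ordering, never a measure. In the toy comparison below (the poem, the evidence items, and the simple tallying rule are all invented stand-ins for the reader’s weighing and integrating, much of which is tacit), two candidate explications are ranked solely by how much of the assembled evidence each accounts for, and only the resulting “more probable than” judgment is reported.

```python
# Toy comparison of two candidate explications; every detail here is invented.
evidence = [
    "extended nautical imagery",
    "ironic refrain",
    "elegiac closing cadence",
    "letters mentioning a drowned friend",
]

accounts_for = {
    "Reading A (elegy for a drowned friend)": {
        "extended nautical imagery", "elegiac closing cadence",
        "letters mentioning a drowned friend",
    },
    "Reading B (satire of seafaring ambition)": {
        "extended nautical imagery", "ironic refrain",
    },
}

def coverage(reading):
    """How many of the assembled evidence items the reading accounts for."""
    return sum(item in accounts_for[reading] for item in evidence)

reading_a, reading_b = accounts_for        # the two candidate readings
if coverage(reading_a) > coverage(reading_b):
    print(f"{reading_a} is the more probable reading on this evidence")
elif coverage(reading_b) > coverage(reading_a):
    print(f"{reading_b} is the more probable reading on this evidence")
else:
    print("neither reading is more probable than the other on this evidence")
```

No numerical probability is attached to either reading; the output is only the relation between them, and the crude tally stands in for a weighing of evidence that in practice resists being reduced to a count.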

* * * * * *

Because of the infinite amount of “evidence” and infinite number of “hypotheses” in each of the above five situations, the process of deriving the most probable answer had to be curtailed at some point. However, even under (or despite) those conditions of infinitude and the necessity of curtailment, the process still yields the best obtainable answer—i.e., the one that “works.” Tossing an irregular coin repeatedly and expressing the results in terms of probability remains the best way for an experimenter to estimate the tossing property of the coin; estimating life expectancies from lifespan data remains the best way for life-insurance companies to prosper; consulting his physician remains the best way for a needy patient to obtain an estimate of his life expectancy; and considering both evidence relevant to a lawcourt case (or to a poem) and the relative probabilities of hypotheses based on evidence remains the best way for a lawcourt (or a reader) to decide upon one verdict (or one explication) rather than another.

SUBSTANTIVE NOTES

Preface A To emphasize its contrast with relativistic foundation, I use the term absolute foundation in the present work although the accepted term in epistemology for straightforward, unmodified foundation proper is radical foundation. Moreover, relativistic foundation is not the first and only kind of non-absolute (or nonradical) foundation that has been posited. For example, there are foundations called “fallible” (Lehrer, Theory 46-47), “modest” (Pastin; Moser, Empirical Justification 117-19, Knowledge 2 and throughout; Richard Feldman 70-78; Hales 32-33), “moderate” (BonJour, Structure 26; Alan Goldman, “BonJour’s Coherentism” 125; Audi, Epistemology 205-08), “local” (Castañeda), and Wittgensteinian or “take-to-be-the-case” foundations (Black). Nor is this the first and only kind of relativistic foundation that has been posited. According to epistemologist Susan Haack, basic beliefs in experientialist empirical foundationalism are justified “by the subject’s sensory and/or introspective experience” and so are relative to the particular subject and his current conscious states and perceptual beliefs (Haack 15-16, 77-78). And, according to philosopher-of-science Karl R. Popper, the acceptance of basic statements depends on (is relative to) their logical connectedness and easy testability for answering searching questions in the testing, in turn, of theoretical systems (Popper, Logic 104-11, 128). Moreover, philosopher Anthony Quinton implies that a belief he ascribes to Popper is a belief in a relativistic foundation: that, since further evidence can be acquired for any statement, whether a statement is basic depends on (is relative to) no one’s disputing its basic status and insisting that further evidence for that status be sought. Quinton implies the same about a belief he ascribes to Nelson Goodman: that, since one has a free choice among the many alternative ways of systematizing a body of assertions, whether one of those assertions is basic depends on (is relative to) whether the system builder makes it basic in the way he chooses to systematize the body of assertions (Quinton, “Foundations” 56-57; see also Quinton, Nature 228). Another example is epistemologist Paul Moser’s concept of a “semantic foundation.” According to Moser, when one believes that a proposition is “justified,” one already has as a semantic foundation a presupposed “operative notion of justification,” of “what justification is for one”; and this notion is constituted and supported by one’s operative semantic (or conceptual) standards for correctly using the term and so is relative to such standards (which are various) and to the person who holds one or more of them. Support “for a notion for one is perspectival, being relative to one’s conceptual ends and pertinent semantic commitments,” and rests “on a (variable) notion of ‘support’ relative to which certain considerations can yield ‘support’”

(Moser, Philosophy 86, 98, 227, and throughout). But most relevant to the present work, epistemologist Roderick Chisholm and his followers clearly assume to be “primitive” the “comparative property of being more reasonable to believe than” (Fumerton, “Epistemic Role” 86) and, according to epistemologist Peter Graham, “Conservative, Moderate, and Liberal foundationalism” accept the following relativistic epistemic principle: “If S possesses one explanation that better explains S’s evidence than any other available alternative explanation, then S is justified in believing that explanation on the basis of the evidence” (Graham 95). However, it should also be noted that the currently popular postmodern doctrine of “social constructivism” is not considered an example of relativistic foundationalism since it is thought to show the impossibility of any kind of foundationalism. Instead, it is considered a relativistic epistemology (Kitching 9). Besides, in social constructivism one’s belief is said to be relative to a general entity (one’s society or culture or language) rather than, as in the above examples, to a specific behavior or way of thinking regardless of one’s society, culture, or language. And lastly it should be noted that the terms relative, relativistic, and relativity as used in the present work do not refer to postmodern/poststructuralist relativism: the view that there are only one’s own foundationless perceptions, beliefs, and culture from which to judge anything as more “privileged” than anything else. B In the present work a reading (a product of the reading process) will be used as a synonym for an explication and, where suitable, as a substitute for it so that the monotony that would result from continual repetition of the longer word can be avoided. (On the other hand, reading as the process should not be thought of as synonymous with explication as a process.) C By contrast, certain kinds of “meaning” can be considered probable (Spolsky, “Limits” 423), and so meanings of a literary work might seem more advantageous to deal with than explications. But the question of meanings is exceedingly problematic (see, e.g., Hogan, On Interpretation 3-5, 9-12). Therefore, this book concerns explications instead even though they cannot be shown to be probable. D Donoghue, “Practice”; Culler, “Literary” 290; Goodheart, “Criticism”; Peltason, “Uncommon Pursuit”; Edmundson, Why Read? 53-62, 77, 122-23 and “Against Readings” B7-10; Gallop. This return had been advocated as far back as 1991—in Said 28-30 (rpt. in Berman 185-86, 188)—and recent examples of it are Paglia, McSweeney, and Eagleton on poetry, McAlindon on drama, and Alexandrov and Quint on fiction. E See, e.g., Goodheart, Literary Studies 24-25, 32-34; Michael Clark; Isobel Armstrong; De Bolla ch. 1, 4; Altieri; Dasenbrock, Truth 260-66; Singer; Loesberg; McSweeney 3-8; Singer and Dunn; essays in Levine, in Joughin and Malpas, and in Elliott, Caton, and Rhyne. To multicultural-studies theorist Emory Elliott, the “aesthetic” and “formal qualities” that literature teachers “normally would consider in teaching established white authors” should also be considered in teaching minority literature since it too is “serious art worthy of literary explication.” This would produce a beneficial “shift in criticism more toward close readings and thereby open up new opportunities for commentators to demonstrate the verbal skill and artistry of many writers whose works have entered

the anthologies and classrooms in the last twenty years” (Elliott 13-14, 17). More recently, in her presidential address to the Modern Language Association, Marjorie Perloff sweepingly called for a return to literature, literary criticism, and interpretation of the aesthetic in literature (“Presidential Address”). F Although in a “moderate” form in which authorial intentions “inform (rather than determine)” interpretations (Swirski 141; see also his ch. 7 and Dutton 199-208, esp. 201) and hold variation in readings “within tight normative ranges,” suggesting that “authors expertly constrain . . . the analytical . . . range of response to their creations” and so are “alive and well” in their works (Gottschall 60). G Even Marxist critic Terry Eagleton stated recently that, compared with one reading, the “odds” were against its opposite and that a particular reading was “more likely” than some others (104, 125).

Chapter One Explication and Interpretation A E.g., see Robert Scholes’s distinction between reading and interpretation (21, 39-40), Gregory Currie’s between paraphrastic and non-paraphrastic interpretation (“Interpreting Fictions” 97), Jerrold Levinson’s between “does mean” and “could mean” interpretation (“Hypothetical Intentionalism” 310), Karol Berger’s between “analysis (or close reading)” and “hermeneutics” or “interpretation proper” (224-25), Jorge Gracia’s between meaning interpretation and relational interpretation (47-50), David Novitz’s between elucidatory and elaborative interpretation (105-06), or computer scientist Jerry R. Hobbs’s use of E.D. Hirsch’s distinction between the “inner” and “outer” horizons of a text when Hobbs stipulates that the text interpretation produced by a “cognitive agent” (computer program or robot) in the field of artificial intelligence represents the “inner” rather than the “outer horizon” (Hobbs 23; Hirsch, “Interpretation” 470 and Validity 224). B Miller 11-12. As the reader can see, instead of merely paraphrasing or summarizing the views of other writers, I quote liberally from them, for I believe that their views are more faithfully represented and better understood in the words in which they were originally expressed (unless in a foreign language unmastered by the reader). Faithful representation and good understanding of those views are already handicapped enough by the necessity of their being wrested out of their original, more fully explanatory contexts and would be further handicapped by their being merely paraphrased or summarized. Besides, quoting those views then allows the reader better to judge the use I make of them. C Kiparsky 187. However, Kiparsky applies this description of explication to literary interpretation as a whole rather than (as I do) to only the restricted kind of literary interpretation that is explication. D Nevertheless, probably because the French word explication also designates such a general activity (and is commonly translated) as “explanation,” Paul Ricoeur has chosen to coin the French word explicitation to designate the more limited activity which the English word explication designates (John Thompson 28). It is doubtful, however, whether explicitation will become widely accepted.

E The distinction between reference and meaning is Gottlob Frege’s, and the Scott illustration is Bertrand Russell’s. Their discussions of the distinction are included in Moore. Repeating the Scott illustration, James L. Battersby applies the distinction to literary criticism (Paradigms ch. 7). F Arms, “Explication.” Explication in French and English usage is different from the German Auslegung (often translated as “explication”) as it is used, e.g., in Heidegger’s Being and Time and Gadamer’s Truth and Method (see Mueller-Vollmer 34-35, 40-41). G “Construe,” def. 2, 3, Webster’s. The awkwardness of using construe as a noun is usually evaded by using the neologism construal instead, but that term does not appear in dictionaries and, besides, has been appropriated by psycholinguists to describe a specific technical process in cognitive grammar. See Frazier and Clifton 29 and following.

Chapter Two Theory and Practice A The most notable such call was Culler, “Beyond Interpretation,” but a more recent one was Livingston, Literary Knowledge 264.

Chapter Three Reasoned Argument A Carruthers and Boucher 8-9. Later in their book they describe the two doctrines in more detail but without making the differentiation between them more definite: “language is the vehicle of (and so a necessary but not a sufficient condition for) some types of concept, and/or some types of reasoning,” versus, it is “possible for language to enhance human cognitive powers, either via inner language, or by providing an external resource for cognition” (123). B Battersby, Paradigms xiii, 16. On the indispensability of reasoning to the understanding of fiction, see Livingston, Literature Introd., ch. 2; and for an up-todate advocacy of reasoned argument, see Amanda Anderson 2, 8, 12, and throughout.

Chapter Four Second-Order Relative Objective Epistemic Probability (SOROEP) A The last name, however, is overinclusive and not to be confused with one of the two theories of quantitative probability: frequentist probability and propensity. B The last two names, however, are overinclusive and not to be confused with two theories of epistemic probability called “logical” or “Keynesian,” on the one hand, and “subjective,” “subjectivist,” or “personalist,” on the other.

C Although the description of epistemic probability used here is different from that in the logical theory of probability, that theory too holds epistemic probability to be independent of consensus (see Keynes 4). When one proposition depends upon another as evidence and we say that the first is “probable in relation to” the second, “we are affirming a relation between the two propositions that holds necessarily and quite independently of the knowledge and beliefs of any particular subject” (Chisholm 35; italics mine). D Because of these changes, epistemic probability would then refer to a cognitive state that a person would be able to be in or that it would be “rational, or justified, for him to be in, even if he is not actually in it” (Alvin Goldman, Epistemology 415n13). Probability that is epistemic (“of knowledge”) might then be more appropriately called epistemonic (“capable of becoming knowledge”) if it were not for the fact that this rare term, not used since the seventeenth century, would be awkward from unfamiliarity. E BonJour, In Defense 2. For the most recent pro and con arguments about a priori knowledge and justification, see Shaffer and Veber. Incidentally, the distinctions between believability and rational belief and between justifiability and human justification may be thought of as analogous to either of two familiar distinctions in linguistics—Ferdinand de Saussure’s distinction between langue and parole or Noam Chomsky’s between competence and performance—i.e., between the system of language in general and any particular usage within that system or between a speaker-hearer’s knowledge of her own language and its actual use in concrete situations. Believability and justifiability are even more closely analogous to the understandability of objective knowledge as recorded in a book:

[W]hether anybody ever reads it and really grasps its contents is almost accidental. . . . [W]hat turns black spots on white paper into . . . an instance of knowledge in the objective sense . . . is the possibility or potentiality of being understood, the dispositional character of being understood. . . . And this potentiality or disposition may exist without ever being actualized or realized. (Popper, “Epistemology” 236; rpt. in Objective Knowledge 115-16)

F Cf. “we most often have strong reasons for preferring one [reading] to another, reasons that may be successfully defended by arguments better than those of an opponent” (Johansen 358). Indeed, not just readings but “all philosophical arguments are really explicit or implicit comparisons of plausibility” [i.e., epistemic probability] (Lycan 118). Moreover, there is epistemic justification for preferring one reading to another on the basis of epistemic probability because there is epistemic justification for preferring one “belief” to another on that basis (Swain, “Justification” 45-46 and Reasons 133-34) and because a reading may be considered a belief. There is epistemic justification as well for accepting the most preferred of such readings as an “Inference to the Likeliest [i.e., most probable] Potential Explanation,” which is a version of “Inference to the Best Explanation” (Lipton, Inference 60).

Finally, there is epistemic justification for accepting the most preferred of such readings as an Inference to the Likeliest Cause (Cartwright 6, 85, 92; see also Lipton, “Good” 1-2, 8-16, esp. 17-20)—i.e., to the most probable immediate cause of the literary work. The most probable reading would infer why the work is in the form (the text) it is in—i.e., would “infer from its elements the aesthetic that might generate this unique configuration” (Vendler 2). Readings are theories, and an “important aspect of theories is that the theoretical entities are seen to be causally responsible for the evidence” (Gopnik and Meltzoff 35; see also Thagard, ”Abductive Inference” 234 and Lagnado and Sloman 163). Just as the ability to infer the best explanation may be innate (Carruthers, “Roots” 92-93 and Architecture 347), so may the ability to infer the likeliest cause. Indeed, cognitive neuropsychologist Chris Frith compares the ability to infer the best or likeliest cause to the ability to solve what he calls an “inverse problem”— for instance, how to apply the muscular forces in his arm so that it moves into a particular position. This is a problem because it has no exact solution: I can follow a different path and go at different speeds, yet still finish up in the same position. Many—indeed an infinite number of—different force applications will cause the arm to reach the final position I want. So how do I choose which forces to apply? Luckily I am not aware of this problem when I move my arm. My brain has solved the problem. Some solutions are better than others and, from past experience, my brain is pretty good at choosing the best one . . . [even though we] don’t yet know precisely how the brain defines best for movements. . . . [T]his is the same problem that our brains have solved long ago in order to perceive the physical world. The meaning (in this case, the cause) of the signals that strike our senses is ambiguous in the same way. Many different objects in the world can lead to the same sensory signals. . . . [O]ur brain solves this problem by using guesses about the world. . . . It is the same inverse problem that has to be solved when we listen to words [or read them]. Many different meanings can lead to the same words. So how do we choose the best meaning? (Frith 166 and n; italics mine) We—or rather our brains—choose the best meaning by “guessing”—i.e., by using epistemic probability. By this means, our brains choose which cause best accounts for a particular effect—which force application best causes the arm to move to a particular position, which objects in the world best account for particular sensory signals, and which meaning best accounts for a particular passage of oral or written words. G Cf. “A man may believe that one statement has a better chance of being true than another without believing either statement to have any precise numerical chance of being true” (Lehrer, Knowledge 191). Or, “perhaps we must be satisfied with a comparative conception of epistemic probability” since “epistemic probability is interval-valued rather than real-valued” and “there may be no sensible way to assign real numbers to degrees of confidence” (Plantinga 173). See also Hirsch, Validity 174.

H It should be noted here that the two readings can be based on the same, partly different, or wholly different evidence and that the second-order relation between literary explications is similar to second-order relations in logic and in internal-representation judgments of external objects, as will be shown below in section 13.3 and Chapter 14 respectively. It may also be similar to second-order relations in fractal studies, chaos theory, turbulence theory, and other naturalscience studies. Incidentally, one might suppose that, to avoid confusion, a term might be used for the second-order relationship that is different from the term used for the firstorder one—“relative”/“relativity” for the first-order one, and “comparative”/ “comparativeness” for the second-order one. But that usage would not emphasize their sameness in kind and their difference only in order, the second-order relationship “containing” the first-order one. This is their second-order situation, and it is necessary that it be emphasized precisely because subsequently, in section 13.3 and Chapter 14, that second-order situation will be associated with secondorder logic and with the second-order nature of internal-representation judgments of external objects. But, the second-order situation must be emphasized also because, by the second-order relationship “containing” the first-order one, the second-order one can be recognized as a recursion—“a constituent that contains a constituent of the same kind” (Corballis 6). And recursion is important enough to be considered essential to the origin and development of human thought (Corballis 1 and throughout). Therefore, since “relative”/ “relativity” is suitable to both the first- and second-order relationships but “comparative”/“comparativeness” is suitable only to the second-order one, “relative”/“relativity” will be used for both. There is also another reason for not using the term “comparative” for the second-order relationship: to avoid associating the kind of probability applicable to literary explication with quantitative concepts, since works on the logical theory of probability and on confirmation theory try to find correspondences between “comparative” and quantitative concepts. (See Carnap xv ff., 8 ff., 22, 163, 428 ff.; Swinburne, Introduction 2-4, 38; Benenson 14, 220, 235, 238 ff., 268 ff.) Even so, it should be acknowledged that there have been attempts in the past to use quantitative probability (“frequency”) to choose between readings of a literary work. See, e.g., Hill, “Principles.” I In fact, this distinction might have been added to the above list of distinctions between explication and interpretation. J Ideals related to disinterestedness include impartiality, objectivity, openness, detachment, distance, self-suppression, and unsituatedness—ideals usually disparaged because of their unattainability but recently beginning to be reapproved as, nevertheless, worth aiming for. See, e.g., Peltason, “Seeing Things” 177-94. K Harris, Literary Meaning 6, 114, 142. Although such traditional intentionalists use SOROEP, they do so in order to try to obtain what they assume to be an obtainable absolute meaning—”the author’s meaning”—and so the viewpoint of the present work is not intentionalist. Incidentally, David R. Anderson, an intentionalist different from the norm (a reader-response theorist who locates intention not necessarily in the author), uses SOROEP too, but he uses it—along with his “subjective knowledge,” “past experience, current needs, and

personal resources”—to continually construct and transform the meanings of a literary work through continual “negotiation” with other interpreters of the work. L Ricoeur 212. Admittedly, another kind of anti-intentionalist, the New Critic, might not need to use SOROEP in order to try to find what she assumes to be the meaning of a literary text, for, according to William C. Dowling, that meaning exists like the meaning of a mathematical theorem or law of logic—i.e., the text meaning is absolute, abstract, independent of and external to author, reader, and, indeed, mind, and exists in the syntax and dictionary definitions of the words of the text. In such a case, the absolute meaning of a literary text would be either understood or not understood depending only on the New Critic’s ability to parse its language, use the OED (or foreign-language equivalent) to define its words, select from her findings the relevant evidence, and from that evidence infer the meaning of the text. M This ratio, commonly known as odds, is called relative probability too but, being of the physical kind, should not be confused with the epistemic kind used in explication. N There exists here a continuum of readings extending from those having no degree of justification to those having increasing degrees of justification but never to a “true” reading having “perfect” justification (whatever that might be). This continuum comprehends the four theses of Jean-Jacques Lecercle’s theory of interpretation: Although no reading is true, all readings are possible, and each of them is either false or “just”—i.e., allowable, approvable, or preferable (31, 236). In other words, every reading has either no or some justification. O Even in physical probability, although the “whole” (or certainty) represented by the numeral 1 does exist, it may not always be indispensable since, according to the logical theory of probability, certainty is only a special case of probability and so cannot he used to define the probability relation (Cox 4). As an instance, any physical probability expressed as odds—e.g., three-to-one—can be understood and defined without the knowledge or even existence of certainty (which, expressed as odds, would be the useless expression “n-to-zero,” where n can be any number but zero). P With the latter possibility, cf. “it is possible using a certain criterion to establish one interpretation as acceptable, and by using another criterion to establish a different, even incompatible, interpretation as acceptable” (Barnes 5152). With both possibilities, cf. “for any two statements a person believes to be relevant to each other. . . , one is believed to have a better chance of being true than the other or neither is believed to have a better chance of being true than the other” (Lehrer, Knowledge 191). See also Hirsch, Validity 169, 171-73. 
Q By contrast, through the use of the Principle of Indifference or Insufficient Reason, the logical theory of probability assumes a zero difference between the probabilities under certain conditions and thereby “provides the basis for the numerical measurement of probability and hence for the standard mathematical development of the theory.” However, without that Principle, probability in the logical theory would be similar to probability as described in the present work insofar as it would be located only “in assessments of the grounds the evidence may give for believing some hypothesis,” and such assessments would “permit no more than qualitative comparisons of probability” (Runde 100-02, 117).

R See, e.g., Wichmann 16 and Atkins 568, respectively. The phrase “not very different from” is also used in the application of fuzzy logic to chaos theory (Stewart 112). S The various expressions mentioned here (as well as in note P above) of this condition may be contrasted with deconstructive expressions of it as “suspension between alternatives,” “undecidability,” “unreadability,” simultaneous assertion and denial of the authority of the text’s rhetorical mode (Edmundson, Why Read? 47-49), and “the aporia of the conflict of interpretations, which hermeneutics by itself is unable to resolve” (Schrag 137). T It is sometimes called nondeductive logic (Franklin x). U For disagreement with both extreme opposing views, see, e.g., Eco 6-7, 21; Dunning 2; Lesser 4-5; Alter 221. For examples of the extreme opposing views themselves, see respectively Rendall 1065, 1067, and Juhl 13. According to philosopher Sonia Sedivy, the prevailing extreme view of only an unlimited number of equally acceptable though different readings is undermined by an argument that she bases on the philosophy of Ludwig Wittgenstein. The argument is that children instinctively adapt or are trained over time and maturation to follow the implicit norms or rules of their culture’s “forms of life activities,” including its language practices, until the norms become “second nature” to them. They learn the norms not by interpreting them but by practicing them. Because of that acculturation, a text immediately upon their reading it and without their having to interpret it presents a meaning to them, just as a face immediately upon their seeing it and without their having to interpret it presents an expression to them. As for the text, its meaning is an apprehension or description (not an interpretation) of the facts presented by the text. Interpretation either is unnecessary or can be done subsequently and additionally. But since meaning and facts are presented at a stage independent of interpretation or previous to it, they are not affected by the multitudinousness of interpretations (Sedivy 165-85). There is also a more commonly offered argument against the prevailing extreme view of only an unlimited number of equally acceptable though different readings: [I]f it is true that there is always more than one way of construing a text, it is not true that all interpretations are equal. . . . The text is a limited field of possible constructions. The logic of validation allows us to move between the two limits of dogmatism and scepticism. It is always possible to argue for or against an interpretation, to confront interpretations, to arbitrate between them, and to seek for an agreement, even if this agreement remains beyond our reach. (Ricoeur 213) The prevailing extreme view “scant[s] the degree to which different readings may overlap, or even coincide” and so “undermine[s] the communal basis of practical criticism, which is grounded in the possibility of common perception and mutual assent.” The extreme view “leaves no room for strong probability, the loose fit, the broad consensus that might differ only in detail” (Dickstein 39, 182). In trying to ascertain the “sense” of a text, “[d]ifferent users might set up slightly different senses; yet there will be a common core of probable operations and

content consistently found among most users, so that the notion ‘sense of the text’ is not unduly unstable” and consensus is possible (De Beaugrande and Dressler 7). For a discussion of whether consensus on an explication is made possible by SOROEP, see Part II. V Of course, one may still feel either that a “less probable” reading is never unacceptable or that it is just as acceptable as the “more probable” reading; but, in any case, one would be well “employed in working out hierarchies of probability among the meanings” (Ruthven 160). W Even if some readings are judged to be unacceptable or less acceptable, others judged to be equally acceptable may still be able theoretically to be generated “without limit.” Of course, their number cannot include the unacceptable or less acceptable readings and so is “limited” to that extent. But this limitation does not affect the theoretical possibility of an unlimited number of equally acceptable readings. In other words, though equally acceptable readings are limited in inclusiveness of possible readings, they are unlimited in number. They constitute what is known as a “bounded infinite set” (Gander 7-8). X To philosopher Nicholas Rescher they constitute the first two laws of text interpretation (Philosophical Reasoning 71-73). Y According to Paul Ricoeur, a reading that, “on the one hand, takes account of the greatest number of facts furnished by the text, including its potential connotations” (i.e., the most inclusive reading) “and on the other hand, offers a qualitatively better convergence between the features which it takes into account” (i.e., the more coherent reading) is the “more probable” reading (175-76).

Chapter Five Evidence and Hypothesis A “Meaning is created by the interpreter the way any scientific theory is fashioned, through the framing and testing of hypotheses” (Berger 228). B Spitzer 19-20. Although Spitzer at first calls the data “details about the superficial appearance of the particular work,” one should not infer that, in New Critical fashion, the data should include only internal evidence from the work. External evidence such as any evidence of authorial intention should also be included, for it may turn out to produce a more probable explication (see Dutton 199-208, esp. 201; Swirski, ch. 7 and throughout; and Gottschall 60). There are also other descriptions more recent than Spitzer’s of Schleiermacher’s conception of the repeated-reciprocal-adjustment process. For instance, see Iser 52-54. And one of the steps in the process—adjusting hypothesis to data—is described by Reed Way Dasenbrock in terms of Donald Davidson’s idea of adjusting a “prior” into a “passing” theory in interpreting speech. (Dasenbrock, “Do We Write” 26-27; Davidson, “Nice Derangement” 442.) For Dasenbrock, adjusting a prior into a passing theory occurs in interpreting writing as well as speech and, as evidenced by his calling a passing theory “provisional,” is only the initial step in the adjustment process (Truth 74, 75, 173). E.D. Hirsch, Jr., describes this step as the process of revising or adjusting a hypothesis (or schema) when the “range of predictions or expectations”

it provokes is not “confirmed by our linguistic experience.” The process provides “a more useful and accurate model than that of the so-called hermeneutic [vicious] circle” because, “[u]nlike one’s unalterable and inescapable preunderstanding in Heidegger’s account of the hermeneutic circle, a schema can be radically altered and corrected” (Aims 32-34). C “[B]y a nonconscious operation of inference” upon evidence, humans “confirm and disconfirm hypotheses, a kind of elaborative processing that leaves traces of confirmed hypotheses” (Dulany, 188). According to philosopherpsychologist Michael Polanyi, that nonconscious operation of inference upon evidence is one in which the evidence “integrates” with a hypothesis or “bears on” it or “fuses” or “merges into” it (Knowing 141, 194, 212). This process can also be considered one of enhancement rather than integration: humans have the “capacity both to sense the accessibility of a hidden inference from given premises, or to invent transformations of the premises which increase the accessibility of the hidden inference” (Polanyi, Personal Knowledge 129). There are still other theories, whether on evidence and inferred hypotheses or on premises and inferred conclusions. According to Charles Sanders Peirce, the process (which he named abduction) is simply a creative one in which humans creatively formulate the inference from the premises (Blackburn). According to philosopher Mark Johnson, the process is one in which, given the evidence, humans are unconsciously “forced” or “compelled” by logical necessity into the inference because they have experienced (felt or at least seen) physical force or compulsion (63-64). According to epistemologist Richard Fumerton, those whom he calls inferential internalists view the inferential process as one in which humans unconsciously discern the upper limit of a probability relation between premises and conclusion (“Epistemic Role” 78, 86). And, according to computer scientist Jerome A. Feldman, inference is simply “a process of quantitatively combining evidence in context to derive the most likely conclusions” (236). Perhaps, either of these last two theories may be relevant to why, when humans speak of the process, they do so in metaphorical terms of addition: “putting two and two together” or finding out what “it all adds up to” (Lakoff and Johnson 246). But, exactly how humans “integrate” evidence into an inference, enhance the accessibility of an inference, creatively formulate it, are unconsciously forced into it, discern the upper limit of a probability relation between it and its premises, or “quantitatively combine” evidence into it, or why they speak of it as the sum of addition is implicit procedural knowledge and consequently hidden and indescribable. D For the fourth example, see Woods 59, 62-63, 68-70, 75-78, 80. All four examples may be considered among the “complex encoding skills” that the “human cognitive system is equipped to nonconsciously acquire” and that “are necessary to maintain the normal level of adjustment (e.g., speaking, thinking, problem solving)” (Lewicki, Czyzewska, and Hill 175). E This process is an example of the implicit procedure that Evans and Over call relevance since it can as well be described as a procedure in which a reasoner’s implicit system represents information selectively and so makes it explicit and not only focuses her attention on it to use in explicit inference but also determines generally what she is aware of and thinks about (10, 22, 48, 54, 143).

F See the Appendix below for another way to show the feasibility of this curtailed process even under (or despite) the assumption that the amount of evidence and the number of possible hypotheses are infinite. G Other critics have noted specifically that Spitzer’s process of repeated reciprocal adjustment in explicating a literary work “resembles the inductive method of the experimental sciences” (Howarth and Walton, xxxiii). Spitzer’s process was “like scientific work. . . . His critical circle was much like that of hypothesis and experiment” and so exemplified “broader principles of inquiry that go beyond literary criticism” (Ellis, Literature 194). See also Ellis, Theory 190-97. H Arbib and Hesse 8. The literary critic Paul Hernadi uses the same spiral metaphor to describe the repeated reciprocal adjustment that he believes is necessary for a full interpretation (rather than just an explication) of a text: “[F]or the ever increasing penetration of critical understanding” there is required an “ongoing interaction of words and minds [that] could well be described as a hermeneutic spiral: progress through repeated recourse to the reconstructive explication, deconstructive explanation, and participatory exploration of [respectively] authorial signals, textual symptoms, and experienced disclosures” (xv). I Arbib and Hesse 8. For epistemological analyses that defuse the problem of theory-ladenness, see Alvin Goldman, Epistemology 188-89, Papineau 25-34, Raftopoulos, ch. 7, and Fales. J The case is the same in science—see Farrell 59—and in the field of artificial intelligence. In interpreting a text, a “cognitive agent” (computer program or robot) “can never be certain about any of its hypotheses”; still, it “can form good hypotheses about an author’s intentions” and “relate the text to what the agent believes the author is trying to accomplish” (Hobbs 18).

Chapter Six General Considerations A Politzer and Noveck 97. Psychologists Ralph Hertwig and Gerd Gigerenzer raised an apparently similar criticism of the Linda experiment: Subjects tested on Linda’s case would have considered it a case of single-event (or epistemic) probability “if they assumed, quite reasonably, that the experimenter had included the sketch of Linda for some purpose. . . . [T]hey would have interpreted the question as, To what extent does the information given about Linda warrant the conclusion that she is a bankteller? And a reasonable answer is, not very much” (351; see also Evans, Hypothetical Thinking 141, Newell, Lagnado, and Shanks 100-01, and Stenning and van Lambalgen 362-63). Although “the question is actually about mathematical probability . . . the cover story” of the problem suggests “one of the other legitimate meanings” of probability (Gigerenzer, Adaptive Thinking 266). It suggests “plausibility, credibility, or typicality” or “the degree to which the evidence . . . supports the conclusion” (Newell, Lagnado, and Shanks 75-77; see also Gigerenzer, Gut Feelings 95)—i.e., it suggests epistemic probability. However, when subjects are cued that the problem is one of

mathematical (physical) rather than epistemic probability--i.e., when they are asked, not which of the two statements is more probable, but to how many out of 100 people who are like Linda do each of the two statements apply--then, about the same proportion of subjects answer correctly as before answered incorrectly (Fiedler 126-27; Gigerenzer, Gut Feelings 96-97). B This is also shown by other experiments. In one of them, subjects are asked to list, first, all the seven-letter words ending in ing that they can think of and then all the seven-letter words with n as their penultimate letter that they can think of. As a result, the words that they list of the latter type include no words that they list of the former type, even though the latter type includes the former. Alternatively, other subjects are asked to estimate what might be the frequency of words of the former type within any given text and then what might be the frequency of words of the latter type within that text. As a result, the first frequency estimate is much higher than the second, even though the latter type of word includes the former and so would appear much more frequently than the former (Tversky and Kahneman, “Extensional versus Intuitive Reasoning” 295, 311; rpt. in Gilovich, Griffin, and Kahneman 21, 45). In another group of experiments, when given an epistemic setup consisting of a description and two alternatives, subjects wrongly ignore prior (physical relative) probabilities or “base rates” but, when not given the unintentionally “deceptive” description, correctly use the prior probabilities to decide between the alternatives (Metzger 517; Kahneman and Tversky 242; rpt. in Kahneman, Slovic, and Tversky 56; Politzer and Macchi). See Lagnado and Sloman for a comprehensive review of the above experiments and many others designed to test human fallibility in the making of SOROEP versus physicalprobability judgments. C John Anderson, Character 31, 37, 250-51; Evans and Over 8 and following. The evolutionary adaptive kind of rationality shown in the Linda and Reagan experiments may be considered a result of “System 1” cognition—the generalized classification for the evolutionary adaptive implicit (or tacit) procedural knowledge described above in section 5.2; and the normative logical kind of rationality may be considered a result of “System 2” cognition—the generalized classification for the normative logical explicit knowledge described in the same section (Stanovich, Who Is Rational? 147, 151, 198, 230).

Chapter Seven Modularity in Speech Comprehension and Reading A Chomsky, On Nature 84-86, 147; Pinker, Language Instinct 18. In the present work I will use the term module rather than any of the alternative terms since, unlike them, it allows other convenient syntactic forms (modular, modularity, and modularized) that I will have occasion to use. However, some cognitive scientists specify that not all the terms are synonymous—that module designates a more restricted, more encapsulated mechanism than does mental organ (see Stein 49-51; Pinker, How the Mind Works 31, 315; Prinz, “Mind” 3033; but see also Samuels, “Human Mind” 51-52). In deference to these cognitive scientists I acknowledge that, although I use the term module and its other forms, I

am really referring to the less restricted, less encapsulated mechanism (a “mental organ”): an “information processing system that is specialized for performing a specific cognitive function” (Cosmides and Tooby, “From Evolution” 291; see also Ermer, Cosmides, and Tooby 153 and Pinker, How the Mind Works 33) but can still involve “contributions from units dispersed widely” throughout the brain, “can be involved in many different information-processing and control activities,” can “recover function by using new areas when damage to an area affects function” (see Valerie Hardcastle and Matthew Stewart, paraphrased in Brook and Mandik 10; see also Hardcastle and Stewart 36), and can “send back projections to lowerlevel areas” (Brook and Mandik 11; see also Barrett 161-63). With these four qualifications indicating only semi-encapsulation, the resultant “soft” module should be acceptable to cognitive scientists whether they be, at one extreme, “massive modularity” theorists (see, e.g., Carruthers, Architecture 58-59, 62-64) or, at the other extreme, “dynamic systems” theorists (see, e.g., Spivey 123, 125, Van Gelder and Port 13). The term innate is just as complex, even though sometimes (e.g., in Roeper 281) it is used simply as synonymous with implicit and tacit as these terms were used in sec. 5.2 above. When in the present work I use innate independently as opposed to quoting or referring to others’ use of it (which I just did in the text), it can be assumed to refer to an expansion (prompted by Samuels, “Nativism” 236) of a definition of the term by Jeffrey Elman and colleagues: Innate refers to “putative aspects of brain structure, cognition or behavior” that, “given the normal environment for a species,” produce “developmental outcomes . . . more or less invariant between individuals” of the species (Elman et al. 22-23) and that, in the normal course of development (Samuels, “Nativism” 259), are either “the product of interactions internal to the organism” (Elman et al. 22-23) or exceed the product of internal interactions (i.e., cannot be accounted for by them) between the organism and either external phenomena or materials taken into the organism. Or, here is an abbreviated version: “a trait is innate if (a) it emerges during the course of development that is normal for the genotype, and (b) it is cognitively basic, not admitting of a cognitive (e.g., learning-based) explanation” (Carruthers, Architecture 10n5). Moreover, it should be noted that the longer as well as abbreviated version of innate “does not correspond, even in an approximate sense, to genetic or coded in the genes” (Elman et al. 23). However, if these descriptions of innate (like all others) are considered to be too vague or too complicated or insufficiently comprehensive or useful only as a diagnosis rather than a definition of the term, then innateness should be considered a gradational or sliding-scale measure that, below a minimum, becomes non-innateness; and this gradational innateness can as well be considered what my independent use of the term innate refers to. A rationale for my treating innate as graded toward its opposite, noninnate, is that the terms implicit and tacit (with which, as stated above, innate is sometimes equated) are sometimes treated as graded toward their opposite, explicit (Osman 1005-06). B For the close relationship between modularity and innateness, see Chomsky, Rules 40-41, Marcus 221n84, and Hanna 89-90. 
However, modularity and innateness should not be equated with each other (Elman et al. 100, 386-87; Cowie 287 and n15); modularity does not always depend on innateness (Hanna 88 and
n30) nor innateness on modularity (Elman et al. 21-23, 36-37, and throughout; Gopnik 171); and modularity is compatible with non-innateness (Neil Smith 104). For a comprehensive discussion of these matters, see Robert Wilson 14-17, 19-22, 50-60, 72-73. C E.g., “general multipurpose learning strategies” (Putnam, “‘Innateness’” 21; rpt. in Searle 138), “concepts and learning strategies that are required independent of language” (O’Grady, Principles x), “some innate structure” without which “no stimulus could be re-identifiable and no empirical generalizations could be made” (Katz 318), “conceptual learning” and “conceptual categories” (Edelman, Bright Air 129-30), an “initial structure of the language of thought” (Braine 90), “some set of general induction processes that operate in a wide variety of situations” (Reber 153), “rational inference” (Hogan, On Interpretation 24, 81-82), “conceptual structure” (Turner, Literary Mind 155, 157; Culicover 198 and throughout), the “timing [of] the development of memory” (“chronotopic nativism”) in conjunction with the innate constraints provided by the “properties of the processing mechanisms which are engaged in language use” (Elman et al. 34, 342, 348), “general cognitive capabilities for learning . . . possibly including language-specific phonetic and phonological capabilities” (Schütze 63), evolved “perceptual, motor, learning, and even emotional predispositions” (especially “an innate bias for learning . . . the logic behind symbolic reference”) that act as “adaptations for language” (Deacon 141, 350), brain organization that produces “biases or sensitivities toward particular types of information inherent in environmental events such as language” and “therefore constrains how language is learned” (Seidenberg 1603), an analytic device that can select the optimum “dimensionality” in which to represent semantic similarity between words and so can amplify the ability to learn their meanings (Landauer and Dumais 211-12), “the abilities to create and learn symbols, to form concepts and categories, to process vocal-auditory information rapidly, and to interact and communicate with other persons intersubjectively” (Tomasello, “Introduction” xi), the abilities to read intentions and find patterns (Tomasello, Constructing 3-4), “a baby’s predisposition to discover patterns in the language (or invent, in the case of creoles) and thereby softwire a language machine in one of the neurologically possible self-organizing schemes” (Calvin and Bickerton 5), “statistical learning mechanisms” that can “pick out the sound segments, phrase boundaries, grammatical categories, and rules that make up language,” thereby allowing children to “pick up subtle patterns by monitoring frequencies and making unconscious predictions” (Prinz, Furnishing 211), “the ability to use symbols . . . to form a higher level of pattern construction” out of “the grammatical patterns embedded in presymbolic and symbolic communications” (Greenspan and Shanker 362-63, 365), “the required biological factors necessary to benefit (verbally) from being brought up in a language-enriched environment . . 
.—for example, attention, inhibition, memory, motor control, and audiological processing” (Shanker and Taylor 375-78), the ability of the brain neocortex to form the classification of patterns and to build sequences (Hawkins 165), “the functional operation of the core behavioral intelligence system” whose “machinery of the basal ganglia” organizes and sequences information into meaningful patterns of language (La Cerra and Bingham 85), the abilities to produce and recognize sounds, plan motor
sequences, categorize, store and retrieve information, predict the actions of other individuals, engage in cultural learning, use perceptual primitives to derive simple concepts and from them form complex concepts (Dabrowska 62-69, 76, 107-12), or innate human embodiment of “Universal Grammar” ( the “system of rules or principles shared by all natural human languages”) under the assumption that “thoughts involve language-like mental representations” (Devitt 13-14, 142, 244, 246, sec. 12.4). D This is due to what has been called the “universal acceptance of nativism” argument: “that every psychological theory is committed to at least some innate structure” (Samuels, “Nativism” 238)—to “a set of innate attention biases and a variety of innately structured learning mechanisms” (Carruthers, Architecture 10) or even to only “a mechanism for acquiring other learning mechanisms” (Block 279). For an extensive discussion of minimally necessary innate structure in humans, see Plotkin, Necessary Knowledge 1, 249-51, ch. 4 and 5. E Marcus 30. According to developmental psychologist Paul Bloom, predispositions for a child to learn words include abilities not specialized for learning language: the ability (known as “mind-reading”) to infer the intentions of others by imagining or representing their mental states, the ability to acquire concepts corresponding to kinds and individuals, and the ability to learn socially transmitted information quickly and store it in memory (Bloom, How Children Learn 10, 34-35, 261). But see also Stromswold 368-86. F Schwartz and Schwartz 40-41. It should be specified that the term reading designates a process that is itself the combination of two separable processes: converting a series of visual (or, in braille, tactile) words into speech (whether audible, silent, or manually signed) and gaining some understanding of what that series of words represents. Though seemingly obvious, this specification is necessary since hyperlexics—extremely mentally retarded children who read accurately and rapidly but without understanding—do the first process but not the second and so separate the two. This phenomenon increases the possibility that at least part of the reading process is modular or (quasi-)modularized (see Cossu and Marshall, “Theoretical Implications” 586 and “Cognitive Skills” 36-38; Marshall and Cossu 52-54).

Chapter Eight
SOROEP in Speech Comprehension and Reading

A Pinker, Language Instinct 210. This “filtering-out” process can be said to depend on unconscious SOROEP: After “a word in a sentence is first heard” and “all of its possible meanings are activated, whether contextually plausible or not, . . . the otiose meanings are pared away over time as the word is integrated with the syntactic, semantic, and pragmatic context” and only the most plausible meaning is left; but, “[o]ccasionally among the promiscuous structures there are multiple stable states, in which case perception produces an ambiguous result such as . . . a pun or other ambiguity in language” (Jackendoff, Language 21; italics mine)—i.e., a word with at least two alternative meanings, either of them as plausible as the other.
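
To make the comparison concrete, here is a minimal sketch in Python (my own illustration, with an invented sense inventory and invented cue words, not anything drawn from Jackendoff or Pinker) of how paring away otiose meanings amounts to comparing each sense’s plausibility relative to the present context rather than its out-of-context frequency; a tie between top-scoring senses would correspond to the “multiple stable states” of a pun:

    SENSES = {  # hypothetical sense inventory for the ambiguous word "pants"
        "trousers": {"wear", "leg", "pocket", "belt"},
        "gasps": {"breathe", "breathing", "panting", "fast", "thick"},
    }

    def pare_away(context_words, senses=SENSES):
        """Activate every sense, score each against the context, keep the best."""
        context = set(context_words)
        scores = {sense: len(cues & context) for sense, cues in senses.items()}
        best = max(scores.values())
        survivors = [s for s, score in scores.items() if score == best]
        return survivors  # one survivor = disambiguated; several = pun-like ambiguity

    print(pare_away("as if this earth in fast thick pants were breathing".split()))
    # ['gasps'] -- the context, not the commoner meaning, decides

The example anticipates the De Beaugrande “pants” case discussed in note C below: the contextually relative judgment overrides the statistically more frequent sense.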

B Pinker, Language Instinct 210-11, 213-14; italics mine. Instead of the terms “gamble” and “bet,” other analysts specifically of the reading procedure use the terms “likely,” “probable,” “plausible,” “guess,” and “predict” (or syntactic variations of them): see, e.g., Rumelhart 588-602; Woods 62, 64-65, 68-69, 75-78; Goodman 127, 131-32; Frank Smith 18-19 and throughout. C By contrast, the criterion of physical relative probability would pick the tree or word meaning not “most likely” but only “likely to be correct,” for that criterion involves having an impression about the most usual, common, familiar, or “normal” tree or word meaning (Spolsky, “Darwin” 53). (Remembering the most frequent tree or word meaning would be an overstatement.) Besides, what would come to mind (the parsing mind) first would be not the tree or word meaning determined by SOROEP but the one “foremost on our mind due to conventionality, frequency, familiarity, or prototypicality” (Giora 10), and that tree or word meaning might agree with the SOROEP-determined one, but also it might not and so have to be rejected (Green 19n20). An example of this need for rejection and correction is recorded by text linguist Robert de Beaugrande: A class of his American students sometimes interprets “pants” in line 18 of Coleridge’s poem “Kubla Khan”—”As if this earth in fast thick pants were breathing”—as “trousers” (a usage “quite common” in English) rather than “gasps” (a usage “statistically improbable” and “quite rare”), even though “the context reverses these probabilities” (New Foundations 184). Readers must compare, not “statistical,” but “contextual” probabilities (i.e., use SOROEP) to identify a concept in a text: they must estimate the “weaker or stronger likelihood that the concept will subsume certain knowledge when actualized in a textual world, where each concept appears in one or more relations to others” and where these “relations constitute the linkage which delimits the use of each concept” (De Beaugrande and Dressler 85-86, 140-41; De Beaugrande, Text 148). Likewise, computer models may need to be corrected in word meaning, for they too measure physical relative probability. For instance, the automated Latent Semantic Analysis model had the “impression” that nurse was slightly more similar in meaning to physician than doctor was (Landauer, Foltz, and Laham 274; Perfetti 369). Moreover, computer models make many more mistakes in tree selection than do humans, who unconsciously use SOROEP: “[S]ystematic predictive analysis for a natural language grammar makes many more extraneous, tentative excursions down false parsing paths than people seem to” (Woods 61). D Pinker, Language Instinct 214. See also De Beaugrande and Dressler 6, 85. If the computer that paraphrased Time flies like an arrow had had “a bit of knowledge” about the use of similes, it might have chosen the correct paraphrase as the most probable one. And, indeed, later-model computers have been given a knowledge base, for, without it, they cannot solve problems by means of logic alone or even heuristics: Heuristics depend on the problem at hand, and the more we know about it, the better our hunches and the faster we will reach them. . . . Knowledge may be inseparable from intelligence. The more you know, the smarter you become. . . . [T]he mass of baseline knowledge called common sense

is . . . important for solving problems. (McNeill and Freiberger 211-12; see also Pinker, How the Mind Works 316) E Holbrook et al. 401-02. Holbrook et al. do not try to reason why their second interpretation “somehow does not seem as plausible” as their first one. They may have realized that any such reasons would be unconfirmable rationalizations, for a comparative-plausibility judgment is reached only by means of implicit (or tacit) procedural knowledge, which is unconscious and inaccessible (see sec. 5.2 above). F The use of the word guess (and its syntactic variants) by Pinker and most other commentators herein quoted on the child’s acquisition of language seems to be meant to imply that the child’s conscious or unconscious guesses about language are based on insufficient, uncertain, and/or ambiguous but nonetheless relevant evidence. And from such evidence a relative-probability judgment can be made and understood as possibly relevant to the process of language acquisition. But this is not so where the word guess is meant to imply the complete absence of evidence—where the child’s guesses are simply created, imagined, “dreamed up”—as in Geoffrey Sampson’s use of the word in his writings against Chomsky’s instinctual theory of speech comprehension. Yet, Sampson states that the child then “tests” his guesses against “objective external reality” (20, 22). In the testing process evidence (if there was any) would appear, and from the evidence a relative-probability judgment could be made and understood as possibly relevant to the process of language acquisition, but now where that process would not be based on Chomsky’s instinctual theory of speech comprehension. G Markman 67-70, 85. These biases are also called preferences, strategies, or assumptions, but the standard term for them is constraints although the term misleadingly connotes "a closing down of choice" rather than "free, but biased, choice" (Nelson, "Constraints” 228; see also Siegler, “Developmental Psychologists” 223-24). Using bias rather than constraint further implies a “statistical trend” in the choice rather than an invariable “taxonomic” choice and “undermine[s] the notion that the constraints are genetically hard wired and exist prior to word learning” (Bloom, “Recent Controversies” 750-51; rpt. in Bloom, Language 15-16). In general, an innate predisposition for language (see near the beginning of ch. 7 above) may also be considered a constraint (Newport 27; rpt. in Bloom, Language 557). H Pinker, How the Mind Works 243-44. Moreover, since “biological structures might find uses very different from those for which they had originally evolved”—i.e., might undergo “redeployment” or “exaptation”—the inferotemporal cortex much later was “recruited for other purposes—most notably, reading” (Sacks, “A Man of Letters” 27). Allowing us to infer or hypothesize objecthood, the inferotemporal cortex additionally allowed us to infer or hypothesize meaning in written language. And just as the most probable or best “guess” about the identity of a seen object is made from its two-dimensional image on the retina, so, in children learning to read and alexics trying to regain the lost ability to read, the “guessing or inference to complete a word [is often made] using such scattered clues” within the word as recognized letter clusters or syllables. Normally, perception of them or of the words themselves, “as well as inferences

and hypotheses based on such perceptions, become instantaneous and unconscious,” allowing us to “read fluently and . . . attend consciously to the meaning . . . of written language” (Sacks, Afterword 153-54). I Gopnik and Meltzoff 26, 32, 74-75, 214, 217, 221; see also Hogan, On Interpretation 23-26; Fodor, Language 58, 95, 97. However, there are also involved in scientific theorizing other cognitive abilities that are made possible by social conditions of scientific work and so are not available from the very start of child development (Carruthers, “Roots” 84-86 and Architecture 349-54; Gopnik and Glymour 118; Faucher et al. 335-36, 345, 348-54, 360-62; see also Halford 547-48 and Pinker, How the Mind Works 301-05). J Baron-Cohen, Mindblindness 26-27; Geary 330; Swirski 15, 100-02. According to Lisa Zunshine, the “theory of mind” or mind-reading ability of readers of a work of fiction enables them to interpret the minds of at least the characters in the work (ch. 1-6). But in addition, Zunshine seems to state that the author of the work expects readers both to supply the same interpretation of those minds that the author supplies and to assume that their own interpretation could be the same as the author’s: Writers can exploit our constant readiness to posit a mind whenever we observe behavior as they experiment with the amount and kind of interpretation of the characters’ mental states that they supply and that they expect us to supply. . . . [By our] comparing our interpretation of what the given character must be feeling at a given moment with what we assume could be the author’s own interpretation, we deliver a rich stimulation to the cognitive adaptations constituting our Theory of Mind. (Zunshine 22, 24-25) By this addition Zunshine may be implying that, through “reading” the characters’ minds, readers indirectly may be “reading” the author’s mind as manifested in the text. K Described, e.g., as noted in the above Preface near the end of its penultimate paragraph. L After reviewing research of the last 25 years in the theory of mind, psychologist Martin J. Doherty believes in 2009 that the data strongly support the theory theory of mind and the empathy/simulation theory (5, 54). According to cognitive neuroscientist Franck Grammont, A simple consensus could be found between these two theories if one considers that they both have a place but at different levels. We would use a theory [theory] of others’ minds principally at a conscious level, in rather complex situations, whereas we would use simulation [theory] in an automatic and nonconscious manner when we observe others’ actions and, more generally, their behavior. Obviously, the discovery of mirror neurons brought forth arguments in favor of the simulation theory . . . , even if it remains a rather controversial issue. (Grammont, “Can I Really Intend” 122-23)

Chapter Nine
SOROEP in the Brain

A Sperber, “Modularity of Thought” 60-63; rpt. as “Mental Modularity” in Explaining Culture 47-50 and in Whitehouse 47-50. Subsequently, Sperber has postulated metarepresentational modules or sub-modules that comprehend a speaker’s intended meaning from her utterance (and presumably an author’s from her text) and that evaluate the validity of an argument (and presumably of an explication) by checking its consistency and logic (“Metarepresentations” 129-37).

B In the integration phase of the Kintsch Construction-Integration model of narrative-text comprehension, the informational network is treated as a constraint-satisfaction network and, in “settling into a stable state, . . . serves to pick out . . . that subset of information which best fits the prevailing sets of constraints” (Wilkes 259-60; italics mine).

C Crain and Thornton 335. See also Campbell, Improbable Machine 183-85; Green and Vervaeke; Stromswold 356-64, 368-86; Cowie 276, 304-05, 311; Jackendoff, Foundations 82-92; Goldin-Meadow 220; Simpson 6-7; Siegal and Surian 134-35, 145-47; and for a theory of cognitive architecture (called Optimality Theory) that both depends on Chomsky’s instinctual theory and uses connectionism, see Prince and Smolensky.
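
Note B’s “settling into a stable state” can be sketched in a few lines of Python. The nodes, weights, and update rule below are invented for illustration (this is not Kintsch’s model, only a toy constraint-satisfaction network in its spirit): mutually supporting items excite one another, rival readings inhibit one another, and repeated updating leaves the better-supported reading active.

    NODES = ["reading-1", "reading-2", "evidence-a", "evidence-b"]
    W = {  # symmetric link weights: positive = coherence, negative = competition
        ("reading-1", "evidence-a"): 0.6,
        ("reading-2", "evidence-b"): 0.3,
        ("reading-1", "reading-2"): -0.8,  # rival readings inhibit each other
    }

    def weight(i, j):
        return W.get((i, j), W.get((j, i), 0.0))

    def settle(steps=50, rate=0.2):
        act = {n: 0.5 for n in NODES}                # start everything half-active
        act["evidence-a"] = act["evidence-b"] = 1.0  # evidence nodes stay clamped
        for _ in range(steps):
            for n in ("reading-1", "reading-2"):
                net = sum(weight(n, m) * act[m] for m in NODES if m != n)
                act[n] = min(1.0, max(0.0, act[n] + rate * net))
        return act

    print(settle())  # reading-1, the better-fitting subset, ends up near 1.0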

Chapter Ten
Other Theories and Related Conditions

A Starkey and Cooper; Treiber and Wilcox. Indeed, newborns even in their first week of life can distinguish between two and three items, although many species of untrained animals can do the same (Wiese 1-3; Alvin Goldman, “Sciences” 160). Results of experiments “suggest that early mathematical knowledge develops from an innate base” (Starkey, Spelke, and Gelman 179). That “infants possess true numerical concepts . . . suggests that humans are innately endowed with arithmetic abilities” (Wynn 749; see also Siegal 131 and Pinker, How the Mind Works 338, 340).

B Universality in a trait does not mean that it is innate but only suggests that it may be innate. See Hogan, Cognitive Science 201-02.

C A legitimate substitute since, in language, a comparing statement is “a complete semantic analogue” of a locating statement, and “comparative adjectives . . . act like spatial directions” semantically (Jackendoff, Semantics 197). Locating objects is made possible by “spatial cognition, . . . arguably one of the domains that is pre-structured most by innate knowledge”; and comparing the sizes of objects is made possible by spatial cognition plus the “innate” general capacity for “forming similarity judgments” (Schütze 117, 179). That locating objects and so comparing their sizes is an innate ability has been shown by eye-movement (“looking time”) results of tests on newborn infants (Gopnik, Meltzoff, and Kuhl 65-68).

D Jackendoff, Languages 66. Other cognitive scientists too have evidence that certain highly abstract concepts like ownership “have a genetically imposed head start in the young child’s kit of mind-tools; when the specific words for owning . . . enter a child’s brain, they find homes already partially built for them” (Dennett, Darwin’s Dangerous Idea 378-79).

E This relationship is, for instance, in the remembering of experiences originally judged by means of SOROEP but later remembered as physical-probability judgments. “For reasons of storage economy and generality we forget the actual [past] experiences and retain their mental impressions in the forms of averages [or] weights . . . that help us determine future actions.” Judgments or guesstimates in terms of physical probability are therefore “summaries of knowledge that is left behind when information is transferred to a higher level of abstraction” (Pearl, Probabilistic Reasoning 15, 21). Of course, a reverse relationship can also be true: A SOROEP decision can be made on the basis of evidence consisting of past physical relative probabilities (Cosmides and Tooby, “Humans” 66n19; Kyburg; Plantinga 171). Similarly, the greater the physical probability of x, the greater the strength of the evidence for belief in x and so the greater the epistemic probability of the belief.

Another connection between epistemic and physical probability is that, if the outcome of an experiment has a factual [i.e., physical] probability other than 0 or 1, the experimenter is rationally uncertain of the outcome. Hence it will sometimes be unclear which kind of probability an author means. . . . In a subject like insurance, the connection between the two concepts is in its nature close. (Franklin 326)

There are also “cases of reiterated and mixed probabilities. It is epistemically probable (with respect to our evidence) that the statistical probability of a tritium atom’s decaying within the next 13 years is just over one-half; it is statistically probable, we hope, that epistemically probable beliefs (beliefs probable with respect to someone’s evidence) are true”; and it “is epistemically probable with respect to my evidence” that “epistemically probable beliefs are statistically likely to be true” (Plantinga 141-42; see also Pollock 97, 109 and Lagnado and Sloman 173).

F Riedl 39-41. Moreover, what Riedl calls “logical” probability (45-46) may be interpreted as epistemic probability as well. Philosopher William S. Cooper also traces the evolution of epistemic probability, which he calls “subjective” probability (59-60, 82-83).

G Calvin 3, 22, 35-36, 38. Moreover, the high-speed probabilistic generation of thoughts and ideas may be considered not only similar to biological evolution but also a result of it. According to the founder of the subjective theory of probability, if we discovered “on what a priori probabilities a human being’s present opinions could be based, we should obviously find them to be ones determined by natural selection . . .” (Ramsey 192). And fuzzy-logic theorist Bart Kosko suspects probability “reasoning” to be a biological instinct that evolved in animals and humans because it aided in their struggle for survival (54).
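
As a purely illustrative sketch of Pearl’s point in note E (my own toy code, not a procedure proposed anywhere in this book), the compression of individual experiences into a stored relative frequency, which can later serve as evidence for a graded epistemic judgment, might look like this:

    class FrequencySummary:
        """Keeps only a count and a total, not the experiences themselves."""
        def __init__(self):
            self.hits = 0
            self.total = 0

        def record(self, outcome: bool):
            self.hits += outcome
            self.total += 1

        def relative_frequency(self) -> float:
            return self.hits / self.total if self.total else 0.5  # noncommittal default

    summary = FrequencySummary()
    for outcome in [True, True, False, True, True]:  # past experiences, then discarded
        summary.record(outcome)

    # The stored summary (a physical-probability estimate) now functions as evidence
    # on which a degree of belief about the next case can be based.
    print(summary.relative_frequency())  # 0.8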

Chapter Eleven
Implications

A If not qualified, the phrase “universally structured human mind” could wrongly imply that everyone’s mind is completely structured in the same way and that therefore variation in any mental structure among individuals would not occur.

Chapter Twelve
Rationalism, Empiricism, and Coherentism

A Another metaphorical analogue for a coherence network—but, in this case, essentially verbal rather than material—is a crossword puzzle:

The answers in the crossword form the set of our beliefs, each individual answer that we fill in representing a single belief. When we get an answer that fits with an answer that we have already, that fact helps to confirm the correctness of the second answer. But it is equally true that the second answer helps confirm the correctness of the first answer. Each answer in the crossword plays a part in supporting all the other answers, which in turn play a part in supporting it. Just as no belief is epistemically basic, no answer in the crossword is more basic than any other. (Bernecker 124)

But here it may seem that there is no “ground” supporting the puzzle. However, the letters (constituting words or abbreviations) placed in the crossword box are “answers”; and without the numbered “Across” and “Down” questions or clues printed below the box (the questions to which those letters are answers), the words or abbreviations placed in the box would be only arbitrary fill-ins, and so their overlapping connections with other words or abbreviations would have no significance. Therefore, those questions or clues are the ground supporting the puzzle. Indeed, even in a jigsaw puzzle, which could also but more generally be considered a metaphorical analogue for a coherence network, the picture, scene, or design shown by the completed puzzle is the supporting ground, for, without it, there would be no clue available to indicate which puzzle piece fits together with which one of other pieces alike in shape.

B They are based on a rationalistic foundation by resting on justification and gradation (or degree) in justification (sec. 4.5), and they are based on an empirical foundation by resting on visually observed likeness that (as shown just above) grew into the abstract concept of likelihood.

Chapter Thirteen
Three “Quasifoundational” Concepts

A Philosophical Approaches 332. Epistemologist Richard Swinburne treats comparative simplicity as “a fundamental a priori principle” (Epistemic Justification 102).

B Sher 150. The final clause would be analogously pertinent to a foundation without foundationalism in literary explication: an account of explicatory practice that “reveals [probability] connections between theories” (i.e., explications) of a literary work.

Chapter Fourteen
SOROEP Judgments and Internal-Representation Judgments

A In sec. 10.1 above—i.e., in the discussion of Ray Jackendoff’s ideas on comparing the sizes of different objects and Eve Sweetser’s ideas on comparing any characteristics of things, whether concrete or abstract—we already noted that a sense of similarity is innate in humans (Schütze 179) and inherent in their comparing concepts (i.e., internal representations). However, this sense of similarity becomes detectable in small children only after they develop an awareness of second-order “relations between relations”—for instance, when they can correctly match a picture of two apples with a picture of two bananas instead of with a picture of an orange and a plum—i.e., when they become aware of not only the similarity (sameness) of fruits in each picture but also the similarity (sameness) between the number of similar (same) fruits in both pictures. Mature baboons and pigeons can also pass this test, thereby showing (like humans) an innate sense of similarity and of second-order “relations between relations” (Kluger 41).

B These functional relations between internal representations exist not only in tested human subjects but also in tested pigeons:

In behavioral psychology, when a pigeon is rewarded for pecking a key in the presence of a red circle, it pecks more to a red ellipse, or to a pink circle, than it does to a blue square. This “stimulus generalization” happens automatically, without extra training, and it entails an innate “similarity space” [as part of an “innate similarity-determining mechanism”]; otherwise the animal would generalize to everything or to nothing. (Pinker, Language Instinct 416-17)
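
The graded character of such a similarity space can be rendered in a short sketch (my own illustration, with invented features and numbers, not data from the studies cited above): a stimulus is a point in a small feature space, and the tendency to generalize a trained response falls off smoothly with distance from the trained stimulus.

    import math

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def generalization(trained, test, scale=1.0):
        """Response strength to `test` after training on `trained` (1.0 = identical)."""
        return math.exp(-distance(trained, test) / scale)

    # features: (hue, shape), each on an arbitrary 0-1 scale
    red_circle = (0.0, 0.0)   # trained stimulus
    red_ellipse = (0.0, 0.3)
    pink_circle = (0.2, 0.0)
    blue_square = (0.9, 1.0)

    for name, stimulus in [("red ellipse", red_ellipse),
                           ("pink circle", pink_circle),
                           ("blue square", blue_square)]:
        print(name, round(generalization(red_circle, stimulus), 2))
    # red ellipse 0.74, pink circle 0.82, blue square 0.26: graded, not all-or-nothing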

Afterword
A Supplement on Justification

A Richard Fumerton approaches the problem using absolute rather than relative probability but believes that the justification chain could be ended under the following condition: if the second-step proposition not only could be known without inference but also could make the first-step proposition so probable as to entail it (“Epistemic Probability” 162).

Appendix
Evidence and Hypotheses among Probability Types

A A different continuum between physical and epistemic probability is shown in Gillies 2, ch. 8.

B This is not to say that the object smoker’s actual lifespan would be predicted with perfect accuracy but only that his probable lifespan would be indicated with as perfect an accuracy as (under the theoretical conditions of infinity) it can be indicated by the other smokers’ average lifespan.

C She assumes this because, for each further characteristic she tests, she finds no significant negative correlation between the lifespans of each subgroup that includes the characteristic and the lifespans of each corresponding subgroup that excludes the characteristic.

D Other examples are weather forecasting and applications of decision theory. For discussion of the mixed-probabilities type, see Pollock 97, 109.

E E.g., to weight each characteristic proportionately to the negative correlation mentioned in note C above.

F Again, this is not to say that the object smoker’s actual lifespan would be predicted with perfect accuracy (or even that his probable lifespan would be identical with the “perfectly accurate” one in the preceding situation under the theoretical conditions of infinity) but only that his probable lifespan would be indicated with as perfect an accuracy as (under the theoretical conditions of infinity) it can be indicated by this mixed-probabilities procedure.

G The term “character” is used here rather than “intention” not merely to avoid a controversial and overly implicative term but to emphasize comparison of this situation with the preceding, lawcourt situation and, more importantly, to suggest an attribute of the author other than the language which became his poem (which some critics believe to be the only indicator of his “intention”)—i.e., to suggest an attribute that might be a composite of his philosophy, temperament, communication “style,” and mental, emotional, and psychological states that brought forth that language from him.
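
The weighting gestured at in notes E and F can be made concrete with a brief sketch. Everything in it (the subgroups, their average lifespans, and the weights) is invented for illustration; it is one simple way of combining reference-class averages, not the book’s own procedure:

    def weighted_reference_estimate(subgroups):
        """subgroups: (average_lifespan, weight) pairs, one per tested characteristic;
        weights are proportional to how strongly the characteristic bears on lifespan."""
        total_weight = sum(weight for _, weight in subgroups)
        return sum(average * weight for average, weight in subgroups) / total_weight

    subgroups = [
        (69.0, 0.50),  # smokers sharing characteristic 1 (strong bearing on lifespan)
        (72.0, 0.30),  # smokers sharing characteristic 2 (moderate bearing)
        (74.0, 0.20),  # smokers sharing characteristic 3 (weak bearing)
    ]
    print(round(weighted_reference_estimate(subgroups), 1))  # 70.9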

SOURCE CITATIONS

Preface 1. If, at this point in the text, a scientific definition of the term is needed, see substantive note 7A, par. 2. 2. Thagard, Conceptual Revolutions 97. 3. Goldhagen B9; see also Donoghue, Speaking 7-8, and Gottschall. 4. Hassan 204-07; Hartman; Butler, Guillory, and Thomas x; Spivak; Farrell; Patai and Corral; Edmundson, Why Read? 1-7, 41, and throughout; Culler, Literary 14. 5. See, e.g., Jerrold Levinson, Pleasures 175-213 and “Hypothetical Intentionalism”; Iseminger; Livingston, “Intentionalism” and Art; Carroll, “Interpretation” and “Andy Kaufman”; Irwin; Compagnon ch. 2; Currie, Arts ch. 6; Vandevelde; Benedetti 191. 6. Benedetti 10-16, 52-56, and throughout; Swirski. 7. Wolterstorff 36-37; Kameen; Lancashire. 8. Levin 144-49; Koppen; Donoghue, Practice 97 and “Teaching”; Davis and Womack; Williams 1; Bérubé 108; Mitchell, “Commitment” 321-25; Lentricchia and DuBois; Marjorie Levinson; essays in Rasmussen and in Wolfson and Brown. 9. Caruth and Culler. 10. Epigraph sources: Peirce 332; Booth 210; Keynes v; Putnam, Many Faces 85; Chladenius 60; Ricoeur 213. The epigraph on the title page is from Reichenbach 192.

Chapter One Explication and Interpretation 1. Baldick 49. 2. Hirsch, Validity 171n3. 3. Vandevelde 219-20 and throughout. 4. Hernadi xiv-xvin. 5. Lecercle 1, 2, 4, 6, 20, 22, 32, 34. 6. See, e.g., Norris, On Truth 5, 6, 21. 7. Graff, Poetic Statement 148-50. 8. Tyson 117; Farred 80, 92-93; Perloff, Differentials xiii; Gallop. 9. Richards 12.

Chapter Two Theory and Practice 1. Raval 64. 2. Rorty B6. 3. Cain xi. 4. Knapp and Michaels 742; rpt. in Mitchell, Against Theory 30. See also Goodheart, Literary Studies 73. 5. Hiley, Bohman, and Shusterman 11. 6. Quigley 230; rpt. in Dauber and Jost 22. 7. Prendergast 14. 8. Lipking 428. 9. Eddins 2. 10. Hiley, Bohman, and Shusterman 11; see also Parker 4, 7. 11. Battersby, “Authors” 191; rpt. in Battersby, Reason 31. 12. Stern 101. 13. Weinsheimer 13. 14. Graff, Literature 97; see also Ellis, Literature 210. 15. Crews 119. 16. Peltason, Reading xi. 17. Paul Armstrong, “History” 693; rpt. in his Readings 89. 18. Mueller 27. 19. Pechter 298; see also Paul Armstrong, Conflicting Readings xi, Patterson 259, Kastan 30-31. 20. Fish, Professional Correctness 78-79. 21. Johnston 532.

Chapter Three Reasoned Argument 1. Quoted in Shaw, “Politics” 262; rpt. in his War 66. 2. See Farrell 149. 3. Fauconnier and Turner 180-83, 185, 187. See also John Anderson, Cognitive Psychology 362-63; Sperber and Wilson, Relevance 176; Donald, “Mind” 358-59; Sperber, “Metarepresentations” 121-27; William Cooper 199; Ramachandran and Hubbard 58-59; Pinker, “Language” 30; Hurford 41, 44-49; Greenspan and Shanker 189-90. 4. Riedl 150, 225; Damasio and Damasio 89; Edelman, Second Nature 63. 5. Givón 393-445. 6. Jackendoff, Consciousness 323; Bloom, “Some Issues” 204-23. 7. Bickerton 114. 8. See Thelen and Smith 329-31; Alvin Goldman, Epistemology 186-87, 189-90; Dennett, Kinds 151, 159; Carruthers, Language; Nelson, Language 87; Gopnik and Meltzoff 189-95; Gentner and Goldin-Meadow. However, for a

comprehensive review of the limited influence of language on cognition, see Bloom, How Children Learn ch. 10. 9. Crosswhite 47. 10. Shafer and Pearl 626. 11. Fish, Is There a Text 367. 12. Newton 38, 39, 43. 13. Mailloux 148, 180-81. 14. Noted in several books by Christopher Norris—e.g., his Deconstruction 1-2, ch. 3.

Chapter Four Second-Order Relative Objective Epistemic Probability (SOROEP) 1. Carnap 25; Hacking 1, 12; Cohen, Probable 42-43; Popper, Realism 283-84; Myron Tribus, quoted in Campbell, Grammatical Man 64; Pollock 96-97; Curley and Benson 204-05; Franklin x; Lagnado and Sloman 158; Sloman and Over 158. 2. Hammond 37. 3. Shafer 312-13. 4. Plantinga 173. 5. Gillies 1. 6. Cohen, Introduction 17, 111-13. 7. Keynes 123-30. 8. Gillies 20, 30. 9. Johns 10, 13, 20-21. 10. Popper, Conjectures 109-10, 118-19. 11. Achinstein 96-100; italics mine. 12. Alvin Goldman, “Justified Belief” 2, 21-22; rpt. in Kornblith 107, 127-28. 13. Moser, Knowledge 8, 141. 14. Feldman and Conee, “Evidentialism” 16; rpt. in Conee and Feldman, Evidentialism 84. 15. Conee and Feldman, Evidentialism 3. 16. Conee and Feldman, Afterword 164. 17. Feldman and Conee, “Evidentialism” 24; rpt. in Conee and Feldman, Evidentialism 93. 18. Conee and Feldman, Evidentialism 93; see also Richard Feldman 46-47. 19. Feldman and Conee, Afterword 104-05. 20. Moser and vander Nat 22; see also Moser, Philosophy 51-52. 21. Chisholm 4 (item 4); Haack 42, 81-82; Swinburne, Epistemic Justification 11; Kvanvig 196. 22. Sober 96. 23. Hesse, “Texts” 41. 24. See, e.g., Bowers 83-84, 91, 100, 102n, 114 and n, 132n; Hirsch, Validity 186. 25. Frawley 57; italics mine. 26. Herman 328-29.

27. Hirsch, Validity 174 and n6, 176, 178-79, 197. 28. Hirsch, “Objective Interpretation” 475-78, Validity 169-207, 236-41, and “Value” 64; Irwin 50, 62. 29. Juhl 71 and throughout. 30. Lipton, Inference 2. 31. Stove, Rationality 145, 176, 188. 32. BonJour, In Defense 2. 33. Lycan 135-36. 34. A guide to supporting evidence is in Juhl 213n; an example is in Trimpi 391. 35. Weatherford 243. 36. Savage and Ehrlich, “Brief Introduction” 1-3, 10. 37. McNeill and Freiberger 146. 38. Stephen Toulmin, paraphrased in Ruse 46. See also Ellis, Language 89. 39. Munz 178. 40. Storey 73. 41. Munz 176-77. 42. From Lucas 11. 43. Fishburn 336. 44. Achinstein 102-03. 45. McNeill and Freiberger 66-67. 46. Chang 4-5, 25-27. 47. Schauber and Spolsky 3. 48. Paul Armstrong, “Conflict” 341; rpt. in his Readings 2. 49. Richard Feldman 45; italics mine. 50. Alexandrov 3, 20. 51. Barnes 1. 52. Krausz 1n. 53. Currie, Arts 17, 129. 54. For arguments, see Hirsch, Validity 98, 189-90, 194-96, 227, 237-38; Fish, “Reply” 178; Newton 39-41. 55. Thagard, “Explanatory Coherence” 435-67 and Conceptual Revolutions ch. 4. 56. Schank and Ranney 892, 897; Read and Marcus-Newhall 429; Freedman, “Understanding” 329-33. However, in Giere see also Tweny 84-85, Gorman 410, and Glymour 469-70. 57. Thagard, Conceptual Revolutions 62, 96; Wirth 125. 58. Wirth 119-20, 123; Charniak and McDermott 455, 457. 59. Lipton, Inference 60. 60. Thagard, Conceptual Revolutions 91-93. 61. Thagard, Coherence in Thought 248. 62. Cohen, Probable 50, 51n2; see also Eggleston 129, 132.

Chapter Five Evidence and Hypothesis 1. Harland 21-23. 2. John Anderson, Cognitive Psychology 236, 364; Reber et al. 492; Berry and Dienes 1-2, 153; Kihlstrom 1447; Lewicki and Hill 239-40; Lewicki, Hill, and Czyzewska 801; Evans and Over 10, 22, 29, 50, 51, 53, 146, 149-51; Timothy Wilson 25-27. A more recent comprehensive coverage is Litman and Reber. 3. Osherson 68. 4. Evans and Over 160. 5. Evans, Hypothetical Thinking 18, 169. 6. Benjafield 185; see also Reber 7-8. 7. Briggs and Krantz 77-106. 8. Evans, “Deductive Reasoning” 180; see also Feeney 303, Stanovich, Who Is Rational? 144-45, and Evans, Hypothetical Thinking 14-15, 109. 9. Oaksford and Chater 355-56. 10. See ch. 10 note E below. 11. Evans, “Deductive Reasoning” 180. 12. Sacks, “The Abyss” 108, 110; repr. in his Musicophilia 220-22. 13. Sacks, “The Abyss” 112; repr. in his Musicophilia 226-27n12. 14. Hesse, Revolutions 173-74. 15. Culler, “Beyond Interpretation” 246; repr. in his Pursuit 5. 16. Reichenbach 192. 17. Fish, Professional Correctness 72.

Chapter Six General Considerations 1. Shumway 196-97. 2. Shumway 197. 3. Herman 348; italics mine. 4. Runde 109. 5. Devlin 197-98. 6. Tversky and Kahneman, “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment” 294, 297-300, 303-05, 311-12; rpt. in Gilovich, Griffin, and Kahneman 20, 24-29, 32, 35, 45-47. 7. Tversky and Kahneman, “Judgments” 96-97. 8. Olson 123-24. 9. Clarke 83. 10. Wilkes 277. 11. Epstein 719. 12. Macdonald 19-20. 13. Pepper 116-17. 14. Hesse, “Texts” 41-42. 15. Fish, Is There a Text 10-11, 14-16.

16. Rosenblatt 128-29. 17. See Hogan, On Interpretation 27. 18. Marcus 125, 220n52; see also Li and Gleitman 290-91. 19. Kuhn 201-02; see also Farrell 53. 20. Haack 207-09. 21. Davidson, “Locating” 307. 22. Hobbs 14, 22. 23. Stove, Probability 89.

Chapter Seven Modularity in Speech Comprehension and Reading 1. Language Instinct 426-27; see also Dehaene, “Evolution” 133-34, 141-51, and Reading 144-47, 302-03. 2. Marshall, “Description” 82-83; see also Ginzburg 12-14, Dehaene, Reading 212, and Dehaene, quoted in Sperber, “Modularity and Relevance” 59. 3. Fodor, Modularity. 4. Tooby and Cosmides xii. 5. E.g., Putnam, “‘Innateness’”; O’Grady, Principles ix-x; Katz 18 and throughout; Edelman, Bright Air 129-30, 243-45; Braine; Reber 151-58; Hogan, On Interpretation 77-86; Turner, Literary Mind ch. 8; Culicover 197 and throughout; Elman et al. 347-48; Schütze 61-62, 180-81, 186-87; Deacon 35-39, 42, ch. 4; Seidenberg 1599-1603; Landauer and Dumais 212, 226; Tomasello, “Introduction” vii-xxiii, and Constructing 3-7; Calvin and Bickerton 157; Prinz, Furnishing 198-212; Greenspan and Shanker 210-11; Shanker and Taylor; Hawkins 181; La Cerra and Bingham 84, 86; Dabrowska ch. 5-7; Devitt; Tucker and Hirsh-Pasek 363; Thelen and Smith 329-31. Four more are listed in Hogan, On Interpretation 81. 6. O’Grady, Syntactic Development 307. 7. Tucker and Hirsh-Pasek 372-73. 8. Spolsky, Gaps 33, 212n10; see also Elman et al. 101; Rumelhart, McClelland, and PDP Research Group 1: 139-42. 9. Ridley 3. 10. De Waal B2; see also Cosmides and Tooby, “From Evolution” 291; Robin Cooper 163; Keller, ch. 2. 11. See, e.g., Mattingly, “Reading, Linguistic Awareness” 23-25; Geary 330. 12. See, e.g., Mattingly, “Reading and the Biological Function” 339-43. 13. See, e.g., Marshall, “Description” 67-71, 81-83, and “Cultural and Biological Context” 23-25; Donald, “Mind” 363. 14. Lila R. Gleitman and Paul Rozin, paraphrased in Campbell, Improbable Machine 229. 15. Karmiloff-Smith 148; Nelson, Language 345, 357n13. 16. Marshall, “Description” 69-70. 17. Gernsbacher and Robertson 162-65. 18. Millikan 118. 19. Pinker, Language Instinct 278.

20. Pinker, Language Instinct 209. For other examples, see Campbell, Improbable Machine 136-37. 21. Stanovich, What Intelligence Tests Miss 22. 22. See secs. 5.2, 6.2, and ch. 6 note C above.

Chapter Eight SOROEP in Speech Comprehension and Reading 1. Sperber and Wilson, Relevance 13-14. 2. Herman 328-29. 3. Sperber and Wilson, Relevance 15-16. 4. Samuels, Stich, and Tremoulet 82-83. 5. Pinker, Language Instinct 283. 6. Putnam, “‘Innateness’” 12, 13, 21-22n1; rpt. in Searle 130-31. See also Komarova and Nowak 328; Jerome Feldman 322-23. 7. Pinker, Language Instinct 285-86; optional italics mine. 8. Deacon 107, 109, 122, 327. 9. Millikan vi and throughout. 10. Bowerman 133-34, 161-62; rpt. in Bloom, Language 329-30, 354-56; italics mine. 11. Maratsos 5; but see also Bloom, “Recent Controversies” 751-52 (rpt. in Bloom, Language 16) and How Children Learn 10-11, 262, 264. 12. MacWhinney 193. 13. Nelson, “Constraints” 224, 241. 14. Woodward and Markman 143. 15. Landauer and Dumais 215, 218. 16. Clive Thompson 30. 17. Landauer and Dumais 215, 216. 18. Katz and Shatz 1133. 19. Sacks, “A Man of Letters” 27. 20. Zeki 245, 253; italics mine. 21. Gawande 62-65. 22. Plotkin, Evolution 265-66. 23. Gopnik and Meltzoff 26-27. 24. Gladwell 56. 25. Siegal 3-5. 26. Siegler, Emerging Minds 164, 166, 167, 235; italics mine. 27. Baron-Cohen, Mindblindness 2, 6-12; see also Baron-Cohen, Tager-Flusberg, and Cohen; Pinker. How the Mind Works 330-32; Nichols and Stich 5, 206; Reddy, ch. 8, esp. 179; Bloom. How Children Learn 61. 28. But see also Millikan 204-19. 29. Dunlosky and Metcalfe 243; Moses. For a general summary of the relationship between theory of mind, theorizing capacity, and executive functioning, see Halford 547, Siegal 34-35, and Doherty 129, 149-50. 30. Stueber 4, 24, 26, 28, and ch. 3; Breithaupt. 31. Baron-Cohen, “Biology” 108.

32. Baron-Cohen, “Biology” 104-08. 33. Baron-Cohen, “Biology” 108. 34. Gazzaniga 178, 180-81; Rizzolatti, Fogassi, and Gallese. 35. Astington and Jenkins 1311. 36. Dehaene, Reading 316. 37. Bloom, How Children Learn 72; Carruthers, Architecture 188. 38. Sperber and Wilson, “Pragmatics” 5; Sperber, “Metarepresentations” 133; Happé and Loth 25, 32; but see also Siegal and Surian 140-44. 39. Gazzaniga 179; see also Rizzolatti and Sinigaglia 159-71 and Iacoboni. 40. Rizzolatti, Fogassi, and Gallese 61. 41. Rizzolatti and Buccino 223-24. 42. Siegal 45; Siegal and Surian 146-47; Bloom, “Mindreading” 48-51; Carruthers, Architecture 174-76; Buller 190-95; Dunlosky and Metcalfe 243-44; Swirski 100; Tsimpli and Smith; and see the opposition between Prinz, “Mind” 26, 29 and Samuels, “Human Mind” 50-51. 43. Talbot, “Birdbrain” 73-74; Gazzaniga 49-53, 194-98; Kluger 41. 44. Corballis 1, 6, 16, 35, Part 3, esp. 137. 45. Nichols and Stich 147. 46. O’Grady, Principles 94, 187, 192-93, 206 and Syntactic Development 311-12; see also Plotkin, Evolution 135. 47. Moskowitz 94. 48. Kolers 84; see also Woods 59, 62-70. 49. Cohen, Probable 262 and n. 50. Carruthers, “Roots” 92-93 and Architecture 347. 51. See ch. 4 note F and sec. 4.11. 52. Hanna xiii.

Chapter Nine SOROEP in the Brain 1. Gardner, Frames 282. Cf. Fodor’s “weighing of gains and losses” as a necessary central-processor function (“Author’s Response” 36), and see also Halford 541. 2. Marshall, “Multiple Perspectives” 236-37; Gigerenzer, Adaptive Thinking 229; Gallistel and Chang 11-12; Shallice 273, 306; Jackendoff, Consciousness 265-70 and Foundations 219-30, n17; Tsimpli and Smith 200, 207-13. 3. Samuels, “Massively Modular Minds” 17, 19-20, and “Human Mind” 53n1. 4. Bates, Bretherton, and Snyder 27-30, 286; Shallice 350-52; Donald, “Prehistory” 61. 5. Dror and Thomas 287; Goel 275-78. 6. Karmiloff-Smith 52-54; Gardner, “Centrality” 12-14; Mithen 70-71. 7. Gardner, Frames 55-56, 281, 283 and “Centrality” 12-14; Martindale 14. 8. Whitehouse 8; see also Carruthers, “Distinctively Human Thinking” 71-72. 9. Carruthers, “Thinking in Language” 96 10. See ch. 4 note F and sec. 4.11.

11. Carruthers, “Distinctively Human Thinking” 73, 80-87. But see Gleitman and Papafragou 651-52. 12. Clarke 127; Cosmides and Tooby, “Humans” 63-69. 13. Evans et al. 198-99 and Over, “From Massive Modularity” 122. 14. Clarke 37-42. 15. Sperber, “Modularity of Thought” 46, 49; rpt. in Explaining Culture 128, 133 and in Whitehouse 31, 35. 16. Clarke 4, 36. 17. Carruthers, Architecture 358, 364-66. 18. Goldberg, Executive Brain 58-59, 218. 19. Dehaene, Reading 318, 321-22. 20. Samuels, “Complexity” 120. 21. Woodward and Cowie 322-23. 22. Spolsky, Gaps 33; see also Rumelhart, McClelland, and PDP Research Group 1:134-35. 23. E.g., Cohen and Tong 2405-07; Martindale 13, 46-50, and throughout; Elman et al. 77-78, 101, and throughout; Perfetti 370 and n; Loritz 12; Pinker, How the Mind Works 112-13, 131 and Words 117-19; Wilson and Hendler 395-414; Andy Clark; Minsky and Papert 268-80; Segal 146; Ingvar 252-54; Tanenhaus, Dell, and Carlson 83-108; Goldberg, Executive Brain 58-59, 62, 216-18, “Rise” 203-05, and “Higher Cortical Functions” 266, 269; Dabrowska 156, 158n; Stevenson 344-55 and four references to “hybrid connectionist approaches to language processing [that] have taken a modular approach” (344). But see Spivey 260. 24. Evans, “Deductive Reasoning” 180; italics mine. See also Shanks 205-12. 25. Gopnik and Meltzoff 61. 26. Terrence Sejnowski, quoted in Campbell, Improbable Machine 203. 27. Campbell, Improbable Machine 161. 28. Shastri. 29. Franklin 324. 30. Rumelhart, McClelland, and PDP Research Group 1:29-30; McClelland, Rumelhart, and PDP Research Group 2:34-35; Andy Clark 90-92, 95-96; Churchland 168-69; Gee 32, 34, 41-42, 44-45; Gärdenfors 230. 31. Rumelhart, McClelland, and PDP Research Group 1:30; italics mine. 32. Churchland 164-69, 202-05; Gee 30-36. 33. St. John 272, 294, 298-300. 34. Jackendoff, Semantics 157. 35. Jackendoff, Semantics 46, 84-86, 102-03, 128-31. 36. Jackendoff, Semantics 132-33. 37. Spivey 3-4 and throughout. 38. Elman et al. 343-45; Seidenberg 1600-02; Bates and Carnevale 450; but see Green and Vervaeke 153-54 and Fodor, In Critical Condition 150. 39. Pinker, Words 108. 40. Ramsey and Stich, “Connectionism” 197; rpt. in Fetzer 23. See also Komarova and Nowak 326-27, but see also Spivey 201-02. 41. Jerome Feldman 320. 42. Jerome Feldman 322-23.

43. Jackendoff, Foundations 163-65, 360, 382, and Language 62; Pinker and Mehler; Green and Vervaeke 153; Neil Smith 106-07; Plaut 163-64; Pinker, How the Mind Works 118-29, Words 110-17, and Blank Slate 78-83; but see Spivey 260. For recent reviews of the controversy, see Halford 535-36 and Abrahamsen and Bechtel 163-69. 44. Jerome Feldman 277. 45. Prinz, Furnishing 211-12. 46. John Anderson, Architecture 1-5, 42; Uttal 209, 214-17; Almor; Pinker, Blank Slate 74; Buller ch. 4. 47. Gee 26; Churchland 160; Pinker, Words 106. 48. Jackendoff, Semantics 157; italics mine. 49. Gee 26-27. 50. Van Gelder and Port 29. 51. Kandel 71-72.

Chapter Ten Other Theories and Related Conditions 1. Huber and Huber 304, 311-13. 2. Moore, Pure, and Furrow 722-25. 3. Edelman, Second Nature 62-63; Pica et al. 501-03. 4. Dehaene, Reading 309; Number 4-7 and throughout, esp. 79-80, 220; and “Evolution” 133-41, 147-51. See also Gallistel and Gelman 559, 562-74; Alvin Goldman, “Sciences” 159-62; Talbot, “Birdbrain” 65; Brannon; and Nieder and Miller. 5. Dehaene et al., “Sources” 973, and Pinker, Stuff 129-30, 138. 6. Moyer 12. 7. Donald Brown 134; Pinker, Blank Slate 436-37, 439. See also Talbot, “Baby Lab” 99. 8. Ginzburg 15, 21, 29; Donald Brown 134, 150. 9. See Stein 62-72; Cosmides 260-63; Cosmides and Tooby, “Cognitive Adaptations” 220-21; Clarke 90-91. However, see also the dissenters listed in Wilkes 394-95. 10. Jackendoff, Patterns 190, 194, 196, 203. See also Pinker, How the Mind Works 191. 11. Hirsch, Validity 174. 12. Jackendoff, Patterns 196. 13. Jackendoff, Consciousness 157. 14. Jackendoff, Patterns 190. 15. Sweetser 28, 46. 16. Ward 248. 17. Kirkham, Slemmer, and Johnson B35. 18. Saffran, Aslin, and Newport. 19. Siegler, Children’s Thinking 218, 223, 248. 20. For a description or review of several of those many ways, see Xu and Garcia; Reber 30-34, 38-40, 61-63; Seidenberg 1601; Kelly and Martin 114-36;

154

Source Citations

Copestake and Briscoe; Saffran, Newport, and Aslin; Prinz, Furnishing 208-11; Reyna and Brainerd. 21. Saffran, Aslin, and Newport 1928. 22. Barsalou 101-02, 107-08. 23. Riedl 20, 30-32, 36-49, 55-56, 141-42, last 2 secs. of ch. 2; Wuketits, Evolutionary Epistemology 66-69, 71, 85. 24. Gigerenzer et al. 234. See also Hasher and Zacks 1380-85; Gigerenzer and Murray; Gigerenzer, “Why the Distinction” 139. 25. Pearl, “Bayesian Approach” 342. 26. Landauer and Dumais 212. 27. Jackendoff, Language 19-20; italics mine.

Chapter Eleven Implications 1. Pinker, Language Instinct 405. 2. Fodor, “Précis” 1, 5. 3. Pinker, Language Instinct 57. 4. Stearns B8. 5. See, e.g., Papineau 25 and Raftopoulos. 6. See, e.g., Roger Brown 262-63; John Anderson, Cognitive Psychology 360-62; Sweetser 6-7; Au 182; Pinker, Language Instinct ch. 3; Li and Gleitman 290-91; Marcus 125, 220n52. 7. See, e.g., Papineau 26-28, 32-34. 8. See, e.g., Alvin Goldman, Epistemology 186-87, 189-90; Carruthers, Language; Dennett, Kinds 151, 159; Nelson, Language 87; Gopnik and Meltzoff 189-95; Thelen and Smith 329-31; Gentner and Goldin-Meadow. 9. Papineau 25; Kay and Kempton 77; Carruthers, Language 278; Farrell 51. For a comprehensive review of the limited influence of language on cognition, see Bloom, How Children Learn ch. 10, Pinker, Stuff 125-50, and Gleitman and Papafragou; and for a description of the complex interrelationship between language, culture, and cognition, see Bennardo 25, 53-55. 10. Fodor, “Précis” 1, 5. 11. Pinker, Language Instinct 405. 12. Raftopoulos 305. 13. Quoted in McNeill and Freiberger 253. 14. Juslin, Nilsson, and Olsson; Juslin and Persson. 15. Josephson and Josephson, ch. 9. 16. McNeill and Freiberger 253.

Chapter Twelve Rationalism, Empiricism, and Coherentism 1. Lehrer, Knowledge 77. 2. Prinz, Furnishing 196.

A Theory of Literary Explication

155

3. See, e.g., Carruthers, Language xiv. 4. Bender, “Coherence” 1. 5. Rescher, Philosophical Reasoning 173, 190. 6. Audi, Structure 6-7, 12. 7. Haack 1-2, 19, and throughout; see also Plantinga 179, 181-82; Klein, “Pyrrhonian Skeptic” 85; Thagard, Brain 90, 251; Conee 393-96; rpt. in Conee and Feldman, Evidentialism 42-46. 8. Audi, Structure 6-7n. 9. Popper, Logic 111. 10. Rescher, Philosophical Reasoning 173-74, 179, 187, 195n4; Alvin Goldman, “Internalism” 282. 11. Rescher, Philosophical Reasoning 178. 12. Rescher, Philosophical Reasoning 182. 13. See, e.g., Bender, Current State; Plantinga 178-82; Clarke 32-34, 157n2; Haack ch. 3. 14. Alston 12n. 15. Alston 11-12; see also Plantinga 182-83. 16. E.g., Alston (12-13), Moser (Knowledge 2, 7, 166), Plantinga (183-85), Audi (Structure 33, sec. II of ch.12), Williamson (186), Clarke (17). 17. Audi, Structure 7n.

Chapter Thirteen Three “Quasifoundational” Concepts 1. Shapiro. 2. Kant, Critique of Judgement 17. 3. Kant, Critique of Pure Reason 319. 4. Hogan, Philosophical Approaches 53-54, 268, 335. 5. Hogan, On Interpretation 19. 6. Hogan, Philosophical Approaches 268. 7. Gill, Tacit Mode 57. 8. Copeland 128. 9. Sher 151. 10. Weatherford 243. 11. Hintikka 167. 12. The four examples are from Hintikka 165-66, 168-69. 13. Copeland 129.

Chapter Fourteen SOROEP Judgments and Internal-Representation Judgments 1. Shepard and Chipman 2, 17. 2. Shepard and Chipman 1-2. 3. Shepard, Kilpatric, and Cunningham 82.

156

Source Citations

4. Shepard and Chipman 3. 5. Polanyi, Tacit Dimension 16-17. 6. See sec. 8.3, theory #3. 7. Richter 32. 8. Richter 247, 252. 9. See sec. 4.4.

Afterword A Supplement on Justification 1. Lehrer, Knowledge 192. 2. Fumerton, “Theories” 231n2. 3. Klein, “Pyrrhonian Skeptic” 92n11. 4. Pappas and Swain 35; see also Moser and vander Nat 12. 5. Corballis 6 6. Klein, “Human Knowledge,” “Pyrrhonian Skeptic” 86-88, and “Infinitism.” 7. Pappas and Swain 32; Swain, Reasons 138-39. 8. See ch. 4 note F and sec. 4.11. 9. Alan Goldman, Empirical Knowledge 11, 65, 77, 144, 203; Richard Feldman, Epistemology 72-73; see also Moser and vander Nat 15. 10. Bergmann 4n5, 5-6. 11. Klein, “Infinitism” 137-38. 12. Rescher, Philosophical Reasoning 173. 13. Popper, Logic 111; see ch. 12 above. 14. Pinker, How the Mind Works 79, 99. See also Dennett, Brainstorms 80-81, 122-25.

Appendix Evidence and Hypotheses among Probability Types 1. Weatherford 160, 165; Gillies 119-21. 2. Pinker, How the Mind Works 349.

BIBLIOGRAPHY

Abrahamsen, Adele, and William Bechtel. “Phenomena and Mechanisms: Putting the Symbolic, Connectionist, and Dynamical Systems Debate in Broader Perspective.” Stainton 159-85. Achinstein, Peter. The Book of Evidence. New York: Oxford UP, 2001. Alexandrov, Vladimir E. Limits to Interpretation: The Meanings of “Anna Karenina.” Madison: U of Wisconsin P, 2004. Almor, Amit. “Specialized Behaviour without Specialized Modules.” Over, Evolution 101-20. Alston, William P. Epistemic Justification: Essays in the Theory of Knowledge. Ithaca: Cornell UP, 1989. Alter, Robert. The Pleasures of Reading: In an Ideological Age. New York: Simon, 1989. Altieri, Charles. “Taking Lyrics Literally: Teaching Poetry in a Prose Culture.” New Literary History 32 (2001): 259-81. Anderson, Amanda. The Way We Argue Now: A Study in the Cultures of Theory. Princeton: Princeton UP, 2006. Anderson, David R. “Razing the Framework: Reader-Response Criticism after Fish.” Easterlin and Riesling 155-75. Anderson, John R. The Adaptive Character of Thought. Hillsdale, NJ: Erlbaum, 1990. —. The Architecture of Cognition. Cambridge: Harvard UP, 1983. —. Cognitive Psychology and Its Implications. 4th ed. New York: Freeman, 1995. Arbib, Michael A., and Mary B. Hesse. The Construction of Reality. Cambridge: Cambridge UP, 1986. Arms, George Warren. “Explication.” Preminger, Warnke, and Hardison. Armstrong, Isobel. The Radical Aesthetic. Oxford: Blackwell, 2000. Armstrong, Paul B. “The Conflict of Interpretations and the Limits of Pluralism.” PMLA 98 (1983): 341-52. Rpt. in his Conflicting Readings 1-19. —. Conflicting Readings: Variety and Validity in Interpretation. Chapel Hill: U of North Carolina P, 1990. —. “History, Epistemology, and the Example of The Turn of the Screw.” New Literary History 19 (1988): 693-712. Rpt. in his Conflicting Readings 89-108.

158

Bibliography

Astington, Janet Wilde, and Jennifer M. Jenkins. “A Longitudinal Study of the Relation between Language and Theory-of-Mind Development.” Developmental Psychology 35 (1999): 1311-20. Atkins, Kenneth R. Physics. 2nd ed. New York: Wiley, 1970. Au, T.K. “Chinese and English Counterfactuals: The Sapir-Whorf Hypothesis Revisited.” Cognition 15 (1983): 155-87. Audi, Robert. Epistemology: A Contemporary Introduction to the Theory of Knowledge. London: Routledge, 1998. —. The Structure of Justification. Cambridge: Cambridge UP, 1993. Baldick, Chris. The Concise Oxford Dictionary of Literary Terms. Oxford: Oxford UP, 1990. Banich, Marie T., and Molly Mack, eds. Mind, Brain, and Language: Multidisciplinary Perspectives. Mahwah, NJ: Erlbaum, 2003. Barkow, Jerome H., Leda Cosmides, and John Tooby, eds. The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford UP, 1992. Barnes, Annette. On Interpretation: A Critical Analysis. Oxford: Blackwell, 1988. Baron-Cohen, Simon. “The Biology of the Imagination: How the Brain Can Both Play with Truth and Survive a Predator.” Wells and McFadden 103-10. —. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge: MIT P, 1995. Baron-Cohen, Simon, Helen Tager-Flusberg, and Donald J. Cohen, eds. Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience. 2nd ed. Oxford: Oxford UP, 2000. Barrett, H. Clark. “Modules in the Flesh.” Gangestad and Simpson 16168. Barsalou, Lawrence W. “The Instability of Graded Structure: Implications for the Nature of Concepts.” Neisser 101-40. Bates, Elizabeth, Inge Bretherton, and Lynn Snyder. From First Words to Grammar: Individual Differences and Dissociable Mechanisms. Cambridge: Cambridge UP, 1988. Bates, Elizabeth, and George F. Carnevale. “New Directions in Research on Language Development.” Developmental Review 13 (1993): 43670. Battersby, James L. “Authors and Books: The Return of the Dead from the Graveyard of Theory.” Harris, Beyond Poststructuralism 177-201. Rpt. in Battersby, Reason 18-39. —. Paradigms Regained: Pluralism and the Practice of Criticism. Philadelphia: U of Pennsylvania P, 1991.

—. Reason and the Nature of Texts. Philadelphia: U of Pennsylvania P, 1996. Beech, John R., and Anne M. Colley, eds. Cognitive Approaches to Reading. New York: Wiley, 1987. Bender, John W. “Coherence, Justification, and Knowledge: The Current Debate.” Bender 1-14. —. ed. The Current State of the Coherence Theory: Critical Essays on the Epistemic Theories of Keith Lehrer and Laurence BonJour, with Replies. Dordrecht: Kluwer Academic, 1989. Benedetti, Carla. The Empty Cage: Inquiry into the Mysterious Disappearance of the Author. Trans. William J. Hartley. Ithaca: Cornell UP, 2005. Trans. of L’ombra lunga dell’autore: Indagine su una figura cancellata. 1999. Benenson, F.C. Probability, Objectivity and Evidence. London: Routledge, 1984. Benjafield, John G. Cognition. 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1997. Bennardo, Giovanni. “Language, Mind, and Culture: From Linguistic Relativity to Representational Modularity.” Banich and Mack 23-59. Berger, Karol. A Theory of Art. New York: Oxford UP, 2000. Bergmann, Michael. Justification without Awareness: A Defense of Epistemic Externalism. Oxford: Clarendon, 2006. Berman, Paul, ed. Debating P.C.: The Controversy over Political Correctness on College Campuses. New York: Dell, 1992. Bernecker, Sven. Reading Epistemology: Selected Texts with Interactive Commentary. Malden, MA: Blackwell, 2006. Berry, Dianne C., and Zoltán Dienes. Implicit Learning: Theoretical and Empirical Issues. Hove, East Sussex: Erlbaum, 1993. Bérubé, Michael. “Peer Pressure: Literary and Cultural Studies in the Bear Market.” Williams 95-110. Bickerton, Derek. Language and Human Behavior. Seattle: U of Washington P, 1995. Black, Carolyn. “Foundations.” Bender, Current State 200-04. Blackburn, Simon. The Oxford Dictionary of Philosophy. Oxford: Oxford UP, 1996. Block, Ned, ed. Readings in the Philosophy of Psychology. Vol. 1. London: Methuen, 1981. Bloom, Paul. How Children Learn the Meanings of Words. Cambridge: MIT P, 2000. —. ed. Language Acquisition: Core Readings. Cambridge: MIT P, 1994. —. “Mindreading, Communication and the Learning of Names for Things.” Mind and Language 17 (2002): 37-54.

—. “Recent Controversies in the Study of Language Acquisition.” Gernsbacher 741-79. Rpt. as “Overview: Controversies in Language Acquisition.” Bloom, Language 5-48. —. “Some Issues in the Evolution of Language and Thought.” Cummins and Allen 204-23. Bloomfield, Morton W., ed. In Search of Literary Theory. Ithaca: Cornell UP, 1972. BonJour, Laurence. In Defense of Pure Reason: A Rationalist Account of A Priori Justification. Cambridge: Cambridge UP, 1998. —. The Structure of Empirical Knowledge. Cambridge: Harvard UP, 1985. Booth, Wayne C. Now Don’t Try to Reason with Me. Chicago: U of Chicago P, 1970. Bowerman, Melissa. “Learning a Semantic System: What Role Do Cognitive Predispositions Play?” Rice and Schiefelbusch 133-69. Rpt. in Bloom, Language 329-63. Bowers, Fredson. Bibliography and Textual Criticism. Oxford: Clarendon, 1964. Braine, Martin D.S. “What Sort of Innate Structure Is Needed to ‘Bootstrap’ into Syntax?” Cognition 45 (1992): 77-100. Brannon, Elizabeth M. “Quantitative Thinking: From Monkey to Human and Human Infant to Human Adult.” Dehaene et al., From Monkey Brain 97-116. Breithaupt, Fritz. “How is it Possible to Have Empathy? Four Models.” Leverage et al. 273-88. Briggs, Laura K., and David H. Krantz. “Judging the Strength of Designated Evidence.” Journal of Behavioral Decision Making 5 (1992): 77-106. Brook, Andrew, and Kathleen Akins, eds. Cognition and the Brain: The Philosophy and Neuroscience Movement. Cambridge: Cambridge UP, 2005. Brook, Andrew, and Pete Mandik. Introduction to Brook and Akins 1-24. Brown, Donald E. Human Universals. Philadelphia: Temple UP, 1991. Brown, Roger. Words and Things. New York: Free P of Glencoe, 1958. Buller, David J. Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. Cambridge: MIT P, 2005. Butler, Judith, John Guillory, and Kendall Thomas, eds. What’s Left of Theory? New Work on the Politics of Literary Theory. New York: Routledge, 2000. Cain, William E. The Crisis in Criticism: Theory, Literature, and Reform in English Studies. Baltimore: Johns Hopkins UP, 1984.

Calvin, William H. The Cerebral Symphony: Seashore Reflections on the Structure of Consciousness. New York: Bantam, 1990. Calvin, William H., and Derek Bickerton. Lingua ex Machina: Reconciling Darwin and Chomsky with the Human Brain. Cambridge: MIT P, 2000. Campbell, Jeremy. Grammatical Man: Information, Entropy, Language, and Life. New York: Simon, 1982. —. The Improbable Machine: What the Upheavals in Artificial Intelligence Research Reveal about how the Mind Really Works. New York: Simon, 1989. Carnap, Rudolf. Logical Foundations of Probability. 2nd ed. Chicago: U of Chicago P, 1962. Carroll, Noël. “Andy Kaufman and the Philosophy of Interpretation.” Krausz 319-44. —. “Interpretation and Intention: The Debate between Hypothetical and Actual Intentionism.” Margolis and Rockmore 75-95. Carruthers, Peter. The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Clarendon, 2006. —. “Distinctively Human Thinking: Modular Precursors and Components.” Carruthers, Laurence, and Stich, Innate Mind: Structure 69-88. —. Language, Thought and Consciousness: An Essay in Philosophical Psychology. Cambridge: Cambridge UP, 1996. —. “The Roots of Scientific Reasoning: Infancy, Modularity, and the Art of Tracking.” Carruthers, Stich, and Siegal 73-95. —. “Thinking in Language? Evolution and a Modularist Possibility.” Carruthers and Boucher 94-119. Carruthers, Peter, and Jill Boucher, eds. Language and Thought: Interdisciplinary Themes. Cambridge: Cambridge UP, 1998. Carruthers, Peter, and Andrew Chamberlain, eds. Evolution and the Human Mind: Modularity, Language, and Meta-cognition. Cambridge: Cambridge UP, 2000. Carruthers, Peter, Stephen Laurence, and Stephen Stich, eds. The Innate Mind: Structure and Contents. New York: Oxford UP, 2005. —. eds. The Innate Mind: Volume 2: Culture and Cognition. New York: Oxford UP, 2006. Carruthers, Peter, and Peter K. Smith, eds. Theories of Theories of Mind. Cambridge: Cambridge UP, 1996. Carruthers, Peter, Stephen Stich, and Michael Siegal, eds. The Cognitive Basis of Science. Cambridge: Cambridge UP, 2002. Cartwright, Nancy. How the Laws of Physics Lie. Oxford: Clarendon, 1983.

Caruth, Cathy, and Jonathan Culler. “Literary Criticism for the Twenty-First Century.” PMLA 123 (2008): 7 or 542. Cascardi, Anthony J., ed. Literature and the Question of Philosophy. Baltimore: Johns Hopkins UP, 1987. Castañeda, Hector-Neri. “The Multiple Faces of Knowing: The Hierarchies of Epistemic Species.” Bender, Current State 231-41. Chang, Ruth, ed. Incommensurability, Incomparability, and Practical Reason. Cambridge: Harvard UP, 1997. Charniak, Eugene, and Drew McDermott. Introduction to Artificial Intelligence. Reading, MA: Addison-Wesley, 1986. Chisholm, Roderick M. The Foundations of Knowing. Minneapolis: U of Minnesota P, 1982. Chladenius, Johann Martin. “On the Concept of Interpretation” (ch. 4 of his Einleitung zur richtigen Auslegung vernünftiger Reden und Schriften, 1742). Trans. Carrie Asman-Schneider. Mueller-Vollmer 55-64. Chomsky, Noam. On Nature and Language. Ed. Adriana Belletti and Luigi Rizzi. Cambridge: Cambridge UP, 2002. —. Rules and Representations. New York: Columbia UP, 1980. Christiansen, Morten H., and Simon Kirby, eds. Language Evolution. Oxford: Oxford UP, 2003. Churchland, Paul M. A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge: MIT P, 1989. Clark, Andy. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge: MIT P, 1989. Clark, Michael. Revenge of the Aesthetic: The Place of Literature in Theory Today. Berkeley: U of California P, 2000. Clarke, Murray. Reconstructing Reason and Representation. Cambridge: MIT P, 2004. Cohen, Jonathan D., and Jonathan W. Schooler, eds. Scientific Approaches to Consciousness. Mahwah, NJ: Erlbaum, 1997. Cohen, Jonathan D., and Frank Tong. “The Faces of Controversy.” Science 293 (2001): 2405-07. Cohen, L. Jonathan. An Introduction to the Philosophy of Induction and Probability. Oxford: Clarendon, 1989. —. The Probable and the Provable. Oxford: Clarendon, 1977. Compagnon, Antoine. Literature, Theory, and Common Sense. Trans. Carol Cosman. Princeton: Princeton UP, 2004. Trans. of Le Démon de la théorie: Littérature et sens commun. 1998. Conee, Earl. “The Basic Nature of Epistemic Justification.” The Monist 71 (1988): 389-404. Rpt. in Conee and Feldman, Evidentialism 37-52.

Conee, Earl, and Richard Feldman. Afterword. “The Generality Problem for Reliabilism.” Conee and Feldman, Evidentialism 159-65. —. Evidentialism: Essays in Epistemology. Oxford: Clarendon, 2004. “Construe.” Def. 2, 3. Webster’s Dictionary. Cooper, Robin Panneton. “The Effect of Prosody on Young Infants’ Speech Perception.” Rovee-Collier and Lipsitt 137-67. Cooper, William S. The Evolution of Reason: Logic as a Branch of Biology. Cambridge: Cambridge UP, 2001. Copeland, B. Jack. “Artificial Intelligence.” Guttenplan. Copestake, Ann, and Ted Briscoe. “Semi-productive Polysemy and Sense Extension.” Journal of Semantics 12 (1995): 57-60. Corballis, Michael C. The Recursive Mind: The Origins of Human Language, Thought, and Civilization. Princeton: Princeton UP, 2011. Cornwell, John, ed. Explanations: Styles of Explanation in Science. Oxford: Oxford UP, 2004. Cosmides, Leda. “The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason? Studies with the Wason Selection Task.” Cognition 31 (1989): 187-276. Cosmides, Leda, and John Tooby. “Are Humans Good Intuitive Statisticians After All? Rethinking Some Conclusions from the Literature on Judgment Under Uncertainty.” Cognition 58 (1996): 1-73. —. “Cognitive Adaptations for Social Exchange.” Barkow, Cosmides, and Tooby 163-228. —. “From Evolution to Behavior: Evolutionary Psychology as the Missing Link.” Dupré 277-306. Cossu, Giuseppe, and John C. Marshall. “Are Cognitive Skills a Prerequisite for Learning to Read and Write?” Cognitive Neuropsychology 7 (1990): 21-40. —. “Theoretical Implications of the Hyperlexia Syndrome: Two New Italian Cases.” Cortex 22 (1986): 579-89. Cowie, Fiona. What’s Within? Nativism Reconsidered. New York: Oxford UP, 1999. Cox, R.T. “Probability, Frequency and Reasonable Expectation.” American Journal of Physics 14 (1946): 1-13. Crain, Stephen, and Rosalind Thornton. “Recharting the Course of Language Acquisition: Studies in Elicited Production.” Krasnegor et al. 321-37. Crews, Frederick. Skeptical Engagements. New York: Oxford UP, 1986. Crosswhite, James. The Rhetoric of Reason: Writing and the Attractions of Argument. Madison: U of Wisconsin P, 1996.

Culicover, Peter W. Syntactic Nuts: Hard Cases, Syntactic Theory, and Language Acquisition. Oxford: Oxford UP, 1999. Culler, Jonathan. “Beyond Interpretation: The Prospects of Contemporary Criticism.” Comparative Literature 28 (1976): 244-56. Rpt. in his Pursuit 3-17. —. “The Literary in Theory.” Butler, Guillory, and Thomas 273-92. —. The Literary in Theory. Stanford: Stanford UP, 2007. —. The Pursuit of Signs: Semiotics, Literature, Deconstruction. Ithaca: Cornell UP, 1981. Cummins, Denise Dellarosa, and Colin Allen, eds. The Evolution of Mind. New York: Oxford UP, 1998. Curley, Shawn P., and P. George Benson. “Applying a Cognitive Perspective to Probability Construction.” Wright and Ayton 185-209. Currie, Gregory. Arts and Minds. Oxford: Clarendon, 2004. —. “Interpreting Fictions.” Freadman and Reinhardt 96-112. Dabrowska, Ewa. Language, Mind and Brain: Some Psychological and Neurological Constraints on Theories of Grammar. Washington, DC: Georgetown UP, 2004. Damasio, Antonio R., and Hanna Damasio. “Brain and Language.” Scientific American 267.3 (1992): 89-95. Dasenbrock, Reed Way. “Do We Write the Text We Read?” Dasenbrock, Literary Theory 18-36. —. ed. Literary Theory After Davidson. University Park: Pennsylvania State UP, 1993. —. Truth and Consequences: Intentions, Conventions, and the New Thematics. University Park: Pennsylvania State UP, 2001. Dauber, Kenneth, and Walter Jost, eds. Ordinary Language Criticism: Literary Thinking after Cavell after Wittgenstein. Evanston: Northwestern UP, 2003. Davidson, Donald. “Locating Literary Language.” Dasenbrock, Literary Theory 295-308. —. “A Nice Derangement of Epitaphs.” LePore 433-46. Davis, Todd F., and Kenneth Womack. Formalist Criticism and Reader-Response Theory. Houndmills, Basingstoke, Hampshire: Palgrave, 2002. Deacon, Terrence W. The Symbolic Species: The Co-evolution of Language and the Brain. New York: Norton, 1997. De Beaugrande, Robert. New Foundations for a Science of Text and Discourse: Cognition, Communication, and the Freedom of Access to Knowledge and Society. Norwood, NJ: Ablex, 1997. —. Text Production: Toward a Science of Composition. Norwood, NJ: Ablex, 1984.

De Beaugrande, Robert, and Wolfgang Dressler. Introduction to Text Linguistics. London: Longman, 1981. De Bolla, Peter. Art Matters. Cambridge: Harvard UP, 2001. Deemter, Kees van, and Stanley Peters, eds. Semantic Ambiguity and Underspecification. Stanford: CSLI, 1996. Dehaene, Stanislas. “Evolution of Human Cortical Circuits for Reading and Arithmetic: The ‘Neuronal Recycling’ Hypothesis.” Dehaene et al., From Monkey Brain 133-57. —. The Number Sense: How the Mind Creates Mathematics. New York: Oxford UP, 1997. —. Reading in the Brain: The Science and Evolution of a Human Invention. New York: Viking, 2009. Dehaene, Stanislas, et al. From Monkey Brain to Human Brain: A Fyssen Foundation Symposium. Cambridge: MIT P, 2005. Dehaene, Stanislas, et al. “Sources of Mathematical Thinking: Behavioral and Brain-Imaging Evidence.” Science 284 (1999): 970-74. Dennett, Daniel C. Brainstorms: Philosophical Essays on Mind and Psychology. Brighton, Sussex: Harvester, 1981. —. Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon, 1995. —. Kinds of Minds: Toward an Understanding of Consciousness. New York: Basic, 1996. Devitt, Michael. Ignorance of Language. Oxford: Clarendon, 2006. Devlin, Keith. Goodbye, Descartes: The End of Logic and the Search for a New Cosmology of the Mind. New York: Wiley, 1997. De Waal, Frans B.M. “The Biological Basis of Behavior.” Chronicle of Higher Education 14 June 1996: B1-2. Dickstein, Morris. Double Agent: The Critic and Society. New York: Oxford UP, 1992. Doherty, Martin J. Theory of Mind: How Children Understand Others’ Thoughts and Feelings. Hove, East Sussex: Psychology, 2009. Donald, Merlin. “The Mind Considered from a Historical Perspective: Human Cognitive Phylogenesis and the Possibility of Continuing Cognitive Evolution.” Johnson and Erneling 355-65. —. “‘The Prehistory of the Mind’: An Exchange.” New York Review of Books, 14 May 1998: 61. Donoghue, Denis. “The Practice of Reading.” Kernan 122-40. Rpt. with additions in Donoghue, Practice ch. 4. —. The Practice of Reading. New Haven: Yale UP, 1998. —. Speaking of Beauty. New Haven: Yale UP, 2003. —. “Teaching Literature: The Force of Form.” New Literary History 30 (1999): 5-24. Rpt. in Donoghue, Speaking ch. 4.

Dornic, Stanislav, ed. Attention and Performance VI. Hillsdale, NJ: Erlbaum, 1977. Dowling, William C. The Senses of the Text: Intensional Semantics and Literary Theory. Lincoln: U of Nebraska P, 1999. Downing, John, and Renate Valtin, eds. Language Awareness and Learning to Read. New York: Springer-Verlag, 1984. Dror, Itiel E., and Robin D. Thomas. “The Cognitive Neuroscience Laboratory: A Framework for the Science of Mind.” Erneling and Johnson 283-92. Dunlosky, John, and Janet Metcalfe. Metacognition. Los Angeles: Sage, 2009. Dunning, Stephen N. Dialectical Readings: Three Types of Interpretation. University Park: Pennsylvania State UP, 1997. Dupré, John, ed. The Latest on the Best: Essays on Evolution and Optimality. Cambridge: MIT P, 1987. Dutton, Denis. “Why Intentionalism Won’t Go Away.” Cascardi 194-209. Eagleton, Terry. How to Read a Poem. Oxford: Blackwell, 2007. Easterlin, Nancy, and Barbara Riesling, eds. After Poststructuralism: Interdisciplinarity and Literary Theory. Evanston: Northwestern UP, 1993. Eco, Umberto. The Limits of Interpretation. Bloomington: Indiana UP, 1990. Eddins, Dwight, ed. The Emperor Redressed: Critiquing Critical Theory. Tuscaloosa: U of Alabama P, 1995. Edelman, Gerald M. Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic, 1992. —. Second Nature: Brain Science and Human Knowledge. New Haven: Yale UP, 2006. Edmundson, Mark. “Against Readings.” Chronicle of Higher Education 24 Apr. 2009: B7-10. —. Why Read? New York: Bloomsbury, 2004. Eggleston, Richard. Evidence, Proof and Probability. 2nd ed. London: Weidenfeld and Nicolson, 1983. Elliott, Emory. “Introduction: Cultural Diversity and the Problem of Aesthetics.” Elliott, Caton, and Rhyne 3-27. Elliott, Emory, Louis Freitas Caton, and Jeffrey Rhyne, eds. Aesthetics in a Multicultural Age. New York: Oxford UP, 2002. Ellis, John M. Language, Thought, and Logic. Evanston: Northwestern UP, 1993. —. Literature Lost: Social Agendas and the Corruption of the Humanities. New Haven: Yale UP, 1997.

—. The Theory of Literary Criticism: A Logical Analysis. Berkeley: U of California P, 1974. Elman, Jeffrey, et al. Rethinking Innateness: A Connectionist Perspective on Development. Cambridge: MIT P, 1996. Engel, Howard. The Man Who Forgot How to Read. New York: Thomas Dunne-St. Martin’s, 2007. Epstein, Seymour. “Integration of the Cognitive and the Psychodynamic Unconscious.” American Psychologist 49 (1994): 709-24. Ermer, Elsa, Leda Cosmides, and John Tooby. “Functional Specialization and the Adaptationist Program.” Gangestad and Simpson 153-60. Erneling, Christina E., and David Martel Johnson, eds. The Mind as a Scientific Object: Between Brain and Culture. New York: Oxford UP, 2005. Evans, Jonathan St.B.T. “Deductive Reasoning.” Holyoak and Morrison 169-84. —. Hypothetical Thinking: Dual Processes in Reasoning and Judgement. Hove, East Sussex: Psychology, 2007. Evans, Jonathan St.B.T., et al. “Frequency versus Probability Formats in Statistical Word Problems.” Cognition 77 (2000): 197-213. Evans, Jonathan St.B.T., and David E. Over. Rationality and Reasoning. Hove, East Sussex: Psychology, 1996. Fabb, Nigel, et al., eds. The Linguistics of Writing: Arguments Between Language and Literature. Manchester: Manchester UP, 1987. Fales, Evan. A Defense of the Given. Lanham, MD: Rowman and Littlefield, 1996. Farred, Grant. “Cultural Studies: Literary Criticism’s Alter Ego.” Williams 77-94. Farrell, Frank B. Why Does Literature Matter? Ithaca: Cornell UP, 2004. Faucher, Luc, et al. “The Baby in the Lab-Coat: Why Child Development Is Not an Adequate Model for Understanding the Development of Science.” Carruthers, Stich, and Siegal 335-62. Fauconnier, Gilles, and Mark Turner. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic, 2002. Feeney, Aidan. “Individual Differences, Dual Processes, and Induction.” Feeney and Heit 302-27. Feeney, Aidan, and Evan Heit, eds. Inductive Reasoning: Experimental, Developmental, and Computational Approaches. New York: Cambridge UP, 2007. Feldman, Jerome A. From Molecule to Metaphor: A Neural Theory of Language. Cambridge: MIT P, 2006.

Feldman, Richard. Epistemology. Upper Saddle River, NJ: Prentice Hall, 2003. Feldman, Richard, and Earl Conee. Afterword. “Evidentialism.” Conee and Feldman, Evidentialism 101-07. —. “Evidentialism.” Philosophical Studies 48 (1985): 15-34. Rpt. in Conee and Feldman, Evidentialism 83-101. Fetzer, James H., ed. Epistemology and Cognition. Dordrecht: Kluwer Academic, 1991. Fiedler, Klaus. “The Dependence of the Conjunction Fallacy on Subtle Linguistic Factors.” Psychological Research 50 (1988): 123-29. Fish, Stanley. Is There a Text in This Class? The Authority of Interpretive Communities. Cambridge: Harvard UP, 1980. —. Professional Correctness: Literary Studies and Political Change. Oxford: Clarendon, 1995. —. “A Reply to John Reichert; or, How to Stop Worrying and Learn to Love Interpretation.” Critical Inquiry 6 (1979): 173-78. Fishburn, Peter C. “The Axioms of Subjective Probability.” Statistical Science 1 (1986): 335-50. Fodor, Jerry A. “Author’s Response: Reply Module.” Behavioral and Brain Sciences 8 (1985): 33-39. —. In Critical Condition: Polemical Essays on Cognitive Science and the Philosophy of Mind. Cambridge: MIT P, 1998. —. The Language of Thought. Cambridge: Harvard UP, 1979. —. The Modularity of Mind: An Essay on Faculty Psychology. Cambridge: MIT P, 1983. —. “Précis of The Modularity of Mind.” Behavioral and Brain Sciences 8 (1985): 1-5. Frawley, William. Linguistic Semantics. Hillsdale, NJ: Erlbaum, 1992. Frazier, Lyn, and Charles Clifton, Jr. Construal. Cambridge: MIT P, 1996. Freadman, Richard, and Lloyd Reinhardt, eds. On Literary Theory and Philosophy. New York: St. Martin’s, 1991. Freedman, Eric G. “Understanding Scientific Controversies from a Computational Perspective: The Case of Latent Learning.” Giere 310-37. Frith, Chris. Making up the Mind: How the Brain Creates our Mental World. Malden, MA: Blackwell, 2007. Fumerton, Richard. “Epistemic Probability.” Sosa and Villanueva 149-64. —. “The Epistemic Role of Testimony: Internalist and Externalist Perspectives.” Lackey and Sosa 77-92. —. “Theories of Justification.” Moser, Oxford 204-33.

Galaburda, Albert M., ed. From Reading to Neurons. Cambridge: MIT P, 1989. Gallistel, C.R., and Ken Chang. “A Modular Sense of Place?” Behavioral and Brain Sciences 8 (1985): 11-12. Gallistel, C.R., and Rochel Gelman. “Mathematical Cognition.” Holyoak and Morrison 559-88. Gallop, Jane. “The Historicization of Literary Studies and the Fate of Close Reading.” Profession 2007. New York: MLA, 2007. 181-86. Gander, Eric M. On Our Minds: How Evolutionary Psychology Is Reshaping the Nature-versus-Nurture Debate. Baltimore: Johns Hopkins UP, 2003. Gangestad, Steven W., and Jeffry A. Simpson, eds. The Evolution of Mind: Fundamental Questions and Controversies. New York: Guilford, 2007. Gärdenfors, Peter. Conceptual Spaces: The Geometry of Thought. Cambridge: MIT P, 2000. Gardner, Howard. “The Centrality of Modules.” Behavioral and Brain Sciences 8 (1985): 12-14. —. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic, 1983. Garfield, Jay L., ed. Modularity in Knowledge Representation and Natural-Language Understanding. Cambridge: MIT P, 1987. Gawande, Atul. “The Itch.” New Yorker 30 June 2008: 58-65. Gazzaniga, Michael S. Human: The Science behind What Makes Us Unique. New York: Harper, 2008. Geary, David C. The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence. Washington, DC: Amer. Psychological Assn., 2005. Gee, James Paul. The Social Mind: Language, Ideology, and Social Practice. New York: Bergin and Garvey, 1992. Gentner, Dedre, and Susan Goldin-Meadow, eds. Language in Mind: Advances in the Study of Language and Thought. Cambridge: MIT P, 2003. Gernsbacher, Morton Ann, ed. Handbook of Psycholinguistics. San Diego: Academic, 1994. Gernsbacher, Morton Ann, and David A. Robertson. “Watching the Brain Comprehend Discourse.” Healy 157-67. Gibson, John, and Wolfgang Huemer, eds. The Literary Wittgenstein. London: Routledge, 2004. Giere, Ronald N., ed. Cognitive Models of Science. Minneapolis: U of Minnesota P, 1992.

Gigerenzer, Gerd. Adaptive Thinking: Rationality in the Real World. New York: Oxford UP, 2000. —. Gut Feelings: The Intelligence of the Unconscious. New York: Viking, 2007. —. “Why the Distinction between Single-event Probabilities and Frequencies is Important for Psychology (and Vice Versa).” Wright and Ayton 130-61. Gigerenzer, Gerd, et al. The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge: Cambridge UP, 1989. Gigerenzer, Gerd, and David J. Murray. Cognition as Intuitive Statistics. Hillsdale, NJ: Erlbaum, 1987. Gill, Jerry H., ed. Philosophy Today No. 2. London: Macmillan, 1969. —. The Tacit Mode: Michael Polanyi’s Postmodern Philosophy. Albany: State U of New York P, 2000. Gillies, Donald. Philosophical Theories of Probability. London: Routledge, 2000. Gilovich, Thomas, Dale Griffin, and Daniel Kahneman, eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge UP, 2002. Ginzburg, Carlo. “Morelli, Freud and Sherlock Holmes: Clues and Scientific Method.” History Workshop Journal 9 (1980): 5-36. Giora, Rachel. On Our Mind: Salience, Context, and Figurative Language. New York: Oxford UP, 2003. Girotto, Vittorio, and Philip N. Johnson-Laird, eds. The Shape of Reason: Essays in Honour of Paolo Legrenzi. Hove, East Sussex: Psychology, 2005. Givón, T. Functionalism and Grammar. Amsterdam: Benjamins, 1995. Gladwell, Malcolm. “In the Air.” New Yorker 12 May 2008: 50-60. Gleitman, Lila, and Anna Papafragou. “Language and Thought.” Holyoak and Morrison 633-61. Glymour, Clark. “Invasion of the Mind Snatchers.” Giere 465-71. Goel, Vinod. “Can There Be a Cognitive Neuroscience of Central Cognitive Systems?” Erneling and Johnson 265-82. Goldberg, Elkhonon, ed. Contemporary Neuropsychology and the Legacy of Luria. Hillsdale, NJ: Erlbaum, 1990. —. The Executive Brain: Frontal Lobes and the Civilized Mind. New York: Oxford UP, 2001. —. “Higher Cortical Functions in Humans: The Gradiential Approach.” Goldberg, Contemporary Neuropsychology 229-76. —. “Rise and Fall of Modular Orthodoxy.” Journal of Clinical and Experimental Neuropsychology 17 (1995): 193-208.

Goldhagen, Sarah Williams. “Our Degraded Public Realm: The Multiple Failures of Architectural Education.” Chronicle of Higher Education 10 Jan. 2003: B7-9. Goldin-Meadow, Susan. The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Psychology, 2003. Goldman, Alan H. “BonJour’s Coherentism.” Bender, Current State 125-33. —. Empirical Knowledge. Berkeley: U of California P, 1988. Goldman, Alvin I. Epistemology and Cognition. Cambridge: Harvard UP, 1986. —. “Internalism Exposed.” Journal of Philosophy 96 (1999): 271-93. —. “The Sciences and Epistemology.” Moser, Oxford 144-76. —. “What Is Justified Belief?” Pappas 1-23. Rpt. in Kornblith 105-30. Goodheart, Eugene. “Criticism in an Age of Discourse.” Clio 32 (2003): 205-08. —. Does Literary Studies Have a Future? Madison: U of Wisconsin P, 1999. Goodman, Kenneth S. “Reading: A Psycholinguistic Guessing Game.” Journal of the Reading Specialist 6 (1967): 126-35. Gopnik, Alison. “Theories and Modules: Creation Myths, Developmental Realities, and Neurath’s Boat.” Carruthers and Smith 169-83. Gopnik, Alison, and Clark Glymour. “Causal Maps and Bayes Nets: A Cognitive and Computational Account of Theory-Formation.” Carruthers, Stich, and Siegal 117-32. Gopnik, Alison, and Andrew N. Meltzoff. Words, Thoughts, and Theories. Cambridge: MIT P, 1997. Gopnik, Alison, Andrew N. Meltzoff, and Patricia K. Kuhl. The Scientist in the Crib: Minds, Brains, and How Children Learn. New York: Morrow, 1999. Gorman, Michael E. “Simulating Social Epistemology: Experimental and Computational Approaches.” Giere 400-26. Gottschall, Jonathan. Literature, Science, and a New Humanities. New York: Palgrave Macmillan, 2008. Gracia, Jorge J.E. “Relativism and the Interpretation of Texts.” Metaphilosophy 31 (2000): 43-62. Graff, Gerald. Literature Against Itself: Literary Ideas in Modern Society. Chicago: U of Chicago P, 1979. —. Poetic Statement and Critical Dogmas. Evanston: Northwestern UP, 1970. Graham, Peter. “Liberal Fundamentalism and Its Rivals.” Lackey and Sosa 93-115.

Grammont, Franck. “Can I Really Intend More than What I Am Able to Do?” Grammont, Legrand, and Livet 117-39. Grammont, Franck, Dorothée Legrand, and Pierre Livet, eds. Naturalizing Intention in Action. Cambridge: MIT P, 2010. Green, Christopher D., and John Vervaeke. “But What Have You Done for Us Lately? Some Recent Perspectives on Linguistic Nativism.” Johnson and Erneling 149-63. Green, Georgia M. “Ambiguity Resolution and Discourse Interpretation.” Deemter and Peters 1-26. Greenspan, Stanley I., and Stuart G. Shanker. The First Idea: How Symbols, Language, and Intelligence Evolved From Our Primate Ancestors to Modern Humans. n.p.: Da Capo, 2004. Gunnar, Megan R., and Michael Maratsos, eds. Modularity and Constraints in Language and Cognition. Hillsdale, NJ: Erlbaum, 1992. Guttenplan, Samuel, ed. A Companion to the Philosophy of Mind. Oxford: Blackwell, 1994. Haack, Susan. Evidence and Inquiry: Towards Reconstruction in Epistemology. Oxford: Blackwell, 1993. Hacking, Ian. The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. London: Cambridge UP, 1975. Hales, Steven D. Relativism and the Foundations of Philosophy. Cambridge: MIT P, 2006. Halford, Graeme S. “Development of Thinking.” Holyoak and Morrison 529-58. Hammond, Kenneth R. Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford UP, 1996. Hanna, Robert. Rationality and Logic. Cambridge: MIT P, 2006. Happé, Francesca, and Eva Loth. “‘Theory of Mind’ and Tracking Speakers’ Intentions.” Mind and Language 17 (2002): 24-36. Hardcastle, Valerie Gray, and C. Matthew Stewart. “Localization in the Brain and Other Illusions.” Brook and Akins 27-39. Harland, Richard. Beyond Superstructuralism: The Syntagmatic Side of Language. London: Routledge, 1993. Harris, Wendell V., ed. Beyond Poststructuralism: The Speculations of Theory and the Experience of Reading. University Park: Pennsylvania State UP, 1996. —. Literary Meaning: Reclaiming the Study of Literature. New York: New York UP, 1996.

Hartman, Geoffrey H. The Fateful Question of Culture. New York: Columbia UP, 1997. Hasher, Lynn, and Rose T. Zacks. “Automatic Processing of Fundamental Information: The Case of Frequency of Occurrence.” American Psychologist 39 (1984): 1372-88. Hassan, Ihab. “Let the Fresh Air In: Graduate Studies in the Humanities.” Soderholm 190-207. Hawkins, Jeff, with Sandra Blakeslee. On Intelligence. New York: Holt, 2004. Healy, Alice F., ed. Experimental Cognitive Psychology and its Applications. Washington, DC: Amer. Psychological Assn., 2005. Herman, David. “A la recherche du sens perdu.” Poetics Today 23 (2002): 327-50. Hernadi, Paul, ed. What Is Criticism? Bloomington: Indiana UP, 1981. Hertwig, R., and G. Gigerenzer. “The ‘Conjunction Fallacy’ Revisited: How Intelligent Inferences Look like Reasoning Errors.” Unpub. ms. Max Planck Institute for Psychological Research, Munich, 1997. Paraphrased in Pinker, How the Mind Works 350-51. Hesse, Mary. Revolutions and Reconstructions in Philosophy of Science. Bloomington: Indiana UP, 1980. —. “Texts Without Types and Lumps Without Laws.” New Literary History 17 (1985): 31-48. Hiley, David R., James F. Bohman, and Richard Shusterman, eds. The Interpretive Turn: Philosophy, Science, Culture. Ithaca: Cornell UP, 1991. Hill, Archibald A. Constituent and Pattern in Poetry. Austin: U of Texas P, 1976. —. “Principles Governing Semantic Parallels.” Texas Studies in Literature and Language 1 (1959): 356-65. Rpt. in his Constituent 95-103. Hintikka, Jaakko. “Quantifiers vs. Quantification Theory.” Linguistic Inquiry 5 (1974): 153-77. Hirsch, E.D., Jr. The Aims of Interpretation. Chicago: U of Chicago P, 1976. —. “Objective Interpretation.” PMLA 75 (1960): 463-79. —. Validity in Interpretation. New Haven: Yale UP, 1967. —. “Value and Knowledge in the Humanities.” Bloomfield 55-72. Hirschfeld, Lawrence A., and Susan A. Gelman, eds. Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge UP, 1994. Hitchcock, Christopher, ed. Contemporary Debates in Philosophy of Science. Oxford: Blackwell, 2004.

Hobbs, Jerry R. Literature and Cognition. Stanford: CSLI, 1990. Hogan, Patrick Colm. Cognitive Science, Literature, and the Arts: A Guide for Humanists. New York: Routledge, 2003. —. On Interpretation: Meaning and Inference in Law, Psychoanalysis, and Literature. Athens: U of Georgia P, 1996. —. Philosophical Approaches to the Study of Literature. Gainesville: UP of Florida, 2000. Holbrook, Jennifer K., et al. “(Almost) Never Letting Go: Inference Retention during Text Understanding.” Small, Cottrell, and Tanenhaus 383-409. Holyoak, Keith J., and Robert G. Morrison, eds. The Cambridge Handbook of Thinking and Reasoning. New York: Cambridge UP, 2005. Howarth, W.D., and C.L. Walton. Explications: The Technique of French Literary Appreciation. London: Oxford UP, 1971. Huber, Beate L., and Oswald Huber. “Development of the Concept of Comparative Subjective Probability.” Journal of Experimental Child Psychology 44 (1987): 304-16. Hurford, James R. “The Language Mosaic and its Evolution.” Christiansen and Kirby 38-57. Iacoboni, M., et al. “Grasping the Intentions of Others with One’s Own Mirror Neuron System.” Public Library of Science: Biology 3 (2005): 529-35. Ingvar, Martin. “All in the Interest of Time: On the Problem of Speed and Cognition.” Erneling and Johnson 251-64. Irwin, William. Intentionalist Interpretation: A Philosophical Explanation and Defense. Westport, CT: Greenwood, 1999. Iseminger, Gary. “Actual Intentionalism vs. Hypothetical Intentionalism.” Journal of Aesthetics and Art Criticism 54 (1996): 319-26. Iser, Wolfgang. The Range of Interpretation. New York: Columbia UP, 2000. Jackendoff, Ray. Consciousness and the Computational Mind. Cambridge: MIT P, 1987. —. Foundations of Language: Brain, Meaning, Grammar, Evolution. New York: Oxford UP, 2002. —. Language, Consciousness, Culture: Essays on Mental Structure. Cambridge: MIT P, 2007. —. Languages of the Mind: Essays on Mental Representation. Cambridge: MIT P, 1992. —. Patterns in the Mind: Language and Human Nature. New York: Basic, 1994. —. Semantics and Cognition. Cambridge: MIT P, 1983.

Johansen, Jørgen Dines. Literary Discourse: A Semantic-Pragmatic Approach to Literature. Toronto: U of Toronto P, 2002. Johns, Richard. A Theory of Physical Probability. Toronto: U of Toronto P, 2002. Johnson, David Martel, and Christina E. Erneling, eds. The Future of the Cognitive Revolution. New York: Oxford UP, 1997. Johnson, Mark. The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. Chicago: U of Chicago P, 1987. Johnston, Kenneth R. “Forum: Poetics Against Itself.” PMLA 105 (1990): 531-32. Josephson, John R., and Susan G. Josephson. Abductive Inference: Computation, Philosophy, Technology. Cambridge: Cambridge UP, 1994. Jost, Walter, and Michael J. Hyde, eds. Rhetoric and Hermeneutics in Our Time: A Reader. New Haven: Yale UP, 1997. Juhl, P.D. Interpretation: An Essay in the Philosophy of Literary Criticism. Princeton: Princeton UP, 1980. Joughin, John J., and Simon Malpas, eds. The New Aestheticism. Manchester: Manchester UP, 2003. Juslin, Peter, Håkan Nilsson, and Henrik Olsson. “Where Do Probability Judgments Come From? Evidence for Similarity-Graded Probability.” Moore and Stenning 471-76. Juslin, Peter, and Magnus Persson. “PROBabilities from EXemplars (PROBEX): A ‘Lazy’ Algorithm for Probabilistic Inference from Generic Knowledge.” Cognitive Science 26 (2002): 563-607. Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge UP, 1982. Kahneman, Daniel, and Amos Tversky. “On the Psychology of Prediction.” Psychological Review 80 (1973): 237-51. Rpt. in Kahneman, Slovic, and Tversky 48-68. Kameen, Paul. Re-reading Poets: The Life of the Author. Pittsburgh: U of Pittsburgh P, 2011. Kandel, Eric R. In Search of Memory: The Emergence of a New Science of Mind. New York: Norton, 2006. Kant, Immanuel. Critique of Judgement. Trans. J.H. Bernard. New York: Hafner, 1951. —. Critique of Pure Reason. Trans. Norman Kemp Smith. New York: St. Martin’s, 1929. Karmiloff-Smith, Annette. Beyond Modularity: A Developmental Perspective on Cognitive Science. Cambridge: MIT P, 1992. Kastan, David Scott. Shakespeare after Theory. New York: Routledge, 1999.

Katz, Jerrold J. The Metaphysics of Meaning. Cambridge: MIT P, 1990. Katz, L.C., and C.J. Shatz. “Synaptic Activity and the Construction of Cortical Circuits.” Science 274 (1996): 1133-38. Kay, Paul, and Willett Kempton. “What Is the Sapir-Whorf Hypothesis?” American Anthropologist 86 (1984): 65-79. Keller, Evelyn Fox. The Mirage of a Space between Nature and Nurture. Durham, NC: Duke UP, 2010. Kelly, Michael H., and Susanne Martin. “Domain-General Abilities Applied to Domain-Specific Tasks: Sensitivity to Probabilities in Perception, Cognition, and Language.” Lingua 92 (1994): 105-40. Kernan, Alvin, ed. What’s Happened to the Humanities? Princeton: Princeton UP, 1997. Keynes, John Maynard. A Treatise on Probability. London: Macmillan, 1921. Kihlstrom, John F. “The Cognitive Unconscious.” Science 237 (1987): 1445-52. Kiparsky, Paul. “On Theory and Interpretation.” Fabb et al. 185-98. Kirkham, Natasha Z., Jonathan A. Slemmer, and Scott P. Johnson. “Visual Statistical Learning in Infancy: Evidence for a Domain General Learning Mechanism.” Cognition 83 (2002): B35-B42. Kitching, Gavin. The Trouble with Theory: The Educational Costs of Postmodernism. University Park: Pennsylvania State UP, 2008. Klein, Peter. “How a Pyrrhonian Skeptic Might Respond to Academic Skepticism.” Luper 75-94. —. “Human Knowledge and the Infinite Regress of Reasons.” Philosophical Perspectives 13 (1999): 297-325. —. “Infinitism Is the Solution to the Regress Problem.” Steup and Sosa 131-40. Kluger, Jeffrey. “Inside the Minds of Animals.” Time 16 Aug. 2010: 36-43. Knapp, Steven, and Walter Benn Michaels. “Against Theory.” Critical Inquiry 8 (1982): 723-42. Rpt. in Mitchell, Against Theory 11-30. Koehler, Derek J., and Nigel Harvey, eds. Blackwell Handbook of Judgment and Decision Making. Oxford: Blackwell, 2004. Kolers, Paul A. “Experiments in Reading.” Scientific American 227.1 (1972): 84-91. Komarova, Natalia L., and Martin A. Nowak. “Language, Learning and Evolution.” Christiansen and Kirby 317-37. Koppen, Randi. “Formalism and the Return to the Body: Stein’s and Forne’s Aesthetic of Significant Form.” New Literary History 28 (1997): 791-809.

Kornblith, Hilary, ed. Naturalizing Epistemology. 2nd ed. Cambridge: MIT P, 1994. Kosko, Bart. Fuzzy Thinking: The New Science of Fuzzy Logic. New York: Hyperion, 1993. Krasnegor, Norman A., et al., eds. Biological and Behavioral Determinants of Language Development. Hillsdale, NJ: Erlbaum, 1991. Krausz, Michael, ed. Is There a Single Right Interpretation? University Park: Pennsylvania State UP, 2002. Kuhn, Thomas S. The Structure of Scientific Revolutions. 2nd ed. enl. Chicago: U of Chicago P, 1970. Kvanvig, Jonathan L. The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge UP, 2003. Kyburg, Henry E., Jr. The Logical Foundations of Statistical Inference. Dordrecht: Reidel, 1974. La Cerra, Peggy, and Roger Bingham. The Origin of Minds: Evolution, Uniqueness, and the New Science of the Self. New York: Harmony, 2002. Lackey, Jennifer, and Ernest Sosa, eds. The Epistemology of Testimony. Oxford: Clarendon, 2006. Lagnado, David A., and Steven A. Sloman. “Inside and Outside Probability Judgment.” Koehler and Harvey 157-76. Lakoff, George, and Mark Johnson. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic, 1999. Lamberts, Koen, and David Shanks, eds. Knowledge, Concepts, and Categories. Cambridge: MIT P, 1997. Lancashire, Ian. Forgetful Muses: Reading the Author in the Text. Toronto: U of Toronto P, 2010. Landauer, Thomas K., and Susan T. Dumais. “A Solution to Plato’s Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge.” Psychological Review 104 (1997): 211-40. Landauer, Thomas K., Peter W. Foltz, and Darrell Laham. “An Introduction to Latent Semantic Analysis.” Discourse Processes 25 (1998): 259-84. Lecercle, Jean-Jacques. Interpretation as Pragmatics. New York: St. Martin’s, 1999. Lehrer, Keith. Knowledge. Oxford: Clarendon, 1974. —. Theory of Knowledge. 2nd ed. Boulder: Westview, 2000. Lentricchia, Frank, and Andrew DuBois. Close Reading: The Reader. Durham: Duke UP, 2003.

Lentricchia, Frank, and Thomas McLaughlin, eds. Critical Terms for Literary Study. Chicago: U of Chicago P, 1990. LePore, Ernest, ed. Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson. Oxford: Blackwell, 1986. Lepore, Ernest, and Zenon Pylyshyn, eds. What Is Cognitive Science? Malden, MA: Blackwell, 1999. Lesser, Wendy. Nothing Remains the Same: Rereading and Remembering. Boston: Houghton, 2002. Leverage, Paula, et al., eds. Theory of Mind and Literature. West Lafayette, IN: Purdue UP, 2011. Levin, Richard. “The Cultural Materialist Attack on Artistic Unity and the Problem of Ideological Criticism.” Harris, Beyond Poststructuralism 137-56. Levine, George Lewis, ed. Aesthetics and Ideology. New Brunswick, NJ: Rutgers UP, 1994. Levinson, Jerrold. “Hypothetical Intentionalism: Statement, Objections, and Replies.” Krausz 309-18. —. The Pleasures of Aesthetics: Philosophical Essays. Ithaca: Cornell UP, 1996. Levinson, Marjorie. “What Is Formalism?” PMLA 122 (2007): 558-69. Lewicki, Pawel, Maria Czyzewska, and Thomas Hill. “Cognitive Mechanisms for Acquiring ‘Experience’: The Dissociation Between Conscious and Nonconscious Cognition.” Cohen and Schooler 161-77. Lewicki, Pawel, and Thomas Hill. “On the Status of Nonconscious Processes in Human Cognition: Comment on Reber.” Journal of Experimental Psychology: General 118 (1989): 239-41. Lewicki, Pawel, Thomas Hill, and Maria Czyzewska. “Nonconscious Acquisition of Information.” American Psychologist 47 (1992): 796-801. Li, Peggy, and Lila Gleitman. “Turning the Tables: Language and Spatial Reasoning.” Cognition 83 (2002): 265-94. Lipking, Lawrence I. “The Practice of Theory.” Staton 426-40. Lipton, Peter. Inference to the Best Explanation. 2nd ed. London: Routledge, 2004. —. “What Good Is an Explanation?” Cornwell 1-21. Litman, Leib, and Arthur S. Reber. “Implicit Cognition and Thought.” Holyoak and Morrison 431-53. Livingston, Paisley. Art and Intention: A Philosophical Study. Oxford: Clarendon, 2005. —. “Intentionalism in Aesthetics.” New Literary History 29 (1998): 831-46.

—. Literary Knowledge: Humanistic Inquiry and the Philosophy of Science. Ithaca: Cornell UP, 1988. —. Literature and Rationality: Ideas of Agency in Theory and Fiction. Cambridge: Cambridge UP, 1991. Loesberg, Jonathan. A Return to Aesthetics: Autonomy, Indifference, and Postmodernism. Stanford: Stanford UP, 2005. Loritz, Donald. How the Brain Evolved Language. New York: Oxford UP, 1999. Lucas, F.R. The Concept of Probability. Oxford: Clarendon, 1970. Luper, Steven, ed. The Skeptics: Contemporary Essays. Aldershot, Hampshire: Ashgate, 2003. Lycan, William G. Judgement and Justification. Cambridge: Cambridge UP, 1988. Macdonald, Ranald R. “Credible Conceptions and Implausible Probabilities.” British Journal of Mathematical and Statistical Psychology 39 (1986): 15-27. MacWhinney, Brian. “A Reply to Woodward and Markman.” Developmental Review 11 (1991): 192-94. Mailloux, Steven. Rhetorical Power. Ithaca: Cornell UP, 1989. Malle, Bertram F., and Sara D. Hodges, eds. Other Minds: How Humans Bridge the Divide between Self and Others. New York: Guilford, 2005. Maratsos, Michael. “Constraints, Modules, and Domain Specificity: An Introduction.” Gunnar and Maratsos 1-23. Marcus, Gary. The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought. New York: Basic, 2004. Margolis, Joseph, and Tom Rockmore, eds. The Philosophy of Interpretation. Oxford: Blackwell, 1999. Markman, Ellen M. “Constraints on Word Learning: Speculations About Their Nature, Origins, and Domain Specificity.” Gunnar and Maratsos 59-101. Marshall, John C. “The Cultural and Biological Context of Written Languages: Their Acquisition, Deployment and Breakdown.” Beech and Colley 15-30. —. “The Description and Interpretation of Acquired and Developmental Reading Disorders.” Galaburda 67-86. —. “Multiple Perspectives on Modularity.” Cognition 17 (1984): 209-42. Marshall, John C., and Giuseppe Cossu. “Is Pathological Development Part of Normal Cognitive Neuropsychology?—A Rejoinder to Marcel.” Cognitive Neuropsychology 7 (1990): 49-55. Martindale, Colin. Cognitive Psychology: A Neural-Network Approach. Pacific Grove, CA: Brooks/Cole, 1998.

Mattingly, Ignatius G. “Reading and the Biological Function of Linguistic Representations.” Mattingly and Studdert-Kennedy 339-46. —. “Reading, Linguistic Awareness, and Language Acquisition.” Downing and Valtin 9-25. Mattingly, Ignatius G., and Michael Studdert-Kennedy, eds. Modularity and the Motor Theory of Speech Perception. Hillsdale, NJ: Erlbaum, 1991. McAlindon, Tom. Shakespeare Minus “Theory.” Aldershot, Hampshire: Ashgate, 2004. McClelland, James L., David E. Rumelhart, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 2 of 2: Psychological and Biological Models. Cambridge: MIT P, 1986. McNeill, Daniel, and Paul Freiberger. Fuzzy Logic. New York: Simon, 1993. McSweeney, Kerry. What’s the Import? Nineteenth-Century Poems and Contemporary Critical Practice. Montreal & Kingston: McGill-Queen’s UP, 2007. Metzger, Mary Ann. “Multiprocess Models Applied to Cognitive and Behavioral Dynamics.” Port and Van Gelder 491-526. Miller, J. Hillis. “Tradition and Difference.” Diacritics 2.4 (1972): 6-13. Millikan, Ruth Garrett. Language: A Biological Model. Oxford: Clarendon, 2005. Minsky, Marvin, and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. Expanded ed. Cambridge: MIT P, 1988. Mitchell, W.J.T., ed. Against Theory: Literary Studies and the New Pragmatism. Chicago: U of Chicago P, 1985. —. “The Commitment to Form; or, Still Crazy after All These Years.” PMLA 118 (2003): 321-25. Mithen, Steven. The Prehistory of the Mind. London: Thames, 1996. Moore, A.W., ed. Meaning and Reference. Oxford: Oxford UP, 1993. Moore, Chris, Kiran Pure, and David Furrow. “Children’s Understanding of the Modal Expression of Speaker Certainty and Uncertainty and Its Relation to the Development of a Representational Theory of Mind.” Child Development 61 (1990): 722-30. Moore, Johanna D., and Keith Stenning, eds. Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum, 2001. Moser, Paul K. Empirical Justification. Dordrecht: Reidel, 1985. —. Knowledge and Evidence. Cambridge: Cambridge UP, 1989. —. ed. The Oxford Handbook of Epistemology. New York: Oxford UP, 2002.

—. Philosophy after Objectivity: Making Sense in Perspective. New York: Oxford UP, 1993. Moser, Paul K., and Arnold vander Nat, eds. Human Knowledge: Classical and Contemporary Approaches. 3rd ed. New York: Oxford UP, 2003. Moses, Louis J. “Executive Functioning and Children’s Theories of Mind.” Malle and Hodges 11-25. Moskowitz, Breyne Arlene. “The Acquisition of Language.” Scientific American 239.5 (1978): 92-108. Moyer, Robert. “Making Comparisons.” New Yorker 31 Mar. 2008: 12. Mueller, Martin. “Yellow Stripes and Dead Armadillos.” Profession 89. New York: MLA, 1989. 23-31. Mueller-Vollmer, Kurt, ed. The Hermeneutics Reader: Texts of the German Tradition from the Enlightenment to the Present. Oxford: Blackwell, 1986. Munz, Peter. Philosophical Darwinism: On the Origin of Knowledge by Means of Natural Selection. London: Routledge, 1993. Neisser, Ulric, ed. Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization. Cambridge: Cambridge UP, 1987. Nelson, Katherine. “Constraints on Word Learning?” Cognitive Development 3 (1988): 221-46. —. Language in Cognitive Development: Emergence of the Mediated Mind. Cambridge: Cambridge UP, 1996. Newell, Benjamin R., David A. Lagnado, and David R. Shanks. Straight Choices: The Psychology of Decision Making. Hove, East Sussex: Psychology, 2007. Newport, Elissa L. “Maturational Constraints on Language Learning.” Cognitive Science 14 (1990): 11-28. Rpt. in Bloom, Language 543-60. Newton, K.M. In Defence of Literary Interpretation: Theory and Practice. London: Macmillan, 1986. Nichols, Shaun, and Stephen P. Stich. Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Clarendon, 2003. Nieder, Andreas, and Earl K. Miller. “Neural Correlates of Numerical Cognition in the Neocortex of Nonhuman Primates.” Dehaene et al., From Monkey Brain 117-31. Norris, Christopher. Deconstruction and the “Unfinished Project of Modernity.” London: Athlone, 2000. —. On Truth and Meaning: Language, Logic and the Grounds of Belief. London: Continuum, 2006. Novitz, David. “Against Critical Pluralism.” Krausz 101-21.

Oaksford, Mike, and Nick Chater. “The Probabilistic Approach to Human Reasoning.” Trends in Cognitive Sciences 5 (2001): 349-57. O’Grady, William. Principles of Grammar and Learning. Chicago: U of Chicago P, 1987. —. Syntactic Development. Chicago: U of Chicago P, 1997. Olson, David R. The World on Paper: The Conceptual and Cognitive Implications of Writing and Reading. Cambridge: Cambridge UP, 1994. Osherson, Daniel N. “Probability Judgment.” Smith and Osherson 35-75. Osman, Magda. “An Evaluation of Dual-Process Theories of Reasoning.” Psychonomic Bulletin and Review 11 (2004): 988-1010. Over, David E., ed. Evolution and the Psychology of Thinking: The Debate. Hove, East Sussex: Psychology, 2003. —. “From Massive Modularity to Metarepresentation: The Evolution of Higher Cognition.” Over, Evolution 121-44. Paglia, Camille. Break, Blow, Burn. New York: Pantheon, 2005. Papineau, David. Theory and Meaning. Oxford: Clarendon, 1979. Pappas, George, ed. Justification and Knowledge: New Studies in Epistemology. Dordrecht: Reidel, 1979. Pappas, George S., and Marshall Swain, eds. Essays on Knowledge and Justification. Ithaca: Cornell UP, 1978. Parker, Robert Dale. How to Interpret Literature: Critical Theory for Literary and Cultural Studies. New York: Oxford UP, 2008. Pastin, Mark. “Modest Foundationalism and Self-Warrant.” Rescher, Studies 141-49. Rpt. in Pappas and Swain 279-88. Patai, Daphne, and Will H. Corral, eds. Theory’s Empire: An Anthology of Dissent. New York: Columbia UP, 2005. Patterson, Lee. “Literary History.” Lentricchia and McLaughlin 250-62. Pearl, Judea. “The Bayesian Approach.” Shafer and Pearl 339-44. —. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Rev. 2nd printing. San Mateo, CA: Morgan Kaufmann, 1988. Pechter, Edward. “The New Historicism and its Discontents: Politicizing Renaissance Drama.” PMLA 102 (1987): 292-303. Peirce, Charles Sanders. “Definition and Function of a University.” Peirce 331-35. —. Selected Writings (Values in a Universe of Chance). Ed. Philip P. Wiener. New York: Dover, 1966. Peltason, Timothy. Reading “In Memoriam.” Princeton: Princeton UP, 1985. —. “Seeing Things as They Are: Literary Judgment and Disinterestedness.” Literary Imagination 9 (2007): 177-94.

—. “The Uncommon Pursuit.” Literary Imagination 6 (2004): 499-515. Pepper, Stephen C. World Hypotheses: A Study in Evidence. Berkeley: U of California P, 1942. Perfetti, Charles A. “The Limits of Co-Occurrence: Tools and Theories in Language Research.” Discourse Processes 25 (1998): 363-77. Perloff, Marjorie. Differentials: Poetry, Poetics, Pedagogy. Tuscaloosa: U of Alabama P, 2004. —. “Presidential Address 2006: It Must Change.” PMLA 122 (2007): 652-62. Pica, Pierre, et al. “Exact and Approximate Arithmetic in an Amazonian Indigene Group.” Science 306 (2004): 499-503. Pinker, Steven. The Blank Slate: The Modern Denial of Human Nature. New York: Viking, 2002. —. How the Mind Works. New York: Norton, 1997. —. The Language Instinct: How the Mind Creates Language. New York: Morrow, 1994. —. “Language as an Adaptation to the Cognitive Niche.” Christiansen and Kirby 16-37. —. The Stuff of Thought: Language as a Window into Human Nature. New York: Viking, 2007. —. Words and Rules: The Ingredients of Language. New York: Basic, 1999. Pinker, Steven, and Jacques Mehler, eds. Connections and Symbols. Cambridge: MIT P, 1988. Plantinga, Alvin. Warrant and Proper Function. New York: Oxford UP, 1993. Plaut, David C. “Connectionist Modeling of Language: Examples and Implications.” Banich and Mack 143-67. Plotkin, Henry. Evolution in Mind: An Introduction to Evolutionary Psychology. Cambridge: Harvard UP, 1998. —. Necessary Knowledge. New York: Oxford UP, 2007. Polanyi, Michael. Knowing and Being: Essays by Michael Polanyi. Ed. Marjorie Grene. Chicago: U of Chicago P, 1969. —. Personal Knowledge: Towards a Post-Critical Philosophy. New York: Harper, 1964. —. The Tacit Dimension. 1966. Garden City, NY: Anchor-Doubleday, 1967. Politzer, Guy, and Laura Macchi. “The Representation of the Task: The Case of the Lawyer-Engineer Problem in Probability Judgement.” Girotto and Johnson-Laird 119-35.

Politzer, Guy, and Ira A. Noveck. “Are Conjunction Rule Violations the Result of Conversational Rule Violations?” Journal of Psycholinguistic Research 20 (1991): 83-103. Pollock, John L. Contemporary Theories of Knowledge. Totowa, NJ: Rowman, 1986. Popper, Karl R. Conjectures and Refutations: The Growth of Scientific Knowledge. 2nd ed. New York: Basic, 1965. —. “Epistemology Without a Knowing Subject.” Gill, Philosophy 225-77. Rpt. in Popper, Objective Knowledge 106-52. —. The Logic of Scientific Discovery. 1959. 10th impression (rev.). London: Hutchinson, 1980. —. Objective Knowledge: An Evolutionary Approach. Oxford: Clarendon, 1972. —. Realism and the Aim of Science. Totowa, NJ: Rowman, 1982. Port, Robert F., and Timothy van Gelder, eds. Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge: MIT P, 1995. Preminger, Alex, Frank J. Warnke, and O.B. Hardison, Jr., eds. Encyclopedia of Poetry and Poetics. Princeton: Princeton UP, 1965. Prendergast, Christopher, ed. Nineteenth-Century French Poetry: Introductions to Close Readings. Cambridge: Cambridge UP, 1990. Prince, Alan, and Paul Smolensky. “Optimality: From Neural Networks to Universal Grammar.” Science 275 (1997): 1604-10. Prinz, Jesse J. Furnishing the Mind: Concepts and Their Perceptual Basis. Cambridge: MIT P, 2002. —. “Is the Mind Really Modular?” Stainton 22-36. Putnam, Hilary. “The ‘Innateness Hypothesis’ and Explanatory Models in Linguistics.” Synthese 17 (1967): 12-22. Rpt. in Searle 130-39. —. The Many Faces of Realism: The Paul Carus Lectures. LaSalle, IL: Open Court, 1987. Quigley, Austin E. “Wittgenstein’s Philosophizing and Literary Theorizing.” New Literary History 19 (1988): 209-37. Rpt. in Dauber and Jost 3-30. Quint, David. Cervantes’s Novel of Modern Times: A New Reading of “Don Quijote.” Princeton: Princeton UP, 2004. Quinton, Anthony. “The Foundations of Knowledge.” Williams and Montefiore 55-86. —. The Nature of Things. London: Routledge, 1973. Raftopoulos, Athanassios. Cognition and Perception: How Do Psychology and Neural Science Inform Philosophy? Cambridge: MIT P, 2009. Ramachandran, Vilayanur S., and Edward M. Hubbard. “Hearing Colors, Tasting Shapes.” Scientific American 288.5 (2003): 52-59.

Ramsey, Frank P. The Foundations of Mathematics and Other Logical Essays. New York: Harcourt, 1931. Ramsey, William, and Stephen Stich. “Connectionism and the Three Levels of Nativism.” Synthese 82 (1990): 177-205. Rpt. in Fetzer 3-31. Rasmussen, Mark David, ed. Renaissance Literature and Its Formal Engagements. New York: Palgrave, 2002. Raval, Suresh. Grounds of Literary Criticism. Urbana: U of Illinois P, 1998. Read, Stephen J., and Amy Marcus-Newhall. “Explanatory Coherence in Social Explanations: A Parallel Distributed Processing Account.” Journal of Personality and Social Psychology 65 (1993): 429-47. Reber, Arthur S. Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious. New York: Oxford UP, 1993. Reber, Arthur S., et al. “On the Relationship Between Implicit and Explicit Modes in the Learning of a Complex Rule Structure.” Journal of Experimental Psychology: Human Learning and Memory 6 (1980): 492-502. Reddy, Vasudevi. How Infants Know Minds. Cambridge: Harvard UP, 2008. Reichenbach, Hans. Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge. Chicago: U of Chicago P, 1938. Rendall, Steven. “Mus in Pice: Montaigne and Interpretation.” Modern Language Notes 94 (1979): 1056-71. Rescher, Nicholas. Philosophical Reasoning: A Study in the Methodology of Philosophizing. Oxford: Blackwell, 2001. —. ed. Studies in Epistemology. American Philosophical Quarterly Monograph Ser. 9. Oxford: Blackwell, 1975. Reyna, Valerie F., and Charles J. Brainerd. “The Origins of Probability Judgment: A Review of Data and Theories.” Wright and Ayton 239-72. Rice, Mabel L., and Richard L. Schiefelbusch, eds. The Teachability of Language. Baltimore: Paul H. Brookes, 1989. Richards, I.A. Practical Criticism: A Study of Literary Judgment. New York: Harcourt, 1929. Richter, David H., ed. Falling into Theory: Conflicting Views on Reading Literature. 2nd ed. Boston: Bedford/St. Martin’s, 2000. Ricoeur, Paul. Hermeneutics and the Human Sciences: Essays on Language, Action and Interpretation. Ed. and trans. John B. Thompson. Cambridge: Cambridge UP, 1981.


Ridley, Matt. Nature via Nurture: Genes, Experience, and What Makes Us Human. New York: Harper, 2003.
Riedl, Rupert. Biology of Knowledge: The Evolutionary Basis of Reason. Trans. Paul Foulkes. Chichester: Wiley, 1984.
Rizzolatti, Giacomo, and Giovanni Buccino. “The Mirror Neuron System and Its Role in Imitation and Language.” Dehaene et al., From Monkey Brain 213-33.
Rizzolatti, Giacomo, Leonardo Fogassi, and Vittorio Gallese. “Mirrors of the Mind.” Scientific American 295.5 (2006): 54-61.
Rizzolatti, Giacomo, and Corrado Sinigaglia. Mirrors in the Brain—How Our Minds Share Actions and Emotions. Trans. Frances Anderson. Oxford: Oxford UP, 2008.
Roeper, Tom. The Prism of Grammar: How Child Language Illuminates Humanism. Cambridge: MIT P, 2007.
Rorty, Richard. “The Dark Side of the Academic Left.” Chronicle of Higher Education 3 Apr. 1998: B4-6.
Rosenblatt, Louise M. The Reader, the Text, the Poem: The Transactional Theory of the Literary Work. Carbondale: Southern Illinois UP, 1978.
Rovee-Collier, Carolyn, and Lewis P. Lipsitt, eds. Advances in Infancy Research. Vol. 8. Norwood, NJ: Ablex, 1993.
Rumelhart, David E. “Toward an Interactive Model of Reading.” Dornic 573-602.
Rumelhart, David E., James L. McClelland, and the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1 of 2: Foundations. Cambridge: MIT P, 1986.
Runde, Jochen. “Keynes After Ramsey: In Defence of A Treatise on Probability.” Studies in History and Philosophy of Science 25 (1994): 97-121.
Ruse, Michael. Taking Darwin Seriously: A Naturalistic Approach to Philosophy. Oxford: Blackwell, 1986.
Ruthven, K.K. Critical Assumptions. Cambridge: Cambridge UP, 1979.
Sacks, Oliver. “The Abyss.” New Yorker 24 Sept. 2007: 100-12. Rpt. in his Musicophilia 201-31.
—. Afterword to Engel 149-57.
—. “A Man of Letters.” New Yorker 28 June 2010: 22-28.
—. Musicophilia: Tales of Music and the Brain. Rev. and Expanded ed. New York: Vintage-Random, 2008.
Saffran, Jenny R., Elissa L. Newport, and Richard N. Aslin. “Word Segmentation: The Role of Distributional Cues.” Journal of Memory and Language 35 (1996): 606-21.
Saffran, Jenny R., Richard N. Aslin, and Elissa L. Newport. “Statistical Learning by 8-Month-Old Infants.” Science 274 (1996): 1926-28.


Said, Edward W. “The Politics of Knowledge.” Raritan 11.1 (1991): 17-31. Rpt. in Berman 172-89.
Sampson, Geoffrey. The “Language Instinct” Debate. Rev. ed. London: Continuum, 2005.
Samuels, Richard. “The Complexity of Cognition: Tractability Arguments for Massive Modularity.” Carruthers, Laurence, and Stich, Innate Mind: Structure 107-21.
—. “Is the Human Mind Massively Modular?” Stainton 37-56.
—. “Massively Modular Minds: Evolutionary Psychology and Cognitive Architecture.” Carruthers and Chamberlain 2-46.
—. “Nativism in Cognitive Science.” Mind and Language 17 (2002): 233-65.
Samuels, Richard, Stephen Stich, and Patrice D. Tremoulet. “Rethinking Rationality: From Bleak Implications to Darwinian Modules.” Lepore and Pylyshyn 74-120.
Savage, C. Wade, and Philip Ehrlich. “A Brief Introduction to Measurement Theory and to the Essays.” Savage and Ehrlich, Philosophical and Foundational Issues 1-14.
—. eds. Philosophical and Foundational Issues in Measurement Theory. Hillsdale, NJ: Erlbaum, 1992.
Schank, Patricia, and Michael Ranney. “Modeling an Experimental Study of Explanatory Coherence.” Program of the Thirteenth Annual Conference of the Cognitive Science Society, 1991: 892-97.
Schauber, Ellen, and Ellen Spolsky. The Bounds of Interpretation: Linguistic Theory and Literary Text. Stanford: Stanford UP, 1983.
Scholes, Robert. Textual Power: Literary Theory and the Teaching of English. New Haven: Yale UP, 1985.
Schrag, Calvin O. “Hermeneutical Circles, Rhetorical Triangles, and Transversal Diagonals.” Jost and Hyde 132-46.
Schütze, Hinrich. Ambiguity Resolution in Language Learning: Computational and Cognitive Models. Stanford: CSLI, 1997.
Schwartz, Myrna F., and Barry Schwartz. “In Defence of Organology.” Cognitive Neuropsychology 1 (1984): 25-42.
Searle, J.R., ed. The Philosophy of Language. Oxford: Oxford UP, 1971.
Sedivy, Sonia. “Wittgenstein Against Interpretation: The Meaning of a Text Does not Stop Short of its Facts.” Gibson and Huemer 165-85.
Segal, Gabriel. “The Modularity of Theory of Mind.” Carruthers and Smith 141-57.
Seidenberg, Mark S. “Language Acquisition and Use: Learning and Applying Probabilistic Constraints.” Science 275 (1997): 1599-1603.
Shafer, Glenn. “Non-Additive Probabilities in the Work of Bernoulli and Lambert.” Archive for History of Exact Sciences 19 (1978): 309-70.


Shafer, Glenn, and Judea Pearl, eds. Readings in Uncertain Reasoning. San Mateo, CA: Morgan Kaufmann, 1990.
Shallice, Tim. From Neuropsychology to Mental Structure. Cambridge: Cambridge UP, 1988.
Shanker, Stuart G., and Talbot J. Taylor. “The Significance of Ape Language Research.” Erneling and Johnson 367-80.
Shanks, David R. “Distributed Representations and Implicit Knowledge: A Brief Introduction.” Lamberts and Shanks 197-214.
Shapiro, Stewart. Foundations without Foundationalism: A Case for Second-Order Logic. Oxford: Clarendon, 1991.
Shastri, Lokendra. Semantic Networks: An Evidential Formalization and its Connectionist Realization. London: Pitman, 1988.
Shaw, Peter. “The Politics of Deconstruction.” Partisan Review 53 (1986): 253-62. Rpt. in his War 56-66.
—. The War Against the Intellect: Episodes in the Decline of Discourse. Iowa City: U of Iowa P, 1989.
Shepard, Roger N., and Susan Chipman. “Second-Order Isomorphism of Internal Representations: Shapes of States.” Cognitive Psychology 1 (1970): 1-17.
Shepard, Roger N., Dan W. Kilpatric, and James P. Cunningham. “The Internal Representation of Numbers.” Cognitive Psychology 7 (1975): 82-138.
Sher, Gila. Rev. of Foundations without Foundationalism: A Case for Second-Order Logic, by Stewart Shapiro. Philosophical Review 103 (1994): 150-53.
Shumway, David R. “The Star System in Literary Studies.” Williams 173-201.
Siegal, Michael. Marvelous Minds: The Discovery of What Children Know. New York: Oxford UP, 2008.
Siegal, Michael, and Luca Surian. “Modularity in Language and Theory of Mind: What Is the Evidence?” Carruthers, Laurence, and Stich, Innate Mind: Volume 2 133-48.
Siegler, Robert S. Children’s Thinking. 2nd ed. Englewood Cliffs, NJ: Prentice, 1991.
—. Emerging Minds: The Process of Change in Children’s Thinking. New York: Oxford UP, 1996.
—. “What Do Developmental Psychologists Really Want?” Gunnar and Maratsos 221-32.
Simpson, Tom, et al. Introduction to Carruthers, Laurence, and Stich, Innate Mind: Structure 1-19.
Singer, Alan. Aesthetic Reason: Artworks and the Deliberative Ethos. University Park: Pennsylvania State UP, 2003.


Singer, Alan, and Allen Dunn, eds. Literary Aesthetics: A Reader. Oxford: Blackwell, 2000.
Sloman, Steven A., and David E. Over. “Probability Judgement from the Inside and Out.” Over, Evolution 145-69.
Small, Steven L., Garrison W. Cottrell, and Michael K. Tanenhaus, eds. Lexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology, and Artificial Intelligence. San Mateo, CA: Morgan Kaufmann, 1988.
Smith, Edward E., and Daniel N. Osherson, eds. Thinking. Vol. 3 of An Invitation to Cognitive Science. 2nd ed. Cambridge: MIT P, 1995.
Smith, Frank. Understanding Reading: A Psycholinguistic Analysis of Reading and Learning to Read. 5th ed. Hillsdale, NJ: Erlbaum, 1994.
Smith, Linda B., and Esther Thelen, eds. A Dynamic Systems Approach to Development: Applications. Cambridge: MIT P, 1993.
Smith, Neil. “Dissociation and Modularity: Reflections on Language and Mind.” Banich and Mack 87-111.
Sober, Elliott. “The Evolution of Rationality.” Synthese 46 (1981): 95-120.
Soderholm, James, ed. Beauty and the Critic: Aesthetics in an Age of Cultural Studies. Tuscaloosa: U of Alabama P, 1997.
Sosa, Ernest, and Enrique Villanueva, eds. Epistemology. Philosophical Issues 14. Boston: Blackwell, 2004.
Sperber, Dan. Explaining Culture: A Naturalistic Approach. Oxford: Blackwell, 1996.
—. ed. Metarepresentations: A Multidisciplinary Perspective. New York: Oxford UP, 2000.
—. “Metarepresentations in an Evolutionary Perspective.” Sperber, Metarepresentations 117-37.
—. “Modularity and Relevance: How Can a Massively Modular Mind Be Flexible and Context-Sensitive?” Carruthers, Laurence, and Stich, Innate Mind: Structure 53-68.
—. “The Modularity of Thought and the Epidemiology of Representations.” Hirschfeld and Gelman 39-67. Rpt. as “Mental Modularity and Cultural Diversity” in Sperber, Explaining Culture 119-50 and in Whitehouse 23-56.
Sperber, Dan, and Deirdre Wilson. “Pragmatics, Modularity and Mindreading.” Mind and Language 17 (2002): 3-23.
—. Relevance: Communication and Cognition. 2nd ed. Oxford: Blackwell, 1995.
Spiro, Rand J., Bertram C. Bruce, and William F. Brewer, eds. Theoretical Issues in Reading Comprehension: Perspectives from Cognitive Psychology, Linguistics, Artificial Intelligence, and Education. Hillsdale, NJ: Erlbaum, 1980.
Spitzer, Leo. Linguistics and Literary History: Essays in Stylistics. Princeton: Princeton UP, 1948.
Spivak, Gayatri Chakravorty. Death of a Discipline. New York: Columbia UP, 2003.
Spivey, Michael. The Continuity of Mind. New York: Oxford UP, 2007.
Spolsky, Ellen. “Darwin and Derrida: Cognitive Literary Theory as a Species of Post-Structuralism.” Poetics Today 23 (2002): 43-62.
—. Gaps in Nature: Literary Interpretation and the Modular Mind. Albany: State U of New York P, 1993.
—. “The Limits of Literal Meaning.” New Literary History 19 (1988): 419-40.
Stainton, Robert J., ed. Contemporary Debates in Cognitive Science. Oxford: Blackwell, 2006.
Stanovich, Keith E. What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven: Yale UP, 2009.
—. Who Is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Erlbaum, 1999.
Starkey, Prentice, and Robert G. Cooper, Jr. “Perception of Numbers by Infants.” Science 210 (1980): 1033-35.
Starkey, Prentice, Elizabeth S. Spelke, and Rochel Gelman. “Detection of Intermodal Numerical Correspondences by Human Infants.” Science 222 (1983): 179-81.
Staton, Shirley F., ed. Literary Theories in Praxis. Philadelphia: U of Pennsylvania P, 1987.
Stearns, Peter N. “Expanding the Agenda of Cultural Research.” Chronicle of Higher Education 2 May 2003: B7-9.
Stein, Edward. Without Good Reason: The Rationality Debate in Philosophy and Cognitive Science. Oxford: Clarendon, 1996.
Stenning, Keith, and Michiel van Lambalgen. Human Reasoning and Cognitive Science. Cambridge: MIT P, 2008.
Stern, Laurent. Interpretive Reasoning. Ithaca: Cornell UP, 2005.
Steup, Matthias, and Ernest Sosa, eds. Contemporary Debates in Epistemology. Oxford: Blackwell, 2005.
Stevenson, Suzanne. “Bridging the Symbolic-Connectionist Gap in Language Comprehension.” Lepore and Pylyshyn 336-55.
Stewart, Ian. “Mathematical Recreations: A Partly True Story.” Scientific American 268.2 (1993): 110-12.
St. John, Mark F. “The Story Gestalt: A Model of Knowledge-Intensive Processes in Text Comprehension.” Cognitive Science 16 (1992): 271-306.


Storey, Robert. Mimesis and the Human Animal: On the Biogenetic Foundations of Literary Representation. Evanston: Northwestern UP, 1996.
Stove, D.C. Probability and Hume’s Inductive Scepticism. Oxford: Clarendon, 1973.
—. The Rationality of Induction. Oxford: Clarendon, 1986.
Stromswold, Karin. “Cognitive and Neural Aspects of Language Acquisition.” Lepore and Pylyshyn 356-400.
Stueber, Karsten R. Rediscovering Empathy: Agency, Folk Psychology, and the Human Sciences. Cambridge: MIT P, 2006.
Swain, Marshall. “Justification and the Basis of Belief.” Pappas 25-49.
—. Reasons and Knowledge. Ithaca: Cornell UP, 1981.
Sweetser, Eve. From Etymology to Pragmatics: Metaphorical and Cultural Aspects of Semantic Structure. Cambridge: Cambridge UP, 1990.
Swinburne, Richard. Epistemic Justification. Oxford: Clarendon, 2001.
—. An Introduction to Confirmation Theory. London: Methuen, 1973.
Swirski, Peter. Literature, Analytically Speaking: Explorations in the Theory of Interpretation, Analytic Aesthetics, and Evolution. Austin: U of Texas P, 2010.
Talbot, Margaret. “The Baby Lab.” New Yorker 4 Sept. 2006: 90-101.
—. “Birdbrain.” New Yorker 12 May 2008: 64-75.
Tanenhaus, Michael K., Gary S. Dell, and Greg Carlson. “Context Effects in Lexical Processing: A Connectionist Approach to Modularity.” Garfield 83-108.
Thagard, Paul. “Abductive Inference: From Philosophical Analysis to Neural Mechanisms.” Feeney and Heit 226-47.
—. The Brain and the Meaning of Life. Princeton: Princeton UP, 2010.
—. Coherence in Thought and Action. Cambridge: MIT P, 2000.
—. Conceptual Revolutions. Princeton: Princeton UP, 1992.
—. “Explanatory Coherence.” Behavioral and Brain Sciences 12 (1989): 435-67.
Thelen, Esther, and Linda B. Smith. A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge: MIT P, 1994.
Thompson, Clive. “New Word Order: The Attack of the Incredible Grading Machine.” Lingua Franca 9.5 (1999): 28-37.
Thompson, John B. “Notes on Editing and Translating.” Ricoeur 27-31.
Tomasello, Michael. Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge: Harvard UP, 2003.
—. “Introduction: A Cognitive-Functional Perspective on Language Structure.” Tomasello, New Psychology vii-xxiii.


—. ed. The New Psychology of Language: Cognitive and Functional Approaches to Language Structure. Mahwah, NJ: Erlbaum, 1998.
Tooby, John, and Leda Cosmides. Foreword to Baron-Cohen, Mindblindness xi-xviii.
Treiber, Frank, and Stephen Wilcox. “Discrimination of Number by Infants.” Infant Behavior and Development 7 (1984): 93-100.
Trimpi, Wesley. Muses of One Mind: The Literary Analysis of Experience and Its Continuity. Princeton: Princeton UP, 1983.
Tsimpli, Ianthi-Maria, and Neil Smith. “Modules and Quasi-modules: Language and Theory of Mind in a Polyglot Savant.” Learning and Individual Differences 10 (1998): 193-215.
Tucker, Michael, and Kathryn Hirsh-Pasek. “Systems and Language: Implications for Acquisition.” Smith and Thelen 359-84.
Turner, Mark, ed. The Artful Mind: Cognitive Science and the Riddle of Human Creativity. New York: Oxford UP, 2006.
—. The Literary Mind. New York: Oxford UP, 1996.
Tversky, Amos, and Daniel Kahneman. “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological Review 90 (1983): 293-315. Rpt. in Gilovich, Griffin, and Kahneman 19-48.
—. “Judgments of and by Representativeness.” Kahneman, Slovic, and Tversky 84-98.
Tweney, Ryan D. “Serial and Parallel Processing in Scientific Discovery.” Giere 77-88.
Tyson, Lois. Critical Theory Today: A User-Friendly Guide. New York: Garland, 1999.
Uttal, William R. The New Phrenology: The Limits of Localizing Cognitive Processes in the Brain. Cambridge: MIT P, 2001.
Vandevelde, Pol. The Task of the Interpreter: Text, Meaning, and Negotiation. Pittsburgh: U of Pittsburgh P, 2005.
Van Gelder, Timothy, and Robert F. Port. “It’s About Time: An Overview of the Dynamical Approach to Cognition.” Port and Van Gelder 1-43.
Vanhoozer, Kevin J., James K.A. Smith, and Bruce Ellis Benson, eds. Hermeneutics at the Crossroads. Bloomington: Indiana UP, 2006.
Vendler, Helen. The Music of What Happens: Poems, Poets, Critics. Cambridge: Harvard UP, 1988.
Ward, Lawrence M. Dynamical Cognitive Science. Cambridge: MIT P, 2002.
Weatherford, Roy. Philosophical Foundations of Probability Theory. London: Routledge, 1982.
Webster’s New International Dictionary of the English Language. 2nd ed. unabr. 1961.


Weinsheimer, Joel. Philosophical Hermeneutics and Literary Theory. New Haven: Yale UP, 1991.
Wells, Robin Headlam, and Johnjoe McFadden, eds. Human Nature: Fact and Fiction. London: Continuum, 2006.
Whitehouse, Harvey, ed. The Debated Mind: Evolutionary Psychology versus Ethnography. Oxford: Berg, 2001.
Wichmann, Eyvind H. Quantum Physics. New York: McGraw-Hill, 1967.
Wiese, Heike. Numbers, Language, and the Human Mind. Cambridge: Cambridge UP, 2003.
Wilkes, A.L. Knowledge in Minds: Individual and Collective Processes in Cognition. Hove, East Sussex: Psychology, 1997.
Williams, Bernard, and Alan Montefiore, eds. British Analytical Philosophy. London: Routledge, 1966.
Williams, Jeffrey J., ed. The Institution of Literature. Albany: State U of New York P, 2002.
Williamson, Timothy. Knowledge and Its Limits. New York: Oxford UP, 2000.
Wilson, Anne, and James Hendler. “Linking Symbolic and Subsymbolic Computing.” Connection Science 5 (1993): 395-414.
Wilson, Robert A. Boundaries of the Mind: The Individual in the Fragile Sciences. Cambridge: Cambridge UP, 2004.
Wilson, Timothy D. Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge: Harvard UP, 2002.
Wirth, Uwe. “Abductive Reasoning in Peirce’s and Davidson’s Account of Interpretation.” Transactions of the Charles S. Peirce Society 35 (1999): 115-27.
Wolfson, Susan J., and Marshall Brown, eds. March issue of the Modern Language Quarterly 61.1 (2000).
Wolterstorff, Nicholas. “Resuscitating the Author.” Vanhoozer, Smith, and Benson 35-50.
Woods, William A. “Multiple Theory Formation in Speech and Reading.” Spiro, Bruce, and Brewer 59-82.
Woodward, Amanda L., and Ellen M. Markman. “Constraints on Learning as Default Assumptions: Comments on Merriman and Bowman’s ‘The Mutual Exclusivity Bias in Children’s Word Learning.’” Developmental Review 11 (1991): 137-63.
Woodward, James, and Fiona Cowie. “The Mind is not (just) a System of Modules Shaped (just) by Natural Selection.” Hitchcock 312-34.
Wright, George, and Peter Ayton, eds. Subjective Probability. Chichester: Wiley, 1994.


Wuketits, Franz M. Evolutionary Epistemology and Its Implications for Humankind. Albany: State U of New York P, 1990.
Wynn, Karen. “Addition and Subtraction by Human Infants.” Nature 358 (1992): 749-50.
Xu, Fei, and Vashti Garcia. “Intuitive Statistics by 8-Month-Old Infants.” Proceedings of the National Academy of Sciences 105.13 (2008): 5012-15.
Zeki, Semir. “The Neurology of Ambiguity.” Turner, Artful Mind 243-70.
Zunshine, Lisa. Why We Read Fiction: Theory of Mind and the Novel. Columbus: Ohio State UP, 2006.

ABOUT THE AUTHOR

Kenneth B. Newell is an alumnus of the University of Massachusetts at Lowell and received a master’s degree and doctorate in English at Columbia University and the University of Pennsylvania, respectively. Before his retirement, he coordinated the Humanistic Studies Program at Christopher Newport University and taught English there as well as at several other universities: Drexel, Kansas, California at Los Angeles, Virginia Commonwealth, California State at Bakersfield, and Southern California. He is the author of Structure in Four Novels by H.G. Wells; Pattern Poetry: A Historical Critique from the Alexandrian Greeks to Dylan Thomas; Conrad’s Destructive Element: The Metaphysical World-View Unifying “Lord Jim”; New Conservative Explications: Reasoning with Some Classic English Poems; and scholarly articles mainly on early Modern British fiction.