Epistemological Studies 9783110319491, 9783110319149


English Pages 112 [121] Year 2009


Nicholas Rescher Epistemological Studies


Bibliographic information published by Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available in the Internet at http://dnb.ddb.de

North and South America:
Transaction Books, Rutgers University, Piscataway, NJ 08854-8042
[email protected]

United Kingdom, Eire, Iceland, Turkey, Malta, Portugal:
Gazelle Books Services Limited, White Cross Mills, Hightown, LANCASTER, LA1 4XS
[email protected]

Delivery for France and Belgium:
Librairie Philosophique J. Vrin, 6, place de la Sorbonne, F-75005 PARIS
Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18
www.vrin.fr

© 2009 ontos verlag
P.O. Box 15 41, D-63133 Heusenstamm
www.ontosverlag.com
ISBN 978-3-86838-048-4

© 2009. No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work.

Printed on acid-free paper, FSC-certified (Forest Stewardship Council). This hardcover binding meets the International Library standard. Printed in Germany by buch bücher dd ag.

Epistemological Studies

Contents

Preface
Chapter 1: INTELLIGENCE AND EVOLUTIONARY INNOVATION   1
Chapter 2: ON OVERSIMPLIFICATION AND THE GROWTH OF KNOWLEDGE   13
Chapter 3: VAGUENESS: SOME VARIANT APPROACHES   29
Chapter 4: UNDERDETERMINATION   45
Chapter 5: COGNITIVE COMPROMISE: On Managing Cognitive Risk in the Face of Imperfect/Flawed Information   57
Chapter 6: AUTHORITY   65
Chapter 7: AN EXPLANATORY CONUNDRUM   75
Chapter 8: THE MUSICAL CHAIRS PARADOX   81
Chapter 9: TRANSCENDENTAL ARGUMENTATION AND HUMAN NATURE   83
Chapter 10: A MULTITUDE OF WORLDS?   89
Name Index   109

PREFACE

The present book continues my longstanding practice of publishing occasional studies written for formal presentation and informal discussion with colleagues. They form part of a wider program of investigation of the scope and limits of rational inquiry in the pursuit of knowledge. I am grateful to Estelle Burris for helping me to put this material into a form suitable for publication.

Nicholas Rescher
Pittsburgh, PA
May 2009

Chapter 1
INTELLIGENCE AND EVOLUTIONARY INNOVATION

1. HOW DOES THE PRESENCE OF INTELLIGENCE CHANGE THE WORLD?

Certainty is hard to achieve in philosophical matters. But one thing looks to be for sure. The universe contains intelligent beings. Not, perhaps, very intelligent beings, but nevertheless beings who not only have the capacity for intelligent agency, but who do actually make use of it some of the time. Such beings do, or can, act on the basis of what they think, and do, or can, make an effort to align their thought with the world’s realities.

How did such beings get to be there? Essentially by evolutionary processes. First there was cosmic evolution through the developing complexity and diversification of physical and chemical processes. Then came biological evolution by variation and natural selection. And ultimately intelligent beings emerged—presumably because there was a viable niche for creatures whose survival advantage came through intelligence rather than various alternatives such as the profuseness of ants’ eggs or the swiftness of antelopes.

Evolution is nature’s innovator. Cosmic, biological, and cultural evolution—all bring massive novelties in their wake. There were no laws of chemistry in the first nanosecond of the universe after the big bang—only a boiling soup of subatomic stuff in which chemicals had not yet emerged. And similarly, there were no laws of cellular biology in the first billion years of our universe’s existence, nor laws of macroeconomics in its first ten billion. But with the emergence of new modes of process, new sorts of things continuously came into existence, and new modes of lawfulness arose in their wake.

Was the evolutionary emergence of intelligence fortuitous or unavoidable? Many theorists believe that it was inevitable because intelligence is so effective a survival mechanism in a complex and changeable environment. But be this as it may, once intelligence gains a foothold in the universe—by whatever mechanism or means—there arises the question of the
difference that its presence there makes. What dimensions of reality that would otherwise be missing came into existence in the world through the emergence of a cognitively aggressive and probing order of intelligence? This question sets the focal theme of the present discussion, which is predicated on the idea that nothing so fundamentally changes a world as the emergence of intelligent beings upon its stage.

So, how does higher-order intelligence change the universe? This is a philosophical issue which (somewhat surprisingly) rather few philosophers have addressed. It is a question that can be posed in many ways: What sort of significant novelty has come into existence with the evolutionary emergence of intelligence? What massive operational difference is there between an intelligence-containing universe and its intelligence-lacking variants? Such questions highlight varying aspects of one selfsame problem.

2. KEY INNOVATIONS

Some things that come into being only in the wake of intelligence are pretty trivial. For of course only intelligent, mind-endowed beings can play tic-tac-toe. But here we are not concerned with such minutiae. Only very significant, large-scale, and far-reaching macro-features are presently of interest. Our question is: What really big phenomena of portentous import and significance that were previously absent does the evolutionary emergence of higher-order intelligence on the cosmic scene bring into being?

(1) Cognition: Awareness of Facts

To begin with, a universe without intelligence is one from which knowledge as such is absent. Earlier on, there certainly were things to be known, but there was no knowledge of them. Absent intelligence, the world is a cognitive vacuum. But once thinking minds enter upon the scene, there now comes to be something rather different from the bare realities and facts, namely the thought-perspectives at issue with ideas, opinions, and views about them.
Intelligent beings—and they alone—are able to understand facts and features of the world about them. Indeed, intelligent beings can actually endeavor to realize a thought-created model of the universe. And not only is this something that intelligent beings can in theory do, it is something which they are, in the very nature of things, ultimately likely to endeavor to do.


It is perhaps not a leap beyond the bounds of imaginative conjecture to view an intelligence-containing universe as embarked on an endeavor to evolve a thought-controlled model of itself. Viewed in this light, nature might be seen as a self-modeling system, leading to something of a Hegelian vision of Nature as embodiment of an Intelligence (Geist) whose work—and perhaps even mission—it is to undertake a progressively developmental process in whose operation the world increasingly realizes a vast process of self-comprehension. But let us turn away from such flights of the imagination and restore our feet to more solid ground.

(2) Imagination: Projection of Possibilities

It should then be clear that not only is a mindless world bereft of a knowledge of actual facts, but it is even devoid of any surmise about possibilities. In such a world there will, no doubt, be possibilities, but there can be no contemplation of nor even speculation about them. Only intelligent beings can consider (unrealized) possibilities, and undertake the process of conditional (if-then) speculation and the mind-inaugurated projection of possibilities that will never be realized. The emergence of intelligence thus transmutes the universe from a manifold of fact into a theater of possibility-contemplation. It is only through the mediation of minds that unrealized possibilities can gain an ontological foothold of sorts in the domain of the world’s actualities.

(3) Evaluation: Assessment of Conditions

Then too there is evaluation. In the intelligence-antedating era there is simply physical eventuation: actions and interactions of various sorts. But now evaluation comes on the scene—be it positive or negative. (And this evaluation goes beyond the existing realities to encompass the possibilities as well.) Of course even a mind-bereft world has aspects that can be valued by a mind-possessing being who contemplates it from without, so to speak, by projecting hypotheses.
But actual valuing is something that can go on only in mind-containing worlds. And the same goes for normative judgments regarding what ought to be in contrast to what is. No doubt, even in a world without minds there can be things that are worthy of being valued—that is, which would be valued if there were duly perceptive minds. But
there can be no recognition or acknowledgement of this prospect, and so nothing that is actively being valued.

Now it is tempting to maintain—though this is not the place to argue for it—that while in a universe without intelligent beings there are circumstances of conditional value (in that various sorts of developments can be good for giraffes or elephants), it is only in a universe with intelligent beings that there are circumstances of categorical value, because what is good for intelligent beings is good, period. The appreciation of value—so it would be maintained—is only and alone able to effect the transmutation from relative to unconditional.

(4) Purposive Action and Free Decision

All sorts of things can be going on in a mind-bereft world, but deliberate, thought-guided, purposive action is not one of them. Rational purposive choice requires both a cognitive apprehension of cause-effect relations (“If I do A, B will ensue”) and evaluation (“I prefer X’s happening rather than Y’s”). In view of this circumstance, it happens that reason-guided choices made in the light of agent-inaugurated objectives—acts of free will as they are often called—are prospects that emerge only with the development of intelligence.

(5) Morality

Clearly, one of the key obligations of intelligent beings—be they humans or extraterrestrial aliens—is to satisfy the requirement, and indeed the need, for seeing oneself as something higher and worthier than mere animals—as beings who, being equipped with minds and spirits, thereby occupy a place of special worth and significance upon the world’s stage. And we do (and should!) incline to see immoral action as degrading and unworthy, as diminishing us in our own sight. In the face of unethical and unworthy action we can no longer see ourselves as the sorts of beings we would ideally like to think of ourselves as being. Looking down upon oneself with disapproval is something no-one finds anything but distasteful.
Psychopaths aside—and being one is clearly not an intelligent option—those who yield to the temptation of unethical, immoral, and antisocial behavior generally devote considerable psychic effort and energy to inventing excuses—excuses which, by and large, do not succeed in convincing even themselves.


(6) Deliberated Creativity, Artifice, and Art

Serendipity aside, cognitive and aesthetic creativity is a matter not just of novelty, but of deliberate innovation through imaginative thought. And this too is impossible in a mind-denuded universe. For thought-guided artifice and its deliberately projected end-products are all too clearly phenomena that can only arise in the presence of intelligent agents. And this means, in specific, that artifice and art—the purposeful creation of artifacts for the sake of use, enjoyment, enlightenment, or edification—is a venture that can only exist where intelligent beings are present.

(7) Spirituality

When we finite beings consider our own insignificance and impotence—our vulnerability in the face of the vast forces at work in the cosmos about us—we are bound to be struck by awe and wonder. Why are we here and what are we to make of the opportunities that our presence affords us are issues that any community of intelligent beings is going to confront sooner or later. Spirituality as one shall understand it here involves three components:

(I) an affective appreciation of the vastness and power of nature and its operative forces—“awed wonder” as it might be called;

(II) a sense of thankfulness, gratitude, and appreciation for one’s having some small share in the great drama of creation;

(III) a hope and wish that the things one holds near and dear will be treated kindly by those vast forces of Nature that lie outside of one’s own meager powers of control.

All this sort of thing can come into operation only with the emergence of intelligence and forms an important part of its constructive spontaneity.

3. THE DARK SIDE OF THE ISSUE

However, unfortunate though it is, there is also the other side of the coin. For with the emergence of intelligence, there also arises the prospect of its misuse. In fact, only intelligence-possessing beings are in a position to sin.
Deliberate evil doing in all its forms—willful mischief, vandalism, envy, schadenfreude, the seven deadly sins and their myriad congeners—all become possible only with the emergence of intelligence. But these negativities can be sidelined for present purposes. And this is so not because we propose to take the line of a “see-no-evil” Pollyanna, but because one must
distinguish between the use of intelligence and its abuse. Granted, all these modes of malfeasance also originate with intelligence, but they all go against the intelligent employment of intelligence. The reflective self-recognition by intelligence is lamentably absent throughout the sphere of such negativities.

What comes to the fore here is the prospect of a reflectively self-demanded use of intelligence in the evaluation of its own undertakings. The unintelligent use of intelligence leads to its abuse, and while these two possibilities—use and abuse—cannot but emerge conjointly, it is the former whose significance is paramount.

4. VAGRANT PREDICATES

At this point I want to narrow our purview a bit and focus on an aspect of the specifically cognitive sector of our problem. Thus far, we have surveyed a large panorama by looking at intelligence-driven innovation in a highly abstract and general way. But the ensuing discussion will now try to provide a more narrowly detailed example of something of substantial significance that emerges on the world stage with the origination of intelligence, namely unknowable fact. The example to be addressed will pivot on a question that meaningfully arises but nevertheless will unavoidably go unresolved—not through contingency, chance, or ignorance, but because it is in principle unanswerable. The question at issue is one that is clearly meaningful and appropriate, but yet effectively defies any prospect of resolution.

To see how this comes about, let us embark on a somewhat peculiar line of thought that makes a detour via the logico-semantics of predication. One can refer to a particular item in two distinctly different ways. The one proceeds directly, specifically, and individually by actually naming names. The other proceeds obliquely and sortally by characterizing the item as being of a certain type or kind (“the first American male born in the 18th century”).
Now a somewhat bizarre, but nevertheless highly interesting mode of reference occurs when an item is referred to in such a way that—while generically indicated—its specific identification is nevertheless flat-out precluded as a matter of principle. This phenomenon is illustrated by claims to the existence of:

—a thing whose identity will never be known by anyone,
—an idea that has never occurred to anybody,
—an occurrence that no-one ever takes note of,
—an integer that is never individually specified by anyone.

Given the ways of the world, there will undoubtedly be such things. But nevertheless, we obviously cannot possibly manage to give an example of them. For to indicate them individually and specifically as instances of the predicate at issue is ipso facto to unravel them as so-characterized items.1 Here direct identification of the item is impossible as a matter of principle.

The concept of an applicable and yet noninstantiable predicate comes to view at this point. This is a predicate F whose realization is noninstantiable because, while it is true in the abstract that this property is exemplified (so that (∃u)Fu will be true), nevertheless the very manner of this predicate’s specification makes it impossible to identify any particular individual u0 such that Fu0 obtains. Accordingly we may define:

F is a vagrant predicate iff (∃x)Fx is true while nevertheless Fx0 is false for each and every specifically identified x0.2

Such predicates are “vagrant” in the sense of having no known address or fixed abode: though they indeed have applications, these cannot be specifically placed—they cannot be pinned down and located in a particular spot. Predicates of this sort are such that while general principles indicate that there actually are items to which they apply, nevertheless no such items can possibly be individually and concretely identified. Consider, for example, the following predicates:

—being an ever-unstated (proposition, theory, etc.),
—being a never-mentioned topic (idea, object, etc.),
—being a truth (or fact) no-one ever realizes (learns, states),
—being someone whom everyone has forgotten,
—being the never-identified perpetrator of a certain crime,
—being an issue no-one has thought about since the 16th century.

Here the challenge of the question “What is even a single instantiation of this predicate?” is one that simply cannot be met. There doubtless are such things, but we cannot ever identify them. It transpires that, in asking for instances, we have before us questions which are genuine insolubilia in the medieval sense—questions which indeed have a correct answer, but one which cannot possibly be provided.3

5. COGNITIVE INACCESSIBILITY EMERGES WITH COGNITION ITSELF

But note that something peculiar is going on here. With those necessarily noninstantiable predicates that concern us now, case-specific noninstantiability is built into the very specification at issue. Giving an illustrative instance stands in outright logical conflict with the characterization at issue here. To identify such an item is thereby to unravel its specifying characterization, because cognitive inaccessibility is always involved here. By their very nature, vagrant predicates are cognitively linked.4

What is pivotal with vagrant predicates is just this feature: that they involve a cognitively geared specification which—like identification, comprehension, formulation, mention, etc.—is fundamentally epistemic in nature. And this is no accident, because the issue of cognition is itself always in play with vagrancy. And just like knowing itself, failures to know are something that can only be managed by a creature capable of cognitive and communicative performances. But since knowing and unknowing come into operation only where there are intelligent beings, it follows that such facts can emerge in a world only after intelligence has gained a foothold there. A state of affairs relating to the cognition of beings in the world can only arise after intelligent beings have come upon the scene. And viewed from this perspective, it transpires that predicative vagrancy springs into being only with the emergence of intelligence.
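Why instantiation is self-defeating here can be made explicit for one of these predicates. Take "being a truth that no-one ever knows" and write Kp for "p is known by someone at some time". The following Fitch-style sketch is an illustration added here, not the chapter's own argument:

```latex
% By general principles the predicate applies to some truth:
(\exists p)\,(p \wedge \neg Kp)
% But to produce a specific instance p_0, one would have to know
% that p_0 is an unknown truth:
K(p_0 \wedge \neg Kp_0)
% Knowledge distributes over conjunction:
Kp_0 \wedge K\neg Kp_0
% and since knowledge is factive, K\neg Kp_0 yields \neg Kp_0, whence:
Kp_0 \wedge \neg Kp_0 \quad \text{(a contradiction)}
```

So the predicate is applicable, yet any claim to a specifically identified instance refutes itself, which is exactly the vagrancy condition defined earlier.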
Cognitive features such as knowability or unknowability by its members can only be ascribed to a world’s facts if that world contains intelligent beings. On this basis, the existence of intelligent beings in a world is a precondition for the prospect of facts unknowable to that world’s intelligences. But why should it require the presence of intelligence in a world to bring the idea of vagrant predicates into operation? Simply because the very characterizations of such predicates involve cognitive processes.


Consider such predicates as:

• is a book
• is a coin

A world without intelligent beings could certainly contain objects physically indistinguishable from books or coins. But these would not be described correctly and appropriately as books or coins. A book is a certain kind of object made for the sake of reading. A coin is a certain kind of object made for the sake of exchange. In worlds where reading or exchange have no place, these objects could not exist as such. And much the same is true for unknowable facts—that is, facts unknowable to their world’s intelligences. In a world without intelligence there would be no intrinsically unknowable facts. The very idea would not make sense.

And so with the emergence of intelligence, there also emerges the spectrum of its potential malfunctions: error, confusion, delusion. That much is obvious and trivial. But what is less apparent, and decidedly more portentous, is that there will now come into being—and do so for the first time—facts about the world which its intelligences cannot possibly come to know—facts that are in principle unknowable to them.

We must, of course, acknowledge that we—those who carry on these deliberations—are ourselves part of the universe and members of its intelligentsia. If we were external observers contemplating a hypothetical universe from without, there would, of course, be no difficulty of principle about our knowing that a certain specific fact F is unknown to them—those who operate within that universe. The predicate

—being a fact that nobody in that universe knows

is certainly not uninstantiable with a universe contemplated ab extra. But what concerns us in these matters of vagrancy is unknowability to world members. And this is something that can only be so with a world that does indeed have intelligent members.
For, of course, when we hypothesize an intelligence that contemplates a world from without (ab extra) then, obviously, such an intelligent being can ascribe value, purpose, and significance to a world—and perhaps even do this correctly. But without the presence of intelligent beings within a
world, there is no way for it to actually (rather than merely hypothetically) manifest any such cognitive features. The spectator to a game of chess can consider and appraise certain moves, but he cannot actually make them. So it is only when the no-one/everyone at issue with predicative vagrancy also includes those who conduct these deliberations that the anomaly of an unspecifiability-in-principle arises.

The argumentation at issue here is based upon two premisses:

(1) When predicative vagrancy occurs in a world, then there will ipso facto be facts about it that are unknowable to its intelligences.

(2) Whenever intelligent beings exist in a world—but only then—will and must this world have predicatively vagrant features.

It follows from these considerations that as intelligence emerges in a world, there will (necessarily) be facts about it that are unknowable to its intelligences. Now when we consider states of the world prior to the emergence of intelligence, there is in theory nothing which—as a matter of principle—those eventually emerging intelligences cannot possibly find out, as it were, on the basis of traces left behind by the events in question. But once those finite intelligences are actually there, there will come to be facts relating to their own incapacity that they cannot possibly discover. Thus prior to the emergence of intelligence there are only unknown facts about the universe. But thereafter there can also come to be unknowable ones! For, of course, our concern here is with what may be unknowable about a world to the intelligences that live and function within it.

At this point one must be careful. Even a mind-bereft world would provide for learnable (or for that matter even surprising) facts, facts subject to the iffy description that IF there were a (duly powerful and appropriately situated) mind in the world, then the fact would be recognizable by it.
But this iffy status cannot be maintained with respect to vagrant predicates. For these predicates are by definition such that no mind of its world could possibly gain cognitive access. And this will have to be just as true for any mind that we would by hypothesis super-add to the population of the universe. For when a fact is by definition inaccessible to all of a world’s intelligent beings, we cannot alter its unknowability status by saying: “Ah, but if we added yet another intelligence to that world’s population then the fact in question might become knowable.”


A universe without intelligent beings will not be one that involves inherently unknowable facts. There is no reason of principle why intelligences should not be able to resolve directly reality-oriented and cognition-independent questions about Nature’s make-up and modus operandi. As far as the impersonal facts about the world’s ways are concerned, intelligence appears to enjoy a potential of “no holds barred.” But once the world begins to contain intelligent beings (finite intelligences) there will come to be facts about it that are not just unknown to those intelligences, but actually are even unknowable to them. It is only the emergence of intelligent beings that brings absolute unknowability into play via vagrant predicates. For what intelligence cannot do is to get a comprehensive grip on itself—and in specific on its own limits. Only in a world that contains intelligent beings will it make sense to say that there are certain things which that world’s intelligences can or cannot do.

6. CONCLUDING OBSERVATIONS

And so—back to the beginning! How does the emergence of intelligence change the world? The short answer is—revolutionarily. For there is now, suddenly, a place for self-awareness in the scheme of things—self-awareness in matters of thought and in action through recognizing the limitations that will inevitably afflict the condition of finite beings. Those finite minds will always be imperfectly informed about their own limits and limitations. However, since thinking and thought-guided acting are integral to the make-up of the world, there will be aspects of an intelligence-containing reality regarding which the world’s intelligence must of necessity remain imperfectly informed. Given the integration of thought into nature, an incompleteness of knowledge regarding the former carries in its wake an incompleteness of knowledge regarding the latter.
Strange though it may seem, it is only with the emergence of intelligence that the universe itself unavoidably becomes imperfectly intelligible. For the questions that defy the utmost efforts of intelligence are exactly those that relate to facts concerning the limits of its own operation. The irony of it is that the existence of intelligence in the world is a precondition of its own defeat.5


NOTES

1. We can, of course, refer to such individuals and even to some extent describe them. But what we cannot do is to identify them in the sense of the contrast between direct and oblique specification.

2. This is squarely at odds with the substitutional interpretation of quantifiers, which construes (∃x)Fx to mean that there indeed is an x0 for which Fx0 obtains. And it is in synch with an intuitionistic mathematics holding that to establish that there actually is some x for which Fx, it does not suffice to argue on general principles that (∃x)Fx obtains, but one must also show that a definite instance can be produced.

3. The “never”/“ever” and “anyone”/“no-one” that are definitively at issue in these specifications of predicative vagrancy need not look to the universe at large, from alpha to omega. For there we must perhaps acknowledge difficulties. But the range at issue could well be the limited spatio-temporal existence of the solar system. What is crucial for vagrancy is that those never/anyone specifications should include in their referential range the specific individuals by whom these claims are made. As long as the group G includes the claimant, the claim “X is an item that G-members can never identify” cannot be instantiated by G-members.

4. To be sure, one could (truthfully) say something like “The individual who prepared Caesar’s breakfast on the fatal Ides of March is now totally unknown.” But the person at issue here goes altogether unknown; that is, he or she is alluded to but not specified—individuated but not concretely identified. So I cannot appropriately claim to know who the individual at issue is, but only at best that a certain individual is at issue.

5. This essay was originally presented in the colloquium series of the Department of Philosophy at the University of Nevada at Las Vegas in November of 2008.


Chapter 2
ON OVERSIMPLIFICATION AND THE GROWTH OF KNOWLEDGE

1. ON COGNITIVE FAILINGS

Two importantly different sorts of “worlds” figure on the agenda whenever matters of knowledge are at issue, namely (1) the world as it actually is, the real world, and (2) the world as we think it to be, the phenomenal world. And as regards the latter, three sorts of prospective deficiencies loom: (1) the error of getting the facts wrong, (2) indeterminacy or indecision about how the facts stand among identifiable alternatives, and (3) the sheer ignorance or unknowing of not having any idea as to what the possibilities (let alone the facts) actually are. This is readily illustrated by contrasting:

   The Real World          The Phenomenal World (Our world picture)

      0  1  0                 ?   1   0
      0  1  1                 0  *0* *0*
      1  1  1                 1   1

Here our world-picture involves all three cognitive failings: indeterminacy (indicated by ?), ignorance (indicated by the blank), and error (betokened by the incorrect entries, here marked with asterisks). Yet note that, all of our cognitive failings to the contrary notwithstanding, we have a correct grasp of various of the world’s laws, specifically those having it that:

• Every row is 1-containing
• Every column is 1-containing


• Every diagonal is 1-contianing • Every diagonal is 0-containing • Every corner-set is 0-containing • Every corner-set is 1-containing These being all the general laws that do in fact obtain (relative to the various natural kinds at issue), it happens in the example that our cognitive grip on the world’s law structure is complete and correct, notwithstanding the imperfection of our knowledge of its phenomena. Our simple story carries an important lesson: ignorance and error in matters of detail certainly can but actually need not stand in the way of having correct information about the world’s lawful structure. 2. APPEARANCE AND REALITY More generally, those trigrammatic microworlds also afford a pathway to various significant points about how appearance can oversimplify reality. Thus suppose Reality differentiates order in rows while Appearance fails to do so by losing sight of this particular aspect of order. We will thus contrast: Reality

X O X
X X O
O X X

Appearance

X X X
X X X
O O O

Here appearance gets one fundamental fact right: every column contains two X’s and one O. However, the actual situation affords further detail which appearance loses in the confusion of a mistaken view that the columns are just alike, each with two X’s and one O. The result is clearly a very mistaken view of the real. A key epistemological lesson emerges here:


Confusion and conflation issue in a loss of discriminative detail. They both lead to an error that diminishes detail and discrimination by treating unlikes as likes. The errors of confusion and conflation are thus going to be errors of oversimplification.

Moreover, as such examples illustrate, the element of confusion that is pretty well inevitable in our perceptual knowledge of the real can readily spill over into the range of our conceptual knowledge as well. And this situation has significant consequences, preeminently the following two: • To an observer who is oblivious to various details of reality, things may well appear simpler and subject to a cruder lawful order than is the case. And— • Thanks to such nomic (over)-simplification certain actually obtaining phenomena can become inexplicable. 3. OVERSIMPLIFICATION AS A GATEWAY TO ERROR Oversimplification is a salient form of misinformation about reality because it involves errors of omission, occurring whenever someone leaves out of account features of an item that bear upon a correct understanding of its nature. Whenever we unwittingly oversimplify matters we have a blindspot where some facet of reality is concealed from our view. And this can do real damage. After all, the student who never progresses from Lamb’s Tales from Shakespeare to the works of the bard of Avon himself pays a price not just in detail of information but in the comprehension of significance. And the student who substitutes the Cliff’s Notes version for the work itself suffers a comparable impoverishment. To oversimplify a work of literature is to miss much of its very point. Whenever we oversimplify matters by neglecting potentially relevant detail we succumb to the flaw of superficiality. Our understanding of matters then lacks depth and thereby compromises its cogency. But this is not the worst of it. Oversimplification thus consists in the omission of detail in a way that is misleading in creating or inviting a wrong impression in some significant—i.e., issue-relevant—regard. In practice the line between beneficial simplification and harmful over-simplification is not easy to draw. Often as not it can only be discerned with the wisdom of retrospective hindsight. 
For whether that loss of detail has negative consequences and repercussions is generally not clear until after a good many returns are in.


Oversimplification is, at bottom, nothing but a neglect (or ignorance) of detail. Its origin lies in a lack of detail—in errors of omission. But that is not by any means the end of the matter. For such errors of omission all too readily carry errors of commission in their wake. Suppose, for example, that the reality of it is as per (R):

aaA aAA

And let it be that we “oversimplify” matters by failing to differentiate between a and A, viewing both alike simply as instances of one common α. We then arrive at the following model of reality: (M)

ααα ααα

And now on this basis we are led straightaway to conclude that “Both compartments are exactly the same in composition”—a clearly erroneous belief. And any fact whose explanation hinges on the difference between those two boxes becomes an unexplainable mystery for us.

4. OVERSIMPLIFICATION ENLARGED: COGNITIVE MYOPIA: CONFUSION AND CONFLATION AND THEIR CONSEQUENCES

Confusion and conflation are the two prime modes of oversimplification. The key ideas at issue here are to be understood as follows:

16

1. X confuses items x and y over the question-manifold Q iff, in answering the questions within this manifold, X often fails to distinguish between x and y.

2. X conflates items x and y over the question-manifold Q iff, in answering the questions within the manifold, X always sees both x and y as one selfsame z.
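The effect of conflation can be put in miniature computational form using the two-compartment example of the previous section. A sketch (the relabeling to α is, of course, an illustrative stand-in for the failure to distinguish):

```python
# Reality: two compartments with genuinely different mixtures of a and A.
reality = ["aaA", "aAA"]

def conflate(compartment):
    """See a and A alike, as one selfsame item (here written α)."""
    return compartment.replace("a", "α").replace("A", "α")

model = [conflate(c) for c in reality]

# The compartments really differ in composition...
assert sorted(reality[0]) != sorted(reality[1])
# ...but under conflation they become indistinguishable, inviting the
# erroneous belief that they are exactly alike.
assert model[0] == model[1]
```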


For the sake of illustration consider someone whose visual myopia is such that he is unable to tell 5 and 6 apart. On this basis an individual may well through conflation envision 56 as ÁÁ. Or again, the individual may through confusion envision 56 as 66. Such modes of cognitive myopia have significantly different ramifications for our grasp of the world’s lawful comportment. Suppose that we are in reality dealing with the perfectly regular series

R: 6 5 6 5 6 5 6 5 6 5 …

but due to the occasional confusion of a mild cognitive myopia we may then actually “see” this (be it by way of observation or conceptualization) as

A: 6 5 5 5 6 5 5 5 6 5 …

Observe that our inability to distinguish has here effectively transmuted a lawful regularity into a random disorder. The appearances then indicate (via “Mill’s Methods of Agreement and Difference”) that there is no causal correlation between reality and its appearance. The supposition of (mild) myopia thus induces a drastic disconnection between the two levels of consideration at issue, with reality’s lawful order giving way to lawlessness in the realm of appearance. Thus even so crude an example suffices to indicate that even if the world is possessed of a highly lawful order, this feature of reality may well fail to be captured in even a mildly myopic representation of it. And this in turn means that, given myopia, the world-view presented in our world-modeling may well be no more than loosely coupled to the underlying reality of things, thanks to the oversimplification that is almost inevitably involved. On the other hand, there is also the prospect of a severe cognitive myopia that results in a systemic conflation of reality in the setting of its conceptualization.
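The mild-myopia case is easily rendered computationally. The following is an illustrative sketch; the particular misread positions are an assumption chosen to reproduce the apparent series in the text:

```python
# Reality: a perfectly regular alternation of 6 and 5.
regular = [6, 5, 6, 5, 6, 5, 6, 5, 6, 5]

# Confusion: a mildly myopic observer occasionally misreads a 6 as a 5.
misread_at = {2, 6}  # assumed positions of the occasional slips
appearance = [5 if i in misread_at else v for i, v in enumerate(regular)]

# The lawful alternation has been transmuted into apparent disorder.
assert appearance == [6, 5, 5, 5, 6, 5, 5, 5, 6, 5]
assert appearance != regular
```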


For the sake of illustration let it thus be that the reality that confronts us has the random structure

R: 6 5 5 6 6 6 5 5 5 6 5 5 6 6 5 …

But let it also be that in representing this reality in our observations and/or conceptualization our view of the matter is so myopic that we cannot readily distinguish between 5 and 6: both simply look like a blur (Á) to us. Our random series is now representatively transmuted into the elegant uniformity of the series

A: Á Á Á Á Á Á …

Where reality is in fact random and discordant, its representation in our cognitive field of vision is the quintessence of lawful elegance. Here a realm whose physical comportment is in certain respects random and lawless may well be seen by its cognitively myopic observers as having a phenomenology that is deterministically lawful—a series whose random basis is hidden beneath a cloud cover of indistinguishability. And so, insofar as myopia makes matters appear more uniform than they actually are, it is virtually bound to lead to spurious regularities. The point is that there are not only the optical illusions of bodily vision but also comparable cognitive illusions where we exercise our mental vision to grasp the ways of the world. Our oversimplified models of reality distort our view of its modes of operation in ways that not only block various lawful regularities from our view but also project specious regularities onto the screen of mind. So why do we persist in oversimplifying? In essence, for methodological reasons. Whenever there is a blank in our knowledge, the natural and indeed the sensible thing to do is to fill it in in the most direct, standard, plausible way. We assume that the person we bump into in the street speaks English and say “oops, sorry”—even though this may well prove to be altogether unavailing. We regard the waiter in the restaurant as ours even where it is the brother who bears a family resemblance.
We follow the most straightforward and familiar routes up to the point where a DETOUR sign appears. We willingly and deliberately run the risk that oversimplification will lead us into error because this is part and parcel of the price demanded by operating a policy of inquiry that is effectively self-corrective.


5. WHY OVERSIMPLIFICATION: SCIENTIFIC PROGRESS AND COGNITIVE COMPLEXITY

But why do we ever oversimplify? Why not just go ahead and take those ignored complications into account? The answer is that in the circumstances we simply do not know how to. The situation is akin to that of the Paradox of the Preface. Recall that here an author writes: “I want to thank X, Y, and Z for their help with the material in the book. I apologize to the reader for the remaining errors, which are entirely mine.” One is, of course, tempted to object: “Why apologize for those errors? Why not simply correct them?” But of course he cannot do so because he does not know where those errors are located. And the situation with oversimplification is much the same. All too often we realize that we oversimplify; what we do not know is where we oversimplify. This is, in general, something that we can discern only with the wisdom of hindsight. We willingly and deliberately adopt the policy of allowing oversimplification to lead us into error time and again because we actually have very little choice about it. Oversimplification is inherent in the very nature of cognitive rationality as it functions in scientific inquiry. Empirical science is a matter of drawing universal conclusions (“theories” they are usually called) from the perceived facts of observation and experiment. But observation and experimentation are continuously enhanced by technological advance in the devices used to monitor and manipulate nature. The progress of science proceeds in the wake of an ever more powerful technology for the acquisition and processing of data which increasingly sophisticates the distinctions that have to be drawn and increasingly refines the theories employed in providing explanations.1 And a web of theory woven about a given manifold of data will not—and effectively cannot—be adequate to the situation that will obtain later on, after our body of information has become enhanced.
The other-things-equal preference for simpler solutions over more complex ones is sensible enough. Simpler solutions are less cumbersome to store, easier to take hold of, and less difficult to work with. Cognitive rationality accordingly combines the commonsensical precept, “Try the simplest thing first,” with a principle of burden of proof: “Maintain your cognitive commitments until there is good reason to abandon them.” But unfortunately oversimplification is inherent in the very nature of cognitive rationality as it functions in scientific inquiry.
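The combined precept of trying the simplest thing first and retaining it until the data force its abandonment can be sketched as a toy model-selection routine. Everything here is an illustrative assumption: the data, the candidate models (whose fitting is simply stipulated), and the error tolerance:

```python
# Data actually generated by a linear law, y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(6)]

def max_error(model):
    """Worst disagreement between a model and the data."""
    return max(abs(model(x) - y) for x, y in data)

# Candidates ordered from simplest to more complex (fits stipulated).
ys = [y for _, y in data]
candidates = [
    ("constant", lambda x: sum(ys) / len(ys)),  # best constant: the mean
    ("linear",   lambda x: 2 * x + 1),          # best line (exact here)
]

# Try the simplest first; complicate only when the data demand it.
chosen = next(name for name, m in candidates if max_error(m) <= 0.5)
assert chosen == "linear"  # the constant model fails, so we complicate
```

The point of the sketch is methodological, not statistical: the simpler hypothesis is abandoned only when it demonstrably fails to fit.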


6. OCKHAM’S RAZOR

Throughout rational inquiry—and accordingly throughout natural science—we naturally adopt the methodological principle of rational economy to “Try the simplest solutions first” and then to make this result do as long as it can. For rationality enjoins us to operate on the basis of Ockham’s Razor—considerations are never to be introduced where they are not required: complexity is never to be pushed beyond necessity. Our theories must be minimalistic: they must fit the existing data tightly—just because simplicity is a form of definiteness: once we start adding superfluous fifth wheels there are just too many places to attach them. And this means that as our data are amplified in the course of ongoing inquiry through new observations and experiments the previously prevailing theories will almost invariably become destabilized. Those old theories oversimplified matters: new conditions call for new measures, new data for more complex theories. It lies in the rational economy of sensible procedure that the history of science is an ongoing litany of oversimple old theories giving way to more sophisticated new ones that correct the oversimplification of the old. The principle of rational economy in matters of fact and existence is widely known as “Ockham’s Razor,” after William of Ockham (ca. 1285–1347), one of the ablest and most prolific of the medieval schoolmen. In arguing for nominalism in opposition to a Platonistic theory of universals, Ockham actually said that pluralitas non est ponenda sine necessitate. This statement of “Ockham’s Razor” became reasserted in various paraphrases as entia non sunt multiplicanda [or: non multiplicantur] sine necessitate [or: praeter necessitatem].2 The general idea of ontological economy at work in Ockham’s principle has been integral to the Aristotelian tradition ever since Aristotle’s own “Third Man” argument against the Platonic Forms.
And it is visibly present in earlier medieval scholastics such as Thomas Aquinas. However, the term “Ockham’s Razor” is itself something relatively recent. Its first-known occurrence is in the 1852 Lectures on Quaternions of the Scotch-Irish mathematician William Rowan Hamilton (inventor of the so-called “Hamiltonian” device in mathematics). The principle that underlies Ockham’s Razor admits of many applications and variations. For there are a great many contexts in which avoidable complications are counter-productive. And so, while Ockham’s own stricture was originally directed to the ontological issues in controversy in the nominalism/realism debate, the idea


that complications should not be multiplied beyond necessity has a far wider range that would include also principles, explanations, hypotheses, theories, taxonomies, and any number of other things.3 The fact, however, is that Ockham’s Razor is readily transmuted into a principle of rational economy at large. For what is basically at issue is really not an ontological thesis, but a fundamental principle of rational procedure to the general effect that complications (sophistications, elaborations) are never to be introduced beyond necessity but are always to be avoided insofar as possible. “Keep your elaborations simple to the greatest practicable extent” is another way of putting it. The idea is that complications always have to pay their way by way of greater factual efficacy. What sorts of considerations are there to validate such an enlarged construal of Ockham’s Razor? What is the rationale of its validity? Why should we always struggle to avoid adding a fifth wheel to the vehicle of our deliberations?

1. A fundamental principle of rational economy: Never pay more for something than you actually need to.

2. A fundamental fact of epistemology: In any situation there are always many different ways of moving onwards, and we never know which one to choose until guiding instructions come our way.

From the very start, the Ockham’s Razor principle has, throughout all of its formulations, been a practical precept of procedure. It relates not to what there is, but to what is to be done—or, rather, not done. It affirms that something-or-other is not to be posited, affirmed, accepted, etc. to a greater extent than is unavoidably necessary in the circumstances. It effectively tells us not to accept complications to a greater extent than absolutely necessary. And so, insofar as Ockham’s Razor can be established as a principle of parsimony it is epistemic rather than ontological economy that will have to be at issue.
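The comparative reading of the Razor, accepting no more complication than the practicable minimum rather than rejecting every avoidable complication outright, can be made concrete in miniature. The solutions and complications below are purely illustrative assumptions:

```python
# Each candidate solution, with the set of complications it accepts.
solutions = {"S1": {"c1"}, "S2": {"c2"}, "S3": {"c1", "c2"}}

complications = set().union(*solutions.values())

# Every individual complication is avoidable by *some* solution...
assert all(any(c not in s for s in solutions.values())
           for c in complications)

# ...yet no solution avoids every complication, so "never accept an
# avoidable complication" cannot be satisfied here.
assert all(len(s) > 0 for s in solutions.values())

# The workable precept: accept the minimum complication practicable.
best = min(solutions, key=lambda k: len(solutions[k]))
assert best in {"S1", "S2"}
```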
The idea fundamentally at issue with Ockham’s Razor can be both generalized and softened: “In whatever field of endeavor you are engaged, do not push the resources you put to work beyond the range of utility.” Or again: “Do not expand your resources—be they physical or intellectual—beyond the point of a compensatory return.” And perhaps in particular: “Do not adopt theoretical devices that fail to compensate for the added complexities they involve by doing a useful explanatory job.” What is at issue


throughout is not a matter of sophistication but one of little more than plain everyday common sense. The factor of necessity (“praeter necessitatem”) that is central to Ockham’s Razor introduces a difficulty. For suppose that there are two possible ways to resolve some critical issue, each of which involves a certain complication that the other dispenses with. Neither of these complications is necessary to problem solution—we could avoid each of them by accepting the other. But there just is no way of avoiding both of them. And so the Ockham’s Razor principle cannot be formulated as “Never accept an avoidable complication,” but has to be read as “Accept no more complication than the absolute minimum that is practicable in the circumstances.” Properly understood, Ockham’s Razor is thus not an ontological thesis about the nature of the world: it does not claim “The universe is simple and uncomplicated.” Nor yet is it an epistemological thesis about our knowledge: it does not claim “Our cognitive account or model of the universe must be a simple one.” It is, instead, a methodological principle of procedure to the effect “Do not make your model of the universe any more complex than it really has to be to account adequately for the phenomena!” It is, in sum, a procedural rule and not a substantive thesis—a directive regarding what is to be done, that has no substantive implications about the nature of what is. And so, while Ockham’s Razor is indeed a sound principle of cognitive practice, it nowise stands in the way of discerning the complexities of the world.

7. EVOLVING COMPLEXITY

An inherent impetus towards greater complexity pervades the entire realm of human creative effort. We find it in art; we find it in technology; and we certainly find it in the cognitive domain as well.4 And so we have no alternative to deeming science-as-we-have-it to afford an oversimplified model of reality.
And in consequence we have no real alternative to becoming enmeshed in the same shortcomings that beset oversimplification in general. The increasing complexity of our world-picture is a striking phenomenon throughout the development of modern science. Whatever information the sciences achieve is bought dearly through the proliferation of complexity. It is, of course, possible that the development of physics may eventually carry us to theoretical unification where everything that we class among the “laws of nature” belongs to one grand unified theory—one all-encompassing deductive systematization integrated even more tightly than that of Newton’s Principia Mathematica.5 But on all discernible indications the covers of this elegantly contrived “book of nature” will have to encompass a mass of ever more elaborate diversity and variety. And the integration at issue at the pinnacle of the pyramid will, further down, cover an endlessly expansive range encompassing the most variegated components. The lesson of such considerations is clear. In the course of scientific progress our knowledge grows not just in extent but also in complexity, so that science presents us with a scene of ever-increasing complexity. The history of science tells an ongoing story of taxonomic complexification. And it is thus fair to say that modern science confronts us with a cognitive manifold that involves an ever more extensive specialization and division of labor. The years of apprenticeship that separate a master from a novice grow ever greater. A science that moves continually from an over-simple picture of the world to one that is more complex calls for ever more elaborate processes for its effective cultivation. And as the scientific enterprise itself grows more extensive, the greater elaborateness of its productions requires an ever more intricate intellectual structure for its accommodation. Induction with respect to the history of science—a veritable litany of errors of oversimplification—soon undermines our confidence that nature operates in ways we would deem all that simple. The history of science is in fact a litany of ongoing corrections of flaws of oversimplification. For that history is an endlessly repetitive story of simple theories giving way to more complicated and sophisticated ones. The Greeks had four elements; in the 19th century Mendeleev had some sixty; by the 1900s this had gone to eighty, and nowadays we have a vast series of elemental stability states.
Aristotle’s cosmos had only spheres; Ptolemy’s added epicycles; ours has a virtually endless proliferation of complex orbits that only supercomputers can approximate. Greek science was contained on a single shelf of books; that of the Newtonian age required a roomful; ours requires vast storage structures filled not only with books and journals but with photographs, tapes, floppy disks, and so on. Of the quantities currently recognized as the fundamental constants of physics, only one was contemplated in Newton’s physics: the universal gravitational constant. A second was added in the 19th century, Avogadro’s constant. The remaining six are all creatures of 20th century physics: the speed of light (the velocity of electromagnetic radiation in free space), the elementary charge, the rest mass of the electron, the rest mass of the proton, Planck’s constant, and Boltzmann’s constant.6


Consider a further example. In the 11th (1911) edition of the Encyclopedia Britannica, physics is described as a discipline composed of 9 constituent branches (e.g., “Acoustics” or “Electricity and Magnetism”) which were themselves partitioned into 20 further specialties (e.g., “Thermoelectricity” or “Celestial Mechanics”). The 15th (1974) version of the Britannica divides physics into 12 branches whose subfields are—seemingly—too numerous for listing. (However the 14th (1960s) edition carried a special article entitled “Physics, Articles on” which surveyed more than 130 special topics in the field.) When the National Science Foundation launched its inventory of physical specialties with the National Register of Scientific and Technical Personnel in 1954, it divided physics into 12 areas with 90 specialties. By 1970 these figures had increased to 16 and 210, respectively. And the process continues unabated to the point where people are increasingly reluctant to embark on this classifying project at all. The long and short of it is that it would be naive—and quite wrong—to think that the course of scientific progress is one of increasing simplicity. The very reverse is the case: scientific progress is a matter of complexification because over-simple theories invariably prove untenable in a complex world. The natural dialectic of scientific inquiry continuously impels us into ever deeper levels of sophistication.7 In this regard our commitment to simplicity and systematicity, though methodologically necessary, is ontologically unavailing. Our increasingly sophisticated investigations invariably engender changes of mind moving in the direction of an ever more complex picture of the world. And so oversimplification of the real is inherent in the very nature of cognitive rationality as it functions in scientific inquiry.
It roots in the very nature of the venture as a project of human inquiry, as a matter of rational economy in the exploitation of data to ground our inferences and conjectures regarding Reality. Empirical science is a matter of drawing universal conclusions (“theories” they are usually called) from the perceived facts of observation and experiment. But observation and experimentation are continuously enhanced by technological advance in the devices used to monitor and manipulate nature. And our theories must be minimalistic: they must fit the existing data tightly. And so the web of theory that is woven about a given manifold of data will not—and effectively cannot—be adequate to the situation that obtains subsequently, after our body of information has become enhanced. It is—inevitably—oversimple. This means that as our data are amplified through new observations and experiments the previously prevailing theories will almost invariably become destabilized. Those old theories oversimplified matters: new conditions call for new measures, new data for more complex theories. It lies in the rational economy of sensible inquiry that the history of science is an ongoing litany of oversimple old theories giving way to more sophisticated new ones that correct the oversimplification of the old. There is no fact about the history of science that is established more decidedly than this: that new technology (be it material or conceptual) puts new data at our disposal and that new data manifest the oversimplification of earlier theories. Our methodological commitment to simplicity should not and does not stand in the way of an ongoing journey into complexity.

8. MODELING THE GROWTH OF KNOWLEDGE

Perhaps the most common analogy for the growth of knowledge is that of geographic exploration—of pushing further the frontiers of knowledge. However, this model has a serious shortcoming. It suggests (1) that acquiring further knowledge is pretty much a matter of adding more of the same—adjoining further terrain to that already known, (2) that overall the field of knowledge is—like the earth or moon or planet—finite in scope, and (3) that what is required for further progress is additional effort of the same sort as that expended heretofore. All of these suggestions implicit in the geographic-exploration model of scientific progress are decidedly misleading. If our analogy is to be more adequate, it had better be directed to cosmic exploration, which, after all, is (1) potentially boundless, (2) ongoingly more difficult in its requirement for ever different and more powerful technologies, and (3) ongoingly more demanding and expensive as we have to move ever further out from our earthly starting point into the increasingly remote vastness of space.
But perhaps the most instructive analogy for cognitive progress is that of the biblical story of the Tower of Babel. In the process of enhancing our knowledge in virtually any area of inquiry there is a complex interaction between quantity and quality. For to take one step forward in enhancing quality we must expend substantial efforts to enlarge the quantity of information, so that ever further steps become increasingly more demanding. And here stands the analogy. For to raise the top of that Babylonian spiral tower one story more we must enlarge its base very substantially, and so with every successive stage of the spiral. A kindred analogy obtains, that of the wedding cake. To raise that topmost piece by yet another step we must greatly enlarge the base and add at the bottom a larger layer, one which requires just about as much batter as everything that has gone before. And it also requires a very different sort of understanding. Real progress in knowledge is a matter of moving ahead not by steps but by leaps. But each of these leaps takes place from a platform constructed by a vast number of small steps created through a multitude of smaller contributions. The salient points are: (1) that the items of knowledge are not all of a piece; they come in very different grades or levels of significance and informativeness; (2) that there is an inverse relation between quantity and quality, in that there is always a much smaller volume of high-grade than low-grade information; (3) that there is an ineliminable interconnection here: there is no way of enlarging the body of high-grade information except through a substantial increase in low-grade information—the net that takes the big fish from the sea always takes a far larger volume of little fish along, and the bigger the big ones come to be, the larger the mass of smaller ones. So in this regard, knowledge is like a Babylonian tower: one can only elevate the top by a vastly larger increase at the base. And there is no simple proportionality here: it is not that twice as substantial is just twice as expensive, but rather many times as costly in effort and resources. This situation points toward the idea of a “technological level,” corresponding to a certain state-of-the-art in the technology of inquiry in regard to data-generation and processing. This technology of inquiry falls into relatively distinct levels or stages in sophistication—correlatively with successively “later generations” of instrumentation and manipulative machinery.
These levels are generally separated from one another by substantial (roughly, order-of-magnitude) improvements in performance in regard to such information-providing parameters as measurement exactness, data-processing volume, detection sensitivity, high voltages, high or low temperatures, and so on. The perspective afforded by such a model of technologically mediated prospecting indicates that progress in natural science has heretofore been relatively easy because we have been able—thanks to the evolutionary heritage of our sensory and conceptual apparatus—to operate with relative ease and freedom in exploring our own parametric neighborhood in the space of physical variables like temperature, pressure, radiation, and so on. But scientific


innovation becomes even more difficult—and expensive—as we push out further from our home base towards the more remote frontiers. And important lessons for scientific progress root here. For it means that the progress of science is—seen as a whole—a matter of ever-increasing vastness and complexity. No matter how sparse and economical that topmost tier of the scientific wedding cake (or pyramid or Babylonian tower) achieved in the course of scientific progress, it will have to rest on an ever more massive and bulky basis of underlying support. Overall, the growth of science will thus be a matter of ever-increasing complexity because earlier science almost inevitably oversimplifies matters.

NOTES

1. On these issues see also the author’s Scientific Progress (Oxford: Blackwell, 1978).

2. On Ockham and his work see William of Ockham, Opera philosophica et theologica, ed. by Gedeon Gál et al., 17 vols. (St. Bonaventure, NY: The Franciscan Institute, 1967–88); William of Ockham, Philosophical Writings, ed. by Philotheus Boehner (Indianapolis: Hackett, 1994); Marilyn McCord Adams, William of Ockham, 2 vols. (Notre Dame, Indiana: University of Notre Dame Press, 1989); Paul Spade (ed.), The Cambridge Companion to Ockham (Cambridge: Cambridge University Press, 1999); and Jan P. Beckmann, Wilhelm von Ockham (München: Beck Verlag, 1995).

3. On the multifaceted nature of Ockham’s Razor, see E. C. Barnes, “Ockham’s Razor and the Anti-Superfluity Principle,” Erkenntnis, vol. 53 (2000), pp. 353–74.

4. An interesting illustration of the extent to which lessons in the school of bitter experience have accustomed us to expect complexity is provided by the contrast between the pairs: rudimentary/nuanced; unsophisticated/sophisticated; plain/elaborate; simple/intricate. Note that in each case the second, complexity-reflective alternative has a distinctly more positive (or less negative) connotation than its opposite counterpart.

5. See Steven Weinberg, Dreams of a Final Theory (New York: Pantheon, 1992). See also Edoardo Amaldi, “The Unity of Physics,” Physics Today, vol. 26 (September 1973), pp. 23–29. Compare also C. F. von Weizsäcker, “The Unity of Physics,” in Ted Bastin (ed.), Quantum Theory and Beyond (Cambridge: Cambridge University Press, 1971).

6. See B. W. Petley, The Fundamental Physical Constants and the Frontiers of Measurement (Bristol and Boston: Hilger, 1985).


Nicholas Rescher • Epistemological Studies

7. On the structure of dialectical reasoning see the author’s Dialectics (Albany, NY: State University of New York Press, 1977), and for the analogous role of such reasoning in philosophy see his The Strife of Systems (Pittsburgh: University of Pittsburgh Press, 1985).

References

Barnes, E. C., “Ockham’s Razor and the Anti-Superfluity Principle,” Erkenntnis, vol. 53 (2000), pp. 353–74.

Kitcher, P., The Advancement of Science (New York and Oxford: Oxford University Press, 1993).

Nolan, D., “Quantitative Parsimony,” The British Journal for the Philosophy of Science, vol. 48 (1997), pp. 329–43.

Quine, W. V., “On Simple Theories of a Complex World,” in his The Ways of Paradox (New York: Random House, 1966).

Sober, E., “The Principle of Parsimony,” The British Journal for the Philosophy of Science, vol. 32 (1981), pp. 145–56.

Sober, E., Reconstructing the Past: Parsimony, Evolution and Inference (Cambridge, MA: MIT Press, 1988).

Sober, E., “Let’s Razor Ockham’s Razor,” in D. Knowles (ed.), Explanation and its Limits (Cambridge: Cambridge University Press, 1990).

Walsh, D., “Occam’s Razor: A Principle of Intellectual Elegance,” American Philosophical Quarterly, vol. 16 (1979), pp. 241–44.


Chapter 3

VAGUENESS: SOME VARIANT APPROACHES

1. INTRODUCTION

Vagueness is a prime source of paradox. Vague terms have a more or less well-defined central core of application, surrounded by a large penumbra of indefiniteness and uncertainty. And since a vague term T will automatically have a complement, non-T, that is so as well, there will inevitably be a nebulous region of ambivalent overlap between T-situations and non-T situations. Here matters seem to stand both ways, so that a paradoxical inconsistency arises. The most familiar ways of addressing the well-known paradoxes of vagueness call for the use of heavy machinery, requiring either a nonstandard mode of reasoning (adopting a multi-valued logic, abandoning the Law of Excluded Middle) or a non-standard semantics (abandoning the Principle of Bivalence, accepting truth-value gaps), or both. There is, however, also a different approach to vagueness which leaves the machinery of classical logic and of standard semantics pretty much intact, and lets the burden of paradox-resolution be borne by strictly epistemological considerations. Unavailable information rather than deficient theorizing is here seen as the crux of the difficulty. The present discussion will explore the promise of this variant approach.1

2. THE SORITES PARADOX AND ITS PROBLEMS

The problem of vagueness has a long history. Among the ancient Greeks, Eubulides of Megara (b. ca. 400 BC) was the most prominent and influential member of the Megarian school of dialecticians, succeeding its founder, Euclid of Megara, a pupil of Socrates, as its head.2 Eubulides did more to promote concern for the paradoxicality of vagueness than any other single thinker in the history of the subject. He is credited with seven important paradoxes: The Liar (pseudomenos), The Overlooked Man (dialanthanôn), Electra and her Brother, The Masked Man (egkekalummenos), The Heap (sôritês), The Horns (keratinês), and The Bald Man (phalakros). All of them pivot on issues of vagueness or equivocation. What here particularly concerns us among these ancient puzzles is the “Paradox of the Heap”—the Sorites Paradox (from the Greek sôros = heap). It is posed in the following account:

A single grain of sand is certainly not a heap. Nor is the addition of a single grain of sand enough to transform a non-heap into a heap: when we have a collection of grains of sand that is not a heap, then adding but one single grain will not create a heap. And so by adding successive grains, moving from 1 to 2 to 3 and so on, we will never arrive at a heap. And yet we know full well that a collection of 1,000,000 grains of sand is a heap, even if not an enormous one.3

Throughout the ages, theorists have diagnosed the problem at issue here by locating its difficulty in vagueness, thereby affiliating it to a vast panoply of similar puzzles. (Example: a newly sharpened bread knife is not dull, and cutting a single additional slice of bread with a knife that is not dull will not dull it. Yet when the knife has cut a million slices, it will be dull. Or again: if you are still on time for an appointment, the delay of a nanosecond will not make you late, and yet a great multitude of such delays engenders lateness.) The guiding idea is that in all such cases the pivotal concept—be it “heap” or “dull” or “late”—is vague in that there is no sharp and definite cut-off point between the IN and OUT of its application. The “borderline” at issue is not exactly that, but rather a blurred band that is imprecise, nebulous, indefinite, inexact, or some such. And just this is seen as the source of difficulty. To come to grips with the core of the problem, let H(n) abbreviate the thesis “A unified collection of n grains of sand is a heap.” We can then formalize the premisses of the Sorites paradox as follows:

(1) ~H(2) (“Two grains do not form a heap.”)

(2) (∀n)[~H(n) → ~H(n + 1)] (“If n grains are insufficient to form a heap, then adding just one will not mend matters.”)

(3) H(1,000,000) (“A million grains will form a heap.”)

Starting out from premiss (1), repeated application of (2) will yield the negation of (3). So those three premisses are inconsistent. And yet individually considered they all look to be plausible. Hence the paradox. How is it to be resolved? Since premisses (1) and (3) are incontestable, it is clearly premiss (2) that will have to bear the burden of doubt. But in rejecting (2) we will, by classical logic’s Law of the Excluded Middle, be saddled with its negation, namely:

(4) (∃n)[~H(n) & H(n + 1)] (“For some integer n, n grains fail to make a heap, but n + 1 are sufficient.”)

But now if this is accepted, grave problems seem to follow, for by the widely favored Substitutional Construal of Existential Quantification we will have the principle:

(S) If (∃x)Fx, then there must be a particular value x0 of the variable x for which Fx0 obtains.

And if this is so, then there will be a precise and identifiable transition point—a particular and specifiable integer N for which not-(2) holds good. And so we have:

(5) For some particular, specific integer N there obtains: ~H(N) & H(N + 1).

This upshot appears to be altogether counterintuitive and unacceptable. But, nevertheless, we seem to have a natural and inevitable transit by standard logic from the rejection of (2) to an acceptance of (4) and thence via (S) to (5). Where does this unpalatable result leave us? To block this chain of reasoning most theorists have proposed to embargo the move from not-(2) to (4) by some maneuver or other. Mathematical intuitionists propose to accomplish this by prohibiting the move from the refutation of a universal claim to the maintenance of an existential one. Supporters of a “fuzzy” logic propose to abandon the classical laws of excluded middle and tertium non datur. Against such approaches, however, the present discussion maintains the availability of another, logically far less radical alternative—an alternative to which—so it appears—one must in any case resort on other grounds. This alternative approach pivots on bringing the idea of vagrant predication into operation.
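The joint inconsistency of premisses (1)–(3) can be exhibited mechanically. The sketch below is an illustration added here, not part of the text; the function name `not_heap` is ours. It simply iterates premiss (2) forward from premiss (1):

```python
# H(n) abbreviates "a unified collection of n grains of sand is a heap".
# Premiss (1): ~H(2).  Premiss (2): ~H(n) implies ~H(n + 1).
# Premiss (3): H(1,000,000).

def not_heap(n: int) -> bool:
    """Truth of ~H(n) as forced by premisses (1) and (2) alone."""
    status = True  # premiss (1): two grains are not a heap
    for _ in range(2, n):
        status = status  # premiss (2): ~H(k) carries over to ~H(k + 1), every time
    return status

# Premisses (1) and (2) together force ~H(1,000,000) ...
assert not_heap(1_000_000) is True
# ... which is the flat negation of premiss (3): the premisses are inconsistent.
print("premisses (1)-(3) cannot all be true")
```

The deliberately idle loop body mirrors the point of the paradox: premiss (2) gives heapness no opportunity ever to begin.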


3. VAGUENESS AS VAGRANCY

An important albeit eccentric mode of reference occurs when an item is referred to obliquely, in such a way that, as a matter of principle, any and all prospect of its specific identification is precluded. This phenomenon is illustrated by claims to the existence of:

—a thing whose identity will never be known,
—an idea that never has or will occur to anybody,
—a person whom everyone has utterly forgotten,
—an occurrence that no-one ever mentions,
—an integer that is never individually specified.

These items are all referentially inaccessible: to indicate them concretely and specifically as bearers of the predicate at issue is straightaway to unravel them as so-characterized items.4 Yet one cannot but acknowledge that there are such items, notwithstanding the infeasibility of identifying them. The concept of an applicable and yet nevertheless uninstantiable predicate comes to view at this point. The realizations of such a predicate F will unavoidably go unexampled. For while it holds in the abstract that the property at issue is indeed exemplified—so that (∃u)Fu will be true—nevertheless the very manner of its specification renders it impossible to specify any particular individual u0 such that Fu0 obtains. Such predicates are “vagrant” in the sense of having no known address or fixed abode. Despite their having applications, these cannot be specifically instanced—they cannot be pinned down and located in a particular spot. So on this basis we may define:

F is a vagrant predicate iff (∃u)Fu is true while nevertheless Fu0 is false for each and every specifically identified u0.

Predicates of this sort will be such that, while general principles show that there indeed are items to which they apply, nevertheless it lies in their very nature that such instances can never be identified.5 It lies in the very make-up of their specification that when F is vagrant, then Fx0 is a contradiction in terms where x0 is a specifically identified item—an incoherent, meaningless contention. And this is a very real phenomenon, seeing that such predicates as:

—being a person who has passed into total oblivion,
—being a never-formulated question,
—being an idea no-one any longer mentions,

illustrate this phenomenon. Throughout such cases, specifically identified instantiation stands in direct logical conflict with the characterization at issue. To identify an item instantiating such a predicate is thereby to contradict its very characterization.6 It is this conception of predicative vagrancy that will provide the key to the presently contemplated approach to vagueness.7

4. VAGRANCY ROOTS IN EPISTEMOLOGY

With vagrant predicates the existence of exemplifications may be an ontological fact, but this is offset by the no less firm epistemological fact that the identification of such exemplifying instances is simply impossible. The impossibility lies not in “being an F” as such, but in “being a specifiably identifiable F.” Difficulty lies not with F-hood as such, but with its specific application—not with the ontology of there being an F but with the epistemology of its apprehension in individual cases. The problem is not with the indefinite “something is an F” but with the specific “this particular item is an F.” Accordingly, vagrant predicates mark a cognitive divide between reality and our knowledge of it. The crux is now a matter of epistemic characterization. Now in the abstract and formalistic reasonings of logic or mathematics—where predicates are cast in the language of abstraction—cognitive operators of the sort at issue in predicative vagrancy simply have no place. Here there is no place for epistemic characterization, and so one will never encounter vagrant predicates. For, in such contexts, matters of cognition are never invoked: we affirm what we know but never claim that we know.
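The defining clause can be mimicked in a toy model. Everything in the sketch below (the registry of specified items, the sample integers) is a hypothetical illustration, not part of the text's apparatus: the predicate "is an integer that has never been individually specified" holds of something, yet producing a witness defeats it.

```python
# Toy model of a vagrant predicate F.
# F(u): "u is an integer that has never been individually specified".
specified = set()  # every integer anyone has ever singled out

def F(u: int) -> bool:
    """True iff u has never been individually specified."""
    return u not in specified

def specify(u: int) -> int:
    """Singling out u as a would-be witness records it as specified."""
    specified.add(u)
    return u

# (Eu)Fu is true: only finitely many integers ever get specified,
# so unspecified ones certainly exist ...
assert F(123456789)  # this integer has not been specified in our toy run
# ... yet exhibiting a particular witness u0 falsifies F(u0):
candidate = specify(17)
assert F(candidate) is False
print("F is exemplified, yet no identified instance survives identification")
```

The asymmetry is exactly the one the definition states: the existential claim stands, while every specifically identified candidate fails.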
However, with matters of empirical fact the situation can be very different. For in those matters of vagrancy that now concern us, cognitive inaccessibility is built into the specification at issue. Here being instantiated stands in direct logical conflict with the characterization at issue, just as with:

—being a sand grain of which no-one ever took note,
—being a person who has passed into total oblivion,
—being a never-formulated question,
—being an idea no-one any longer mentions.

To identify an item of this sort is thereby to unravel its specifying characterization. The difference between predicate vagrancy and its contrary mirrors the contrast between:

• generic knowledge: it is known that something has F: K(∃x)Fx

• specific knowledge: something that has F is known about; that is, one knows of some specific thing that it has F: (∃x)KFx

Here K can be read either as the indefinite “It is known (by some) that” or alternatively as the egocentric “I know that.” In the former case it is merely known that F has application; in the latter case one is in a position to identify a specific example of F-application—to adduce a known instance of F. From the logical standpoint, then, the issue comes down to the relative placement of the existential quantifier and the cognitive operator.

5. A VAGRANCY APPROACH TO VAGUENESS

And now back to vagueness. Wherever it functions, there is no viable way of separating the INs from the OUTs. But here one can take either an ontological or an epistemic approach. The former effectively says “there is no definite boundary”; the latter says “there indeed is a definite boundary, but there is no practicable way of locating it, no feasible way of noting where it lies.” The one denies the existence of boundaries, the other their identifiability. In the case of the heap paradox these perspectives afford two possibilities. The one consists in flat-out denying the thesis:

(∃n)[~H(n) & H(n + 1)]

But yet another alternative approach proceeds by letting this contention stand while blocking the move from it to:

There is a particular, determinable value N of the variable n for which the preceding contention holds.

In effect we now bring the concept of vagrant predicates to bear. By treating vagueness as vagrancy we effectively block the Heap Paradox and its congeners. For once the pivotal predicate which characterizes a transition from non-heap to heap is seen as vagrant, the whole idea of locating that problematic transition value vanishes from the scene. The two conceptions—vagueness and vagrancy—can thus be seen as functionally symbiotic. To be sure, an approach to vagueness along these lines involves a nonstandard handling of the issue of a transition point between the INs and the OUTs. For the traditional approach to such boundaries is the ontological contention that they do not exist as such (i.e., as actual boundaries), but are to be replaced by penumbral regions (whose boundaries are themselves penumbral in turn—all the way through). And this means that there will fail to be a “fact of the matter” in regard to being IN or being OUT, so that the logical principle of tertium non datur has to be abandoned. With vagueness as standardly conceived there will be a region of indeterminacy as between the INs and the OUTs, but this region is, as it were, penumbral. It will not itself have sharp, razor-edged boundaries, but must be nebulous: the boundary between IN and INDETERMINATE (and again between INDETERMINATE and OUT) will itself be comparably indeterminate (penumbral, “fuzzy”) once more. The absence of clear transitional borders will hold “all the way through,” so to speak.
For this reason, a three-valued logic of TRUE, FALSE and INDETERMINATE will not do the job that is needed here. Any “fuzzy logic” adequate to the task of accommodating vagueness must be infinite-valued, with never-ending room for shades and gradations. Pretty complex logical machinery needs to be brought to bear. By contrast, our present vagrancy-based approach takes an epistemological line. It does not call for denying that there is such a thing as a (classically conceived) boundary. And it does not deny that any given item either is IN or not. In sum, it does not conflict with the idea that facts are at issue here. But what it does insist upon is that these facts are in principle undeterminable. For the predicate

—being the boundary between IN and OUT

is now classed as vagrant. The correlative shift from ontology to epistemology leaves traditional logic pretty much intact. The vagrancy-based approach to vagueness pivots on the critical distinction between the located and the locatable. As it views the matter, there indeed is (ontologically, so to speak) a sharp and clear boundary between the INs and the OUTs, while there is (epistemically, so to speak) no possible way of locating it. In taking this line, the recourse to predicative vagrancy shifts the burden from the ontological to the epistemological side of things. The advantage of such a strategy is that it makes it possible to keep in place a classically binary logic and to retain the classical principles of excluded middle and tertium non datur. The only innovation needed—and one that will be required in any case—is to accept the prospect of vagrant predication. What we have here is the anomaly of a boundary (as between being a heap and a non-heap, a sharp or dull knife, etc.) representing an IN/OUT demarcation that is inherently invisible. Such a boundary exists—so it is held—but remains inherently unidentifiable. Viewed from this perspective, vagueness emerges as a product of insufficient cognition. The indefiniteness at issue is now ascribed not to reality’s indecisiveness, but rather to that of our epistemically problematic concepts—as reflected in the indefiniteness of vagrant predicates.
And so, while the standard view of vagueness sees the separation of vaguely bounded regions as a matter of indefiniteness—the result of the absence of definite boundaries—the present nonstandard approach combines acknowledging the existence of boundaries with an insistence on their (epistemic) unlocatability. The positions are very different, but their net effect is in one respect the same: no specifiable boundaries.


6. FURTHER PERSPECTIVES

There is a multitude of examples of objects that are real but unidentifiable. As regards the future, the person who will win the 2020 U.S. presidential election is for sure currently alive and active among us, but cannot presently be identified. And as regards abstractions, there must exist an unprovable arithmetical theorem whose Gödel number is the lowest—but this too cannot possibly be identified. Our present treatment of vagueness extends this actual-but-unidentifiable approach to those otherwise nebulous boundaries involved with vagueness. After all, one must avoid equating nonspecifiability with nonexistence. For as we have seen time and again, vagrant predicates, though uninstantiable by us, need not in themselves be uninstantiated. There will certainly be (some) totally forgotten people, though none of us can possibly provide an example. And analogously, it could be held that there indeed is a sharp boundary between heaps and non-heaps (of sand grains of a given size) even though it is in principle impossible ever to say just where this boundary lies. It is concealed in a cognitive blind-spot, as it were.8 For while—from such a perspective—there indeed is a transition, and even a transition point, nevertheless this is something that simply cannot possibly be fixed upon and identified. Consider, for example, a color strip of distinct compartments as per:

C1 | C2 | C3 | C4

where adjacent compartments are visually indistinguishable in point of phenomenal color:

(∀i)[P(ci) = P(ci + 1)]

Nevertheless, the situation is such that there will be notable differences among sufficiently remote compartments. Thus we will have:

P(c1) ≠ P(c100)

But where is one to place the transition between P(c1) and P(c100)? Where does P(c1) end and where does P(c100) begin? Here we have exactly the same problem as with heaps. And exactly the same sort of solution looms before us, with a resort to predicative vagrancy able to do the needed work.9
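The color-strip situation can be modeled discretely. In the sketch below the numeric values and the discrimination threshold `JND` ("just noticeable difference") are illustrative assumptions, not anything fixed by the text; the point is only that pairwise indistinguishability fails to be transitive:

```python
# Discrete model of the color strip: compartments c1..c100 whose adjacent
# phenomenal differences fall below a (hypothetical) discrimination threshold.
JND = 1.5
strip = [float(i) for i in range(1, 101)]  # P(c1), ..., P(c100)

def indistinguishable(a: float, b: float) -> bool:
    """Phenomenal sameness: the difference is too small to notice."""
    return abs(a - b) < JND

# Every adjacent pair looks the same ...
assert all(indistinguishable(strip[i], strip[i + 1]) for i in range(99))
# ... and yet the endpoints differ plainly: P(c1) != P(c100).
assert not indistinguishable(strip[0], strip[99])
print("adjacent sameness, remote difference: no locatable transition")
```

As the surrounding discussion notes, the difficulty here is exactly the heap's: there is no identifiable compartment at which the color "changes."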


Now on the present epistemic perspective, the crux of vagueness is that while one knows that there is a transition point between IN and OUT, nevertheless one cannot possibly manage to locate it. And just this represents a fundamental aspect of vagueness in general: there just is no way of saying at just what point predicate-application begins and where it ends. We know that a cross-over is eventually reached, but cannot possibly say just where it lies.

7. THE EPISTEMOLOGICAL TURN

Such a treatment of vagueness takes the line that there indeed is a boundary between the INs and OUTs in matters of vagueness, so that one can maintain:

(I) (∃B)[B marks the boundary between IN and OUT]

Nevertheless, there is no way of fixing this boundary, no way of determining just exactly where it lies. There is no prospect of identifying a particular value B0 of the variable B such that:

(II) B0 marks the boundary between IN and OUT.

From the ontological/existential point of view the existence of a boundary is acknowledged as per (I). But from an epistemological/cognitive point of view any and all possibility of locating this boundary—of determining or specifying it—is precluded. Just this is the characteristic situation of predicative vagrancy. As adumbrated above, the crucial difference here is that between the acceptable indefiniteness of:

K(∃B)(B marks the boundary between IN and OUT)

and the unacceptable:

(∃B)K(B marks the boundary between IN and OUT)

And in viewing the matter as one of vagueness, the existence of a boundary point is conceded, but any and all prospect of its specifiability is denied. So viewed, the ultimate responsibility for the indefiniteness of vagueness lies not with what is at issue in our discourse, but rather in the imperfection of our knowledge: “the fault is not in our stars, but in ourselves,” in that our very vocabulary precludes exact knowledge by being indefiniteness-friendly. The crux of such an approach to vagueness is that the descriptive qualifier “is a transition point between IN and OUT” is to be seen as a vagrant predicate—it applies someplace, but we know not where: items may well fall into the indeterminate “just can’t say” region. (The boundaries of that indeterminate region will themselves be specified by vagrant predicates.) In principle undecidable propositions occur not just in mathematics, but in the factual domain as well. But just what is the pay-off difference between saying that there just is no boundary and saying that there is one but it is altogether unidentifiable? Simply and exactly the difference between the epistemic and the existential. It is one thing to say that there is nothing in the box and quite another to say that there is no way for anyone to know what it contains. (Think of the magic box—impenetrable to external scanning—whose content is annihilated by opening the lid.)

8. ADVOCATES OF THE VAGRANCY APPROACH

The epistemological treatment of vagueness at work in the vagrancy approach has in recent years been espoused by Roy Sorensen and Timothy Williamson. Thus, in his interesting and instructive book on Vagueness,10 Williamson set out what he called “the epistemic view” of vagueness—a view whose leading idea is that vague terms, such as “bald” or “heap,” do actually have a definite boundary, the problem merely being that we cannot know what it is. Within the sequence of statements

2 sand grains don’t make a heap
3 sand grains don’t make a heap
. . .
1,000,000 sand grains don’t make a heap

there will be a smallest n at which the switch-over from true to false occurs. It is just that we do not—and indeed cannot—know what it is.
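The structural point here can be checked in miniature: any sharp reading of "heap" validates premisses (1) and (3) while yielding exactly one switch-over. The sketch below is illustrative only; the hidden threshold `_HIDDEN_N` is a hypothetical stand-in for the boundary that, on the epistemic view, exists but cannot be known:

```python
import random

# Epistemic-view sketch: suppose "heap" has a sharp but hidden boundary.
# Its value is concealed from every "speaker" (here, randomized per run).
_HIDDEN_N = random.randint(3, 999_999)

def H(n: int) -> bool:
    """n grains form a heap, per the hidden sharp boundary."""
    return n >= _HIDDEN_N

# Premisses (1) and (3) both hold under any such sharp reading ...
assert not H(2) and H(1_000_000)

# ... while premiss (2) fails at exactly one point: the switch-over n
# with ~H(n) & H(n + 1) -- i.e., theses (4) and (5) above.
switch_overs = [n for n in range(2, 1_000_000) if not H(n) and H(n + 1)]
assert len(switch_overs) == 1
print("a sharp H validates (1) and (3) and yields exactly one switch-over")
```

That the switch-over's location varies from run to run, while its existence and uniqueness do not, is a rough analogue of the view's claim: a definite boundary there is, but no way of knowing where.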
“In cases of unclarity, statements remain true or false, but speakers of the language have no way of knowing which” (p. 3). On this basis Williamson has it that “A picture of language is sketched, on which we can know that a word has a given meaning without knowing what the boundaries of that meaning are in conceptual space” (p. 6). The root of vagueness is thus ignorance. Those vague terms actually have precise boundaries; it is just that we do not—cannot—manage to locate them. Williamson argues for this position on general principles—in particular its compatibility with classical logic and semantics. Yet he clearly felt uneasy about specifics on the order of the question “Is there really a day, minute, and second at which a person ceases to be young?” For he acknowledges that “On first sight the epistemic view is incredible,” and it is hard to see how a second view would help matters here. But soon a theorist surfaced who cast any such hesitation to the winds, for in his challenging book on Vagueness and Contradiction11 Roy Sorensen stepped forth on the issue with guns blazing:

My strangest belief is that vague words have hidden boundaries. I think that the subtraction of a single grain of sand might turn a heap into a non-heap. (p. 1)

As Sorensen sees it, the puzzles at work root in the imperfections of our knowledge: “the threshold statements have truth values, but [these] are unknowable” (p. 13). On this basis he goes on to affirm: “I hold that vagueness consists in the possession [and thus existence!—NR] of absolute borderline cases” (p. 13).

9. A THIRD OPTION

When one goes back to basics, one comes to the realization that there are in fact three possible alternative stances toward the question of those mysterious boundaries at issue with vagueness, as typified by the question of the minimal size of a heap. The spectrum of possibilities is spanned by saying that such a boundary:

(1) IS NONEXISTENT, so that there simply is no sharp line of separation. (Classical vagueness)

(2) DOES INDEED EXIST, although we cannot possibly say where it is. (The Williamson-Sorensen “epistemic view” of vagueness, assimilable to predicative vagrancy.)

(3) MAY OR MAY NOT EXIST, for all that we can possibly say one way or the other. (A third option)

Here the negativism of (1) represents the traditional conception of vagueness, the classical view of the matter, which holds that there just is no definite boundary in these matters. By contrast, the agnostic position of (2) is exactly the approach of those contemporary theorists—like Sorensen and Williamson—who hold (somewhat counter-intuitively) that there indeed is an exact boundary of separation although we cannot possibly say where it lies. Beyond these two positions there is, however, a third possible approach. This is the yet more radically agnostic position represented by (3). It is based on the idea that while there may indeed—for aught we know to the contrary—actually be an exact border, there is just no feasible way to settle the matter one way or the other. The indeterminacy at work runs deep. There is, after all, a decided difference between not knowing which item of a certain range of possibilities constitutes a boundary and not even knowing whether there is any such item that does so. Here (3), like (2), is an epistemic approach, but one of a more defeasible sort. In relation to boundaries (2) affirms merely an ignorance as to where something exists—something whose existence is squarely accepted—while (3) more cautiously maintains a posture of ignorance as to whether it even does so. Such a position enjoys a decided doctrinal advantage. In averting both the dogmatism of (1) and the counter-intuitiveness of (2), it combines the virtues of modesty and plausibility. Untried though it is, it is a decidedly promising approach. For in the end it puts the burden where it actually seems to belong—viz. on the conceptual fog engendered by the inherent imprecision of our vague concepts. On this perspective, the principal approaches to vagueness can be classified as per Display 1.

10. WHY VAGUENESS-TOLERANCE PAYS

Display 1: APPROACHES TO VAGUENESS

I. ONTOLOGICAL
• States of affairs existing in the world can be vague (indefinite, indecisive, fuzzy) in and of themselves.

II. EPISTEMIC
A. WEAKLY EPISTEMIC
• The objective facts and realities themselves are never vague. There is always a definite boundary between IN and OUT. However, in situations of vagueness we cannot possibly ever determine just where that boundary lies.
B. STRONGLY EPISTEMIC
• In situations of vagueness we are deeply ignorant and do not know the exact status of the facts themselves. There may or may not be a boundary—we just cannot say; its existence—let alone its location!—is an issue outside our ken.

But why tolerate vagueness at all? The fact is that we have little choice in the matter. We are constantly constrained to use loose terminology and fill our discourse with expressions on the order of “roughly,” “approximately,” “something like,” “in the neighborhood of,” “in his 70s,” “some six feet tall,” and so on. This prominence of indecisiveness in our discourse—of vagueness, equivocation, and the rest—has larger ramifications. Reality is so vastly complex in its mode of operation that a shortfall of detail in our description of it is an inevitable reality. In characterizing the real in man’s natural language the indecisiveness of vagueness is not a failing, but an inevitability. And so, one reason for our tolerance of vagueness lies in our having little choice about it. All the same, its vagueness does not stop a statement from being true. If we could not describe the grass of our experience as vaguely green or indeed even merely greenish, but only had the choice of a myriad of exact shades of green, color communication would virtually grind to a halt. If we had to decide where “rock” leaves off and “boulder” begins, we would be in difficulty. Despite its manifest problems, vagueness is immensely useful simply because precision is too hard to come by and deploy. And so in the final analysis we tolerate vagueness because we have no choice, and we do so gladly not just because it is convenient, but also because greater detail is generally not needed in the relevant contexts of operation. (We do not need to know whether the approaching storm will bring 1 or 1.5 inches of rain for deciding whether or not to take an umbrella.)12

NOTES

1. Vagueness: A Reader, ed. by Rosanna Keefe and Peter Smith (Cambridge, MA: MIT Press, 1999), is a useful resource and affords an extensive bibliography of the topic.

2. Pretty well all that is known about Eubulides derives from Diogenes Laertius, Lives of the Philosophers, Bk. II, sects. 106–20. See Zeller, Philosophie der Griechen, Vol. II/1, p. 246.

3. On this paradox and its ramifications see Chapter 2 of R. M. Sainsbury, Paradoxes, 2nd ed. (Cambridge: Cambridge University Press, 1995), pp. 23–51. Originally the paradox also had a somewhat different form, as follows: Clearly 1 is a small number. And if n is a small number, so is n + 1. But this leads straightway to having to say that an obviously large number (say a zillion billion) is a small number. (See Prantl, Geschichte der Logik im Abendlande, vol. I, p. 54.) Note that the paradox could equally well be developed regressively (i.e., from heapness by subtractive regression) as progressively (from non-heapness by additive progression). The former, regressive style of reasoning is called Galenic after Galen (AD 129–c. 210), who wrote prolifically on logic; the latter, progressive style Goclenic after Rudolph Goclenius (1547–1628), who discussed the matter in his introduction to Aristotle’s Organon, Isagoge in Organon Aristotelis (Frankfurt, 1598).

4

We can, of course, refer to such individuals and even to some extent describe them. But what we cannot do is to identify them.

5

A uniquely characterizing description on the order of “the tallest person in the room” will single out a particular individual without specifically identifying him.

6

To be sure, one could (truthfully) say something like “The individual who prepared Caesar’s breakfast on the fatal Ides of March is now totally unknown.” But the person at issue here goes altogether unknown, that is, he or she is alluded to but not specified—individuated but not concretely identified. So I cannot appropriately claim to know who the individual at issue is but only at best that a certain individual is at issue.

7

For further details regarding such vagrancy see the author’s Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2005).

8

For further, different cases of this general sort see Roy E. Sorensen, Blindspots (Oxford: Clarendon Press, 1988). See also his “Vagueness, Measurement, and Blurriness,” Synthese, vol. 75 (1988), pp. 45–82.

9

This shows that transitional continuity is not the core of the problem: the selfsame situation can confront us in the discrete case.

10

London & New York: Routledge, 1994. See also Williamson’s “Vagueness and Ignorance,” Proceedings of the Aristotelian Society, Supplementary Vol. 66 (1992), pp. 145–62; reprinted in Vagueness: A Reader, ed. by Rosanna Keefe and Peter Smith (Cambridge, MA: MIT Press, 1996), pp. 265–80.

Nicholas Rescher • Epistemological Studies

11

Oxford: Clarendon Press, 2001.

12

Further information on paradoxes can be found in the author’s Paradoxes (Chicago: Open Court, 2001). An extensive literature is cited there, including: J. C. Beall (ed.), Liars and Heaps: New Essays on Paradox (Oxford: Clarendon Press, 2003); L. Burns, Vagueness: An Investigation into Natural Languages and the Sorites Paradox (Dordrecht: Reidel, 1991); V. McGee, Truth, Vagueness, and Paradox (Indianapolis: Hackett, 1990); R. M. Sainsbury, Paradoxes, 2nd ed. (Cambridge: Cambridge University Press, 1995) [see especially Chapter 2, “Vagueness: The Paradox of the Heap”].


Chapter 4

UNDERDETERMINATION

1. THREE MODES OF UNDERDETERMINATION

Underdetermination in the present context is a matter of the relation between the laws of nature as science conceives of them—and the phenomena. Three significantly different modes of underdetermination can in theory be contemplated:

• The Epistemic underdetermination of nature’s laws by the observed phenomena. (Inductive underdetermination)

• The Ontological underdetermination of nature’s laws by nature’s phenomena. (Nomic underdetermination)

• The Ontological underdetermination of nature’s phenomena by nature’s laws. (Phenomenal underdetermination)

And all of these possibilities must be contemplated in the larger scheme of things regarding the issue of underdetermination. To be sure, philosophers of science have been principally concerned with the first: the inductive and epistemic mode of underdetermination. But the other two are not only no less important and interesting but even, as I see it, substantially more challenging.

First, one preliminary observation. Throughout this discussion I shall construe fact-determination in terms of a demonstration that the fact at issue does indeed obtain. A mere demonstration that it is possible that it should obtain, or that its obtaining is likely (at some level of probability), does not qualify in this respect. Thus, in particular, establishing that over some realm of phenomena a certain regularity may (or is likely to) be lawful does not constitute its determination. Nor, conversely, does the demonstration that given certain laws, a particular phenomenon can obtain (or will do so with a certain likelihood)


qualify as “determining” such a phenomenon. Determination as here construed is something stronger than possibilization or probabilification.

2. INDUCTIVE UNDERDETERMINATION: A FAMILIAR EPISTEMIC PROSPECT

Let us begin with inductive underdetermination. The observed phenomena can readily fail to determine Nature’s laws whenever it transpires that our means of observation—of phenomenal discrimination—are inadequate to the realities, be it quantitatively or qualitatively. For instance, a finite number of data-points (which is of course all that we can ever actually secure) can never definitively determine the sorts of generalities at issue in natural laws. In inductive contexts observation inevitably underdetermines laws because there is invariably a vast assertoric gap between what observation can provide and what the laws actually claim. And this is so for two very good reasons:

1.

Observation is always episodic, particular, and finite, whereas laws are general and open-ended, purporting to tell us what always happens in certain circumstances.

2.

Observation is inevitably linked to what does happen in nature, whereas laws will transcend this actualism to purport to tell us what would happen if.

The gap between finite observations and lawful generalizations is inescapable, and law-underdetermination fits smoothly into this gap. This sort of problem has been on the epistemological agenda since well before the work of Pierre Duhem—if not since the days of David Hume. That a scientific law is more than a mere amalgam of fact—that it goes beyond the observed facts and is, as a consequence, underdetermined by them—has long been recognized. As William Whewell put it in his Novum Organum Renovatum of 1858:

The Inductive truth [of a scientific generalization] is never the mere sum of the facts. It is made into something more by the introduction of a new mental element, and the mind, in order to be able to supply this element, must have peculiar endowments and disciplines. (Butts, p. 170)
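The point that a finite body of data-points cannot fix a general law admits of a toy numerical illustration (an editorial sketch, not the author's example): distinct candidate "laws" can agree on every observation made so far and still diverge thereafter.

```python
# Two candidate "laws" fitted to the same finite body of observations.
# The data points and both formulas are illustrative inventions.
data = [(0, 0), (1, 1), (2, 4), (3, 9)]  # observed (x, y) pairs

def law_a(x):
    return x * x  # "y = x squared"

def law_b(x):
    # Agrees with law_a at x = 0, 1, 2, 3, since the added product
    # term vanishes at exactly the observed points...
    return x * x + x * (x - 1) * (x - 2) * (x - 3)

assert all(law_a(x) == y for x, y in data)
assert all(law_b(x) == y for x, y in data)
# ...yet the two laws diverge on the very next unobserved case:
print(law_a(4), law_b(4))  # 16 40
```

Any finite data set leaves infinitely many such rival generalizations standing, which is just the assertoric gap described above.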



And again, that other founding father of inductive reasoning, J. S. Mill, flatly tells us that “Observation without experiment can ascertain sequences and coexistence, but cannot prove causation” (Logic [1843], Chap. VIII ad fin.). And he concedes that even experiment can do no more than establish probability, for as Mill goes on to say, “We can, in the most favorable case, only discover by a succession of [experimental] trials that a certain cause is very often followed by a certain effect” [Chap. VIII ad fin.]. The idea that the rigorous universality purported by natural laws goes beyond the reach of observation effectively goes back to Aristotle.

More graphically, we confront the situation illustrated vividly by physical science. Using the instrumentalities at our disposal we scan nature to discern its phenomena. But with ever more potent instruments of observation and ever more powerful instrumentalities of experimentation that lead us into greater extremes of temperature, of pressure, of accelerator particle velocity, etc., ever new phenomena come into view. And the theories we devise for the explanation of the old phenomenal ranges are never able to take these as-yet inaccessible phenomena into appropriate account. The inherently progressive nature of the observational and experimental sciences means that the data-in-hand will provide for theories that will be destabilized by emendation and refinement in the setting of an enlarged universe of observational data. The salient point at issue here can be put in a simple and yet incontestably true way: Science develops over time, and present science is fated to underdetermine future science. And since future science has a better claim on the truth than present science, we cannot escape the conclusion that present-day science underdetermines the truth as well, regardless of what the date on the calendar happens to read.
By the present time of day this sort of thing is rather old hat—though not, of course, thereby made any the less transcendently important.1 But since this issue is thoroughly familiar, I propose here to focus on the other modes of underdetermination.

3. THE ONTOLOGICAL UNDERDETERMINATION OF NATURE’S LAWS BY NATURE’S PHENOMENA

Let us then turn to the prospect of an ontological underdetermination of nature’s laws by nature’s phenomena. The idea that the laws supervene on the phenomena is not a merely epistemic conception that is grounded in the



convolutions of inductive reasoning. Taken literally, it is an ontological conception. And seen as such it has big problems. The idea that the actual phenomena—the whole lot of them—might be such as to admit of realization through different and actually incompatible law-manifolds has certainly not received the attention it deserves. The phenomena can rule out the prospect that certain generalizations are lawful. But they cannot of themselves settle the inverse issue of just which ones are. Indeed the ontological underdetermination of nature’s laws by nature’s phenomena is an ever-threatening prospect. Even if (per impossibile) we had complete access to Nature’s phenomenology we might still be unable to say what the laws of nature actually are. And this can come about in several ways. One possibility here is:

• By being accidental. That is, by simply chancing to occur—even as regularities—without there being anything nomically necessary or lawful about it.

Granted, the descriptive regularities are determined by the phenomena: they simply are what can be extracted by regularity trawling. But whether those descriptive regularities are actually lawful or whether they are simply fortuitous is an issue that remains to be settled. A related way in which the actualities can fail to determine the laws is:

• By being skewed. That is, by failing to encompass those cases which allow various laws to come into operation. For laws take the hypothetical form “If …, then - - -” and that antecedent condition may never arise. Examples would be the laws governing what happens under certain parametric conditions (of pressure, temperature, velocity) which never obtain.

The salient point in this second regard is that those actually occurrent phenomena would reveal the laws only if they could be extrapolated beyond the realized course of events to the hypothetical range of occurrence in infinite replays of Nature’s history in ways that exhaust the manifold of possibility. And this is just not practicable. Laws have a fact-transcending dimension: they purport to tell us not just how things actually stand but even how things would have to be if …. And the domain of iffiness is beyond the reach not just of observable fact but of fact itself. So much, then, for law-underdetermination. We next turn to something different:



4. THE ONTOLOGICAL UNDERDETERMINATION OF NATURE’S PHENOMENA BY NATURE’S LAWS

We now confront the reverse problem of moving from laws to the phenomena and thus come to confront the prospect of the ontological underdetermination of nature’s phenomena by nature’s laws. In this context, one must note that there is a plurality of distinct ways in which the laws can leave the phenomena underdetermined. The available particularities here will specifically include:

1. Chance. When the laws are stochastic they will govern the phenomena, but not determine them. Typically in such cases the laws of nature would stipulate that when A happens then either X or Y must result, without any determination of which it is to be beyond statistical indications. And while sometimes the outcome is probabilistic, and so admits of stochastic laws, nevertheless there can be cases where no well-defined probabilities are involved. After all, even a random order is a very special sort of order, and there can be ranges where the laws of probability do not apply because, despite the absence of lawful determination, the law of large numbers does not hold.2 But in any event, a world governed (even partially) by stochastic laws—such as we take the laws of quantum physics to be—will be one in which the laws do not and cannot fully determine the phenomena.

2. Incompleteness (i.e., partial anarchy). The laws may relate to the phenomena the way the rules of chess govern the play of the game: they may canalize them well short of full determination, even as the Gödelian incompleteness of arithmetic means that the axioms of arithmetic govern but yet do not determine the entire manifold of arithmetical fact. This sort of nomic incompleteness is clearly a rational mode of underdetermination. The prospect of this sort of incompleteness arises because the universe might contain regions of phenomena that are simply anarchic—i.e., lawless. There is nothing new about this idea. It underlies C. S.
Peirce’s view of cosmic evolution through a gradual conquest of chaos by lawfulness, and has its origin in the idea of Plato’s Timaeus of an ongoing conquest of order over chaos in the development of the cosmos. Such a prospect is strongly suggested by reading down the law-mandated litany of possible drug effects, ranging from the inevitable to the common and the not infrequent to a brontosaurus-reminiscent statistical tail of increasingly diminishing frequency.



3. Symmetry. Yet another sort of situation in which the laws might underdetermine the phenomena can arise through nomic symmetry. For let it be that there is a binary parameter of orientation—up/down, left/right, forward/backward (in space or time)—and that the laws are entirely symmetric as between the two directions. And yet the phenomenal realm can nevertheless be alternative-specific in such a respect. Then it is evident ex hypothesi that the laws will not determine the whole of phenomenal reality.

4. Free Will. A world which contains rational agents whose choices are (at least sometimes) autonomously determined by “free will”—in the technical sense of the term—will also be one in which the laws of nature do not always determine the phenomena. In such a world the laws governing the machinations of impersonal nature will sometimes prove insufficient in and of themselves to determine certain phenomena—namely those involving the outcome of the deliberations of the intelligent agents that evolution has brought upon the world stage.

As psychophysical determinists see it, in situations of decision and choice once all the φ-variables of physical description are in place, the ψ-variables of psychological description will be fixed. Brain physiology fixes mental process in place through a rigid correlation of the mental with the physical. Now it can be (and has been) argued that even if this were so and a lockstep correlation between the mental and the physical obtains, this does not impede the prospect of free will, because the issue of which are the free and which the dependent variables is not resolved through correlation alone. The prospect remains open that the causal initiative lies on the side of the ψ-variables.

But there is yet another, more dramatic and yet nevertheless quite plausible prospect. For suppose that those psychophysical laws of action-explanation are probabilistic rather than classically deterministic—just like the laws of physical nature themselves.
Then the salient parameters operative in those laws will not represent definite states of the system at issue, but rather mere state-probabilities. In this case even if there is individual φ-to-ψ determination, what gets determined is not the ψ-states of the system themselves but only their probabilities. But this has drastic implications for the issue of the predeterminability and determinability of outcomes in matters of choice and decision. For now as the point of decision approaches in time, even if the probability of the eventually actual outcome approaches unity (1), nevertheless at no time prior to the actual event will that outcome be a foregone conclusion beyond the possibility of an alternative outcome.



We finally come to what I see as a most intriguing prospect of the underdetermination of the phenomena by nature’s laws, to wit:

5. Overcomplexity. This issue pivots on the prospect that the phenomena may be too complex and variegated for the laws to capture them. Among all these alternatives this is the least familiar, so it is worthwhile to look at the situation in some detail.

Consider an illustration. Suppose we have as a microworld a two-sidedly ongoing gridwork of the following structure:

The “world” at issue here is to be such that each square of our grid is filled with some letter of the alphabet. The resulting array constitutes the manifold of phenomena. But what now of laws? As I propose to regard the matter here, a “law of nature” is to be a generalization to the effect that some observable state of affairs must obtain pervasively throughout some natural kind of item (be it an entity or a situation). In the example being developed there will be two sorts of natural kinds, namely positions and contents. The content-kinds will be the letters of the alphabet. The position-kinds will be rows, columns, and diagonals (right-hand ascending and descending). Here the possible generalizations available for laws will have the format:

• Every (or alternatively No) {row, column, l-diagonal, r-diagonal} is ℓ-containing

All in all, there will thus be a total of



2 × 4 × 27 = 216

possible laws. Given that a law-complex is a register specifying for each possible law whether or not it obtains, there will accordingly be a maximum of 2^216 law-complexes. (We can forget about eliminating incompatibilities, because this will not affect our maximum.) Observe now that if we extend our gridwork to a square of size N we shall have N × N compartments and thereby a total of 27^(N×N) possible phenomenal realizations of our microworlds. But—and this is now the crucial point—there are of course values of N for which

27^(N×N) > 2^216

There will thus be some point in respect to world size (and thus complexity) as of which there are more possible world realizations than available law-manifolds. So even with our rather simple worlds more phenomenal realizations will be possible than there are available law-complexes. In other words, at some point the laws will become impotent to determine the phenomena.

Lawful comportment affords the only and inevitable basis on which we define nature’s taxa of natural kinds. But there just might not be sufficiently many nomic taxa to capture all the facts about nature’s materials. For example, consider any nested sequence of taxa, as per: animals, vertebrates, canines, dogs, poodles, etc. At each level there are lawful generalizations. But there are always some facts about the items at a given level that do not follow from the higher-level laws. For between a concrete individual and any superordinate kind—at any level—there is always the prospect of yet another level of generality/plurality. If these conditions obtain then there will always be some features of individuals that cannot be explained (i.e., derived) from any finite body of laws. In the end, each concrete item would have at least some characteristic law unto itself. And what price nomic determination by general laws then?
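The arithmetic of this counting argument is readily checked; here is a quick sketch in Python (the crossover grid size is an editorial computation, not a figure stated in the text):

```python
# Rescher's toy microworld: candidate laws say that Every/No
# {row, column, l-diagonal, r-diagonal} contains a given one of
# 27 letters, yielding 2 * 4 * 27 = 216 possible laws.
n_laws = 2 * 4 * 27
assert n_laws == 216
max_law_complexes = 2 ** n_laws  # a law-complex settles each law yes/no

# An N x N grid admits 27**(N*N) phenomenal realizations. Find the
# smallest N at which realizations outnumber law-complexes, i.e.
# the smallest N with 27**(N*N) > 2**216.
N = 1
while 27 ** (N * N) <= max_law_complexes:
    N += 1
print(N)  # → 7
```

So already at a mere 7 × 7 grid the possible worlds outrun the available law-complexes, just as the passage claims.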
Textuality affords a graphic illustration of such a case in which the laws might leave the phenomena underdetermined: a digitally and recursively articulated language dealing with a nondenumerably complex world. For consider the entire body of what will reasonably be seen as language-relevant law—the totality of principles of grammar, orthography, prosody, logic, rhetoric, what have you. The entire lot will have



underdetermined the phenomenology of any particular text, be it a tragedy of Shakespeare or the Constitution of the U.S. The entire body of law is just too small to span the entire range of the phenomena upon which it bears—exactly as envisioned in that previous illustration. However far you choose to stretch it, there is, furthermore, yet another cognate way in which the phenomena can outrun the prospect of law-explainability, namely taxonomic insufficiency. In the end, the finite taxonomies of nature’s laws as we can secure them may well fail to capture the unending complexity of Nature itself. As long as the world’s law-structure is recursively articulated on a finite axiomatic basis the prospect opens up before us that the complexity of the world’s phenomenology is such that the law-determination of some of the concrete phenomena will be impracticable. So in the end there are quite a few different ways in which the laws could fail to determine the phenomena.

5. BUT HOW COULD THE PHENOMENA AVOID BEING LAW-MANDATED?

Yet what would determine the phenomena to be as is if the laws do not do so? Why is it that some particular concrete condition of things should obtain, given that the laws do not require it as against various other possibilities? How could a phenomenal reality possibly get constituted except as the product of the operation of laws? What productive agency is operative in Nature except for the laws? What is at work with this line of questions is a deep-rooted metaphysical prejudice—one so entrenched that it is almost impossible to shake off.
The medievals thought: “How can concrete reality possibly get itself constituted except through the creative operations of the deity?” The moderns analogously ask: “How can concrete reality possibly get itself constituted except through the creative operation of natural laws?” But however plausible this supposition that “Only laws can make a world” may seem, it is in fact a presumption that has very little by way of clearly visible support over and above the standing prejudices of well-accustomed ideas. To get a better grip on the issue, it is useful to go back to the cosmological perspective of the ancient Greeks as it stood before the Stoics came along to legalize nature. For perhaps reality—like Topsy—just grew, and lawful order itself came along—rather imperfectly—in the course of subsequent events (much as Plato envisioned the situation in the Timaeus).



Perhaps our deep-rooted conviction that the phenomena are invariably the consequence of the productive operations of laws is no more than an entrenched prejudice—a misplaced allegiance, if you will, to the Principle of Sufficient Reason. After all, this principle might—just possibly—not obtain. There may simply be no law-based explanation—no lawful ground or reason—why things should stand this-wise rather than that-wise; that’s just how things happen to be. This sort of situation will be particularly telling for someone who accepts two ideas:

(1) The by now well-accustomed view that all scientific explanation must be based on the laws of nature.

(2) The idea that the explanation of cosmic evolution has to proceed on the basis of natural laws plus initial conditions.

For as long as a distinction between laws and initial conditions is accepted, it will be clear that those initial conditions cannot themselves be derived from the laws, and will thereby represent a crucial mode of nomic underdetermination of empirical fact. Faced with this sort of prospect, some resort to a rather desperate course of evasiveness: they seize upon the idea of a multiverse—holding that all of those alternatives are in fact realized and that things stand this-wise in the particular universe in which we happen to be. (And why are we in that universe? Just because it is, by definition, the universe in which we happen to be.) Yet what is all this but a frenetic subterfuge to avoid confronting a difficult and discomfiting question?

***

Still, how could the phenomena possibly be accounted for if not through their emergence through the operation of Nature’s laws? One prospect here is the just-suggested idea that perhaps they simply cannot be accounted for at all. For what is there in this post-theological age to provide us with a Principle of Sufficient Reason to the effect that there is a good rational explanation for everything? Perhaps sheer surdity plays a crucial role on the cosmological stage.
But this line of thought—difficult to refute as it is—might well fail to carry much conviction. It is too tempting to write it off as a blind leap into



incomprehensibility and obscurantism. So we are well advised to look elsewhere. And here we are not entirely empty-handed. For even if one gives up on the explanation of phenomena in the order of lawfulness-based efficient causality, we are still not altogether at the end of our explanatory tether. Different orders of explanation are, after all, at our disposal—teleological or axiological explanation, to cite only two examples. Sheer prejudice apart, there is really no fundamental reason for dismissing the prospect that those very considerations that explain why the laws of nature—and the natural constants that figure in them—are as is can also be brought into operation to answer the question of why the phenomena are as is. How would the details of such an explanatory program be worked out? This clearly is a large and complicated question. But it may well prove to be one which the philosophy of science—if not science itself—will ultimately have to face sooner or later.

NOTES

1

For issues of inductive underdetermination see Pierre Duhem, The Aim and Structure of Physical Theory (Princeton: Princeton University Press, 1991); W. V. O. Quine, From a Logical Point of View (Cambridge: Harvard University Press, 1999); as well as Martin Curd and J. A. Cover (eds.), Philosophy of Science: The Central Issues (New York and London: W. W. Norton, 1998); and Donald Gillies, Philosophy of Science in the Twentieth Century: Four Central Themes (Oxford: Blackwell, 1993).

2

Consider, as an example, the potentially infinite series of 0s and 1s formed by the rule:

1. Determine entries by coin tosses (H = 0, T = 1).
2. Use two coins A and B, where A is loaded to favor H by 2 to 1 and B has the reverse bias.
3. Change the coin used after 10^n tosses, n = 1, 2, 3, …
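A simulation of this rule (an editorial sketch; the function name and parameters are invented here) makes the point vivid: the running frequency of 1s keeps swinging between roughly 1/3 and 2/3 each time the coin changes, so no limiting frequency—and hence no well-defined probability—ever emerges.

```python
import random

def biased_series(n_tosses, seed=0):
    # Entries come from coin tosses (H = 0, T = 1); coin A favors H by
    # 2 to 1, coin B has the reverse bias; the coin in use is changed
    # after 10, 100, 1000, ... tosses.
    rng = random.Random(seed)
    out, use_a, next_switch = [], True, 10
    for i in range(1, n_tosses + 1):
        p_heads = 2 / 3 if use_a else 1 / 3
        out.append(0 if rng.random() < p_heads else 1)
        if i == next_switch:
            use_a, next_switch = not use_a, next_switch * 10
    return out

s = biased_series(1_000_000)
for k in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    print(k, round(sum(s[:k]) / k, 3))  # frequency of 1s so far
```

Because each new block of tosses is ten times longer than everything before it, the cumulative frequency is repeatedly dragged back toward the bias of the latest coin and never settles down.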


Chapter 5

COGNITIVE COMPROMISE
On Managing Cognitive Risk in the Face of Imperfect/Flawed Information

1. POTENTIAL FLAWS OF KNOWLEDGE

What we take to be our knowledge of things is potentially subject to three prime flaws: error, ignorance, and uncertainty. Their nature is easily illustrated. Thus consider the following tic-tac-toe diagrams (entries marked * stand in for the boldface of the printed original):

Actuality        Belief
1  0  1          *0*  *1*  0 or 1?
0  0  A           0    0   [     ]
0  0  1           0    0   0 or 1?

The two boldface entries of the Belief trigram exemplify error: accepted belief here disagrees outright with Actuality. The two ?-entries indicate uncertainty—a failure to decide among alternatives. Here the actual situation is encompassed within the range of envisioned alternatives. The blank indicates acknowledged ignorance: one has no idea what is there—is it a number, a letter (and of which alphabet?), a pentagram, or what? It is “anyone’s guess” how that information vacuum gets filled in.

2. UNCERTAINTY IS BASED ON (PRESUMED) KNOWLEDGE

Uncertainty is indecisiveness: with uncertainty we know (or think we know) what the range of possibilities is: it rests on (presumed) knowledge of the range within which the correct resolution lies. Accordingly we can generally grapple with uncertainty by means of probabilities—at least in favorable circumstances. For example, if 0 and 1 are the sole members of the possibility-range we can plausibly allocate a 50:50 chance—at least until some further information comes to view.

Probabilistic reasoning is practicable in the face of uncertainty, but not, alas, in the face of ignorance. Probabilities must be defined over a range of alternatives. But whenever such a range is unavailable and possibilities proliferate unmanageably, meaningful probabilistic reasoning is impracticable. We are simply lost—albeit not “lost for words” but “lost for ideas.” Without a secure grasp on the range of possibilities there is little or nothing we can do to put probabilistic reasoning to work. And logic does not help us here. For there is effectively nothing that pure logic can do by itself to provide for a manifold of possibility. To do this, logic requires descriptive grist for its mill. Given a descriptor such as white and a classifier such as snow, logic is off to the taxonomic races, as per:

white snow
white non-snow
non-white snow
non-white non-snow

But without substantive material to work with logic can get nowhere—and such material must come from elsewhere. This means that in the face of outright ignorance we remain stymied with respect to probabilistic reasoning.

3. POSSIBILITIES AND INFORMATION

But uncertainty is something else again. Here we are not quite so much at sea, since more information is at hand. Possibilities rest on actualities: it takes information to project possibilities, although, unfortunately, misinformation will also come into play. The fault line between the real and the apparent runs not only across the space of alternative possible realities, but across the spectrum of envisioned possibilities as well. Certain real possibilities can be overlooked; certain impossibilities can be mis-thought to be available. Thus suppose that a family owns one cat which family members indifferently call either Tom or Puss, whereas a guest thinks that there are two similar cats corresponding to these names.
Then Tom being in the house concurrently with Puss being in the garden figures in the guest’s spectrum



of envisioned possibilities, whereas reality’s spectrum excludes this prospect. And so, just as we must distinguish between actual and merely putative reality, so we must distinguish between actual and merely envisioned possibilities. It thus transpires that there are both ontologically authentic and ontologically inauthentic possibilities, and that the spectrum of real possibilities can differ from that of envisioned possibilities. Such a range of putative possibility pivots on the locution:

• For all I take myself to know to the contrary, such-and-such may be the case.

The less someone knows the broader this range will be; and the more one knows, the narrower. Knowledge diminishes the scope of speculation—the more an individual knows, the less room there is for mere speculation. All in all, then, a certain imagined prospect can be classified as:

• Actual (real)
• Non-Actual (unrealized)
  — authentic possibility
  — inauthentic (merely putative) possibility

In matters of uncertainty (of ignorance and unknowing) this difference between authentic and merely putative possibility can play a significant role. And a person’s beliefs—his range of knowledge and ignorance—play a key role in shaping that person’s field of envisioned possibility.

4. AVOIDING COGNITIVE RISK AND THE ALLURE OF SKEPTICISM

Life being what it is, we will believe and accept many things which, for aught we actually know to the contrary, may well not be the case. In such circumstances, we run a risk of error. Virtually all of our ventures in claiming knowledge about reality carry some risk of cognitive error in their wake: it is an unavoidable companion of the enhancement of knowledge. And so we have it that “to err is human.” How can this risk be managed?



For one thing, agnosticism—suspension of judgment—is a sure-fire safeguard against errors of commission in cognitive matters. If you accept nothing then you accept no falsehoods.

5. THE ADVANTAGES OF IMPRECISION

A means to error-avoidance that is less drastic than skeptical agnosticism is to “hedge one’s bets” by vagueness. “How old was George Washington when he died?” If I answer “seventy years” my response is at risk, but distinctly less so if I answer “around seventy,” and less yet if I say “over sixty.” Increased security can always be purchased for our estimates at the price of decreased precision. We estimate the height of a tree at around 25 feet. We are quite sure that the tree is 25 ± 5 feet high. We are virtually certain that its height is 25 ± 10 feet. But we are completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are “completely sure” in the sense that we are “absolutely certain,” “certain beyond the shadow of a doubt,” “as certain as we can be of anything in the world,” “so sure that we would be willing to stake our life on it,” and the like. For any sort of estimate whatsoever, there is always a characteristic trade-off between, on the one hand, the evidential security or reliability of the estimate (as determinable on the basis of its probability or degree of acceptability), and, on the other hand, its contentual definiteness (exactness, detail, precision, etc.). A situation of the sort depicted by the concave curve of Display 1 obtains, illustrating how these desiderata are at loggerheads with one another. On this basis, vagueness is an effective instrument for error avoidance: the less definitely informative one’s response to a question is, the better one’s chances of averting error.
Thus, as Aristotle noted, we cannot err in viewing an object of thought, whatever it be, as something; we can err only in taking it to be of this or that particular sort.1 Only insofar as our thought becomes definite and content-laden can it manage to err. Unfortunately, however, this safety is gained at the cost of informativeness, a circumstance which indicates that error-avoidance is not the be-all and end-all of cognition, since other cognitive interests are also at stake. After all, a true statement does not necessarily do much to describe how things stand in the world. When I say that yon tree (which in fact stands at 60 feet) is “over 6 feet high” I say something as true as can be, but do not go very far towards describing the tree.


COGNITIVE COMPROMISE

Display 1: THE DEGRADATION OF SECURITY WITH INCREASING DEFINITENESS. (A concave curve plotting Security against Definiteness.)

As this situation indicates, averting error is not enough. After all, resolutions that succumb to imprecision, vagueness, indefiniteness, and the like need not be erroneous, but they are apt to be unhelpful and uninformative. Accordingly, averting error by vague and insufficient answers to our questions does not offer a very satisfactory route to knowledge. To realize that one does not make pancakes from sand, from mercury, from butterfly wings, etc., etc. is certainly to have a great many correct beliefs about the matter. But all such error avoidance does not bring one much closer to knowing how pancakes are actually made. The aims of inquiry are not necessarily furthered by the elimination of cognitive errors of commission. For if in eliminating such an error we simply leave behind a blank, and for a wrong answer substitute no answer at all, we have simply managed to exchange an error of commission for one of omission.

The crucial fact is that inquiry, like virtually all other human endeavors, is not a cost-free enterprise. The process of getting plausible answers to our questions also involves costs and risks. Whether these costs and risks are worth incurring depends on our valuation of the potential benefit to be gained. And unlike the committed skeptic, who sees the entire cognitive project as so much futility, most of us regard information about the world we live in as something of immense value. After all, in abandoning claims to knowledge the dogmatic skeptic embarks upon a self-imposed exile from the community of communicators, seeing that the communicative use of language is predicated on conceding the warranting presuppositions of language use. To enter into a discussion at all, one must acquiesce in the underlying rules of meaning and information-transmission that make discussion in general possible. But if nothing can appropriately be accepted, then no rules can be established—and thus no statements made, since meaningful discourse requires an agreement on the informative conventions.

6. A CODA ON PROCEDURAL POLICY

In matters of cognition, safety engineering is an exercise in error management. Its definitive aims are two: reducing the chance of an occurrence of error, and reducing the negative consequences of error should it occur. Requiring a second opinion before taking a potentially risky step illustrates the former; arranging for automatic “fail-safe” shut-down as a dangerous condition nears illustrates the latter. And so the salient lesson of the pervasiveness of error in human affairs is not a nihilistic skepticism but the need for safety engineering, whose safeguards against unforeseeable and perhaps even unavoidable errors typify rationality’s sensible safety-mindedness in the wake of acknowledging the unavoidability of error.2

In matters of cognition, as in any other situation of rational choice among alternatives, the prospect of different policies of procedure lies before us. And here the prime possibilities are:

• Maximizing: Insist upon the alternative that is the very best possible/available.

• Satisficing: Seek out an alternative which, even if not ideal, is nevertheless good enough to meet all your needs and requirements.

• Adequating: Settle for an alternative that is good enough that you can live with it, even if it fails to meet not just your wants but even some of your presumed needs.

• Optimizing: Seek out among the three preceding alternatives the one which represents the best strategy you can afford in the circumstances.

In cognitive matters we also have these four basic strategies at our disposal. And so we need not be purists about it. For it is, clearly, that fourth, mix-and-match alternative that offers the most pragmatically promising option.
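By way of illustration, the four policies just listed are decision rules, and they can be rendered as schematic procedures; the numerical “epistemic value” scores, the thresholds, and the deliberation budget below are hypothetical stand-ins, not anything in the text:

```python
def maximizing(options, value):
    """Insist upon the very best available alternative."""
    return max(options, key=value)

def satisficing(options, value, needs):
    """Accept the first alternative good enough to meet one's needs."""
    return next((o for o in options if value(o) >= needs), None)

def adequating(options, value, livable):
    """Settle for the first alternative one can live with (a lower bar)."""
    return next((o for o in options if value(o) >= livable), None)

def optimizing(options, value, needs, livable, budget):
    """Apply the most ambitious of the three policies that the
    deliberation budget (number of options one can inspect) allows."""
    if budget >= len(options):     # we can afford to compare everything
        return maximizing(options, value)
    affordable = options[:budget]  # inspect only what we can afford
    choice = satisficing(affordable, value, needs)
    return choice if choice is not None else adequating(affordable, value, livable)

# Toy scores standing in for the epistemic value of candidate answers.
answers = [3, 6, 8, 5]
print(maximizing(answers, lambda v: v))                                # 8
print(optimizing(answers, lambda v: v, needs=6, livable=4, budget=2))  # 6
```

The point of the sketch is the mix-and-match character of optimizing: it is not a fifth first-order rule but a second-order choice among the other three, constrained by what deliberation one can afford.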


With cognition, as elsewhere, rationality calls for a pragmatic balance of costs and benefits in the presence of limited resources. Here too we must strike a reasonable compromise between what is ideal and what is affordable.

NOTES

1 Aristotle, Metaphysics, IX, 10; 1051b25. However, despite Aristotle’s various sagacious observations regarding error, an actual theory of error cannot be credited to him. Compare L. Keeler, The Problem of Error from Plato to Kant (see especially p. 40).

2 For further variations on the theme of this discussion see the author’s Error (Pittsburgh: University of Pittsburgh Press, 2007).


Chapter 6

AUTHORITY

1. INTRODUCTION

Alexis de Tocqueville sagely observed that:

A principle of authority must … always occur, under all circumstances, in some part or other of the moral and intellectual world … Thus the question is not to know whether any intellectual authority exists in an age of democracy, but simply by what standard it is to be measured.1

Tocqueville was right to emphasize the intellectual dimension. To be sure, authority is almost always considered only in its socio-political dimension of communal authority, and it is generally viewed in its coercive aspect, with a view to the power of some to control others. But this sort of thing is not the subject of present concern. Rather, the sort of authority that concerns us here is that which is at issue when we speak of someone as being a recognized authority in some field of endeavor—the kind of authority that is at work when we acknowledge someone as authoritative regarding some sector of thought and action, as when we say that someone is an acknowledged authority regarding Roman coinage or that a physician speaks with authority on the management of a disease. It is, specifically, this sort of authoritativeness in matters of belief and action that will be at issue here. Authority of this sort is an unduly underdeveloped topic. Important though it is, alike in ordinary life, in the theory of knowledge, and in ecclesiastical affairs, there is a dearth of serious study of the topic. For example, philosophical handbooks and encyclopedias—even those that are themselves authoritative—are generally silent on the subject.2

2. EPISTEMIC VS. PRACTICAL AUTHORITY

Epistemic or cognitive authority is at issue when the idea operates in the context of belief. It is a matter of credibility in regard to matters of fact. By contrast, practical or pragmatic authority is at issue in regard to action. It is


a matter of guidance not in relation to what we are to accept or believe, but in relation to what we are to do. Practical authority, then, is not propositional in orientation: it does not have us believe or accept something or other. Its functioning is geared to action by way of authorizing or mandating people to act. We treat someone as an authority insofar as we are prepared to accept what they say. There are, accordingly, two prime forms of authority, the cognitive and the practical, the former relating to information and the latter to action. Practical authority can be exercised either persuasively or coercively. It can arise both with the question “What must I do?” and the question “What should I do?” Accordingly, practical authority can be either mandatory or advisory. But only mandatory authority can be delegated (e.g., by the captain of a ship to his first mate). With advisory authority, as with epistemic authority, authoritativeness must be recognized de novo: it cannot simply be transferred by delegation. We thus see some person or source as a cognitive authority when we incline to endorse their informative claims as true. Now there are basically two sorts of epistemic issues: issues of fact and issues of interpretation. “What did George Washington’s Farewell Address say and where did he deliver it?” is a purely factual issue. “What was the objective of Washington’s Farewell Address and what effect did it have on American policy?” involves a good deal of interpretation. Occurrence issues are generally factual; significance issues generally interpretational. (History usually involves an inextricable mixture of the two.) Being authoritative with respect to facts is a relatively straightforward and objective issue. Being authoritative on matters of interpretation is something more complex. With interpretations, it is seldom clear-cut whether what an authority says is correct or not.
All too often authorities disagree in matters of interpretation because the “hard facts,” such as they are, leave many matters unresolved. On the other hand, what ought to be clear even here is what qualifies someone as an authority: being well informed, seriously engaged, and open-minded about the issues. Where interpretative matters are concerned, authoritativeness hinges less on correctness than on due care and proper heed of the relevant complexities. An element of preferential choice is thus usually involved with the question of which authority (if any) one is to follow in a certain matter. In matters of credibility authorities are generally pitted against authorities. (Think here of Raphael’s famous painting of “The School of Athens.”) How, then, is one to proceed?


3. CONCEDING AUTHORITY

Control authority can come to an individual simply by commission. But epistemic authority must be earned. Recognizing someone’s epistemic authority is a matter of trust. With epistemic trust one risks error, misinformation, deceit. And in conceding (epistemic) authority to someone I risk that they may be “talking through their hat.” But in conceding practical authority to someone, I risk not just being wrong but actual damage, injury, misfortune to myself and others. When I trust someone with respect to a practical issue, I entrust to them some aspect of my (or somebody’s) interests, and what is at risk is not just error but injury.

So why do people ever accept the authority of some person or source—why do they decide to concede it to some other person or agency? We cannot make ourselves knowledgeable in all topics, nor can we make ourselves wise in all spheres of activity. Division of labor is inevitable here, and it means that we must, much of the time, entrust our own proceedings at least partially to others. The acknowledgement of authority is not an end in itself—authority concession is not something that is done for its own sake. Instead, it has a functional rationale. It is rationally warranted only when it conduces to some significant good—when it serves a positive role in facilitating the realization of a better quality of life, enabling its adherents to conduct their affairs more productively and to live as wiser, happier, and better people. The key here is clearly the fact of limitedness. We simply cannot manage in this world all by ourselves. Neither in matters of cognitive know-that nor in those of practical know-how are we humans sufficiently competent as individuals. In both cases alike we concede authority to the experts because we think/expect them to be more competent than ourselves. We resort to them because we believe them to be a more promising path to issue-resolution than the one we would pursue on our own.

4. THE VALIDATION OF AUTHORITY ACKNOWLEDGMENT

So much for the motivation of authority-concession. But what about its rationalization? What sorts of considerations are there to indicate that an acknowledgement of authority is appropriate? In conceding authoritativeness to some individual or source we never leave responsibility behind. With epistemic authority we remain responsible for assessing the reliability of the source; with practical authority we remain responsible for assessing its correlative wisdom. We are justified in acknowledging authority only where we have good ground for imputing authoritativeness. But what is the rationale of such a step?

In this regard, there is an important difference between epistemic and practical trust. When we place epistemic trust in certain “recognized authorities” (the presumed personal experts or the impersonal encyclopedias and handbooks) we manage to offload responsibility: we diminish our responsibility if things go wrong. And this is so also in those practical matters which fall into areas of generally acknowledged expertise—medicine and law, for example. But in matters of lifestyle—of interpersonal situations and faith and morals—the situation is different. Here there just is no universally acknowledged expertise, and if an individual is to seek advice he must choose among divergent “schools of thought.” In consequence he can never offload responsibility completely and definitively onto the shoulders of his advisors, because he himself is responsible for their selection. The impersonally cogent criteria available to guide one’s selection of advisors in this region are never sufficient to determine its outcome. The element of personal inclination and decision cannot be factored out completely. And in consequence an element of personal responsibility remains in play.

The answer here is simple and straightforward. Authority concession is an act of trust, and in general trust is something that is rationally warranted through established reliability. In general trust has to be earned on the basis of past experience via a track-record of proven reliability. Authority-acknowledgement, like trust, is rationally valid insofar as the someone at issue has a track record of reliability over the range with respect to which that track record was established.
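This idea of trust earned through a track record admits of a simple probabilistic illustration (the use of Laplace’s rule of succession here is an illustrative gloss, not anything in the text): treat each of a source’s past pronouncements as right or wrong, and estimate its reliability in a way that stays cautious when the record is short.

```python
def estimated_reliability(right, wrong):
    """Laplace's rule of succession: (successes + 1) / (trials + 2).
    An empty track record yields 0.5 -- no presumption of authority either way."""
    return (right + 1) / (right + wrong + 2)

print(estimated_reliability(0, 0))    # 0.5: an unknown source has earned no trust
print(estimated_reliability(9, 1))    # ~0.83: a short but mostly good record
print(estimated_reliability(90, 10))  # ~0.89: a longer record firms up the estimate
```

The design choice matters: an estimator that simply divided successes by trials would award perfect trust after a single lucky pronouncement, whereas this one lets authoritativeness accrue only gradually, as the text’s notion of earned trust requires.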
Cognitive authority is validated through a track record of being right in one’s declarations. Prudential authority is established through a track record of being helpful in one’s counsel. A source of information is validated through a history of providing correct information. A source of counsel (advice, direction) is validated through a history of providing effective (successful, constructive) advice. But practical authority is something else again. With cognitive authority we look to informativeness; with practical authority we look to life-mode satisfactions, to the achievement of not only smarts but a good, happy,


beneficial life. Personally satisfying and communally beneficial lives are crucial here. If you are going to advise me effectively about how to achieve my aims—how to get from here to there—you must know the lay of the land. You can only advise me satisfactorily in matters of travel or money or health if you are knowledgeable in the relevant range of geography, economics, and medicine. But this matter of relevant factual knowledge is not enough. For it only makes sense to acknowledge someone’s practical authority when we have good reason to regard this person’s counsel as not only knowledgeable but well-intentioned to boot. There is no point in ceding practical authority to someone for the guidance of one’s own actions unless one has good reason to believe that this source has one’s own best interest at heart. Conceding practical authority makes good sense only in the presence of substantial indications that acting on this source’s counsel will actually conduce to our best interests. If I am to accept some advisor’s counsel as authoritative, then this advisor must take me seriously—he must have my aims and goals in mind and my best interests at heart. No matter how knowledgeable he may be, if my aims and interests are not paramount for him, his advice will be useless to me. The ultimate rational aims of human endeavor and aspiration here on earth are to make us—individually and collectively—into wiser, better, happier people. These correspond to three fundamental sectors of our condition: the cognitive, moral, and affective; and these in turn are correlative with knowledge, action, and value, the concerns of the three prime branches of traditional philosophizing, namely epistemology (“logic” as usually conceived), ethics, and value theory (axiology). Man’s overall well-being—eudaimonia, as Aristotle called it—is spanned by the factors of this range.
As philosophers from antiquity onward have seen it, how we fare in regard to this trio of prime desiderata—i.e., in terms of wisdom, goodness, and happiness—provides the basis for rational self-contentment.

5. ECCLESIASTICAL AUTHORITY

Let us now turn to the issue of specifically ecclesiastical authority. Seeing that adopting a religion involves commitments taken “on faith” that go beyond what rational inquiry (in its standard “scientific” form) can manage to validate, “how can a rational person appropriately adhere to a religion?” Or, more briefly, “How can there be a cogent rationale for a faith whose doctrines encompass reason-transcending commitments?”


The answer lies in the consideration that factual claims are not the crux here. Ecclesiastical authority in matters of faith and morals will ultimately have to rest on practical rather than epistemic authority. It relates to what to do rather than merely what to endorse as fact. For religious commitment is not a matter of historically factual correctness so much as one of life-orienting efficacy, because the sort of “belief” at issue in religion is usually a matter of life-orientation rather than historical information. Religious narratives are by and large not historical reports but parables. The story of the Good Samaritan is paradigmatic here. From the angle of its role in Christian belief, its historical faithfulness is simply irrelevant. What it conveys is not an historical reportage but an object-lesson for the conduct of life. And much of religious teaching is just like that: a resource of practical guidance rather than one of factual information. Just this is the crux of authority in relation to the “faith” at issue with the putative authoritativeness of the Church in matters of faith and morals. Faith here looks not to historical factuality but to parabolic cogency—the ability to provide appropriate life-orientation for us. Effective guidance in the ecclesiastical context is a matter of putting people on the right track. And two considerations will clearly be paramount here:

• Leading satisfying lives: achieving appropriate life-goals, realizing rational contentment (Aristotelian eudaimonia), getting guidance in shaping a life one can look back on with rational contentment.

• Becoming good people.

And on this basis religious authority is ultimately normatively practical, not historically factual: what the authorities provide is not historical information but direction for the conduct of life.
Religion can thus be viewed in the light of a purposive venture: insofar as it is based rationally (rather than just emotionally or simply on traditionary grounds), it is something we do for the sake of ends—making peace with our maker, our world, our fellows, and ourselves. And ample experience indicates that motivation to think and act toward the good flourishes in a community of shared values. In this context it makes sense to see as authoritative those who—as best we can tell—are in a good position to offer us effective guidance towards life-enhancing affiliations.


So why would a rational person subscribe to the authority of a church (an “organized religion”) in matters of faith and morals? Why would such an individual concede authority to those who speak or write on its behalf? Effectively for the same reason that one would concede authority in other practical matters that one deems it important to resolve, namely when (1) one recognizes one’s own limitations in forming a cogent resolution, (2) one has grounds for acknowledging the potential authority as thoughtful and well informed with respect to the issues, and finally (3) one has good reason to see this authority as well-intentioned. And it is clear that ecclesiastic authoritativeness can and should be appraised on this same basis. And so the ceding of authority is rationally appropriate where it serves effectively in the correlative range of human ends—where it is life-enhancing in serving to make us wiser, better, happier people. And this is so with ecclesiastical authority in matters of faith and morals as much as with authority of any other kind. What, after all, is it that conscientious parents want for their children? That they be happy and good! (Some will say rich, but that clearly is a desideratum only insofar as it will conduce to happiness!) And so when one asks about effective guidance, it is in these terms that the issue of effectiveness will have to be addressed; it is upon this sort of thing that the effectiveness of guidance will have to pivot. But is ceding authority not a gateway to disaster? What of the imams who turn faithful devotees into suicide bombers? What of cults and their deluded and abused adherents? The point here is simply that, like pretty much anything else, authority can be used and misused. The knife that cuts the bread can wound the innocent. The brick that forms the wall can smash the window. Authority too can be ceded reasonably or inappropriately.
The possibility of abuse calls for sensible care with regard to such prospects, not for their abandonment.

7. WHICH ONE?

But just which religion is fitting for an individual? The fact of it is that in matters of religion, the issue of reasonable choice—of asking whether what one is doing makes good sense—is in general not something people address prospectively, by deciding upon a religious affiliation. On the contrary, it is something they can and generally will do only retrospectively, in the wake of an already assumed commitment. And, perhaps ironically, the very fact that a commitment has already been made and is a fait accompli forms a


significant part of what constitutes a reason for continuing it. Granted, there are alternatives, and it is tempting to think of them as being spread out before us as a matter of choice. But this is quite wrong. Never—or virtually never—do people confront an open choice among alternative religions. For one thing, the realities of place and time provide limits. Homer could not have chosen to be a Buddhist. And cultural accessibility also comes into it. The Parisians of Napoleon’s day could hardly become Muslims. Some are blocked by personal background and disposition. Benjamin Disraeli could hardly have become a Mormon. In the end the question becomes one not simply of “What is effective?” but of “What is effective for me in view of my acculturation to the world’s scheme of things and the manifold of experience—religious experience included?” For ab extra authoritativeness is not all—it will and must be something underpinned by a basis of personal experience. Acquiring ecclesiastical authority is a matter of securing from one’s coreligionists a well-earned recognition as a reliable guide in matters of religious faith and practice. Acknowledgment in matters of ecclesiastical authority is—and must be—a matter of free acknowledgement, just as is the case with cognitive authority. But this said, ecclesiastical authority is akin to practical rather than epistemic authority—it is a matter of what to do, not merely of what to think. The Catholic Church teaches that the pope is the ultimate authority in matters of faith and morals. And it holds every Catholic to be obligated through his faith to accept this fact, grounding this position partly on grounds of revelation and tradition and partly on grounds of reason, in that a committed, coherent faith must have an ultimate arbiter and the historical claims of the papacy are maximal in this regard.
It is thus the Church’s position that while no one is categorically constrained in this matter of authoritativeness, there nevertheless are rational constraints which are decisive for open-mindedly rational thinkers. Granted, religion can be an elective affiliation, but the Church holds that in the existing circumstance the reasonable person is bound, by virtue of this very reasonableness, to see the matter in the Church’s way, not because the Church is the Church, but because the Church is seriously committed to being as reasonable about these issues as the nature of the case permits. However, one final point must be stressed here. Man does not live by reason alone. And there is no necessity for religious commitment to require reason’s Seal of Approval. True, from the strictly rational point of view religion exists to serve the interest of life. After all, other factors are at work in the good life apart from reason. And so many factors can lead a person to undertake commitment to a particular mode of religiosity: family tradition, social pressure, personal inclination, the impetus of personal experience, and so on. Nevertheless, religious commitment, and with it the acknowledgement of ecclesiastical authority, is rational insofar as an essentially functionalistic account can be made successfully operative in the matter. The lesson of the present deliberations is simply that in relation to the rationality of faith it is this functionally goal-oriented perspective of life-enhancement that will have to be given the starring role. (To say this is not, of course, to say that religion must or should be considered “within the limits of reason alone.”)

NOTES

1 Alexis de Tocqueville, Democracy in America, ed. by Thomas Bender (New York: Random House, 1982; Modern Library College edition), p. 299.

2 The only philosophical treatise on authority I know of is Was ist Autorität? by J. M. Bochenski. Curiously, seeing that its author is a priest, the book treats ecclesiastical authority in only a single rather perfunctory paragraph.


Chapter 7

AN EXPLANATORY CONUNDRUM

The little study appended below raises questions regarding the nature of social statistics that philosophers of science have yet to address satisfactorily, let alone resolve. For such statistics are standardly used in the explanation of social phenomena, and this can prove to be rather problematic. The relative stability of suicide rates serves to explain why there were not a million suicides in the US in the year 2000. The character of grammatical usage in English explains why it is that the Polish linguist who stated that “In English noun must have article” uttered an amusing paradox. And the present study’s consideration that the lexical midpoint of reference works in general is in the H-to-L range helps to explain why this is so for the American National Biography. But in such cases the explanatory principles themselves have no really adequate explanation apart from the rather unsatisfying observation that that is just how things happened to develop.

While the natural sciences investigate the modus operandi of nature, the social sciences investigate the modus operandi of man. In each case they seek to explain the observed goings-on. After all, explanation requires inputs—premisses to provide grist to the mill of explanatory reasoning. Now with natural science the expectation—or should one say hope?—is that the explanatory project never reaches a dead end. Here we invest hope in a Principle of Sufficient Reason: that behind every fact there is a yet deeper fact that provides for its natural explanation. But matters seem to be very different with language, where explaining one statistical phenomenon in terms of yet another does not seem all that definitive. Philosophers of science generally acknowledge that there are “statistical explanations of societal facts.” But this is predicated on the idea that conformity to statistical norms is only natural and to be expected because it is destined “to work out” by producing more successes than failures.
But why this pragmatically convenient recourse to statistics should be deemed theoretically adequate—given its adoption in the face of a clear recognition that it is far from failproof—remains something of a mystery.


In the social sciences, the quest for ever deeper explanation runs out into an explanatory cul-de-sac of sheer contingency. The most and best that can be said in such cases is something to the general effect: That’s just how things happen to stand. That’s simply how it has turned out that people do things. There really is no deeper reason for it. After all, telling the difference between facts that have a deeper explanation in terms of underlying general principles and facts that just happen to be as is, because that is how matters chanced to work out, is far from easy. It is an issue that always sends its tentacles out into a wide area. For in the end, only systemic and contextual considerations can discriminate between the fortuitous and the nomically lawful.

This situation finds one of its most vivid and forceful illustrations in the case of language. Here giving explanatory weight to statistical facts seems to be the best one can do. For example, in English E is the most frequently used letter of the alphabet. And in German it is E also, as it is in French and Latin. This seems plausible enough. However, in English T comes next, whereas in German it is N, in French S, and in Latin I. Is there an explanation for this discrepancy? Perhaps! But it is clearly nothing that can run very deep into general principles. Here there just seems to be no deeper reason why. In such matters it would seem that the explanatory project just runs out into the sand of surd factuality. In the end we seem to be stuck in these matters with a “Well, that’s just the way it is.” And in the circumstances of this domain, this situation is flagrant—as the Appendix will show. Here statistical explanation seems to be the best we can do, with deeper explanations being a forlorn hope.

Appendix

WHAT’S IN THE MIDDLE OF THE ALPHABET?

It is clear that the step from M to N marks the middle position of the (English) alphabet. A to M makes for a group of thirteen, and so do N to Z. Moreover, when one removes from the picture the “alphabetical orphans” of English—the letters used less than 0.5% of the time (namely J, Q, X, and Z)—one shifts the alphabetical center (α) from M/N to L/M. So M has good claims to centrality.
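Both notions of centrality discussed here can be checked computationally. The sketch below uses approximate English letter frequencies of the sort found in standard published tables (the exact percentages are illustrative assumptions, not data from the study); it finds the alphabetic center once the “orphans” J, Q, X, Z are dropped, and the letter at which cumulative frequency first reaches the 50 percent level:

```python
# Approximate relative frequencies (percent) of letters in English text,
# of the kind found in standard published frequency tables.
FREQ = {
    'a': 8.167, 'b': 1.492, 'c': 2.782, 'd': 4.253, 'e': 12.702,
    'f': 2.228, 'g': 2.015, 'h': 6.094, 'i': 6.966, 'j': 0.153,
    'k': 0.772, 'l': 4.025, 'm': 2.406, 'n': 6.749, 'o': 7.507,
    'p': 1.929, 'q': 0.095, 'r': 5.987, 's': 6.327, 't': 9.056,
    'u': 2.758, 'v': 0.978, 'w': 2.360, 'x': 0.150, 'y': 1.974,
    'z': 0.074,
}

def alphabetic_center(orphans=('j', 'q', 'x', 'z')):
    """The pair of letters flanking the midpoint of the alphabet
    once the rarely used 'orphans' are removed."""
    letters = [c for c in sorted(FREQ) if c not in orphans]
    half = len(letters) // 2
    return letters[half - 1], letters[half]

def occurrence_median():
    """The first letter at which cumulative frequency, taken in
    alphabetical order, reaches half of the total."""
    total = sum(FREQ.values())
    running = 0.0
    for letter in sorted(FREQ):
        running += FREQ[letter]
        if running >= total / 2:
            return letter

print(alphabetic_center())   # ('l', 'm'): the L/M center the appendix reports
print(occurrence_median())   # 'l': the 50 percent level is reached at L
```

On these figures the two midpoints come apart in just the way the appendix describes: the alphabet’s positional center sits at L/M, while the cumulative-frequency median is already reached at L.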


However, the letter L can also stake some claim to middle standing. For when one looks to the statistical frequency with which successive letters occur in English texts, one finds that the 50 percent level is reached at the letter L. Thus as regards centrality in point of occurrence in English-language texts, it is L and not M that stands in the middle.

Table 1: SOME CENTRALITY DATA

           Alphabetic Center (α)*   Occurrence Median (µ)   Lexical Midpoint (λ)
  English  L/M                      L (1)                   Mo (2)
  German   L/M                      I                       Ko (3)
  French   L/M                      N                       La (4)
  Spanish  M                        L                       Le (5)
  Italian  M                        L                       Ma (6)
  Latin    L/M                      M                       Ma (7)

* English is deficient in J, Q, X, Z; German in J, Q, X, Y; French in W, X, Y, Z; Italian in J, K, W, X, Y; Spanish in J, K, W, X; Latin in J, K, W, X, Y, Z.

NOTES
(1) All occurrence data come from Wikipedia (October 2008).
(2) Per the OED.
(3) Per Brockhaus-Wahrig, Deutsches Wörterbuch (Wiesbaden: F. A. Brockhaus, 1980).
(4) Per Le Grand Robert de la Langue Française (Paris: Dictionnaires le Robert, 2001).
(5) Per the Diccionario Castellano (Esteban de Terreros y Pando). Again, λ is La in the large Oxford Spanish Dictionary.
(6) Per the Grande Dizionario Italiano Dell’uso.
(7) Per the Oxford Latin Dictionary (1968).

A more general situation as regards alphabetic centrality is depicted in Table 1. Here the alphabetical center (α) marks the point where there are just as many letters before as after in the alphabetical order. (Again we may dismiss those “alphabetical orphans.”) The occurrence median (µ) marks the transition letter in point of use-frequency in the language at large: preceding letters are used to the same statistical extent overall as succeeding ones. The lexical midpoint (λ) marks the center of a comprehensive dictionary for the language at issue. These various modes of centrality differ substantially.1 Thus across the different languages we uniformly have λ ≤ α, no doubt for reasons of an alpha-bias against the tail-end of the alphabet, which tends to be more heavily populated by unprominent “orphans.”

Table 1 raises some further questions. One might, plausibly enough, think that the starting letters of the words of a language constitute a random sample of its letters, so that µ = λ. Strange to say, this is only the case with Latin and Italian. So—specifically—why should it be that:

• In French alone µ = N falls substantially after λ = L?

• In German alone µ = I falls substantially before λ = K?

It is not easy to envision a plausible rationale for these discrepancies.

Table 2
THE H-TO-L RANGE OF INFORMATION CENTRALITY

The Encyclopedia Britannica (Micropedia, 1994) ............. Ku
Brockhaus: Die Enzyklopädie (1996) ......................... Ky
La Grande Encyclopédie (1971) .............................. Le
Grande Dizionario Enciclopedico (1967) ..................... Is
Diccionario Enciclopédico Espasa (1992) .................... Ha
Who’s Who in America (2007) ................................ Kr
The “Index of Places” of the Times Atlas of the World ...... La
The 2006 White Pages of the Pittsburgh Telephone Directory . Le
American National Biography ................................ Hu
Dictionary of National Biography (Oxford) .................. Hu
Routledge Encyclopedia of Philosophy ....................... La

Interestingly, when one considers most any large body of alphabetically organized information one generally finds the midpoint in the H-to-L region, which seems uniformly to represent what one might call information centrality. (For details see Table 2.) This contrasts with the L-to-N region of alphabetic centrality, again suggesting the alpha-bias phenomenon of tail-end underutilization. Curiously, the occurrence median (µ) never precedes the H-to-L information center. The eccentricity of this circumstance seems to be a phenomenon in search of an explanation.2 It seems very doubtful, however, that anything very satisfying could possibly come to view here. In this perspective the Principle of Sufficient Reason, which envisions the prospect of explanation “all the way down,” looks decidedly problematic. In language, as in nature, there seem to be phenomena that are utterly fortuitous and random, giving chance a secure purchase in both domains.

NOTES

1

If lexical centrality (λ) tracked occurrence centrality (µ), then the lexical midpoint for English would be Le instead of Mo. Moreover, Greek has M/N as its alphabetical center (α) but λα as the lexical midpoint (λ) of the large Greek-English Lexicon of Liddell and Scott in the latest (1968) edition.

2

I am grateful to Donald Tucker for compiling some of the information given in the tables.


Chapter 8

THE MUSICAL CHAIRS PARADOX

Since obviously not every player at musical chairs can be seated, we have:

(1) ~◊(∀x)Sx   or equivalently   □(∃x)~Sx

where the quantifiers range over players. Nevertheless, the game is such that any player can possibly be seated:

(2) (∀x)◊Sx   or equivalently   ~(∃x)□~Sx

Now let it be that, by definition, u is the youngest unseated player, so that u =df (ιx)(~Sx & ~(∃y)(Sy & Y(y, x))), with Y as “younger than.” Then by definition ~Su, seeing that ~S figures in u’s definition as one of its attributes. Accordingly, we would seem to have:

(3) □~Su

Thus u is necessarily unseated, which would mean that:

(4) (∃x)□~Sx

But (4) flatly contradicts (2). How is this chain of inconsistency to be broken?

What is wrong is in fact the impropriety of the move to (3): the step of inferring that what is true by definition is true by necessity. For it is true only by hypothetical (or conditional) necessity that a certain particular individual comes to be specified by u’s definition, and not by absolute (or categorical) necessity. The move from our definition of u to (3) is therefore inappropriate—and with it the move from □(∃x)~Sx to (∃x)□~Sx.

What is at issue here is a fallacy of reasoning that is sometimes encountered in philosophical contexts. It might be termed “Particularizing a Generic Necessity.” Thus let Z be a set of exclusive and exhaustive but individually contingent claims p1, …, pn. (For example, the set consisting of:


“There are no birds on that wire,” “There is one bird on that wire,” “There is more than one bird on that wire.”) Then by the hypothesis of exhaustivity we shall have it that one or another of the pi must obtain:

(5) □(∃i)pi

But of course in the circumstances we cannot—should not—infer that whichever pi obtains does so by necessity. That is, we do not have:

(6) (∃i)□pi

The generic necessity of (5) cannot be particularized as per (6). To do so is to commit the fallacy in question. The fatalist who argues from the premiss that there must be a day on which Smith will die to the claim that there is a specific day on which Smith must die commits this fallacy. And so does the predestinarian who reasons from the premiss that someone-or-other must win the lottery to the conclusion that there is a fortune-favored someone who must win the lottery.

Even as the move from □(∃x)Fx to (∃x)□Fx is invalid, so (by contraposition) is the move from (∀x)◊Gx to ◊(∀x)Gx. The step from distributive to collective possibility just does not work. The circumstance that any candidate may be elected does not possibilize the election of all candidates. And even if any given truth were known to some (finite) knower, it would not follow that there will be a (finite) knower to whom the totality of truth is known.
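The invalidity of the move from distributive to collective possibility—and the world-relativity of the description “the youngest unseated player”—can be checked by brute enumeration in a toy model. The sketch below (with invented player names) treats each way the music could stop as one “world”:

```python
from itertools import combinations

players = ["Ann", "Bob", "Cal"]   # hypothetical players, listed youngest first
chairs = 2

# Each "world" is one way the game can end: a set of seated players.
worlds = [set(seated) for seated in combinations(players, chairs)]

# (1) Not possibly everyone seated: no world seats every player.
assert not any(w == set(players) for w in worlds)

# (2) Each player possibly seated: every player is seated in some world.
assert all(any(p in w for w in worlds) for p in players)

# "The youngest unseated player" picks out a different individual in
# different worlds, so no particular player is unseated by necessity.
youngest_unseated = {min(set(players) - w, key=players.index) for w in worlds}
assert len(youngest_unseated) > 1
print(sorted(youngest_unseated))  # ['Ann', 'Bob', 'Cal']
```

The definition of u is satisfied in every world, but by nobody in particular across worlds—which is just what blocks the step from a generic to a particularized necessity.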


Chapter 9

TRANSCENDENTAL ARGUMENTATION AND HUMAN NATURE

1. TRANSCENDENTAL ARGUMENTATION

A Transcendental Argument is fundamentally presuppositional in nature. The aim of such argumentation is to exhibit what are (in Immanuel Kant’s terminology) the “conditions under which alone” some humanly essential way of proceeding is possible. We here have to do with what might be called functional necessity, relating to that which is indispensable to realizing a certain aim or objective, where this aim or objective is itself not merely some optional desideratum but actually represents a human need—an essential requisite. Kant’s own example is that for the effective identification of an object existing in the world some sort of experiential interaction with it is indispensably required, so that experiential access is a functional necessity for object-identification. Accordingly, transcendental argumentation has the following generic structure:

• To achieve the end E it is indispensably necessary for us humans to proceed X-wise. [Major premiss]

• Achieving end E is a human need, something that is a requirement of human existence: it is something that is not really optional for us but is mandated by our condition as the kind of being we are. [Minor premiss]

Therefore: We humans must proceed X-wise in conducting our relevant affairs: for us, X-wise proceeding is a necessity.

Nicholas Rescher • Epistemological Studies

That minor, need-oriented premiss here possibilizes the shift from a conditional major to an unconditional conclusion: from a conditional to a categorical necessity.

The major premiss upon which a Transcendental Argument hinges involves a claim of functional necessity. It looks to the conditions under which alone a certain end can be realized. (To inform another person of a certain fact you must somehow encode the information at issue in a certain symbolic form. To sustain human life over time you must somehow provide breathable oxygen and sustaining nourishment.) The minor pivots on the circumstance that this end is inexorably mandated for us as the kinds of being we are as rational creatures of the species homo sapiens.

2. THE RATIONALE

How does a Transcendental Argument of this functionalistic sort secure its rational cogency? What is the rationale of its validity? Descartes has it that I am a thing that thinks. Leibniz has it that I am a being that acts. Putting the two together we come to the recognition that we are beings that act under the direction of thought: intelligent agents, homo sapiens. The impetus inherent in the very nature of such a being is to comport itself as a rational agent—to engage in the intelligently managed pursuit of appropriate objectives—to do what it can to realize the positive potential at its disposal—to be what it can be in the direction of the good. On this basis, the prime directive for a rational agent comes simply to this: “Act rationally!”

Accordingly such an agent is subject to a dual, two-sided impetus. On the one side there is the impetus to self-preservation inherent in Spinoza’s conatus se preservandi. On the other side there is the impetus to self-realization, a conatus se realizandi, to develop itself fully as a being of its kind. A being that guides its actions by intelligence ipso facto stands committed to the endeavor to meet its own needs as the sort of being that it is.
To proceed in this manner is the crux of rational agency. Viewed against this background it is clear that the business of need-satisfaction that lies at the core of Transcendental Argumentation is a venture whose justificatory rationale is deep-rooted in the very nature of rational agency.



3. ANTHROPOCENTRIC NECESSITIES

The minor premisses of a Transcendental Argument look to the “anthropocentric necessity” represented by an authentic human need. Now on this basis, a Transcendental Argument cannot proceed on the basis of some merely fortuitous and optional desideratum; it has to look to something which is a non-negotiably necessary requisite for realizing the end at issue. (To be sure, the necessity at issue need not be a matter of what is logically demonstrable but can look to a contingent “fact of life” that represents a “natural” rather than a “logical” necessity.) What is at issue here cannot simply be a human want or desideratum, but something indispensably required to enable a creature to exist and persist as a human being. It must, in sum, relate to what is essential to homo sapiens as such—something that is an actual need of ours.

Such needs occur at two levels: the ontological and the personal. On the latter side there are the needs that are idiosyncratic to a particular individual; on the former side there are the generic needs that an individual has in virtue of the taxonomic type to which he belongs. It is, however, the former of these that is crucial in the present context. For personal needs are authentic and appropriate only when they are subsumable under higher-order needs (the supreme topmost needs alone being exempt here). For instance, my personal need for the company of my spouse instantiates the generic need of humans for the company of those they love, which in turn is subsumed under the universal need of rational beings at large for interacting with affinity partners.

Philosophers have over the years become increasingly reluctant to subscribe to generalized claims about human nature at large. Nevertheless, it is not only true but effectively obvious that we humans are creatures that:

• live for a finite lifespan amidst generally difficult conditions,

• can suffer depression, anxiety, and pain,

• are aware of our own condition in these respects.
And correlated to these circumstances there is a diversified spectrum of needs that we humans have, individually and collectively, for our well-being and survival.



Viewed in this light, it is plausible to say that our needs prominently include the following (among many others):

• food and drink,

• shelter against the elements and such inauspicious environmental conditions as extremes of heat and cold,

• nurture during infancy and care during childhood.

Without such resources, we humans just could not survive and flourish as the sorts of beings we are. And over and above these requisites for existence there are further requisites inherent in the circumstances that make us humans the kinds of beings we are as social and inquiring creatures, beings who must cooperate in the accession of action-generating information. Meeting a wide variety of demands is accordingly rendered incumbent upon us by our ontological place in reality’s scheme of things.

4. KNOWLEDGE AND INFORMATION MANAGEMENT

An especially critical and philosophically interesting use of transcendental argumentation arises in connection with information management. After all, information is a critical need for homo sapiens as a being whose activities in the world are guided not by reflex or instinct but by intelligent thought grounded in our understanding of how things stand in the world. The accession, processing, storage, and recall of information is essential to us as the sorts of being we are. And these things that we must do in the course of information management—namely inquiry and the processing of its products—are things that we must do to instance ourselves as the sorts of beings we humans are. The only way to achieve this would be to encode information in some sort of symbolic system, and since such a process must unfold over time, it will have to be strings of symbols that are in operation. Those operations and processes that are involved in symbol-manipulation are integral to the “conditions under which alone” we can function as the sort of beings we are.



5. SOCIALIZATION AND COMMUNICATION

Man cannot live in isolation. From infancy on, he wants and, more drastically, needs to function in the context of others. Supportive community is needed at many levels—the family, the tribe, and the wider society. This again requires conventions and rules for the management of human interaction in matters of cooperation and competition. Such norms are yet another prime instance of functional necessities for man and carry customs—and ultimately rules, regulations, and laws—in their wake.

Take, for example, the human need for communication, for transmitting information and ideas from one person to another. This cannot be managed without some system for rendering and receiving signals, for encoding and decoding information in some conventionalized framework of interpretation. There has to be a “language” or a functional equivalent thereof with its characteristic manifold of practices and rules. Communicative norms are clearly one of the prime instances of functional necessity for man.

6. WHAT DO TRANSCENDENTAL ARGUMENTS ESTABLISH?

What does a Transcendental Argument of the style envisioned here actually prove? It shows just exactly this: that a certain practice or policy is indispensably requisite for the realization of certain of our needs. Accordingly the upshot of a Transcendental Argument is a categorical injunction that mandates some modus operandi for us. It puts before us what is, in effect, a demand of rationality. For what of course can and should be concluded from such an argument is that the adoption of such a policy or procedure is rationally appropriate for us. After all, nothing that we do could be more rational and appropriate than so to act as to meet authentic needs of ours.

7. TRANSCENDENTAL ARGUMENTATION AND PRACTICAL REASONING

The definitive difference between factual (“theoretical”) and procedural (“practical”) ratiocination is that reasoning of the former sort leads to a conclusion of the format “Such-and-such is the case,” whereas the latter leads to a conclusion of the format “Such-and-such is to be done.” The former, that is to say, purports to establish a fact; the latter to validate an act-injunction.



Viewed in this perspective, it is clear that Transcendental Argumentation of the sort at issue here is a path that takes us to the very threshold of practical reasoning. For, as we have seen, it seeks to establish a conclusion of the format:

• We humans must proceed X-wise in conducting our affairs [since a need of ours can only be met by doing so].

And the step from here to an outright injunction to action is mediated by what is an effectively trivial injunction: “Do what is necessary to meet your needs” (or: “Act in such a way as to meet your needs”). And in the special case when the “action” at issue is a matter of propositional acceptance, this means that transcendental argumentation can issue in factual conclusions in the practical or pragmatic order of deliberation.


Chapter 10

A MULTITUDE OF WORLDS?

1. HISTORICAL OVERVIEW

Knowledge, of course, is knowledge of the truth, and on the traditional view truth is a matter of correspondence with reality. But how large and comprehensive is the realm of reality? Does it encompass just this world of ours or are there other worlds as well? The history of philosophy is pervaded by many-world theories. Most prominently these include:

The Atomists

The idea of many worlds was the invention of the ancient Greek atomists, Leucippus and Democritus. Their theory of possibility is highly instructive for possible-world deliberations. Adopting a Euclideanly infinitistic view of space, the atomists espoused a theory of innumerable worlds:

There are innumerable worlds, which differ in size. In some worlds there is no sun and moon, in others they are larger than in our world, and in others more numerous. The intervals between the worlds are unequal; in some parts there are more worlds, in others fewer; some are increasing, some at their height, some decreasing; in some parts they are arising, in others failing … There are some worlds devoid of living creatures or plants or any moisture.1

On this basis, the atomists taught that every (suitably general) possibility is realized in fact someplace or other. Confronting the question “Why do dogs not have horns: just why is the theoretical possibility that dogs be horned not actually realized?” the atomists replied that it indeed is realized, but just elsewhere—in another region of all-embracing space. Somewhere within this infinite manifold, there is another world just like ours in every respect save one: that its dogs have horns. That dogs lack horns is simply a parochial idiosyncrasy of the particular local world in which we interlocutors happen to find ourselves. Reality accommodates all possibilities of worlds alternative to this through spatial distribution: as the atomists saw it, all alternative possibilities are in fact actualized in the various subworlds embraced within one spatially infinite superworld.2

Plato (c. 428–347 BC)

Plato took an entirely different approach to the plurality of worlds, one based on the dualism inherent in the distinction of Parmenides of Elea between the way of insightful truth and the way of mere popular opinion. In Books VI and VII of the Republic, in his allegory of the Sun, the Cave, and the Divided Line,3 Plato contemplated two distinct worlds—the world of sight, of the familiar objects of our experiential world, and the world of thought, of which the former is but a pale reflection. The former is but a shadow world that suffices for the crude needs of ordinary mortals; the latter is a realm of ideals accessible only to philosophers through the sophisticated resources of dialectic.

Leibniz (1646–1716)

G. W. Leibniz maintained that this actual world of ours is but one among innumerably many alternative possible worlds. These alternative worlds of course do not exist, but they are conceivable, and indeed—as Leibniz saw it—actually conceived of in the mind of God. Creation was, in effect, a matter of making a preferential choice within the manifold of possibility, which encompassed the entire spectrum of logically coherent alternatives for organizing the arrangements of a world.

Huygens (1629–1695)

For Leibniz, those alternative worlds lay in a reality-detached realm of possibility. With Christiaan Huygens we return to the Atomists’ picture of different worlds actually existing separately from one another in remote regions of space. Like the Atomists, Huygens contemplated other worlds existing in remote regions of the physical cosmos.
In his ingenious work Cosmotheoros (published posthumously in 1698), Huygens envisioned different civilizations of “planetarians” with different lifestyles and cultures living their lives in various remote regions of the cosmos.4 And in many ways the intelligent beings of those different worlds have cognitive processes and resources much like our own, seeing that not only are the principles of mathematics and the laws of nature everywhere the same, but the modes of such organization are bound to have much in common.

Prominent among later theorists to deny Huygens’ position in this regard was William Whewell (1794–1866). In his 1853 treatise On the Plurality of Worlds Whewell argued extensively against the prospect of finding life on other planets of the universe. But with the rise of rocket science and the radiotelescope in the later 20th century, Huygens-like positions once more emerged in many quarters.

Kant (1724–1804)

Immanuel Kant approached the issue of worlds from an epistemological point of departure and correspondingly adopted a position of world dualism. On the one hand stood the world of sense, the mundus sensibilis, in which we exist as physical beings and which we observe via the senses at our disposal. But by contrast there is also the real world, the mundus intelligibilis, the real world of things in themselves standing apart from the domain of things-as-we-know-them accessible to the senses. Of this world—a mere thought-creature—we can have no detailed knowledge whatsoever—it is a Lockean je-ne-sais-quoi standing apart from any sort of being whose detail is cognitively accessible to us.

Heidegger (1889–1976) and Eddington (1882–1944)

Sir Arthur Eddington envisioned a dualism of scientific and everyday reality. He contrasted the physicist’s and the ordinary man’s table—the one firm and solid, the other a largely empty manifold of electromagnetic vibrations. But what is really at issue here are only two variant perspectives on the same reality. And proceeding from a very different sort of perspective, Martin Heidegger adopts a fundamentally analogous position. He contemplates a fundamental duality of perspective as between the world-view of everyday life and that of science—the life-world and the world picture of science. But here too we have not different worlds but two ways of dealing with only one.

Popper (1902–1994)

Karl Popper envisioned three worlds:



• World 1: The physical world, the world of “material” things as studied by the natural sciences. This is a realm in which all of impersonal nature participates.

• World 2: The psychic world, the world of feeling and psychological processes. This is a realm in which not only intelligent beings participate but the “lower” animals do so also to some extent.

• World 3: The world of thought—of ideas, of thought-artifacts, of symbolic processes. This is a realm in which only intelligent beings participate.

Strictly speaking, those items should not really be characterized as worlds, but less ontologically as realms or domains.5 For what is at issue are different processes (physical, psychic, and cognitive) that all transpire in interrelated ways within one selfsame world.

Carnap (1891–1970), David Lewis (1941–2001), and the Possible-World Semanticists

Following adumbrations in Wittgenstein’s Tractatus,6 Rudolf Carnap effectively revived Leibniz’s approach to possible worlds. Without using Leibnizian terminology (or even mentioning Leibniz in this connection), he revised Leibniz’s logico-combinational approach to possible-world specification. In Carnap’s wake, possible-world semanticists further extended and developed this line of approach into the flourishing enterprise of modal semantics. And their demarche was carried further by various philosophers—David Lewis most prominent among them7—who treated the possible worlds of model theory not just as convenient heuristic devices, but as a realistic basis for metaphysical exposition. In this respect, those metaphysicians decidedly over-reached themselves, seeing that they could not possibly identify such worlds in specific detail. This did not matter much with Leibniz, because his application of the idea was essentially theological and he could delegate the problem of world-specification to the mind of God.
To their discredit, these modern atheological theorists have no means at their disposal for implementing this keystone conception of their theory.



Everett and John Wheeler (1911–2008) and Multiverse Physics

It is a stark feature of modern physics that it underdetermines the phenomena. In quantum theory, for example, physics determines only a spectrum of outcomes and not the one definite result that actual observation puts before us. So why should the quantum theorists’ probabilistic “wave packet” of multiple possibilities collapse to the one single result with which experience confronts us? The Everett-Wheeler theory here takes the bull by the horns. All of those other outcomes—so it insists—are also perfectly real. But they are realized in other sectors of a “multiverse,” a vast existential manifold whose existential order lies entirely outside ours. Those so-called alternatives all exist, but do so in “alternative universes” that are interactively disjoint from ours.

2. A TAXONOMY OF POSITIONS

With this brief outline of the historical situation as background, it is instructive to consider a systematic taxonomy of possible positions on the subject. This can take roughly the following form:

I. A Mega-Universe Comprising a Vast Plurality of Actual Worlds

• Distributed in Space
—The Greek Atomists
—Huygens (and Giordano Bruno?)

• Distributed in Time
—Sequential worlds arising over successive world eras

• Distributed in Meta-Space
—The Everett-Wheeler multiverse theory



II. A Proliferation of Possible Worlds

• A plurality of possible worlds that are not real but are mere possibilities, alternative to the real:

A. Actually (physically) possible (possible-world cosmologists)
B. Not physically possible, but possible as thought-constructs (possible-world semanticists such as Leibniz, Wittgenstein, Carnap)
C. Not possible at all, but discussable (hypothetically projectable)

III. Different Realms of Being Within a Single Actual World

Various many-world theories envision not a plurality of different worlds but rather a plurality of distinct aspects or facets of this actual world—different “dimensions of consideration,” as it were, that are available to apprehending intelligences. Theories which instantiate this approach include:

• Plato’s distinction between the world of sense and the ideal world revealed by disciplined inquiry: the worlds of appearance and reality, as it were.

• The frequently propounded two-world theory, distinguishing the world of ordinary-life experience from the world as natural science depicts it (Rickert, Dilthey, Heidegger, Eddington).

• The three-world theory of Popper, envisioning the manifold of a physical, a psychological, and an intellectual world.

• The many-worlds theory of cultural relativism, which envisions as many thought-worlds as there are geographically and traditionally diverse socio-cultural perspectives of understanding (Evans-Pritchard, Collingwood).

• The quasi-solipsism of a theory that envisions as many worlds as there are individual minds, each of which has to “live in its own world,” as it were.



The positions of this final group are not really multi-world theories in the strict sense of the term, because all of them can—and perhaps should—be regarded as maintaining not a plurality of worlds as such, but rather a plurality of different realms or modes of thought about the (real) world. Actually, with approaches of this general sort one can see the actual, real world as spawning a fusion or superposition of variant aspects representing different ways of viewing the world subject to different “perspectives of consideration.” (Of course such differences in perspective need not make for a diversity of objects.)

And so, put in briefest outline, there are three basic approaches to the idea of a multitude of worlds: the mega-universe approach, the alternative-worlds approach, and the mode-of-conceptualization approach. None of them is without its difficulties.

3. PROBLEMS AND DIFFICULTIES OF A MEGA-UNIVERSE THEORY

The mega-universe theory hangs uncomfortably on the horns of a disjointness dilemma. If those “other worlds” are merely located elsewhere in other sectors of this universe, then we really do not have to do with other worlds but rather with one single albeit compartmentally divided world. On the other hand, if those other worlds are truly disjoint from ours—if they do not interact with ours in ways whose invocation is necessary to explain features of our world—then we sever not only physical but also evidential contact with those worlds and have no real reason to accept them.

4. PROBLEMS AND DIFFICULTIES OF ALTERNATIVE-WORLD PROLIFERATION

The alternative-worlds approach would have us accept altogether different worlds, worlds removed from and indeed incompatible with our own in their make-up and their modus operandi.8 What do such worlds involve? For one thing, they must be worlds. As such they will have to be manifolds of concrete reality.
To qualify as such, their constituent individuals must also be concrete as regards the definiteness of their make-up. Specifically, a world must be descriptively complete—that is, any descriptively specifiable feature either must hold of the world or fail to hold of it; there is no other alternative, no prospect of being indecisive with regard to its make-up.9 A world must “make up its mind” about what to be like. In consequence the Law of Excluded Middle must apply: the world and its constituents must exhibit a definiteness of composition through which any particular sort of situation either definitely does or definitely does not obtain. A possible world must be decisive in its composition: its leaves cannot just be greenish—they have to pick out a particular shade; its rooms cannot contain around a dozen people—they have to commit to a definite number. Such a possible world is therefore a saturated (or complete or maximal) state of affairs—one which must either include or preclude any state of affairs that can be described coherently.10

An alternative perspective is mereological: a possible world now being seen as simply the sum-total of the possible individuals that exist within it.11 (The two approaches come to the same thing if we adopt a theory of reductive particularism—or “methodological individualism” as it is sometimes called—according to which every state-of-affairs regarding things-in-general reduces to a collection of facts about some set of individuals.) The individuals of an alternative possible world would, of course, have to satisfy the condition of “compossibility” in being capable of being co-present in one common world.12 A possible world must accordingly be consistent: the “possibility” at issue demands logical coherence, and any descriptive statement about such a world or its constituents cannot be both true and false. (The Law of Contradiction must apply.) If, as Ludwig Wittgenstein maintained, the actual world is the totality of actually existing states of affairs, then a merely possible world would presumably have to be a suitably comprehensive totality of compossible albeit inexistent states of affairs.
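The demand of saturation can be made concrete in the miniature propositional models that possible-world semanticists employ. In the sketch below (the three atomic claims are invented for illustration), a “world” is a maximal truth-assignment; a partial description corresponds to a set of worlds, not to a single world:

```python
from itertools import product

# Three atomic claims about a toy situation (hypothetical examples).
atoms = ["pen_is_red", "butler_did_it", "leaves_are_green"]

# A "world" in this miniature is a maximal assignment: every atomic
# claim is settled as true or false (Excluded Middle holds).
worlds = [dict(zip(atoms, vals))
          for vals in product([True, False], repeat=len(atoms))]
assert len(worlds) == 2 ** len(atoms)   # 8 maximal states

# A partial description ("the butler did not do it") is not a world:
# it leaves other matters unsettled and so picks out several worlds.
candidates = [w for w in worlds if not w["butler_did_it"]]
assert len(candidates) == 4

# Only a maximal description singles out exactly one world.
maximal = [w for w in worlds
           if w["pen_is_red"] and not w["butler_did_it"]
           and w["leaves_are_green"]]
assert len(maximal) == 1
```

In this toy setting maximality is trivially achievable; the text's point is that for a real world the atomic claims are endless, so no finitely stateable description can ever reach the required saturation.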
Accordingly a world is not just any state of affairs,13 but will have to be a "saturated" or "maximal" state of affairs-at-large: a state that affairs-in-toto can assume, a synoptic totality that suffices to resolve, if not everything, then at least everything that is in theory resolvable. (Unlike the state of affairs that "A pen is writing this sentence," a world cannot leave unresolved whether that pen is writing with black ink or blue.) If an authentic world is to be at issue (be it existent or not) this entity must "make up its mind," so to speak, about what features it does or does not have.14 Any assertion that purports to be about it must thus be either definitively true or definitively false, however difficult (or even impossible) a determination one way or the other may prove to be for particular inquirers, epistemologically speaking. Authentic worlds do and must accordingly have a definite character.15

Unfortunately, however, we can never manage to identify such a totality. Consider a state of affairs indicated by such a claim as "The pen on the table is red." (An item cannot just be red: it has to be a definite shade of red; generic redness will not do.) Nor is it a state of affairs that "There are two or three people in the room": that state of affairs has to make up its mind. Nor again is it a state of affairs that "The butler did not do it": its being the wicked gardener who did it is the sort of thing that a state of affairs requires. No matter how much we say, the reality of concrete particulars will go beyond it. As regards those merely possible worlds, we simply have no way to get there from here. There is, of course, contingency in nature, and the world's stage encompasses processes with different possible outcomes—outcomes that would engender different world-histories. However, the possibility of different histories for the world does not engender a proliferation of possible worlds. There are alternatives all right, but no alternative worlds that we could ever identify. Possibilities do not transmute into objects.

To be sure, the identification of our (actual) world is no problem. The matter is simplicity itself. All we need do is to indicate that what is at issue is this world of ours (thumping on the table).16 The very fact of its being the world in which we are all co-present together renders such an essentially ostensive identification of this world unproblematic in point of identification and communication. However, the matter is very different with other "worlds" that do not exist at all. One clearly cannot identify them by an ostension-involving indication that is, by its very nature, limited to the domain of the actual. Identification would have to be effected by other and different means. And here comes the difficulty.
For the only practicable way to identify an unreal possible world is by some sort of descriptive process. And, as the preceding chapter has argued, this procedure is simply not practicable, since its unavoidably schematic character cannot provide for the uniqueness indispensable for the identification of a possible world. For what it would have us do is to project hypotheses specifying the make-up of that nonexistent possible world. But as noted above, such hypotheses can never be elaborated to the point of definiteness—that is, to the point where only a single unique realization of such a specification is available. Only a God capable of synoptic totum simul thinking could possibly effect the requisite determination of such a manifold and its members.
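The underdetermination argument can be put schematically. (The notation below is an editorial gloss, not the chapter's own: D stands for any humanly formulable, and hence schematic, world-description, and Sat(D) for the class of possible worlds answering to it.)

```latex
% Any humanly formulable description D leaves some descriptively
% specifiable feature F undecided, so the worlds satisfying D split
% into those that have F and those that lack it:
\forall D\,\exists F:\;
  \mathrm{Sat}(D \cup \{F\}) \neq \varnothing
  \;\wedge\;
  \mathrm{Sat}(D \cup \{\neg F\}) \neq \varnothing

% Hence no such D ever singles out one unique world:
\forall D:\; |\mathrm{Sat}(D)| \geq 2
```

This is merely a compact restatement of the point that schematic hypotheses can never be elaborated to the definiteness at which only a single realization remains.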


Authentic world-descriptions are not available to finite beings. Their limitless comprehensiveness makes it impracticable to get a descriptive grip on the identificatory particularity necessary for anything worthy of being characterized as a nonexistent world. And so from this angle too we reinforce the thesis that talk of the alternative reality of hypothetical individuals and worlds is bound to deal in abstracta and is thereby unable to present concrete and authentic objects. And here the situation as regards possible worlds is, if anything, even more problematic than that of possible individuals.

Possible-world realism, often also called modal realism, is the doctrine that, apart from this actual world of ours, the realm of being, of what there is, also includes alternative worlds that are not actual but merely possible. Being—reality at large—is thus seen as something broader than mere actuality. There are two versions of the theory. Strong modal realism holds that those alternative worlds really exist, albeit in a different domain of their own, outside the range of our universe's space-time framework. And weak modal realism holds that while those alternative worlds do not really exist, they nevertheless somehow quasi-exist or subsist on their own in total disconnection from anything going on in this actual world of ours, apart, perhaps, from being thought about by real people. Let us begin with the former.

The most emphatic sort of strong modal realism proposed in recent times is that of David Lewis.17 As he sees it, nonactual possible worlds are comparable to "remote planets, except most of them are much bigger than mere planets and they are not remote [since they are not located in our spatial-temporal realm]."18 All of these worlds, and their contents, exist in the generic sense of the term, and all of them stand on exactly the same footing in this regard, although none exists in or interacts with another.
(Existence in a narrower sense is always world-correlative, a matter of placement within some possible world.) This world of ours is nowise privileged in relation to the rest; it does not differ in ontological status but only in the parochial respect that we ourselves happen to belong to it. As Lewis puts it:

Our actual world is only one world among others. We call it alone actual not because it differs in kind from all the rest but because it is the world we inhabit. The inhabitants of other worlds may truly call their own worlds actual, if they mean by "actual" what we do; for the meaning we give to "actual" is such that it refers at any world i to that world i itself. "Actual" is indexical, like "I" or "here," or "now": it depends for its reference on the circumstances of utterance, to wit, the world where the utterance is located.19
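Lewis's indexicality claim in this passage can be glossed in the now-standard semantic notation (an editorial restatement; the double-bracket notation is not the chapter's own):

```latex
% "Actual" denotes, at any world of utterance i, that very world:
\llbracket \text{actual} \rrbracket^{\,i} \;=\; i

% So "this world is actual," uttered at any world i, comes out true
% at i itself, just as "I am here now" is true whenever and wherever
% it is uttered -- actuality, so construed, confers no privilege.
```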


And so, as Lewis' approach has it, the manifold of possible worlds as a whole is the fundamental ontological given. Thus, strictly speaking, there are no "unrealized possible worlds" at all: all possible worlds are realized, all of them exist as parts of one all-comprehensive reality, one vast existential manifold. It is just that they are spatiotemporally and causally disconnected, there being no way save that of thought alone to effect a transit from one to another. What Lewis, in effect, does is to abolish the idea of "nonexistent possibility" in favor of one vast existential realm in which our spatiotemporal real world is only one sector among many. (His theory is much like that of the Greek atomists, except that their worlds were emplaced within a single overarching space and could collide with one another.) With Lewis, as with Spinoza before him, reality is an all-inclusive existential manifold that encompasses the whole of possibility. He holds that it is a fallacy "to think of the totality of all possible worlds as if it were one grand world" because this invites the misstep of "thinking that there are other ways that grand worlds might have been." Lewis thus projects an (extremely generous) conception of existence according to which (1) anything whatsoever that is logically possible is realized in some possible world; (2) the entire manifold of "nonexistent possible worlds" is actually real; (3) the existential status of all of these possible worlds is exactly alike, and indeed is exactly the same as that of our own "real" world; and (4) there is also a narrower, more parochial sense of existence/reality which comes to co-existence with ourselves, that is, being co-located with us within this particular world's spatiotemporal framework. But this position runs up against the decisive fact that one must "begin from where one is," and we are placed within this actual world of ours.
There is no physical access to other possible worlds from this one. For us, other possible worlds cannot but remain mere intellectual projections, mere "figments of the imagination." The problem with Lewis's strong modal realism is that from our own starting point in the realm of the real (the only one that we ourselves can possibly occupy) this whole business of otherwise-existence is entirely speculative, because our own access to the wider realm beyond our parochial reality is limited to the route of supposition and hypothesis.20 Our standpoint—the one we actually have—is the only one in whose terms our own considerations can proceed. The priority of actuality in any discussion of ours is inevitable: it is not a matter of overcoming some capriciously adopted and optionally alterable point of departure.


But what of a weaker possible-world realism, one which, while holding that such worlds do not exist, nevertheless concedes them an attenuated form of being or reality—of quasi-existence in an actuality-detached domain of their own? Many philosophers deem even this sort of thing deeply problematic. As possible worlds were coming into increasing popularity, J. L. Mackie wrote that "talk of possible worlds … cries out for further analysis. There are no possible worlds except the actual one; so what are we up to when we talk about them?"21 And Larry Powers quipped that "The whole idea of possible worlds (perhaps laid out in space like raisins in a pudding) seems ludicrous."22 However, while such disinclination to fanciful speculation seems plausible enough, the principal reason for rejecting the subsistence or quasi-reality of possible worlds lies in their cognitive inaccessibility. For being and identity are correlative, and as the previous discussion has stressed, there is simply no viable way of identifying such merely possible worlds and their merely possible constituents. The problem lies in thinking that the locution "a world just like ours except that …" can be implemented meaningfully. It cannot. For once one specifies any change in the world's actual state of things, that "except that" listing can never be brought to an end. Once we start to play fast and loose with the features of the world we cannot tell with any assurance how to proceed. Consider its law-structure, for example. If electromagnetic radiation propagated at the speed of sound, how would we have to readjust our cosmology? Heaven only knows! To some extent we can conjecture about what consequences would possibly or probably follow from a reality-abrogating supposition. (If the law of gravitation were an inverse cube law, their significantly lesser weight would permit the evolution of larger dinosaurs.) But we cannot go very far here.
We could not redesign the entire world—too many issues would always be left unresolved. In a well-articulated system of geometry the axioms are independent—each can be changed without affecting the rest. But we have little if any knowledge about the interdependency of natural laws, and if we adopt an hypothesis to change one of them we cannot concretely determine what impact this will have on the rest. The specification of alternative possible worlds is an utterly impracticable task for us finite mortals. Even when viewed epistemically, as mere methodological thought-tools, merely possible worlds and individuals remain deeply intractable. Seeing that we can only get at unreal possibilities by way of assumptions and hypotheses, and that such assumptions and hypotheses can never succeed in identifying a concrete world, it follows that we can only
ever consider such worlds schematically, as generalized abstractions. Once we depart from the convenient availability of the actual we are inevitably stymied as regards the identification of nonexistent particular worlds. Whatever we can appropriately say about such "worlds" will remain generic, able to characterize them only insofar as they are of a certain general type or kind. Possible-world theorists have the audacity to employ a machinery of clarification that utilizes entities of a sort of which they are unable to provide even a single identifiable example.23

As far as the ontology of the matter goes, perhaps the most sensible policy is to resort to Ockham's Razor and refrain from any and all recourse to possible worlds.24 Such an approach has it that there is only one world—the real and actual one we all inhabit. But there is also the manifold of potential imaginative possibilities that can be projected by the thinkers of this world. The result is thus a theory of possibilities without possibilia, of de dicto supposition without de re objects, of possibilities that there be a cat on the mat without a possible-cat occupying some spot on the mat. Such a theory has no quarrel with the mode-of-conceptualization approach to there being different aspects of our perspectives upon the world. Nor does it conflict with the multiverse theory of a complex and compound reality of which our world—our astrophysical universe—is but a single component. But it does reject the proliferation of "merely possible" worlds. It rests content with a variety of (de dicto) possibilities for a world without stipulating an object—a "merely possible" world or individual—in which those possibilities are realized. To get a firmer grip on the issue, we shall adopt the propositional operator Rw(p) for "p is realized in world w" (with w* as the actual world).
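For orientation, the behavior of this operator in the three formulas about to be discussed can be set out symbolically. (The display is an editorial restatement in standard modal notation; the operator Rw and the verdict on each formula are the chapter's own.)

```latex
% R_w(p): "p is realized in world w"; w^* names the actual world.
R_{w^*}(p) \;\longleftrightarrow\; p
  \quad\text{(realization in the actual world is simply truth)}

\Diamond(\exists w)\,R_w(p) \;\rightsquigarrow\; \Diamond p
  \quad\text{(harmless when read de dicto: merely ``$p$ is possible'')}

(\exists w)\,\Diamond R_w(p)
  \quad\text{(de re quantification over merely possible worlds: ill-formed)}
```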
On this basis there is no difficulty about Rw*(p) or "p is realized in the actual world," which comes simply to "p is true." Nor is there any great difficulty about ◊(∃w)Rw(p), provided we do not get too serious with the idea of quantifying over possible worlds but simply construe this expression as amounting to the de dicto thesis ◊p—i.e., p is possible. But (∃w)◊Rw(p) with its de re quantification is something else again—something we had best dismiss as ill-formed and inappropriate. These considerations carry back to one common principle: Ockham's Razor of rational economy. Ockham's own view aside, the principle is usually enlisted as an injunction against otiose entities: Do not multiply entities beyond necessity. And just this is the idea at work here. For nonexistent individuals are just that: otiose entities. The idea (thought, imagination) of things that do not exist is certainly there, but thinking about what doesn't
exist does not mean that there is a nonexistent something that one is thinking about. The thesis "X thinks (∃x)Fx" does not entail "(∃x)X thinks Fx," i.e., that there is some item such that X thinks F regarding this item. Thought about nonexistent objects does not require—let alone create—nonexistent thought-objects: the step from de dicto to de re is not appropriate. To talk meaningfully and instructively of alternative possibilities we do not need to postulate alternative possibles to serve as thought-objects. As Kant so eloquently insisted, mere thought is creatively impotent (at least with us finite beings). Viewing matters from this perspective, it would seem to be the better part of wisdom to operate a theory of "possibilities without possibilia": a theory that lets us unproblematically contemplate de dicto possibilities of the form ◊p without construing this as a quasi-actualistic claim of the format Rw(p) where w ranges over "possible worlds." What is at once the simplest and the most inherently plausible approach here would seem to be one predicated on the ontologically minimalistic line embodied in the following theses:

(1) There is only one world, the real world. But there are also (i) various perspectives on it, and (ii) various suppositions launched from minds functioning within it.

(2) If there were other (real) worlds, altogether interactively disparate from ours, then we could not learn of them. Absent interaction, there would be nothing about such disparate worlds that could explain any feature of ours.

(3) Possibilities do not engender things: imagination does not create objects, it creates object-regarding ideas and contentions. There is no valid transit from de dicto possibilities to de re possibilia.

(4) There are different ways of thinking about "the world." But these do not engender different worlds.

5. PROBLEMS AND DIFFICULTIES OF THE MODE-OF-CONCEPTUALIZATION APPROACH


The mode-of-conceptualization approach has one principal point of significant difficulty. It inheres in the question of where it stops. Why just the two Eddingtonean worlds, that of the plain man and that of the physicist? Why just the three Popperian worlds, the physical, the psychic, and the ideational? Why not the world of the dentist, the biologist, the scene painter, the transport engineer? Indeed why not a Berkeleyean world of detached intellects, with as many worlds as there are world-apprehending minds? Once different realms of being are associated with different perspectives of consideration, why not let the world disintegrate into a proliferation of world-perspectives, as with the Leibnizian monads, each of which embodies a world-representation from its own point of view? Once we begin to travel down this particular route there is simply no natural way to come to a finish.

6. CODA

The problem of worlds is a particularly interesting philosophical issue because so many different themes and topics come together here: the metaphysics of nature, the resources of logic, the semantics of being and nonbeing, the anthropology of intellectual culture, and others. And it is also of particular interest because it so vividly illustrates how readily philosophical issues spawn a proliferation of discordant positions. Many philosophical issues assume a dichotomous pro-and-con structure. By contrast, the problem of worlds is one that engenders a diversified variety of divergent lines of consideration.

NOTES

1. Diels-Kranz, 68 A 40 [for Leucippus and Democritus]; tr. G. S. Kirk and J. E. Raven, The Presocratic Philosophers (Cambridge, 1957), p. 411.

2. On the Atomists' theory of alternative worlds see Chap. XV of G. S. Kirk, J. E. Raven, and M. Schofield, The Presocratic Philosophers (Cambridge: Cambridge University Press, 1957; 2nd ed. 1983).

3. Plato's Republic is translated with notes and commentary in F. M. Cornford, The Republic of Plato (Oxford: Clarendon Press, 1941).

4. Christiaan Huygens' Cosmotheoros was published in English translation in London in 1698.


5. For comparable ideas Frege employed the term Reich (realm) in his "Funktion und Begriff."

6. Ludwig Wittgenstein, Tractatus Logico-Philosophicus (London: Kegan Paul, Trench, Trubner & Co., 1922).

7. See David Lewis, On the Plurality of Worlds (Oxford: Basil Blackwell, 1986).

8. On possible worlds in literary theory in their interrelationship with philosophical issues see the author's What If (New Brunswick: Transaction Publishers, 2005).

9. On this feature of concrete worlds see the author's "Leibniz and Possible Worlds," Studia Leibnitiana, vol. 28 (1995), pp. 129–62.

10. See, for example, Plantinga 1974.

11. Chihara 1998 stresses this distinction (see pp. 113–15).

12. Individuals described as "the elder of twin elephants" and "the one and only elephant ever" cannot coexist in one selfsame world. This idea of coexistence is due to Leibniz, who first introduced philosophers to talk about possible worlds. After long lying fallow since his day, the idea was reactivated in contemporary philosophy in Saul Kripke's groundbreaking essay of 1963.

13. "A possible world, then, is a possible state of affairs—one that is possible in the broadly logical sense." (Plantinga 1974, p. 44).

14. Some logicians approach possible worlds by way of possible-world characterizations, construed as collections of statements rather than objects. And there is much to be said on behalf of such an approach. But it faces two big obstacles: (1) not every collection of (compatible) statements can plausibly be said to constitute a world; rather (2) only those can do so which satisfy an appropriate manifold of special conditions intended to ensure that any "world-characterizing" set of propositions is both inferentially closed and descriptively complete, by way of assuring that any possible contention about an object is either true or false.

15. Authentic worlds thus differ from the schematic "worlds" contemplated in such works as Rescher and Brandom 1979. These, of course, are not possible worlds as such but conceptual constructs.

16. Van Inwagen 1980 (pp. 419 ff.) questions that we can uniquely ostend "the" world we live in, since he holds that actual individuals can also exist in other possible worlds. But this turns matters upside down. For unless one has a very strange sort of finger, its here-and-now pointing gesture does not get at things in those other worlds. There is no way of getting lost en route to a destination where we cannot go at all.

17. Lewis 1986, p. 2. The many-worlds theory of quantum mechanics projected by Everett and Wheeler can also be considered in this connection. Other "modal realists" (as they are nowadays called) include not only Leibniz but Robert Adams (see his 1979) and Robert Stalnaker (see his 1984).

18. Despite abjuring a spatial metaphor, Lewis' theory in one of its versions required a metric to measure how near or far one possible world is from another. This leads to hopeless problems. Is a world with two-headed cats closer to or more remote from ours than one with two-headed dogs?

19. Lewis 1973, pp. 85–86.

20. Lewis 1986 devotes to this problem a long section (pp. 108–115) entitled "How Can We Know?" It is the most unsatisfactory part of his book, seeing that what it offers is deeply problematic, owing to its systematic slide from matters of knowledge regarding possibility de dicto to existential commitments de re.

21. John L. Mackie, Truth, Probability and Paradox (Oxford: Clarendon Press, 1973), p. 84.

22. Powers 1976, p. 95.

23. Leibniz, to be sure, was entitled to conjure with alternative possible worlds because they were, for him, theoretical resources as instances of God's entia rationis. Were one to ask him where possible worlds are to come from, he would answer: "Only God knows." And that is exactly correct: only God does. We feeble humans have no way to get there from here. (On Leibniz's theory of possibility see Mates 1986.)

24. Such a position was enthusiastically articulated in Quine 1948.

REFERENCES

Adams, Robert M., "Theories of Actuality," Noûs, vol. 8 (1974), pp. 211–231. Reprinted in Loux 1979, pp. 190–209.

———, "Primitive Thisness and Primitive Identity," The Journal of Philosophy, vol. 76 (1979), pp. 5–26.


Armstrong, David M., A Combinatorial Theory of Possibility (Cambridge: Cambridge University Press, 1989).

Chihara, Charles S., The Worlds of Possibility: Modal Realism and the Semantics of Modal Logic (Oxford: Clarendon Press, 1998).

Chisholm, R. M., The Encyclopedia of Philosophy, ed. by P. Edwards (New York, 1967), vol. 5, pp. 261–263.

Cresswell, M. J., "The World is Everything that is the Case," Australasian Journal of Philosophy, vol. 50 (1972), pp. 1–13. Reprinted in Loux 1979, pp. 129–45.

Felt, James W., "Impossible Worlds," The International Philosophical Quarterly, vol. 23 (1983), pp. 251–265.

Forbes, Graeme, The Metaphysics of Modality (Oxford: Oxford University Press, 1985).

Lewis, David K., "Counterpart Theory and Quantified Modal Logic," The Journal of Philosophy, vol. 65 (1968), pp. 113–26. Reprinted in Loux 1979, pp. 210–28.

———, "Counterfactuals and Comparative Possibility," Journal of Philosophical Logic, vol. 2 (1973), pp. 418–46; reprinted in Philosophical Papers, vol. 2 (Oxford: Oxford University Press, 1986).

Loux, Michael J. (ed.), The Possible and the Actual: Readings in the Metaphysics of Modality (Ithaca, NY: Cornell University Press, 1979).

Lycan, William G., "The Trouble with Possible Worlds," in Loux 1979, pp. 274–316.

———, Philosophy of Language: A Contemporary Introduction (London: Routledge, 2000).

Mackie, J. L., Truth, Probability and Paradox (Oxford: Clarendon Press, 1973).


Mates, Benson, The Philosophy of Leibniz: Metaphysics and Language (New York: Oxford University Press, 1986).

Plantinga, Alvin, The Nature of Necessity (Oxford: Oxford University Press, 1974).

Powers, Larry, "Comments on Stalnaker's 'Propositions'," in A. F. MacKay and D. D. Merrill (eds.), Issues in the Philosophy of Language (New Haven: Yale University Press, 1976).

Quine, W. V. O., "On What There Is," The Review of Metaphysics, vol. 2 (1948), pp. 21–38. Reprinted in From a Logical Point of View, 2nd ed. (New York: Harper Torchbooks, 1958), pp. 1–19, and also in L. Linsky (ed.), Semantics and the Philosophy of Language (Urbana, 1952), pp. 189–206.

Rescher, Nicholas, Imagining Irreality (Chicago: Open Court, 2003).

Rescher, Nicholas and Robert Brandom, The Logic of Inconsistency (Totowa, NJ: Rowman and Littlefield, 1979).

Rosenkranz, Gary, "Reference, Intensionality, and Nonexistent Entities," Philosophical Studies, vol. 50 (1980).

———, "Nonexistent Possibles and their Individuation," Grazer Philosophische Studien, vol. 22 (1984), pp. 127–147.

———, "On Objects Totally Out of this World," Grazer Philosophische Studien, vol. 25/26 (1985–86).

———, Haecceity: An Ontological Essay (Dordrecht: Kluwer, 1993).

Skyrms, Bryan, "Tractarian Nominalism," Philosophical Studies, vol. 40 (1981), pp. 199–206.

Stalnaker, Robert, "A Theory of Conditionals," in Studies in Logical Theory (Oxford, 1968; American Philosophical Quarterly Monograph Series, No. 2), pp. 98–112; see pp. 111–12.


———, Inquiry (Cambridge, MA: Bradford Books/MIT Press, 1984).

van Inwagen, Peter, "Indexicality and Actuality," The Philosophical Review, vol. 89 (1980), pp. 403–26.

Zalta, Edward N., Intensional Logic and the Metaphysics of Intentionality (Cambridge, MA: Bradford Books/MIT Press, 1988).


NAME INDEX

Adams, Marilyn McCord, 27n2
Adams, Robert M., 105n17, 105–06
Amaldi, Edoardo, 27n5
Aquinas, St. Thomas, 20
Aristotle, 23, 60, 63n1, 69
Armstrong, David M., 105
Avogadro, Amedeo, 23
Barnes, E. C., 27n3, 28
Beckman, Jan P., 27n2
Bochenski, J. M., 73n2
Boltzmann, Ludwig, 24
Brandom, Robert, 104n15, 107
Bruno, Giordano, 93
Burns, L., 44n12
Butts, Robert M., 46
Carnap, Rudolf, 92, 94
Chihara, Charles S., 104n11, 106
Chisholm, R. M., 106
Collingwood, R. G., 94
Cornford, F. M., 103n3
Cresswell, M. J., 106
de Tocqueville, Alexis, 66, 73n1
Democritus, 89, 103n1
Descartes, René, 84
Diels, Hermann, 103n1
Dilthey, Wilhelm, 94
Diogenes Laertius, 43n2
Disraeli, Benjamin, 72
Duhem, Pierre, 46, 55n1
Eddington, Arthur, 91, 94
Eubulides of Megara, 29, 34n2
Euclid of Megara, 29
Evans-Pritchard, 94


Everett, 93, 105n17
Felt, James W., 106
Forbes, Graeme, 106
Frege, Gottlob, 104n5
Galen, 43n3
Gillies, Donald, 55n1
Goclenius, Rudolph, 43n3
Gödel, Kurt, 37
Hamilton, William Rowan, 20
Heidegger, Martin, 91, 94
Homer, 72
Hume, David, 46
Huygens, Christiaan, 90–91, 93, 104n4
Kant, Immanuel, 83, 91, 102
Keeler, L., 63n1
Kirk, G. S., 103n2
Kitcher, Paul, 28
Kranz, 103n1
Kripke, Saul, 104n12
Lamb, Charles, 15
Leibniz, G. W., 84, 90, 92, 94, 105n17, 105n23
Leucippus, 89, 103n1
Lewis, David, 92, 98–99, 104n7, 105n17, 105n18, 105n19, 105n20, 106
Loux, Michael J., 105, 106
Lycan, William G., 106
Mackie, John L., 100, 105n21, 107
Mates, Benson, 105n23, 107
McGee, V., 44n12
Mendeleev, D. I., 23
Mill, J. S., 47
Newton, 23


Nolan, D., 28
Ockham, William, 20, 27n2, 101
Panecrutes of Elea, 90
Peirce, C. S., 49
Petley, B. W., 28n6
Planck, Max, 24
Plantinga, Alvin, 104n10, 104n13, 107
Plato, 49, 54, 90, 94, 103n3
Popper, Karl, 91–92, 94
Powers, Larry, 100, 105n22, 107
Prantl, Carl, 43n3
Ptolemy, 23
Quine, W. V. O., 28, 55n1, 105n24, 107
Raphael, R. S., 66
Raven, J. E., 103n2
Rescher, Nicholas, 104n15, 107
Rickert, Heinrich, 94
Rosenkranz, Gary, 107
Sainsbury, R. M., 43n3, 44n12
Schofield, Malcolm, 103n2
Shakespeare, William, 15, 53
Skyrms, Bryan, 107
Socrates, 29
Sorensen, Theodore, 39–40, 43n8
Spinoza, Baruch, 84, 99
Stalnaker, Robert, 105n17, 108
Tucker, Donald, 79n2
van Inwagen, Peter, 105n16, 108
Walsh, Denis, 28
Weinberg, Steven, 27n5


Weizsäcker, C. F. von, 27n5
Wheeler, John, 93, 105n17
Whewell, William, 46, 91
Williamson, Timothy, 39–40, 43n10
Wittgenstein, Ludwig, 92, 94, 104n6
Zalta, Edward N., 108

