

Nicholas Rescher
Epistemic Merit: And Other Essays on Human Knowledge


Bibliographic information published by Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available in the Internet at http://dnb.ddb.de

North and South America by Transaction Books, Rutgers University, Piscataway, NJ 08854-8042, [email protected]
United Kingdom, Eire, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited, White Cross Mills, Hightown, LANCASTER, LA1 4XS, [email protected]

Livraison pour la France et la Belgique: Librairie Philosophique J.Vrin 6, place de la Sorbonne; F-75005 PARIS Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18 www.vrin.fr

2013 ontos verlag P.O. Box 15 41, D-63133 Heusenstamm www.ontosverlag.com ISBN 978-3-86838-178-8

2013. No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work.
Printed on acid-free paper, FSC-certified (Forest Stewardship Council).
This hardcover binding meets the International Library standard.
Printed in Germany by CPI buch bücher gmbh

Preface

The present book continues my longstanding practice of publishing groups of philosophical essays that originated in occasional lecture and conference presentations. Notwithstanding their topical diversity they exhibit a uniformity of method in a common attempt to view historically significant philosophical issues in the light of modern perspectives opened up through conceptual clarification. Over half of the chapters (specifically numbers 2, 3, 4, 7, 9, 10, and 13) were written as contributions to some venture of scholarly publication. Details are given in the footnotes. I am grateful, as ever, to Estelle Burris for helping me to put this material into a form suitable for publication.

EPISTEMIC MERIT
AND OTHER ESSAYS ON HUMAN KNOWLEDGE

TABLE OF CONTENTS

Preface
Chapter 1: Epistemic Merit
Chapter 2: Is Cognitive Self-Criticism Rationally Possible?
Chapter 3: The Problem of Future Knowledge
Chapter 4: Diminishing Returns
Chapter 5: Practical vs. Theoretical Reason
Chapter 6: On Evaluating Scientific Theories
Chapter 7: Authority
Chapter 8: Cognitive Diffusion
Chapter 9: Modeling in Pragmatic Perspective
Chapter 10: Historical Perspectives on the Systematization of Knowledge
Chapter 11: Communicative Approximation in Philosophy
Chapter 12: Particular Philosophies vs. Philosophy at Large
Chapter 13: Ultimate Explanation
Name Index

Chapter 1
EPISTEMIC MERIT

1. THE IDEA OF EPISTEMIC MERIT

With virtually every sort of choice among alternatives, various different aspects of value are bound to come into consideration. Consider automobiles. In evaluating them with a view to their selective preferability, many different evaluative factors will have to be taken into account: economy of operation, mechanical soundness, driving maneuverability, rider comfort, crash safety, and many others. Or again, consider meals, where one can be superior to another in point of: availability, palatability, nourishability, presentation, economy, convenience (ease of preparation). Just the same sort of situation also prevails with regard to epistemic merit: here too various different factors will come into play. Epistemic or cognitive merit relates to the positivities and negativities of the claims or contentions that we deem ourselves to know. And it is clear that our convictions about things can exhibit a substantial variety of epistemic positivities. Prominent among these dimensions of propositional merit are:

– truth
– correctness
– probability
– plausibility
– evidentiation/reliability
– informativeness
– precision/accuracy/detail
– utility/applicability
– importance/significance
– novelty/originality/familiarity
– interest


Throughout this range, a statement bears the virtue at issue to the extent that what it claims to obtain does so. So in each case we are dealing with a sliding-scale range or contrast:

– true/false
– correct/incorrect
– precise/imprecise
– probable/improbable
– plausible/implausible
– well-evidentiated/ill-evidentiated
– informative/uninformative
– accurate/imprecise
– useful/unuseful
– important/unimportant
– novel/familiar (trite)
– interesting/uninteresting

All of these scales of evaluation are applicable to our cognitive commitments and inclinations. Three different factors are at issue on this register, according as the merit relates to truthfulness/reliability, to informativeness, or to utility. The RELIABILITY-ORIENTED merits include: truth, correctness, probability, plausibility, and evidentiation. The INFORMATIVENESS-ORIENTED merits include informativeness, accuracy, and precision. The UTILITY-ORIENTED merits include importance, interest, and novelty/originality. And at this stage a further significant distinction comes into play as well. For on the one hand there stand the intrinsic merits relating to the inherent quality of the information conveyed—its reliability and informativeness. On the other hand there are the utilitarian merits relating to its significance and value.
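For readers who find it convenient, the three-way grouping of merits set out above (reliability-oriented, informativeness-oriented, and utility-oriented) can be laid out as a simple data structure. The family labels and their membership below merely transcribe the classification given in the text; the dictionary layout, the identifier names, and the helper function are illustrative choices of this sketch, not anything the chapter itself proposes.

# A minimal sketch of the three families of propositional merit named in the text.
MERIT_FAMILIES = {
    # Reliability-oriented merits: how well the claim is secured.
    "reliability": ["truth", "correctness", "probability", "plausibility", "evidentiation"],
    # Informativeness-oriented merits: how much the claim tells us.
    "informativeness": ["informativeness", "accuracy", "precision"],
    # Utility-oriented merits: how much the claim matters to us.
    "utility": ["importance", "interest", "novelty/originality"],
}

def families_of(merit: str) -> list[str]:
    """Return the family (or families) under which a given merit falls."""
    return [family for family, members in MERIT_FAMILIES.items() if merit in members]

For example, families_of("plausibility") returns ["reliability"], mirroring the text's placement of plausibility among the reliability-oriented merits.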


Are the epistemic merits of our claims objective or do they lie in the subjectivity of their endorser's mind? In virtually all cases the former situation obtains. Thus, for example, there is nothing subjective about the issue of whether a body of evidence supports a claim strongly or weakly, or whether a certain claim is precise or vague. The one significant exception here is the matter of interest. Whether or not a certain (putative) fact is interesting depends substantially on what the evaluator happens to be interested in. (Note, however, that importance is something else again!)

Propositional merit as here understood is not a feature of what has become known in recent years as "virtue epistemology." For this subject, as generally understood, addresses the merits of the proceedings and faculties of knowers, whereas the presently contemplated merits pertain to what is known (or taken to be so). All the same, the conception of epistemic merit is closely linked to the workings of rationality. For other things being equal it would clearly be irrational ever to prefer endorsing a claim of less epistemic merit to one of greater. Rational preferability is thus a bridge that connects the merit of beliefs to the crucial virtues of their endorsers.

2. TENSION AMONG POSITIVITIES

Ideally we would, of course, want to have information that scored high in every dimension of merit: reliability, informativeness, and utility, etc. But in a difficult and complex world ideals are not all that easily realized—in this matter as in others. For the factors of propositional merit often stand in a state of competing tension with others, reflecting a general situation among multi-factual merits at large. Consider an automobile. Here the parameters of merit clearly include such factors as speed, reliability, repair infrequency, safety, operating economy, aesthetic appearance, road-handling ability. But in actual practice such features are so interrelated that they trade off against one another as complementary desiderata where more of A means less of B. Now it would be ridiculous to have a super-safe car with a maximum speed of two miles per hour. It would be ridiculous to have a car that is inexpensive to operate but spends three-fourths of the time in a repair shop. Invariably, perfection—an all-at-once maximization of every value dimension—is inherently unrealizable because of the inherent interaction of evaluative parameters.1

And this situation also holds in our present case. For example, it is a basic principle of epistemology that increased confidence in the correctness of our estimates can always be secured at the price of decreased accuracy.


For in general an inverse relationship obtains between the definiteness or precision of our information and its substantiation: detail and security stand in a competing relationship. We estimate the height of the tree at around 25 feet. We are quite sure that the tree is 25±5 feet high. We are virtually certain that its height is 25±10 feet. But we can be completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are "completely sure" in the sense that we are "absolutely certain," "certain beyond the shadow of a doubt," "as certain as we can be of anything in the world," "so sure that we would be willing to stake our life on it," and the like. For any sort of estimate whatsoever there is always a characteristic trade-off relationship between the evidential security of the estimate, on the one hand (as determinable on the basis of its probability or degree of acceptability), and on the other hand its contentual detail (definiteness, exactness, precision, etc.). And so a complementarity relationship of the sort depicted in Display 1 obtains. This was adumbrated in the ideas of the French physicist Pierre Maurice Duhem (1861–1916) and may accordingly be called "Duhem's Law."2 In his classic work on the aim and structure of physical theory,3 Duhem wrote as follows:

A law of physics possesses a certainty much less immediate and much more difficult to estimate than a law of common sense, but it surpasses the latter by the minute and detailed precision of its predictions … The laws of physics can acquire this minuteness of detail only by sacrificing something of the fixed and absolute certainty of common-sense laws. There is a sort of teeter-totter of balance between precision and certainty: one cannot be increased except to the detriment of the other.4

In effect, these two factors—security and detail—stand in a relation of inverse proportionality, as per the picture of Display 1. In this way too plausibility and novelty can play off against each other. The former is a matter of fitting into the context of what is accustomed and nonsurprising; the latter is a matter of falling outside the range of the familiar. And these examples illustrate a very general situation. What might be termed desideratum complementarity arises whenever different sorts of merit stand in such an opposing teeter-totter relationship, rendering it inevitable that they cannot both achieve a maximal degree at one and the same time.


This sort of situation is a clear indication that the idea of absolute perfection is simply inapplicable and inappropriate in many evaluative situations. The concurrent maximization in every relevant positivity is simply unavailable in this or indeed any other realistically conceivable world. All that one can ever reasonably ask for is an auspicious combination of values—an overall optimal profile whose nature is bound to depend on the use that its possessor purposes to make of the information at issue.

Display 1
DUHEM'S LAW: THE COMPLEMENTARITY TRADE-OFF BETWEEN SECURITY AND DEFINITENESS IN ESTIMATION
[Figure: a curve of inverse proportionality, s x d = c (constant), with increasing security (s) on one axis and increasing detail (d) on the other.]
NOTE: The shaded region inside the curve represents the parametric range of achievable information, with the curve indicating the limit of what is realizable. The concurrent achievement of great detail and security is impracticable.
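Since the relationship in Display 1 is stated as an equation, it can be written out a little more fully. The following lines are only an illustrative gloss: the symbols s, d, and c are those of the display itself, while the rearrangement and the annotation of the tree-height example are added here for concreteness rather than drawn from Duhem or from the text.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Gloss on Display 1: security times detail is held (roughly) constant,
% where s = evidential security, d = contentual detail, c = a fixed constant.
\[
  s \cdot d = c \qquad\text{so that}\qquad s = \frac{c}{d},
\]
% i.e., security varies inversely with detail: any gain in one is paid for by a loss in the other.
% Reading of the tree-height example: the wider the claimed interval (less detail),
% the greater the assurance with which the claim can be made (more security).
\[
  25 \pm 5\ \text{ft (high detail, modest security)}
  \;\longrightarrow\;
  25 \pm 10\ \text{ft (less detail, more security)}
  \;\longrightarrow\;
  1\ \text{in.\ to } 100\ \text{yd (minimal detail, near-certainty)}.
\]
\end{document}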

3. EROTETIC MERIT

The epistemic merits considered so far have been propositional: they relate to the positivities and negativities of our claims (statements, affirmations). But questions too can exhibit merit and deficiencies. Specifically these facets of erotetic—i.e., question-oriented—merit include such factors as:

– difficulty
– informativeness
– importance
– novelty
– interest


Several features of this list deserve note. The fact is that virtually all of them are anticipated in our previous register of claim-merits, the sole exception here being difficulty—i.e., difficulty of securing a satisfactory answer—an issue which by its very nature is confined to questions. The other criteria are all found on the list of propositional merits. Note, however, that all of those preceding criteria relate to the specifically utilitarian range of issues. They all look to the merits of the proposed answers under the assumption that a correct (true) answer is at hand—an answer whose intrinsic merits can be taken for granted. Thus only the extrinsic/utilitarian merits are now in play. At this point the issue of intrinsic merit is left aside because the point is that the erotetic merits now at issue relate not specifically to particular propositions, but generically to any possible proposition that can afford an acceptable answer to the question at issue.

4. PROCEDURAL MERIT

The conception of epistemic merit can be carried over pretty straightforwardly to the epistemic methods, procedures, and sources as well, seeing that the merit of such a process can be assessed in terms of the product that it delivers. All such resources have their limits as well, being of limited veracity, reliability, precision, etc. (Even the very attestation of our senses is not altogether trustworthy.) The object of rational inquiry is to enhance our high-quality information by providing acceptable answers to our questions. In this context we invariably run two types of risks of error, namely the inappropriate rejection of claims, and their inappropriate acceptance. It must, however, be recognized that in general two fundamentally different kinds of misfortunes are possible in situations where risks are run and chances taken:

1. We reject something that, as it turns out, we should have accepted. We decline to take the chance, we avoid running the risk at issue, but things turn out favorably after all, so that we lose out on the gamble.


2. We accept something that, as it turns out, we should have rejected. We do take the chance and run the risk at issue, but things go wrong, so that we lose the gamble.

If we are risk seekers, we will incur few misfortunes of the first kind, but, things being what they are, many of the second kind will befall us. On the other hand, if we are risk avoiders, we shall suffer few misfortunes of the second kind, but shall inevitably incur many of the first. The overall situation has the general structure depicted in Display 2.

Display 2
RISK ACCEPTANCE AND MISFORTUNES
[Figure: curves of the number of (significant) misfortunes of kind 1 and of kind 2 plotted against increasing risk acceptance (in % of situations, from 0 to 100), with the range spanning Type 1 (Risk avoiders), Type 2.1 (Cautious calculators), Type 2.2 (Daring calculators), and Type 3 (Risk seekers).]
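The shape of Display 2 can also be put into a small numerical sketch. Everything below is stipulated for the sake of illustration: the counts of claims worth accepting and worth rejecting, the quadratic shape of the two misfortune curves, and hence the particular acceptance level that comes out as optimal are assumptions of this toy model rather than figures drawn from the text. The sketch merely shows how a middle-of-the-road policy can minimize misfortunes overall.

# Toy numerical gloss on Display 2 (all numbers and curve shapes are stipulated
# for illustration; they are not Rescher's own figures).  A "policy" is a propensity
# to accept claims, from 0.0 (reject everything) to 1.0 (accept everything).
# Kind-1 misfortunes (rejecting what should have been accepted) fall as acceptance
# rises; kind-2 misfortunes (accepting what should have been rejected) rise with it.
# The quadratic shapes are assumed so that the total has an interior minimum,
# as the display suggests.

def misfortunes(acceptance: float,
                worthy: float = 60.0,
                unworthy: float = 40.0) -> tuple[float, float]:
    """Expected kind-1 and kind-2 misfortunes under a given acceptance policy."""
    kind_1 = worthy * (1.0 - acceptance) ** 2    # opportunities lost by undue caution
    kind_2 = unworthy * acceptance ** 2          # errors embraced by undue boldness
    return kind_1, kind_2

if __name__ == "__main__":
    grid = [i / 100 for i in range(0, 101, 5)]
    totals = {a: sum(misfortunes(a)) for a in grid}
    best = min(totals, key=totals.get)
    for a in (0.0, best, 1.0):   # sceptic, prudent calculator, syncretist
        k1, k2 = misfortunes(a)
        print(f"acceptance={a:.2f}  kind-1={k1:5.1f}  kind-2={k2:5.1f}  total={k1 + k2:5.1f}")
    print(f"misfortune-minimizing acceptance level on this toy model: {best:.2f}")

On these stipulations the total is minimized at an intermediate acceptance level (0.60 on the grid used here); with different stipulated curves the optimum shifts, but the qualitative moral of the display is unchanged.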

Clearly, the reasonable thing to do is to adopt a policy that minimizes misfortunes overall. It is thus evident that both risk-seeking and risk-avoiding approaches will, in general, fail to be rationally optimal. Both proceedings engender too many misfortunes for comfort. The sensible and prudent thing is to adopt the middle-of-the-road policy of


risk calculation, striving as best we can to balance the positive risks of outright loss against the negative ones of lost opportunity. Rationality thus counsels the line: Neither avoid nor court risks, but manage them prudently in the search for an overall minimization of misfortunes. The rule of reason calls for sensible management and a prudent calculation of risks; it standardly enjoins upon us the Aristotelian golden mean between the extremes of risk shunning and risk seeking. Turning now to the specifically cognitive case, it may be observed that scepticism succeeds splendidly in averting misfortunes of the second kind. The sceptic makes no errors of commission; by accepting nothing, he accepts nothing false. But, of course, he loses out on the opportunity to obtain any sort of information. Ignorance, lack of information, cognitive disconnection from the world’s course of things—in short, errors of omission—are also negativities of substantial proportions. The sceptic thus errs on the side of safety, even as the syncretist errs on that of gullibility. The sensible course is clearly that of a prudent calculation of risks. Ultimately, we face a situation of value trade-offs. The merit of an epistemic process and procedure is not determined by its exclusion of any risk of error, which is in the end impossible, but by minimizing the negativities we incur in the inevitable acceptance of such risks. The question becomes: Are we prepared to run a greater risk of mistakes to secure the potential benefit of an enlarged understanding? In the end, the matter is one of priorities—of safety as against information, of ontological economy as against cognitive advantage, of an epistemological risk aversion as against the impetus to understanding. The ultimate issue is one of values and priorities, weighing the negativity of ignorance and incomprehension against the risk of mistakes and misinformation. There is a delicate balance between undue scepticism and gullibility. The crucial fact is that inquiry, like virtually all other human endeavors, is not a cost-free enterprise. The process of obtaining plausible answers to our questions also involves costs and risks. Whether these costs and risks are worth incurring depends on our assessment of the potential benefit to be gained. And unlike the committed sceptic, most of us deem the value of information about the world we live in to


be a benefit of immense value—something that is well worth substantial risks.

5. CONCLUSION

Perfection is unattainable in an imperfect world. And this holds for the epistemic/cognitive domain as well. In nature there is no such thing as a perfectly efficient machine and in cognition there is no such thing as a perfect provider of knowledge. In this realm trade-offs and compromises are unavoidable: invariably the realization of one positivity calls for the sacrifice of some other. In this world cognitive merit is in limited supply—a circumstance that speaks not for scepticism but for caution.

NOTES

1. It is not that economists have not recognized the problem, but just that they have no workable handle on its solution. As Kelvin Lancaster put it in one of the few attempts, "we shall make some assumptions which are, in balance, neither more nor less heroic than those made elsewhere in our present economic theories." (See his "A New Approach to Consumer Theory," The Journal of Political Economy, vol. 74 (1966), pp. 132–57; see p. 135.) On the scale of admissions of defeat this ranks close to the Emperor Hirohito's statement that "the war situation has developed not necessarily to Japan's advantage."

2. It is alike common and convenient in matters of learning and science to treat ideas and principles eponymously. An eponym, however, is a person for whom something is named, and not necessarily after whom this is done, seeing that eponyms can certainly be honorific as well as genetic. Here at any rate eponyms are sometimes used to make the point that the work of the person at issue has suggested rather than originated the idea or principle at issue.

3. La théorie physique: son objet et sa structure (Paris: Chevalier and Rivière, 1906); tr. by Philip P. Wiener, The Aim and Structure of Physical Theory (Princeton: Princeton University Press, 1954). This principle did not elude Niels Bohr himself, the father of complementarity theory in physics: "In later years Bohr emphasized the importance of complementarity for matters far removed from physics. There is a story that Bohr was once asked in German what is the quality that is complementary to truth (Wahrheit). After some thought he answered clarity (Klarheit)." Steven Weinberg, Dreams of a Final Theory (New York: Pantheon Books, 1992), p. 74, footnote 10.

4. Duhem, op. cit., pp. 178–79. Italics supplied.

Chapter 2
IS COGNITIVE SELF-CRITICISM RATIONALLY POSSIBLE?
(Can Cognitive Subjectivity Be Transcended?)

Can our thought possibly achieve a higher "external" standpoint from which the quality of our thinking can itself be assessed? Seemingly not. For as Kant insisted, the "I think" is ubiquitous throughout the whole range of our thinking, conjoined inseparably to our thought as our shadow is attached inseparably to our body. And this being so, we seem destined to remain within the realm of our thinking and so cannot survey it from without. After all, the instruction, "Tell me what is the case over and above and apart from what you think to be the case," asks for the impossible. To assert "P is so but I do not think that it is so" is to become enmeshed in a meaningless self-contradiction: to characterize a fact as such is to claim knowledge of it. Various philosophers from Hegel onwards have urged this idea that we cannot step outside of our thought to criticize the quality of our thinking. As they see it, we are "enclosed in the prison of our thought"—unable to get outside and take a critical view of it and to consider its limitations. With respect to our own thinking, so they maintain, self-criticism is self-contradiction. In support of this position there stands the circumstance that on casual consideration, it would seem that one cannot coherently maintain that "Reality is different from what I think it to be." For consider the following reasoning:

(1) I think that reality is different from what I think it to be. (Supposition)
(2) I think reality to be X-wise. (Supposition)


(3) I think that reality is different from being X-wise, and accordingly not X-wise. (Deductive inference from (1) and (2).)

It is clear here that this reasoning must be incoherent, since (2) and (3) are patently incompatible. And yet, on the other hand, we clearly do not—cannot—endorse the megalomaniacal claim to infallibility inherent in the generalization:

• "Whenever I think p to be true, it is indeed so."

A realistically cautious and fallibilistic stance towards our knowledge is certainly warranted. In following along these trains of thought, it deserves to be noted that a significant equivocation will affect my endorsement of the idea that I think some falsehood to be true. For let I(P) abbreviate "I think the proposition P to be true" or simply "I endorse P." And now contrast

(4) I(∃p)(Ip & ~p)

with

(5) (∃p)I(Ip & ~p)

Given that I(P & Q) ≡ (I(P) & I(Q)), and that I(I(P)) → I(P), the second of these entails the untenable

(∃p)(I(~p) & Ip)

So (5) is clearly unacceptable, unlike its unproblematic and indeed plausible cousin (4).
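The entailment just invoked can be written out step by step. The following is only a reconstruction in the chapter's own notation, using nothing beyond the two principles about I just stated; the intermediate instantiation on a particular proposition a is an expository device added here.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of the entailment: (5) commits its endorser to accepting a proposition and its negation.
\begin{align*}
  &(\exists p)\, I(Ip \;\&\; {\sim}p)      && \text{this is (5)}\\
  &I(Ia \;\&\; {\sim}a)                    && \text{for some particular proposition } a\\
  &I(Ia) \;\&\; I({\sim}a)                 && \text{by } I(P \;\&\; Q) \equiv (I(P) \;\&\; I(Q))\\
  &Ia \;\&\; I({\sim}a)                    && \text{by } I(I(P)) \rightarrow I(P)\\
  &(\exists p)\,(I({\sim}p) \;\&\; Ip)     && \text{generalizing: the untenable conclusion}
\end{align*}
\end{document}

Nothing here goes beyond the distribution and attenuation principles the text already grants; the point is simply that (5), unlike (4), saddles its endorser with accepting some proposition and its negation alike.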


Moreover, the preceding reflections do not as yet confront the crucial fact of an imprecision in many of our cognitive commitments. For as often as not our beliefs are acknowledged as not being exact, precise, and detailed. So with this idea in mind, let us substitute for (2) above its somewhat weakened version:

(2′) I think reality to be approximately X-wise.

And let us analogously accept that (3) above should actually take the somewhat weakened version:

(3′) I think that reality is not exactly (or not precisely) X-wise.

And it is now clear that no contradiction will be forthcoming. For it is not only possible but sensible to subscribe to theses of the format:

• "As I see it, P is approximately true, though I cannot claim it with exactness."

In the end, then, it is not only possible but appropriate to subscribe to the idea:

• "I do not think that reality is exactly as I think it to be."

The acknowledged complexity of the real means that finite intelligences are destined to have an oversimplified—and thereby imperfect—grasp of its make-up. And there is no good reason why they should have to refrain from acknowledging this. In the end, cognitive self-criticism is not only possible but rationally mandated. It is clear that there are specifiable limits to our knowledge. Thus realizing that I am not omniscient, I thereby know that:

• There are facts (truths) that I do not know.

To be sure I cannot identify any of them. For to hold of a specific claim P that it represents a truth that I do not know, I would need to determine that P is indeed a truth—and thereby would have to know it. As far as I am concerned, the condition

– is a truth that I do not know


represents a vagrant predicate which, as far as I am concerned, simply has no known (or indeed knowable) address. Again, realizing full well that I am fallible, I cannot but endorse

• Some of my beliefs are false.

But again I cannot identify any of them. For if P is indeed a (current) belief of mine, then I cannot maintain its falsity without embarking on the absurdity of self-contradiction.1 To be sure, a critical assessment of one's beliefs cannot but underwrite unalloyed confidence in some of them. This indicates not only—on the positive side—the criteria "I think" and "I exist," but also such sceptical beliefs as

• Some of my convictions are false.

For there is simply no way for this conviction of mine to fail to be true, seeing that its denial would of course enmesh us in self-contradiction. But moving in this direction there is nothing infeasible about

• All our generalizations are flawed—this one included.

Nor is any vitiating self-contradiction inherent in:

• All of our generalizations admit of exceptions—this one included.

A cautious, fallibilistic stance towards our knowledge is not only possible but rationally well-advised. For in the end, all universal claims regarding our beliefs are imperfect and capable of improvement—all of them admit of emendation, clarification, elaboration. And there is no reason for denying that that contention is also self-applicable, so one can in theory do a better job of conveying the point that it is designed to make. The lesson of such considerations is clear.


Provided we do not take an unduly rigoristic stance towards the precision, detail and generality of our thinking—but are prepared from the outset to see it as inherently rough and approximative—then there is no reason of general principle why we should not be prepared to view our thought about reality in a fallibilistically self-critical light. Such a contention wears its rationale on its sleeve: it depicts reality from the limited point of view of an imperfect knower. In this sense and in this way it is an aspect of an idealism that coordinates the nature of our claims to knowledge with the situation of the knower.

But let us return to the beginning. The doctrine of pervasive subjectivity has it that I cannot get out beyond my own thoughts—that I cannot scrutinize my thoughts and beliefs ab extra. This may be true in the truistic sense that whatever I do will (ex hypothesi) be something done by me. But it is certainly false in the deeper sense that I cannot sensibly take a critical stance towards what I think to be so. Thought, after all, can take different forms: there is what I think/believe to be so and what I think/believe to be possible. I can consider not only what I actually do believe but also what I possibly might believe. Hypothesis is a resource that enables us to transcend the limits of what we think or believe about matters of fact.2

In a letter of August 1650 addressed to the General Assembly of the Church of Scotland, Oliver Cromwell writes: "I beseech ye, in the bowels of Christ, to think it possible that ye may be mistaken." He was not asking the impossible of those Scotsmen.

NOTES

1. On these vagrant predicates see the author's Epistemetrics (Cambridge: Cambridge University Press, 2006).

2. Does it get us beyond the limits of what we think/believe about matters of possibility? Presumably not, because there just is no "beyond the limits of possibility" (although believed possibility is something else again).

Chapter 3
THE PROBLEM OF FUTURE KNOWLEDGE

1. PREDICTING FUTURE KNOWLEDGE

Philosophers since Aristotle have stressed that knowledge about the future poses drastic problems.1 And even the issue of knowing the future of knowledge itself is particularly challenging. No-one can possibly predict the details of tomorrow's discoveries today. To be sure, there is no inherent problem about predicting that certain discoveries will be made. But their nature is bound to be unfathomable. After all, if we knew the details and could solve tomorrow's problems today, then they simply would not be tomorrow's problems. We may assume—or suppose—that homo sapiens will continue to exist and to do so in a form that will enable him to pursue the prospect of inquiry into the nature of things. With this supposed, we do actually know some important facts about the body of knowledge that will be available to the knowers of the future. One of these relates to retrospective knowledge: knowledge of particular facts regarding the past. That Caesar crossed the Rubicon, that Napoleon lost at Waterloo, that Hitler led Germany to invade Poland in 1939, that there were 48 states in the continental US in the year 2000—these and their like are parts of our currently available body of knowledge that will continue in place in the future. And another of these preserved kinds of knowledge relates to various trans-temporal facts: the speed of sound, the specific gravity of lead, the molecular structure of water, the evolutionary history of man. Such facts will continue securely in place—at least in a rough and approximate formulation. However, that aspect of the future which is most evidently unknowable is the future of invention, of discovery, of innovation—and particularly in the case of science itself.


As Immanuel Kant insisted long ago, every new discovery opens the way to others, and every question that is answered gives rise to yet further questions to be investigated.2 And the fruits of future science are not yet ripe for present picking. The landscape of natural science is ever-changing: innovation is the very name of the game. Not only do the theses and themes of science change but so do the very questions. Scientific inquiry is a creative process of theoretical and conceptual innovation; it is not a matter of pinpointing the most attractive alternative within the presently specifiable range, but one of enhancing and enlarging the range of envisageable alternatives. Such issues pose genuinely open-ended questions of original research: they do not call for the resolution of problems within a preexisting framework but for a rebuilding and enhancement of the framework itself. Most of the questions with which present-day science grapples could not even have been raised in the state-of-the-art that prevailed a generation ago. It is in principle infeasible for us to tell now not only how future science will answer present questions but even what questions will figure on the question agenda of the future, let alone what answers they will engender. In this regard, as in others, it lies in the inevitable realities of our condition that the details of our ignorance are—for us at least—hidden away in an impenetrable fog of obscurity.

And so, the contrast between present knowledge and future knowledge is clearly one that we cannot characterize in detail. It would be utterly unreasonable to expect prognostications of the particular content of scientific discoveries. It may be possible in some cases to speculate that science will solve a certain problem in the future, but how it will do so lies beyond the ken of those who antedate the discovery itself. If we could predict discoveries in detail in advance, then we could make them in advance.3 In matters of scientific importance, then, we must be prepared for surprises. Commenting shortly after the publication of Frederick Soddy's speculations about atomic bombs in his 1920 book Science and Life, Robert A. Millikan, a Nobel laureate in physics, wrote that "the new evidence born of further scientific study is to the effect that it is highly improbable that there is any appreciable amount of available subatomic energy to tap."4 In science forecasting, the record of even the most qualified practitioners is poor. For people may well not even be able to con-


ceive the explanatory mechanisms of which future science will make routine use. In inquiry as in other areas of human affairs, substantial upheavals can come about in a manner that is sudden, unanticipated, and sometimes unwelcome. Major scientific breakthroughs often result from research projects that have very different ends in view. Louis Pasteur’s discovery of the protective efficacy of inoculation with weakened disease strains affords a striking example. While studying chicken cholera, Pasteur accidentally inoculated a group of chickens with a weak culture. The chickens became ill, but, instead of dying, recovered. Pasteur later reinoculated these chickens with fresh culture—one strong enough to kill an ordinary chicken. To Pasteur’s surprise, the chickens remained healthy. Pasteur then shifted his attention to this interesting phenomenon, and a productive new line of investigation opened up. In empirical inquiry, we generally cannot tell in advance what further questions will be engendered by our endeavors to answer those on hand. New scientific questions arise from answers we give to previous ones, and thus the issues of future science simply lie beyond our present horizons. It is a key fact of life that ongoing progress in scientific inquiry is a process of conceptual innovation that always places certain developments outside the cognitive horizons of earlier workers because the very concepts operative in their characterization become available only in the course of scientific discovery itself. (Short of learning our science from the ground up, Aristotle could have made nothing of modern genetics, nor Newton of quantum physics.) The major discoveries of later stages are ones which the workers of a substantially earlier period (however clever) not only have failed to make but which they could not even have understood, because the requisite concepts were simply not available to them. Thus, it is effectively impossible to predict not only the answers but even the questions that lie on the agenda of future science. Detailed prediction is beyond the reach of reasonable aspiration in those domains where innovation is preeminently conceptual.


2. DETAIL vs. GENERALITY

Forecasts of scientific developments conform to the vexatious general principle that, other things being equal, the more informative a forecast is, the less secure it is, and conversely, the less informative, the more secure it is. It is a fundamental law of epistemology that increased confidence in the correctness of our estimates can always be secured at the price of decreased accuracy. For in general an inverse relationship obtains between the definiteness or precision of our information and its substantiation: detail and security stand in a competing relationship. We estimate the height of the tree at around 25 feet. We are quite sure that the tree is 25±5 feet high. We are virtually certain that its height is 25±10 feet. But we can be completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are "completely sure" in the sense that we are "absolutely certain," "certain beyond the shadow of a doubt," "as certain as we can be of anything in the world," "so sure that we would be willing to stake our life on it," and the like. For any sort of estimate whatsoever there is always a characteristic trade-off relationship between the evidential security of the estimate, on the one hand (as determinable on the basis of its probability or degree of acceptability), and on the other hand its contentual detail (definiteness, exactness, precision, etc.).

An ironic but critically important feature of scientific inquiry is that the unforeseeable tends to be of special significance just because of its unpredictability. The more important the innovation, the less predictable it is, because its very unpredictability is a key component of importance. Science forecasting is beset by a pervasive normality bias, because the really novel often seems so bizarre. A. N. Whitehead has wisely remarked:

If you have had your attention directed to the novelties in thought in your own lifetime, you will have observed that almost all really new ideas have a certain aspect of foolishness when they are first produced.5

Before the event, a revolutionary scientific innovation will, if imaginable at all, generally be deemed outlandishly wild speculation—mere science fiction, or perhaps just plain craziness.6


3. IS FUTURE KNOWLEDGE DIMINISHING?

Some theorists have maintained that as science progresses, the magnitude of the issues grows ever smaller and smaller. Later questions, so they hold, are always lesser questions, so that later science is always lesser science. Successive innovation becomes a matter of increasing refinement in detail, and furnishes new materials whose inherent significance decreases continually—exactly as with the decimal expansion of π or √2. Scientific inquiry would thus be conceived of as analogous to terrestrial exploration, whose product—geography—yields results of continually smaller significance which fill in ever more minute gaps in our information. First explorers discover continents, later ones find river sources, and still later ones conquer mountains. On such a view, the later investigations yield findings of ever smaller scope and significance, with each successive accretion making a relatively smaller contribution to what has already come to hand. The advance of science leads, step by diminished step, toward a fixed and final view of things. This general position is central to Charles Sanders Peirce's vision of ultimate convergence in scientific inquiry:

As the investigation goes on, additions to our knowledge … are of less and less worth. Thus, when Chemistry sprang into being, Dr. Wollaston, with a few test tubes and phials on a tea-tray, was able to make new discoveries of the greatest moment. In our day, a thousand chemists, with the most elaborate appliances, are not able to reach results which are comparable in interest with those early ones. All the sciences exhibit the same phenomenon …7

But such a theory encounters deep difficulties. For any such picture of convergence, however carefully crafted, will shatter against the conceptual innovation that continually brings entirely new, radically different scientific concepts to the fore and brings in its wake an ongoing wholesale revision of “established fact.” Consider how many facts about a simple object—a sword, for example—were unknown to the ancients. They did not know that it contained carbon or that it con-


ducted electricity. The very concepts at issue ("carbon," "electricity-conduction") were outside their cognitive range. There are key facts (or presumptive facts) even about the most familiar things—trees and animals, bricks and mortar—that were unknown a hundred years ago. This ignorance arises because the required concepts have not been formulated. It is not just that the scientists of antiquity did not know what the half-life of californium is but that they couldn't have understood this fact if someone had told them about it. Ongoing scientific progress is emphatically not simply a matter of increasing accuracy by extending the numbers in our otherwise stable descriptions of nature out to a few more decimal places. It will not serve to take the preservationist stance that the old theories are generally acceptable as far as they go and merely need supplementation; significant scientific progress is genuinely revolutionary in that there is a fundamental change of mind as to how things happen in the world. Progress of this caliber is generally a matter not of adding further facts to fill ever diminishing gaps—on the order of filling in a crossword puzzle—but of changing the framework itself. The fact is that in natural science even small innovations with respect to data can engender large changes in theoretical systematization so as to render future science presently inscrutable.

4. IS FUTURE KNOWLEDGE PERFECTIBLE?

How far can the scientific enterprise advance toward a definitive understanding of reality? Might science attain a point of recognizable completion? But is the achievement of perfected science actually a genuine possibility, even in theory when all of the "merely practical" obstacles are put aside as somehow incidental? After all, what would perfected science be like? What sort of standards would it have to meet? Clearly, it would have to complete in full the discharge of natural science's mandate or mission. It thus appears that if we are to claim that our science has attained a perfected condition, it would have to satisfy (at least) the four following conditions:


1. Erotetic completeness: It must answer, in principle at any rate, all those descriptive and explanatory questions that it itself countenances as legitimately raisable, and must accordingly explain everything it deems explicable.

2. Predictive completeness: It must provide the cognitive basis for accurately predicting those eventuations that are in principle predictable (that is, those which it itself recognizes as such).

3. Pragmatic completeness: It must provide the requisite cognitive means for doing whatever is feasible for beings like ourselves to do in the circumstances in which we labor.

4. Temporal finality (the omega-condition): It must leave no room for expecting further substantial changes that destabilize the existing state of scientific knowledge.

Each of these modes of substantive completeness deserves detailed consideration. First, however, one brief preliminary remark. It is clear that any condition of science that might qualify as "perfected" would have to meet certain formal requirements of systemic unity. If, for example, there are different routes to one and the same question (for instance, if both astronomy and geology can inform us about the age of the earth), then these answers will certainly have to be consistent. Perfected science will have to meet certain requirements of structural systematicity in the manner of its articulation: it must be coherent, consistent, consonant, uniform, harmonious, and so on. Such requirements represent purely formal cognitive demands upon the architectonic of articulation of a body of science that could lay any claim to perfection. Interesting and important though they are, we shall not, however, engage these formal requirements here, our present concern being with those four just-mentioned substantive issues.8


5. THEORETICAL ADEQUACY: ISSUES OF EROTETIC COMPLETENESS

Could we ever actually achieve erotetic completeness—the condition of being able to resolve, in principle, all of our (legitimately posable) questions about the world? Could we ever find ourselves in this position?9 In theory, yes. A body of science certainly could be such as to provide answers to all those questions it allows to arise. But just how meaningful would this mode of completeness be? The reality is that erotetic completeness is an unattainable mirage. We can never exhaust the range of open questions. For the world's furnishings are cognitively opaque; we cannot see to the bottom of them. Knowledge can become more extensive without thereby becoming more complete. And this view of the situation is rather supported than impeded if we abandon a cumulativist/preservationist view of knowledge or purported knowledge for the view that new discoveries need not supplement but can displace old ones.

It is sobering to realize that the erotetic completeness of a state of science S does not necessarily betoken its comprehensiveness or sufficiency. It might reflect the paucity of the range of questions we are prepared to contemplate—a deficiency of imagination, so to speak. When the range of our knowledge is sufficiently restricted, then its erotetic completeness will merely reflect this impoverishment rather than its intrinsic adequacy. Conceivably, if improbably, science might reach a purely fortuitous equilibrium between problems and solutions. It could eventually be "completed" in the narrow erotetic sense—providing an answer to every question one can ask in the then-existing (albeit still imperfect) state of knowledge—without thereby being completed in the larger sense of answering the questions that would arise if only one could probe nature just a bit more deeply. And so, our corpus of scientific knowledge could be erotetically complete and yet fundamentally inadequate. Thus, even if realized, this erotetic mode of completeness would not be particularly meaningful. (To be sure, this discussion proceeds at the level of supposition contrary to fact. The exfoliation of new questions from old in the course of scien-


tific inquiry that is at issue in Kant’s Principle of question-propagation spells the infeasibility of ever attaining erotetic completeness.) After all, any judgment we can make about the laws of nature—any model we can contrive regarding how things work in the world—is a matter of theoretical triangulation from the data at our disposal. And we should never have unalloyed confidence in the definitiveness of our data base or in the adequacy of our exploitation of it. Observation can never settle decisively just what the laws of nature are. In principle, different law-systems can always yield the same observational output: as philosophers of science are wont to insist, observations underdetermine laws. To be sure, this worries working scientists less than philosophers, because they deploy powerful regulative principles—simplicity, economy, uniformity, homogeneity, and so on—to constrain uniqueness. But neither these principles themselves nor the uses to which they are put are unproblematic. No matter how comprehensive our data or how great our confidence in the inductions we base upon them, the potential reversibility of our claims cannot be dismissed. We can reliably estimate the amount of gold or oil yet to be discovered, because we know the earth’s extent and can thus establish a proportion between what we know and what we do not. But we cannot comparably estimate the amount of knowledge yet to be discovered, because we have and can have no way of relating what we know to what we do not. But (to hark back to Hegel), with respect to the realm of knowledge, we are not in a position to draw a line between what lies inside and what lies outside—seeing that, ex hypothesi we have no cognitive access to that latter. One cannot make a survey of the relative extent of knowledge or ignorance about nature except by basing it on some picture of nature that is already in hand—that is, unless one is prepared to take at face value the deliverances of existing science. This process of judging the adequacy of our science on its own telling is the best we can do, but it remains an essentially circular and consequently inconclusive way of proceeding. The long and short of it is that there is no cognitively adequate basis for maintaining the completeness of science in a rationally satisfactory way.


6. PRAGMATIC COMPLETENESS

The arbitrament of praxis over our scientific contentions—not their theoretical merit but practical applicability—affords the best standard of adequacy. But could we ever be in a position to claim that science has been completed on the basis of the success of its practical applications? On this basis, the perfection of science would have to manifest itself in the perfecting of control—in achieving a perfected technology. But just how are we to proceed here? Could our natural science achieve manifest perfection on the side of control over nature? Could it ever underwrite a recognizably perfected technology?

The issue of "control over nature" involves much more complexity than may appear at first view. For just how is this conception to be understood? Clearly, in terms of bending the course of events to our will, of attaining our ends within nature. But this involvement of "our ends" brings to light the prominence of our own contribution. For example, if we are inordinately modest in our demands (or very unimaginative), we may even achieve "complete control over nature" in the sense of being in a position to do whatever we want to do, but yet attain this happy condition in a way that betokens very little real capability.

One might, to be sure, invoke the idea of omnipotence, and construe a "perfected" technology as one that would enable us to do literally anything. But this approach would at once run into the old difficulties already familiar to the medieval scholastics. They were faced with the challenge: "If God is omnipotent, can he annihilate himself (contra his nature as a necessary being), or can he do evil deeds (contra his nature as a perfect being), or can he make triangles have four angles (contrary to their definitive nature)?" Sensibly enough, the scholastics inclined to solve these difficulties by maintaining that an omnipotent God need not be in a position to do literally anything but rather simply anything that it is possible for him to do. Similarly, we cannot explicate the idea of technological omnipotence in terms of a capacity to produce any result whatsoever, wholly without qualification. We cannot ask for the production of a perpetuum mobile, for spaceships with "hyperdrive" enabling them to attain transluminar velocities, for devices that predict essentially stochastic processes such as the disintegrations of transuranic atoms, or for piston devices that


enable us to set independently the values for the pressure, temperature, and volume of a body of gas. We cannot, in sum, ask of a “perfected” technology that it should enable us to do anything that we might take into our heads to do, no matter how “unrealistic” this might be. All that we can reasonably ask of it is that perfected technology should enable us to do anything that it is possible for us to do—and not just what we might think we can do but what we really and truly can do. A perfected technology would be one that enabled us to do anything that can possibly be done by creatures circumstanced as we are. But how can we deal with the pivotal conception of “can” that is at issue here? Clearly, only science—real, true, correct, perfected science—could tell us what indeed is realistically possible and what circumstances are indeed inescapable. Whenever our “knowledge” falls short of this, we may well “ask the impossible” by way of accomplishment (for example, spaceships in “hyperdrive”), and thus complain of incapacity to achieve control in ways that put unfair burdens on this conception. Power is a matter of the “effecting of things possible”—of achieving control—and it is clearly cognitive state-of-the-art in science which, in teaching us about the limits of the possible, is itself the agent that must shape our conception of this issue. Every law of nature serves to set the boundary between what is genuinely possible and what is not, between what can be done and what cannot, between which questions we can properly ask and which we cannot. We cannot satisfactorily monitor the adequacy and completeness of our science by its ability to effect “all things possible,” because science alone can inform us about what is possible. As science grows and develops, it poses new issues of power and control, reformulating and reshaping those demands whose realization represents “control over nature.” For science itself brings new possibilities to light. (At a suitable stage, the idea of “splitting the atom” will no longer seem a contradiction in terms.) To see if a given state of technology meets the condition of perfection, we must already have a body of perfected science in hand to tell us what is indeed possible. To validate the claim that our technology is perfected, we need to preestablish the completeness of our science. The idea works in such a way that claims to perfected control can rest only on perfected science.


In the final analysis, then, we cannot regard the realization of "completed science" as a meaningful prospect—because we cannot really say science-independently what it is that we are asking for. And this consideration decisively substantiates the idea that we must always presume our knowledge to be incomplete in the domain of natural science.

7. PREDICTIVE COMPLETENESS

The difficulties encountered in using physical control as a standard of "perfection" in science will also hold with respect to prediction, which, after all, is simply a mode of cognitive control. Suppose someone asks: "Are you really still going to persist in plaints regarding the incompleteness of scientific knowledge when science can predict everything?" The reply is simply that science will never be able to predict literally everything: the very idea of predicting everything is simply unworkable. For then, whenever we predict something, we would have to predict also the effects of making those predictions, and then the ramifications of those predictions, and so on ad indefinitum. The very most that can be asked is that science put us into a position to predict, not everything, but rather anything that we might choose to be interested in and to inquire about. And here it must be recognized that our imaginative perception of the possibilities might be much too narrow. We can only make predictions about matters that lie, at least broadly speaking, within our cognitive horizons. Newton could not have predicted findings in quantum theory any more than he could have predicted the outcome of American presidential elections. One can only make predictions about what one is cognizant of, takes note of, deems worthy of consideration. In this regard, one can be myopic either by not noting or by losing sight of significant sectors of natural phenomena. In the end it is science itself that determines the limits to predictability—insisting that some phenomena (the stochastic processes encountered in quantum physics, for example) are inherently unpredictable. And this is always to some degree problematic. After all, the most that science can reasonably be asked to do is to predict what is predictable.


But this will have to be what it itself sees as in principle predictable. No more can be expected of science than answering every predictive question that it itself countenances as proper. And not only is this problematically circular, but we must once more recognize that any given state of science might have gotten matters quite wrong. With regard to predictions, we are thus in the same position that obtains with regard to actually interventionist (rather than "merely cognitive") control. Here, too, we can unproblematically apply the idea of improvement—of progress. But it makes no sense to contemplate the achievement of perfection. For its realization is something we could never establish by any practicable means.

8. TEMPORAL FINALITY

And now on to temporal finality. Scientists from time to time indulge in eschatological musings and tell us that the scientific venture is approaching its end.10 And it is, of course, entirely conceivable that natural science will come to a stop, and will do so not in consequence of a cessation of intelligent life but in C. S. Peirce's more interesting sense of completion of the project: of eventually reaching a condition after which even indefinitely ongoing inquiry will not—and indeed in the very nature of things cannot—produce any significant change, because inquiry has come to "the end of the road." The situation would be analogous to that envisaged in the apocryphal story in vogue during the middle 1800s regarding the Commissioner of the United States Patent Office who resigned his post because there was nothing left to invent.11 Such a position is in theory possible. But here, too, we can never effectively determine that it is actual. There is no practicable way in which the claim that science has achieved temporal finality can be validated. The question "Is the current state of science, S, final?" is one for which we can never legitimate an affirmative answer. For the prospect of future changes of S can never be precluded. After all, one cannot plausibly move beyond "We have (in S) no good reason to think that S will ever change" to obtain "We have (in S) good reason to think that S will never change." Moreover, just as the appearance of erotetic and pragmatic equilibrium can be a product of narrowness and weakness, so can temporal finality. We may think that science is unchangeable simply because


We may think that science is unchangeable simply because we have been unable to change it. But that’s just not good enough. Were science ever to come to a seeming stop, we could never be sure that it had done so not because it is at “the end of the road” but because we are at the end of our tether. We can never ascertain that science has attained the X-condition of final completion, since from our point of view the possibility of further change lying “just around the corner” can never be ruled out finally and decisively. No matter how final a position we appear to have reached, the prospects of its coming unstuck cannot be precluded. As we have seen, future science is inscrutable. We can never claim with assurance that the position we espouse is immune to change under the impact of further data—that the oscillations are dying out and we are approaching a final limit. In its very nature, science “in the limit” relates to what happens in the long run, and this is something about which we in principle cannot gather information: any information we can actually gather inevitably pertains to the short run and not the long run. We can never achieve adequate assurance that apparent definitiveness is real. We can never consolidate the claim that science has settled into a frozen, changeless pattern. The situation in natural science is such that our knowledge of nature must ever be presumed to be incomplete—and thereby inadequate overall. One is thus led back to the stance of the idealistic tradition from Plato to Royce that human knowledge inevitably falls short of recognizably “perfected science” (the Idea, the Absolute), and must accordingly be looked upon as deficient. Our knowledge of the real is something we can certainly improve upon—but not something we can perfect. As best we can judge, science is destined to remain incomplete.

9. THE PROBLEM OF FUTURE SCIENCE IN ITS RELATION TO REALITY

How then are we to relate present-day science to the science of the future? The preceding considerations must inevitably constrain and condition our attitude toward the natural mechanisms envisaged in the science of the day.


We certainly do not—or should not—want to reify (hypostasize) the “theoretical entities” of our current science as current science sees them—to say flatly and unqualifiedly that the contrivances of our present-day science correctly depict the furniture of the real world. We do not—or at any rate, given the realities of the case, should not—want to adopt categorically the ontological implications of scientific theorizing in just exactly the state-of-the-art configurations presently in hand. An unavoidable fallibilism precludes the claim that what we purport to be scientific knowledge is in fact real knowledge, and accordingly blocks the path to a scientific realism that maintains that the furnishings of the real world are exactly as our science states them to be. If the future is anything like the past, if historical experience affords any sort of guidance in these matters, then we know that all of our scientific theses and theories at the present scientific frontier will ultimately require revision in some (presently altogether indiscernible) details. All the experience we can muster indicates that there is no justification for viewing our science as more than an inherently imperfect stage within an ongoing development. The ineliminable prospect of far-reaching future changes of mind in scientific matters destroys any prospect of claiming that the world is as our science claims it to be—that present science’s view of nature’s constituents and laws is definitively correct. Our prized “scientific knowledge” is no more than our “current best estimate” of the matter. The step of reification is always to be taken provisionally, subject to a mental reservation of presumptive revisability. We cannot but acknowledge the prospect that we shall ultimately recognize many or most of our frontier scientific theories to need revision and that what we proudly vaunt as scientific knowledge is a tissue of hypotheses—of tentatively adopted contentions, many or most of which we will ultimately come to regard as requiring serious revision or perhaps even abandonment.

And so a clear distinction must be maintained between “our conception of reality” and “reality as it really is.” We must—and do—realize that there is precious little justification for holding that present-day natural science describes reality and depicts the world as it really is. And this constitutes a decisive impediment to any straightforward realism. It must inevitably constrain and condition our attitude towards the natural mechanisms envisioned in contemporary science.


We certainly do not—or should not—want to reify (hypostatize) flat-out the “theoretical entities” of present-day science, to say flatly and without qualification that the contrivances of our present-day science correctly depict the nature of things as they actually and ultimately are. This situation blocks the option of scientific realism of any straightforward sort. Not only are we not in a position to claim that our knowledge of reality is complete (that we have gotten at the whole truth of things), but we are not even in a position to claim that our “knowledge” of reality is correct (that we have gotten at the real truth of things) in a way that precludes the need for any future qualification and emendation. Such a position calls for the humbling view that just as we think our predecessors of a century ago had a fundamentally inadequate grasp on the “furniture of the world,” so our successors of a millennium hence will take a similar view of our purported knowledge of things. We do not—or at any rate, given the realities of the case, should not—want to adopt categorically the ontological implications of scientific theorizing in just exactly the state-of-the-art configuration presently in hand. A realistic acknowledgment of scientific fallibilism precludes the claim that the furnishings of the real world are exactly as our science states them to be—that electrons actually are just exactly as the latest Handbook of Physics claims them to be. In relation to what is to come, our present thought about nature is no more than our inadequate anticipation—an estimate rather than a specification. And the fact of it is that we shall never be able to make claims about reality that go beyond what we presently think to be the case: reality as we can deal with it will always have to be our reality—reality as we presently conceive it to be.

10. EPISTEMIC VISTAS

Thought about nature is a complex issue, if only because nature’s make-up can be known with different degrees of adequacy. The scale at issue in Display 1 is significant and instructive in this regard.


It seems unavoidable that when X is a large-scale and complex fact regarding nature, then as one moves down that list one is soon out of one’s depth, since in substantial matters of scientific inquiry we can seldom get beyond (2). And the fact of it is that we do and cannot but realize that there is a potential gap between (3) and (4) and that no matter when that future date may fall there is room for error.

Display 1
STEPS TOWARDS OBJECTIVITY

(1) I think that . . . [SELF-IMPRESSION]

(2) We (nowadays) think that . . . [GROUP-IMPRESSION]

(3) They will then think that . . . [FUTURE IMPRESSION]

(4) They will eventually or ultimately think that . . . [LONG-RUN ULTIMACY]

(5) It is to be thought (because actually true) that . . . [IDEALITY]

(The scale runs from the SUBJECTIVE at step (1) to the OBJECTIVE at step (5).)

Moreover, the instruction “Tell me what X is actually like, over and above and apart from what you think it to be” is an instruction that we cannot obey. The progressiveness of science means that the merely-seems/actually-is distinction is one we cannot operate with respect to matters of exact detail. After all, who must X be in order for the following equivalence to hold:

X thinks p to be the case ≡ p is actually the case

Pretty well nobody maintains that “X = the scientific community of the present day” will do the job. But for a time C. S. Peirce thought that the following would work out:

X = the scientific community of the long-run future

But worries about the eventuation of the future led him to abandon this idea and move on to:


X = the scientific community in its ideal formation

And this step put Peirce into proximity with the later Josiah Royce, who thought that

X = the Absolute (in its communal form)

would work out. But of course the problem is that we actually live in the real and not the ideal world. For those confined to the mundane realm of the actual, those idealizations do not afford much help. It has to be acknowledged that with many matters—those of scientific futurity emphatically included—an assured knowledge of the truth (the whole truth and nothing but the truth) is inaccessible to finite minds. Those who—like the Euro-Idealists (German and British alike)—were to equate truth and knowledge have no alternative to following Josiah Royce into postulating an Absolute or Ideal Mind able to go where finite minds cannot reach. Granted, the idea of “naturalizing” the Absolute by reconceptualizing it as the work of finite minds in their aggregate totality is ingenious. But its visionary and unrealizable nature makes it an ultimately impracticable and futile expedient. We simply have to concede that Display 1’s ladder of objectivity cannot be descended as deeply as we might like. The faith we lack in the present seems unavailable for the future also.

11. SCIENCE AND REALITY

We are now in a position to place into clearer relief one of the really big questions of philosophy: How close a relationship can we reasonably claim to exist between the answers we give to our factual questions at the level of scientific generality and precision and the reality they purport to depict? Scientific realism is the doctrine that science describes the real world: that the world actually is as science takes it to be, and that its furnishings are as science envisages them to be.12 But is such a “scientific realism” a tenable position? It is quite clear that it is not.


There is clearly insufficient warrant for and little plausibility to the claim that the world indeed is as our science claims it to be and that our science is correct science and offers the definitive “last word” on the issues. We really cannot reasonably suppose that science as it now stands affords the real truth as regards its creatures-of-theory. One of the clearest lessons of the history of science is that where scientific knowledge is concerned, further discovery does not just supplement but generally emends the bearing of our prior information. Accordingly, we have little alternative but to take the humbling view that the incompleteness of our purported knowledge about the world entails its potential incorrectness as well. It is now a matter not simply of gaps in the structure of our knowledge—or errors of omission. There is no realistic alternative but to suppose that we face a situation of real flaws as well—or errors of commission. This aspect of the matter endows incompleteness with an import far graver than meets the eye on first view.13

Realism equates the paraphernalia of natural science with the domain of what actually exists. But this equation would work only if science, as it stands, has actually “got it right.” And this is something we are surely not inclined—and certainly not entitled—to claim. We must recognize that the deliverances of science are bound to a methodology of theoretical triangulation from the data, which binds them inseparably to the “state-of-the-art” of technological sophistication in data acquisition and handling. As far as the actual condition of nature is concerned, science merely purports but does not achieve: it may provide us with the very best achievable estimate, but all the same this is still an estimate. The step of reification is always to be taken qualifiedly, subject to a mental reservation of presumptive revisability. We do and must recognize that we cannot blithely equate our frontier scientific theories with the truth. We do and must realize that the affirmations of science are inherently fallible and that we can only “accept” them provisionally, subject to a clear realization that they may need to be corrected or even abandoned.


12. RAMIFICATIONS OF IMPREDICTABILITY

One further implication of these deliberations deserves comment. Man (Homo sapiens) is an intelligent being whose actions in the world are guided by his beliefs—by what he takes to be knowledge in matters of fact. But insofar as the future development of human knowledge is impredictable, so this will also be the case with respect to human doings and dealings, so that human actions and affairs will also be in substantial measure impredictable. And thus insofar as man is an integral and interactive part of nature, so there will be a sector of the natural world that will be impredictable as well. In relation to the cultivation of knowledge, the chance and contingency that afflicts human affairs is bound to involve some part of nature as well. To be sure, man and his dealings are small potatoes in the cosmic scheme of things. But nevertheless we are a part of the world’s overall drama, and here the role of our thought cannot simply be set at naught. And to this extent idealism, too, must be conceded its place.14

REFERENCES

Adams, Marilyn McCord, William Ockham, vol. II (Notre Dame, Ind.: University of Notre Dame Press, 1987).

Anonymous, “The Future as Suggested by Developments of the Past Seventy-Five Years,” Scientific American, vol. 123 (1920), p. 321.

Badash, Lawrence, “The Completeness of Nineteenth-Century Science,” Isis, vol. 63 (1972), pp. 48–58.

Feynman, Richard, The Character of Physical Law (Cambridge, Mass.: MIT Press, 1965).

Harré, Rom, Principles of Scientific Thinking (Chicago: University of Chicago Press, 1970).

Hawking, S. W., “Is the End in Sight for Theoretical Physics?” Physics Bulletin, vol. 32 (1981), pp. 15–17.

Jeffrey, Eber, “Nothing Left to Invent,” Journal of the Patent Office Society, vol. 22 (July 1940), pp. 479–481.

Kuhn, Thomas, The Structure of Scientific Revolutions, 2nd ed. (Chicago: University of Chicago Press, 1970).

McKinnon, E. (ed.), The Problem of Scientific Realism (New York: Appleton-Century-Crofts, 1972).

Peirce, C. S., Collected Papers, Vol. V, ed. by C. Hartshorne and P. Weiss (Cambridge, Mass.: Harvard University Press, 1934).

Rescher, Nicholas, Cognitive Systematization (Oxford: Blackwell, 1979).


———, Kant and the Reach of Reason: Studies in Kant’s Theory of Rational Systematization (Cambridge: Cambridge University Press, 2000).

———, Methodological Pragmatism (Oxford: Blackwell, 1977).

———, Scientific Progress (Oxford: Blackwell, 1979).

Sellars, Wilfrid, Science, Perception and Reality (London: Humanities Press, 1963).

Stent, Gunther, The Coming of the Golden Age (Garden City, N.Y.: Natural History Press, 1969).

Suppe, Frederick (ed.), The Structure of Scientific Theories, 2nd ed. (Urbana: University of Illinois Press, 1977).

Whitehead, A. N., as cited in John Ziman, Reliable Knowledge (Cambridge: Cambridge University Press, 1969), pp. 142–143.

NOTES

1

That contingent future developments are by nature cognitively intractable, even for God, was a favored theme among the medieval scholastics. On this issue see Marilyn McCord Adams, William Ockham, vol. II (Notre Dame, Ind.: University of Notre Dame Press, 1987), chap. 27.

2

On this theme see the author’s Kant and the Reach of Reason: Studies in Kant’s Theory of Rational Systematization (Cambridge: Cambridge University Press, 2000).

3

As one commentator has wisely written: “But prediction in the field of pure science is another matter. The scientist sets forth over an uncharted sea and the scribe, left behind on the dock is asked what he may find at the other side of the waters. If the scribe knew, the scientist would not have to make his voyage” (Anonymous, “The Future as Suggested by Developments of the Past Seventy-Five Years,” Scientific American, vol. 123 [1920], p. 321).

4

Quoted in Daedalus, vol. 107 (1978), p. 24.

5

A. N. Whitehead, as cited in John Ziman, Reliable Knowledge (Cambridge: Cambridge University Press, 1969), pp. 142–143.

6

See Thomas Kuhn, The Structure of Scientific Revolutions, 2nd ed. (Chicago: University of Chicago Press, 1970), for an interesting development of the normal/revolutionary distinction.

7

C. S. Peirce, Collected Papers, Vol. V, ed. by C. Hartshorne and P. Weiss (Cambridge, Mass.: Harvard University Press, 1934), sect. 7.144. See also Peirce’s important 1898 paper on “Methods for Attaining Truth,” in ibid., sects. 5.574 ff.


8

The author’s Cognitive Systematization (Oxford: Blackwell, 1979) deals with these matters.

9

Note that this is independent of the question “Would we ever want to do so?” Do we ever want to answer all those predictive questions about ourselves and our environment, or are we more comfortable in the condition in which “ignorance is bliss”?

10

This sentiment was abroad among physicists of the fin de siècle era of 1890–1900. (See Lawrence Badash, “The Completeness of Nineteenth-Century Science,” Isis, vol. 63 [1972], pp. 48–58.) And such sentiments are coming back into fashion today. See Richard Feynman, The Character of Physical Law (Cambridge, Mass.: MIT Press, 1965), p. 172. See also Gunther Stent, The Coming of the Golden Age (Garden City, N.Y.: Natural History Press, 1969); and S. W. Hawking, “Is the End in Sight for Theoretical Physics?” Physics Bulletin, vol. 32 (1981), pp. 15–17.

11

See Eber Jeffrey, “Nothing Left to Invent,” Journal of the Patent Office Society, vol. 22 (July 1940), pp. 479–481.

12

For some classical discussions of scientific realism, see Wilfrid Sellars, Science, Perception and Reality (London: Humanities Press, 1963); E. McKinnon (ed.), The Problem of Scientific Realism (New York: Appleton-Century-Crofts, 1972); Rom Harré, Principles of Scientific Thinking (Chicago: University of Chicago Press, 1970); and Frederick Suppe (ed.), The Structure of Scientific Theories, 2nd ed. (Urbana: University of Illinois Press, 1977).

13

Some of the issues of this discussion are developed at greater length in the author’s Methodological Pragmatism (Oxford: Blackwell, 1977), Scientific Progress (Oxford: Blackwell, 1979), and Cognitive Systematization (Oxford: Blackwell, 1979).

14

This chapter was originally published in Mind and Society, vol. 11 (2012).

Chapter 4

DIMINISHING RETURNS
(In Factual Inquiry)

Not every insignificant smidgeon of information qualifies for the proud title of knowledge. After all, the person whose entire body of information consists of disconnected trivia really knows virtually nothing. Knowledge is not the mere aggregation of information, the brute accumulation of facts. It is a matter of significant extension of what we already know, of informative additions to our knowledge, of the accession of important information. As a simple illustration, let us suppose an object-descriptive color taxonomy—for the sake of example, an over-simple one based merely on Blue, Red, and Other. Then that single item of knowledge represented by “knowing the color” of an object—viz. that it is red—is bound up with many different items of (correct) information on the subject (that it is not Blue, is rather similar to some shades of Other, etc.). As such information proliferates, we confront a situation of redundancy and diminished productiveness. Any knowable fact is always potentially surrounded by a massive penumbral cloud of relevant information. And as our information grows to be ever more extensive, those really significant facts become more difficult to discern. Cognitive progress brings ever greater complexity in its wake. Its ongoing development involves us in ever more fine-grained informative detail. Even as “fleas have ever-smaller fleas to bite ’em,” so with issues settled at one level of generality there are ever more deeply subordinated—and thus smaller-scaled—questions that are parasitically dependent upon earlier ones. But those increasingly fine-geared distinctions and subtler considerations provide an ever smaller informative yield. And so while a great mass of information can be exfoliated about any sort of thing (be it a tree, a tool, or a person), the really salient and significant knowledge of the item in point is generally something that comes compactly at the outset.


Our knowledge certainly increases with the addition of information, but at a far less than proportional rate. In fact, the increase in knowledge (∆K) that a given mass of new information ∆I adds to what is already in hand is inversely proportional to the size of the body of information we already have (I):

∆K = ∆I / I

On this basis it transpires that:

K = ∫ dI / I ≈ log I

Knowledge increases not with the volume of information itself, but only in proportion to its logarithm. Initially a sizable proportion of the available information is high grade—but as we press further, this proportion of what is cognitively significant gets ever smaller. To double knowledge we must quadruple information. As science progresses, the important discoveries that represent real increases in knowledge are surrounded by an ever vaster penumbra of mere items of information. (The mathematical literature of the day yields an annual crop of over 200,000 new theorems.1)

Now if we make the natural supposition that the accession of new information is proportional to the investment of resources expended in its acquisition, so that I ≈ R, then we have it that the body of achieved knowledge is aligned to the logarithm of the resources expended for its acquisition:

K ≈ log R

And this Law of Logarithmic Returns (as it may be called) means that the growth of knowledge is subject to the condition that a stable—temporally linear—growth of knowledge requires an exponential increase in the resources required for its acquisition. These purely theoretical deliberations are substantiated by a considerable body of empirical findings.2 And they mean that as long as the accession of further information requires the investment of a given amount of resources (in time, effort, and resource allocation), there will be ongoingly diminishing returns of acquisition for further such investment. (For example, a National Science Foundation study concludes that the same resources that produced 100 publications [in top scientific journals] in 2001 would have produced 129 publications in 1990.3)
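For readers who want the integration behind this law spelled out, the following is a minimal sketch in LaTeX notation. It assumes, as the text leaves tacit, that the proportionality constant is one and that I may be treated as a continuous quantity:

\[
\frac{dK}{dI} \;=\; \frac{1}{I}
\qquad\Longrightarrow\qquad
K \;=\; \int \frac{dI}{I} \;=\; \ln I + C \;\approx\; \log I .
\]

On these assumptions, substituting I ≈ R yields K ≈ log R directly; and since a logarithm grows by a fixed increment only when its argument is multiplied by a fixed factor, a steady linear growth of K over time requires an exponentially growing investment of R. (The base of the logarithm and the constant of integration affect only scale and offset, not this growth behavior.)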

The implications for cognitive progress of this disparity between knowledge and mere information are not difficult to discern. Nature imposes increasing resistance barriers to intellectual as well as to physical penetration. Consider the analogy of extracting air to create a vacuum. The first 90% comes out rather easily. The next 9% is effectively as difficult to extract as all that went before. The next 0.9% is proportionally just as difficult. And so on. Each successive order-of-magnitude step involves a massive cost for lesser progress; each successive fixed-size investment of effort yields a substantially diminished return.

The Law of Logarithmic Returns presents us with an epistemological analogue of the old Weber-Fechner law of psychophysics, asserting that inputs of geometrically increasing magnitude are required to yield perceptual outputs of arithmetically increasing intensity. For the presently contemplated law envisions a parallelism of perception and conception in this regard. It stipulates that (informational) inputs of geometrically increasing magnitude are needed to provide for (cognitive and thus) conceptual outputs of arithmetically increasing magnitude. And so this Law of Logarithmic Returns, as a principle of the realm of conception, parallels the Weber-Fechner Law in the epistemics of perception. In searching for meaningful patterns, the ongoing proliferation of data-points makes contributions of rapidly diminishing value.4

This general situation is reflected in Max Planck’s appraisal of the problems of scientific progress. He wrote that “with every advance [in science] the difficulty of the task is increased; ever larger demands are made on the achievements of researchers, and the need for a suitable division of labor becomes more pressing.”5


The Law of Logarithmic Returns would at once characterize and explain this circumstance of what can be termed Planck’s Principle of Increasing Effort, to the effect that substantial findings are easier to come by in the earlier phase of a new discipline and become ever more difficult in the natural course of progress. As science progresses within any of its established branches, there is a marked increase in the over-all resource-cost of realizing scientific findings of a given level of intrinsic significance (by essentially absolutistic standards of importance).6 At first one can skim the cream, so to speak, and take the “easy pickings”; later achievements of comparable significance require ever deeper forays into complexity and call for an ever-increasing investment of effort and material resources. And it is important to realize that this cost-increase is not because latter-day workers are doing better science, but simply because it is harder to achieve the same level of science: one must dig deeper or search wider to find more of the same kind of thing as before. And this at once explains a change in the structure of scientific work that has frequently been noted: first-rate results in science nowadays come less and less from the efforts of isolated workers and more and more from cooperative efforts in the great laboratories and research institutes.7

The idea that science is not only subject to a principle of escalating costs but to a law of diminishing returns as well is due to the nineteenth-century American philosopher of science Charles Sanders Peirce (1839–1914). In his pioneering 1878 essay on “Economy of Research” Peirce put the issue in the following terms:

We thus see that when an investigation is commenced, after the initial expenses are once paid, at little cost we improve our knowledge, and improvement then is especially valuable; but as the investigation goes on, additions to our knowledge cost more and more, and, at the same time, are of less and less worth. All the sciences exhibit the same phenomenon, and so does the course of life. At first we learn very easily, and the interest of experience is very great; but it becomes harder and harder, and less and less worthwhile … (Collected Papers, Vol. VII [Cambridge, Mass., 1958], sect. 7.144.)

Here, as elsewhere, Peirce made manifest an intellect of striking insightfulness. In regard to knowledge acquisition in general, and especially in natural science, diminishing returns on effort are a fact of life.8


REFERENCES

Glass, Bentley, “Milestones and Rates of Growth in the Development of Biology,” The Quarterly Review of Biology, vol. 54 (March 1979), pp. 31–53.

Javitz, H., et al., U.S. Academic Scientific Publishing: Working Paper SRS 11-201 (Arlington, VA: National Science Foundation, Division of Science Resource Statistics, 2010).

Price, Derek J., Little Science, Big Science (New York: Columbia University Press, 1963).

Price, Derek J., Science Since Babylon, 2nd ed. (New Haven, CT: Yale University Press, 1975).

Rescher, Nicholas, Scientific Progress (Oxford: Basil Blackwell, 1978).

Shockley, William, “On the Statistics of Productivity in Research,” Proceedings of the Institute of Radio Engineers, vol. 45 (1957), pp. 279–90.

Wagner-Döbler, Roland, “Rescher’s Principle of Decreasing Marginal Returns for Scientific Research,” Scientometrics, vol. 50 (2001), pp. 419–36.

Wible, James R., Economics of Science (London and New York: Routledge, 1998).

Wible, James R., “Rescher on Economy of Research,” in P. Mirowski and E.-M. Sent (eds.), Science Bought and Sold (Chicago: University of Chicago Press, 2002), pp. 209–14.


NOTES

1

See Stanislaw M. Ulam, Adventures of a Mathematician (New York: Scribner, 1976).

2

For substantiation see the References listed below.

3

See Javitz et al. (2012).

4

On the relevance of the Weber-Fechner law to scientometrics see Derek Price, Little Science, Big Science (op. cit.), pp. 50–51.

5

Max Planck, Vorträge und Erinnerungen, 5th ed. (Stuttgart, 1949), p. 376; italics added. Shrewd insights seldom go unanticipated, so it is not surprising that other theorists should be able to contest claims to Planck’s priority here. C. S. Peirce is particularly noteworthy in this connection.

6

The following passage offers a clear token of the operation of this principle specifically with respect to chemistry: “Over the past ten years the expenditures for basic chemical research in universities have increased at a rate of about 15 per cent per annum; much of the increase has gone for superior instrumentation, [and] for the staff needed to service such instruments. … Because of the expansion in research opportunities, the increased cost of the instrumentation required to capitalize on these opportunities, and the more highly skilled supporting personnel needed for the solution of more difficult problems, the cost of each individual research problem in chemistry is rising rapidly.” (F. H. Westheimer et al., Chemistry: Opportunities and Needs [Washington, D.C.: National Academy of Sciences/National Research Council, 1965], p. 17.)

7

The talented amateur has virtually been driven out of science. In 1881 the Royal Society included many fellows in this category (with Darwin, Joule, and Spottiswoode among the more distinguished of them). Today there are no amateurs. See D. S. C. Cardwell, “The Professional Society” in Norman Kaplan (ed.), Science and Society (Chicago, 1965), pp. 86–91 (see p. 87).

8

This chapter was written as a contribution to The Internet Encyclopedia of Philosophy.

Chapter 5

PRACTICAL VS. THEORETICAL REASON

1. THE DISTINCTION

Reason enjoins us to do what appears optimal—what is for the best in the circumstances as we discernibly confront them. However, a sharp distinction can be—and standardly is—drawn between theoretical and practical reason. The former is directed at belief—at what is to be accepted as true; the latter is directed at action—at what is to be done. Theoretical or alethic reasoning relates specifically to what we are to accept as true: its goal is to establish factual claims as appropriate answers to our factual questions. By contrast, practical reasoning relates to action, to answering our procedural questions about appropriate doings and dealings. And these two modes of reasoning are closely interrelated. For action is grounded in belief. (Is there a bomb under the bed?) And our beliefs are generally the fruit of actions—of processes of inquiry and investigation. All the same, despite their interconnectedness within the overall manifold of reason, practical and theoretical reasoning are distinctive sorts of enterprises subject to distinctive sorts of rules and procedures. And here the issue of purposiveness becomes critical.

2. OUTCOME DETERMINATION

In theoretical deliberations, the appropriately determinative factor is that of confirmative grounding. Here the proper rationale largely pivots on the issue of substantiating evidence. By contrast, in practical matters the appropriately determinative factor is that of conduciveness to goal realization. Here the proper rationale pivots on the issue of aim-achieving efficacy. In theoretical deliberation factuality is pivotal; in practical deliberation it is purposive efficacy.


The salient difference between the two spheres lies in the circumstance that there are going to be non-evidential pathways to the validation of practical conclusions. To validate an answer to a practical “What to do?” question we need not establish that an appropriate answer of the format “Do A” will actually lead to realization of the correlative goal. Rather, it will suffice to show either

• that doing A affords at least as good a chance of realizing this goal as any of the available alternative ways of proceeding (A represents the most promising available option), or

• that doing A will lead to realizing the desired goal if any mode of procedure can do so: that only by doing A can we hope to realize our goal, even when the chances of success are altogether minimal. In the circumstances, it is A or nothing. (Promising or not, A is the only game in town.)

In this light it is clear that a negativistic finding of this-or-nothing-better can prove sufficient for resolving a practical issue: with practical conclusions the absence of negative con-considerations can prove conclusive. However, with resolving a factual question this clearly is not so. Theoretical conclusions demand positive pro-considerations. The two realms are thus critically different.

The prospect of this-or-nothing-better argumentation in the nature of practical reasoning can be approached from two directions, namely the ontological:

• the factual/ontological aspect of there in fact exists nothing better

and the epistemological:

• the cognitive/epistemological aspect of we know of nothing better: as far as we can tell, this resolution is as good as any.


Now as far as the rationality of the matter goes, this second approach has to be taken in stride. It is, after all, unreasonable to ask someone to do better than the best they can. The injunction “Don’t just tell me what you think is true, tell me what actually is so!” is one that is in principle incapable of implementation. With creatures like ourselves that are of limited capacity and whose cognitive range is inevitably restricted, assurance often cannot be attained that “doing the best that one can as best one sees it” will produce a rationally optimal resolution—that those apparent optima we can attain will actually yield real optima. But this consideration does not preclude rational action. In practical matters we have no real alternative to accepting the best that we can manage in the circumstances as being good enough. In practical contexts it can be decisive that, as far as we can see, no other alternative proceeding affords a better promise of success. However, this sort of ignorance-based proceeding is simply inappropriate in theoretical matters. Here our inability to establish that something is not so affords no basis for any positive resolution; the absence of sufficient counter-indications never affords a basis for thesis-acceptance. But when it comes to practical courses of action it can suffice in suitable circumstances to validate action. And so we have to let practical reasoning go its own distinctive way.

3. THE TURN TO METHODS

One particularly important sector of practical reasoning will concern itself with methodological issues—with methods, procedures, strategies. Its issues are not the specific decisions regarding what to do here and how, but the generic procedural processes for determining them. This area of concern can, however, be approached via the consideration that the adoption of a generic method of procedure is itself one of the most far-reaching actions we can take. On this basis the standard ground-rules of practical reasoning apply here as well. Methods, after all, are inherently goal-oriented. And so, with goal-realizing efficacy as the pivotal consideration, various rather different sorts of good reasons can constitute a cogent rationale for adopting a practical policy or method or procedure:


1. No choice: this or nothing. There just are no practicable alternatives. The envisioned procedure is the only game in town: there just is no other prospect in sight.

2. Dominance: there are alternatives, but it can be shown that one particular one among them will succeed in goal attainment if any of them can do so.

3. Theoretical optimality: there is good reason via general principles to see the policy as the most promisingly effective among the available alternatives.

4. Performative superiority: the empirical experience of trial and error yields a track record of success superior to that of the available alternatives.

5. Faute de mieux: there are no theoretical or empirical grounds for thinking that some other alternative will prove superior: by all visible indications the envisioned alternative is as good as any.

Given that methods are purposive instrumentalities, it is only to be expected that the adoption of a method can and should also be validated in the functional-efficiency terms of assessment that are operative throughout the practical domain. And this situation regarding methods-in-general also holds specifically for cognitive methods and procedures for answering our factual questions.

4. DECISIONAL METHODISM

The recourse to methods as the focus of appraisal opens up an entirely different and distinct route to the pragmatic validation of actions—one that abandons the issue of immediate practical utility in favor of method-mediated appropriateness.


Here we do not validate an action-choice directly, but rather by a course of reasoning that proceeds obliquely and at arm’s length via methodology, as per the following line of reasoning:

• In situations of the sort at hand the appropriate resolution-process is to proceed X-wise.

• Proceeding X-wise indicates that A is to be done in the prevailing circumstances.

∴ Doing A is the appropriate thing to do here-and-now.

The primacy of practical reason lies in the consideration that (1) we neither do nor can have a categorically certain assurance that in following the guidance of theoretical reason we shall achieve its ultimate objectives of arriving at the actual truth of things, while nevertheless (2) by all discernible indications following this guidance is the most and best that we can manage in the circumstances. We can never rest complacently confident that in following reason’s directions we are not frustrating the very purposes for whose sake we are calling upon the guidance of reason. We have to recognize the “fact of life” that it is rationally advisable to do the best we can, while nevertheless realizing all the while that it may prove to be defective and even incorrect.

The following three points are crucial in this regard. (1) It is a matter of life and death for us to live in a setting where we ourselves are in large measure predictable for others, because only on this basis of mutual predictability can we achieve conditions essential to our own welfare. (2) The easiest way to become predictable for others is to act in such a way that they can explain, understand, and anticipate one’s actions on the basis of the question “What would I do if I were in his shoes?” (3) In this regard the “apparent best” is the obvious choice, not only because of its (admittedly loose) linkage to optimality per se, but also because of its “saliency.” The quest for “the best available” leads one to fix on that alternative at which others too could be expected to arrive in the circumstances—so that they can also understand one’s choices.


5. INTERCONNECTION

Theoretical and practical reasoning are substantially interconnected. Thus the circumstance that procedure P is the most (or only) effective of the available ways to reach the goal G is bound to involve factual considerations. And in many factual matters the practical consideration that it is by accepting the factual thesis T that we most effectively serve the interests and aims of inquiry can prove decisive. For, after all, empirical inquiry is itself a procedure devised and operated with certain purposes in view—the removal of ignorance, the resolution of doubt, and the guidance of action prominent among them.

6. THE PROMINENCE OF PRACTICE

And so the pragmatic aspect of the matter has yet another side. The pivotal role of rationality as a coordination principle must also be emphasized. Adequate cultivation of our individual interests requires a coordination of effort with others and imposes the need for cooperation and collaboration.1 But this is achievable only if we “understand” one another. And here rationality becomes critical. It is a crucial resource for mutual understanding, for rendering people comprehensible to one another, so as to make effective communication and cooperation possible. And so in the end (in the order of justification) practical reason enjoys the priority, because our reliance on reason—even theoretical reason—is ultimately justified by pragmatic considerations. Philosophers of pragmatic inclination have always stressed the ultimate inadequacy of any strictly theoretical defense of cognitive rationality. And their instincts in this regard are surely right. One cannot marshal an ultimately satisfactory defense of rational cognition by an appeal that proceeds wholly on its own grounds. In providing a viable justification the time must come for stepping outside the whole cognitive/theoretical sphere and seeking some extra-cognitive support for our cognitive proceedings. It is at just this stage that a pragmatic appeal to the condition of effective action properly comes into operation.


NOTES

1

On this theme see R. Axelrod and W. D. Hamilton, “The Evolution of Cooperation,” Science, vol. 211 (1981), pp. 1390–96.

Chapter 6

ON EVALUATING SCIENTIFIC THEORIES

For a newly proposed scientific theory to qualify for replacing an earlier one, it must either be better evidentiated by the data or be more successful than its predecessors at realizing one or another of the salient aims of scientific inquiry: explanation, prediction, and control over phenomena. And for one of two rival theories to qualify for acceptance rather than the other, it must exhibit superiority along the same lines. In this way the progressive nature of science calls for the comparative assessment of the scientific merit of theories. But how is such an assessment to be made to work?

In general terms the answer here is pretty straightforward. One scientific theory can be superior to another in various different critical respects. An inventory of these aspects of superior performance would include the following:

I. EVIDENTIATION

Being spoken for by a larger or a more detailed (accurate) body of empirical data.

II. EXPLANATION

1. explaining more phenomena

2. affording a better (more simple or more natural or more accurate) explanation for the same phenomena

3. explaining the other theory itself (perhaps along with others)

4. fitting more harmoniously into the general fabric of explanatory understanding


III. PREDICTION

1. predicting more—i.e., “new”—phenomena

2. affording more accurate predictions regarding the same phenomena

IV. APPLICATION AND CONTROL

1. providing guidance for effective manipulation and control across a wider range of action

2. providing for more accurate and detailed control

It seems to be a fact of life that theoreticians will not give up on a scientific theory simply because of its failings in this or that respect: as in politics, you cannot defeat something with nothing, and “explanations” can always be devised to excuse shortcomings. Rather, the rule seems to be that a scientific theory will be abandoned only when another that is visibly superior to it comes along. It is this consideration that endows the factor of comparative superiority with critical significance.

The preceding considerations immediately provoke the question: “Is a comparatively superior theory thereby truer than its inferior?” This question poses the thorny issue of comparative truth. For starters, it is clear that superiority in some respect does not establish a theory as true. Nor does it even establish it as “truer” than its rivals in the sense of proving to be false in a smaller range of circumstances. All that it means is faring better at some of the things that theories are supposed to do.

The problem here is clearly one of circularity and question-begging. In scientific matters we cannot reasonably employ “establishing the truth” or “approximating the truth” as test-criteria for the acceptability of theories, simply because those theories themselves are our only possible avenues for getting at what the truth of the matter actually is.


The merit of “getting more of or getting nearer to the actual truth” is something we cannot possibly monitor directly, seeing that we have no way of judging success at “reaching the truth in scientific matters” independently of our theorizing. Even the idea that the deliverances of a superior theory are more likely to be true is deeply problematic, because there is no effective way of bringing the idea of likelihood into operation here.

But given that we cannot claim that superior theories are truer—that they do actually or probably deliver more of the truth into our hands—just wherein can their stronger claims to acceptance really lie? The answer is straightforward: it lies in their very superiority, their capacity to function more effectively in serving those aims and purposes for whose sake the scientific project has been instituted as a human pursuit: explanation, prediction, and control over nature. The issue of truth—of whether superior theories are “more likely to be true” or “closer to the truth” in some other sense—is deeply problematic. But that superior theories are better qualified to serve as estimates of the truth, and that their adoption as such is not only rationally authorized but rationally mandated, is nevertheless clear. We are entitled to regard the deliverances of a superior theory as affording us a better estimate of the truth—one whose acceptance we ought to prioritize over its rival. However, the superiority of those theories functions in the practical order, bearing on the rationality of acceptance-as-true, rather than in the factual order, bearing directly on the actual truth of things. The relationship of superior theories to “the truth” is something oblique rather than direct, established by considerations in the practical rather than the theoretical order of reasoning.

Chapter 7

AUTHORITY

1. INTRODUCTION

Alexis de Tocqueville sagely observed that:

A principle of authority must … always occur, under all circumstances, in some part or other of the moral and intellectual world … Thus the question is not to know whether any intellectual authority exists in an age of democracy, but simply by what standard it is to be measured.1

To be sure, authority is usually considered only in its socio-political dimension of communal authority, and it is generally viewed in its coercive aspect, with a view to the power of some to control the doings of others. But this sort of thing is not the main subject of present concern. Rather, the sort of authority that will be at the forefront here is that which is at issue when we speak of someone as being a recognized authority in some field of endeavor—the kind of authority that is at work when we acknowledge someone as an expert with regard to some sector of thought and action. It occasions surprise that this sort of authority is an unduly underdeveloped topic. Important though it is, alike in ordinary life, in the theory of knowledge, and in ecclesiastical affairs, there is a dearth of serious study of the topic. For example, philosophical handbooks and encyclopedias—even those that are themselves deemed authoritative—are generally silent on the subject.2 All the same, authority is a complex and many-sidedly significant issue that deserves closer examination.

2. EPISTEMIC VS. PRACTICAL AUTHORITY

Epistemic or cognitive authority is a matter of credibility with respect to claims regarding matters of fact.


We acknowledge someone as an authority insofar as we are prepared to accept what they say. By contrast, practical or pragmatic authority is at issue in regard to action: it is a matter of guidance not in relation to what we are to accept or believe, but in relation to what we are to do. There are, accordingly, two prime forms of authority, the cognitive and the practical, the former relating to information and the latter to action.

Practical authority can be either mandatory or advisory: it can be exercised either persuasively or coercively. And it can arise both with the question “What must I do?” and the question “What should I do?” But only mandatory authority can be delegated (e.g., by the captain of a ship to his first mate). With advisory authority, authoritativeness must be acknowledged by the recipient; it cannot simply be transferred by someone else’s delegation. Cognitive (epistemic) authoritativeness likewise has to be acknowledged freely. Unlike practical authority, it cannot be imposed. We thus acknowledge some person or source as a cognitive authority when we incline to acknowledge their informative claims as true.

Now there are basically two sorts of epistemic issues: issues of fact and issues of interpretation. “What did George Washington’s Farewell Address say and where did he deliver it?” is a purely factual issue. “What was the objective of Washington’s Farewell Address and what effect did it have on American policy?” involves a good deal of interpretation. Being authoritative with respect to facts is a relatively straightforward and objective matter. Being authoritative on matters of interpretation is something more complex that turns on factors not just of information but of judgmental wisdom.

The trustworthiness of science—and of course of information sources at large—generally requires a track record. But not always. For in the end, one must give trust not by evidentiation but by presumption: by letting the data of some source count as acceptable provisionally, until something conflicting comes to light. In epistemic matters one must at some point give unevidentiated trust—at least provisionally and presumptively—because otherwise we would embark on an infinite regress that would render us unable ever to evidentiate anything.

The scientific community is itself the prime arbiter of cognitive authority. Peer acknowledgment by fellow experts is the crux here. But practical authority is more democratic.


It is generally established through public acknowledgment at large. The honorific of being a “recognized authority” tends to stay confined within the several bodies and specialties of science, but is common there. In practical matters the description is something extremely rare. However, authority is not something that operates across the board. It is logically and theoretically limited in scope.

3. SCIENTIFIC AUTHORITY AND ITS LIMITS

Scientific authority has two prime aspects. First there is the issue of authority IN science. This pivots on the expertise of individuals. But there is also the issue of the authority OF science as an enterprise. This is a matter of its capacity to resolve adequately the questions that intrigue us and the problems that confront us. Either way, the authority of science is immense. It is grounded in the splendid success of the enterprise in matters of explanation, prediction, and technological application. There is no (reasonable) way to deny the epistemic authoritativeness of science in its own sphere.

But nevertheless, it is a decidedly limited authority—ardent enthusiasts to the contrary notwithstanding. For science as a human enterprise addresses issues of what is and can be in nature—of actual and potential fact. However, issues of value—not of what the facts are, but what they ideally should be—lie outside its scope and province. Accordingly science is effectively authoritative in issues of means—of how to go about getting ourselves from here to there. But matters of ends and goals—of where it is that we should endeavor to go with our efforts in this world—are questions on which the scientist speaks with no more authority than anyone else.

4. THE VALIDATION FOR ACKNOWLEDGING AUTHORITY

The acknowledgment of cognitive authority must be earned. And the rationale for acknowledging authority in a given domain is substantially uniform—it is a matter of the beneficiary’s demonstrated competence in facilitating the realization of the ends of the particular domain at issue. With cognitive authority there must be demonstrated evidence of a capacity to provide credible answers to our questions.


With practical authority there must analogously be a capacity to afford effective guidance. Unfortunately, in matters of credibility authorities are all too often pitted against authorities. (Think here of Raphael’s famous painting of “The School of Athens.”) How, then, is one to proceed?

In practical matters controlling authority can come to an individual simply by commission—by being “put in charge.” But advisory authority must be earned via trust. And authority in cognitive matters has to be earned. Acknowledging someone’s epistemic authority is a matter of trust. And with trust one risks error, misinformation, deceit. In conceding (epistemic) authority to someone I risk that they may be “talking through their hat.” But in conceding practical authority to someone, I risk not just being wrong but actual damage, injury, misfortune to myself and others. When I trust someone with respect to a practical issue, I entrust to them some aspect of my (or of somebody’s) interests, and so risk not just error but injury.

So why do people ever accept the authority of some person or source—why do they concede it to some other person or agency? The key here is the inescapable fact of the limitedness of our personal capabilities. We simply cannot manage in this world all by ourselves. Neither in matters of cognitive know-that nor in those of practical know-how are we humans sufficiently competent as individuals. In both cases alike we concede authority to the experts because we acknowledge them to be more competent than ourselves. We resort to them because we believe them to afford a more promising path to issue-resolution than the one we would contrive on our own. All this is simply a matter of common sense. Division of labor is inevitable here and means that we must, much of the time, entrust our own proceedings at least partially to others.

However, the acknowledgement of authority is not an end in itself—it has a functional rationale. It is rationally warranted only when it conduces to some significant good—when it serves a positive role in facilitating the realization of a better quality of life, enabling its adherents to conduct their affairs more productively and to live as wiser, happier, and better people.

But is relying on the authority of others not simply taking “the easy way out”?


Is someone who concedes the authoritativeness of another person or agency not simply shirking his responsibility? By no means! In these matters of decision, responsibility cannot be offloaded. It stands to the individual like his own shadow. The individual himself is always the responsible decider. It is he who acknowledges that authority, seeks its counsel, and adopts it on this occasion. The “just following advice” excuse is even less exculpatory of responsibility than is its cognate of “just following orders.” The fact is that in conceding authoritativeness to some individual or source we never leave responsibility behind. We are justified in acknowledging authority only where we ourselves have good ground for imputing authoritativeness.

But what can be the rationale of such a step? In the final analysis it is self-interest. For there is no point in ceding authority to someone for the guidance of one’s own actions unless one has good reason to believe that this source has one’s own best interest at heart. Conceding practical authority makes good sense only in the presence of substantial indications that acting on this source’s counsel will actually conduce to our best interests.

As best one can tell, the ultimate goal of human endeavor and aspiration here on earth is to make us—individually and collectively—into wiser, better, happier people. These correspond to three fundamental sectors of our condition: the cognitive, moral, and affective. And these in turn are correlative with knowledge, action, and value, the concerns of the three prime branches of traditional philosophizing, namely epistemology (“logic” as usually conceived), ethics, and value theory (axiology). Man’s overall well-being—eudaimonia, as Aristotle called it—is spanned by the factors of this range. As philosophers have stressed from antiquity onward, how we fare in regard to this trio of prime desiderata—i.e., in terms of wisdom, goodness, and happiness—provides the basis for rational endeavor. And the concession of authority is part and parcel of this project.

5. ECCLESIASTICAL AUTHORITY

Let us now turn to the issue of specifically ecclesiastical authority and begin at the beginning here. Where does ecclesiastical authority come from? And why is it needed?


Ecclesiastical authority roots in “the consent of the governed.” Here to be authoritative is to be accepted as such within a given sector of religious commitment. The endorsement of a faith community is the ultimate basis for ecclesiastical authority. But why is such authority needed? If a Church is to be more than a social fellowship of kindred spirits, it requires a coordinating manifold of doctrinal and behavioral principles, and at this point a stabilizing magisterium of shared commonality is called for: whether by gentle suasion or firm discipline, some sort of coordinative agency must be provided for. And just here lies the rationale of authority.

But just why is it that an individual should acknowledge the teaching authority (“magisterium”) of his or her particular religion—at any rate in those religions that lay claim thereto? The answer here lies in the fact that such acceptance is simply part and parcel of being a member of that particular religion. This is not the place to pursue this issue itself. (Why should one be a religious person—and indeed one of this or that particular faith?) The crucial point for present purposes is that the issue of relevant authoritativeness is automatically encompassed and resolved within this larger issue of enrollment in a religious tradition. To be sure, if I am to put my trust in a bank or in an encyclopedia—or in a Church—I must have good reason to think that they have at heart the best interests of people like me. And so, if I am to be rational about conceding to a religious community authority over myself in matters of faith and morals, there will have to be good grounds for thinking that its exponents and expositors have given hard and cogent thought to how matters can and should be taken to stand in the relevant range of issues with regard to people like myself.

But seeing that adopting a religion involves commitment taken “on faith” that goes beyond what rational inquiry (in its standard “scientific” form) can manage to validate, how can a rational person appropriately adhere to a religion? How can there be a cogent rationale for a faith whose doctrines encompass reason-transcending commitments? The answer lies in the consideration that factual claims are not the crux here. For religious commitment is not a matter of historically factual correctness so much as one of life-orienting efficacy, since the


sort of “belief” at issue in religion is at bottom a matter of life-orientation rather than historical information. After all, religious narratives are by and large not historical reports but parables. The story of the Good Samaritan is paradigmatic here. From the angle of its role in Christian belief, its historical accuracy is simply irrelevant. What it conveys is not historical reportage but an object-lesson for the conduct of life. And much of religious teaching is just like that: a resource of life-guidance rather than one of information. Just this is the crux of authority in relation to the “faith and morals” at issue with the putative authoritativeness of the Church. What is at issue looks not to historical factuality but to parabolic cogency—the ability to provide appropriate life-orientation for us—putting people on the right track. It is a matter of achieving appropriate life-goals, of realizing rational contentment (Aristotelian eudaimonia), of getting guidance in shaping a life one can look back on with rational contentment. What, after all, is it that conscientious parents want for their children? That they be happy and good! (Some will say rich, but that clearly is a desideratum only insofar as it will conduce to happiness!) And so, effectively, when one asks for expert guidance the issue of effectiveness will have to be addressed in these terms. And so, ultimately, the rationale for conceding authority will inhere in the consideration that in doing so we facilitate and foster the realization of those prime human desiderata. So two considerations will clearly be paramount here:

• Leading satisfying lives.

• Becoming good people.

And on this basis what religious authority properly seeks to provide is not historical information but direction for the conduct of life. Religion can thus be viewed in the light of a purposive venture. Insofar as it is based rationally (rather than just emotionally or simply on traditionary grounds), it is something we do for the sake of ends—making peace with our maker, our world, our fellows, and ourselves. And ample experience indicates that motivation to think and act


toward the good flourishes in a community of shared values. In this context it makes sense to see as authoritative those who—as best we can tell—are in a good position to offer us effective guidance towards such life-enhancing affiliations.

So why would a rational person subscribe to the authority of a church (an “organized religion”) in matters of faith and morals? Why would such an individual concede authority to those who speak or write on its behalf? Effectively for the same reason that one would concede authority in other practical matters that one deems it important to resolve, namely when (1) one recognizes one’s own limitations in forming a cogent resolution, (2) one has grounds for acknowledging the potential authority as thoughtful and well informed with respect to the issues, and finally (3) one has good reason to see this authority as well-intentioned. And it is clear that ecclesiastic authoritativeness can and should be appraised on this same basis. On Christian principles, the doctrinal and moral authority of the Church is based on biblical, revelatory, and rational considerations. The first two of these are evident. And the last roots in the fact that for the sake of communal unity and integrity there must be some agency. And so, the ceding of authority in matters of faith and morality is rationally appropriate where it serves effectively in the correlative range of human ends—is life-enhancing in serving to make us wiser, better, happier people. And this is as much so with ecclesiastical authority as with authority of any other kind.

But is ceding authority not a gateway to disaster? What of the imams who turn faithful devotees into suicide bombers? What of cults and their deluded and abused adherents? The point here is simply that like pretty much anything else, authority-concession is a resource that can be used and misused. The knife that cuts the bread can wound the innocent. The brick that forms the wall can smash the window. Authority too can be ceded reasonably or inappropriately. Here, as elsewhere, the possibility of abuse calls for sensible care with regard to such prospects, not for their abandonment.


6. WHICH ONE?

But just which religion are individuals to deem authoritative for themselves? Granted, there are always alternatives, and on casual thought it may seem plausible to think of them as being spread out before us as a matter of choice. But this is quite wrong. The fact of it is that in matters of religion, the issue of reasonable choice is in general not something people face prospectively by overtly deciding upon a religious affiliation. On the contrary, it is something they can and generally will do only retrospectively, in the wake of an already established commitment. And, perhaps ironically, the very fact that a commitment is already in place as a fait accompli itself forms a significant part of what constitutes a reason for continuing it. At this point William James’ classic distinction between live and dead options comes into play. Never—or virtually never—do people confront an open choice among alternative religions. For one thing, the realities of place and time provide limits. Homer could not have chosen to be a Buddhist. And cultural accessibility also comes into it. The Parisians of Napoleon’s day could hardly become Muslims. Once one has “seen the light” and adopted a religion, one cannot but take the view that there is “one true religion.” To do otherwise would be unserious. Yet to say this is not to say that there are not alternatives. But in such matters, they are blocked by personal background and disposition. Benjamin Disraeli could hardly have become a Mormon. Authoritativeness must be something underpinned by a basis of personal experience.

How will these present deliberations about authority apply to the Church of Choice? To exert ecclesiastical authority, an agency must secure from its catchment of co-religionists a fairly-earned recognition as a reliable guide in matters of religious faith and practice. Appropriate acknowledgment in matters of ecclesiastical authority is—and must be—a matter of free acceptance, just as is the case with cognitive authority. And if such acceptance is to be rationally warranted, then it has to be rooted in a cogent rationale. First there has to be a determination of thematic range. The Church makes no claims to authority in matters of chemistry, of numismatics,


or of Chinese literature, and there is no reason to attribute to it any authoritativeness in those matters. But things stand rather differently in matters of doctrine and works. These are issues to which the doctors and theologians of the Church have given careful, devoted, and serious attention for generations, and insofar as the teaching institutions of the Church have reached a significant consensus in these matters, it is only reasonable to acknowledge its teachings as reasonably based. The Catholic Church teaches that the pope is the ultimate authority in matters of faith and morals, and holds every Catholic to be obliged through his faith to accept this fact. It bases this position partly on grounds of revelation and tradition and partly on grounds of reason, in that coherent doctrine requires an ultimate arbiter. And it teaches that the doctrinal claims of the papacy are maximal in this regard. It is thus the Church’s position that in the existing circumstance the reasonable person is bound, by virtue of this very reasonableness, to see the matter in the Church’s way, not because the Church is the Church, but because the Church is seriously committed to being as rational about these issues as the nature of the case permits.

At bottom, then, there is a uniform basis for the acknowledgment of authority, to wit the beneficial result of such a step in facilitating a realization of the particular enterprise at issue. With cognitive authority this relates to the accession of information; with practical authority it relates to effective action; and with ecclesiastical authority it is a matter of achieving a life of spiritual contentment. In every dimension the crux is the realization of a significant benefit. But this functionally rational aspect of the matter is not the whole of it. Man does not live by reason alone. And there is no ineluctable necessity for religious commitment to require reason’s Seal of Approval. True, from the strictly rational point of view religion exists to serve the interests of life. But other factors are at work in the good life apart from reason, factors that can lead a person to undertake commitment to a particular mode of religiosity: family tradition, social solidarity, personal inclination, the impetus of one’s experience, and so on. Nevertheless, religious commitment, and with it the acknowledgement of ecclesiastical authority, is rational insofar as there can be brought to


bear the goal-oriented perspective of life-enhancement in the largest and most comprehensive sense of the idea.3

NOTES

1 Alexis de Tocqueville, Democracy in America, ed. by Thomas Bender (New York: Random House, 1982; Modern Library College edition), p. 299.

2 The only philosophical treatise on authority I know of is Was ist Autorität? by Joseph M. Bochenski (Freiburg im Breisgau; Basel; Wien: Herder, 1974). Curiously, seeing that its author is a priest, the book treats ecclesiastical authority in only a single rather perfunctory paragraph.

3 This chapter was originally published in The Blackwell Companion to Science and Christianity.

Chapter 8

COGNITIVE DIFFUSION

1. TWO MODES OF COGNITIVE DIFFUSION

Philosophical epistemology standardly concerns itself with the knowledge of individuals. The functioning of group knowledge and its nature as constructed through communication is usually left out of sight. But nevertheless it is an inherently important topic. The aim of this essay is to specify a vocabulary for depicting the conceptual structure of the process of cognitive diffusion—the spreading of knowledge across a multiplicity of knowers.

It is needful here to distinguish between conceptual and doctrinal diffusion. The former relates to creating a shared commonality of objects of concern—issues, ideas, concepts, problems, agenda-items. The latter relates to creating a shared commonality of opinions, doctrines, beliefs, thesis-acceptances. With the former, the issue is: Has one heard of it or not? With the latter, the question is: Does one accept (endorse) it or not? A pervasive uniformity of objects of concern is compatible with disunity of doctrine. (Think of the philosophical community in relation to the free will problem.) With thematic or ideational diffusion there is a dichotomy between those who are aware of an idea and those who are ignorant about it. And doctrinal diffusion is possible under two distinct headings: access and acceptance. Then too, among those who are aware of a thesis there is the trichotomy between those who accept it, those who reject it, and those who remain undecided. Access in general spreads far more rapidly than endorsement. Only some fraction of those who hear of some claim will accept it. This fraction very much depends (1) on what “it” is, and (2) also on how many others accept it.


2. COGNITIVE LANDSCAPES: SENDERS AND RECIPIENTS

Cognitive agents—or cognizers as we may call them—can be viewed as nodes in a network of communicative interconnection. And here there will be senders and receivers. And when what is received is also sent on, there will be transmitters. Communication takes various forms depending on whether or not a specific recipient is at issue. With sending, a message is oriented specifically from A to a designated recipient B. With emission, it is tossed into the sea, as it were, to wash up where it lists, its recipients being self-selected. Cognitive diffusion can of course take either form—though it is generally of the more diffuse, emissive sort. All of these proceedings—emitting, receiving, and transmitting—can relate not only to ideas or concepts, but also—and no less significantly—to substantive theses as well.

A commonality of acceptance establishes a cognitive liaison or linkage among cognizers. One of these may be said to be “proximate” to another to the extent that it accepts the information that this other transmits. However such proximity does not function in the way of ordinary distance, since with ordinary distance X is just as far distant from Y as Y is from X. In the present case, however, such symmetry does not obtain. X can accept 90% of what Y emits, while the reverse is true only to the extent of 20%. The “network of influence” comprised in a cognitive community can accordingly be mapped by a series of links of the format

X → Y

where the relative size of a node represents its total emissions, and the relative width of the connector represents the proportion of what is emitted that is accepted by the recipient.
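This asymmetric notion of proximity can be pictured as a weighted directed graph. The following minimal sketch (Python, using the 90%/20% figures from the example above; the variable and function names are merely illustrative and not part of the essay) shows how the two directions of a single link can carry different acceptance proportions:

```python
# Illustrative sketch only: a cognitive community as a directed graph whose
# edge (sender, recipient) carries the proportion of the sender's emissions
# that the recipient accepts. "Proximity" is asymmetric by construction.

emissions = {"X": 120, "Y": 45}      # total emissions per cognizer (node "size")

accept = {
    ("Y", "X"): 0.90,   # X accepts 90% of what Y emits
    ("X", "Y"): 0.20,   # Y accepts only 20% of what X emits
}

def proximity(recipient: str, sender: str) -> float:
    """Share of the sender's emissions that the recipient accepts."""
    return accept.get((sender, recipient), 0.0)

print(proximity("X", "Y"))   # 0.9 -- X is cognitively "close" to Y
print(proximity("Y", "X"))   # 0.2 -- but Y is comparatively "far" from X
```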


3. DIFFUSION

The idea of a cognitive landscape can be projected on the basis of distance/proximity relations among cognizers. As a claim/idea diffuses over a cognitive landscape, access to it will spread from a given agent to others in its communicative range. (It will “infect” them cognitively, so to speak.) Such transmission has two important aspects: (1) the extent of diffusion, and (2) the speed of diffusion. The diffusion of information (i.e., beliefs) can happen either naturally (as in the case of rumors) or by deliberate contrivance (e.g., a “telephone tree” to spread the word about, say, school closings due to weather conditions). The management of designed diffusion presents issues (e.g., the usual “Every parent calls two others” telephone tree design does not have sufficient redundancy to allow for unavailability).

Senders make information available. They emit information-on-offer. Only rarely do they enjoin (mandate, require) acceptance; such cases (telephone trees, fire alarms, the dinner gong) are exceptions. Here the sender determines not just availability but virtually ensures acceptance (to hear the dinner gong is to realize that dinner is about to be served). As information-on-offer spreads across a cognitive landscape there comes into play the decided difference between the insiders who have received it and the outsiders who have not, and among the insiders one can distinguish among:

• the acceptors who believe the information,

• the rejectors who disbelieve the information,

• the undecided who suspend judgment.

4. RELATIONS AMONG COGNITIVE AGENTS: ATTITUDES TO SENDERS

As already noted, the cognitive proximity between two communicators is not a single fixed quantity (like the distance between two places) but rather a pair of quantities, the one representing the extent to which


X accepts what Y emits and the other the reverse extent to which Y accepts what X emits. These two quantities accordingly reflect the extent to which one communicative agent trusts the other. And this is asymmetric: one communicator can trust another more (or less) than is the case in reverse. Trust can be assessed via acceptance probabilities: the more I trust you, the more likely I am to accept something simply because you do so. Trust can also be measured probabilistically, via the probability that p will be among A’s beliefs given that it is among B’s. Moreover, a cognitive agent can be varyingly inclined to a thesis: it can be more or less receptively sympathetic (or antipathetic) towards a given thesis across a range of pro/indifferent/con with respect to its acceptance. Hierarchy manifests itself in the power or strength of influence. A source of higher status will bring someone around to its own point of view to a greater extent than one of lower status. High status sources thus exert more conformity pressure.

5. CONFORMITY PRESSURES

With respect to cognitive landscapes numbers count. The more communicators agree, the stronger the pressure on others to join in. Moreover, an admixture of democracy and aristocracy prevails in cognitive matters. Let it be that whenever a majority accepts something, this very circumstance exerts a cognitive “conformity pressure” on the holdouts to convert. The strength of this pressure is measured by the probability of a change of mind over the next unit of time (e.g., a year). (Note: it can also be that the strength of the pressure, rather than being constant, will increase with the size of the majority.) Such a pressure can be exerted either by the masses at large (democratic case) or by elite individuals. It is instructive to consider the effects of conformity pressure over time. Let us suppose that the natural pro/con distribution with respect to some controverted claim is a mild majority of 55/45. And let us further suppose a conformity pressure of 5% per annum—i.e., that each year 5% of the minority hold-outs will join the majority. Then after a single generation of 30 years, the pro/con distribution will be 90/10. From a mild majority there has been a shift to a substantial consensus.
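The arithmetic of this example can be checked directly. A minimal calculation, assuming (as the text implies) that the 5% pressure applies each year to the minority that still remains:

```python
# Quick check of the 55/45 example: 5% of the remaining hold-outs convert
# each year (compound attrition).

minority = 45.0      # initial con share, in percent
pressure = 0.05      # annual probability that a hold-out converts
years = 30           # one generation

for _ in range(years):
    minority *= 1 - pressure

print(f"pro/con after {years} years: {100 - minority:.1f}/{minority:.1f}")
# prints roughly 90.3/9.7 -- i.e., the 90/10 split cited in the text
```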


6. ATTITUDES TOWARDS MESSAGES: PRO OR CON INCLINATIONS

The receptivity of informative messages (claims) is a function of several salient factors:

• the nature of the sender
— what is his hierarchical status?
— what is his track record?
— what does one think of his methods and his sources?

• the nature of the receiver
— is he sceptical or trusting?

• the nature of the message
— is it sympathetic to oneself?
— does it harmonize with one’s preexisting commitment?

• the relation between the sender and the receiver
— is there trust?
— is there a pro-attitude?

• the position of the wider community
— is there a firm consensus of communal conformity?


The following points are salient in this regard: (1) Elite senders carry greater conviction. (2) Recipients who have a need for, or at least a good case for, information are more likely to accept it. (3) Plausible information that fits smoothly into established patterns will carry easier conviction. And (4) where there is trust between sender and recipient—and where there is interaction between them—informative claims are far more likely to find acceptance.

The substantive nature of the message is a particularly prominent consideration here. Its fit into the preexistent context makes all the difference. Some messages are “music to our ears”: we welcome them and make them beneficiaries of a pro-attitude. But then too other messages can be decidedly unwelcome: the subject of a con-attitude. Thus we take some on board eagerly and receive them with open arms. Others we resist and charge with the impediment of added burdens of proof. When a low-status source takes an unfamiliar line, it does well to carry some high-status sources over to its side. (It took von Neumann to get people to take Gödel’s work seriously.)

It is (or would be) a matter of considerable interest to secure a more fully informed understanding as to how these factors interact with one another in determining the extent and the speed of diffusion—to see, for example, how the interplay between trust in sources and antipathy to claims works itself out with respect to diffusion.

7. ISSUES OF EXTENT AND TIMING

Two salient parameters of cognitive diffusion are

• the final extent of diffusion, as determined by the percentage of communicative agents that ultimately accept the thesis at issue,

• the rapidity of diffusion, as measured by the time (number of iterations) it takes for the diffusion to reach 90 percent of its final extent.


Various circumstances under which cognitive diffusion proceeds will have a critical impact upon its extent. And in particular:

• The weaker the (average) affinity interlinkage among units, the slower the diffusion and the less the ultimate extent of acceptance.

With doctrinal diffusion there arises the particularly interesting contrast between unopposed diffusion, where spread need only overcome ignorance and unknowing, and opposed diffusion, where there is outright resistance to the claims at issue.

• The greater the resistivity (con-inclination) to a thesis, the slower the diffusion and the less the ultimate extent of acceptance.

• As long as there is any resistivity (con-inclination) at all to a thesis, universal consensus will never be reached. There will always be isolated pockets of dissent.

8. DISAGREEMENT, DIVISION, AND BALKANIZATION

Differently situated resisters have differential impact upon diffusion from a given source. For example, if those resisters are isolated and disjoint, their influence would be expected to be of lesser significance than if a closer interrelationship among them energized and strengthened their influence. (Leibniz’s philosophical thought was greatly impeded in its impact upon English philosophy in consequence of his dispute with the British Newtonians.) A characteristic mode of diffusion occurs with the splitting of a complex position. Thus suppose that the inaugurating source emits a three-prong position p + q + r. There will now be eight different positions depending on which of p, q, r is accepted and which rejected. And of course if these components are themselves complex, this process can splinter yet further. (Essentially this sort of thing happened during the Reformation as the departure from Rome itself split along even more complex divisions.) Even with the universal availability of information there can thus be balkanization regarding acceptance. The result can be not a doctrinal bifurcation but an archipelago of diverse doctrines.
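Both the splitting of a complex position and the qualitative effect of resistivity can be illustrated with a toy computation. The sketch below (Python; every parameter is invented for the illustration and none is drawn from the text) first enumerates the eight stances available toward a three-component position p + q + r, and then runs a crude opposed-diffusion simulation in which a fixed share of agents never converts, so that the ultimate extent of acceptance falls short of universality:

```python
# Toy illustration, not the author's model. Part 1 enumerates the 2**3 = 8
# stances available once a three-component position p + q + r is on offer.
# Part 2 is a crude opposed-diffusion simulation with an invented
# "resistivity" share of agents who never convert.

import itertools
import random

# Part 1: splitting of a complex position into 2**3 = 8 possible stances
components = ["p", "q", "r"]
stances = list(itertools.product([True, False], repeat=len(components)))
print(len(stances), "possible positions")        # -> 8

# Part 2: diffusion in a fully mixed toy population
def diffuse(n_agents=1000, resistivity=0.3, spread_chance=0.2, steps=80, seed=1):
    rng = random.Random(seed)
    resistant = set(rng.sample(range(n_agents), int(resistivity * n_agents)))
    seed_agent = next(a for a in range(n_agents) if a not in resistant)
    accepted = {seed_agent}
    for _ in range(steps):
        for agent in range(n_agents):
            if agent in accepted or agent in resistant:
                continue
            # chance of conversion grows with the current share of acceptors
            if rng.random() < spread_chance * len(accepted) / n_agents:
                accepted.add(agent)
    return len(accepted) / n_agents

for r in (0.0, 0.3):
    print(f"resistivity={r:.1f}: extent of acceptance after 80 steps = {diffuse(resistivity=r):.2f}")
```

Under these invented settings the run with nonzero resistivity ends with a markedly lower share of acceptors, echoing the bullet points above.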


As a certain claim or thesis spreads its acceptance across the cognitive landscape, three sorts of ultimate outcomes are of particular note:

• general consensus with respect to across-the-board dominance in point of its acceptance (or rejection),

• binary dissensus by way of a splitting of the group into opposing pro- and anti-camps,

• balkanization into a plurality of rival camps (“schools of thought”) in various reactions to different components or aspects of the claim at issue.

9. CONCLUSION

As these deliberations indicate, the formation of group knowledge through the diffusion of information is a complex topic that involves:

• issues of factual evidentiation and doctrinal relevancy,

• issues of substantive novelty, notoriety, or utility,

• issues of personal sympathy/antipathy towards particular claims,

• issues of interpersonal liaison and attitude (especially trust).

The interplay of these issues is so complex and variegated that a general theory of cognitive diffusion will be very difficult to develop. In the end each particular situation has so many twists and turns that—like a fingerprint—it is likely to be unique. The best that can be done is to trace out a spectrum of typical and recurrent situations. Cognitive diffusion is one of those issues where questions are easy and answers difficult. The salient questions are straightforward, relating simply to the extent and speed of the diffusion process. And the factors on which those answers depend (principally the


attitude of the recipients toward the communicated messages and their stance toward those who communicate them) are also rather straightforward. But nevertheless the issue of the ultimate outcome of the process depends on these factors in so subtle and complex a way that even with the best of models the outcome will be very hard to foretell.1

NOTES

1 This chapter comprises part of the author’s contribution to a joint inquiry with Patrick Grim.

Chapter 9

MODELING IN PRAGMATIC PERSPECTIVE

1. WHY MODELING?

Modeling—the creation of physical or conceptual artifacts to mimic the behavior of natural systems—is one of our principal resources for getting a cognitive grip on the nature of things. As such, modeling is a purposive device—a practice defined by its aims and ends-in-view. By its very nature as such, a model must be a model of something; but it will also always be a model for something, representing some feature of its object for the realization of some purpose. However, modeling is purposively many-faceted. It has no single unique and overarching aim but can potentially serve a great variety of purposes. Specifically, this purposive range includes:

• process description (as per computer simulations of natural processes),

• discovery (finding otherwise undetectable modes of behavior),

• prediction (forecasting, population-change models, election-sampling, disease diffusion),

• explanation/illustration (architectural models; orreries to illustrate the solar system),

• experimenting (in otherwise unavailable conditions),

• testing (wind tunnels, wave simulators),

• amusement/entertainment (building model airplanes or ships),


• training/teaching (simulating airplane operations in flight simulators),

• planning (Kriegspiel, military table exercises).

There is thus no single way of assessing the adequacy of a model. This is a highly variable business because adequacy is here a question of a model’s ability to serve efficiently and effectively in the realization of the particular purposes for which it has been devised. The aim of the enterprise of modeling is to construct an artificial manifold M, the model, whose salient features or operations replicate those of a corresponding reality R. We resort to models primarily because of incapacity: how something works is too complex for us to manage and we resort to a simplified simulacrum to stand in its stead. A model seeks to replicate the salient features of its object in a simpler, more manageable, more perspicuous way. The aim is to provide, for a larger, functionally more complex whole, a simulacrum whose mode of operation mirrors that of this object in those respects, at least, that are relevant and informative in the setting of an investigation. The name of the game is to make tractable a complex reality which—in its real-life elaborateness—is not effectively manageable. Accordingly, a model is a cognitive tool that is devised in such a way that there is good reason (albeit never conclusive reason) to think that a question about reality that we resolve on its basis is correctly answered.

One very instructive way to classify models is with respect to the purposes they are designed to serve. And another useful way to classify models is by their modus operandi and the processes which they employ. This will cover such a range as

• mechanical models (model trains),

• mathematical models (via systems of equations, e.g. for economic processes),

• computerized models (computerized models of cryptographic machines),


• drawings (flow diagrams for production processes),

• physical constructions (model ships, artifact models).

Exactly because reality is complex and models oversimplify, the relation between the two can be many-faceted. For a model can successfully capture some feature of its object and yet perform miserably with respect to others. Mere stick figures may render good service in modeling human locomotion but they are of no help whatsoever with regard to human digestion.

2. HOW MODELING CAN GO WRONG

Since the adequacy of a model is a functional issue—viz. to enable us to answer questions about a reality that is not accessible to understanding with equal ease and convenience—successful modeling is not a matter of how closely that model resembles its correlative reality overall. Models are instrumental devices by nature. And as with any instrument, adequacy must be judged by experience—the salient question being whether the tool effectively accomplishes its work. It is a matter of the extent to which it engenders success in answering the questions of the particular problem-range that is at issue. Models can be evidentiated in terms of their performance. In this regard their salient aspects include:

• functional efficacy (error avoidance in prediction, explanation, etc.),

• consistency of product (robustness),

• consilience (consonant outcomes) with other models,

• detail,

• conformity to contextual information.


And so, like any purposive endeavor, modeling can succeed and/or fail in varying degrees: it can realize its objective more or less fully and adequately. Obviously the most drastic failure-mode to which modeling is subject is distortion through mis-describing the modus operandi of the item being modelled. But this is simply a generic term for modeling failure and does not get down to the productive factors that lie at the causal source of the thing. These will specifically include the following four:

• Oversimplification by omitting significant features that are there.

• Overcomplexification by introducing distorting factors that are not there.

• Over-estimation by representing some factor as present to a greater extent than it actually is.

• Under-estimation by representing some factor as absent to a greater extent than it actually is.

Among all of these failure-causes, it is oversimplification that is by far the most common and threatening. Two factors are of prime significance here: the parameters that describe a specific state of affairs and the equations or other relationships that connect states of affairs with one another. Neither of these is in general a straightforward matter. The parameter-values must usually be measured obliquely via interactions of various kinds, and the relationships are generally more complex than appears on first view. And significant obstacles are likely to arise on both scores.

3. OVERSIMPLIFICATION AS A GATEWAY TO ERROR

Oversimplification becomes a serious cognitive impediment by failing to take note of factors that are germane to the matters at hand, thereby doing damage to our grasp of the reality of things. Whenever we unwittingly oversimplify matters we have a blind-spot where some facet of reality is concealed from our view.


For oversimplification consists in the omission of detail in a way that creates or invites a wrong impression in some significant—i.e., issue-relevant—regard. In practice the line between beneficial simplification and harmful oversimplification is not easy to draw. As often as not it can only be discerned with the wisdom of retrospective hindsight. For whether that loss of detail has negative consequences and repercussions is generally not clear until after a good many returns are in. The root cause of oversimplification is ignorance: we oversimplify when there are features of the processes at issue about which we are ignorant. And it is somewhere between hard and impossible to come to terms with this. Ignorance is the result of missing information, and one’s grip on what is missing is inevitably tenuous.

For the most part, oversimplification involves loss. The student who never progresses from Lamb’s Tales from Shakespeare to the works of the bard of Avon himself pays a price not just in detail of information but in the comprehension of significance. And the student who substitutes the Cliff’s Notes version for the work itself suffers a comparable impoverishment. To oversimplify a work of literature is to miss much of its very point. Whenever we oversimplify matters by neglecting potentially relevant detail we succumb to the flaw of superficiality. Our understanding of matters then lacks depth and thereby compromises its cogency. And this is not the worst of it. One of the salient aspects of oversimplification lies in the fundamental epistemological fact that errors of omission often carry errors of commission in their wake: that ignorance plunges us into actual mistakes. For where Reality is concerned, incompleteness in information invites incorrectness. Oversimplification is, at bottom, nothing but a neglect (or ignorance) of detail. Its roots lie in a lack of detail—in errors of omission. When we fill in gaps and omissions—as we all too generally do—we are likely to slide along the slippery slope of allowing simplification to lead us into error.

Consider a domain consisting of a 3 x 3 tic-tac-toe square. And consider having X’s everywhere except for a blank in the middle. You will then be tempted to fill that middle square also by an X rather than an O. After all, this maximizes the number of available universal generalizations: X’s in all the columns, X’s in all the rows, X’s along every diagonal, etc. Still, reality might not be all that cooperative, and


unravel your neat uniformity by having an O at the middle. And you are then misled with regard to something very fundamental—namely the kinds of laws and regularities that obtain. Whenever there is a blank in our knowledge, the natural and indeed the sensible thing to do is to fill it in in the most direct, standard, plausible way. We assume that the person we bump into in the street speaks English and say “oops, sorry”—even though this may well prove to be altogether unavailing. We regard the waiter in the restaurant as ours even where it is his brother who bears a family resemblance. We follow the most straightforward and familiar routes up to the point where a DETOUR sign appears. Time and again we willingly and deliberately adopt the policy of allowing oversimplification to lead us into error because we realize it does so less frequently than the available alternatives.

Modeling runs into problems because it is generally a venture trying to give a simpler and more readily manageable picture of a complex reality. And the prime source of error here is obviously oversimplification. For oversimplification is the bane of modeling. A model is, in effect, a theory. And the adequacy of models hinges on the same factor as the adequacy of theories, namely their application. We use these models to guide actions and predictions. And when these fail to work out, we know that something is amiss with our models. There is, however, one very important difference between theories and models. Our scientific theories are crucial to explain not only how nature works but also the fact that it works in a certain sort of way. By contrast our models at best serve to explain how things work. Explaining why it is that this works in this way is something that requires more powerful instruments than mere modeling. And only after we have achieved this deeper level of understanding can we explain why it is that our models succeed to the extent that they do. Modeling thus comes near the start of the development of scientific understanding. It is not its final terminus.


4. THE IMPACT OF IGNORANCE

What is at work here is one of the fundamental principles of epistemology: We are bound to be ignorant regarding the details of our ignorance. I know that there are facts about which I am ignorant, but I cannot possibly know what they are. For to know that such-and-such is a fact about which I am ignorant, I would have to know that this is a fact—which by hypothesis is something that I do not know. And the same situation prevails on a larger scale. We can know that in various respects the science of the present moment is incomplete—that there are facts about the working of nature that it does not know. But of course I cannot tell you what those missing facts are. Our own ignorance is something that it is very hard to get a cognitive grip on. I can tell that I am ignorant of something-or-other. But I cannot ever tell just what this is. To know just what the fact is of which I am ignorant, I would need to know this fact itself—which, by hypothesis, I do not.

We can plausibly estimate the amount of gold or oil yet to be discovered, because we know the earth’s extent and can thus establish a proportion between what we have explored and what we have not. But we cannot comparably estimate the amount of knowledge yet to be discovered, because we have and can have no way of relating what we know to what we do not. At best, we can consider the proportion of currently envisioned questions we can in fact resolve; and this is an unsatisfactory procedure. For the very idea of cognitive limits has a paradoxical air. It suggests that we claim knowledge about something outside knowledge. But (to hark back to Hegel), with respect to the realm of knowledge, we are not in a position to draw a line between what lies inside and what lies outside—seeing that, ex hypothesi, we have no cognitive access to the latter. One cannot make a survey of the relative extent of our knowledge or ignorance about nature except by basing it on some overall picture or model of nature that is already in hand via prevailing science. But this is clearly an inadequate procedure. This process of judging the adequacy of our knowledge on its own telling may be the best we can do, but it remains an essentially circular and consequently inconclusive way of proceeding. The long and short of it is that there is no cognitively satisfactory basis for maintaining


the adequacy of our oversimplified models short of subjecting them to the risky trial of bitter experience.

5. SOME RETROSPECTS

In concluding I would like to mention one personal venture in empirical modeling to illustrate the preceding perspective. Let me narrate a bit of ancient history. During the years 1954–56 I worked at the RAND Corporation in Santa Monica, which continues in existence as a major think tank on public policy issues but in those days was devoted almost exclusively to USAF concerns. Now the military is generally fighting the last war, and at that stage the air force was thinking back anxiously to the disaster that befell its naval sister service at Pearl Harbor a little over a decade before. So our modeling took a war-gaming slant on the issue of what sort of attack the USSR could—with their then-available resources—inflict on US retaliatory capabilities. As best I recall, we came to three conclusions: (1) That the operation would have to be an immensely complex affair carried on on a vast scale. (2) That this would almost certainly be a process that could not be carried on with absolute secrecy in the face of our then-available observational capabilities. But (3) that the task would become vastly more difficult if our then operative policy of the forward basing of the Strategic Air Command were changed to one resorting to more extensive use of bases in North America. In the course of working out the detail of so complicated a conjectural exercise it became clear that there is simply no way of proceeding there without making a great many oversimplifying assumptions, but that for the specific inquiry at hand this did not matter, because the purpose of the investigation was attuned to accepting a certain unrealism for the sake of a worst-case scenario. (After all, in making one’s defensive preparations a certain unrealism in over-crediting the enemy is a pardonable sin.)

But in most modeling situations such an acceptance of palpable unrealism is not justifiable. Thus consider a very different contrast case. In the late 1960’s, the “Club of Rome” sponsored a study of economic-industrial growth on a world-wide basis by the MIT System Dynamics Group. Its findings looked to the neo-Malthusian “limits to growth” (to use the phrasing


of its final report). The upshot was the idea that unless various politically and socially unrealizable changes were made—and made rapidly—the world’s social and economic system would collapse by the year 2040. Now the Club’s dire prediction may well come to be realized eventually (after all, eventually is a long time). But it was grossly off target with regard to anything like its contemplated timespan. The fact of it is that the oversimplifications on which the analysis was based made for an unrealistic acceleration of the trends and tendencies at work. And this sort of thing can create big problems. For when a model is used as a basis for large-scale policy decisions its mis-firing can have consequences that are not just local to the particular issue at hand but can call the entire process into question. The use of models for deciding matters of policy in ways that are sensitive to detail must always be sign-posted PROCEED WITH CARE. Because when the aim of the modeling enterprise is to institute public policy changes, any intrusion of unrealism can all too readily set in motion a counter-reaction that can defeat our faith in rational inquiry itself.

In putting our models to work we do well to confront the prospects of oversimplification and its implications. And in doing so we have to realize that a significant structural imbalance is at work here as between two sorts of issues: the defensive, which seeks the maintenance of a status quo, and the offensive, which seeks to change it. If I take defensive measures on the basis of oversimplifying the difficulties posed by the offensive, my position is strengthened. But if I take offensive measures on the basis of oversimplifying the requirements for a successful defense, I risk disaster.

The purposive nature of the enterprise at hand has important implications for the lessons that we can responsibly draw from our models. It is salient among these that the nature of our models can and should reflect the purposes of their use. If the purpose at issue is predictive accuracy in faithfully depicting the actual phenomena, then great demands will be made on our models. Oversimplification is now a fatal flaw and by and large every practicable step must be taken to avoid it.


But on the other hand if it is only general guidance that we require, then requirements of detailed and precise faithfulness can be relaxed. We need not know the impending rainfall to within two decimal places if all we have to decide is whether or not to take an umbrella. In modeling as elsewhere, practice can and should appropriately be coordinated to purpose.1

NOTES

1 This chapter was written for a project on aspects of modeling organized by Patrick Grim of SUNY—Stony Brook. It has profited from interchanges with Joshua M. Epstein.

Chapter 10

HISTORICAL PERSPECTIVES ON THE SYSTEMATIZATION OF KNOWLEDGE

1. THE CONCEPT OF SYSTEMATIZATION

Although the use of the term “system” in this connection is of relatively recent date, the underlying idea of what we nowadays call a “system” was certainly alive in classical antiquity—with the Euclidean systematization of geometry providing a paradigm for this conception. In fact, it has been insisted throughout the history of Western philosophy that men do not genuinely know something unless this knowledge is actually systematic. Plato’s position in the Theaetetus that a known fact must have a logos (rationale), Aristotle’s insistence in his Posterior Analytics that strict (scientific) knowledge of a fact about the world calls for its accounting in causal terms, Spinoza’s celebration of what he designates as the second and third kinds of knowledge (in Book II of the Ethics and elsewhere), all point to the common, fundamental idea that what is genuinely known is known in terms of its systematic footing within the larger setting of a rationale-providing framework of explanatory order.

The root idea of system is that of structure or organization, of integration into an orderly whole that functions as an “organic” unity. Thus from antiquity to Hegel and beyond, cognitive theoreticians have insisted that our knowledge should be developed in a systematic manner—that it should be articulated within the unifying framework of an all-embracing cognitive structure. The notion of cognitive systematicity thus encapsulates the ancient ideal that our knowledge should be developed architectonically and should be organized within an articulated structure that exhibits the linkages binding its component parts into an integrated whole and leaves nothing wholly isolated and disconnected. A cognitive system is to provide a framework for linking the disjecta membra of the bits and pieces of our knowledge into a cohesive unity.


But while the concept of cognitive systematization is very old, the term “system” itself was not used in this sense until much later. In ancient Greek, systema originally meant something joined together—a connected or composite whole. The term figures in Greek antiquity to describe a wide variety of composite objects—medications, military formations, musical configurations, among others.1 The technicalization of the term began with the Stoics, who applied it specifically to the physical universe—the composite cosmos of “heaven and earth.”2 Apart from this, the term continued in use throughout classical texts in its very broad ordinary sense. The Renaissance gave the term a renewed currency. At first it functioned here too in its ancient sense of a generic composite. But in due course it came to be adopted by Protestant theologians of the 16th century to stand specifically for the comprehensive exposition of the articles of faith, along the lines of a medieval summa. By the late 16th century, the philosophers had borrowed the term “system” from the theologians, using it to stand for a synoptically comprehensive and connected treatment of a philosophical discipline: logic, rhetoric, metaphysics, ethics, etc.3 (It was frequently employed in this descriptive sense—in the title of expository books.) And thereafter the use of the term was generalized in the early 17th century to apply to such a synoptic treatment of any discipline whatever.4

This post-Renaissance redeployment of the term system had a far-reaching significance. In the original (classical) sense, a system was a physical thing: a compositely structured complex. In the more recent sense, a system was an organically structured body of knowledge—not a mere accumulation or aggregation or compilation of miscellaneous information (like a dictionary or encyclopedia), but an organized and connectedly articulated exposition of an organically unified discipline. It is just this sense of “system” that was eventually encapsulated in Christian Wolff’s formula of a system as “a collection of truths duly arranged in accordance with the principles governing connections” (systema est veritatum inter se et cum principiis suis connexarum congeries).5 Moreover, a system is not just a constellation of interrelated elements, but one of elements assembled together in an organic unity by linking principles within a functionally ordered complex of rational interrelationships. The duality in the applicability of the systems-


terminology to physical and intellectual complexes thus reflects a long-standing and fundamental feature of the conception at issue. A further development in the use of the term occurred in the second half of the 17th century. Now “system” came to be construed as a particular approach to a certain subject—a particular theory or doctrine about it as articulated in an organized complex of concordant hypotheses. This is the sense borne by the term in such phrases as “the system of occasional causes” or “the Stoic system of morality.” Leibniz was a prime promoter of this usage. He often spoke of his own philosophy as “my (new) system” of preestablished harmony, contrasting it with various rival systems.6 System was now understood as a doctrine or teaching in its comprehensive (i.e., “systematic) development. In the wake of this redeployment of the term in relation to a baroque proliferation of competing doctrines, philosophy now came to be viewed as a battle-ground of rival systems. This use of “system” to stand for a comprehensive (if controversial) particular philosophical doctrine opened the conception up to criticism, and brought systems into disrepute in the age of reason. Thus Condillac developed a judicious critique of systems in his celebrated Treatise on Systems.7 He distinguished between systems based on speculation (“abstract principles,” “gratuitous suppositions,” “mere hypotheses”) and those based upon experience. A system cannot be better than the principles on which it is based, and this invalidates philosophical systems, since they are based upon hypotheses along the lines disdained in Isaac Newton’s famous dictum: Hypotheses non fingo. Scientific systems, on the other hand, were viewed in a very different light. For Condillac, systems can thus be either good or bad—the good systems are the scientific systems, based on “experience”; the bad systems are the philosophical ones, based on speculative hypotheses. 2. THE THEORY OF COGNITIVE SYSTEMS The post-Renaissance construction of systematicity emphasized its orientation towards specifically cognitive or knowledge-organizing systems. The theory of such cognitive systems was launched in the second half of the 18th century, and the principal theoreticians were two


German contemporaries: Johann Heinrich Lambert (1728–1777) and Immanuel Kant (1724–1804).8 The practice of systematization that lay before their eyes was that of the great 17th century philosopher-scientists: Descartes, Spinoza, Newton, Leibniz, and the subsequent workers of the Leibnizian school—especially Christian Wolff. To be sure, the main use of the system-concept with all these later writers relates not to its potentially physical applications, but to its specifically cognitive applications to the organization of information. A cognitive system is a structured body of information, organized in accordance with taxonomic and explanatory principles that link this information into a rationally coordinated whole.9 Its governing functional categories are those of understanding, explanation, and cognitive rationalization. To be sure, one and the same system can be presented differently: it can be developed in an analytic or a synthetic way or—in the case of an axiomatic system—it can be developed from these rather than those axioms. What counts for a cogent system is the explanatory connection of ideas and not the particular style or format of their presentation. A system is individuated through general features relating to its content and its rational architectonic, and not through the particular sequences of its expository development. As long as we are enabled to traverse the same cognitive terrain, the order in which we do so is immaterial. Cognitive systematization is thus an epistemological notion and not a literary one—a matter of the organization of information and not of its mode of presentation.

The basic paradigm of a system is that of an organism, an organized whole of interrelated and mutually supportive parts functioning as a cohesive unit. Kant puts the matter suggestively as follows:

… only after we have spent much time in the collection of materials in somewhat random fashion at the suggestion of an idea lying hidden in our minds, and after we have, indeed, over a long period assembled the materials in a merely technical manner, does it first become possible for us to discern the idea in a clearer light, and to devise a whole architectonically in accordance with the ends of reason. Systems seem to be formed in the manner of lowly organisms, through a generatio aequivoca from the mere confluence of assembled concepts, at first imperfect,


and only gradually attaining to completeness, although they one and all have had their schema, as the original germ, in the sheer self-development of reason. Hence, not only is each system articulated in accordance with an idea, but they are one and all organically united in a system of human knowledge, as members of one whole …10

Lambert contrasts a system with its contraries, all “that one might call a chaos, a mere mixture, an aggregate, an agglomeration, a confusion, an uprooting, etc.” (“[alles] was man ein Chaos, ein Gemische, einen Haufen, einen Klumpen, eine Verwirrung, eine Zerüttung etc. nennt”).11

3. THE TRADITIONAL PARAMETERS OF SYSTEMATIZATION

In synthesizing the discussions of the early theoreticians of the system-concept one sees the following features emerge as the definitive characteristics of systematicity:

1. wholeness: unity and integrity as a genuine whole that embraces and integrates its constituent parts,

2. completeness: comprehensiveness, avoidance of gaps or missing components, nothing needful left out,

3. cohesiveness: connectedness, interrelationship, interlinkage, coherence (in one of its senses), a conjoining of the component parts by rules, laws, linking principles; if some components are changed or modified, then others will react to this alteration,

4. self-sufficiency: independence, self-containment, autonomy,

5. consonance: consistency and compatibility, coherence (in another of its senses), absence of internal discord or dissonance; harmonious mutual collaboration or coordination of components, “having all the pieces fall into place,”


6. architectonic: a characteristic structure or arrangement of duly ordered component parts, generally in a hierarchic ordering of sub- and super-ordination; functional simplicity: elegance, harmony and balance, tidiness in the collaboration or coordination of components,

7. functional unity: purposive interrelationship; a unifying rationale or telos that finds its expression in some synthesizing principle of functional purport,

8. functional regularity: rulishness and lawfulness, regularity of functioning, uniformity, normality (conformity to “the usual course of things”),

9. mutual supportiveness: the components of a system so combined under the aegis of a common purpose or principle as to conspire together in mutual collaboration for its realization; functional efficacy: efficiency, effectiveness, adequacy to the common task.

A system, properly speaking, must exhibit all of these characteristics, but it need not do so to the same extent—let alone perfectly. These various parameters of systematicity reflect matters of degree, and systems can certainly vary in their embodiment. Systematicity, accordingly, emerges as an internally complex and multi-criterial conception, which embraces and synthesizes all the various aspects of an organic, functionally effective whole. The paradigmatic system is a whole that has subordinate wholes whose existence and functioning facilitate—and indeed make possible—the contrived existence and functioning of the whole. A true system is subject to a pervasive functional unity of interrelated components, a unity correlative with the notion of completeness. Kant put the matter as follows:

The unity of the end to which all the parts relate and on the idea of which they all stand in relation to one another, makes it possible for us to determine from our knowledge of the other parts whether any part be missing, and to prevent any arbitrary addition, or in respect of its

(to discover) any omission that does not accord with the predetermined limits …12

Interestingly enough, the conception of systematicity is thus itself a system-oriented conception: a whole that encompasses a congeries of closely interrelated and mutually complementary conceptions. It is a composite idea, a complex Gestalt in whose make-up various, duly connected, structural elements play a crucial role: The conception of organism and organic unity clearly provides a unifying center for this range of ideas. Their focal point is the coordinated collaboration of mutually supportive parts operating in the interest of a unifying aim or principle.13 Many of our concepts are clusters of elements that are in theory disparate but in fact held together by the systematic order of the world. Rather than representing a fusion of diverse conceptual elements whose coming together is underwritten by purely conceptual and semantical relationships, the concurrence basic to the concept rests on a strictly empirical foundation. There is no logical guarantee that the conceptually distinguishable factors must go together; their by-and-large coordination is a matter of contingent fact. Concepts of this fact-coordinative sort rest on presuppositions whose content is factual, reflecting a view of how things go in the world. Such concepts are developed and deployed against a fundamentally empirical backdrop—a Weltanschauung, or rather, some minuscule sector thereof. The crucial characteristic of such cases is the conjoining of a plurality of factors that are in theory separable from one another but in practice generally found in conjunction. At the base of such a concept, then, lies an empirically underwritten coordination that places the various critical factors into a symbiotic, mutually supportive relationship. Accordingly, the concept is fact-coordinated in exactly this respect of envisaging a coming together of theoretically distinct factors whose union is itself the product not of conceptual necessity but of the contingently constituted general run of things.14 It is clear that systematicity is itself a fact-coordinative concept of just this sort, one which holds together in a symbiotic, systemic union of elements which—from the aspect of purely theoretical considerations—might well wander off on their own separate ways.

It is a crucial and very interesting aspect of the idea of system that it is fundamentally amphibious, applying alike to material systems (such as organisms) and intellectual systems (such as organically integrated bodies of knowledge). The idea is fundamentally neutral as between its material and its cognitive applications.15 This pervasive analogy between physical and intellectual systems in manifesting a complex of common elements was already stressed by both Lambert and Kant. The idea of systematization is intimately intertwined with that of planning in its generic sense of the rational organization of materials.16 Planning also exhibits the "amphibious" character of systematization. On the physical side one can have such projects as town planning, architecture, and landscape gardening. On the cognitive side one can plan the organization of information for the purpose of explanatory or deductive or dialectical (persuasive) or mnemonic codification. To be sure, some recent writers urge the need for maintaining a careful line of separation between intellectual systems, where systems-talk relates to "formulations of various kinds that are used for descriptive or conceptual-organizational purposes in science," and physical systems as "extra-linguistic entities which, in fact, might be described or referred to by such formulations."17 But any rigid bifurcation seems ill-advised. It is wrong to think that two different systems-concepts are at issue. As our historical considerations show, we are dealing with a deep-rooted parallelism, a duality of application of one single underlying conception.18 And actually, the development of general systems-theory over the past generation should be seen as an attempt to forge a comprehensive unifying framework within which all of the diverse applications of the systems idea could be accommodated—physical systems (be they natural or artificial), process control systems, and cognitive systems alike.

4. THE SYSTEMATICITY OF "THE TRUTH"

The conception that all truths form one comprehensive and cohesive system in which everything has its logically appropriate place is one of the many fundamental ideas contributed to the intellectual heritage of the West by the ancient Greeks. The general structure of the concept

can already be discerned in the Presocratics, especially in the seminal thought of Parmenides.19 The conception that all knowledge—that is, all of truth as humans can come to have epistemic control of it—forms a single comprehensive unit that is capable of a deductive systematization on essentially Euclidean lines is the guiding concept of Aristotle's theory of science as expounded in the Posterior Analytics. Inquiry is the pursuit of truth. And the domain of truth is in itself clearly a system. Let us consider the way in which the idea that "truth is a system" is to be understood. Three things are at issue: the set T of truths must have the features of comprehensiveness (or completeness), consistency, and cohesiveness (unity). The first two of these are familiar and well understood. Let us concentrate on the third. One way to explicate cohesiveness is in terms of inferential interdependence: The propositional set T exhibits the feature of inferential interlinkage in that every T-element is inferentially dependent upon at least some others: Whenever Q ∈ T, then there are elements P1, P2, …, Pn ∈ T (all suitably distinct from Q) such that: P1, P2, …, Pn ├ Q

This feature is at bottom merely a matter of sufficient redundancy. And it is clear that such redundancy does and must characterize truths. Assume that p and ~q are both true (and so q false). Then clearly both of the following will also be truths: p v q and q ⊃ (r & ~r). And so if our initial two propositions were excised from the set that represents the truths, they would both still be derivable from the remainder. The set of all truths does indeed exhibit this sort of systematic constrictiveness that makes each element inferentially redundant. The situation illustrated by such examples is a perfectly general one. Each and every truth P1 is a member of a family of related truths P1, P2, P3, …, Pn of such a kind that, even when P1 is dropped from explicit membership in the list, the remainder will collectively still yield P1. (The trio P1 = P, P2 = P v Q, P3 = P & Q yields an example.) This circumstance reflects what might be called the systematic constrictiveness of the truth: the fact that truths constitute a mutually determinative domain such that, even if some element is hypothetically deleted, it can nevertheless be restored from the rest.20
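For definiteness, the recovery at issue can be set out as an explicit derivation. What follows is a minimal sketch in elementary propositional logic, using only the two truths p v q and q ⊃ (r & ~r) instanced above:

\[
\begin{array}{lll}
1. & p \lor q & \text{(retained truth)}\\
2. & q \supset (r \land \lnot r) & \text{(retained truth)}\\
3. & \lnot q & \text{(from 2, since } r \land \lnot r \text{ cannot obtain)}\\
4. & p & \text{(from 1 and 3, by disjunctive syllogism)}
\end{array}
\]

Step 3 likewise re-delivers the excised truth ~q itself, so that both of the deleted items are indeed restored from the remainder.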

Thus when we formulate our knowledge-claims systematically, we are endowing them with verisimilitude in the root sense of "resemblance to the truth." One arrives at the inference:

Knowledge must reflect the truth.
The truth is a system.
∴ Knowledge should be a system.

This idea—that if our truth-claims are to approximate to the truth itself, then they too must be capable of systematic development—provides one of the prime grounds for adopting the systematicity of knowledge as a regulative ideal.

5. COGNITIVE SYSTEMATICITY AS A KEY STANDARD OF "SCIENTIFICITY"

Their systematicity authenticates the claims of individual theses as actually belonging to a science. Let us explore more extensively the contention—deep-rooted throughout the epistemological tradition of the West—that the proper scientific development of our knowledge should proceed systematically. Scientific systematization has two aspects. The first is methodological and looks to the unity provided by common intellectual tools of inquiry and argumentation. (This aspect of the unity of a shared body of methodological machinery was the focus of the "Unity of Science" movement in the heyday of logical positivism in the 1920's and 30's.) But of course, there should be a substantive unity as well. Something would be very seriously amiss if we could not bring the various sectors of science into coordination and consonance with one another. And even when there are or appear to be conflicts and discordances, we should be able to explain them and provide a rational account for them within an overarching framework of explanatory principles. Scientific explanation in general proceeds along subsumptive lines, particular occurrences in nature being explained with reference to

covering generalizations. But the adequacy of such an explanation hinges upon the status of the covering generalization: is it a "mere empirical regularity," or is it a thesis whose standing within our scientific system is more firmly secured? This latter question leads straightaway to the pivotal issue of how firmly the thesis is embedded within its wider systematic setting in the branch of science at issue. Systematization here affords a criterion of the appositeness of the generalization deployed in scientific explanation. An empirical generalization is not to be viewed as fully adequate for explanation purposes until it can lay claim to the status of a law. And a law is not just a summary statement of observed-regularities-to-date; it claims to deal with a universal regularity purporting to describe how things inevitably are: how the processes at work in the world must invariably work, how things have to happen in nature. Such a claim has to be based upon a stronger foundation than any mere observed-regularity-to-date. The coherence of laws in patterns that illuminate the functional "mechanisms" by which natural processes occur is a critical element—perhaps the most pivotal one—in furnishing this stronger foundation, this "something more" than a mere generalization of observations. An "observed regularity" does not qualify for acceptance as a "law of nature" simply by becoming better established through observation in additional cases. What is needed is integration into the body of scientific knowledge.21 The systematicity of knowledge is to be construed in the first instance as a category of understanding, akin in this regard to generality, simplicity, or elegance. Its primary concern is with form rather than matter: in the first analysis, it bears upon the organizational development of our knowledge rather than upon the substantive content of what is known, and deals with cognitive structure rather than subject-matter materials. Cognitive systematicity is a feature not so much of the substance of our knowledge as of its architecture or organization. Just as one selfsame range of things can be characterized simply or complexly, so it can be characterized systematically or unsystematically. Systematicity relates in the first instance not to what we know—the facts at issue in the items of information at our disposal—rather, it represents a feature of how we proceed to organize our knowledge of them. (These two aspects are, however, so interrelated

that what can be claimed to hold in the first analysis cannot be maintained without qualification in the final analysis.) Systematicity is not only a prominent (if partial) aspect of the structure of our knowledge, but is a normatively desirable aspect of it—indeed a requisite for genuinely scientific knowledge. It is, accordingly, correlative with the regulative ideal represented by the injunction: develop your knowledge so as to endow it with a systematic structure. To understand an issue properly—that is to say, scientifically—we must grasp it in its systematic setting. Sapientis est ordinare, affirms the sage dictum of which St. Thomas Aquinas was fond. It is through the heritage of the Leibniz-Wolff tradition in particular that systematization has become for the moderns too an ongoing vehicle for the ancient ideal of a scientia—a body of knowledge developed as a comprehensive whole according to rational principles. Prominent in the historical background here is Leibniz's bold vision of a scientia universalis—a synoptic treatment of all knowledge—encyclopedic in scope, yet ordered not by the customary, conventional and arbitrary arrangement of letters of the alphabet, but by a rational arrangement of topics according to their immanent cognitive principles. The prospect of organizing a body of claims systematically is crucial to its claims to be a science. Systematization monitors the adequacy of the rational development (articulation) of what we claim to know, authenticating the whole body of claims, collectively, as a science. As Kant put it, "systematic unity is what first raises ordinary knowledge to the rank of science."22 To know something scientifically is to exhibit it in an appropriate systematic context. Systematicity is the very hallmark of a science. To quote Kant: "Every discipline (Lehre) if it be a system—that is, a cognitive whole ordered according to principles—is called a science."23 For him systematicity enters into the very conception of a science. For him, a "science" is—virtually by definition—a branch of knowledge that systematizes our information in some domain of fact; he espouses the schema: the science of X—the systematization of all of our attainable knowledge regarding X.24 In a remarkably Hegelian vein, Kant wrote:

Systems seem to be formed … in the sheer self-development of reason. Hence, not only is each system articulated in accordance with an idea,

but they are one and all organically united in a system of human knowledge, as members of one whole, and so as admitting of an architectonic of all human knowledge, which, at the present time, in view of the great amount of material that has been collected, or which can be obtained from the ruins of ancient systems, is not only possible, but would not indeed be difficult.25

This idea of the systematically comprehensive self-development of reason is present in much of the subsequent philosophical tradition, and is particularly prominent in the School of Hegel. The systematic idea in the context of science embraces not only the more modest view that the several branches of empirical inquiry (scientific knowledge) exhibit a systematic structure severally and separately, but also the more ambitious doctrine that the whole of natural science forms one single vast and all-comprehending system. The conception of scientific systematization points towards the ideal of a perfect science within which all the available and relevant facts about the world occupy a suitable place with due regard to their cognitive connections. Indeed, not only should scientific knowledge approximate—and, ideally, constitute—one vast synoptic system, but a discipline is hallmarked as authentically scientific by its inclusion within this over-all system. To be sure, no one claims that such synoptic and comprehensive systematization is a descriptive aspect of scientific knowledge as it stands today (or will stand at some other historical juncture). But it represents an idealization towards which science can and should progress along an evolutionary route.

6. THE TELEOLOGY OF SYSTEMATIZATION

Systematization provides a regulative ideal of cognitive development throughout the domain of our knowledge—alike in its formal and its factual hemispheres. Now the systematization of formal knowledge—particularly in the spheres of mathematics, logic, formal linguistics—is, to be sure, a noble and ancient project whose pursuit among the Greeks provided the very foundation of the enterprise of cognitive systematization. However, we shall here put aside almost entirely the

issue of the systematization of formal knowledge and focus upon the factual sector. It is the systematization of our factual, empirical knowledge of the contingent arrangements of this world that will occupy us throughout the coming pages.26 To be sure, fascinating and important issues arise in the formal domain. Nevertheless the systematization of our factual knowledge has its distinctive problems and ramifications and it is these which shall concern us here. In inquiry as in walking one must progress one step at a time, and the complexities of the case are such that a division of labor is advisable. There is no rational basis for issuing in advance—prior to any furtherance of the enterprise itself—a categorical assurance that the effort to systematize our knowledge of the world is bound to succeed. The coherence, consistency, and uniformity of our factual knowledge is (as we shall see) not something that can be guaranteed a priori, as having to obtain on the basis of the "general principles" of the matter. These factors represent a family of regulative ideals towards whose realization our cognitive endeavors should strive. The drive for systematicity is the operative expression of such an ideal, and not something whose realization can be taken for granted as already certain and settled. The aspect of systematic pattern and generality—of rulishness—has a deep Darwinian rationalization. To make our way in a difficult world we men, as rational animals, need to exploit regularities for our effective functioning. Now rules are easiest to grasp, to master, to apply, and to transmit if they themselves are organized in rulish patterns—i.e., are developed systematically. And the concern for system is nothing else than this drive for metarulishness, this effort to impart to our principles of behavioral and intellectual procedure a structure that is itself principled. But the question remains: what rational considerations render systematicity so desirable—what is the grounding of its status as a regulative ideal in cognition? What is the point of organizing our knowledge as a system—what, in short, does systematicity do for us? After all, systematization is a pointful action and system is a functional category—systematizing is something that has to have a purpose to it. This purposive aspect of the matter needs closer scrutiny. Knowledge is organized with various ends in view—in particular, the heuristic (to make it easier to learn, retain, and utilize) and the

probative (to test and thereby render it better supported and more convincing). To be sure, in the present study of cognitive systematization it will, in effect, be the monograph and not the textbook that is the paradigm. We shall put aside the psychological aspects of knowledge-acquisition and utilization (learning, remembering, etc.), focusing upon the rational aspect of organizing knowledge in its probative and explanatory dimensions. Our concern is thus with the systematization of knowledge as a matter of planning for the organization of knowledge for theoretical and purely cognitive purposes (rather than didactic or heuristic ones). We shall focus on probatively oriented systematizations and put heuristic issues aside. Given our focus on strictly probative issues, the systematic development of knowledge—or purported knowledge—will be seen to serve three major interrelated functions:

1. Intelligibility. Systematicity is the prime vehicle for understanding, for it is just exactly their systematic interrelationships which render factual claims intelligible. As long as they remain discrete and disconnected, they lack any adequate handle for the intellect that seeks to take hold of them.

2. Rational Organization. Systematicity—in its concern for such desiderata as simplicity, uniformity, etc.—accords the means to a probatively rational and scientifically viable articulation and organization of our knowledge. The systematic development of knowledge is thus a key part of the idea of science.

3. Verification. Systematicity is a vehicle of cognitive quality-control. It is plausible to suppose that systematically developed information is more likely to be correct—or at any rate less likely to be defective—thanks to its avoidance of the internal error-indicative conflicts of discrepancy, inconsistency, and disuniformity. This indicates the service of systematization as a testing-process for acceptability—an instrument of verification.

Let us consider these three themes more closely.

7. SYSTEMATICITY AND UNDERSTANDING

Its orientation towards the provision of a rationale makes systematization an indispensable instrument of cognitive rationality. Within a systematic framework, the information to be organized is brought within the controlling aegis of a network of rule-governed explanation and evidential relationships. The facts are thus placed within patterns of order by way of reference to common principles, and their explanatory rationalization is accordingly facilitated. Systematization is a tool of explanation, and we explain things with an end in view—viz., to make them intelligible. But what does this "intelligibility" involve? Its definitive themes are recognition and appropriation (aneignen): familiarization, reduction to the ordinary, putting matters "into one's own terms" and rendering them "only natural and to be expected." A cognitive system provides illumination; its systematic interconnections render the facts at issue amenable to reason by setting them within a framework of ordering principles that bring their mutual interrelationships to light. Systematicity provides the channels through which explanatory power can flow. Evidential or explanatory cohesion provides the system-establishing synthesis which does the job of "accounting" for its theses in both senses of this term—explaining-the-fact and also providing-evidence-for-its-claims-to-factuality. (Although all the early writers on the subject construed these unifying linkages of cognitive systems in deductive terms, it is in fact quite immaterial whether these evidential or explanatory links are deductive or inductively evidential: the structure of the interrelationships and not the nature of its links determines cognitive systematicity.) In this way, systematization facilitates understanding because the system provides the structure of interrelatedness through which the cognitive role of its elements is made manifest. A second major aim of cognitive systematization is that of providing the requisite means for authenticating a body of knowledge claims as scientific. This is something which we can, at this stage, pass by with a mere "listing for the record."

8. SYSTEMATICITY AND ERROR-AVOIDANCE

Systematic development controls the adequacy of articulation of our body of knowledge (or purported knowledge). This is evident from a consideration of the very nature of the parameters of systematicity: consistency, consonance, coherence—and even completeness (comprehensiveness). The advantages of injecting these factors into the organizing articulation of our knowledge are virtually self-evident. In the pursuit of factual knowledge we strive to secure information about the world. We, accordingly, endeavor to reject falsehoods, striving to assure that to the greatest feasible extent the wrong ones are kept out of our range of cognitive commitments. And the pursuit of consistency, consonance, coherence, completeness, etc. clearly facilitates the attainment of this ruling objective. Systematization is a prime instrument of error avoidance. How serious a matter is error? Some regard it as akin to a heinous crime. W. K. Clifford certainly thought so. In his classic 1877 essay on "The Ethics of Belief" (to which William James' even more famous essay of 1896 on "The Will to Believe" offers a reply) Clifford maintained that: "It is wrong, always, everywhere, and for everyone to believe anything upon insufficient evidence."27 But if error-avoidance is to be our be-all and end-all, a straightforward solution lies before us. We can simply adopt the skeptic's course of refusing to accept anything whatsoever. James quite properly argued against Clifford that the enterprise of inquiry is governed not only by the negative injunction "Avoid error!" but no less importantly by the positive injunction "Achieve truth!" And, in the factual area—where the content of our claims outstrips the evidence we can ever gather for them—this (so he insists) demands the risk of error. There's nothing irrational about this risk, quite to the contrary: "a rule of thinking which would absolutely prevent one from acknowledging certain kinds of truth, if those kinds of truth were really there, would be an irrational rule."28 There are in fact very different sorts of "errors." There are errors of the first kind—errors of omission—arising when we do not accept P when P is in fact the case. These involve the sanction (disvalue) of ignorance. And there are also errors of the second kind—errors of commission—arising when we accept P when in fact not-P. These involve the mark of cognitive dissonance and outright mistake. And clearly both sorts of mis-steps are errors.
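The contrast between the two sorts of error admits of a compact tabulation; the following array is offered purely by way of schematic illustration:

\[
\begin{array}{l|cc}
 & P\ \text{is true} & P\ \text{is false}\\
\hline
\text{we accept } P & \text{correct acceptance} & \text{error of the second kind (commission)}\\
\text{we do not accept } P & \text{error of the first kind (omission)} & \text{correct non-acceptance}
\end{array}
\]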

The rules of the cognitive game call not only for rejecting falsehoods and keeping the wrong things out but also for accepting truths and assuring that the right things get in. Let us look at the matter in a Jamesian perspective, as one of balancing the desideratum of error-avoidance off against that of information-loss. The normative aims of knowledge-pursuit that are correlative with scientific development are clearly facilitated by systematization. The key idea is that the systematization of our knowledge facilitates the realization of its governing objective, the engrossment of information: an optimal balance of truths over falsehoods. Systematization is presumptively error-minimizing with respect to the two kinds of cognitive errors. This step from error-avoidance to truth-acceptance brings us to the threshold of an important idea, that of the "Hegelian Inversion."

9. THE HEGELIAN INVERSION: THE TRANSFORMATION FROM A DESIDERATUM OF EXPOSITION TO A TEST OF ACCEPTABILITY

An important further extension in this range of ideas came into prominence with Hegel and his followers in the 19th century. This is the transformation from the earlier conception of systematicity as the hallmark of science, as per the equation:

a science = a systematically developed body of knowledge

into its redeployment as a test or standard of cognitive acceptability, as per the equation:

true (presumptively) = meriting inclusion within a science capable of being smoothly integrated into the system of scientific knowledge

We thus arrive at the inversion from the implication:

If an item is part of our (purported) knowledge, then it is systematizable with the whole of it.

to that of the implication:

If an item is systematizable with the whole of our (purported) knowledge, then it should be accepted as a part of it.

Systematicity is now set up as a testing criterion of (presumptive) truth and thus becomes a means for enlarging the realm of what we accept as true rather than merely affording a device for organizing pre-established truth. The preceding discussion stressed the over-all systematicity of "the truth"—the fact that the totality of true theses must constitute a cohesive system. Systematicity was thus presented as a crucial aspect of truth. The presently envisaged approach in effect takes the step of elevating this significant aspect of truth into a definitive aspect of it: in sum, as a CRITERION. This line of development points towards a new and importantly different role for systematicity. Its bearing is now radically transformed. From being a hallmark of science (as per the regulative idea that a body of knowledge-claims cannot qualify as a science if it lacks a systematic articulation), systematicity is transmuted into a standard of truth—an acceptability criterion for the claims that purport to belong to science. From a desideratum of the organization of our "body of factual knowledge," systematicity is metamorphosed into a qualifying test of membership in it—a standard of facticity. This idea of systematicity as an arbiter of knowledge (to use F. H. Bradley's apt expression) was implicit in Hegel himself, and developed by his followers, particularly those of the English Hegelian school inaugurated by T. H. Green. This Hegelian Inversion leads to one of the central themes of the present discussion—the idea of using systematization as a control of substantive knowledge. F. H. Bradley put the matter as follows:

The test (of truth) which I advocate is the idea of a whole of knowledge as wide and as consistent as may be. In speaking of system (as the

standard of truth) I always mean the union of these two aspects … (which) are for me inseparably included in the Idea of system …29

To see more vividly some of the philosophical ramifications of this Hegelian approach, let us glance back once more to the role of systematicity in its historical aspect. The point of departure was the Greek position (in Plato and Aristotle, and clearly operative still with rationalists as late as Spinoza) which, secure in a fundamental commitment to the systematicity of the real, takes cognitive systematicity (i.e., systematicity as present in the framework of our knowledge) as a measure of the extent to which man's purported understanding of the world can be regarded as adequate: Here systematicity functions as a regulative ideal for the organization of knowledge and (accordingly) as a standard of the organizational adequacy of our cognitive claims. But the approach of the Hegelian school (and the Academic Skeptics who had anticipated them in this regard—as we shall see) moves well beyond this position. Viewing systematicity not merely as a regulative ideal for knowledge, but as an epistemically constitutive principle, it extended what was a mere test of understanding into a test of the evidential acceptability of factual truth claims. Accordingly, the Hegelian Inversion sees the transformation of systematicity from a framework for organizing knowledge into a quality-control mechanism for knowledge claims. Fit, attunement, and systematic connection become the determinative criteria in terms of which the acceptability of knowledge-claims is assessed. On this approach, our "picture of the real" emerges as an intellectual product produced under the control of the idea of system as a regulative principle for our theorizing.

10. METAPHYSICAL RAMIFICATIONS OF THE HEGELIAN INVERSION

Interesting metaphysical implications of the bearing of systematicity on the interrelation between truth and reality emerge from this perspective. Let us approach the issue in its historical dimension. A line of thought pervasively operative in antiquity may be set out by the syllogism:

Reality is a coherent system.
Knowledge corresponds to reality.
∴ Knowledge is a coherent system.

With Kant's Copernican Revolution this line of reasoning came to be transformed to:

Knowledge is a coherent system.
Knowledge agrees with (empirical) reality.
∴ Reality (i.e., empirical reality) is a coherent system.

While the original syllogism effectively bases a conclusion about knowledge upon a premise regarding reality, its Kantian transform infers a conclusion about reality from premises regarding knowledge. With this aspect of Kant's Copernican Revolution we reach the idea that in espousing the dictum that "truth is a system," what one is actually claiming to be systematic is not the world as such, but rather our knowledge of it. Accordingly, it is what is known to be true regarding "the facts" of nature that is systematized, and systematicity thus becomes—in the first instance—a feature rather of knowledge than of its subject matter. The idea of system can—indeed must—be applied by us to nature, yet not to nature in itself, but rather—as Kant puts it—to "nature insofar as nature conforms to our power of Judgment."30 Correspondingly, system is at bottom not a constitutive conception descriptive of reality per se, but a regulative conception descriptive of how our thought regarding reality must proceed. Kant's successors tended to turn their backs upon his regulative and epistemological approach. They wanted to overcome Kant's residual allegiance to the Cartesian divide between our knowledge and its object. Waving the motto that "the real is rational" aloft on their banners, they sought to restore system to its Greek position as a "fundamentally ontological"—rather than "merely epistemological"—concept. In this setting, however, the concept of the systematization of truth played the part of a controlling idea more emphatically than ever. Hegel in effect simply went back to the Greeks. He was discontent with Kant's setting up as major premise what for him (and the Greeks) ought to have been a conclusion, and so insisted once more on the

centrality of the question: How do we know that knowledge is a coherent system? But in answering this question he also in his turn undertook a Kant-reminiscent inversion, shifting from the relatively innocuous principle:

If a thesis is a part of real knowledge, then it must cohere systematically with the rest of what is known.

to its more enterprising converse:

If a thesis coheres systematically with the rest of what is known, then it is a part of real knowledge.

Now it is clear that once we adopt this principle as our operative standard (criterion, arbiter) of knowledge—so that only what is validated in terms of this principle is admitted into "our knowledge"—then the crucial contention that "Knowledge is a coherent system" at once follows. If the epistemic constituting of our (purported) knowledge takes place in terms of considerations of systematic coherence, then it follows—without any reference to ontological considerations—that the body of knowledge so constituted will have to form a coherent system. With the Hegelian Inversion the ontologically demanding thesis of the systematicity of the real comes to rest on a relatively innocuous epistemological foundation.31

NOTES

1

Much of the presently surveyed information regarding the history of the term is drawn from the monograph by Otto Ritschl, System und systematische Methode in der Geschichte des wissenschaftlichen Sprachgebrauchs und der philosophischen Methodologie (Bonn, 1906).

2

See Theodor Ziehen, Lehrbuch der Logik (Bonn, 1920), p. 821. Compare Aristotle, De mundo.

3

Thus Bartholomew Keckermann (d. 1609) wrote in his Systema logicae (Hanover, 1603) that the term logic, like that for every art, stands for two things: the practical skill on the one hand and the systematic discipline on the other: primo pro habitu ipso in mentem per praecepta et exercitationem introducto; deinde pro praeceptorum logicorum comprehensione seu systemate.

(Quoted in O. Ritschl, op. cit., p. 27.)

4

For example: Johann Heinrich Alsted, Systema mnemonicum (Frankfurt, 1610); Nicas de Februe, Systema chymicum (Paris, 1666 [in French]; London, 1666 [in English]); Richard Elton, Systema artis militaris (London, 1669).

5

Logic, sect. 889; cited in Theodor Ziehen, op. cit. p. 821.

6

Thus Leibniz contrasts his own systême de l'harmonie préétablie with the système des causes efficientes et celui des causes finales as well as the système des causes occasionnelles qui a été fort mis en vogue par les belles reflexions de l'Auteur de la Recherche de la Vérité (Malebranche). He characterizes his own contribution as the système nouveau de la nature et de la communication des substances aussi bien que de l'union qu'il y a entre l'âme et le corps. (Ritschl, p. 60).

7

See his Traité des systèmes, first published in Paris in 1749.

8

The main theoretical works are various essays by Lambert (including the opuscula Fragment einer Systematologie [dated 1767 and 1771], Theorie des Systems [1782], and Von den Lücken unserer Erkenntniss [17xy]) and Kant's Critique of Pure Reason [1781], esp. Book II, Part 3, "The Architectonic of Pure Reason." Johann Heinrich Lambert: Philosophische Schriften, two vols. (Leipzig [7], 1782 and 1787; reprinted Hildesheim, 1967).

9

A cognitive system is never “merely descriptive”—any scientific scheme of classification always proceeds in line with explanatory considerations.

10

CPuR, A834 = B862 (Kemp Smith).

11

Cited in O. Ritschl, op. cit., p. 64.

12

CPuR, A832-833, B860-861; tr. Kemp Smith.

13

As Lambert puts it: the parts of a system should "alle mit einander so eng verbunden dass sie gerade das der vorgesetzten Absicht gemässe Ganze ausmachen." (Quoted in O. Ritschl, op. cit., p. 64).

14

For a fuller treatment of such fact-coordinative concepts see Chapter VI of the author's The Primacy of Practice (Oxford: Blackwell, 1973).

15

In view of this fact it is strange that so little attention is paid to cognitive (“intellectual,” “symbolic”) systems within the general systems theory movement. Thus in

Ludwig von Bertalanffy's synoptic survey of General System Theory: Foundations, Development, Applications (New York, 1968) the distinction is recognized, without any elaboration or discussion of the issues on the cognitive side of the matter.

16

For a useful general treatment of planning theory see G. A. Miller, E. Galanter, and K. H. Pribram, Plans and the Structure of Behavior (New York: Holt, 1960).

17

Richard S. Rudner, Philosophy of Social Science (Englewood Cliffs: Prentice Hall, 1966), p. 89.

18

In an interesting recent article, "On the Concept of System" (Philosophy of Science, Vol. 42, 1975, pp. 448–468), J. H. Marchal reaches a parallel conclusion in proceeding within the "general systems theory" movement.

19

The following passage is particularly apposite here: The thing that can be thought and that for the sake of which the thought exists is the same; for you cannot find thought without something that is, as to which it is uttered. And there is not, and never shall be, anything besides what is, since fate has chained it so as to be whole and immovable … Since, then, it has a furthest limit, it is complete on every side, like the mass of a rounded sphere, equally poised from the centre in every direction; for it cannot be greater or smaller in one place than in another. For there is nothing that could keep it from reaching out equally, nor can aught that is be more here and less there than what is: since it is all inviolable. For the point from which it is equal to every direction tends equally to the limits. Here shall I close my trustworthy speech and thought about the truth. (Frag. 8, tr. J. Burnet.)

20

See Chapter VII of the author’s The Coherence Theory of Truth (Oxford: Clarendon Press, 1973) for a further treatment of relevant issues.

21

The idea that in explicating the idea of a "law of nature" we shall take systematicity as our standard of lawfulness was standard among the English neo-Hegelians. It recurs in F. P. Ramsey, who in an unpublished note of 1928 proposed to characterize laws as the "consequence of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system." (See David Lewis, Counterfactuals (Oxford: Clarendon Press, 1973), p. 73.) Ramsey gives the theory an interesting—but in principle gratuitous—twist in the direction of a specifically deductive style of systematization, a specification the Hegelians had made along coherentist rather than deductivist lines. The more orthodoxly neo-Hegelian, coherentist version of the theory was refurbished in the author's Scientific Explanation (New York: The Free Press, 1970); see especially pp. 110–111. Parts of the present discussion draw upon this work.

22

CPuR, A832, B860 (Kemp Smith).

23

Immanuel Kant, Preface to the Metaphysical Foundations of Natural Science (tr. L. W. Beck).

24

CPuR, A834, B862 (Kemp Smith).

25

CPuR, A834, B862 (Kemp Smith).

26

It should be stressed that this stance in no way countervails against recognition that formal knowledge is an indispensable part of the rational instrumentalities by which inquiry in the factual domain proceeds. For the author's views regarding … see his Methodological Pragmatism (Oxford: Blackwell, 1976).

27

Lectures and Essays (London: Oxford University Press, 1879), Vol. II; originally published in the Contemporary Review, Vol. 30 (1877), pp. 42–54. For a useful outline of the James-Clifford controversy and its background see Peter Kauber, "The Foundations of James' Ethics of Belief," Ethics, vol. 84 (1974), pp. 151–166, where the relevant issues are set out and further references to the literature are given. For a particularly interesting recent treatment see Roderick Chisholm, "Lewis' Ethics of Belief," in The Philosophy of C. I. Lewis, edited by P. A. Schilpp (La Salle: Open Court, 1968), pp. 223–300.

28

The Will to Believe and Other Essays in Popular Philosophy (New York: Longmans Green, 1956), pp. 27–28. Clifford's tough line on belief in the religious sphere is not matched by a corresponding toughness in the sphere of scientific knowledge, where he took a confidently realistic position. Rejecting the possibility of certainty here, he stressed that what we accept as our "knowledge" of nature rests on various interpretative principles, which, though indemonstrable, are nevertheless necessary for man's survival, and whose acceptance is thus to be accounted for (though not established) in evolutionary terms. He held the uniformity of nature to be one such principle, maintaining that "Nature is selecting for survival those individuals and races who act as if they were uniform; and hence the gradual spread of this belief over the civilized world" (op. cit., p. 209). James' own position against Clifford comes down to saying that what's sauce for the scientific goose is sauce for the religious gander as well.

29

“On Truth and Coherence,” Essays on Truth and Reality (Oxford: Clarendon Press, 1914), pp. 202-218; see pp. 202–203.

30

Introduction to Kant’s Critique of Judgment, Werke, Vol. I. V; Academy edition (Berlin, 1920), p. 202.

31

This chapter was originally published in Philosophy in Context, vol. 6 (1977), pp. 20–42.

Chapter 11

COMMUNICATIVE APPROXIMATION IN PHILOSOPHY

1. SEMANTICAL IMPRECISION

The presumption of our having a vocabulary adequate to the descriptive characterization of natural reality envisions a communicative precision in the fit between terminology and phenomenology that is actually unavailable. In consequence, the complexities of the real encounter a semantical insufficiency that forces us into the realm of communicative approximation. One illustration of descriptive inadequacy is afforded by the phenomenon of black-and-white photography with its characteristic obliviousness to various similarities and contrasts. Many details of qualitative sameness and difference will go by the board on this basis. And the inherent inadequacy of our descriptive resources arises even more strikingly when it comes to music or wines within the limitations of everyday-language vocabulary. How then can we proceed when we want to describe a segment of fact or reality with a vocabulary that is unsuited to it? Consider describing

• the workings of hypnosis or of acupuncture in the vocabulary of orthodox biomedical medicine,
• the workings of physical reality at the quantum level in the vocabulary of everyday life,
• the worldview of Shamanism in the vocabulary of Western culture.

The complexity and convolution of nature is such that any attempts at its characterization in the language of ordinary life are bound to fail.

In such cases we simply lack the means for stating—literally and precisely—the complicated facts of the matter. Instead, we have to resort to the mechanisms of linguistic assimilation: analogy, similes, and the like. But at this point we move across the crucial divide that separates the duality of true/false from the multi-valent plurality of degrees of adequacy: whereas ordinary fact-assertive statements like "The cat is on the mat" or "Two plus two are six" are generally subject to the binary appraisal of truth and falsehood, such approximations are not. Consider the question "What does the letter A look like?" This question is unanswerable, imponderable, meaningless. It is based on the presupposition that there is some definite look to the letter A. The question as it stands simply has no answer. On the other hand, the response "Usually and ordinarily, the letter A either looks A-wise or a-wise" is true, appropriate, and informative. The turn to linguistic approximation can expand our horizons and extend our knowledge. However, oversimplification (be it deliberate or inadvertent) is a common form of linguistic approximation. Thus when we describe the array A a A a A as a sequence consisting of five inscriptions of the letter A, we are saying something which, while perfectly true, nevertheless oversimplifies the situation and fails to do justice to the descriptive actualities of that array. Even when true, oversimplification invites error. In the present case, if we lack the lexicographic resources for distinguishing between capital and small letters we might well say that we are dealing with an array of five inscriptions of the letter A, thereby inducing our audience into thinking that those inscriptions are homogeneous—after all, if there were significant differences why would that not be affirmed? In the presence of a descriptively insufficient lexicography we are driven from a classically binary semantics of true-false to a sliding-scale semantics of degrees of adequacy. And this means that we are constrained to shift from a classical epistemology of degrees of probability with respect to truth to an (as yet nonexistent) epistemology of degrees of semantical adequacy. For, quite in general, analogy, simile, and similar modes of descriptive approximation are not subject to true-false classification but require the more complex assessment of more or less adequate or satisfactory.

2. DEFINITENESS VS. SECURITY

While more definite, literal, and precise statements are more informative, they are—for that very reason—also more vulnerable. Throughout the sphere of our cognitive concerns there is an inherent tension between generality and security. Increased security can generally be purchased for our claims at the price of decreased accuracy and precision. We estimate the height of a tree at around 25 feet. We are quite sure that the tree is 25 ± 3 feet high. We are virtually certain that its height is 25 ± 10 feet. But we are completely and absolutely sure that, given that the item at issue is indeed a tree, its height is between 1 inch and 100 yards. Of this we are "totally sure" and "certain beyond the shadow of a doubt," "as certain as we can be of anything in the world," "so sure that we would be willing to stake our life on it," and the like. For any sort of plausible claim whatsoever, there is always a characteristic trade-off between its evidential security (or probability), on the one hand, and, on the other, its contentual definiteness (exactness, detail, precision, etc.). The prevailing situation is as depicted by the concave curve presented in Display 1. Throughout the range of our information-gathering inquiries, the epistemic lay of the land is such that it is in effect impracticable to make one's generalizations at once both highly informative and highly safe (i.e., secure). In classical antiquity, Aristotle's biology and physics were full of general rules to which there are sporadic exceptions. The rules say how things go "on the whole" (hôs epi to polu: in general); the exceptions "prove" the rule. But this points towards a pre-modern conception of science—a science content to say how things ordinarily and normally stand. Consider Display 1 again. By contrast, modern science seeks to operate at the top of the diagram. It foregoes the security of indefiniteness, in striving for the maximal achievable universality, precision, exactness, and the like. The mathematically precise law-claims of natural science involve no hedging, no fuzziness, no incompleteness, and no exceptions—they are strict: precise, wholly explicit, exceptionless. When investigating the melting point of lead, the physicist has no interest in claiming that most pieces of (pure) lead will quite likely melt at somewhere around this temperature. (Even where science deals in probabilities, it deals with them in a way that characterizes exactly how they must comport themselves.)

Display 1
THE DECLINE OF SECURITY WITH INCREASING DEFINITENESS
[A concave curve, with increasing definiteness on the one axis and increasing security on the other.]
Note: Given suitable ways of measuring security (s) and definiteness (d), the curve at issue can be supposed to be the equilateral hyperbola: s × d = constant
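A crude numerical sketch may help to fix ideas; the particular measures are assumed here purely for illustration and are in no way forced by the Display. Suppose that the definiteness of an estimate is gauged by the reciprocal of the width Δ of its tolerance interval, so that the hyperbola of the Display becomes

\[
s \times d = k, \qquad d = \frac{1}{\Delta}, \qquad\text{hence}\qquad s = k\,\Delta .
\]

On this reckoning the estimate "25 ± 10 feet" (with Δ = 20) is somewhat more than three times as secure as the estimate "25 ± 3 feet" (with Δ = 6), but correspondingly less than a third as definite: whatever is gained along the one axis is paid for along the other.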

By contrast, the ground rules of ordinary-life discourse are altogether different. Here we operate at the right-hand side of the diagram. When we assert in ordinary life that “peaches are delicious,” we mean something like “most people will find the eating of suitably grown and duly matured peaches a rather pleasurable experience.” Such statements have all sorts of built-in hedges and safeguards like “more or less,” “in ordinary circumstances,” “by and large,” “normally,” “if other things are equal,” and the like. But all such expressions serve as overt markers of expressive inadequacy. And what we have when they are at work in generalizations are not laws in the usual sense, but rules of thumb—a matter of practical lore rather than scientific rigor. In natural science, we deliberately accept risk by aiming at maximal definiteness—and thus at maximal informativeness and testability. But in ordinary life matters stand differently. After all, ordinary-life communication is a practically oriented endeavor carried on in a social context: it stresses such maxims as “Aim for security, even at the price of definiteness;” “Protect your credibility;” “Avoid misleading people, or—

even worse—lying to them by asserting outright falsehoods;" "Do not take a risk and 'cry wolf'." The aims of ordinary-life discourse are primarily geared to the processes of social interaction and the coordination of human effort. In this context, it is crucial that we seek to maintain credibility and acceptance in our communicative efforts—that we establish and maintain a good reputation for reliability and trustworthiness. In the framework of common-life discourse, we thus take our stance at a point far removed from that of a mathematically precise "science," as this domain was traditionally cultivated. Our concern is perforce not with the precise necessities but with the looser commonalities of things. To be sure, one has to come to terms with the question: How much imprecision is tolerable? This is clearly something that will depend on the detailed context of deliberation. But overall its extent will be considerable, because knowing at least something is an imperative need—even if that knowledge is imperfect. All the same, an imprecise answer to a question is often better than no answer at all.

3. THE CASE OF PHILOSOPHY

The crucial fact for present purposes is that in this matter of definiteness vs. security, as in others, philosophy stands on the side of everyday life. In its traditional concern for the "big questions" about humans and their status in the world's scheme of things it is not a technical discipline (like quantum theory or neurobiology) with an exotic specialist vocabulary of its own. And yet the issues that figure on its agenda are so large and complex, and the data we have are so tenuous in their bearing, that we have little realistic choice but to compromise definiteness (generality, precision, universality) for the sake of security (tenability, plausibility). If we are not content to join the skeptic in exiting the arena of deliberation empty-handed, we have to be prepared to be realistic about what the deliberations of philosophy can actually accomplish. But to achieve tenable answers to the deep and far-reaching questions that we pose in this domain, we simply have to be prepared to abandon an unrealistic demand for universality and necessity and settle for the more qualified and tentative suppositions that the data of experience are in a position to underwrite. In this domain,

we have to be prepared to do the best we can with the limited resources at our disposal. Foregoing all unrealistic demands for an unrealizable perfection in our philosophizing, we have to make the most we can of the possibilities that are, in a realistic sense of the term, actually available to us here and now. And so in the end philosophy is caught in a bind. The phenomena at issue in its characteristic questions are of a range and complexity for whose adequate treatment the language at its disposal is simply inadequate. The philosopher confronts a communicative paradox akin to that of the scientist who seeks to convey the phenomenology of subatomic physics in the language of ordinary life, conforming a vastly complex reality to the rough-and-ready conceptualisms of appearance. As long as philosophical theses and theories are framed in ordinary language, any claims to exceptionless universality will have to be qualified. For at this level of generality all universal claims will have their exceptions.

4. PHILOSOPHIZING AND THE DANGER OF ASKING TOO MUCH

Were philosophers willing to talk guardedly of what is so normally, ordinarily, and for the most part—as they almost never are—then their theses and theories would be on far firmer ground. Still, a philosophical standardism geared to the consideration of the general course of things has much to be said for it. But just why should we draw in our philosophical horns in such a manner? Why should one abandon the science-imitating universalist/necessitarian line of traditional philosophizing in favor of the cautious formulations of a more relaxed, normalistic approach? Primarily because we have to be realistic. For their rooting in the inherently normalistic concepts of everyday discussion requires philosophical issues to be addressed in cautiously qualified terms. Philosophy, after all, takes its departure from a concern for our workaday human affairs: even its concern for "the world" is (unlike that of natural science) anthropocentrically us-oriented, ultimately preoccupied with the bearing of the issues on our concerns—on our knowledge, our role, our prospects, etc. Accordingly, the general rules that can be laid down to

characterize our situation—be it in ethics, in epistemology, in metaphysics, or wherever—have to be geared to the general course of things, because unusual and unforeseeable confluences and complications can almost always intrude to upset the apple cart. At the level of our philosophical concerns, chaos and chance can and often do intervene to call off all the usual bets, abrogating the usual order of things to which our generalizations are—and must be—attuned. In philosophical contexts, we can (generally) do no better than to support theses regarding how matters stand in general with respect to the questions at issue; in this domain, strict generalizations are (generally) not cogently substantiable. Insofar as we want viable answers—insofar as security and tenability are goals of ours—we are well advised to proceed conservatively, staking our philosophical claims in a way that is cautious and qualified. We should be content with mere plausibility instead of definitive certainty1 and should rest content with semantical approximation rather than absolute precision.2 Our prospects of establishing rigorously universal theses are all too often unpromising in philosophy. Reluctant to face this fact, however, philosophers have generally striven to answer their questions in terms of claims regarded as universal, necessary, and a priori. Traditionally they look to the exact sciences—and especially the exact formal sciences, logic and mathematics—as their model. But as the history of the subject shows all too clearly, these programmatic ambitions have produced great problems. By asking too much, philosophers have often come to realize too little. Their demands for a conjoint realization of high definiteness and high security ask so much that they are in the end destined to failure. A not insignificant part of the reason for philosophical controversy and dissensus lies in the effective impossibility of giving adequate expression to the complex and convoluted phenomena at issue through the resources of everyday communication. The nature of philosophical issues is such as to pose the ever-present threat that if we will only be satisfied with theses that are precise, universal, and necessary, we shall wind up with nothing at all. In endeavoring to push the generality of their theses and theories beyond cautious and qualified limits, philosophers all too often undermine the tenability of their contentions.

Nicholas Rescher • Epistemic Merit

122

Historically, philosophers have generally tended to see philosophizing as a labor of pure abstract reason, holding with Spinoza that "It is not in the nature of reason to regard things not as contingent, but as necessary" (Ethics, II, 44). They construe philosophizing as committed to necessitarian aspirations by its very nature as a venture in rational inquiry. But the ample course of our experience with the discipline indicates that this position is altogether unavailing—that in philosophy, as elsewhere, reason without experience is blind. And once we accept this, and acknowledge that philosophizing too has an attunement to everyday experience by virtue of which its deliverances become to some extent contingent and vulnerable to the cold winds of experiential change, then we must also acknowledge that the deliverances of philosophy will not stand secure against novelty of circumstance, but will be fragile and defeasible in the light of the altered conditions unfolding in a world where chance, chaos, and complexity play a significant role. Consider just one example. Historically, positivism came to grief because its champions could no longer defend the distinctions pivotal to its articulation (analytic/synthetic, conceptual/factual, etc.) against the challenges and objections that could be—and were—made against such over-simple dichotomies. Both the supporters and opponents of positivism saw such distinctions as being absolutely hard and fast—universal and absolute. The idea of a standardistic softening of these dichotomies—of linking their applicability to normal issues and ordinary circumstances—did not occur to any of the parties to the dispute. But once this prospect arises, matters look very different. Take the analytic/synthetic distinction between what is true on conventional grounds and what is true on factual grounds. To investigate the tenability of "All (unbroken) knives have blades" it would be foolish indeed to inspect the knives in our kitchen drawers—or our museums. Linguistic usage suffices—if an implement does not have a blade we just do not call it a knife. Statements like "Knives have blades" are thus clearly analytic. On the other hand "No Minoan knives were made of steel" cannot be investigated on the basis of linguistic usage alone—we have to go out into "the real world" and examine artifacts. Such statements are clearly synthetic. The distinction involved—the line between analytic and synthetic—is clear enough for the standard situation of normal cases
where it is possible to understand and implement the issues in a more or less straightforward way. It is only if we seek to operate by means of oversimplifications that are to apply rigidly all across the board in an altogether hard-and-fast way that the analytic/synthetic distinction runs into trouble. It could, of course, be objected that a relaxation of demands is incompatible with the very nature of philosophy—that whether one likes it or not, many or most philosophers have in fact been committed to the pursuit of precision and strict universality. But, of course, it is one thing to desire something and another to obtain it. Its seeming weakness is actually the basis of philosophical standardism's strength. For given the complexity of the issues, it is clear that such an "empirical"—that is, experience-oriented—approach that rests satisfied with theses geared to how things stand generally and usually (rather than universally and necessarily) affords our best prospect for obtaining answers to our philosophical questions in a way that is at once informative and defensible. When we address those "big issues" of human nature and action in their natural and social context, our chances of securing viable answers are vastly improved by looking to the usual course of things rather than pursuing the will-o'-the-wisp of abstract general principles in a quest for strictly exceptionless universality. The aspirations of an approximative philosophy may be more modest, but they are for that very reason also much more realistic and realizable. If we indeed want answers to our philosophical problems we have to be prepared to accept them in the form in which they are in practice attainable.

5. WHY PHILOSOPHY MUST TOLERATE APPROXIMATION: THE RATIONALE OF STANDARDISM

Given that the ordinary concepts in whose terms we communicate about our everyday experiences cannot serve traditional philosophy's idealized demands, why not simply abandon them altogether in this domain? For good reason. To abandon them in favor of other concepts would have the serious drawback that in taking this course we effectively leave the traditional arena of philosophical discussion. For those "imperfect and imprecise" concepts provide the raw materials for philosophy and are an essential part of its concerns. The issues with
which our philosophizing begins, and for the sake of whose understanding and elucidation it carries on its work, are taken in the first instance from the realm of experience. Those presystematic concepts characterize the ways in which we conceive of the experience which is the stuff of life—and thus ultimately the stuff of philosophy as well. The concepts that figure centrally in philosophical discussions are always borrowed from everyday life or from its elaboration in science. The discussions of philosophy always maintain some connection to these pre- or extraphilosophical notions; they cannot simply rid themselves of those standard conceptions that are the flesh and blood of our thinking in everyday life. The philosopher's "knowledge" and "ignorance," his "right" and his "wrong" must be those of ordinary people—or at least keep very close to them. His "space" and "time" and "matter" must be those of the natural scientist. In abandoning the concepts of our pre-philosophical concerns in favor of word creations of some sort, the philosopher thereby also abandons the problems that constitute the enterprise's very reason for being. For the philosopher to talk in terms of technical concepts that differ from the ordinary ones as radically as the physicist's concept of work differs from the plain man's notion would in effect be to change the subject. And whatever appeal this step may have, it is not one that we can take within the framework of the professed objective of a clarificatory analysis of philosophical issues. It is neither candid nor helpful to pass off the wolf of concept abandonment as the sheep of concept clarification. It would be a deeply mistaken procedure to practice conceptual "clarification" in such a manner as to destroy the very items we are purportedly clarifying. Of course, philosophers are free to invent an artificial language with its own technical terminology. But if they are to use it for communicating with the rest of us, they will have to explain it to us, and this is something they have to do in a language that we can understand, in our language—the language of everyday life. Their gearing to the normal, ordinary course of things means that the concepts of everyday life—and those of philosophy with them—resist the introduction of surgical precision. They lack that merely abstract integrity of purely conceptual coherence that alone could enable them to survive in the harsh light of theoretical clarity.

The issues that constitute philosophy's prime mission are not—at bottom—technical matters but issues that arise in the conditions of everyday life and in the sciences; questions not, to be sure, within but rather about these domains of experience. Without them, philosophy would lose its point, its very reason for being. The technical issues of philosophy are always a means toward extra-philosophical ends. We address philosophical issues to resolve further issues that enable us to resolve yet further issues, and so on, until at last we arrive back at questions posed in the prephilosophical lingua franca of experience. What makes philosophy the enterprise it is, is its linkage to the presystemic issues of our experiential world, which are the very reason for being of our philosophical concerns. To be sure, there is the radical prospect of abandoning philosophy—a prospect which skeptics have urged upon us since classical antiquity.3 But this is an option whose price is high. The fact is that we humans have a very real and material stake in securing viable answers to our questions as to how things stand in the world we live in. In situations of cognitive frustration and bafflement we cannot function effectively as the sort of creature nature has compelled us to become. Confusion and ignorance—even in such "theoretical" and "abstruse" matters as those with which philosophy deals—yield psychic dismay and discomfort. The old saying is perfectly true: philosophy bakes no bread. But it is also no less true that man does not live by bread alone. The physical side of our nature that impels us to eat, drink, and be merry is just one of its sides. The long and short of it is that homo sapiens requires nourishment for the mind as urgently as nourishment for the body. We seek knowledge not only because we wish, but because we must. For us humans, the need for information, for knowledge to nourish the mind, is every bit as critical as the need for food to nourish the body. Cognitive vacuity or dissonance is as distressing to us as hunger or pain. We want and need our cognitive commitments to comprise an intelligible story, to give a comprehensive and coherent account of things. Bafflement and ignorance—to give suspensions of judgment the somewhat harsher name they deserve—exact a substantial price. The quest for cognitive orientation in a difficult world represents a deeply practical requisite for us. That basic demand for information
and understanding presses in upon us and we must do (and are pragmatically justified in doing) what is needed for its satisfaction. Knowledge itself fulfills an acute practical need. And this is where philosophy comes in, in its attempt to grapple with our basic cognitive concerns. The impetus to philosophy lies in our very nature as rational inquirers: as beings who have questions, demand answers, and want these answers to be as cogent as the circumstances allow. Philosophical problems arise when circumstances fail to meet our expectations, and the expectation of rational order is the most fundamental of them all. The fact is simply that we must philosophize; it is a situational imperative for a rational creature such as ourselves. Philosophy thus cannot simply abandon these loose-jointed prephilosophical everyday-life concepts that have emerged to reflect our experience. And its need to retain them militates powerfully on behalf of standardism. For those concepts and categories are deeply entrenched in our view of how things normally go in the world. There is no viable alternative to accommodating the presuppositional needs of our everyday concepts in the deliberations of philosophy. Given the origin and nature of its questions, philosophy just cannot escape coming to terms with the commitment of our concepts to the ordinary and normal course of things as experience presents it to us.4

NOTES

1. A statement is plausible when, while not definitively established by direct evidence, it is strongly supported by oblique and circumstantial considerations.

2. Statements p and q approximate one another (symbolically p ≅ q) when each is rendered highly plausible (i.e., virtually certain) when the other is given.
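A brief formal gloss may help fix ideas, though the apparatus is an illustrative assumption of mine and not the author's notation: suppose a graded plausibility measure Pl(x | y) for the plausibility of x given y, and let 1 − ε mark a hypothetical threshold of "virtual certainty." The definition just given can then be written as

\[
p \;\cong\; q \quad\Longleftrightarrow\quad \operatorname{Pl}(p \mid q) \ge 1-\varepsilon \ \ \text{and} \ \ \operatorname{Pl}(q \mid p) \ge 1-\varepsilon ,
\]

which also makes it plain that approximation, so construed, is a symmetric relation.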

3. For a more recent version see Richard Rorty, Consequences of Pragmatism (Minneapolis: University of Minnesota Press, 1982).

4. On these issues see also the author's Philosophical Standardism (Pittsburgh: University of Pittsburgh Press, 1994).

Chapter 12 PARTICULAR PHILOSOPHIES VS. PHILOSOPHY AT LARGE

1. THE PROBLEM

How does a particular philosopher's philosophy relate to philosophy-at-large? Philosophers cannot but acknowledge that their philosophy is always just one among others, and not only historically but doctrinally as well. There are always alternatives to one's own position. But how should a philosopher come to terms with this fact? One theoretically available reaction is simply to shut one's eyes to positions that differ from one's own, going one's own way and blithely ignoring the rest. But this scarcely makes sense. Philosophers have to face facts. And the fact in this instance is that there is an unavoidable plurality of alternatives out there: any philosophical position is but one among alternatives. So what is one to make of this situation?

2. A RANGE OF POSITIONS

In theory there are four available reactions to doctrinal diversity that one can take in the face of a plurality of alternatives:

(I) NIHILISM: None is correct. They reciprocally annihilate one another and none have any merit. The proper stance is that of a nihilistic skepticism that deems the whole philosophical project to be impracticable. (Here one's position is, in effect, that of an antiposition.)

(II) SYNCRETISM: All are correct. The totality of positions is to be conjoined and unified. The proper stance is that of an all-embracing syncretism that conjoins the entire manifold of alternatives.

(III) SELECTIONISM: Some are correct and some are not. A focal few are privileged via a preferential eclecticism which, as it were, cherry-picks some favored alternatives.

(IV) DOGMATISM: Only one is uniquely correct. We have here a thesis of eliminative exclusion holding that only a single alternative—presumably one's own—can be maintained as correct.

This inventory pretty well exhausts the range of theoretically available possibilities. What, then, is to be done in the face of this spectrum of alternatives? The trouble with (I) lies in its refusal to see the philosophical enterprise as a serious endeavor at resolving significant questions. And this is unrealistic because it is clear even on the very surface of it that the questions that figure on philosophy's agenda—questions about what is to be thought and done, about truth, value, and obligation—address the deepest and most far-reaching issues of the human condition. To abandon them is to abandon man's claim to be a rational animal. The problem with (II)—and (III) as well—lies in the fact that different philosophical doctrines adopt outright incompatible teachings, so that a conjunction of several becomes logically incoherent. In proposing incompatible answers to our questions they render the enterprise incoherent and unmanageable. The study of philosophy at large can certainly examine the range of possible answers to our questions. But to canvass alternative possibilities is not to resolve an issue. Only settling upon a particular, specific resolution can possibly manage to achieve that. If the task of philosophy is to resolve our puzzlements and perplexities—to help to "fix our beliefs," as Peirce put it—in the face of a complex world, then the far-ranging study of possibilities cannot manage to do the job. And so, in the final analysis, (IV) is the only acceptable alternative—its somewhat ungenerous label notwithstanding. The nihilistic skepticism of (I) is an unhelpful admission of defeat in the face of serious issues about serious matters. The overly generous approach of a pluralism along lines (II) and (III) is ultimately unavailing because different alternatives are in fact incompatible: in offering flatly
inconsistent responses to questions they render conjunction rationally unmanageable. Only alternative (IV) faces up to the challenge of providing serious answers to serious questions.

3. APORETIC PLURALISM

The ancient skeptics cast doubt on the trustworthiness of our senses because they saw them as unavoidably enmeshed in illusion and delusion—and in conflict as well. (Sight reports that the stick held at an angle under water is bent; touch says it is straight.) As they saw it, theory must chasten experience. However, the cruel fact is that theorizing itself yields contradictory results. In moving from empirical observation to philosophical theorizing, we do not leave contradiction behind—it continues to dog our footsteps. Philosophy itself reveals that contradiction is not confined to the domain of sensation but arises in reasoned reflection as well. For example, empiricists find themselves boxed into difficulty by the following quartet:

(1) All knowledge is grounded in observation (the key thesis of empiricism).
(2) We can only observe matters of empirical fact.
(3) From empirical facts we cannot infer values; ergo, value claims cannot be grounded in observation (the fact/value divide).
(4) Knowledge about values is possible (value cognitivism).

Aristotle was right in saying that philosophy begins in wonder and that securing answers to our questions is the aim of the enterprise. But of course we do not just want answers but coherent answers, seeing that these alone have a chance of being collectively true. The quest for consistency is an indispensable part of the quest for truth, and this quest is one of the driving dynamic forces of philosophy. Now in the preceding predicament there are four ways out of the bind of this particular cycle of inconsistency:

(1)-rejection: There is also a nonobservational, namely intuitive or instinctive, mode of apprehension of matters of value (intuitionism; moral-sense theories).

(2)-rejection: Observation is not only sensory but also affective (sympathetic, empathetic). It thus can yield not only factual information but value information as well (value sensibility theories).

(3)-rejection: While we cannot deduce values from empirical facts, we can certainly infer them from the facts, by various sorts of plausible reasoning, such as "inference to the best explanation" (values-as-fact theories).

(4)-rejection: Knowledge about values is impossible (positivism, value skepticism).

Committed to (1), empiricist thinkers thus see themselves driven to choose among the last three alternatives in developing their positions in the theory of value. Again, consider a further illustration. As the Presocratics worked their way through the relevant ideas, the following conceptions came to figure prominently on the agenda:

(1) Whatever is ultimately real persists through change.
(2) The four elements—earth (solid), water (liquid), air (gaseous), and fire (volatile)—do not persist through change as such.
(3) The four elements encompass all there is by way of extant reality.

Three basic positions are now available:

(1)-abandonment: Nothing persists through change—panta rhei, all is in flux (Heraclitus).

(2)-abandonment: One single element persists through change—it alone is the archê of all things; all else is simply some altered form of it. This uniquely unchanging element is: earth (atomists), water (Thales), air (Anaximenes). Or again, all the elements persist through change, which is only a matter of a variation in mix and proportion (Empedocles).

(3)-abandonment: Matter itself is not all there is—there is also its inherent geometrical structure (Pythagoras) or its external arrangement in an environing void (atomists). Or again, there is also an immaterial motive force that endows matter with motion—to wit, "mind" (nous) (Anaxagoras).

In such aporetic situations the compelling demand of mere logical consistency enjoins the selective exclusion of some theses whose acceptance would otherwise be tempting. And the fact that such a consistency-restrictive curtailment can invariably be effected in different ways means that a variety of different—albeit interrelated—doctrinal positions is going to confront us. To be sure, we could, in theory, simply suspend judgment in such aporetic situations and abandon the entire cluster, rather than trying to localize the difficulty in order "to save what we can." But this is too high a price to pay. By taking this course of wholesale abandonment we lose too much through forgoing answers to too many questions. We would curtail our information not only beyond necessity but beyond comfort as well, seeing that we have some degree of commitment to all members of the cluster and do not want to abandon more of them than we have to. Confronted by an aporetic antinomy, we recognize that something must give way. We cannot maintain everything as it stands. The chain of inconsistency must be broken, and the best place to break it is at its weakest link. And the strength and weakness at issue here is determined through the effort at optimal systematization—of preserving as much as one possibly can of the overall informative substance of one's cognitive commitments. Realizing that something has to give and that certain otherwise plausible conclusions must be jettisoned, we seek to adopt those resolutions that cause the least seismic disturbance
across the landscape of our commitments. But there are always going to be alternatives here. Philosophical problems root in conflict, dissonance, incoherence, incongruity. A prime mission of the enterprise is to smooth matters out. Philosophy tries to do for our cognitive landscape what the Roman road builders did for the physical landscape of their world: to develop smooth, straight ways that make it possible to get about more easily, with fewer checks and frustrations. And it is thought experimentation that affords our principal instrumentality here. It above all enables us to discover the best balance of systemic cost and benefit that we are able to obtain within the limited opportunities afforded us by the aporetic situation that is at issue.1

4. A FORCED CHOICE AMONG ALTERNATIVES

No more can one rationally adopt a philosophical position without considering the entire manifold of available alternatives than one can rationally purchase a house or a car without considering its context within the family of available alternatives. The salient consideration here lies in the elementary fact that a rational choice among alternatives is possible only when all of the available alternatives are taken into account. And this calls for a deliberative process that involves weighing the comparative costs and benefits of a series of mutually exclusive alternatives in the endeavor to identify that (or those) which offer the best balance of benefits over costs. Accordingly, the overall process that is called for here is a matter of cost-benefit optimization on the basis of thinking through the overall consequences of competing alternatives.
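To make the shape of this cost-benefit optimization concrete, here is a minimal illustrative sketch in Python. Nothing of the sort appears in the text: the function name best_resolution, the numerical "plausibility" weights, and the encoding of the aporetic cluster are all hypothetical assumptions, introduced only to display the logic of retaining the consistent subset that preserves the most overall plausibility.

from itertools import combinations

# Illustrative sketch only, not Rescher's own formalism. An aporetic cluster is
# modeled as a set of theses that cannot all be retained together; "breaking the
# chain at its weakest link" is modeled as keeping the consistent subset with
# the greatest total plausibility (the least "seismic disturbance").

def best_resolution(theses, weights, inconsistent_sets):
    """Return the maximal-weight subset of theses containing no inconsistent set."""
    best, best_weight = set(), float("-inf")
    for k in range(len(theses), 0, -1):
        for subset in combinations(theses, k):
            kept = set(subset)
            # Skip any retention that still harbors a full antinomy.
            if any(bad <= kept for bad in inconsistent_sets):
                continue
            total = sum(weights[t] for t in kept)
            if total > best_weight:
                best, best_weight = kept, total
    return best

# The empiricist quartet discussed above, with purely invented weights:
theses = ["(1)", "(2)", "(3)", "(4)"]
weights = {"(1)": 3, "(2)": 2, "(3)": 1, "(4)": 3}
aporia = [frozenset(theses)]  # the four theses are jointly untenable
print(best_resolution(theses, weights, aporia))  # keeps (1), (2), (4); drops (3)

On these invented weights the resolution jettisons (3), the weakest link, which corresponds to the (3)-rejection route canvassed earlier; a different weighting, answering to a different body of experience, would break the chain elsewhere.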


This sort of thing is clearly a matter of thought experimentation. That is, we contemplate accepting—one by one—each of the available prospects in the overall spectrum of possibilities and weigh out the resulting assets and liabilities on a comparative basis to determine the optimal resolution. In resolving an aporetic conflict we must break the chain of inconsistency at its weakest link. But this calls for an evaluative appraisal that any particular thinker can only make on the basis of the manifold of personal experience that is available. For it will, in the end, have to be a matter of assessing the extent to which the contentions at issue harmonize—or conflict!—with the experience of the individual involved. The key lesson of these deliberations is clear. The aporetic perspective indicates that every philosophical position is coordinated with others that are alternative to it. The choice among them is a matter of allocating priorities and effecting comparative evaluations among competing and conflicting contentions to determine where the weakest link lies in aporetic situations. And there is every reason to think that such evaluation will differ from individual to individual. However, the comparative evaluation at issue should be characterized as contextual rather than subjective. For subjectivity is a matter of individual personal taste. And this is not at issue here. Rather what is at issue is a matter of rationality-guided harmonization with the body of an individual's experience. This depends on the smoothness of fit between a certain contention and the body of a person's experience. And this is not something that a person chooses—let alone decides arbitrarily. It is rather a matter of what he finds as the result of a conscientious comparative analysis. The rational adoption of a particular philosophical position (over against its available alternatives) is primarily a matter of

1. coordinating this philosophical commitment with other philosophical, scientific, and informal commitments whose tenability is already in place, and
2. harmonizing this commitment with the overall background of personal experience.

Adopting a philosophical position is a matter of cost-benefit analysis: problems solved and difficulties removed or averted vs. new problems raised and new difficulties encountered. The task of philosophy, as Socrates clearly saw, is to work our way out of the thicket of inconsistency in which we are entangled by our presystemic beliefs. For the sake of sheer consistency, something one might otherwise like to keep must be abandoned—or at least qualified. And when this happens, philosophizing becomes a matter of cost-benefit optimization relative to one's overall systemic commitments. And to this extent the issue has a fundamentally inductive nature. For the resolution at issue will here as elsewhere have to stand in alignment with the body of evidence in hand as determined via a particular body of experience. There is not and cannot be any one-size-fits-all philosophy. For in the end taking a philosophical position is a matter of aligning one's commitments with the substance of one's experience. And in a world where different thinkers are differently situated as regards their experiential contexts, they are bound—rationally and logically—to come up with different manifolds of experience. And this is a matter not of indifferentist relativism but of rational contextualism. For relativism grounds difference in subjective inclination—in matters of taste and personal preference. It rules out considerations of correctness, validity, error. Contextualism by contrast is rational and objective. For the matter of which resolution is appropriate in particular circumstances is not arbitrary or subjective, but is subject to rulings of suitability and correctness.

5. THE COLLABORATIVE ASPECT

The overall work of the philosophical community at large is collaborative not as regards the devising of a single system, but rather as regards mapping out the domain of possibilities for doctrinal systems. The reason why a philosopher must study the positions of others is not for incorporation, to render his own system more inclusive, but for contrast. For a given position is not fully defined, and its ramifications and implications not adequately understood, except through its distinction and differentiation from alternatives. A philosophical position is
never adequately substantiated until there is an account of how it averts the problems and difficulties encountered by its alternatives. It is not until the range of the excluded possibilities is clarified that the reach of the included possibilities becomes clear. What marks a given individual's particular philosophy as interdependent with its larger environment in philosophy-at-large are four considerations among others:

• In general, a philosopher derives his question-agenda from the prior work of others, together with its conceptual framework.

• And the same holds for the aporetic context that canalizes his discussion of the issue. It is the deficiency of its alternatives that marks the superiority of a philosophical position.

• As often as not, the work of others brings grist to his mill by way of helping to map out a line along which issue-resolution can progress.

• By broadening the range of his vicarious thought experience it strengthens a thinker's experiential basis for assessing the strength of claims.

The fact is that an adequate philosophy can be developed only subject to a reflective engagement—a productive implementation—with the wider thought environment of philosophy-at-large. This basic fact is attested to by the actual history of the subject itself. Every major philosopher—from Plato and Aristotle to Whitehead and (even) Wittgenstein—has evolved his views through interaction, be it explicit or tacit, with the thought and work of other significant thinkers.

NOTES

1. On philosophical aporetics see the author's The Strife of Systems (Pittsburgh: University of Pittsburgh Press, 1985), as well as his Aporetics (Pittsburgh: University of Pittsburgh Press, 2009).

Chapter 13 ULTIMATE EXPLANATION

The "ultimate why question" is that which asks not just "why does the universe exist," but rather "why does the universe exist as it is: why is it that the nature of physical reality is as we find it to be?" Now for better or for worse this is a question that cannot be answered on scientific principles. And there is a simple and decisive reason why this is so. For scientific explanations by their very constitution as such must make use of the laws of nature in their reasoning. But this strategy is simply unavailable in the present case. For those laws of nature required for scientific explanation are themselves a part—an essential and fundamental part—of the constitution of physical reality. And they are thereby a part of the problem and not instrumentalities available for its resolution. The reality of it is that that (revised) "ultimate why question" confronts us with a choice. Either we dismiss that question as being unanswerable, inappropriate, and perhaps even "meaningless" (as logical positivists have always argued). Or we acknowledge that answering this question invites and indeed requires recourse to some sort of an extra-scientific, extra-factual mode of explanation—one that transcends the cognitive resources of natural science. And with this second alternative the options become very limited. For we here enter into the region of teleology, where there are just two available alternatives. On the one hand lies the teleology of purpose, which itself can in principle operate in two ways: either by the conscious purposiveness of an intelligent being (a creator deity), or by the unconscious finality of a natural impetus towards the creation of intelligent beings, given the survival-conduciveness of intelligence. And on the other hand, yet another, decidedly different approach envisions a teleology of value which proceeds to account for the nature of the world in axiological, value-involving terms as being for the
best (with respect to some yet to be specified mode of evaluative optimality). Accordingly, four different doctrinal approaches confront us with respect to the issues which that (revised) ultimate why question puts before us:

• dismissive positivism,
• theological creationism,
• anthropic evolutionism,
• evaluative optimalism.

Each option is available. And none is forced upon us by the inexorable necessity of reason itself. In the final analysis "You pays your money, and you takes your choice." But is the resultant resolution simply a matter of unfettered preference based on personal taste and inclination? By no means! Here as elsewhere rational choice must be based on the available evidence—and thereby on the deliverances of experience. So the question becomes: Given the sort of world that our overall experience indicates this one to be, what sort of explanatory proceeding seems best suited to account for this situation? At this stage, however, the experience at issue will no longer be only the observational experience of our (instrumentally augmented) human senses. Rather, in matters of the sort now at issue this evidence will be a matter not just of observation, but of the cumulative evidence of the aggregate totality of one's life experience. And of course this "experience" has to be construed in the broadest possible sense, including not only the observational but also the affective, not only the factual but also the imaginative, not only physical experimentation but also thought experimentation, not only the personal but the vicarious. The question is one of the extent to which one's experience creates a role for speculative, observation-transcending factors in evidentiation (the story of Doubting Thomas is paradigmatic here). If there is little or no room for affectively guided conjecture, dismissive
positivism is the way to go. And as the scope of tolerance increases one can move on to anthropic evolutionism, evaluative optimalism, and theological creationism, in just about that order. The question in the end is one of epistemic proprieties and policies in relation to the admission and evaluation of evidence. At this point the distinction between relativism and contextualism becomes crucial. With relativism, the matter is one of arbitrariness and indifference—sheer groundless preference is the order of the day here. With contextualism person-to-person variation occurs once again, not just because individuals differ in point of preference, but because they differ in point of circumstances and situation with regard to the available evidence. And while in the former case there is no requirement for evidential reason to go one way or the other, in the latter case there decidedly is. For the matter will in the end depend not on the individual's preference but on the individual's evidence as his experience determines it. And so while there will indeed be a lack of uniformity across the whole range of different individuals, nevertheless for given individuals, with their particular body of personal experience in place, there will, in all likelihood, be only one rationally acceptable and appropriate resolution in sight—only one "live option," to use William James' instructive expression. So here there will be no unique one-size-fits-all resolution—since the matter will depend crucially on the experiential evidence at one's disposal. But this is apt to be a matter not of the arbitrariness of relativistic indifference but rather of the rationality of situational contextualism. In the end, then, a single, unique ultimate explanation will not emerge as the inexorable product of evidential reason in a way that is independent of individualized experience. Yet while there indeed are alternatives here, they will, by rational necessity, fall within a very narrow range.1

NOTES

1. This chapter was originally published in the Journal of Cosmology in 2012.