New Approaches to Scientific Realism 9783110664737, 9783110662467, 9783110662672, 2020935878


English Pages 465 [466] Year 2020

Table of contents :
Novelty in Scientific Realism: New Approaches to an Ongoing Debate
I New Framework for the Realism and Anti-realism Debate
Scientific Realism: What’s All the Fuss?
Scientific Realism and Three Problems for Inference to the Best Explanation
Scientific Realism and the Conflict with Common Sense
II Approaches based on History and Scientific Realism
Evolving Realities: Scientific Prediction and Objectivity from the Perspective of Historical Epistemology
Do Cognitive Illusions Make Scientific Realism Deceptively Attractive?
III Logical Approaches in Realist Terms
Against Paraconsistentism
Stratified Nomic Realism
IV Logico-Epistemological Structural Realism and Instrumental Realism
Structural Realism: The Only Defensible Realist Game in Town?
Mathematical Language and the Changing Concept of Physical Reality
V New Developments on Critical Scientific Realism and Pragmatic Realism
Interdisciplinarity from the Perspective of Critical Scientific Realism
Pragmatic Realism and Scientific Prediction: The Role of Complexity
VI Realism on Causality and Representation
Realism and AIM (Action, Intervention, Manipulation) Theories of Causality
Is Physics Biased Against Alternative Possibilities?
VII Realist Accounts on Objectivity and Facts
Realistic Components in the Conception of Pragmatic Idealism: The Role of Objectivity and the Notion of “Fact”
“Heard Enough from the Experts”? A Popperian Enquiry
Realism in Archaeology – A Philosophical Perspective
VIII Realism and the Social World: From Social Sciences to the Sciences of the Artificial
A Structural Realist Approach to International Relations Theory
Objectivity and Truth in Sciences of Communication and the Case of the Internet
Index of Names
Subject Index


New Approaches to Scientific Realism

Epistemic Studies

Philosophy of Science, Cognition and Mind

Edited by Michael Esfeld, Stephan Hartmann, Albert Newen Editorial Advisory Board: Katalin Balog, Claus Beisbart, Craig Callender, Tim Crane, Katja Crone, Ophelia Deroy, Mauro Dorato, Alison Fernandes, Jens Harbecke, Vera Hoffmann-Kolss, Max Kistler, Beate Krickel, Anna Marmodoro, Alyssa Ney, Hans Rott, Wolfgang Spohn, Gottfried Vosgerau

Volume 42

New Approaches to Scientific Realism
Edited by Wenceslao J. Gonzalez

ISBN 978-3-11-066246-7
e-ISBN (PDF) 978-3-11-066473-7
e-ISBN (EPUB) 978-3-11-066267-2
ISSN 2512-5168
Library of Congress Control Number: 2020935878

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at

© 2020 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: Integra Software Services Pvt. Ltd.
Printing and binding: CPI books GmbH, Leck

Contents

Wenceslao J. Gonzalez
Novelty in Scientific Realism: New Approaches to an Ongoing Debate

I New Framework for the Realism and Anti-realism Debate

Peter Achinstein
Scientific Realism: What’s All the Fuss?

Alexander Bird
Scientific Realism and Three Problems for Inference to the Best Explanation

Howard Sankey
Scientific Realism and the Conflict with Common Sense

II Approaches based on History and Scientific Realism

Anastasios Brenner
Evolving Realities: Scientific Prediction and Objectivity from the Perspective of Historical Epistemology

Thomas Nickles
Do Cognitive Illusions Make Scientific Realism Deceptively Attractive?

III Logical Approaches in Realist Terms

Alan Musgrave
Against Paraconsistentism

Theo A. F. Kuipers
Stratified Nomic Realism

IV Logico-Epistemological Structural Realism and Instrumental Realism

John Worrall
Structural Realism: The Only Defensible Realist Game in Town?

Ladislav Kvasz
Mathematical Language and the Changing Concept of Physical Reality

V New Developments on Critical Scientific Realism and Pragmatic Realism

Ilkka Niiniluoto
Interdisciplinarity from the Perspective of Critical Scientific Realism

Wenceslao J. Gonzalez
Pragmatic Realism and Scientific Prediction: The Role of Complexity

VI Realism on Causality and Representation

Donald Gillies
Realism and AIM (Action, Intervention, Manipulation) Theories of Causality

Tomasz Placek
Is Physics Biased Against Alternative Possibilities?

VII Realist Accounts on Objectivity and Facts

Amanda Guillan
Realistic Components in the Conception of Pragmatic Idealism: The Role of Objectivity and the Notion of “Fact”

Anthony O’Hear
“Heard Enough from the Experts”? A Popperian Enquiry

Matti Sintonen
Realism in Archaeology – A Philosophical Perspective

VIII Realism and the Social World: From Social Sciences to the Sciences of the Artificial

Adrian Miroiu
A Structural Realist Approach to International Relations Theory

Maria Jose Arrojo
Objectivity and Truth in Sciences of Communication and the Case of the Internet

Index of Names
Subject Index


Wenceslao J. Gonzalez

Novelty in Scientific Realism: New Approaches to an Ongoing Debate

Abstract: Scientific realism plays a central role in the philosophico-methodological discussions on research. There are two main directions in the contributions made to scientific realism: the “internal” line and the “external” path. Following the first line, there are new visions of realism focused on central aspects of science: its semantic, logical, epistemological, methodological, ontological, axiological, and ethical components. When the route follows the second path, realism in science is seen as interrelated with realism in technology and as connected to a philosophical approach to society. Altogether, there is now a plethora of characterizations of scientific realism, and this paper presents the main contemporary versions of scientific realism. The analysis of the central tenets of the recent views on scientific realism serves to introduce this book, which offers novelty to the ongoing debate on scientific realism.

Keywords: novelty, scientific realism, new approaches, ongoing debate

1 Two Main Directions in Contributions to Scientific Realism

Scientific realism is at the core of the contemporary philosophical debate on science. This has been the case since Bas van Fraassen published his book The Scientific Image in 1980. He has developed a new approach to representation and models in science, which is an alternative to scientific realism or even a position antagonistic to realism in science (van Fraassen 2008). His views on scientific representation offered new ideas on how it should be characterized, and his conception of models showed a novelty that goes beyond other empiricist approaches of recent times. Both aspects – the characterization of scientific representation and the conception of models in science – belong to a deliberate attempt to forge a “structural empiricism,” an alternative to structural realism based on an elaborated version of empiricism (Gonzalez 2014).

Note: I am grateful to John Worrall for his remarks on a previous version of this paper, which I prepared at the Centre for Philosophy of Natural and Social Sciences (London School of Economics).



This criticism of scientific realism started a new debate in philosophy and methodology of science, which is still going on and which presents many faces. Moreover, this renewed interest in scientific realism has enlarged the field, which includes the development of new approaches in recent decades and the discussion of novel topics. In addition, there is an analysis of more sciences than before (e.g., in the sphere of the sciences of the artificial). Thus, there are novel philosophical conceptions of realism in science that go beyond the traditional boundaries (mainly views on objectivity, truth, and entities) and into new territories of discussion.1 These new approaches can be associated with two main directions: “internal” and “external.”

1.1 The “Internal” Line and the “External” Path

Following the “internal” line, which remains dominant in the field, there are new visions of realism focused on central aspects of science, such as its semantic, logical, epistemological, methodological, ontological, axiological, and ethical components, which can receive different interpretations within a realist framework. These aspects have led to philosophical discussions beyond the traditional emphasis on theoretical terms, the search for truth2 (as correspondence), and the status of entities (especially theoretical entities).3 There is also more interest in scientific practice. In this regard, it is generally assumed that science involves a vision of the world as well as values that influence scientific practice. These values can be cognitive, ethical, social, economic, etc. (Gonzalez 2013a).

On the “external” path, meanwhile, there lies even more novelty, because realism in science is seen as interrelated with realism in technology (mainly as knowledge, endeavor, and product) and as connected to a philosophical approach to society. This orientation is possible insofar as science is seen as a human activity – an undertaking that includes ethical values – rather than as a mere body of claims (see, e.g., Shrader-Frechette 2005). Thus, this human activity appears as interconnected with technology and society (on this issue, see Gonzalez 2005 and 2015a). In addition, this connection gives a more relevant role to applied science and the application of science,4 as well as to the sciences of the artificial understood as design sciences.5

Amid the new versions of scientific realism from an “internal” perspective, there is an increasing number of conceptions that see realism as compatible with some positions of other philosophical backgrounds, such as conventionalism, naturalism, instrumentalism, pragmatism, perspectivism, or even constructivism. Thus, these recent views emphasize what realism shares with these philosophical views, which include both alternatives to realism and antagonists (i.e., positions that, in principle, rest on tenets clearly opposed to the main realist claims), rather than highlighting what separates realism from them. Consequently, there are now more epistemological views on realism in terms of pluralism: regarding levels of description, realms of reality (micro, meso, and macro), interpretation of results, etc. These views try to avoid relativist or skeptical claims in order to preserve the possibility of objectivity and reference to reality (which commonly form the basis for discussing the problems of truth as well as the existence and characteristics of entities). Moreover, new strategies, such as those of divide et impera or selective realism,6 are used as methodological tools to guarantee at least a minimum of scientific realism in scientific theories.

Meanwhile, the new versions of scientific realism formed from an “external” orientation connect scientific creativity and technological innovation. Furthermore, scientific research is not then seen as mere discovery and evaluation of scientific theories (basic science), because there is also research on the knowledge needed for solving concrete problems, where prediction and prescription intervene (applied science).

1 The present analysis updates and enlarges the approaches of the thematic study made on scientific realism in Gonzalez (2006, pp. 1–28; especially, pp. 11–16). This study also presupposes the examination of the main versions of contemporary realism made in Gonzalez (1993). Both papers include extensive bibliographical information on scientific realism.
2 It is commonplace to maintain that “at the heart of any scientific realist position is a success-to-truth inference” (Vickers 2019, p. 571). Here we will see that the whole matter is more complex than this well-known view suggests.
3 The characterization of “scientific realism” by Richard Boyd is clearly insufficient to embrace the diversity of elements involved in it. See Boyd (1983, pp. 45–90; especially, p. 45). See also Putnam (1982b).
In addition, there is a use of scientific knowledge by agents for solving specific problems, according to the diverse contexts and circumstances (application of science). This application of science, within a given social milieu, is commonly made with the contribution of technology. This also requires philosophical attention (Gonzalez 2013b, pp. 11–40; especially, pp. 17–18). Thus, the focus of the analysis now moves from the traditional emphasis on basic science to discussing applied science and also the application of science, where technology and context have a more relevant role than in the case of the

4 On the distinction between applied science and the application of science, see Niiniluoto (1993, pp. 1–21; especially, pp. 9 and 19).
5 A pioneering work in this area is Simon (1996).
6 On this view, see Peters (2014).



theories of basic science.7 In addition, new disciplines are considered from the point of view of realism, such as the sciences of the artificial8 (among them, information science and the communication sciences; see Gonzalez 2017 and Gonzalez and Arrojo 2019), and there is now attention to some social disciplines, such as political science.9

1.2 A Plethora of Characterizations of Scientific Realism

As a consequence of these two directions, since the late 1980s there has been a plethora of characterizations of scientific realism, where more weight is still attributed to the “internal” line than to the “external” path. These contemporary versions of scientific realism include the following philosophical positions, among others:10 structural realism, critical realism, referential realism, entity realism, instrumental realism, socially embedded realism, constructive realism, some versions of scientific perspectivism (or perspectivalism), dispositional realism, convergent realism, pragmatic realism, selective realism, minimal realism, and the so-called “preservative realism.”

When an analysis is made of this plethora of possibilities, which are representative of the main options available in recent decades, several features require attention: a) Sometimes the philosophico-methodological distinctions between these versions of scientific realism are not very sharp, so there is some overlap between some of the positions; when this is the case, they commonly share a form of ontological realism. b) The approaches to scientific realism mentioned above can vary over the years, as happened to structural realism when the ontic version was proposed.11 c) Any version of scientific realism may

7 In basic science, in addition to “theories,” there are macro-theoretical frameworks, models and hypotheses. In one way or another, they are related to explanation or prediction (or both). Philosophers of science, in general, and scientific realists, in particular, have paid a lot of attention to these elements of basic science. So far, the applied sciences and the application of science have received less attention in this academic area.
8 On the case of economics as a science of the artificial, see Gonzalez (2008).
9 This book provides chapters analyzing scientific realism within those thematic spheres, especially in some of them.
10 There are also contemporary approaches, such as “agential realism,” which use the term realism but which appear to be closer to constructivism than to genuine realism. This seems to be the case of the agential realism developed by Karen Barad based on a generalization of Niels Bohr’s ideas. See Barad (1996).
11 Ontic structural realism was introduced by James Ladyman in contrast with epistemic structural realism; see Ladyman (1998, 2007 and 2011).



have, in principle, strong and weak versions in order to maintain its proposals and deal with the historical cases that are studied. Moreover, there are some thinkers who have moved from one realist position to another over the years, such as Philip Kitcher, who moves from the version of realism and cognitive naturalism in The Advancement of Science to the conception of modest realism and social naturalism in Science, Truth, and Democracy (Kitcher 2001b; see also Gonzalez 2011b and 2011a). In addition, there are authors placed in one conception or another according to the emphasis put on one realist aspect or another of their philosophy of science (such as Ian Hacking, who is sometimes associated with instrumental realism, whereas at other times he appears among the entity realist authors).12

Given the variety of realist approaches to science and the large number of options now available, it is not easy to detect any central tenets that are common to all forms of “scientific realism.” Even so, it seems to me that some tenets should be considered: 1) language in science can express an objective content regarding the world (natural, social or artificial), where meaning – commonly conceived as sense (Sinn) – and reality – the reference (Bedeutung) – can have an identifiable nexus;13 2) scientific knowledge should be oriented towards seeking truth (or at least truthlikeness), and so mere certainty – individual or social – is not good enough for a realist approach,14 insofar as it is interested in objectivity; 3) there can be scientific progress in scientific activity according to several criteria, among them those related to an improvement in the search for truth or in the levels of truthlikeness; 4) the world has, in principle, an existence independent of the human mind, but it is open to human knowledge, and its reality – the properties and processes of the world – can be intelligible to scientists through research; and 5) values in science (mainly, cognitive values) are related to scientific activity, and they can have an objectivity and, thus, are not merely reducible to a social construction.15

12 See, in this regard, the analysis made in Radder (2012, p. 176).
13 The problems regarding the reference of terms are discussed at length in Gonzalez (1986).
14 Certainty is understood here as the subjective or intersubjective adhesion of the human mind, either as an individual or as a group, to a knowledge claim.
15 On additional details of these tenets, see Gonzalez (2006, pp. 12–16).



2 Contemporary Versions of Scientific Realism

Within the philosophical proposals on science of a realist kind, some have received more attention among philosophers of science and, from time to time, among scientists. Thus, the views expressed by John Worrall, Philip Kitcher or Ian Hacking have received attention both within the profession and among scientists interested in the philosophical aspects of scientific research in order to understand or improve it. There are other views that have been debated at international conferences and in many publications and, in one way or another, they are represented in the present volume. These criteria explain the presence here of the realist conceptions pointed out in the previous paragraph, which begin with structural realism because it is relevant in the profession and also attracts the attention of scientists.

(i) Structural realism, of which John Worrall’s influential conception of science is a leading example,16 includes a criticism of Kuhnian “scientific revolutions” and the search for a characterization of scientific progress where the role of logic could be relevant (Worrall 1998). Structural realism has ramified into a variety of options. Moreover, structural realism has become de facto a school of thought (mainly European),17 which is based on the “importance of the mathematical structure of scientific theories rather than their ‘content’ – the claim being that while the ‘content’ may change, apparently quite radically, during episodes of theory-change, the structure of the older theories is very largely preserved.”18 Worrall maintains that “structural realism provides a ‘synthesis’ of the main pro-realist argument – the ‘No Miracles Argument [NMA]’, and the main anti-realist argument – the ‘Pessimistic Induction’ ” (Worrall 2009, p. 1). He considers that the intuition of the NMA provides support for structural realism.
Thus, it would be a miracle if scientific theories that enjoyed success – mainly, predictive success – did so baselessly, i.e., if what they claim were not at least approximately correct. In this regard, he thinks that “what are successful or not, what elicit the ‘no miracles intuition’ or not, are individual theories – such

16 The starting point of this contemporary approach is in Worrall (1989b). His views are particularly focused on physics: Worrall (1985 and 1989a). In addition, Worrall has also paid attention to medicine: Worrall (2006).
17 Following the list of its proponents, it is basically a European philosophy of science movement, and the lines of research include a variety of options: “John Worrall, Ioannis Votsis, Steve French, Angelo Cei, James Ladyman, Simon Saunders, Michael Esfeld, Vincent Lam, Katherine Brading, Mauro Dorato, Dean Rickles, Fred Miller, and – exceptions to prove the rule – Anjan Chakravartty and John Stachel” (Lyre 2010, p. 381). On how to understand structure in this conception, see Arenhart and Bueno (2015).
18 Worrall, J., personal communication, 12.2.2016.



as Fresnel’s wave theory of light or Quantum electrodynamics. In so far as there is any sort of general, or ‘wholesale’, case to be made for scientific realism it is simply as the union of a whole set of specific cases for individual theories (. . .). Also it is a mistake to defend scientific realism as a thesis about only ‘most’ successful scientific theories” (Worrall 2009, p. 9). Meanwhile, he criticizes the pessimistic meta-induction, which claims that we should infer that our current theories are false based on the fact that accepted theories of the past turned out to be false.

From a historical point of view, Worrall recognizes that his vision of structural realism is rooted in the conception of science of Henri Poincaré, who used to be considered a conventionalist thinker.19 The problem of theory-change was already addressed by Poincaré, but structural realism is commonly seen as a “post-Lakatosian” option focused on theory-change in science: this view is originally based on case studies from the history of physics. Although Worrall’s approach to scientific theories is basically logico-epistemological, which in principle has no particular interest in the philosophico-methodological role of reference, structural realism today includes clearly ontological (“ontic”) versions.20

(ii) Critical realism regarding science is defended by Ilkka Niiniluoto. It includes five central theses: R1) reality is ontologically independent of the human mind; R2) truth is a semantic relation between language and reality; R3) truth is a central epistemic aim of science; R4) it is possible to have a methodological approach to truth, and although truth is not easily recognizable, we can make rational assessments of cognitive progress towards it; and R5) the practical success of science can be explained because scientific theories are in fact approximately true.21 In this regard, Niiniluoto sees his colleague Raimo Tuomela, who has defended realism in the social sciences, as an “internal realist” (cf.
Niiniluoto 1999, p. 11). Thus, he considers that Tuomela is not far from the view held by Hilary Putnam at one point in his philosophical trajectory (see Putnam [1982a] 1990; and 1987). Niiniluoto also thinks that the Natural Ontological Attitude (NOA) can be considered a variant of realism. NOA assumes that the results of science can be accepted in the same way as the evidence of our senses, and it is defended

19 According to Stathis Psillos, Henri Poincaré was a conventionalist about the most general theoretical principles. But he thinks that this is compatible with the following claim: insofar as any theoretical facts are scientifically knowable, they concern structural relations among otherwise unknowable entities. See Psillos (1996).
20 See Frigg and Votsis (2011). Nowadays this orientation of scientific realism might be the one on which the largest number of philosophers of science are working.
21 Cf. Niiniluoto (1999, p. 10). He sees quite different thinkers as “critical realists,” such as Karl Popper, Wilfrid Sellars or Richard Boyd. See Niiniluoto (1999, p. 11).



by Arthur Fine as an alternative to realism and anti-realism (Fine 1984). Niiniluoto’s realist interpretation of NOA is based on truth and reference: a) NOA accepts as a core position that the results of scientific investigations are true, on a par with more homely truths; b) NOA treats truth in the usual referential way, which seems in tune with a Tarskian view; and c) there is a commitment in NOA to the existence of the individuals, properties, relations, processes, etc., referred to by the scientific statements that we accept as true (Niiniluoto 1999, p. 19).

(iii) Referential realism can be considered Stathis Psillos’s version of scientific realism, insofar as he relies on natural kinds with a neo-Aristotelian tone and sees reference as the crucial point, i.e., what corresponds to science’s search for truth. a) Semantically, he defends that “theoretical assertions are not reducible to claims about the behaviour of observables, nor are they merely instrumental devices for establishing connections between observables. The theoretical terms featuring in theories have putative factual reference” (Psillos 1999, p. xix). b) Epistemologically, Psillos shares with Jarrett Leplin a special recognition of the role of prediction in scientific realism,22 because he holds that mature and predictively successful scientific theories are “well-confirmed and approximately true of the world” (Psillos 1999, p. xix). c) Ontologically, he maintains that “the world has a definite and mind-independent natural-kind structure” (Psillos 1999, p. xix). Consequently, Psillos is particularly critical of Bas van Fraassen’s “constructive empiricism.” They disagree on models and representation: Psillos sees them as truth-oriented, whereas van Fraassen conceives them as empirically grounded within a pragmatic-intentional framework.
Thus, although both philosophers take measurement to be a vehicle of scientific representation, the analysis of historical cases (such as Jean Baptiste Perrin’s work on Brownian motion) leads them to disagree on the role of instruments as means for representation (Psillos 2014).

In a different way, referential realism is the vision of Hans Radder, who is not sympathetic to the notion of truth as correspondence. His view is “based on a detailed account of the experimental dimension of science, with its particular interaction of material realization and theoretical description. Furthermore, it takes into account the phenomena of conceptual-theoretical discontinuity and formal-mathematical continuity in the historical development of the sciences” (Radder 2012, p. 172).

22 Leplin’s approach to realism can be seen in Leplin (1997 and 2004).



Radder considers that his approach provides an epistemological criterion for the reference of the relevant scientific terms as well as for the theoretical description of scientific experiments.23 On the one hand, referential realism “aims to go beyond the linguistic turn: it exploits the materially realized interactions between scientists, instrumentation and the world, and hence it is able to explain how scientific terms refer to elements of a human-independent reality” (Radder 2012, pp. 175–176). On the other hand, there is a genuinely intertheoretical coreference, i.e., “a criterion for coreference between conceptually discontinuous theories” (Radder 2012, p. 176).

(iv) Entity realism, which is defended by Ian Hacking, is based on the rejection of the primacy of the idea of social construction over independent-of-the-mind reality. This entity realism accepts that entities in physics are real, instead of being mere constructed facts inferred from theoretical models.24 Moreover, Hacking’s book The Social Construction of What? goes further than that (Hacking 1999). He deals with a large number of cases from the natural and social sciences, and he offers lucid criticisms of the relativist epistemological, methodological, and ontological theses associated with recent versions of social constructivism. Hacking has insisted on the philosophico-methodological limitations of the social constructivist approaches. His main target has been what can be called “socially constructed” and, especially, what the task of science regarding that area of reality is. It seems clear that Hacking does not adopt a classic metaphysical realism or a naive scientific realism. His entity realism is a conception linked to a kind of critical realism, insofar as he does not accept that there is an essence to be discovered, and he also criticizes the possibility of a description reflecting an inherent internal structure of things.
But he clearly admits the knowability and intelligibility of processes and objects (i.e., of reality other than the researchers’ own). Thus, he maintains the ontological character of natural phenomena and social events. This is also the case regarding quarks: “quarks, the objects themselves, are not constructs, are not social, and are not historical” (Hacking 1999, p. 30).

23 “Reference” is then a kind of relation, a “pointing to” (Verweisung), rather than a referent.
24 Sometimes Ian Hacking is associated with instrumental realism due to his analysis in Representing and Intervening (Hacking 1983), mainly chapters 1 and 3. But it seems clear to me that he holds an entity realism in his philosophy of science, especially in his important book on social construction.



(v) Instrumental realism, which follows different lines of research,25 such as James Woodward’s account of experiments and causal inference,26 is sometimes a “minimal realism” of an ontological kind that plays a supporting role for some epistemological conceptions (naturalist, pragmatist, etc.). Woodward is open to forms of realism, such as a kind of instrumental realism connected with experiments (Woodward 2003b), or a version of modest realism, understood as an ontological support for causal relationships and scientific explanations.27 Woodward’s interventionist view of causality, where there are no anthropomorphic elements,28 includes epistemological and methodological elements of several sorts (naturalist and anti-naturalist) with a background of minimal ontological realism.29 In addition, he maintains that “the important and philosophically neglected category of ‘natural experiments’ typically involves the occurrence of processes in nature that have the characteristics of an intervention but do not involve human action or at least are not brought about by deliberate human design” (Woodward 2003b, p. 94). These natural experiments involve a minimal ontological commitment of a realist kind, which supports their objectivity.

(vi) Realism of science as socially embedded has been supported by Philip Kitcher in recent decades (Kitcher 2001b, 2011b and 2011c). Over the years, he has shifted his philosophico-methodological position on realism. Initially, in his book The Advancement of Science, he defended realism based on the objectivity of science. He wanted to overcome the “Received view” as well as Larry Laudan’s historiographical outlook on scientific progress (Kitcher 1993a).
Later on, Kitcher developed an approach where science is seen in the context of democratic society and where the search for truth of a “well-ordered” science requires considering that scientific activity is immersed in the social milieu.30 Kitcher’s philosophical trajectory is an interesting case of the contemporary realist approach. From the beginning, he has had a commitment to naturalism, which reappears through three influential lines of research of recent decades:
25 The main characteristics of instrumental realism can be found in Baird (1988). On the differences between instrumental realism and other kinds of realism, mainly the referential one, see Radder (2012, pp. 176–178).
26 Woodward fits several of the theses of instrumental realism pointed out by Davis Baird. Thus, he insists on intervention and manipulation as a central procedure for making contact with the things of the world, and he accepts a co-reference between earlier and later beliefs in the course of the development of science.
27 See Woodward ([2003a] 2005, pp. 118, 120, and 202).
28 Cf. Woodward ([2003a] 2005, p. 98).
29 See Gonzalez (2018, pp. 4–70; especially, pp. 5–32).
30 On the evolution of his philosophical approach, see Gonzalez (2011b).

Novelty in Scientific Realism: New Approaches to an Ongoing Debate


the “cognitive turn,” the realist conceptualizations, and the “social turn,”31 which is open to pragmatism (Kitcher 2011a). Thus, a) Kitcher’s philosophy and methodology of science has an epistemological view based on the relevance of the knowing subject (Kitcher 1993a, p. 9), where his naturalistic approach is related to the “cognitive turn,” insofar as his epistemology is built upon the knowing subject (i.e., in tune with cognitive science); b) Kitcher has a methodological perspective which is particularly interested in “scientific realism,” since scientific progress is thought of on realist grounds: there is an advancement in contents that relies on objectivity, and scientific processes are truth-seeking;32 and c) he adopts a social concern about science, because he seriously considers the relevance of democratic values to scientific activity (Kitcher 2011b).

(vii) Constructive realism is the type of realist approach developed by Theo Kuipers (2000, pp. 1–10), who is interested in the natural sciences but also in the sciences of design. In his view, besides the existence of a world independent-of-the-mind, he maintains an epistemological realism connected to the idea of truth approximation. He prefers qualitative and comparative concepts and dislikes quantitative characterizations of truthlikeness, as they lack intuitive support from “scientific common sense” (Kuipers 2000, p. 316). For Kuipers, actual truthlikeness is closeness to what is true in reality, whereas nomic truthlikeness is closeness to what is physically possible. In addition, he thinks that the conception of inference to the best explanation can be understood as claiming that the most successful theory available is also the theory closest to the truth (Kuipers 2000, p. 171). This comes with “a very strong dominance condition: a more truthlike theory has to retain all the successes of the less truthlike theory, otherwise they are incomparable. This feature is repeated in the definition of empirical progress” (Niiniluoto 2001, p. 776).

(viii) Some versions of scientific perspectivism, understood as an epistemological conception open to several methodological approaches, are also compatible with a realist ontology. In this regard, a combination in science of realism and perspectivism does not cause any genuine trouble for a contemporary realist account,33

31 The main features of all of them are developed in Gonzalez (2006, pp. 1–27; especially, pp. 4–16).
32 This can be seen in the period of Kitcher (1993b); and later on in Kitcher (2001a).
33 Perspectivism is different from traditional realism. This is emphasized in Giere (2006). “Scientific realists would certainly agree with Giere that observation and detection are always from a specific vantage point afforded by the scientific instrument or set-up in question. But they would also resist the conclusion that the perspectival nature of scientific observation affects somehow the nature of the facts observed” (Massimi 2012, pp. 29–30).



insofar as it is accepted that reality, in general, and a specific object, in particular, can have different sides or ways of presenting that reality or concrete item, and these can be represented by different models. Reality can be represented in realist terms if it is accepted in a manner similar to Gottlob Frege’s views on objects, which can have various modes of givenness, i.e., ways of presenting themselves.34 Furthermore, there are different ways of knowing the world independent-of-the-mind, and the aspects known of the world (natural, social, or artificial) can vary according to the angles from which they are viewed. But to consider that all scientific knowledge is pure perspective – in the sense of unstable – leads directly to a relativist viewpoint. Moreover, if “pure perspective” is understood as involving a completely unreliable knowledge, then there is a road to skepticism. What is compatible with ontological realism is an epistemological version of perspectivism that, from different angles and distinct levels, is open to properties of the world that might be accessible to any researcher.

(ix) Dispositional realism relies on the existence of a “disposition,” understood as a modal or causal property, something that confers abilities to behave in certain ways under certain conditions. This is the idea of Anjan Chakravartty. He distinguishes between a causal account of properties and a structural account of them, where the emphasis is on symmetry, i.e., where some features remain unchanged (they are invariant) when a theory (or an object) has changed. In his book A Metaphysics for Scientific Realism (Chakravartty 2007), which has some roots in ideas defended by Sydney Shoemaker (1980), there are causal properties in the world that are identified by the dispositions they confer on objects.
Based on this ontological version of realism, there is a criticism of perspectivism from the realist conception of the “dispositional identity thesis”: 1) to choose a selected range of inputs does not eo ipso license any perspectivalist conclusion about the outputs; 2) the conditioned character of the output does not make it perspectival in any relevant sense; and 3) perspectival facts are frequently explained away by the multi-faceted dispositional character of the causal properties of the target system (Massimi 2012, pp. 31–32). This endorsement of a causal dispositionalism against the perspectival claims about the target system assumes that reality (i.e., properties such as mass, charge, and spin) is causal or dispositional in nature and does something under the proper circumstances.

(x) Convergent realism, which is associated with a convergence toward truth in the succession of scientific theories, is one of the targets of Larry Laudan’s initial historiographical conception.35 This position is also criticized by Worrall if the

34 Cf. Frege (1892). See also Dummett ([1973] 1981), ch. 1, pp. 7–40; especially, p. 8.
35 Cf. Laudan (1981). On the other side is Hardin and Rosenberg (1982).



convergent realist maintains that, as a matter of fact, the proportion of true scientific theories among all those currently accepted is greater now than at earlier stages in science. Thus, he regards as a mistaken version of convergent realism that kind of accumulation, i.e., the claim that, regarding the development of science, the main feature has been the articulation of scientific theories with an ever greater proportion of them as approximately true (Worrall 2009, p. 37). According to Worrall, if the historical context is related to “mature” science, then “a sensible convergent realist does not interpret her view that theories now are ‘less false’ than they used to be as meaning that the proportion of accepted theories that are false is less now than it was at earlier ‘stages’ in science. Instead, the sensible convergent realist accepts that even if all currently accepted (again fundamental) theories are false, the improvement is constituted by the fact that each fundamental theory is ‘less false’ than its predecessors” (Worrall 2009, p. 44). This version of convergent realism is more in tune with the historicity of science.

(xi) Pragmatic realism, as I conceive it, is a broader vision of scientific realism than other conceptions, insofar as it takes into account more components of what science is and ought to be than other approaches to scientific realism: a) It does not reduce the philosophico-methodological analysis of scientific activity to descriptive terms, because it includes a prescriptive dimension on how to improve science. b) Pragmatic realism calls attention to three realms – basic science, applied science, and application of science – instead of just one or two of them.
c) This conception is an alternative to Nicholas Rescher’s “pragmatic idealism.”36 Thus, although it shares his interest in scientific rationality and the conception of science as our science,37 pragmatic realism includes a number of aspects that Rescher does not really consider,38 and these are analyzed from a realist rather than from an idealist orientation. Among other features, which are connected to the previous ones,39 pragmatic realism is focused on the characterization of science as human activity
36 Rescher has developed pragmatic idealism as a system of thought, especially in three classical books (1992a; 1993; 1994). In recent times we have, among others, his book Rescher (2014).
37 Cf. Rescher (1992b). This vision is accompanied by a scientific rationality that includes evaluative rationality and a view of the economics of research inspired by C. S. Peirce.
38 Pragmatic idealism defends realist notions such as fact as a mind-independent event and truth as correspondence between a statement and a reality. Meanwhile, pragmatic realism is open to the relevance of construction for the sciences of the artificial and the social sciences.
39 Additional details are in Gonzalez (2020). In addition, there are combinations of realism and pragmatism in a number of thinkers, such as Philip Kitcher or Ilkka Niiniluoto. There is also a sociological approach that uses this expression and is endorsed by Andrew Pickering, but his views are different from a “pragmatic realism” that is an alternative to “pragmatic idealism.”



and the role of objectivity, rather than the common realist emphasis on the primacy of knowledge and the insistence on the problem of truth. Thus, it gives a relevant role to reference, both as a relation and as a referent. Pragmatic realism also defends the view that science should pay special attention to prediction instead of being merely focused on explanation (in basic science) or directly oriented to prescription (in applied science). In addition, the application of science is also relevant, insofar as the contextual use of science is important. Methodologically, pragmatic realism considers that methodological universalism and “methodological imperialism” are not compatible with the diversity and complexity of the objects and problems of scientific activity.

(xii) Selective realism is based on the idea that theoretical systems in science can be divided into numerous constituents. In this regard, while past systems have been considered successful when taken as a whole, they failed to be approximately true according to our present standards. Even so, certain elements contained in those theoretical systems can be retained. Thus, the realist needs to specify particular conditions for identifying those theoretical constituents that can be maintained (cf. Lyons 2006, p. 538). This methodological approach to a realism of divide et impera is what Kitcher, as a criticism of Laudan (1984a, pp. 116–117), suggested in The Advancement of Science, when he proposed to “distinguish between those parts of the theory that are genuinely used in the success and those that are idle wheels” (Kitcher 1993a, p. 143, note 22). What Psillos develops as “the divide et impera move” shares the criticism of Laudan, and he is interested in theories that have led to successful predictions, thinking of what really fuels the derivation of the successful prediction (cf. Psillos 1999, pp. 108–114). For Psillos, what is required to successfully perform the divide et impera move “lies in the careful study of the structure and content of past genuinely successful theories” (Psillos 1999, p. 110). In this regard, what he thinks is needed are careful case studies that include two ingredients: (a) to be able to identify “the theoretical constituents of past genuine successful theories that made essential contributions to their successes;” and (b) to be able to “show that these constituents, far from being characteristically false, have been retained in subsequent theories of the same domain” (Psillos 1999, pp. 110–111).

(xiii) Minimal realism is assumed by some philosophical views on science that aim to guarantee that they deal with ontological objects or processes that can exist independently of the mind of the researcher, i.e., something that is not a hallucination, a fiction (in the sense of a mere imaginary construction)




or a nonexistent entity. This view is accepted by those realists who want to have common ground with empiricists and naturalists who stress the role of epistemology and who, at the same time, want to agree with the scientists of the empirical sciences, who generally accept that they do research on properties and processes in the world rather than develop mere epistemological schemes about a constructed realm.40 Philosophically, minimal realism wants to avoid conceptions like George Berkeley’s claim that esse est percipi (to be is to be perceived) regarding knowledge of the world, or proposals of the kind of “scientific” phenomenalist view accepted by Ernst Mach, where phenomena in the end are just descriptions of the sensations perceived by the observer.41 Thus, minimal realists want to avoid subjectivism in scientific knowledge, a view that does not fit the way scientific activity is described by those who practice it in the empirical sciences (mainly the natural sciences but also the social sciences and the sciences of the artificial).

(xiv) “Preservative realism” is the expression used by Hasok Chang to emphasize that there have been “enough elements of scientific knowledge preserved through major theory-change processes and (. . .) those elements can be accepted realistically” (Chang 2003, p. 902). But his characterization involves quite different sorts of scientific realism, such as John Worrall’s structural realism (cf. Worrall 1989b), Philip Kitcher’s initial approach to realism in science, which includes “working posits” that are immune to the pessimistic induction,42 and Stathis Psillos’s realist vision of the caloric historical case (Psillos 1999, pp. 115–130). Certainly, “preservative realism” is a rather confusing expression, because the so-called “preservation” varies completely from the insistence on mathematical structures by Worrall, whose emphasis is explicitly criticized by Psillos (1999, pp. 146–161), to Kitcher’s vision of the advancement of science based on objective knowledge, which he thereafter refocused due to a connection of science with

40 De facto, “minimal realism” is used as a philosophical conception rather than as a mere description of the level of acceptance of a scientific realist stance.
41 Ernst Mach maintains that all empirical statements, when they are in a scientific theory, are capable of being reduced to statements about sensations. According to Frederick Suppe, Mach’s analysis of sensations “tries, rather unsuccessfully, to develop this approach into an analysis which construes the principles of science as nothing but abbreviated descriptions of sensations. His lack of success in carrying out this program stems partially from the fact that scientific principles contain mathematical relationships not reducible to sensations alone,” Suppe (1974, p. 10).
42 Cf. Kitcher (1993a, p. 149). Chang maintains that “the basic idea of the pessimistic induction was not original to Laudan (it was at least implicit in Thomas Kuhn’s discussion of incommensurability, if not also in other, earlier works),” Chang (2003, p. 902).



social needs, and to Psillos’s ontological characterization of realism in science, which is connected with causes and the configuration of nature in terms of kinds. Certainly, these diverse conceptions share a criticism of Laudan’s emphasis on the discontinuities shown in the historical record of science, but this critique has been made very frequently over the years, and it can be made from different philosophical viewpoints.

3 From Recent Views on Scientific Realism to the Present Book

Contemporary approaches to scientific realism can work at three philosophico-methodological levels: 1) science as such, insofar as the analysis is valid for any science (mainly the empirical ones); 2) a group of sciences, like the natural sciences, the social sciences, or the sciences of the artificial; or 3) a specific science, like physics, history, political science, or the communication sciences. This volume includes remarks at the three levels, according to the degree of generality of the analysis, the kind of problem discussed, and the limits involved in the proposal made.

Regarding the first level of analysis – and, eventually, the other two – one of the dominant features of the discussion is that the philosophico-methodological focus of attention is on three main topics: (i) the no-miracles argument (NMA),43 (ii) the pessimistic induction (PI), and (iii) the inference to the best explanation (IBE). NMA has its roots in Putnam, who defended that “[scientific] realism is the only philosophy [of science] that does not make success in science a miracle” (Putnam 1975, p. 73). The pessimistic induction was proposed by Laudan as an explicit criticism of the realist view in terms of truthlikeness, based on an interpretation of the past.44 Regarding the inference to the best explanation, Peter Lipton is a key figure, who favors the account that affords the best explanation of a phenomenon: the inference can be accepted as correct when it is based on evidence that meets certain criteria.45 These three topics are discussed in this book.

43 How to characterize the argument is discussed in Worrall (2009, pp. 3–34).
44 Looking at the history of science, since most past theories “have been based on what we now believe to be fundamentally mistaken theoretical models and structures, the realist cannot possibly hope to explain the empirical success such theories enjoyed in terms of the truth-likeness of their constituent theoretical claims” (Laudan 1984b, pp. 91–92). On his philosophical approach, see Gonzalez (1998, pp. 5–57) and Gonzalez (1999, pp. 105–131).
45 Cf. Lipton ([1991] 2004, pp. 64–65). From a pragmatic idealist view, Rescher makes a criticism of this IBE and offers some historical precedents of IBE, cf. Rescher (2019).



Although the emphasis in the contemporary versions of scientific realism is mainly on the epistemological and ontological components, as in the discussions on the three topics already mentioned – and, commonly, at the three levels pointed out – there are realist accounts based on each component of a science, such as language, structure, knowledge, methods, activity, ends, and values.46 Thus, the realisms pointed out in this paper include features of realism in the different aspects of analysis involved (i.e., semantic, logical, epistemological, methodological, ontological, axiological, and ethical). In one way or another, scientific realism deals with criteria of identity of the real (natural, social, or artificial), routes for the identification of properties, processes, or objects that are independent-of-the-mind, and patterns for the reidentification of reality after a theory-change.

Commonly, these elements of realism – identity, identification, and reidentification – are associated with the philosophico-methodological issues concerning objectivity (its characteristics and accessibility), truth (its possibility and the approximation to it, or truthlikeness), and entities (their existence, their status as processes or objects, and their recognizability after changes). They are explicitly mentioned in a number of papers in the present book. Obviously, the main issue is which philosophico-methodological approach fits better with what scientific activity is de facto and with how it ought to be in order to solve the problems of science (basic, applied, or its application).47

Placed in this philosophico-methodological setting, this book offers new approaches to scientific realism, which include novel criticisms as well. The focus of this volume is not on adding new versions of scientific realism per se or on enlarging those already available.
Thus, within the space available, the aims are to some extent different: first, to present analyses that make it possible to consider whether these new approaches are better than the philosophico-methodological alternatives to realism (naturalism, pragmatism, conventionalism, instrumentalism, perspectivism, etc.) or its usual antagonists (relativism, skepticism, idealism, constructivism, etc.); second, to contribute to deciding which of the new versions of scientific realism, if any, meets the criterion of fitting better with what scientific activity is de facto and how it ought to be; and third, to enlarge the philosophico-methodological analysis of

46 One author who stresses a scientific realism based on values is Timothy D. Lyons: “In contrast with epistemic deployment realism, a purely axiological scientific realism can account for key scientific practices made salient in my twentieth century case studies” (Lyons 2017, p. 3203).
47 The differences between basic science, applied science, and the application of science are outlined in section 1 of this paper. The analysis of the distinction between applied science and the application of science, and of its consequences, is in Gonzalez (2013b, pp. 17–18) and Gonzalez (2015b, pp. v, 2, 4, 18, 32–9, 10, 64–65, 70–71, 114, 151n, 249–250, 317–319, and 321–325).



science to reach new topics and novel disciplines (i.e., sciences not discussed in the traditional philosophy of science), taking into account the approaches to scientific realism.

Along these lines, the book comprises eight parts, which show its internal configuration and diversify the central topics: I) New Framework for the Realism and Anti-realism Debate; II) Approaches based on History and Scientific Realism; III) Logical Approaches in Realist Terms; IV) Logico-Epistemological Structural Realism and Instrumental Realism; V) New Developments on Critical Scientific Realism and Pragmatic Realism; VI) Realism on Causality and Representation; VII) Realist Accounts on Objectivity and Facts; and VIII) Realism and the Social World: From Social Sciences to the Sciences of the Artificial. These parts include chapters that explicitly discuss several of the realist approaches pointed out here, such as structural realism, instrumental realism, critical realism, and pragmatic realism. In some cases, such as structural realism, the analysis covers both the general configuration and the particular case of political science, within the social sciences.
Besides this feature, which means going deeper into the new approaches to scientific realism and extending them to new territories, the novelty of this book can be found in a number of traits: a) the realism versus anti-realism debate here includes the discussion in new contexts, which are related to theoretical, empirical, and historical aspects; b) the role of historical epistemology is recognized, which means that historicity matters for the discussion of scientific realism and affects the structure-content dualism; c) there are new contributions in a number of ways, such as in logical approaches, interdisciplinarity, and complexity; d) new views are added on relevant topics such as causality and representation; e) there are new suggestions on objectivity and facts, including the problem of fake news in the communication sciences; and f) in addition to the relevant issue of objectivity in the social sciences, there are new ideas on objectivity and truth in the sciences of the artificial and in a dual science like archeology (a social science that also needs methods of the natural sciences). All of them provide grounds of novelty for the ongoing debate on scientific realism.

Finally, I would like to thank each of the contributors to this volume for their presence in the book: Peter Achinstein, Alexander Bird, Howard Sankey, Anastasios Brenner, Thomas Nickles, Alan Musgrave, Theo Kuipers, John Worrall, Ladislav Kvasz, Ilkka Niiniluoto, Donald Gillies, Tomasz Placek, Amanda Guillan, Anthony O’Hear, Matti Sintonen, Adrian Miroiu, and Maria Jose Arrojo. I especially thank those who, like Alan Musgrave and John Worrall, delivered their texts quite some time ago. I am also grateful to Dr Jessica Rey (Center for Research in Philosophy of Science and Technology) for her collaboration in the editing of this book. I also appreciate the collaboration of Dr Amanda Guillan and Pablo Vara in the preparation of the indexes of this volume.



References

Arenhart, J. R. B. and Bueno, O. (2015): “Structural Realism and the Nature of Structure.” European Journal for Philosophy of Science, v. 5, n. 1, pp. 111–139.
Baird, D. (1988): “Five Theses on Instrumental Realism.” In: Fine, A. and Leplin, J. (eds.): PSA 1988. East Lansing, MI: Philosophy of Science Association, vol. 2, pp. 165–173.
Barad, K. (1996): “Meeting the Universe Halfway: Realism and Social Constructivism without Contradiction.” In: Nelson, L. H. and Nelson, J. (eds.): Feminism, Science, and the Philosophy of Science, pp. 161–194. Dordrecht: Kluwer.
Boyd, R. (1983): “On the Current Status of the Issue of Scientific Realism.” Erkenntnis, v. 19, n. 1/3, pp. 45–90.
Chakravartty, A. (2007): A Metaphysics for Scientific Realism: Knowing the Unobservable. Cambridge: Cambridge University Press.
Chang, H. (2003): “Preservative Realism and Its Discontents: Revisiting Caloric.” Philosophy of Science, v. 70, n. 5, pp. 902–913.
Dummett, M. ([1973] 1981): Frege: Philosophy of Language, 2nd ed. London: Duckworth.
Fine, A. (1984): “The Natural Ontological Attitude.” In: Leplin, J. (ed.): Scientific Realism, pp. 83–107. Berkeley: The University of California Press.
Frege, G. (1892): “Über Sinn und Bedeutung.” Zeitschrift für Philosophie und philosophische Kritik, v. 100, pp. 25–50.
Frigg, R. and Votsis, I. (2011): “Everything that you Always Wanted to Know About Structural Realism but Were Afraid to Ask.” European Journal for Philosophy of Science, v. 1, n. 2, pp. 227–276.
Giere, R. N. (2006): Scientific Perspectivism. Chicago: The University of Chicago Press.
Gonzalez, W. J. (1986): La Teoría de la Referencia. Strawson y la Filosofía Analítica. Salamanca-Murcia: Ediciones Universidad de Salamanca and Publicaciones de la Universidad de Murcia.
Gonzalez, W. J. (1993): “El realismo y sus variedades: El debate actual sobre las bases filosóficas de la Ciencia.” In: Carreras, A. (ed.): Conocimiento, Ciencia y Realidad, pp. 11–58. Zaragoza: Seminario Interdisciplinar de la Universidad de Zaragoza-Ediciones Mira.
Gonzalez, W. J. (1998): “El naturalismo normativo como propuesta epistemológica y metodológica. La segunda etapa del Pensamiento de L. Laudan.” In: Gonzalez, W. J. (ed.): El Pensamiento de L. Laudan. Relaciones entre Historia de la Ciencia y Filosofía de la Ciencia, pp. 5–57. A Coruña: Publicaciones Universidad de A Coruña.
Gonzalez, W. J. (1999): “El giro en la Metodología de L. Laudan. Del criterio metaintuitivo al naturalismo normativo abierto al relativismo débil.” In: Velasco, A. (ed.): Progreso, pluralismo y racionalidad en la Ciencia. Homenaje a Larry Laudan, pp. 105–131. City of Mexico: Ediciones de la UNAM.
Gonzalez, W. J. (2005): “The Philosophical Approach to Science, Technology and Society.” In: Gonzalez, W. J. (ed.): Science, Technology and Society: A Philosophical Perspective, pp. 3–49. A Coruña: Netbiblo.
Gonzalez, W. J. (2006): “Novelty and Continuity in Philosophy and Methodology of Science.” In: Gonzalez, W. J. and Alcolea, J. (eds.): Contemporary Perspectives in Philosophy and Methodology of Science, pp. 1–28. A Coruña: Netbiblo.



Gonzalez, W. J. (2008): “Rationality and Prediction in the Sciences of the Artificial: Economics as a Design Science.” In: Galavotti, M. C., Scazzieri, R. and Suppes, P. (eds.): Reasoning, Rationality, and Probability, pp. 165–186. Stanford: CSLI Publications.
Gonzalez, W. J. (ed.) (2011a): Scientific Realism and Democratic Society: The Philosophy of Philip Kitcher, Poznan Studies in the Philosophy of the Sciences and the Humanities. Amsterdam: Rodopi.
Gonzalez, W. J. (2011b): “From Mathematics to Social Concern about Science: Kitcher’s Philosophical Approach.” In: Gonzalez, W. J. (ed.): Scientific Realism and Democratic Society: The Philosophy of Philip Kitcher, pp. 11–93. Poznan Studies in the Philosophy of the Sciences and the Humanities. Amsterdam/N. York: Rodopi.
Gonzalez, W. J. (2013a): “Value Ladenness and the Value-Free Ideal in Scientific Research.” In: Lütge, Ch. (ed.): Handbook of the Philosophical Foundations of Business Ethics, pp. 1503–1521. Dordrecht: Springer.
Gonzalez, W. J. (2013b): “The Roles of Scientific Creativity and Technological Innovation in the Context of Complexity of Science.” In: Gonzalez, W. J. (ed.): Creativity, Innovation, and Complexity in Science, pp. 11–40. A Coruña: Netbiblo.
Gonzalez, W. J. (2014): “On Representation and Models in Bas van Fraassen’s Approach.” In: Gonzalez, W. J. (ed.): Bas van Fraassen’s Approach to Representation and Models in Science, Synthese Library, pp. 3–37. Dordrecht: Springer.
Gonzalez, W. J. (2015a): “On the Role of Values in the Configuration of Technology: From Axiology to Ethics.” In: Gonzalez, W. J. (ed.): New Perspectives on Technology, Values, and Ethics: Theoretical and Practical, Boston Studies in the Philosophy and History of Science, pp. 3–27. Dordrecht: Springer.
Gonzalez, W. J. (2015b): Philosophico-Methodological Analysis of Prediction and its Role in Economics. Dordrecht: Springer.
Gonzalez, W. J. (2017): “From Intelligence to Rationality of Minds and Machines in Contemporary Society: The Sciences of Design and the Role of Information.” Minds and Machines, v. 27, n. 3, pp. 397–424. DOI: 10.1007/s11023-017-9439-0.
Gonzalez, W. J. (2018): “Configuration of Causality and Philosophy of Psychology: An Analysis of Causality as Intervention and its Repercussion for Psychology.” In: Gonzalez, W. J. (ed.): Philosophy of Psychology: Causality and Psychological Subject. New Reflections on James Woodward’s Contribution, pp. 21–70. Boston/Berlin: Walter de Gruyter.
Gonzalez, W. J. (2020): “Pragmatic Realism and Scientific Prediction: The Role of Complexity.” In: Gonzalez, W. J. (ed.): New Approaches to Scientific Realism, pp. 251–287. Boston/Berlin: Walter de Gruyter.
Gonzalez, W. J. and Arrojo, M. J. (2019): “Complexity in the Sciences of the Internet and its Relation to Communication Sciences.” Empedocles: European Journal for the Philosophy of Communication, v. 10, n. 1, pp. 15–33.
Hacking, I. (1983): Representing and Intervening. Cambridge: Cambridge University Press.
Hacking, I. (1999): The Social Construction of What? Harvard: Harvard University Press.
Hardin, C. and Rosenberg, A. (1982): “In Defence of Convergent Realism.” Philosophy of Science, v. 49, n. 4, pp. 604–615.

Novelty in Scientific Realism: New Approaches to an Ongoing Debate


Kitcher, Ph. (1993a): The Advancement of Science: Science without Legend, Objectivity without Illusions. N. York: Oxford University Press.
Kitcher, Ph. (1993b): "Realism and Scientific Progress." In: Kitcher, Ph.: The Advancement of Science: Science without Legend, Objectivity without Illusions, pp. 127–177. N. York: Oxford University Press.
Kitcher, Ph. (2001a): "Real Realism: The Galilean Strategy." Philosophical Review, v. 110, pp. 151–197.
Kitcher, Ph. (2001b): Science, Truth, and Democracy. N. York: Oxford University Press.
Kitcher, Ph. (2011a): "Scientific Realism: The Truth in Pragmatism." In: Gonzalez, W. J. (ed.): Scientific Realism and Democratic Society: The Philosophy of Philip Kitcher, Poznan Studies in the Philosophy of the Sciences and the Humanities, pp. 171–189. Amsterdam: Rodopi.
Kitcher, Ph. (2011b): Science in a Democratic Society. Amherst, NY: Prometheus Books.
Kitcher, Ph. (2011c): "Science in a Democratic Society." In: Gonzalez, W. J. (ed.): Scientific Realism and Democratic Society: The Philosophy of Philip Kitcher, Poznan Studies in the Philosophy of the Sciences and the Humanities, pp. 95–112. Amsterdam: Rodopi.
Kuipers, Th. (2000): From Instrumentalism to Constructive Realism: On Some Relations Between Confirmation, Empirical Progress, and Truth Approximation. Dordrecht: Kluwer.
Ladyman, J. (1998): "What is Structural Realism?" Studies in History and Philosophy of Science, v. 29, pp. 409–424.
Ladyman, J. (2007): "Scientific Structuralism: On the Identity and Diversity of Objects in a Structure." Aristotelian Society Supplementary Volume, v. 81, n. 1, pp. 23–43.
Ladyman, J. (2011): "Structural Realism versus Standard Scientific Realism: The Case of Phlogiston and Dephlogisticated Air." Synthese, v. 180, n. 2, pp. 87–101.
Laudan, L. (1981): "A Confutation of Convergent Realism." Philosophy of Science, v. 48, n. 1, pp. 19–49.
Laudan, L. (1984a): Science and Values: The Aims of Science and Their Role in Scientific Debate. Berkeley: University of California Press.
Laudan, L. (1984b): "Explaining the Success of Science: Beyond Epistemic Realism and Relativism." In: Cushing, J. T., Delaney, C. F. and Gutting, G. M. (eds.): Science and Reality, pp. 83–105. N. Dame: University of N. Dame Press.
Leplin, J. (1997): A Novel Defense of Scientific Realism. N. York: Oxford University Press.
Leplin, J. (2004): "Para incorporar a Popper en el tratamiento del realismo científico." In: Gonzalez, W. J. (ed.): Karl Popper: Revisión de su legado, pp. 299–337. Madrid: Unión Editorial.
Lipton, P. ([1991] 2004): Inference to the Best Explanation, 2nd ed. London: Routledge.
Lyons, T. D. (2006): "Scientific Realism and the Stratagema de Divide et Impera." British Journal for the Philosophy of Science, v. 57, n. 3, pp. 537–560.
Lyons, T. D. (2017): "Epistemic Selectivity, Historical Threats, and the Non-Epistemic Tenets of Scientific Realism." Synthese, v. 194, n. 9, pp. 3203–3219.
Lyre, H. (2010): "Humean Perspectives on Structural Realism." In: Stadler, F., Dieks, D., Gonzalez, W. J., Hartmann, S., Uebel, Th. and Weber, M. (eds.): The Present Situation in the Philosophy of Science, pp. 381–397. Dordrecht: Springer.
Massimi, M. (2012): "Scientific Perspectivism and its Foes." Philosophica, v. 84, pp. 25–52.
Niiniluoto, I. (1993): "The Aim and Structure of Applied Research." Erkenntnis, v. 38, pp. 1–21.
Niiniluoto, I. (1999): Critical Scientific Realism. Oxford: Clarendon Press.


Wenceslao J. Gonzalez

Niiniluoto, I. (2001): "From Instrumentalism to Constructive Realism: On Some Relations between Confirmation, Empirical Progress, and Truth Approximation." Mind, v. 110, n. 439, pp. 774–777.
Peters, D. (2014): "What Elements of Successful Scientific Theories Are the Correct Targets for 'Selective' Scientific Realism?" Philosophy of Science, v. 81, n. 3, pp. 377–397.
Psillos, S. (1996): "Poincaré's Conception of Mechanical Explanation." In: Greffe, J. L., Heinzmann, G. and Lorenz, K. (eds.): Henri Poincaré: Science and Philosophy/Wissenschaft und Philosophie, pp. 177–191. Berlin: Akademie Verlag, and Paris: A. Blanchard.
Psillos, S. (1999): Scientific Realism: How Science Tracks Truth. London: Routledge.
Psillos, S. (2014): "The View from Within and the View from Above: Looking at van Fraassen's Perrin." In: Gonzalez, W. J. (ed.): Bas van Fraassen's Approach to Representation and Models in Science, Synthese Library, pp. 143–166. Dordrecht: Springer.
Putnam, H. (1975): "What is Mathematical Truth?" In: Putnam, H.: Mathematics, Matter and Method: Philosophical Papers, vol. 1, pp. 60–78. Cambridge: Cambridge University Press.
Putnam, H. ([1982a] 1990): "A Defense of Internal Realism." Proceedings of the American Philosophical Association, December 1982; reprinted in Putnam, H.: Realism with a Human Face, pp. 30–42. Cambridge, MA: Harvard University Press.
Putnam, H. (1982b): "Three Kinds of Scientific Realism." Philosophical Quarterly, v. 32, pp. 195–200.
Putnam, H. (1987): The Many Faces of Realism. La Salle, IL: Open Court.
Radder, H. (2012): The Material Realization of Science. Dordrecht: Springer.
Rescher, N. (1992a): A System of Pragmatic Idealism. Vol. I: Human Knowledge in Idealistic Perspective. Princeton, NJ: Princeton University Press.
Rescher, N. (1992b): "Our Science as our Science." In: Rescher, N.: A System of Pragmatic Idealism. Vol. I: Human Knowledge in Idealistic Perspective, pp. 110–125. Princeton, NJ: Princeton University Press.
Rescher, N. (1993): A System of Pragmatic Idealism. Vol. II: The Validity of Values: Human Values in Pragmatic Perspective. Princeton: Princeton University Press.
Rescher, N. (1994): A System of Pragmatic Idealism. Vol. III: Metaphilosophical Inquiries. Princeton: Princeton University Press.
Rescher, N. (2014): The Pragmatic Vision: Themes in Philosophical Pragmatism. Lanham, MD: Rowman and Littlefield.
Rescher, N. (2019): "Does the Inference to the Best Explanation Work?" In: Rescher, N.: Philosophical Clarifications: Studies Illustrating the Methodology of Philosophical Elucidation, pp. 145–154. Cham: Palgrave Macmillan.
Shoemaker, S. (1980): "Causality and Properties." In: van Inwagen, P. (ed.): Time and Cause: Essays Presented to Richard Taylor, pp. 109–135. Dordrecht: Reidel.
Shrader-Frechette, K. (2005): "Objectivity and Professional Duties Regarding Science and Technology." In: Gonzalez, W. J. (ed.): Science, Technology and Society: A Philosophical Perspective, pp. 51–79. A Coruña: Netbiblo.
Simon, H. A. (1996): The Sciences of the Artificial, 3rd ed. Cambridge, MA: The MIT Press (1st ed., 1969; 2nd ed., 1981).
Suppe, F. (1974): "The Search for Philosophic Understanding of Scientific Theories." In: Suppe, F. (ed.): The Structure of Scientific Theories, pp. 1–241. Urbana: University of Illinois Press (2nd ed., 1977).
van Fraassen, B. C. (1980): The Scientific Image. Oxford: Oxford University Press.



van Fraassen, B. C. (2008): Scientific Representation: Paradoxes of Perspective. N. York: Oxford University Press.
Vickers, P. (2019): "Towards a Realistic Success-to-truth Inference for Scientific Realism." Synthese, v. 196, n. 2, pp. 571–585.
Woodward, J. ([2003a] 2005): Making Things Happen: A Theory of Causal Explanation. N. York: Oxford University Press (paperback edition, September 2005).
Woodward, J. (2003b): "Experimentation, Causal Inference, and Instrumental Realism." In: Radder, H. (ed.): The Philosophy of Scientific Experimentation, pp. 87–118. Pittsburgh: University of Pittsburgh Press.
Worrall, J. (1985): "Scientific Discovery and Theory-Confirmation." In: Pitt, J. C. (ed.): Change and Progress in Modern Science, pp. 301–331. Dordrecht: Reidel.
Worrall, J. (1989a): "Fresnel, Poisson and the White Spot: The Role of Successful Predictions in the Acceptance of Scientific Theories." In: Gooding, D., Pinch, T. and Schaffer, S. (eds.): The Uses of Experiment, pp. 135–157. Cambridge: Cambridge University Press.
Worrall, J. (1989b): "Structural Realism: The Best of Both Worlds?" Dialectica, v. 43, n. 1–2, pp. 99–124.
Worrall, J. (1998): "Realismo, racionalidad y revoluciones." Agora, v. 17, n. 2, pp. 7–24.
Worrall, J. (2006): "Why Randomize? Evidence and Ethics in Clinical Trials." In: Gonzalez, W. J. and Alcolea, J. (eds.): Contemporary Perspectives in Philosophy and Methodology of Science, pp. 65–82. A Coruña: Netbiblo.
Worrall, J. (2009): "Miracles, Pessimism and Scientific Realism." PhilPapers, pp. 1–55 (accessed on 7.6.2019).

I New Framework for the Realism and Anti-realism Debate

Peter Achinstein

Scientific Realism: What's All the Fuss?

Abstract: Scientific realists say that the inferences made by J. J. Thomson in 1897 from his experiments on cathode rays to the truth of the claim that electrons exist, and by Jean Perrin in 1908 from his experiments on Brownian motion to the truth of the claim that molecules exist, are legitimate and decisive. Anti-realists deny that such inferences are legitimate since the inferred entities are unobservable, and any inference to the existence and properties of any unobservable whatever is unwarranted. Legitimate inferences are possible only to the empirical adequacy of such theories, not their truth. My paper explores reasons for the latter bold claim and rejects them all.

Keywords: Brownian motion, Pierre Duhem, electron, Bas van Fraassen, molecule, Jean Perrin, scientific realism, scientific anti-realism, J. J. Thomson, unobservable

Some contemporary physicists have trouble with claims about the existence of strings, ten-dimensional spacetime, and multiverses. They complain that theories postulating the existence of such entities are mere speculations for which no observational evidence has been produced. On the other hand, few, if any, contemporary physicists have trouble with claims about the existence of molecules or electrons, although these claims were controversial when first introduced. They have been established experimentally and are no longer in dispute.

By contrast, some philosophers have trouble with existence claims about all the entities mentioned above. They believe that such claims, whether about electrons and molecules, or strings and multiverses, cannot be established by observation to be true. Most contemporary physicists, at least those who know about what these philosophers say, regard such beliefs as absurd.

The philosophers I have in mind I will call "scientific anti-realists." They classify all the entities mentioned as "unobservables," and they believe that claims about the existence as well as properties of unobservables cannot be shown to be true by observation. Many of them say that it is not the aim of science to prove such claims, but to use them to "save the phenomena," i.e., to make inferences from them to claims that can be established by observation. That is the best that can be done.



The physicists I have in mind I will call "scientific realists." They believe that among the class that anti-realists call "unobservables" are ones, such as electrons and molecules, claims about whose existence and properties can be, and indeed have been, established to be true by observation. Scientific realists assert this about many "unobservables" introduced in physics, even while remaining skeptical about others, such as strings and multiverses.

Scientific realists and anti-realists usually say much more than I have expressed above. For example, some scientific realists want to say that there are universal rules that should be followed for inferring truths about unobservables (for example, Newton's four rules of causal and inductive reasoning; Whewell's version of hypothetico-deductivism involving what he calls "consilience" and "coherence"). Other realists are "localists" who say that no such universal rules exist for inferring truths about unobservables; each such inference is to be defended empirically without invoking universal rules. And, of course, all realists, scientific or otherwise, want to say that the entities they postulate exist independently of those postulating them.

Anti-realists also make various claims in addition to ones expressed above. And, as with realism, there are differences within the anti-realist community. Some, for example, want to say that sentences invoking terms for unobservables should not be treated as having a truth-value, but simply as "inference devices" for getting from some true observational statements to others. Other anti-realists say that claims about unobservables do have a truth-value, but scientists can never know what it is, and should be concerned only with whether the theories they are employing make true statements about the observable part of the world.
In what follows, I want to focus on what I regard as the most essential difference between realism and anti-realism, or at least between important representatives of each viewpoint: the difference between their views about the legitimacy of inferences to the truth of claims about the existence and properties of unobservables. To get at the heart of the debate between the “realist” physicists and the “anti-realist” philosophers I have in mind, I will start with how physicists involved in experiments leading to the postulation of electrons and molecules in fact proceeded to give what they regarded as observational evidence sufficient to establish the existence and some of the properties of these entities. Afterwards, I will consider why the anti-realist philosophers noted deny the strong conclusion of the realist physicists. There is, I will claim, a fundamental principle that the philosophers in question demand to be satisfied which the physicists regard as stifling and absurd. My aim in this paper will be to defend the realist physicists I have in mind against the anti-realist philosophers I have in mind. But it should be clear that not all physicists are realists, and not all philosophers are anti-realists.



1 Do Electrons Exist? How do Physicists Know?

Let's briefly look at the history of how electrons were discovered to exist, or at least at the experimental basis on which their existence was said by physicists to be established. Whether this is sufficient for "scientific realism" is another matter, which I will discuss beginning in section 3.

It was known in the 19th century that in a glass tube containing positive and negative electrodes, when a source of high potential is connected to the positive electrode and the pressure of the air in the tube is reduced to a few mm of mercury, an electric discharge fills the space between the electrodes with a pink or reddish glow, indicating a discharge of electricity. In 1859, with newly invented gas pumps that could evacuate much more gas from the tube, it was discovered that when the gas pressure is reduced to 0.001 mm of mercury, the glass near the negative electrode, or cathode, glows with a greenish phosphorescence, and the position of the glow can be changed with a magnet. It was concluded that rays are emanating from the cathode (so-called cathode rays) that are different from the ordinary electric discharge. The question became: what are these rays?

Two theories emerged. One, supported by several German physicists, including Hertz, is that cathode rays are waves, distinct from, but somewhat similar to, light. The other theory, supported generally by British physicists, including William Crookes and Arthur Schuster, is that they are charged particles of some kind.

In 1883, Hertz argued against the charged particle idea on the grounds that if it were true, then when electrically charged plates are introduced into the cathode tube, the cathode rays should be attracted to one of these plates. In a series of carefully designed experiments with oppositely charged electric plates, Hertz produced no deflection of the rays. He concluded that they are waves, not particles.
Fourteen years later, in 1897, when the British physicist J. J. Thomson repeated Hertz's experiments with much more highly evacuated tubes, the electrical deflection appeared. From his experimental results, Thomson concluded:

As the cathode rays carry a charge of negative electricity, are deflected by an electrostatic force as if they were negatively electrified, and are acted on by a magnetic force in just the way this force would act on a negatively electrified body moving along the path of these rays, I can see no escape from the conclusion that they are charges of negative electricity carried by particles of matter.

Thomson goes on to derive a formula relating the ratio of mass to charge of these particles to experimentally measurable quantities (Thomson, 1897, p. 302). And he demonstrates that this ratio is approximately the same for different gases used in the cathode tube. Two years after this 1897 paper Thomson published results of new experiments in which negatively charged particles were produced by ultraviolet light falling on an electrified metal plate. He experimentally determined the ratio of mass to charge in these cases and found it to be the same as that with his cathode ray experiments.

Thomson's experiments showing that cathode rays are composed of negatively charged particles and providing a measurement of their mass to charge ratio convinced physicists that these particles exist and have mass and charge. Within a few years they were called electrons.1
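Thomson's own formula is in (Thomson, 1897, p. 302); as a rough illustration of the kind of computation involved, the sketch below uses the standard textbook reconstruction of the crossed-field method: balance the electric and magnetic deflections to obtain the beam speed, then measure the radius of the purely magnetic deflection. The function name, units, and all numbers are illustrative assumptions, not Thomson's notation or data.

```python
def mass_to_charge_ratio(E, B, r):
    """Crossed-field estimate of m/e (textbook reconstruction, not
    Thomson's published derivation).

    With electric field E (V/m) and magnetic field B (T) adjusted so
    the beam is undeflected, the particle speed is v = E / B.
    Switching off E and measuring the radius r (m) of the purely
    magnetic deflection gives m*v = e*B*r, hence m/e = B*r/v.
    """
    v = E / B            # speed from the balanced-field condition
    return B * r / v     # kg/C; algebraically equal to B**2 * r / E
```

Fields and a bending radius consistent with modern electron data return a ratio near 5.7 × 10⁻¹² kg/C; the point in the text is not the number itself but that the same ratio came out for every gas in the tube.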

2 Do Molecules Exist? How do Physicists Know?

Physicists in the 19th century, such as James Clerk Maxwell in the 1860's and 1870's, developed mathematically expressed theories of gases that postulated molecules. They gave some empirical and methodological reasons for supposing such postulates to be true.2 Even though many physicists, including Maxwell, believed that molecules exist, there were no experiments considered decisive in showing that they do.

Among the first convincing experiments were ones done in 1908 by Jean Perrin on Brownian motion – the continuous haphazard motion of small but visible particles suspended in a liquid, discovered by botanist Robert Brown in 1827. Perrin prepared a set of such "Brownian" particles all of the same mass m and suspended them in a cylinder of known height h. He experimentally determined the density D of the material making up the particles, the density d of the liquid in which they were suspended, the temperature T of the liquid, and, with microscopes, the number of suspended particles per unit volume at various heights. He performed experiments with different fluids, at different temperatures, and with particles of different sizes and mass. He then derived the formula

n/n' = 1 – Nmg(1 – d/D)h/RT,

which incorporates some of the quantities cited. In this equation, n and n' represent the number of Brownian particles per unit volume observed at upper and lower levels, mg is the weight of a particle, R is the gas constant, and, crucially, N is Avogadro's number – the number of molecules in a gram molecular

1 For more details concerning Thomson’s reasoning, see Achinstein “Who Really Discovered the Electron?”, (2010b, reprinted in Achinstein 2010a). Thomson is usually credited as being the discoverer of the electron, although other physicists at the time, including William Crookes, Arthur Schuster, and Philipp Lenard, claimed they discovered it before Thomson. 2 See Achinstein (1991, part II) and (2013, chapter 4).



weight of a substance. Perrin used this equation to experimentally determine a value for Avogadro's number and discovered that it is indeed a constant whose value is approximately 6.023 × 10²³.

In deriving this equation Perrin was assuming that the motion of the visible Brownian particles was caused by collisions with the molecules making up the liquid. He defended this assumption by citing late 19th century experiments by Gouy and others showing that the Brownian motion is not produced by a range of known external causes capable of transmitting forces to the fluid, nor by any visible entities in the fluid itself. From these experiments he concludes that the most probable cause of the visible motion of the Brownian particles is the motion of invisible particles that comprise the fluid, the "molecular agitation," as he puts it. "No other cause of the movement could be imagined."3

On the basis of this argument, and of his experiments leading to a determination of Avogadro's number, Perrin concludes:

The objective reality of the molecules therefore becomes hard to deny.4
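The height-distribution law can be rearranged to solve for N. The sketch below uses the exponential form of the law, of which the linearized formula quoted above is the first-order approximation; the function name and all inputs are hypothetical illustrations, not Perrin's actual measurements.

```python
import math

R = 8.314  # gas constant, J/(mol K)
g = 9.81   # gravitational acceleration, m/s^2


def avogadro_from_counts(n_upper, n_lower, m, d, D, h, T):
    """Estimate Avogadro's number N from Brownian-particle counts.

    Uses the exponential height-distribution law
        n/n' = exp(-N * m * g * (1 - d/D) * h / (R * T)),
    whose first-order expansion is the linearized formula in the text.
    n_upper, n_lower: particles per unit volume at upper/lower levels;
    m: particle mass (kg); d, D: densities of the liquid and of the
    particle material (kg/m^3); h: height difference (m); T: temperature (K).
    """
    effective_weight = m * g * (1.0 - d / D)  # buoyancy-corrected weight, N
    return math.log(n_lower / n_upper) * R * T / (effective_weight * h)
```

Run backwards as a consistency check, counts generated from an assumed N of 6.022 × 10²³ are inverted to recover that same value, which mirrors Perrin's finding that the equation yields a single constant across fluids, temperatures, and particle sizes.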

3 The Anti-Realist Response

These arguments for the existence of electrons and molecules are not convincing to scientific anti-realists. Let me choose two of my favorites, the physicist Pierre Duhem and the philosopher Bas van Fraassen.

Well after the experiments by J. J. Thomson on electrons in 1897 and by Jean Perrin on molecules in 1908, Duhem, until his death in 1916, explicitly rejected theories postulating either one. He did so partly on the grounds that as the theories postulating these entities were developed during his lifetime they became

complicated, disturbed, overburdened with arbitrary complications without succeeding, however, in rendering a precise account of the new laws or in connecting them solidly to the old laws . . . . (Duhem 1954 [orig. 1905], p. 305)

But Duhem's most important reason for rejecting the realist conclusion that electrons and molecules exist and can be shown to exist experimentally follows from a fundamental assumption he makes as an anti-realist. He writes:

Now these two questions – Does there exist a material reality distinct from sensible appearances? and What is the nature of this reality? – do not have their source in experimental

3 Perrin (1909, pp. 507–601, quotation pp. 510–511). 4 Perrin (1990, p. 105). For a more detailed formulation of Perrin’s reasoning, see Achinstein (2002).



method, which is acquainted only with sensible appearances and can discover nothing beyond them. (Duhem 1954 [orig. 1905], p. 10)

What exactly is Duhem saying here and what form of anti-realism does it yield? One of the examples he uses to illustrate his position is that of light. Duhem's view is that we receive sensations of light which physicists represent using what he calls "abstract and general notions" introduced in geometrical optics, such as ray of light, refraction, reflection, and others. Geometrical optics relates these "representations" by what Duhem calls "experimental laws," such as Snell's law relating the angle of incidence of a refracted ray of light to the angle of refraction. Now, he writes:

Of these experimental laws the vibratory [wave] theory of light gives a hypothetical explanation. It supposes that all the bodies we see, feel, or weigh are immersed in an imponderable, unobservable medium called the ether. To this ether certain mechanical properties are attributed . . . and, without enabling us to perceive the ether, without putting us in a position to observe directly the back-and-forth motion of light vibration, the theory tries to prove that its postulates entail consequences agreeing at every point with the laws furnished by experimental optics. (Duhem 1954 [orig. 1905], p. 9)

Duhem's anti-realist claim is that even if the wave theory of light entails all the experimental laws in geometrical optics and even if it yields new ones, an inference from this to the truth of the wave theory is not legitimate. The reason is that the putative ether and the waves in it are unobservable. So, any inference to their existence from empirical facts expressed using the ("sensation-based") concepts and laws of geometrical optics is prohibited by "experimental method."

Duhem does not object to treating the wave theory, with its postulation of "unobservables," as a useful device or instrument for making predictions about "observables," so long as the theory is simple and well-ordered, and so long as the "observables" are described using the sorts of concepts and laws in geometrical optics.5 But if the theory is so treated, it should not be thought of as depicting a reality that exists and underlies and explains the observable world, the world described using sensation-based concepts and laws.

In his important 1980 work, The Scientific Image, Bas van Fraassen, like Duhem 74 years earlier, claims that the aim of empirical science should be to "save the phenomena," not to "give us a literally true story of what the world is like," an aim he ascribes to scientific realists. For van Fraassen, the "phenomena"

5 Duhem believed (in 1905) that the wave theory, but not the atomic theory, could be so used. Although both theories postulate unobservables, the wave theory, he believed, satisfied his requirement of simplicity and order, whereas the atomic theory did not.



comprise the observable part of the world, though he does not require that they be described using sensation-based concepts and laws of experimental physics of the sort Duhem has in mind. However, like Duhem, van Fraassen believes that "saving the phenomena" should be the aim of empirical science because

Experience can give us information only about what is both observable and actual, not about what is unobservable. (van Fraassen, in Churchland and Hooker 1985, p. 253)

According to van Fraassen, electrons are unobservable. The reason he classifies them as such is that there are no "circumstances which are such that if . . . [an electron] is present to us under those circumstances then we observe it" (van Fraassen 1980, p. 16). In cloud chamber experiments physicists observe tracks (allegedly) left by electrons as they pass through the chamber. The tracks consist of visible water droplets formed when the vapor in the chamber condenses on (putative) ionized molecules that the (putative) electrons produce in their journey. But water droplets are not electrons, which are unobservable. Similarly, when J. J. Thomson observed that the position of the fluorescence produced in the cathode tube changed with the introduction of charged electric plates he was not observing electrons, but the fluorescence that (allegedly) they produced on the glass walls of the tube. And Perrin was not observing molecules moving, but the motion of observable Brownian particles (allegedly) caused by collisions with molecules.

Interestingly, van Fraassen calls all such cases "detecting" the unobservable entity, which he distinguishes from "observing." The latter entails the existence of the entity, the former does not. For both Duhem and van Fraassen, Thomson did not discover by experiment that electrons exist and have negative charge, and Perrin did not discover by experiment that molecules exist and obey Avogadro's law. At best they discovered that their theories containing these postulates "save the phenomena."

4 An Analysis of this Anti-Realist Response

Duhem and van Fraassen claim that a scientist can never legitimately make an inference from what is observed to be the case to the truth of a claim about the unobservable part of the world. I will put what I regard as their crucial assumption as follows:

Crucial Assumption: A scientist can legitimately infer the truth of the claim that X exists and has property P only if X and X's having P are observable.

Several points need clarification.



First, for both Duhem and van Fraassen, "observable" means "observable to humans." It is relative to human capacities. If the latter were different, humans might be able to observe more (or less) than they can now.

Second, saying that X, and (the fact, state of affairs, or phenomenon) that X has P, are observable is not to say that X, or X's having P, has been or indeed ever will be observed. It may be very difficult to do so. Whatever observability is supposed to mean here, the anti-realists in question would say that Brownian particles are observable, as is their exhibiting zig-zag motion, even if no one ever observed such particles or their motion, while molecules and their motion are not observable.

Third, it could be the case that X is observable but that X's having P is not. For example, gold is observable, but gold's being composed of atoms with 79 electrons is not. On the present anti-realist view, one can legitimately infer the truth of the claim that gold exists, but one cannot legitimately infer the truth of the claim that gold has atoms with 79 electrons.

Fourth, saying that the truth of the claim that X exists and has P can be inferred legitimately only if X and X's having P are observable is not to say that the only legitimate way to infer these things is by observing X and X's having P. One can make inferences "indirectly" by observing effects produced by X or by X's having P. The claim is simply that such "indirect" inferences are legitimate only if "direct" observation is possible that could settle the matter conclusively. So, the anti-realist will say, I can infer that you exist from observing your shadow, because you are observable. But I can't infer that God exists from observing a miracle God (allegedly) produces, because God is not observable.
Fifth, since Duhem defends a form of holism, according to which it is entire systems that are inferred, not individual "isolated" claims, we need to understand the "Crucial Assumption" broadly to allow both "holistic" and "particularistic" versions. In the former case, it would be understood as saying that a scientist can legitimately infer the truth of a system of hypotheses containing the claim that X exists and has P only if X and X's having P are observable.

The "Crucial Assumption" is much stronger than the following weaker one made by some empiricists:

Weaker Assumption: A scientist can legitimately infer the truth of the claim that X exists and has P only if X's existence and the fact that it has P are either observable or, if not observable, can be inferred from observable facts.6

6 A holistic version might say that a scientist can legitimately infer a system of hypotheses containing the claim that X exists and has P only if that system can be inferred from a set of facts containing ones that are observable.



Scientific realists, such as Thomson and Perrin, could accept the weaker assumption without buying the crucial one. Their inferences that electrons or molecules exist and have certain properties they attributed to them were made from a set of facts containing ones that are observable. In the case of electrons these observable facts included that the position of the phosphorescence changes when charged electric plates are introduced in the cathode tube. In the case of molecules, they included observable facts about the Brownian motion and countable numbers of Brownian particles at the upper and lower levels. But, in violation of the crucial assumption made by anti-realists, neither the fact that electrons exist and are negatively charged, nor the fact that molecules exist and satisfy Avogadro's law, are observable facts. So, they cannot be inferred from anything.

Many empiricists would reject even the Weaker Assumption. Perhaps the most important reason has to do with the issue of whether the observable-unobservable distinction is even viable.7 Another reason has to do with the claim, made by some, that non-observational reasons, for example, ones invoking simplicity or unification or even metaphysical principles, can provide a legitimate basis for defending claims of the form X exists and X has P. However, it is not my aim to explore these issues here.8 For the sake of argument, let's accept the Weaker Assumption and ask: What justification, if any, can be given for the Crucial Assumption?

5 A Methodological Defense: The Aim of Science

It might be argued that restricting inferences to the observable part of the world in accordance with the Crucial Assumption reflects the aim of science. According to Duhem, the "aim [of a physical theory] is to summarize and classify logically [his italics] a group of experimental laws without claiming to explain these laws . . . [It is not] to strip reality of the appearances covering it like a veil, in order to see the bare reality itself." (Duhem 1954 [orig. 1905], p. 7.) For van Fraassen, "science aims to give us theories which are empirically adequate . . . what [such a theory] says about the observable things and events in this world is true" (van Fraassen 1980, p. 12). The Crucial Assumption fits well with this claim; the Weaker Assumption does not.

7 For example, in Achinstein (1968) I argue that the distinction is highly contextual and is not the sort of thing presupposed by van Fraassen or Duhem.
8 For reasons to reject the idea that simplicity plays an important epistemic role, see Achinstein (2018, chapters 2 and 3).


Peter Achinstein

The aim-of-science claims of Duhem and van Fraassen are not meant to describe what individual scientists may or may not be saying about what they are doing. Thomson and Perrin do both start with observable phenomena (the phosphorescence, the Brownian motion). From these they explicitly infer what they regard as truths about the existence of their causes, even if those causes turn out to be unobservable. So do many physicists throughout the history of physics, including Newton (who inferred that a universal gravitational force exists from observed motions of the planets), Maxwell (who inferred that molecules exist from observed behavior of gases), and string theorists (who hope to be able to infer the existence of strings and 10-dimensional spacetime from future experimental results). By contrast, Duhem and van Fraassen will say, the aim they are speaking of is the aim of science itself, as a discipline, not the aims of particular scientists, which can vary. In the case of “unobservable” claims of the form “X exists and has P,” the aim they have in mind is “saving the phenomena” or “empirical adequacy,” not truth, which may or may not be the aim of particular scientists.

Reply: Perhaps some scientists throughout the history of science have espoused a “realist” aim (for example, Perrin), while others (for example, Duhem) have espoused an “anti-realist” one, and still others have expressed no opinions either way.9 But if so, why privilege the anti-realists by saying that theirs is the true and proper aim of science itself? Why do so unless you think that there is something wrong, or misguided, or inferior about the realist aim? Why do so unless you think that inferences to the existence and properties of electrons and molecules of the sort made by Thomson and Perrin are somehow unjustified? After all, both Duhem and van Fraassen say that empirical science has no access to the unobservable, and both say that electrons and molecules are unobservable.
How can they say these things unless they believe either that Thomson and Perrin should not have made the (“realist”) inferences they did, or else that anti-realist interpretations should be given to their inferences, viz. as ones to the conclusion that the inferred claims “save the phenomena,” not that they are true? In either case, Duhem and van Fraassen are saying that something is mistaken or misguided about the inferences of Thomson and Perrin interpreted “realistically.” What is it?

Well, for one thing, Thomson concluded mistakenly that electrons are particles, in the classical sense, and have no wavelike properties.10 For another, he concluded without sufficient evidence that electrons are the only constituents of atoms. But these aren’t the sorts of mistakes that Duhem and van Fraassen have in mind. They have in mind a much more general epistemic mistake commonly made within and outside of science, viz. inferring the truth of a claim about the unobservable world from a truth about the observable world.

Why is that a mistake, or unjustified, or unwise? Without offering justification, both Duhem and van Fraassen simply say that science has no access to the unobservable world, to the world of electrons and molecules, not to mention strings and other esoteric items. My question is this: What is, or could be, the basis of such a bold claim? Why is there no such access?

In the following sections I will consider three reasons that might be offered for this idea. They pertain to the violation of fundamental principles that, it might be claimed, do and should govern the making of scientific inferences: Safety First (“don’t make risky inferences when safer ones are available that give you a good deal of what you want”); Settling the Matter (“don’t make inferences to propositions whose truth-values cannot be settled”); and Bias (“don’t make inferences from biased samples”). The first is suggested to me by remarks of van Fraassen about the advantages of the requirement of empirical adequacy over truth. The second is suggested by Duhem’s historical remarks about the continual disagreements in physics when theories are proposed that postulate unobservables. The third, which I propose and regard as the strongest, is suggested neither by van Fraassen nor by Duhem. I will explore these three reasons and reject all of them.

9 As van Fraassen notes, realism and anti-realism are philosophical views about science, not scientific ones about the world (1980, p. 255, n. 6). This doesn’t prevent scientists from explicitly endorsing one view or the other. Nor does it prevent a scientist from implicitly endorsing realism (or anti-realism) by endorsing (or rejecting), as legitimate, inferences to the truth of many (or no) “unobservable” claims of the form “X exists and has P.” I am calling such a scientist a realist (or anti-realist). To be one or the other you don’t have to be a philosopher, or express the view explicitly or with the depth and precision that some at least would demand of a philosopher.

6 Safety First

Restricting inferences to truths about the observable part of the world is safer than allowing them about the unobservable part as well. Following the Crucial Assumption will satisfy “safety first.” This is not to imply that all inferences made from “observable” facts to “observable” claims are legitimate. Many of them are quite faulty for various reasons, including insufficient or biased data. The problem is that inferences to the truth of claims about unobservable causes of observable phenomena are stronger, and hence riskier, than inferences simply to the empirical adequacy of such causes. The former could be false, even if the latter are true.11 Safety alone suggests restricting inferences to ones about the observable world.

Moreover, this restriction still gives scientists a good deal of what they want from a theory about the “unobservable” part of the world. It gives them the ability to use such a theory to make justified inferences about the “observable” part. Indeed, since the truth of such a theory cannot be determined, van Fraassen claims that asserting its truth is just giving an “extra opinion . . . It is but empty strutting and posturing . . . ”12 It has no real advantage over asserting the empirical adequacy of the theory, and it is riskier.

Reply: Yes, scientists want their theories to be empirically adequate, to have inferential powers with regard to the “observable” part of the world. But many, if not most (including Thomson, Perrin, and a host of others), want their theories to do more, viz. say what is really going on in the “unobservable” part as well.13 They want their theories to tell us what things exist in this part of the world and what their properties are. They want such theories to be true and have empirical justification. Anti-realists seek to convince scientists (and everyone else) to settle for less – to settle for agnosticism about the truth of theories about electrons, molecules, and many other things, because otherwise truth assertions are just “empty strutting and posturing.” They are so because empirically justified true theories about “unobservables” are not to be had, since, on the response under consideration, inferences to them are too risky. Are they?

We might agree that statements such as

(a) Electrons exist

are less certain, more risky, than statements such as

(b) A greenish phosphorescence will appear in a cathode tube when it is appropriately set up.

10 Quantum theory hadn’t been dreamed of yet. Indeed, it was only in the 1920s that physicists, including Thomson’s son, G. P. Thomson, conducted experiments showing the wave properties of electrons.
11 Van Fraassen (in Churchland and Hooker 1985, p. 255) emphasizes the idea that truth is stronger than empirical adequacy. He regards the latter as sufficient for science, so he claims that belief in the truth of a theory is “supererogatory,” since we can have evidence for truth only via evidence for empirical adequacy.
12 van Fraassen (1985, p. 255).
13 For example, Perrin writes: “To divine in this way the existence and properties of objects that still lie outside our ken, to explain the complications of the visible in terms of invisible simplicity, is the function of the intuitive intelligence which, thanks to men such as Dalton and Boltzmann, has given us the doctrine of Atoms. This book [Atoms] aims at giving an exposition of that doctrine.” (1990, p. 85, italics his).



But this doesn’t mean that an inference to the truth of (a) is somehow faulty, or unjustified, or unwise. It just means that, since (a) is a stronger claim than (b), Thomson had to do a lot more to establish the truth of (a) than to establish the truth of (b).14

To this van Fraassen will respond that he is not comparing the strength of (a) with (b), but with

(c) Thomson’s theory that electrons exist is “empirically adequate.” It “saves the phenomena.”

Claim (a) is stronger, and riskier, than (c). Claim (a) could be false even if (c) is true. But, again, more than this needs to be argued to show that inferences to claims like (c) are okay whereas inferences to claims like (a) are not. After all, there are claims about “observables” (e.g., “the sun doesn’t move across the sky”) that were once thought risky but now are no longer so. It needs to be shown that the kind or extent of riskiness involved in inferences to the truth of “unobservable” claims like (a) will always be there and will always make such inferences faulty or unjustified or at least not completely satisfactory. That is not shown simply by arguing that (a) is riskier than (c). Either we must strengthen this argument or replace it. Let’s see what can be done.
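Footnote 14’s probabilistic point can be made concrete with a toy Bayesian calculation. The numbers are invented purely for illustration: let e be the results of Thomson’s experiments, let the prior probability of (a) on the earlier evidence be low, say P(a) = 0.1, and suppose those results are far more likely if electrons exist than if they do not, say P(e | a) = 0.9 and P(e | ¬a) = 0.02. Then by Bayes’ theorem:

```latex
P(a \mid e) = \frac{P(e \mid a)\,P(a)}{P(e \mid a)\,P(a) + P(e \mid \neg a)\,P(\neg a)}
            = \frac{0.9 \times 0.1}{0.9 \times 0.1 + 0.02 \times 0.9}
            = \frac{0.09}{0.108} \approx 0.83.
```

A claim that starts out far less probable than its “observable” rivals can thus end up highly probable on new experimental evidence, which is all the realist’s inference requires.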

14 In probabilistic terms, the probability of (a), conditional on evidence available before Thomson’s experiments, may be quite low, much lower than the probability of (b) conditional on the same evidence. But this doesn’t preclude the possibility that the probability of (a), conditional on new additional evidence (such as the results of Thomson’s experiments), will be quite high.

7 Settling the Matter

If I observe X, then X exists; if I observe X’s having P, then X has P. Observing settles the matter. This is part of the present anti-realist claim. But the anti-realist needs more, since a realist might agree that observing can settle the matter. To distinguish his view from that of the realist, the anti-realist might add these two claims:

(1) Assuming the question is an empirical one, this is the only way to settle whether it is true that X exists and has P; if X and X’s having P are unobservable, then whether it is true that X exists and has P can never be settled by empirical means.

(2) Unless the question is settleable, scientists should avoid making inferences to the truth of the claim that X exists and has P. Otherwise their claims become like those in metaphysics – speculations that are never settled or indeed settleable, at least not empirically.15 At most, scientists should infer that “X exists” and “X has P” are empirically adequate, that they save the observable phenomena, not that they are true. An inference to truth in such cases would require that X and X’s having P be observable and observed.

An article on the front page of the New York Times on April 11, 2019 begins as follows:

Astronomers announced on Wednesday [April 10, 2019] that they had captured an image of the unobservable: a black hole, a cosmic abyss so deep and dense that not even light can escape it. [The striking image of the black hole is reproduced on the first page of the Times.] . . . “We have seen what we thought was unseeable,” said Shep Doeleman, an astronomer at the Harvard-Smithsonian Center for Astrophysics . . . The image offered a final, ringing affirmation of an idea so disturbing that even Einstein, from whose equations black holes emerged, was loath to accept it.

In seeing “what we thought was unseeable” the astronomers “offered a final, ringing affirmation” of the existence of black holes. The matter became settleable and indeed was settled. A realist can agree with the claim that the production of the image settled the matter without accepting the anti-realist claims (1) and (2) above.

As noted earlier, van Fraassen distinguishes “observing” from “detecting.” According to him, in a cloud chamber we don’t observe the electron moving. Rather, we detect it by observing the lines produced by droplets of water that the electrons produce in their path. He says:

So while the particle is detected by means of the cloud chamber, and the detection is based on observation, it is clearly not a case of the particle’s being observed. (van Fraassen 1980, p. 17.)

For van Fraassen, “detecting” X or X’s having P seems to involve observing something produced by X without observing X or X’s having P. So, for example, pilots employ radar to determine the presence, angle, and velocity of other planes in the sky. Using van Fraassen’s terminology, when they do so they “detect” other planes without “observing” them. They are observing the signal on a screen which is produced by bouncing radio waves off the other plane, but (I can hear van Fraassen asserting) they are not seeing or observing the plane itself (which, of course, they can do on a different occasion).

15 Duhem offers many historical examples of disputes within science, including ones about atomism, which he labels “metaphysical.” He regards them as unsettleable scientifically because they go beyond the observable. See Duhem (1954 [orig. 1905], pp. 10–15).



For the sake of argument, let’s accept this claim. Let us also construe the term “observe” to cover not just unaided observation (as some van Fraassen purists might insist) but also observations made with the use of certain instruments, such as eyeglasses, magnifying glasses, mirrors, and even some telescopes and microscopes, that produce images of what is observed and whose physical basis can be understood (for example, by appeal to laws in geometrical optics governing light rays) without invoking “unobservables.”16

The first claim I want to challenge is that observing X and X’s having P is sufficient for settling the matter of whether X exists and has P in a sense of “settling the matter” required for the present argument. That sense requires knowing, or at least having a good reason to believe, that it is X that exists and has P. But I could be observing X, or X’s having P, without knowing, or having any reason to believe, that it is X, or X’s having P, that I am observing. I could be observing the powder in front of me in my lab without realizing I am observing potassium nitrate. The fact that I am observing something that unbeknownst to me is potassium nitrate doesn’t settle the matter of whether potassium nitrate exists in my lab. The most the present anti-realist can claim is that observing X and X’s having P is a necessary condition for settling the matter in question.

Second, and even more important, suppose that anti-realists are right that observing X and X’s having P is at least a necessary condition for “settling the matter” of whether X exists and has P. Anti-realists such as Duhem and van Fraassen have to say more than this. They have to make a claim like (2) above. They have to say that unless the matter of X’s existence and having P is settleable, scientists should avoid inferences to the truth of the claim that X exists and has P. Why should they avoid such inferences?
One answer, a pragmatic one, is that they should do so simply in virtue of the fact that the matter cannot be settled. Rule 8 of Descartes’ Rules for the Direction of the Mind is as follows:

Rule 8: If in the series of things to be examined we come across something which our intellect is unable to intuit sufficiently well, we must stop at that point, and refrain from the superfluous task of examining the remaining items.

16 The sophisticated image of a black hole would not be such an image. So, van Fraassen would probably call this a case of detecting a black hole, not observing it. Whether or not observing this image of a black hole counts as observing a black hole, according to the New York Times report, “the image offered a final, ringing affirmation” of the existence of black holes; it settled the matter. It is much closer to van Fraassen’s sense of “observing” than is observing an electron’s track to observing an electron. (The track in the cloud chamber is not an image of the electron.)



Descartes is, of course, preaching rationalism, not empiricism. But the basic thought might be put like this for both rationalists and empiricists: If in an investigation of whether X exists, and if so, what properties it has, you cannot reach a conclusion that settles the matter, then you should not continue the investigation. And even worse, if for some reason the matter is incapable of being settled, then not just you, but everyone, should discontinue the investigation. For Descartes, settling the matter requires “intuition” and “deduction” in the sense he gives to these terms. For anti-realists of the sort I am describing, settling the matter requires observing X and X’s having P. If one cannot do that because X and X’s having P are unobservable, then one should stop at that point and avoid any further inferences to X’s existence and properties.

Suppose that the truth of Thomson’s claim that electrons exist and have negative charge was not settled by the experimental evidence he gathered. Suppose further that because electrons are unobservable, it could not be settled in a sense demanded by the anti-realists in question. Thomson could reply: “Yes, in a sense of ‘settled’ requiring ‘observation’ and ‘observability’ of the item postulated, my claims cannot be settled. But I can still offer good experimental reasons for believing that electrons exist and have negative charge. These reasons involve observing things other than electrons. For example, they involve observing the phosphorescence moving toward the positively charged plate, which I can argue is produced by negatively charged electrons attracted by the positively charged plate. The reasons are not as conclusive as ones given by an experimenter who says ‘I know that X exists and has P because I have observed X and X’s having P.’ But this doesn’t mean that experimental evidence for the truth of the claim that electrons exist and have negative charge is not strong – strong enough to make it very probable that the claim is true.” This, indeed, is what was claimed by many physicists within a few years after 1897.

What anti-realists such as Duhem and van Fraassen must show is that Thomson’s reply is mistaken. There is something wrong with, or questionable about, an inference even to the probable truth of the claim that X exists and has P if the truth of the claim that X exists and has P is not settleable, at least in principle, by observing X and X’s having P. What is wrong or questionable? Why does the impossibility of observing X and X’s having P impugn inferences of the sort in fact made by Thomson and Perrin from their experimental evidence to the (probable) existence of (unobservable) electrons and molecules? An argument more convincing than any so far is needed. For this purpose, I invite the reader to consider the Bias Argument.



8 The Bias Argument

Let’s start with a straightforward induction. Suppose that having observed a great number of X’s and found them to have a property P, we infer that all X’s have P. For example, from the fact that all observed accelerating bodies in contact with other bodies exert forces on them we infer that all accelerating bodies, including molecules (if they exist), in contact with other bodies exert forces on them. This is the sort of inference, we might suppose, that Perrin made and had to make to get from Brownian motion to molecular motion.17

Now in making an inference from “all observed X’s have P” to “all X’s have P,” we need to avoid choosing a biased sample of X’s: it may be the case that all the X’s that have been observed have another property B, and that there are unobserved X’s without B, many of which don’t have P. In other words, B may be a biasing condition. To preclude this possibility, we need to select X’s for observation that lack B as well as ones that have B before we can conclude that B is not a biasing condition.

Now, the argument continues, when we make an inference from the fact that all observed accelerating bodies in contact with other bodies exert a force on them to a claim about all bodies (observable or otherwise), we are restricting our sample to the class of observable bodies. But observability is at least a potentially biasing condition. And, by definition, there is no way of showing that it is not biasing by examining unobservable bodies as well. We are stuck in the observable world. Even if we cannot demonstrate that observability is a biasing condition, we cannot show it is not.
Therefore, since we must assume it is not when we make inferences about the unobservable world, and since an assumption of an unbiased sample is a necessary assumption – but one that cannot be justified – we cannot justifiably infer truths about the unobservable from truths about what is observed.18

Reply: Is there any way to find out whether observability is a biasing condition other than by “observing the unobservable”? Yes, there is. We begin by considering the sorts of factors that can make X or X’s having P unobservable (in van Fraassen’s sense of there being no “circumstances which are such that if X [or X’s having P] is present to us under those circumstances, then we observe it.”)19 One is size: X may be too small to be observed by us. Another is distance from us: X, or the state or event of X’s having P, may be too far away from us. Another is temporal duration: X, or the state in which X has P, may not last long enough to be observed by us. Another is lack of interactions: X, or the state in which X has P, may not interact with other objects or states that can be observed. And so forth.

Now in order to determine whether observability is, or might well be, a biasing condition, although we cannot observe both the observable and the unobservable, we can vary the conditions that make something unobservable. We can vary size, distance from us, temporal duration, numbers of interactions, etc. So, in our case of accelerating bodies in contact with other bodies, we can vary the size of the bodies, their distance from us, the magnitude of their acceleration, and the number of bodies with which they interact, in order to determine whether such variation has any effect on whether accelerating bodies exert forces on other bodies with which they come into contact. If we find that varying conditions in virtue of which objects and facts are observable (or unobservable) does not change whether accelerating bodies in contact with other bodies exert forces on them, we may infer that the conditions in question (size, distance, etc.), and the observability they make possible (or not), are not biasing conditions. Even if this is not conclusive proof, we have at least some good reason to suppose that observability in cases we have examined is not a biasing condition. That should be enough to reject, or at least seriously question, the Bias Argument.

17 In an analogous manner Newton makes an inductive inference about extension: “The extension of bodies is known to us only through our senses, and yet there are bodies beyond the range of these senses; but because extension is found in all sensible bodies, it is ascribed to all bodies universally.” (1999, p. 795).
18 Newton solves the problem by saying that inductive inferences from “sensible” bodies to ones beyond the range of our senses are justified by simplicity: “Nature is always simple and ever consonant with itself” (p. 795). He doesn’t justify the simplicity claim in the Principia, though in various letters he tries to do so by appeal to the simplicity of God’s actions. For a critical discussion of Newton’s appeal to simplicity here, see Achinstein (2018, pp. 143–167). Those who don’t buy Newton’s simplicity solution, or some other justification of the inductive voyage from the observable to the unobservable, need to confront the bias objection.

19 I restrict the discussion here to classical physics before quantum theory was developed and theoretical reasons were given why certain quantities, e.g., simultaneous position and momentum, cannot be “observed” or measured, or why measuring one destroys the measurability of the other. It is during this classical period that Thomson (in 1897) inferred the existence of electrons and Perrin (in 1909) inferred the existence of molecules. Moreover, Duhem and van Fraassen are not basing their claims about observability, or, more generally, their anti-realism, on quantum mechanics, even though the formalism of the latter has been given both realist and anti-realist interpretations.

9 Two Advantages of Realism

Scientific realism has major advantages over anti-realism, at least anti-realism of the Duhem-van Fraassen sort. To understand the two I will mention, it is important to keep in mind that Duhem and van Fraassen are not saying that electrons, molecules, and other “unobservables” don’t exist. They are saying that if they do, scientists have no access to them, and so legitimate scientific inferences cannot be made to their existence and properties.

One advantage of realism is that it doesn’t have to distinguish between the “observable” and “unobservable” parts of the world, or even agree that such a distinction is viable. The issue of epistemic access is not whether the entity is observable but what sort of experiments and observations can be designed to show the existence of the entity. How experiments and observations will show this is an empirical matter that differs from one experiment, observation, and entity to another. Even if observability of something is required (in whatever sense is given to “observability” by anti-realists), it needn’t be observability of the entity and properties inferred. The observability of the Brownian particles and their motion sufficed for Perrin; observability of molecules and their motion was not necessary. A similar point can be made for Thomson in the case of electrons.

To the question, “Can scientists make a legitimate inference to the existence and properties of electrons, molecules, strings, and multiverses?” the proper answer is Yes, to the first two (Thomson did so on the basis of experiments with cathode tubes, Perrin with Brownian motion); Not Yet, to the third (string theorists have no experiments); and perhaps Never, to the fourth (no signals from other universes are possible) – even though all of these entities are classified as “unobservable” by our anti-realists.

If you want to criticize the arguments of Thomson and Perrin, do so on specific grounds of the sort physicists might offer, not on the “philosophical” grounds that no empirical argument to the existence of electrons or molecules is legitimate since they are “unobservable” entities. Follow this suggestion unless you can present convincing reasons showing that no inferences to the truth of claims about unobservables and their properties are legitimate.
The reasons examined here are not successful. If you want to be the sort of empiricist who demands that all claims about the existence and properties of scientific “unobservables” be justified by appeal to observations, then you can accept the “Weaker Assumption” of section 4 rather than the “Crucial Assumption.” You can accept the idea that such claims need to be justified by appeal to something observable, not the demand that any entity whose existence is inferred must itself be observable.

A second major advantage of scientific realism is that it permits a much broader range of inferences than anti-realism. It allows inferences to the existence and properties of entities such as electrons, molecules, and many others that are the building blocks of physical reality. And, equally important, it allows “theoretical” inferences from such entities and their properties to other “unobservable” entities and properties, even if these inferences have not been, and perhaps never will be or can be, established with the certainty of “I observe X, and X’s having P.” (For example, from the inferred claim that negatively charged electrons exist in the atom it might be inferred that positively charged particles probably do too, balancing the negative charge and yielding a neutral atom.) The best that anti-realist scientists can hope to do is give us theories that “save the phenomena,” not ones that tell us what is really going on among the building blocks of nature. Realists can do everything that anti-realists can do, and much more.

Some realist scientists are empirically justified on the basis of the arguments they offer, some aren’t. Those that are occasionally win Nobel Prizes for discovering the existence of these building blocks, as did Thomson and Perrin. Those that aren’t are not denied pride of place in the history of science because they dared to seek the truth.

We might even say that realists are more curious about the universe than anti-realists are. They want to find out what is really going on everywhere. Anti-realists don’t, because they think, mistakenly, that there is a portion of the universe epistemically closed to them, even if they can make use of theories about the closed part to make inferences about the open part. They are not curious about the truth of those theories because they believe it cannot be determined. Realist physicists, who seldom “strut and posture” (at least before they win a Nobel Prize), are much more optimistic about getting to the truth. More important, they are justifiably so, on empirical grounds. They have physicists such as Thomson and Perrin who provide such grounds in particular cases.

What the realist physicist should say to the anti-realist philosopher is what, in the original unredacted version of the play, Hamlet says to Horatio: “There are more things in heaven and earth, Horatio, than are dreamt of in your [anti-realist] philosophy. [Many have already been found, and I am optimistic that many more will be].”20

References

Achinstein, P. (1968): Concepts of Science. Baltimore: Johns Hopkins University Press.
Achinstein, P. (1991): Particles and Waves. New York: Oxford University Press.
Achinstein, P. (2002): “Is there a Valid Argument for Scientific Realism?” The Journal of Philosophy, v. 99, n. 9, pp. 470–495.
Achinstein, P. (2010a): Evidence, Explanation, and Realism. New York: Oxford University Press.
Achinstein, P. (2010b): “Who Really Discovered the Electron?” Reprinted in: Achinstein, P. (2010a).

20 I am indebted to my colleagues Justin Bledin and Steven Gross for sharp comments, and to Sonya Ringer for sharp editorial eyes.



Achinstein, P. (2013): Evidence and Method. New York: Oxford University Press.
Achinstein, P. (2018): Speculation: Within and About Science. New York: Oxford University Press.
Churchland, P. M. and Hooker, C. A. (eds.) (1985): Images of Science. Chicago: University of Chicago Press.
Duhem, P. (1954 [1905]): The Aim and Structure of Physical Theory. Trans. Philip P. Wiener. Princeton: Princeton University Press.
van Fraassen, B. (1980): The Scientific Image. Oxford: Oxford University Press.
Newton, I. (1999): The Principia. Trans. and ed. I. Bernard Cohen and Anne Whitman. Berkeley: University of California Press.
Perrin, J. (1909): “Brownian Motion and Molecular Reality.” Reprinted in: Nye, M. J. (ed.) (1986): The Question of the Atom, pp. 502–601. Los Angeles: Tomash Publishers.
Perrin, J. (1990): Atoms. Woodbridge, CT: Ox Bow.
Thomson, J. J. (1897): “Cathode Rays.” Philosophical Magazine, v. 44, pp. 303–326.

Alexander Bird

Scientific Realism and Three Problems for Inference to the Best Explanation

Abstract: Scientific Realism stands or falls with Inference to the Best Explanation. Realism cannot be accepted if one has reason to think that Inference to the Best Explanation cannot lead to the truth, or is unlikely to. Peter Lipton raises three important problems for his model of Inference to the Best Explanation: Voltaire’s objection, Hungerford’s objection, and the problem of Underconsideration. In this paper I show that Lipton’s own solutions do not fully answer those problems. I argue that what is required to solve these problems is for our conception of explanatory goodness to be truth-conducive because it is sensitive to the way the world actually is. I suggest that the cognitive psychology of exemplars, as described by Kuhn, may provide an answer.

Keywords: Scientific realism, inference to the best explanation, underconsideration, Voltaire’s objection, Hungerford’s objection, exemplars

1 Introduction

Science makes frequent use of Inference to the Best Explanation (IBE). And IBE is central to the case made by Stathis Psillos (1999) and others for scientific realism: if we are to uphold scientific realism, it had better be the case that IBE is truth-conducive. Peter Lipton (2004) provides a model of how IBE operates. He then considers three objections to the claim that IBE is truth-conducive. If these objections are good, then it is highly unlikely that IBE is truth-conducive. That would drive us towards scientific anti-realism. Lipton provides his own solutions to the problems he raises. I argue that they are only partially satisfactory and that the sceptical worries the three objections raise remain. I indicate what I think is required for a satisfactory solution.



2 Inference to the Best Explanation and its Problems

Inference to the Best Explanation (IBE) is about choosing among explanations. It is a matter of choosing among potential explanations of some phenomenon the one that is the best by certain criteria. If there is a suitable best potential explanation, IBE says that we may infer that it is the actual explanation, i.e. that the explanatory hypothesis is true. According to Peter Lipton, IBE is a two-stage process, where both stages are filters of potential explanations (Lipton 2004, pp. 56–64):

Stage 1: The first stage filters out the implausible explanations. The imaginative capacity of scientists generates all the plausible potential explanations and just leaves the remainder unconsidered.

Stage 2: At the second stage, scientists investigate the live potential explanations that have passed through the first filter, and ultimately rank them according to their explanatory goodness, in order to select the top ranking explanation as the explanation.

Two qualifications need to be made concerning the second stage:
– For the best explanation to be inferred it should normally, considered on its own, be a sufficiently good explanation of enough evidence. If our best explanation is a weak explanation even of a large quantity of data (Lipton 2004, pp. 63, 154), or explains only a limited amount of evidence well, then that is some reason to doubt that it is the actual explanation.
– For the best explanation to be inferred it must be significantly better than its nearest rival. If two competing explanations are both good enough, and one is slightly better than the other, our faith in that slightly better one must be slim. While Lipton does not mention this, it is a clear corollary of his account.1

Both stages in IBE raise important philosophical questions. A crucial question concerns the first stage.
Since it filters out so many logically possible explanations, what confidence can we have that the actual explanation is allowed through? Why should the imagination of scientists have the capacity to pick on

1 Some might complain that even this is not enough. If one hypothesis is still ‘live’, being consistent with the evidence, can we know that any rival hypothesis is true? My own view is that we cannot (cf. Bird 2005a, 2007b, 2010). But I shall not insist on that in this paper. This second qualification and my eliminativist doubts may be set aside if the purpose of the ranking is not to infer that the top ranked hypothesis is true, but rather to assess the plausibility of each hypothesis.



the true explanation among those it creates? This problem Lipton (2004, p. 152) calls ‘Underconsideration’. The stage 2 ranking is no good at all if the actual explanation hasn’t made it through stage 1 on account of the scientists’ failure to think of it. Assuming that the actual explanation is among those investigated at stage 2, two problems emerge, which Lipton calls ‘Hungerford’s objection’ and ‘Voltaire’s objection’. Hungerford’s objection raises the worry that what we consider to be the goodness of explanations (which Lipton calls ‘loveliness’) may be too subjective to have any relationship to the truth. However, even if explanatory goodness is objective, there will be many worlds where it does not correlate with truth. Voltaire’s objection says that it is implausible that our world is the best possible world by those standards, which it would have to be for the best explanation to be true. In this essay I will first articulate the three problems – Underconsideration, Hungerford’s objection, and Voltaire’s objection – in more detail and explain why I find Lipton’s own solutions to these problems to be only partially satisfactory. Thereafter I will show how we may find a solution to these problems in the cognitive psychology of scientific research, first articulated in Thomas Kuhn’s notion of an exemplar. The latter, suitably developed, will allow us to see why it is plausible that our standards of explanatory goodness are objective and likely to correlate with the truth.

3 The Problem of Underconsideration

No matter how accurate the ranking is at stage 2, the top ranked theory will not be true if the true theory is not among those theories selected at stage 1. The worry is that the theories to which we have actually given conscious consideration are a small subset of the range of all possible theories, and so the chance of our even thinking of the true theory is correspondingly negligible. As Lipton (cf. 2004, p. 152) puts it, if that is correct, then one’s thinking that the theory ranked as best at stage 2 is true is like thinking that Jones will win at the Olympics when all one knows is that he is the fastest miler in Britain. We may give the worry some bite by drawing on a version of the pessimistic metainduction. We think that scientists of previous eras got things wrong (geocentrism, the miasma theory of disease, Newtonian space, the phlogiston theory of combustion etc.). When such theories were first conceived and then adopted, the currently accepted theories were not even considered as possible alternatives.2

2 Kyle Stanford (2006) develops this line of argument against scientific realism in detail.



So it seems at least plausible to doubt that when scientists think of the possible explanations of some phenomenon they are in general able to generate the true theory among them. Lipton (2004, p. 152), who develops the problem from van Fraassen (1989, p. 143), presents it as an argument with two premises:

(R) (the ranking premise): “The testing of theories yields only a comparative warrant.” That is, the process of ranking at stage 2 reliably ranks more likely theories higher than less likely theories. But it does not tell us how likely any of these theories is.

(N) (the no-privilege premise): “Scientists have no reason to suppose that the process by which they generate theories for testing makes it likely that a true theory will be among those generated.” The true theory may simply not be generated at all, and there is no way of knowing how likely that is.

From which the following conclusion is drawn:

Conclusion: “While the best of the generated theories may be true, scientists can never have good reason to believe this.”

Lipton’s strategy in responding to Underconsideration is to argue that the ranking premise undermines the no-privilege premise. He emphasizes that the process of ranking will avail itself of background theories. Here is an example (my own). Why was Luis and Walter Alvarez’s impact theory of the K-T extinction held to be a good explanation? One reason was that it explained the iridium anomaly – an unexpectedly high concentration of iridium at the geological K-T boundary; and it could explain that thanks to a background theory which tells us that comets and asteroids have a high abundance of iridium, which is rare on Earth. Without such a background theory, the impact theory would have been a less good explanation of the data. If the background theories employed in ranking were false, then the ranking process would not be reliable. So the reliability of the ranking process implies that we often have true background theories.
Lipton then notes that our background theories are themselves products of earlier processes of IBE; in which case those processes did produce true theories. From which it follows that we can reject the no-privilege premise, (N), because it is frequently the case that we have considered and selected the true theory. Lipton’s response shows that if our ranking is reliable, then IBE as a whole is reliable. But I do not believe that he shows that Underconsideration is a self-undermining argument. For the role of the ranking premise, (R), in generating the conclusion is minimal. (N) entails the conclusion on its own, which can be seen straightforwardly by reflecting that if the conclusion were false – scientists did have reason for thinking that the best generated theory is true – then (N)



would be false – those scientists would thereby have reason for thinking that the theory generating process does frequently generate the true theory. Let us look at this in more detail. (R) may be divided into two components:

(R1) Ranking does not give an absolute measure of likeliness.

and

(R2) Ranking does give a reliable measure of comparative likeliness.

(R1) is a corollary of (N). It tells us that we do not get an absolute measure of theory likeliness from ranking. If we did get an absolute measure, then we would know how likely it is that we have considered the true theory. So it is not the case that (R1) plays a role in generating the conclusion of the Underconsideration argument. Rather, both (R1) and the conclusion are consequences of (N) on its own. Lipton’s intended argument against Underconsideration aims to undermine (N) by pointing out that (N) is inconsistent with (R2), the claim that ranking does reliably assess comparative likeliness. Hence Lipton’s argument will work only if (R2) is indeed part of the Underconsideration argument. But, as emphasized above, (R2) plays no role at all in generating the conclusion of the Underconsideration argument. So the fact that (R2) undermines the no-privilege premise does not show that the argument for Underconsideration as presented is self-defeating. Lipton (2004, p. 158) does note that the proponent of the Underconsideration argument can avoid his response by weakening the ranking premise, but replies, “Of course, if ranking were completely unreliable, the skeptic would have his conclusion, but this just takes us back to Hume. The point of the argument from Underconsideration was rather to show that the skeptical conclusion follows even if we grant scientists considerable inductive powers.” However, I do not see how dropping (R2) would ‘just take us back to Hume’. Yes, the conclusion, ‘we have no reason to believe that our inductive practices yield the truth’, is much the same as Hume’s conclusion.
But what is important is the argument for the conclusion, and the sceptical concern with IBE raised by the no-privilege premise is quite different from anything in Hume’s problem, which may be regarded as a stage 2 problem rather than a stage 1 problem. Underconsideration worries that we might not have thought of the correct hypothesis. Hume’s problem worries that even if we have thought of the correct hypothesis, any attempt to prove it will be question-beggingly circular. Lipton shows that a solution to the stage 2 problems implies a solution to the stage 1 problem. But that does not entail that the stage 1 problem reduces to the stage 2 problem; Lipton’s argument can be contraposed: if there is no solution to the stage 1 problem, there is no solution to the stage 2 problem. The problem of Underconsideration adds to the problems surrounding stage 2. Although Lipton explicitly presents the ranking premise, and its component (R2), as a premise in the Underconsideration argument, the quotation given in the first sentence of this



paragraph suggests instead that (R2) is supposed to be a concession made by the sceptic, a concession that emphasizes the power of the Underconsideration argument. In effect, the sceptic is taken to be saying, “Let us agree that stage 2 is justified, insofar as it is correct that if theory A is a better explanation than theory B, then A is more likely to be true than B. Still, that is no reason to believe that the best ranked theory is true, since we have no reason to believe that we have considered the true theory when we carried out the ranking.” We should conclude from Lipton’s argument that the sceptic should not assert that. If ranking is reliable, then IBE is safe from the problem of Underconsideration. But as we have seen, the sceptical proponent of the Underconsideration argument doesn’t need to claim that ranking is reliable. A perfectly plausible position for the IBE sceptic is to claim that we have no idea how reliable ranking is, just as we have no idea how likely it is that our generating process generates the true theory among those it puts forward for consideration. Lipton is right that Underconsideration fails as an intermediate scepticism – conceding to us the capacity of (relative) inductive ranking while denying absolute ranking and so knowledge. Nonetheless, it remains a full-blooded sceptical problem, and a distinct one from Hume’s problem. For note that the Underconsideration sceptic can still concede that if the scientist were in general able to think of the true hypothesis among those she considers then she would be able to use IBE to discern which one that is – a concession that the Humean sceptic would deny. So Underconsideration is a distinctive sceptical problem that still needs to be answered.

4 Hungerford’s Objection

The final lines of Keats’s ‘Ode on a Grecian Urn’ tell us:

Beauty is truth, truth beauty, – that is all
Ye know on earth, and all ye need to know.

These lines capture the central insight of Inference to the Best Explanation. Keats exaggerates poetically in saying that truth and beauty are identical. According to IBE they are correlated (when we take ‘beauty’ to refer to the good-making features of an explanation). Hungerford’s objection is so-called since it is encapsulated in Margaret Hungerford’s famous line in Molly Bawn, that ‘beauty is in the eye of the beholder’. If we put Keats and Hungerford together we get the conclusion that truth is correlated with the subjective perception of beauty. Unless relativism about truth of a radical sort is correct, that cannot be right. For the realist about truth, Hungerford’s objection is simply



the worry that our standards of explanatory goodness (beauty) are too subjective to have any correlation with something as objective as truth.3 Helge Kragh (1990, p. 287) articulates it thus:

The principle of mathematical beauty, like related aesthetic principles, is problematical. The main problem is that beauty is essentially subjective and hence cannot serve as a commonly defined tool for guiding or evaluating science.

Arguably some subjectively determined properties might correlate with objective features of the world: if it is correct that a person’s experience of colour is subjective, then that would be a subjective quality that does correlate with an objective property. On the other hand, so Hungerford’s objection goes, we have reason to believe that judgments of explanatory goodness are rather more like judgments of beauty than the experience of colour. For one thing, the terminology used to articulate notions of explanatory goodness, or aspects thereof, is taken from the aesthetic realm: loveliness, beauty, elegance. So the claim is that explanatory goodness is subjective in the respect that aesthetic properties are, which makes it unlikely to correlate with the truth.4 Lipton (2004, pp. 143–144) responds to Hungerford’s objection by pointing out that while there is interest-relativity in our assessment of the goodness of explanations, that relativity is to be welcomed, since it correlates with the interest-relativity of IBE itself. For example, we expect audience relativity, since different people have different evidence and background beliefs. IBE is also relative to the interests of the audience, since different interests determine different contrasts (Lipton argues that explanation is contrastive). I might be asked to explain why I flew to Vienna; but that request for an explanation could be intended in more than

3 It should not be thought that what I am calling ‘explanatory goodness’ coincides with the ‘beauty’ of a scientific theory, although the latter may be a component of the former, perhaps an important one. Walker (2012) renames Hungerford’s objection ‘the subjectivity objection’ in order to avoid this conflation.

4 Consider the claim made by some evolutionary psychologists, that our ideas of human physical beauty correlate with health, fertility and other properties that go towards biological fitness, the propensity to survive and reproduce (e.g. Grammer, Fink, Møller and Thornhill 2003). One objection made to this proposal is that ideas of beauty are too subjective to permit such a correlation. Different individuals have different views about what they find attractive; what people find beautiful may vary with their individual circumstances, with fashion, and with their cultural background. If so, our sense of beauty is too subjective to be any indicator of something objective, such as biological fitness. I am not endorsing this objection – maybe there is less variability in perceptions of beauty than one thinks, perhaps there is an unvarying core to our sense of beauty surrounded by a varying penumbra – rather I am pointing to the form of this argument, which it shares with Hungerford’s objection.



one way: to explain why I came by plane rather than by train, or why I came to Vienna rather than visiting Munich or staying at home, or even why I was invited to Vienna rather than some other, more illustrious person. A difference of interests might determine a different intended contrast amongst these possibilities, and hence what is to be explained and so what facts are inferred by IBE. While I agree with Lipton that IBE does show these forms of relativity, and that they are no threat to the objectivity of IBE, I also maintain that they do not get to the heart of the ‘beauty is in the eye of the beholder’ objection. One way to see this is to note that Hungerford’s objection might still be raised in a case where none of the relativity to which Lipton refers arises. We may find a case where we are focussing on one specific contrast and where the relevant evidence is agreed on by all; if in such a case the competing hypotheses are ranked according to some feature F, where the possession of F by a theory is a matter of subjective opinion, one would doubt that the ranking produced would correlate with likeliness of truth. In times past, scientific ideas were often expressed in poetic form – Lucretius’s De Rerum Natura is a prime instance.5 Imagine that in some community a theory is often preferred (as regards truth) to another because the poetry of the first is deemed aesthetically superior to that of the latter. Clearly that would be a poor basis for a theory preference. The challenge of Hungerford’s objection is that the judgments of our scientists in using IBE have something in common with such a community. Even if our scientists’ judgments are not transparently subjective, they are subjective nonetheless. Note that the subjectivity implied by Hungerford’s dictum ‘beauty is in the eye of the beholder’ is typically individual – a building that you find beautiful I may find ugly. However, the subjectivity of Lipton’s problem is public and shared.
The community of scientists is usually agreed on which explanations they find lovely. The objection is not that explanatory goodness is personal, but rather that it is something more like a fashion or the taste exhibited and extolled in a particular historical era. There is agreement, but, like a shared aesthetic response, it is too shifting and ungrounded a basis for evaluating objective truth. Thus, ancient and medieval astronomers may have been moved by the aesthetically pleasing nature of uniform circular motion, whereas in the Newtonian era elegance was found not in the motions of planets or other bodies but in the simplicity of the equations governing them. The beauty of modern physics is for many found

5 See Taub (2008) and Timmermann (2013) for discussion of scientific verse in ancient and late medieval periods respectively.



in something else, the symmetries that the theories embody.6 And if we go beyond physics, to biology for example, what counts as a desirable character of a theory is different again. How worrying is Hungerford’s problem for IBE? That rather depends on how plausible one finds the accusation of subjectivity in the sense just articulated. Are our judgments of explanatory goodness analogous to judgments of aesthetic value, albeit less transparently subjective? I suggest that the following are prima facie reasons to think that they might be, and which need to be addressed by any response to the problem:

i. Affect: While a scientist’s assessment of a theory will typically be informed by a conscious, rational assessment of the theory and its relationship to the evidence, ultimately her judgment of whether the theory is a good or poor explanation will be a matter of affect – her inner feeling about the theory. Is the theory an elegant explanation of the evidence or is it contrived? There is no methodology for making such an assessment – once all the cogitation and thinking is done, that is just a matter of how it feels to the scientist. (This is no different from the arts. A critic may discuss various aspects of a work of art, but whether it is an aesthetically successful work is a further judgment, a matter of the individual’s aesthetic response.)

ii. Pleasure: A key element of the inner response is pleasure (or its absence). In the arts, it is a positive affect that drives a positive judgment of aesthetic quality – the work should be satisfying or pleasing. Likewise, in the sciences, a scientist’s response to an elegant explanatory theory is one of pleasure, of finding it satisfying.

iii. Ineffability: Philosophers and others have difficulty in saying what exactly explanatory goodness is; there is no consensus on what it is or what contributes to it.
Simplicity, for example, may be thought to be a quality that is widely agreed to be a component of explanatory goodness, but even then philosophers disagree about whether it is an epistemic virtue of explanations or a merely pragmatic one. This difficulty in articulating the nature of explanatory goodness suggests that it has the same ineffability as subjective qualities such as aesthetic beauty.

iv. Variability: Even when a set of more concrete criteria is proposed, there is variation in what counts as exemplifying those criteria. Furthermore, there is variation in the weight or significance attached to the criteria. For example,

6 ‘Symmetry denotes that sort of concordance of several parts by which they integrate into a whole. Beauty is bound up with symmetry’ (Weyl 1952).




Kuhn (1977, pp. 321–322) lists five values prized across scientific disciplines and eras – accuracy, consistency, scope, simplicity, and fruitfulness – but he argues that there is variation in what counts as exemplifying these values and how significant they are relative to one another. This variability across time (and sometimes synchronically, as in the case of disagreement during a scientific revolution) is what one would expect if explanatory goodness operates much like taste or fashion.

v. Terminology: When philosophers do attempt to articulate their differing conceptions of explanatory goodness, they tend to use terms that refer to subjective qualities. Lipton talks of ‘loveliness’, McAllister (1999) of ‘beauty’, and Glynn (2010) of ‘elegance’. These terms, as well as others such as ‘harmony’, may be found in use among scientists also when they articulate the merits of a theory.

In summary, it is a plausible proposal that the central notion of IBE, ‘goodness’, is assessed by subjective criteria, in which case IBE is on epistemically shaky ground. Hungerford’s objection still stands in need of a robust response.

5 Voltaire’s Objection

Hungerford’s objection provides one ground for thinking that judgments of explanatory goodness are unlikely to correlate with the truth. Voltaire’s objection provides a different and independent reason for thinking that such a correlation is implausible. Let our standards of explanatory goodness be such that we prefer explanations that have quality S over those that do not, and those that have a high degree of S to those that have a low degree. For IBE to be reliable the world must be such that high-S explanations are more likely to be true than low-S explanations, and in particular that explanations with the maximum degree of S are frequently true. Call such a world an ‘S-world’. Voltaire’s objection contends that we do not have any reason for supposing that the actual world is an S-world. Imagine a community similar to that in the preceding section where scientists prefer theories expressed in dactylic hexameters to those in heroic couplets. However, they do so not for any aesthetic preference for the former, but simply because this is a well-established standard of this community. Hungerford’s objection no longer applies: whether a verse is in dactylic hexameters is an objective feature of the verse. So there could be a correlation between theories in dactylic hexameters and the truth. But why should there be? Clearly we would not expect any such correlation – we



do not believe that the actual world is a dactylic-hexameters-world. But what reason do we have for believing that the actual world is an S-world, where S-standards are the standards we actually use? Lipton does not present a solution to Voltaire’s objection as such, but argues that Voltaire’s objection in effect reduces to Hume’s problem of induction. The way I have expressed Voltaire’s problem in the preceding paragraph suggests this. Our use of IBE assumes that our world is a high-S world, just as our use of enumerative induction assumes that our world is a largely uniform one. But it is difficult to see how we can justify such assumptions without appealing either to the very same assumption or at least to something similarly intractable. However, in my view this undersells Voltaire’s objection. By giving the objection this name, Lipton is suggesting that the objector claims that the IBE proponent is like Dr Pangloss in Voltaire’s Candide. Dr Pangloss holds the Leibnizian view that the actual world is the best of all possible worlds: “Dans ce meilleur des mondes possibles, tout est au mieux.” Here ‘best’ means best in terms of moral and physical good and evil. Voltaire satirizes the view by pointing to the evidence for the contrary, such as the terrible earthquake at Lisbon in 1755. However, there is another objection to such a view. Note that there are very many possible worlds which differ from our own world to a lesser or greater degree. So the proposal that the actual world is the best of all these is to propose that the actual world is very special indeed. It is not merely an adequate world, nor even a fairly good world; it is the best world. And that just seems implausible. It would be an enormous fluke that we live in the best of possible worlds whereas our very similar counterparts in other possible worlds do not. The IBE enthusiast is likewise betting on the world being the best of all possible worlds in terms of explanatory goodness.
Which is also to make the actual world a very special world, and is likewise implausible. There must be many worlds in which the correct explanations are often ranked second or third best or even lower. So why is the actual world not one of them, but instead this very special world? If our standards are set a priori (as in Voltaire’s day our moral standards were held to be), then it would be a fluke that our world meets the S-standards to the highest or even a very high degree, when most worlds do not.7 There is therefore a difference from Hume’s problem. What does a world need to be like in order for inductive projection in that world to be truth-conducive? Roughly, a world should be such that in a good proportion of cases,

7 IBE might be reliable if the actual world is not the best world, but is very similar to it. The objection remains, since it would require the actual world to be one of a small and very special set of all possible worlds.



observed regularities persist; or, a little more precisely, the world should be such that our sampling procedures do not render our samples unrepresentative of the populations from which they are drawn. Let us call such worlds projectible. They contrast with unprojectible worlds where, although we observe regularities, they break down in future or other hitherto unobserved or unsampled cases. (We can ignore as irrelevant largely irregular worlds where there are few partial or complete interesting regularities to be observed at all.) The supposition that the actual world is a projectible world does not seem to be a Panglossian supposition. It requires the actual world to be different from the unprojectible worlds, but it does not require the world to be special or better than all (or almost all) the rest. The projective inductivist can hold that the actual world is just one of many projectible worlds and is nothing special in this respect. On the other hand, IBE does require the actual world to be special. When we infer that the best explanation is true, then for that inference to be correct, it must be the case that the actual world is the best by our explanatory standards. Walker (2012, p. 66) puts the point in terms of laws: ‘The defender of IBE must show that, of all the lovely law-governed possible worlds, the actual world has the loveliest laws.’ That requires the actual world to be unique. Perhaps being close to best will be enough for warranted belief or some knowledge. But just being good (but not close to being the best) by those standards will not be enough. So, compared to projective induction, IBE puts a rather stronger requirement on the way the world must be in order for its inferences to lead to knowledge or justification. If that is right, then Voltaire’s objection may remain even if we feel we have a solution to Hume’s problem.
Hume’s problem kicks in only with the demand that in using projective induction we must also show that the world is a projectible world. Reliabilist and other externalist epistemologists will reject that demand, asserting that we can have projective knowledge or warrant if we happen to inhabit a projectible world, and make a correct projective inference – it is not necessary to give a further justified reason for thinking that we do indeed inhabit such a world (Mellor 1991).

Reliabilism can also help IBE, but only to some extent. If we do indeed inhabit a world where the best explanations are true, then IBE will be reliable, and so we can have warrant, rational belief, and perhaps even knowledge, by use of IBE. In both cases the reliabilist is able to defeat the sceptic who asserts ‘knowledge and rational belief are not possible’.

But Voltaire’s objection raises a worry that seems to be consistent with accepting this reliabilist refutation of scepticism. For the Voltairean may respond, “I accept that knowledge from IBE is possible, since it is possible that we inhabit the best of all possible worlds. But it is just so implausible that we live in that optimal world, that we ought to conclude that as a matter of contingent fact, it


Alexander Bird

is highly unlikely that IBE is reliable; and it is therefore unlikely that IBE gives us knowledge or even rational belief.”

6 What is Needed to Defend IBE?

In this section I consider what kind of response is needed to defend IBE against the three objections. Note that we are in the territory of cognitive psychology. For we are asking about our human ability to think up (potential) explanations. We are asking whether we are able to exercise our imaginations in a way that bears an appropriate relationship to the truth.

What, then, do the objections to IBE tell us about explanatory goodness, such that it can be a guide to the truth? Hungerford’s objection tells us that explanatory goodness must be objective: it must be a quality that could correlate with the truth. Voltaire’s objection requires that we can explain why the actual world is a lovely one, a world where ‘good’ explanations tend to be true. That suggests that our standards of explanatory goodness cannot be a priori but are somehow answerable to the way the world is. Finally, Underconsideration requires that the methods by which we select our hypotheses for consideration can direct us, in a good proportion of cases, to hypotheses that are likely to be true, thereby making it likely that the true explanation is among those we consider. Hypothesis selection cannot be a random walk in the logical space of possibilities, but must be directed towards the actual explanation.

When we say that explanatory goodness must be objective, and so not subjective, we need to be clear what we mean by ‘subjective’. In the relevant sense, a property Φ is subjective when the truthmaker for some a’s being Φ is a subject’s affect in response to a, rather than some intrinsic property of a itself. When Mrs Hungerford wrote that ‘beauty is in the eye of the beholder’, she meant that whether some person (Eleanor Massereene in Molly Bawn) is beautiful is a matter of the attitude of the beholder, not an objective property of the person beheld.
As Hume ([1757] 1987) puts it, ‘Beauty is no quality in things themselves: It exists merely in the mind which contemplates them; and each mind perceives a different beauty.’ On the other hand, some objective, intrinsic properties of things might be detected by subjective experiences. So one way to understand secondary properties is as intrinsic dispositional properties of things to produce responses in observers. An instance of this view says that ‘red’ denotes an objective, intrinsic quality of objects, a disposition to cause certain (‘normal’) observers to have ‘red’



experiences.8 So while responding to Hungerford’s objection rules out explanatory goodness being a subjective property in the former sense (like beauty, according to Hungerford and Hume), it does not need to rule out explanatory goodness having the connection with subjectivity or response-dependence that redness has (when understood as a secondary quality).

Heuristics and Gut Feelings

I believe that it is possible to find an account of explanatory goodness that responds satisfactorily to the demands of the three objections. Goldstein and Gigerenzer (2002) discuss the recognition heuristic, a cognitive mechanism often exemplified in answering the following question: ‘Which city has the larger population, Detroit or Milwaukee?’ German students more frequently answered this question correctly than American students, despite knowing less about the cities than the Americans. In fact many of the German students had not heard of Milwaukee. And this is why they did better. Unconsciously they were using the recognition heuristic, which one might articulate thus: If you recognize the name of one city but not that of the other, then infer that the recognized city has the larger population. Because the heuristic is unconscious, its phenomenology is like that of intuition. When the heuristic is at work, the subject responds ‘Detroit, surely’ without knowing why she thinks that is the correct answer. Gigerenzer (2007, p. 16) uses the term ‘gut feeling’ to describe a judgment ‘1. that appears quickly in consciousness. 2. whose underlying reasons we are not fully aware of, and 3. is strong enough to act on’.

The relevance of gut feelings for my response to the problems of explanatory goodness is this. Gut feelings have an element of subjectivity: they are judgments based on feelings or intuitions. They are nonetheless judgments that are aimed at determining an objective fact (e.g., which of two named cities is larger). They are able to connect the subjective feeling with the objective fact because the cognitive mechanism behind the judgment, the unconscious heuristic, is able to access what the subject knows (e.g., recognizing the name of one city, but not the other). So it is possible for a cognitive mechanism to deliver judgments about objective facts with some degree of reliability, but which

8 In this territory lies the murky issue of response-dependent concepts and whether and in what sense they denote subjective or objective properties. See Rosen (1994), Wedgwood (1997), and Haukioja (2013).



are also subjective in the weaker sense discussed above. A solution to the problems of explanatory goodness should therefore not be impossible.

Cognitive psychologists from Bartlett (1932) onwards have described a number of mental mechanisms, such as schemata, scripts, frames, and analogies, that are in the same broad category as the heuristics that generate gut-feeling judgments. As with the recognition heuristic, the use of these mechanisms is partly or wholly unconscious and draws on our knowledge and experience of the world to generate judgments and decisions. A further important feature of such mechanisms is that they are typically acquired through experience. While Gigerenzer is concerned to show that some heuristics are innate, being the product of evolution alone, others are developed in the course of our interactions with the world (the recognition heuristic could not be entirely innate, for example). So the judgments we form are not strictly intuitive, since they are learned responses – they are what I have called ‘quasi-intuitive’ (Bird 2005b, 2007a). They are nonetheless phenomenologically like intuition in that they lead at least partly unconsciously to a judgment of the form ‘this feels right’.

Exemplars

The mechanism on which I draw in answering the problems of explanatory goodness is the exemplar. The term ‘exemplar’ is used by Kuhn (1970) to articulate one of the two senses in which he used the broader term ‘paradigm’. An exemplar is an exemplary solution to a scientific problem that serves as a model for subsequent science in the relevant field. Young scientists acquire their understanding of exemplars in large part by practising problem-solving with them – at first simple problems, then more complex ones. In so doing they acquire the ability to see a new problem as fundamentally similar to an exemplary puzzle and therefore requiring the same kind of solution (e.g. a vibrating string as similar to a pendulum and therefore as exhibiting simple or damped harmonic motion). This ‘seeing’ is essentially a matter of pattern recognition (where the pattern may be an abstract rather than a visual one). Pattern recognition, an ability typically acquired through repeated exposure and practice, is always in part and sometimes entirely an unconscious process as well as a cognitive one (think of recognizing a face, or correctly identifying the composer of an unfamiliar piece of music).

Kuhn (1977, pp. 321–322) also tells us that in acquiring understanding through training with exemplars, a scientist also acquires the shared values of her field. As mentioned above, Kuhn holds that there are, in the abstract, values that are shared across all of science (primarily accuracy, consistency, scope, simplicity, and fruitfulness). Nonetheless, what features count as exemplifying these



values, as well as how they are to be weighed against each other, varies from one field to another.

My proposal is that this is how a scientist’s sense of the quality of an explanatory theory is acquired. Indeed, I suggest that the best way to understand explanatory goodness is as a summary judgment of the value of a theory, to which the particular values mentioned contribute. Thinking about explanatory goodness in this way allows us to reply to the three objections.

Exemplars and Hungerford’s Objection

Above I argued that Hungerford’s objection, that explanatory goodness is too subjective to correlate with truth, appears plausible because (i) we judge it on the basis of affect, an inner response to a theory; (ii) we take pleasure when we encounter it; (iii) it is ineffable; (iv) it shows variability across different fields and times; and (v) we use aesthetic terminology to describe it. While these are strongly suggestive of a subjectivity shared with aesthetic qualities, they are consistent also with the weaker kind of subjectivity that can correlate with objective properties. As we saw, the gut feeling of a judgment formed by an unconscious heuristic has the latter kind of weak subjectivity. Judging the explanatory goodness of a potential explanation, informed by training with exemplars, is like a gut feeling in this respect, which explains feature (i). The feeling of rightness is also a positive one. The exemplar is held up as a paradigmatic example of how science ought to be; recognition that a hypothesis is like the exemplar provides intellectual pleasure – (ii). Because such judgments are based on a hypothesis ‘feeling right’, only partially informed by conscious reflection, it is difficult for a subject to articulate the basis of this judgment and to say what explanatory goodness is – (iii). Because in different fields and at different periods, there are different paradigms – exemplars – in operation, what counts as a good explanation will vary over time and from one scientific field to another – (iv). These aspects, (i)–(iv), are also found in aesthetic qualities, and aesthetic qualities are the most prominent instances of properties displaying (i)–(iv), so it is not surprising that we use aesthetic terminology to describe them – (v).

Exemplars and Voltaire’s Objection

Voltaire’s objection was, put simply, that if our standards of explanatory goodness are a priori, then it would be an implausible coincidence that our world, given all the ways a world could possibly be, meets those standards so closely that very frequently good explanations are true. The lesson to draw from that



objection is that standards of explanatory goodness are not a priori. And if standards of explanatory goodness are gained from training with exemplars, then indeed they are not a priori. Of course, that our standards are gained from exemplars does not guarantee that they do correlate with the truth. For if our exemplars are themselves badly mistaken, the standards of explanatory goodness gained from them will not be truth-conducive – much medieval science was in this position. However, if the exemplars of a field are true or highly truthlike, then the standards of explanatory goodness they generate will be truth-conducive. And since it would not be an implausible coincidence that such exemplars are true, it does not require an implausible coincidence that true theories have a high degree of explanatory goodness.

Exemplars and Underconsideration

Underconsideration raises the challenge of explaining why it is that our methods of theory selection at Stage 1 make it likely that the true theory is among those selected for consideration. Kuhn emphasizes the role that exemplars play in discovery, the process of generating hypotheses. Training with exemplars enables scientists to see the world in a particular way, which is to say that some features of the world become scientifically salient and others less so. It also means that certain explanatory schemata suggest themselves. In the Aristotelian paradigm motion is salient, and an Aristotelian scientist will naturally seek an explanation of motion in terms of natural tendencies in the object itself. A Newtonian scientist, by contrast, will not find rectilinear motion itself salient, but will regard changes in motion or nonrectilinear motion as salient, and so she will look for the explanations of these in forces governed by general laws. So the explanations we reach for are not randomly chosen from among all possible explanations, but will be ones that are similar or analogous to explanations with which we are already familiar. When our exemplars are erroneous, as in the Aristotelian case, then we probably will not consider the correct explanation. But when our exemplars are correct, then they make it likely that the true explanation will be among those we in fact consider.

7 Conclusion

Inference to the Best Explanation is the form of reasoning by which many of our best scientific theories are supported. (It is also central to the No Miracles Argument for scientific realism, though I regard that as less important.) If, therefore, there



are general arguments that suggest that IBE cannot be reliable, then such arguments are a threat to scientific realism. Peter Lipton articulates three such arguments. In this paper I have explained why I do not find Lipton’s own responses to those arguments convincing. The problems he raises still need solutions.

In my opinion, we do not learn from sceptical arguments that we do not have knowledge. For our conviction that we do indeed have knowledge (of the external world, of some science, etc.) should be greater than our confidence that the sceptical philosophical arguments are sound. Instead, by looking for and finding correct responses to the sceptical arguments, we learn something about the nature of knowledge and of our knowledge-producing processes. So in this paper I have tried to indicate what we learn about IBE from Lipton’s three problems. We learn that explanatory goodness is subjective only in the weak sense. It is a property that, although it reveals itself to us as an inner state or feeling of rightness, is nonetheless responsive to the world, so that it could correlate with the truth. Secondly, our standards of explanatory goodness are not a priori. Instead they are themselves responsive to the world, which is why it is not an implausible coincidence that true theories should meet those standards. Finally, we learn that the way in which we choose our hypotheses for investigation should not be a random selection from the space of possible hypotheses but should instead be directed towards the hypotheses with higher chances of being true.

It is, however, one thing to say that a correct account of explanatory goodness should show that it is only weakly subjective and that our standards of explanatory goodness should be responsive to the world, and quite another thing to show how this is in fact the case. I have suggested that one place to look for an answer is Kuhn’s idea that the processes of scientific cognition are driven by exemplars.
Exemplars are our paradigms of good science – they show what good science should look like and so become the source of our scientific values – our standards of explanatory goodness. Because our deployment of exemplars, and so of our standards of explanatory goodness, is quasi-intuitive – like intuition, but learned from experience – our sense of explanatory goodness is subjective, but only weakly. And in the not implausible circumstance that our exemplars are correct scientific hypotheses, it will not be a coincidence that explanatory goodness is indicative of the truth.

It might seem incongruous that I am using Kuhnian ideas to explain how it is possible for explanatory goodness to be truth-conducive. In The Structure of Scientific Revolutions Kuhn is very interested in the individual and social psychology of scientific change – there are more references to psychologists than to philosophers. Although he later expressed anti-realist views about truth, in the first edition of Structure Kuhn does not discuss truth, nor does he say anything



else that directly implies anti-realism, and the book was well received among practising scientists. It is only after criticism from Lakatos and others that Kuhn came to be thought of as an antirealist. And Lakatos himself had a quite specific, Popper-inspired, conception of what scientific rationality amounts to. Kuhn’s interest in the psychology of scientific thought, in particular, was excoriated by Lakatos. Times have since changed (even if the suspicion that Structure is anti-realist has not). What Lakatos (cf. 1970, p. 178) ridiculed as Kuhn’s appeal to ‘mob psychology’ would now be regarded as naturalistic social epistemology.

Finally, we should be clear about what this ‘defence of scientific realism’ amounts to. Even if correct, it does not mean that we have shown that some strong thesis of scientific realism is correct: that the successful theories of science are true or highly truthlike. Rather, this paper has a weaker, more defensive intent. If the three problems were correct, then one could not take a realist attitude to any theory supported by IBE (let alone to science in general). So this defence removes that sceptical threat. It does not thereby show that a successful scientific theory is true. Indeed, as we have seen, the exemplar story is consistent with exemplars being false and providing a misleading standard of explanatory goodness. Nonetheless, it is important to know that our scientific processes can be truth-conducive; whether they actually are depends on the details of each particular case. That’s good enough and is as much realism as we should expect from philosophy.

References

Bartlett, F. (1932): Remembering. Cambridge: Cambridge University Press.
Bird, A. (2005a): “Abductive Knowledge and Holmesian Inference.” In: Gendler, T. S. and Hawthorne, J. (eds.): Oxford Studies in Epistemology, pp. 1–31. Oxford: Oxford University Press.
Bird, A. (2005b): “Naturalizing Kuhn.” Proceedings of the Aristotelian Society, v. 105, pp. 109–127.
Bird, A. (2007a): “Incommensurability Naturalized.” In: Soler, L., Sankey, H. and Hoyningen-Huene, P. (eds.): Rethinking Scientific Change and Theory Comparison, Volume 255 of Boston Studies in the Philosophy of Science, pp. 21–39. Dordrecht: Springer.
Bird, A. (2007b): “Inference to the Only Explanation.” Philosophy and Phenomenological Research, v. 74, pp. 424–432.
Bird, A. (2010): “Eliminative Abduction – Examples from Medicine.” Studies in History and Philosophy of Science, v. 41, n. 4, pp. 345–352.
Gigerenzer, G. (2007): Gut Feelings: Short Cuts to Better Decision Making. New York, NY: Penguin.
Glynn, I. (2010): Elegance in Science. Oxford: Oxford University Press.



Goldstein, D. G. and Gigerenzer, G. (2002): “Models of Ecological Rationality: The Recognition Heuristic.” Psychological Review, v. 109, pp. 75–90.
Grammer, K., Fink, B., Møller, A. P. and Thornhill, R. (2003): “Darwinian Aesthetics: Sexual Selection and the Biology of Beauty.” Biological Reviews, v. 78, pp. 385–407.
Haukioja, J. (2013): “Different Notions of Response-dependence.” In: Schnieder, B., Hoeltje, M. and Steinberg, A. (eds.): Varieties of Dependence, pp. 167–190. Munich: Philosophia Verlag.
Hume, D. ([1757] 1987): “Of the Standard of Taste.” In: Essays Moral, Political, Literary. References to the edition by Eugene F. Miller. Indianapolis, IN: Liberty Fund.
Kragh, H. (1990): Dirac: A Scientific Biography. Cambridge: Cambridge University Press.
Kuhn, T. S. (1970): The Structure of Scientific Revolutions (2nd ed.). Chicago, IL: University of Chicago Press.
Kuhn, T. S. (1977): “Objectivity, Value Judgment, and Theory Choice.” In: Kuhn, T. S.: The Essential Tension, pp. 320–339. Chicago, IL: University of Chicago Press.
Lakatos, I. (1970): “Falsification and the Methodology of Scientific Research Programmes.” In: Lakatos, I. and Musgrave, A. (eds.): Criticism and the Growth of Knowledge, pp. 91–195. Cambridge: Cambridge University Press.
Lipton, P. (2004): Inference to the Best Explanation (2nd ed.). London: Routledge.
McAllister, J. (1999): Beauty and Revolution in Science. Ithaca, NY: Cornell University Press.
Mellor, D. H. (1991): “The Warrant of Induction.” In: Mellor, D. H.: Matters of Metaphysics, pp. 254–268. Cambridge: Cambridge University Press.
Psillos, S. (1999): Scientific Realism: How Science Tracks Truth. London: Routledge.
Rosen, G. (1994): “Objectivity and Modern Idealism: What is the Question?” In: Michael, M. and O’Leary-Hawthorne, J. (eds.): Philosophy in Mind, pp. 277–319. Dordrecht: Kluwer Academic Publishers.
Stanford, P. K. (2006): Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. New York: Oxford University Press.
Taub, L. (2008): Aetna and the Moon. Corvallis, OR: Oregon State University Press.
Timmermann, A. (2013): “Scientific and Encyclopaedic Verse.” In: Boffey, J. and Edwards, A. S. G. (eds.): A Companion to Fifteenth-Century English Poetry, pp. 199–212. Cambridge: D. S. Brewer.
van Fraassen, B. (1989): Laws and Symmetry. Oxford: Oxford University Press.
Walker, D. (2012): “A Kuhnian Defence of Inference to the Best Explanation.” Studies in History and Philosophy of Science, v. 43, pp. 64–73.
Wedgwood, R. (1997): “The Essence of Response-dependence.” European Review of Philosophy, v. 3, pp. 31–54.
Weyl, H. (1952): Symmetry. Princeton, NJ: Princeton University Press.

Howard Sankey

Scientific Realism and the Conflict with Common Sense

Abstract: The aim of this paper is to identify and resolve a tension between scientific realism and commonsense realism that arises due to a purported conflict between science and common sense. It has sometimes been held that common sense is antiquated theory which is found to be false and eliminated with the advance of science. In this paper, a distinction is proposed between three kinds of common sense: practical skill; widely held belief; basic common sense. It is agreed that common sense in the sense of widely held belief does succumb to the advance of science. It is left open to what extent practical skill varies with scientific change. It is argued that basic common sense is by and large resistant to change brought about by the advance of science. Epistemological aspects of basic common sense are explored. A number of objections to the proposal about basic common sense are considered. It is suggested that basic common sense is sufficiently epistemologically robust to provide a foundation both for scientific knowledge and for scientific realism.

Keywords: Science, common sense, scientific realism, commonsense realism

1 Introduction

Contemporary discussion of scientific realism is driven by debate between scientific realists and anti-realists of various persuasions. Arguments for and against scientific realism have led to sustained and vigorous debate. In the course of the debate, a range of compromise positions have emerged, e.g., entity realism, structural realism, deployment realism. I do not propose to enter into the debate between scientific realism and anti-realism in this paper. However, in the longer term I hope that what I have to say here will prove to be of some significance in the context of the broader debate.

Rather than engage in the debate with anti-realism, I seek to draw attention to a problem that has the potential to divide scientific realists among themselves. There is an unresolved, and largely ignored, tension that lies at the heart of the scientific realist position.1 The problem stems from the alleged existence of a

conflict between science and common sense. The problem, in a nutshell, is this: if science is to prevail in the conflict with common sense, it will undermine itself, since observation resides at the level of common sense. My aim in this paper is to bring this conflict into focus, and to argue for an understanding of common sense that has the potential to remove the purported conflict between science and common sense. I wish to argue that there is a basic form of common sense that has the capacity both to survive the advance of science and to provide the epistemic basis for science itself.

I will proceed as follows. In section 2, I will characterize the positions of scientific and commonsense realism in the way that I will understand these positions for the purposes of this paper. In section 3, I present a dilemma to which the conflict between science and common sense gives rise. In section 4, I propose a distinction between different forms of common sense on the basis of which the dilemma may be resolved. In section 5, I discuss epistemological aspects of the basic form of common sense. In section 6, I reply to some objections that may be presented against the view that I propose. In section 7, I offer concluding remarks.

2 Scientific Realism and Commonsense Realism

For the purposes of this paper, I will understand scientific realism in what I take to be a traditional way. Traditionally, scientific realism has been characterized as a view about the aim of science. The ultimate or fundamental aim of science is to discover the truth about the natural world.2 There may be other aims apart from truth. But they are lower-order aims that subserve the overarching aim of truth. Such a view about the aim of science has implications with respect to the nature of scientific progress. Given that the aim of science is truth, progress in science must consist in progress toward the truth. This may be understood as a cumulative build-up of truths or as convergence on the truth. The usual notion of truth associated with scientific realism is the correspondence theory of truth, though some realists may favour a deflationary conception of truth.

1 The tension will be familiar to readers accustomed to Sellars’ distinction between the manifest and the scientific images (Sellars 1963[1991], p. 5). I choose not to frame the issue in Sellarsian terms for a number of reasons, most notably because of Sellars’ restriction of inference within the manifest image to “correlational induction” (1963[1991], p. 7). This restriction seems to rule out inference to best explanation, which I regard as being part and parcel of common sense.
2 In writing of the “natural world,” I do not wish to foreclose the possibility of a realist philosophy of social science. So, I shall simply assume that the social world forms part of the natural world. The traditional focus of the scientific realist dispute has been with respect to the theories and theoretical entities of the natural sciences, especially the physical sciences.



What most clearly distinguishes scientific realism from anti-realist alternatives is the characteristic realist attitude toward theoretical science.3 Scientific realism takes the claims of theoretical science at face value rather than adopting an instrumentalist construal of theoretical discourse. Thus, according to scientific realism, the truth sought by science is not restricted to truth at the level of what may be observed by the human senses unaided. Science seeks and succeeds in discovering truths about theoretical entities, properties, processes, states of affairs, etc. Such theoretical items are typically unable to be observed using the human senses alone, though in some cases it may be possible to observe them with the assistance of instrumentation. According to scientific realism, discourse about unobservable theoretical entities such as atoms and electrons is to be interpreted literally as discourse that purports to refer to genuinely existing entities. Such things as atoms and electrons are to be conceived as genuine physical entities rather than as shorthand for experience or convenient fictions.

Such realism about the entities of theoretical science is to be distinguished from realism about the items of ordinary everyday experience, such as tables, chairs and other human beings. I will refer to realism about the world of ordinary everyday things as commonsense realism. According to commonsense realism, the ordinary items (tables, chairs, etc.) with which we interact on a daily basis are real, genuinely existing physical things.

The position of commonsense realism has both epistemological and metaphysical components. On the one hand, the epistemology of commonsense realism is anti-sceptical. It places due emphasis on empirical sources of knowledge. We arrive at knowledge of the world around us by making use of our senses. In some circumstances, our senses may lead us astray. But, for the most part, our senses are reliable.
Our senses are a good guide to the way the world is. They provide a sound basis for justified belief and knowledge about the everyday world. By way of such belief and knowledge, they also serve as the basis for successful practical interaction with the world. On the other hand, the metaphysics of common sense is robustly realist. The ordinary things that we perceive by means of our senses are real things. They do not cease to exist when we do not perceive them. They are mind-independent entities that do not depend on us for their ongoing existence.

3 This basis for the distinction between scientific realism and anti-realist alternatives does not capture all possible contrasts between scientific realist and anti-realist positions. Hilary Putnam’s internal realist position, which I regard as a form of anti-realism, was realistic about the entities of theoretical science (see Putnam 1981). For remarks by Putnam on the kind of scientific realism that he accepted and the kind of metaphysical realism that he at one stage rejected, see Putnam (1982). For an argument that the internal realist’s epistemic theory of truth collapses into idealism, see Musgrave (1997).



In a certain sense, scientific realism and commonsense realism are participants in different philosophical games. As such, they face different opponents. The primary opponent of scientific realism is anti-realism about theoretical entities. By contrast, the primary opponent of commonsense realism is scepticism about the external world, though it is also opposed to idealist views of the mind-dependent status of ordinary objects. The contrast may also be set in terms of a contrast between debates within different sub-branches of philosophy. The debate between scientific realism and anti-realism with respect to the reality of theoretical entities is a debate that arises within the philosophy of science. By contrast, the debate between commonsense realism and its sceptical and idealist adversaries is a debate that arises within general epistemology.4

3 A Dilemma for Scientific Realism

At first sight, commonsense realism and scientific realism may seem to constitute a natural partnership.5 Commonsense realism is realism about observable entities. Scientific realism is realism about unobservable entities. It is possible to be a realist about both observable entities and unobservable entities. So, it is possible to be both a commonsense realist and a scientific realist at the same time. Indeed, this may seem the natural position to adopt.

In this instance, however, appearances are apt to mislead. The situation is more complicated than may appear at first sight. The reason is that there is, or is purported to be, a conflict between science and common sense. But if science and common sense are in conflict, it may prove difficult to reconcile scientific realism with commonsense realism. They may not form so natural a pairing after all.

The first step in seeing the potential for conflict between science and common sense is to note an apparent similarity between the two. In both scientific and commonsense thinking, we are apt to form beliefs about the world. These beliefs are in a certain respect hypothetical in nature. Suppose, for example, that I form a belief about the colour of my desk. Upon looking at my desk, I form the belief that it is white. Such a belief is in effect a hypothesis about the colour of the desk. I am certain that my eyes do not deceive me on this occasion. Nevertheless, a belief is the sort of thing that might be false. In that respect, the belief is hypothetical in nature. Thus, my belief about the colour of the desk has in common with a scientific claim that it is a hypothesis about the world.6

From this apparent similarity, it is a short step to the basis of the conflict. If commonsense beliefs are hypotheses, it can hardly escape notice that they lack sophistication by comparison with scientific hypotheses. Indeed, a number of philosophers have taken the view that common sense is really nothing more than outmoded theory that has been passed down to us from our primitive ancestors.7 As scientific inquiry advances, it exposes the erroneous ways of our commonsense belief, showing it to be mistaken in various ways. As we accept an increasing amount of what science tells us about the world, we thereby come to reject more and more of our commonsense beliefs. On entering the world revealed by modern science, we leave behind the erroneous beliefs of our ancestors.

There is a host of examples that may be given of the purported conflict between science and common sense.8 I will content myself with what is perhaps the best-known example of the conflict. This is the case of Sir Arthur Eddington’s two tables. Eddington’s book, The Nature of the Physical World, opens with the words:

I have settled down to the task of writing these lectures and have drawn up my chairs to my two tables. Two tables! Yes; there are duplicates of every object about me . . . . (Eddington 1933, p. xi)

4 As this way of setting the contrast suggests, another way of characterizing commonsense realism might be as realism about the “external world,” since it stands opposed to scepticism about the external world, as well as to the idealism that emerges from attempting to block such scepticism. However, I tend not to employ the expression ‘external world’. It mischaracterizes our actual situation. We are not separated off from a reality that is outside of ourselves. We are a part of the world, immediately engaged with it. Setting the issue in terms of an external world provides scepticism with more encouragement than is warranted.

5 In this section, I draw on and further develop the points originally presented in Sankey (2018).

Strictly speaking, of course, there are not two tables, but only one table described from two different perspectives. Eddington explains that the first table is the “ordinary table.” It is “familiar . . . from earliest years,” “a commonplace object.” The second table is the “scientific table,” i.e., the table as described by physics.

6 In saying that my belief about the desk is hypothetical, I only wish to draw attention to the fact that, qua belief, it is the sort of thing that may be false. I do not wish to suggest that basic perceptual beliefs are inferred. Indeed, I favour the view that basic perceptual beliefs have direct, non-inferential warrant.

7 Bertrand Russell is sometimes credited with this view (see Campbell 1988, p. 164). There is a hint of it in Quine’s comparison of positing molecules with positing the “bodies of common sense” (1966, p. 237). The idea that common sense is a theory is explicit in Churchland (1979, p. 2). It is found throughout Feyerabend’s writings; his claim that modern physics shows there to be no tables, chairs, etc., is a particularly dramatic statement of the thought (1981, pp. 158–159). See also his letter to Smart in his (2016, pp. 211 ff.).

8 Examples include the conflict between geocentric and heliocentric astronomy, as well as conflicting views about the reality of time or colour, the existence of free will and the relationship between mind and brain.



The ordinary table is “substantial.” By contrast, there is “nothing substantial” about the scientific table. It is “nearly all empty space” (Eddington 1933, p. xii). It is not made of solid matter at all. Yet “delicate test and remorseless logic” assure him that the “scientific table is the only one which is really there” (Eddington 1933, p. xiv). In sum, for Eddington there is a conflict between the table of ordinary common sense and the scientific table. In his view, only the scientific table is real.

What is the scientific realist to make of the conflict between science and common sense? This depends on what the scientific realist takes to be involved when one accepts or believes a scientific claim such as an assertion about the nature of a theoretical entity. According to scientific realism, we are to take what science says as the truth about the world.9 Acceptance of a theory constitutes belief in the truth of assertions made by the theory. If we accept what science says as true, and science conflicts with common sense, then we must reject common sense as mistaken. Given the conflict between science and common sense, adoption of a realist stance toward science will lead to the overthrow of common sense.

Commonsense realism awaits a similar fate. As previously indicated, some philosophers hold that common sense is mistaken theory that is to be rejected with the advance of science. Philosophers who hold this view may favour an eliminativist approach to common sense. For them, common sense is to be eliminated in favour of science. No doubt, some scientific realists will endorse an eliminativist attitude toward common sense. Presumably, scientific realists who take such an eliminativist attitude will see no need to adopt or defend the position of commonsense realism.

In my view, such an eliminativist form of scientific realism is deeply problematic. It should be resisted. Observation provides the evidential basis for science. The empirical evidence on which science is based is evidence arrived at by means of observation. It derives either from immediate sense perception or from instrumentation which extends the senses. But observation is part of common sense. Observation using our senses is the primary means by which we obtain knowledge of the ordinary things with which we interact every day. If we reject common sense, we must reject observation as well. Thus, without common sense, the evidential basis for science disappears. We would have no basis to accept science in the first place.

Actually, the situation is worse than this suggests. If we have no basis to accept science, we would have no basis to reject common sense. This means that we must accept common sense instead of science. Thus, to reject common sense on the basis of science is self-defeating.

I will conclude this section by stating the problem that I have been presenting in the form of a dilemma. Either we admit the conflict between science and common sense or we embrace common sense. If we admit the conflict, we remove the evidential basis for science and have no reason to accept science in the first place. If we embrace common sense, we must reject the conflict between science and common sense as an illusion. The first option requires the scientific realist to develop an account of the evidential basis of science in which observation plays no role. I see little meaningful prospect for this.10 I take the second option to be more promising. That is the option that I propose to explore.

9 This should, of course, be qualified in a number of ways. Belief in the truth of scientific claims should be restricted to the most highly confirmed or well-established scientific claims. Moreover, scientific realists typically speak of approximate truth rather than committing themselves to the complete truth of theories.

4 Three Forms of Common Sense

In the remainder of this paper, I will propose an account of common sense on the basis of which the dilemma may be resolved.11 On the view that I propose, it is possible to distinguish between different forms of common sense. Given this, it may be argued that, while there are stable elements of common sense, there are also elements that may undergo change as a result of the advance of science. I wish to suggest that the stable elements of common sense involve the use of our observational capacities, and so are able to provide an evidential basis for science.

Before turning to the different forms of common sense, a preliminary remark about the general notion of common sense is in order. It seems to me that the expression ‘common sense’ draws connotatively on two meanings that the word ‘sense’ may be used to convey. On the one hand, the word ‘sense’ refers to the various sensory modalities, i.e., sight, smell, taste, hearing and touch. On the other hand, the word ‘sense’ is also used to refer to a capacity for sound judgement, as when one is said to have good sense or to behave in a sensible manner. I wish to suggest that both meanings of the word ‘sense’ are at play when we speak of common sense. The exercise of common sense may involve both the use of sensory perception and a capacity to make sound judgement.

I turn now to the distinction between forms of common sense. On the view that I propose, the notion of common sense has a certain ambiguity. In particular, I wish to suggest that there are at least three different things that the expression ‘common sense’ may be used to refer to. In presenting this set of distinctions, I am not attempting to provide a conceptual analysis of the notion of common sense. I only want to suggest that our notion of common sense is sometimes applied to these different things. Nor do I wish to suggest that the distinctions that I propose yield an exhaustive classification of all that may fall under the head of common sense. There are a number of different uses of the expression ‘common sense’ that my distinctions do not capture.12 I suggest only that there are recognizable uses of the notion in which it does apply to things of the kinds that I am about to distinguish. The three different forms of common sense that I wish to distinguish are as follows:

1. Practical skill: common sense is sometimes taken to be involved in the possession or application of practical skill or expertise. Technicians and tradespeople have a range of different practical or technical skills. A person who possesses this kind of common sense is able to solve practical problems which may defeat those who do not have such skills. The ability to solve practical problems in a way not available to those who lack a skill also suggests that there may be a capacity for judgement relating to such problems that is connected with having the skill.

2. Widely held belief: the notion of common sense is sometimes used to refer to a set of beliefs that are widely held by members of a culture at a particular period of time. Such beliefs may appear so obvious to members of the culture that they are simply taken for granted. Some widely held beliefs may be so deeply held that members of a culture may find it difficult or impossible to question the beliefs. This second sense of “common sense” might be thought of as a quasi-anthropological or cultural-historical use of the expression.

3. Basic common sense: underlying the various practical skills and widely held beliefs, there is a more rudimentary form of common sense. It is typified by our unreflective awareness of the world around us and manifests itself in the routine way in which we deal with objects in our immediate vicinity. Our senses provide us with knowledge of our surroundings on the basis of which we navigate our way around objects in our environment. I will refer to this rudimentary form of common sense as basic common sense.13

10 I do not have a knockdown objection against the eliminativist approach. My point is simply the programmatic one that the work of developing an eliminativist epistemology has not been done. Indeed, almost nobody even seems to realize that such work is necessary. One exception of which I am aware is Churchland (e.g. 1981), who sketches a number of suggestions about how epistemic matters might be approached in the context of an eliminative materialist philosophy of mind (see also Churchland 1979, chapter 5).

11 The distinction between three forms of common sense was originally proposed in Sankey (2014).

12 The Democratic candidate in the 2016 U.S. presidential election, Hillary Clinton, called for a “commonsense approach” to gun control. Former Australian Prime Minister, Tony Abbott, spoke of the need for common sense in the debate about marriage equality. Neither of these two uses fits easily into any of the forms of common sense that I am about to distinguish in the text. Nor does the Aristotelian idea of a single more general sense that lies behind the various sensory modalities fit into my classification. All I claim is that my proposals capture some recognizable aspects of the notion of common sense.

As previously indicated, I do not propose the distinction between the above three forms of common sense in order to provide a conceptual analysis of the notion. What I do suggest is that practical skill, widely held belief and basic common sense are significant examples of the kind of thing to which the notion of common sense is on various occasions appropriately applied. Nor do I regard my proposal as one that is based on a priori considerations. The proposal is intended in naturalistic spirit, as an empirical claim both about how the notion of common sense is employed and about the items to which the notion is applied. Again, I do not suggest that this set of distinctions fully captures the notion of common sense. What I do suggest is that, on the basis of this set of distinctions, it is possible to resolve the dilemma which arises for scientific realism as a result of the purported conflict between science and common sense.

To resolve the dilemma, I suggest that we focus on the third form of common sense, i.e., “basic common sense.” Such basic common sense would seem to play a more fundamental role in our lives than practical or technical skill. A person may possess basic common sense even though they fail to have the practical skills of a technician or tradesperson. Basic common sense is employed on an ongoing basis in our mundane interactions with our immediate surroundings. In what follows, I will set aside the issue of practical skill and focus on basic common sense.14

13 What I describe as “basic common sense” seems to me to be very close to what Armstrong calls “bedrock common sense” (Armstrong 2004, p. 27). Elsewhere, Armstrong writes that “some of the things that have been accounted commonsense have turned out to be erroneous, and present-day commonsense may contain its quota of errors. But it seems to me that there is an inner core of our beliefs which we cannot deny to be cases of knowledge without falling into irrationality in some very strong sense” (1999, p. 78). Again, I think that what Armstrong describes as “an inner core of our beliefs” may be close to what I am calling “basic common sense.”

14 In setting practical skill aside, I do not wish to suggest that it is irrelevant to the matter at hand. There are interesting questions about how the practical skills involved in laboratory practice are affected by theoretical change, as well as whether skills of a non-scientific nature are influenced by science. But, for the present task of showing that there is a form of common sense that withstands scientific advance, I focus instead on basic common sense.



The important contrast for our purposes is the distinction between widely held belief and basic common sense. The widely held beliefs of a culture in a particular historical time-period may be brought into question and rejected or modified on the basis of developments in science. As a result, the advance of science may lead to the overthrow of the widely held beliefs of particular cultures. By contrast, the sense-based beliefs involved in practical interaction with our immediate environment have a more solid basis. Beliefs closely integrated with everyday practical action resist overthrow. For the most part, basic common sense survives the advance of science.15

I wish to suggest that basic common sense provides the evidential basis on which science is founded. In our ordinary everyday interaction with the physical objects that surround us, we make routine use of our senses in determining how things stand in the world around us. It is precisely such use of our sensory capabilities which is involved in the collection of the observational data which forms the evidential basis for the sciences. Even where instrumentation is employed to extend the senses, our usual perceptual apparatus is employed in reading the outputs of the instruments. Given the involvement of basic common sense in establishing the observational basis of science, I suggest that scientific realism and basic common sense are well-suited to each other. There need be no clash between science and basic common sense.

5 Epistemological Aspects of Basic Common Sense

In this section, I wish to make some brief remarks about epistemological aspects of basic common sense. It is important to emphasize that what I refer to as basic common sense is primarily involved in practical interaction with the objects that we encounter in our everyday interaction with the world. It is most apparent in our immediate, unreflective awareness of the objects in close proximity to us within our environment. On the basis of such awareness, we navigate in and around our environment in a routine way. We avoid tables, walk through open doorways, change lanes while driving, wash and dry dishes, etc., each and every day of our lives.

Perception plays a vital role in the exercise of basic common sense. We use perception to arrive at knowledge and justified belief about our immediate surroundings. We undertake action based on such knowledge and justified belief in order to achieve desired results. We modify goals based on how we perceive the world to be. We may alter an intended course of action as our senses inform us of previously unknown facts about a situation. We may abandon a course of action because perception provides reason not to pursue an intended goal. In these and many other ways, our practical interaction with the world around us is informed by perceptually based knowledge and justified belief.16

The attitude of basic common sense toward the objects and states of affairs which we encounter in the course of daily activity is a realistic one. In such activity we interact with a world of material objects of various shapes and sizes with a multitude of properties. We acquire immediate knowledge of such things by means of our sensory experience of those objects.17 The material objects that we deal with on a daily basis have mind-independent existence. We interact causally with them in bodily movement and action. But, although we may physically interact with them, the objects themselves are outside the control of our minds. Without bodily movement or action, thought cannot by itself bring about change in the world of objects.

On occasion, our senses mislead us. We may be subject to an illusion. We may misperceive an object or misinterpret an object that we perceive. These are ordinary occurrences that arise in the course of everyday life. In actual practice, they do not give rise to scepticism. Instead, errors relating to perception are dealt with in a routine and typically automatic manner by means of a range of corrective techniques. If the visual appearance of an item seems odd, we double-check by looking at it again. Sometimes, we may look at the item from a different angle or perspective. We may use a different sense modality from the one that originally misled us. If there are others around, we may ask someone else if they perceive the same thing as we have. From the point of view of basic common sense, no sceptical moral is to be drawn from the possibility of perceptual error. In our practical dealings with the world, we are entitled to a reasonable degree of practical certainty that the world is by and large as it seems to us to be. Only when something goes perceptually awry in specific circumstances do we form doubt. Even then, doubt is restricted to those specific circumstances rather than being generalized in sceptical fashion.

In sum, perceptually formed beliefs lie at the heart of basic common sense, and perceptual error does not give rise to scepticism. But questions may still be asked about the justificatory status of such beliefs. In my view, there is a range of points to be made in relation to the justification of the perceptual beliefs that arise within basic common sense. We may start, first of all, with the classic Moorean point that we may be more certain of the reality of directly perceived objects (e.g., my hands) than we are of any of the controversial philosophical assumptions that might lead us to doubt the existence of such things.18 Secondly, the involvement of perceptual beliefs in successful practical interaction with the world provides strong pragmatic vindication for perceptual belief and perceptual belief-forming processes. Thirdly, basic perceptual beliefs seem to me to possess direct perceptual warrant given that they are formed by means of a reliable belief-forming process. Fourthly, we may adopt a point from Michael Devitt, who argues that “over a few years of living people” arrive at realism about ordinary objects which “is confirmed day by day in their experience” (Devitt 2002, p. 22). The perceptual beliefs of basic common sense have strong empirical support due to the immense variety of experience which confirms those beliefs. Finally, we have strong evolutionary grounds for confidence in the perceptual beliefs that lie at the heart of basic common sense. Our survival constitutes evidence of the reliability of such beliefs.

15 Fallibilism is no doubt the appropriate attitude to adopt toward scientific knowledge. But this may not be the case at the level of basic common sense. Some basic commonsense beliefs (e.g. G. E. Moore’s “here is one hand . . . ”) seem to have a degree of certainty that few, if any, theoretical beliefs may achieve. We need a graded notion of certainty. Some of our beliefs seem to have a high measure of certainty even if they may lack certainty in some absolute sense. And some of our beliefs are more certain than others.

16 In my view, basic common sense is closely involved with practical action. Actions have aims. Aims are typically things that we value. This suggests that value plays a role in basic common sense, or, at the very least, that it interacts with it. I will not explore the implications of this point here. My focus for present purposes is on the epistemological and, to a lesser extent, metaphysical aspects of basic common sense.

17 In speaking of immediate knowledge gained by perception, I do not wish to exclude indirect knowledge or inferentially warranted (non-basic) belief. It seems to me that a range of inferential strategies are available within basic common sense, though this is not of crucial importance in the present discussion.

6 Objections and Replies

In this section, I will consider a number of objections that may be raised against the position developed here. This will permit me to articulate a number of aspects of the position that I have not dealt with so far.

18 For this interpretation of Moore’s proof as a comparative plausibility argument, see Lycan (2001).



Objection one: It is not clear that basic common sense does survive scientific change. Consider the case of Eddington’s two tables. The table of common sense is solid. The table of science is mostly empty space. On the assumption that the solid table is the table of basic common sense, there is a conflict between science and basic common sense. On the further assumption that what science tells us is true, basic common sense is to be rejected as science progresses.

Reply: Let us suppose that the table really is mostly empty space rather than solid matter. Does this mean that basic common sense is mistaken? I am not so sure. The reason is that, at the level of our practical interaction with the table, it remains the case that the table is a solid object. It constitutes a physical obstacle. If the table is between us and the door, we cannot get to the door by passing through the table. We may walk around it or perhaps climb over or crawl under it. The table with which we interact in ordinary practical activity remains exactly as it was before it was discovered to be mostly empty space. In this sense, it seems to me that basic common sense has survived scientific developments.

What has changed is that science now provides an explanation of the apparent solidity of the table in terms of the fundamental particles of which the table is made, and of the behaviour and relations between those particles. This explanation sheds useful light on the nature of the table, as well as on various features of the experience of the table that we have in the course of our interaction with it. But it does not show that the beliefs at the level of basic common sense about the table are mistaken or in need of elimination. At that level, our beliefs and actions with respect to the table remain exactly as before.19

Objection two: The relationship between science and common sense has been mischaracterized. Science has an influence on common sense.
Before the rise of modern science, there may have been no scientific content in common sense. But contemporary common sense contains elements drawn from the sciences. Thus, the contrast drawn between science and common sense is ill-conceived.

Reply: There are two points to be made in reply to this objection. First, as stated, the objection is directed both against the position proposed here and against the eliminativist view that is rejected here. The eliminativist holds that common sense is to be eliminated with the advance of science. Against the eliminativist, the current objection suggests that science becomes integrated into common sense rather than leading to the wholesale elimination of common sense. As such, the objection leads either to revision or rejection of the eliminativist position.

Turning to the second point, the position advocated here is able to absorb the main force of the objection. The objection is to be understood as making a point about common sense in the sense of beliefs that are widely held within a culture at a given time-period. The objection does not apply to what I call basic common sense. In contemporary cultures, there is no doubt that science makes a significant contribution to common sense in the sense of widely held belief. As for whether science contributes to basic common sense, this is less clear. Because what I call basic common sense pertains mainly to beliefs and judgements involved in practical interaction with the objects around us, it is not obvious that science does become integrated into basic common sense. On the other hand, there is no need to exclude the possibility that science may contribute to the content of basic common sense. This is an empirical matter in need of further investigation. It will depend a great deal on what mental mechanisms are involved in basic common sense.20

Objection three: Scientific realism has no need of common sense. The leading argument for scientific realism is the success argument. Scientific realism is to be accepted because it provides the best explanation of the success of science. Common sense has no role to play in the case for scientific realism.

Reply: The problem with this objection is that it fails to take into account the nature of the success of science of which scientific realism is said to be the best explanation. The success of science is success at the empirical level, as well as at the level of practical application. The evidence for the success of science must be able to be detected at the observational level by means of perception. As a result, basic common sense is directly involved in connection with the empirical evidence and practical applications which constitute the evidence for the success of science. Without the involvement of basic common sense, the success argument for scientific realism cannot get off the ground.

19 An opposing view has been proposed by Orly Shenker, who suggests that what science explains is not the solidity of the table but why our minds have the experience of the table being solid (Shenker, manuscript).

20 If basic common sense is grounded in informationally encapsulated mental modules, then science might have no impact on basic common sense. On the other hand, if the mechanisms are not informationally encapsulated, then potentially science might have an influence on basic common sense. For an interesting discussion of common sense in the context of Fodor’s modularity view, see Campbell (1988, pp. 166–170). What Campbell refers to as “the Basic Observational Fragment of common sense” (1988, p. 170) may be very close to what I call “basic common sense.”



7 Conclusion

My aim in this paper has been to draw attention to a tension that lies at the heart of scientific realism. Because the debate about scientific realism is focussed on the dispute with anti-realism, this tension seems to me to be largely, if not completely, ignored. I have attempted to frame the problem in terms of a dilemma by presenting scientific realism with two options. Either scientific realism must adopt an eliminativist approach to common sense or it must embrace common sense. If scientific realists choose the eliminativist option, their work is cut out for them, since an account must be given of the empirical basis of science which does without the perceptual apparatus of ordinary common sense. If scientific realists choose the option of embracing common sense, then it must be shown that the apparent conflict between science and common sense is an illusion. In my view, the first option holds little promise. It is the second option that we should pursue.

On the approach that I have presented here, it is possible to make out a middle path. On the one hand, we may allow that common sense in the sense of the widely held beliefs of a culture at a time is indeed subject to elimination on the basis of scientific developments. On the other hand, the basic common sense enacted in our ordinary everyday practical dealings with the world around us survives the advance of science. Not only does it survive, but we have good reason to believe in the dictates of basic common sense, since they are in large part justified on the basis of perceptual experience. Indeed, I would go further than this. Basic common sense provides us with what David Armstrong has called the “epistemic base” (1999, p. 77). It is the base on which all other knowledge, including scientific knowledge, is built.

At the outset of this paper, I said that I would not engage in the dispute with anti-realism here.
But I also said that I hope that what I say here will be of some future relevance to the dispute. Allow me to close by briefly indicating why I have this hope. In my view, scientific realism should be seen as an outgrowth of commonsense realism. If we can provide a good foundation for commonsense realism, and then firmly ground scientific realism in commonsense realism, then it is my hope that scientific realism will obtain a secure foundation because of its grounding in commonsense realism. In other words, my hope is that some of the considerations which currently seem to favour anti-realist views of science will be weakened by reflection upon the way in which scientific realism stems from commonsense realism.

Scientific Realism and the Conflict with Common Sense


References

Armstrong, D. M. (1999): “A Naturalist Program: Epistemology and Ontology.” Proceedings and Addresses of the American Philosophical Association, v. 73, n. 2, pp. 77–89.
Armstrong, D. M. (2004): Truth and Truthmakers. Cambridge: Cambridge University Press.
Campbell, K. (1988): “Philosophy and Common Sense.” Philosophy, v. 63, n. 244, pp. 161–174.
Churchland, P. (1979): Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.
Churchland, P. (1981): “Eliminative Materialism and the Propositional Attitudes.” The Journal of Philosophy, v. 78, n. 2, pp. 67–90.
Devitt, M. (2002): “A Naturalistic Defence of Realism.” In: Marsonet, M. (ed.): The Problem of Realism, pp. 12–34. Aldershot: Ashgate.
Eddington, A. (1933): The Nature of the Physical World. Cambridge: Cambridge University Press.
Feyerabend, P. K. (1981): “Linguistic Arguments and Scientific Method.” In: Feyerabend, P. K.: Realism, Rationalism and Scientific Method: Philosophical Papers, Volume 1, pp. 146–160. Cambridge: Cambridge University Press.
Feyerabend, P. K. (2016): Philosophy of Nature, edited by H. Heit and E. Oberheim. Cambridge: Polity Press.
Lycan, W. (2001): “Moore against the New Sceptics.” Philosophical Studies, v. 103, pp. 35–53.
Musgrave, A. E. (1997): “The T-scheme Plus Epistemic Truth Equals Idealism.” Australasian Journal of Philosophy, v. 75, n. 4, pp. 490–496.
Putnam, H. (1981): Reason, Truth and History. Cambridge: Cambridge University Press.
Putnam, H. (1982): “Three Kinds of Scientific Realism.” The Philosophical Quarterly, v. 32, pp. 195–200.
Quine, W. V. O. (1966): “Posits and Reality.” In: Quine, W. V. O.: The Ways of Paradox, pp. 233–241. New York: Random House.
Sankey, H. (2014): “Scientific Realism and Basic Common Sense.” Kairos, v. 10, pp. 11–24.
Sankey, H. (2018): “A Dilemma for the Scientific Realist.” Spontaneous Generations: A Journal for the History and Philosophy of Science, v. 9, n. 1, pp. 65–67.
Sellars, W. (1963 [1991]): “Philosophy and the Scientific Image of Man.” In: Sellars, W.: Science, Perception and Reality, pp. 1–40. Atascadero: Ridgeview Press.
Shenker, O. (manuscript): “Common Sense and Scientific Realism: Completing the Cycle.”

II Approaches based on History and Scientific Realism

Anastasios Brenner

Evolving Realities: Scientific Prediction and Objectivity from the Perspective of Historical Epistemology

Abstract: Predictive power is one of the main arguments put forth in favor of scientific realism. Yet its precise characterization raises questions. From a logical point of view, a prediction consists in deriving a consequence from a universal law and initial conditions. But this does not appear to capture what motivates the scientist to adopt a new theory. Following the post-positivists, one may go on to stipulate further conditions: a prediction should be novel, stunning or dramatic. Such concepts draw attention to context. They express a move away from the logical analysis of theory structure toward the historical study of theory change. The temporal aspect of prediction is thereby taken into account. However, by bringing in the historical, sociological and psychological dimensions, the risk is that we dissolve altogether the notion we are trying to define. I am led to believe that something is lacking in the historical inquiry just outlined. To be sure, it has been carried out within a rather narrow compass and has not sufficiently taken into account the contributions of other relevant traditions. My aim, then, is to bring the school of historical epistemology to bear on this issue; in other words, to question afresh the evolving realities that science offers us.

Keywords: Forecast, historical epistemology, objectivity, scientific realism, structural realism, prediction, predictive power

1 Introduction

Predictive power is one of the main arguments put forth in favor of scientific realism. If we are able to infer a prediction from a particular theory (say, the bending of light rays in a gravitational field from the theory of relativity) and observe this phenomenon during a solar eclipse, then we are led to consider that the theory has been verified or confirmed. If the prediction is of the existence of an object, as with Le Verrier’s hypothesis of a planet beyond Uranus, followed by the observation of the planet Neptune, we are likely to attribute some reality to this hypothesis. The concept of prediction goes back to the very origins of Western thought. Let us quote a passage from Herodotus:



The war between the Lydians and the Medes had lasted five years. During an encounter in the sixth year, in the midst of the battle, day gave way to night. Thales of Miletus had indeed predicted [προηγόρευσε] this eclipse to the Ionians, for the year during which it took place. The Lydians and the Medes witnessing that night had replaced day, ceased to fight and were much more disposed to make peace. (Herodotus 1920, I, 74, p. 91, translation amended)

Here Herodotus is mocking the credulousness of the “barbarians”: solar eclipses can be predicted. In one sweep Thales founded philosophy and science in opposition to mythology. Prediction is intimately bound up with our idea of science. Yet it has not been a major focus of philosophy of science and has mainly been considered indirectly, in connection with testing or explanation. Moreover, the conception of prediction that has been by far the most dominant is a formal one: a prediction consists in deriving a consequence from a hypothesis. But this logical characterization does not appear to capture what motivates the scientist to adopt a new theory. A prediction should be surprising, informative, novel. Such adjectives draw attention to a temporal and psychological dimension. To be worthwhile, the type of statement we are interested in should carry something new with respect to antecedent knowledge. Thereby we are led to call on history, and the various factors that it involves: psychological, sociological, institutional. On this issue formal and historical methods come into conflict. The present study will provide an opportunity to call on historical research, new methods and different traditions. I shall thus draw on some recent directions of research variously termed historical epistemology, history of philosophy of science or integrated history and philosophy of science (Stadler 2014, pp. 747–767; Brenner 2015, pp. 201–214). My aim is then to question afresh the evolving realities that science offers us.

2 Some Difficulties in Twentieth-Century Theories of Prediction

Many languages distinguish, with regard to rational anticipation, between a strong sense and a weak sense. The following are some distinctions that will appear in the ensuing pages: prediction/forecast, prédiction/prévision, predizione/previsione, Voraussage/Voraussicht. Furthermore, the semantic field designating the different degrees and types of foretelling is quite rich: vaticination, prognostic, warning, etc.1 It remains, then, to clarify this varied usage.

1 For a philological analysis of these terms, see H. Pulte (2001) and M. Fischer (2001).



In interwar Vienna, the logical empiricists subjected the language of science to a rigorous logical analysis, thereby touching on the concept of prediction. The motivation was to reformulate causality in a satisfactory manner, avoiding metaphysics. Thus Moritz Schlick, in his article “Causality in Contemporary Physics”, declares that “the true criterion of regularity [of a law of nature], the essential mark of causality, is the fulfillment of predictions [Voraussagen]” (Schlick 1931, p. 150; 1979, v. 2, p. 185). In this spirit Karl Popper offered his well-known definition:

We have [. . .] two different kinds of statements, both of which are necessary ingredients of a complete causal explanation. They are (1) universal statements, i.e. hypotheses of the character of natural laws, and (2) singular statements, which apply to the specific event in question and which I shall call ‘initial conditions.’ It is from universal statements in conjunction with initial conditions that we deduce the singular statement [. . .]. We call this statement a specific or singular prediction [Prognose]. (Popper 1959, §12, p. 60)
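Popper’s definition can be put schematically. In standard first-order notation (a sketch for illustration, not Popper’s own symbolism), a universal law together with a statement of initial conditions entails the singular prediction:

```latex
\underbrace{\forall x\,(Fx \rightarrow Gx)}_{\text{universal law}}
\;\wedge\;
\underbrace{Fa}_{\text{initial conditions}}
\;\;\vdash\;\;
\underbrace{Ga}_{\text{singular prediction}}
```

Failure of the prediction then tells against the premises by modus tollens, which is what Popper’s falsificationist criterion requires.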

Let us note that in the German original Popper does not use the usual term Voraussage, or prediction, but Prognose, or prognosis; Popper is aware that his definition does not correspond exactly to the ordinary meaning of the word. A prediction is thus a singular statement, which is deduced from a set of premises. Popper’s aim is to avoid the notion of cause. This notion is too narrow: one can explain without furnishing the cause. Moreover, the notion of cause carries metaphysical connotations, and Popper is intent on avoiding the postulation of a principle of universal causality. It is, then, not surprising to see him adopt an apparently positivist thesis. He is compelled to do so on account of his criterion of demarcation between science and non-science as well as his logical approach. Popper is of course refining the conception given by Schlick. Singular statements are not equated with the protocol statements of the logical empiricists, directly connected with observation. They are basic statements, which can comprise lower-level theoretical terms. All that is required of them is that they make a judgment possible. The question then arises to what extent the agreement between prediction and experiment provides a refutation or corroboration. Questions of interpretation can arise here, which go against a verificationist and foundationalist account. But a decision must ultimately be possible, in order for the modus tollens to be effective. Popper’s criticism of verificationism and foundationalism did not prevent some logical empiricists, the so-called left wing of the Vienna Circle, from taking up his account of explanation and prediction. Such is the case of Carl Hempel. He readily adopts what he calls the deductive-nomological or covering-law model of explanation. He illustrates this model with the example of the discovery of Neptune:



The explanation has the character of a deductive argument whose premises include general laws – specifically, Newton’s laws of gravitation and of motion – as well as statements specifying various quantitative particulars about the disturbing planet. (Hempel 1966, p. 52)

He further comments that Le Verrier’s explanation “was strikingly confirmed by the discovery, at the predicted location, of a new planet, Neptune, which had the quantitative characteristics attributed to it” (Hempel 1966, p. 52). Let us note in passing that Hempel was interested in extending this model from the natural sciences to the social sciences. We have here one of the few points of agreement between Popper and the left-wing logical empiricists. Such an agreement could explain the long persistence of this formal characterization of prediction – a dogma of logical theories of scientific research. It is only fair to add that not all logical empiricists were exclusively attached to formal methods. This was but one direction within the movement, albeit the one that became dominant. There were, however, members of the Vienna Circle who resorted to historical considerations. For example, Otto Neurath devoted particular attention to prediction as an activity in its multiple and concrete aspects. He emphasizes in the following passage the need to call on several theories:

Under certain circumstances it must be possible to link the laws of all sciences with each other to make one definite prediction. One can only know whether a certain house will burn down if one can take into account how the building components behave, how the human groups behave who push on to fight the fire. The various scientific disciplines together make up the ‘unified science’. It is the task of scientific work to create unified science with all its laws. (Neurath 1983, pp. 53–54)

The formulation of a singular concrete prediction, owing to the complexity of reality, necessarily involves a network of knowledge. This is not unrelated to Neurath’s holism. The article in question marks a transition from phenomenalist language to physicalist language. Neurath was quick to integrate such ideas within the setting of the project of an Encyclopedia of Unified Science. But this project was to be left aside in favor of a more formalist approach. It is not difficult to realize that a purely formal characterization of prediction does not capture what motivates the scientist to adopt a new theory. The elegance of Popper’s definition is bought at the cost of an over-simplification. It is a case of explaining away, rather than a genuine account of scientific activity. The post-positivists were to question the pertinence of such a definition, while significantly calling for a historical approach in philosophy of science. It is well known that Thomas Kuhn rejected the cumulative vision of scientific development held by the logical empiricists. He was thus led to reformulate
the role of prediction within his theory of paradigms, for instance: “Newton’s success in predicting quantitative astronomical observation was probably the single most important reason for his theory’s triumph over its more reasonable but uniformly qualitative competitors” (Kuhn 1970, pp. 53–54). Kuhn was intent on taking into account the epistemic aspects of the historical context. What is important is the competition between families of theories. Paradigms are not merely axiomatic systems; they comprise a number of different elements: symbolic generalizations, exemplars, models and values. Prediction is no longer conceived as a formal operation, a nomological deduction, but as an epistemic value. In other words, what is important is predictive power or heuristic potential. Following Imre Lakatos, one must stipulate extra conditions with regard to Popper’s definition: a prediction should be novel, stunning or dramatic. Such concepts draw attention to the temporal and epistemic dimension of prediction. One could of course attempt to answer such criticism by resorting to temporal logic or epistemic logic. Thus a subscript could be added to the effect that the deduction from a given theory was made at time T1 and then confirmed at a later time T2. But this may not suffice: for example, it is likely that Neptune had been observed by chance prior to Le Verrier’s calculation, but not registered as a planet. Epistemic logic would allow us to distinguish between the belief held by some subject and the fact of the matter. But, in order to grasp the intricacy of actual scientific practice, one would be led to complicate the model greatly, and this would shatter the hopes placed in first-order logic in the early stages of analytic philosophy. It seems more promising, then, to turn resolutely to history.
By qualifying prediction as he does, Lakatos invites us to see it as a sort of judgment on the part of an expert, and this judgment involves a number of things: the predictive statement concerned presumably refers to something not available before, something that corroborates one of the various theories currently available, and something that leads us to posit new entities. Lakatos was drawing on the work of a number of scholars, such as Kuhn, Feyerabend and Hanson. These authors marked a move away from the logical analysis of theory structure toward the historical study of theory change. Let us consider Lakatos’s account in more detail. He gives a series of historical examples: Halley’s comet, the discovery of Neptune, the Einsteinian bending of light rays, and the Davisson-Germer experiment. These are instances of spectacular prediction. But Lakatos does not hesitate to extend the concept to other cases and to give it a role in his methodology of scientific research programs: “For already Copernicus’s rough model had excess predictive power over its Ptolemaic rival” (Lakatos and Zahar 1978, p. 188). Such a judgment must be understood as retrospective. Lakatos admits an evolution of thought: predictive power implies factual novelty, but this novelty is often recognized
only long after the event. In this respect he rallies to Zahar’s modified version of the methodology of scientific research programs. While Lakatos offers some stimulating observations, his analysis of prediction raises several difficulties: he gives a rational reconstruction rather than a genuine historical study. It is not clear from his examples when or how the instances of prediction he mentions came to be considered as such. In this respect, the account given does not pay sufficient attention to context. Furthermore, what is lacking is an examination of the underpinnings of the debates involved, that is, a historical consciousness or reflexivity. The objections raised here agree with the general criticism leveled at post-positivism: ambivalence with respect to history and insufficient consideration of practice.

3 Episode One: Prediction and Forecast in Early Modern Science

As the quote from Herodotus given earlier shows, “prediction” is a concept bound up with rationality since the inception of Western thought. The term we use derives from the Greek προαγορεύω, and has been employed since Antiquity to designate rational anticipations of natural phenomena, most notably in the field of astronomy. It nevertheless had to be reworked in order to acquire its modern meaning. On inspection, the idea of prediction was more frequent in philosophical texts than in scientific texts. Let us note that in the standard English translation of Ptolemy’s Almagest the term “prediction” stands for what the modern reader expects in this context, that is, lunar and solar eclipses; but the Greek text has ἐπίσκεψις, literally “examination”.2 Ptolemy characterizes what he is doing as mathematical examination or calculation, rather than as prediction, and his results are summarized in astronomical tables. When Copernicus takes up the issue of lunar eclipses several centuries later, in the context of his heliocentric theory, he strongly criticizes the hypotheses and results proposed by his predecessors. He does not, however, readily use the term prediction. Only in the title of the last chapter of book four do we find: “How to foreknow [ad praenoscendum] the duration of an eclipse” (Copernic 2015, p. 325; 1992, p. 223).3

2 Ptolemy (1998, p. 310, H527): “Correct prediction of lunar eclipses can be achieved merely by the above, if the computations are carried out accurately in the way described.” Cf. VI 9, p. 308, H523.
3 This may be a reminiscence of Cicero (1923, 1, 82; 2, 130).

Let us now turn to his followers. Galileo, in speaking
of his observations of sunspots by means of the telescope, writes in the Dialogue on the Two Chief World Systems:

It came about that, continuing to make very careful observations for many, many months, and noting with consummate accuracy the paths of various spots at different times of the year, we found the results to accord exactly with the predictions [predizioni]. (Galileo Galilei [1899–1909], p. 379; 1968, p. 352)

If “prediction” indeed appears here, it is not clear that it corresponds to the meaning we give the term today. The English translation is not always consistent: in another passage in which Galileo uses the same term, the translator puts “prophecies”, as Galileo is speaking of the astrologers ([1899–1909], p. 135; 1968, p. 110). Galileo had discovered the sunspots with the help of the telescope. He noted the changes in their motions and forms over time. For him these new phenomena were in accordance with the Copernican system. What he is referring to is conformity to expectations rather than a quantitative anticipation of the phenomenon. The concept of prediction would acquire its modern meaning only over time, as a result of the development of science: a specific statement, rigorously deduced from premises, stating the occurrence of a phenomenon with an assigned degree of approximation and dependent on experimental techniques. The term prediction remains rare in Newton’s Mathematical Principles of Natural Philosophy. I did not find any occurrences in the passages relating to what we consider today as his two major predictions, concerning the shape of the Earth and the return of comets. It is telling that we must turn to later texts, for example the commentaries added to the French translation by Émilie du Châtelet, written shortly before her death in 1749 and published in 1756:

The comet of 1680 having a period of such long duration [575 years], its return should come about only by the year 2255, which constitutes for us a prediction [prédiction] of slight interest. But there is another comet whose return is so near that it promises to be a very pleasing spectacle for the astronomers of our time: it is the comet that appeared in 1682, which presented circumstances so similar to those of the comet that appeared in 1607 that one cannot refrain from believing that it is but one and the same planet, accomplishing its revolution in 75 years around the sun.
If this conjecture is verified, we shall see the same comet reappear in 1758, and this will be a very flattering moment for proponents of Sir Newton. (Du Châtelet [1789] 1990, p. 114)

Châtelet’s commentary, written more than half a century after Newton’s opus, gives a clearer and more explicit formulation. The context was that of the battle between two systems of the world, that of the Cartesians and that of the Newtonians. It shortly preceded the return of Halley’s comet. When the event occurred, it came to be seen as a decisive confirmation of Newton’s theory.



We thus see that a consequence derived from a theory, the result of a series of calculations, finally came to be considered as a successful prediction.

4 Episode Two: Prediction and the Development of Philosophy of Science

After examining a case of prediction in early modern science, I shall now turn to the manner in which philosophers of science came to conceive its role. As philosophy of science developed into an autonomous discourse, attention was devoted to the significance of prediction in general. According to Auguste Comte, the ability to provide precise predictions is the hallmark of the scientific mind. He offers the case of eclipses and transits of the planets as a prime example:

The exactness and rationality of their forecast [prévision] has always been the obvious and decisive criterion according to which the effective perfection of astronomical theories has become easily ascertainable [. . .]. Because such a result necessarily supposes a real and deep knowledge of the geometrical laws that the two or three celestial bodies involved in the phenomenon follow in their motions [. . .]. What is concerned is exclusively the truly mathematical predictions [prédictions], which only began with the immortal school of Alexandria. (Comte 1998, Leçon 23, p. 375)

Let us note in passing that Comte perceives in Ptolemy’s determination of eclipses and planetary motions the commencement of scientific prediction. He goes on to characterize science not as a search for causes, be they final causes or productive causes, but rather as the capacity to anticipate future events. His positivism was attentive to the social usefulness of science, and it placed importance on rational anticipation. It remained, however, to provide an understanding of the role prediction plays with regard to theory. I shall take the case of two thinkers of the turn of the 20th century who were engaged in a controversy over realism: Pierre Duhem and Émile Meyerson. The former was suspicious of atomic hypotheses; the latter embraced them wholeheartedly. One could extend the enquiry to several other similar debates: Mach versus Planck or Boltzmann versus Ostwald. Let us begin by recalling Duhem’s well-known definition of theory: “A physical theory is not an explanation. It is a system of mathematical propositions, deduced from a small number of principles, which aim to represent as simply, as completely, and as exactly as possible a set of experimental laws” (Duhem 1981, p. 24; 1954, p. 19). A scientific theory is no longer conceived as the explanation of deep causes, but as an abstract representation of laws. Duhem goes on to explain that theory consists in a symbolic construction
characterized by four operations: the definition of concepts, the choice of hypotheses, the mathematical development, and the comparison with experiment. One may perceive here an intuition of the standard view of theories, which the logical empiricists would further develop: a theory is an axiomatic system, a set of propositions deductively linked, separated into axioms and theorems, its empirical interpretation being provided by certain operations such as measurement. Duhem does not hesitate to compare his conception of science with the attitude of the ancient Greeks with respect to their astronomical hypotheses. What the Greeks had to say about astronomy, the main example of a mathematical study of nature at the time, can be extended to the whole of physics. Thus Duhem writes:

Since Antiquity there have been certain philosophers who have rightly recognized that physical theories are by no means explanations, and that their hypotheses were not judgments about the nature of things, only premises intended to provide consequences conforming to experimental laws. (Duhem 1981, p. 54; 1954, p. 39, translation amended)

This attitude is summarized in the phrase “to save the phenomena”, which can be traced back to Plato. Representative theories have accompanied science since the very beginning, no less than explanatory theories. To claim that a physical theory should merely save the phenomena amounts to asserting that a theory does not aim to give an explanation of the intimate nature of things, but to provide an abstract representation of empirical regularities. This revival of the Platonic formula has become a leitmotiv of contemporary philosophy of science. In contradistinction to Duhem, Meyerson asserted the explanatory aim of science. The search for causes, even if the particular hypotheses are discarded, is beneficial for science. He went on to show that underlying Duhem’s definition is a positivistic trend. Duhem, without admitting it, took up Comte’s views, which had come under attack because of the undue limitations they imposed on scientific research. Comte had gone too far in excluding causes; scientific explanation does not reduce to the formulation of laws. Science continues common knowledge in its aspiration to provide an explanation of things, by filling in the gaps due to our ignorance, by supplementing the visible with the invisible. To be sure, Duhem expounds his conception in two stages: the goal of a physical theory and the aim of theoretical physics. The latter implies a tendency towards classifications that become more and more natural. Duhem thus does not exclude all considerations of reality. The question then arises what is, in the final analysis, Duhem’s intimate conviction. Commentators have interpreted his position either as a reformulated positivism or as a structural realism. Duhem is perhaps best characterized as a second-order realist, whose position comes out clearly in contrast with Meyerson’s. In this sense he is proximate to
Poincaré. Duhem is cautious in ascribing reality to theoretical abstractions, and particularly to the hypotheses of the atomic theories of the time. Meyerson places his realist claims within a general view of scientific evolution. The mind strives for identity. But nature submits itself only in part to rational requirements, and the scientist must continually start over again, adjusting his or her categories to the complexity of reality. This tension between the real and the rational generates a growth of knowledge, a direction of motion, in sum, scientific progress. Against positivists of various strands, Meyerson claims that one cannot eliminate explanatory theories from science. Science carries along metaphysical elements: the belief in the reality of the external world and an ontology of particular objects. According to Meyerson, the relations with which science is concerned are relations of things, taking us beyond both sensations and phenomena. What, then, are we to make of predictions? They enter Duhem’s conception in the second stage of the appraisal of theory: the evolution of science toward more and more natural classifications. Some of Duhem’s remarks on the topic are negative: one should be careful not to infer too much from a successful prediction. According to the famous Duhem-Quine thesis,

The physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his forecasts [prévisions], what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. (Duhem 1981, p. 284; 1954, p. 187; translation amended)
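The logical form of this holist point can be sketched as follows (a schematic rendering for illustration, not Duhem’s own notation). If the prediction P follows only from a whole group of hypotheses, a failed prediction refutes the conjunction, which by De Morgan’s law is equivalent to a disjunction of negations, leaving open which hypothesis is at fault:

```latex
(h_1 \wedge h_2 \wedge \cdots \wedge h_n) \rightarrow P, \qquad \neg P
\;\;\vdash\;\; \neg\,(h_1 \wedge h_2 \wedge \cdots \wedge h_n)
\;\equiv\; \neg h_1 \vee \neg h_2 \vee \cdots \vee \neg h_n
```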

Duhem is making a logical point: the negation of a conjunction of hypotheses is equivalent to a disjunction of their negations. As an experiment involves a large number of hypotheses drawing on several branches of science, the lesson of experimental testing is complex. Duhem follows this up with a rejection of the crucial experiment, understood as a verifying instance. In particular, one should not draw any conclusions as to the reality of the entities of the successful theory. While Duhem grants that Foucault’s experiment on the speed of light in different media provides good reasons to favor Fresnel’s wave theory as opposed to Newton’s corpuscular theory, he criticizes the following inference: “The debate is over; light is not a body, but a vibratory wave motion propagated by ether” (Duhem 1981, p. 287; 1954, p. 189). There is no reason to conclude that the ether exists. Duhem is careful to analyze the nature of a prediction: it is a proposition deduced from a theory. The question then arises of the relation between the deduced proposition and the postulates placed at the beginning. In physics, owing to the conceptual and mathematical intermediaries, the consequences are far removed from the hypotheses. Duhem specifies that the mathematical development is autonomous. This is one of the meanings of convention: the
hypotheses are freely chosen, and, one could add, freely manipulated. Thus a physicist speaks of infinitesimal quantities, variations from zero to infinity, perfect gases, ideal shapes, etc. Only the consequences of a theory are concerned with reality. As Duhem writes: “When the logical edifice has reached the top floor, it becomes necessary to compare the set of mathematical propositions, which are obtained as conclusions from these long deductions, with the set of experimental facts” (Duhem 1981, p. 313; 1954, p. 206). There is however a positive side to prediction, which Duhem brings out most notably in a section under the title “Theory preceding [devançant] experiment” (Duhem 1981, p. 36; 1954, p. 27; translation amended). Successful predictions reinforce our belief in the natural character of scientific classifications. In particular, they invite us not to rest content with a conception of theory as a purely artificial system. Duhem goes on to give a historical example:

From the principles put forward by Fresnel, Poisson deduced through an elegant analysis the following strange consequences: If a small, opaque, and circular screen intercepts the rays emitted by a point source of light, there should exist behind the screen, on the very axis of this screen, points which are not only bright, but which shine exactly as though the screen were not interposed between them and the source of light. (Duhem 1981, p. 39; 1954, p. 29)

This prediction seemed “strange”, “contrary to expectations”, “unlikely”. The success was then all the more convincing. Fresnel’s theory had gone through a robust test. Duhem thus brings the practice of predicting into line with his conception of theory as representation. The predictive power of theories gives meaning to the pursuit of science. Meyerson argues in the opposite direction from Duhem: mechanical explanations are useful, even if tentative. According to him, successful predictions provide an argument in favor of realism; they give reasons for believing in the reality of the entities postulated by theories. In support of his conception he recalls the analysis Duhem gave of Poisson’s spot. To this he adds a more recent example: Kekulé’s hypothesis on the structure of carbon compounds. Not only has this been confirmed by several discoveries, but it has opened up whole new fields of inquiry (Meyerson [1908] 1951, p. 449). This allows him to offer his own position: “Ontology is part and parcel of science itself and cannot be separated from it” (Meyerson [1908] 1951, p. 439). The debate between Duhem and Meyerson hinges on atomism. Let us turn again to the section on “Theory preceding experiment” in The Aim and Structure. Duhem’s agnosticism regarding atomic hypotheses does not prevent him from using atomic notation – the developed formula, with its full array of atomic letters and bonds. He writes:


Anastasios Brenner

The relations of analogy and derivation by substitution it establishes among diverse compounds have meaning only in our mind; yet, we are convinced that they correspond to kindred relations among substances themselves, whose nature remains deeply hidden but whose reality does not seem doubtful. (Duhem 1981, p. 38; 1954, p. 29)

What is important to underline is that Duhem reasons with respect to relations. This line of development gave rise to a position that has come to be characterized as structural realism (Worrall 1989): reality is to be ascribed not to things, but to the underlying structures tentatively captured in the formula. Duhem prefers to proceed by formal analogies rather than mechanical models, reserving his conclusions for a future time when the confirmation of predictions will allow for a definitive answer. Meyerson claims that Duhem’s objections to mechanical models are unjustified and writes: “It is obvious that even from the point of view of experimental science most rigorously considered, we are well advised to pursue to the end causal deductions, even if they appear most abstract” (Meyerson [1908] 1951, p. 469). He justifies his view at length, defending the heuristic power of atomism. Concerning the relation of science to reality, Duhem appears to grant too little, and Meyerson to claim too much. Atomic theories proved to be remarkably fruitful in the early years of the 20th century, but they underwent a deep transformation. The relation of quantum theory to the earlier mechanical interpretations has been construed in various ways. Duhem and Meyerson have had their respective followers up to the present time. The former influenced the logical empiricists and van Fraassen. Meyerson inspired Einstein, Popper and Zahar.

5 New Directions and the Predictive Potential of Science

As post-positivism receded from the scene in the 1980s, new directions of research arose. These took up afresh some of the difficulties left pending and integrated philosophy of science and history of science more closely. Bas van Fraassen adopted the Platonic formula, to which Duhem had given currency, as a shorthand for his own constructive empiricism.4 More recently he returned to the question in Scientific Representation, in which he elaborated on his conception

4 This lies at the heart of Bas van Fraassen’s conception. First presented in the article “To Save the Phenomena” (1976), it was later incorporated as chapter 3 of his book The Scientific Image (1980).



putting it into historical perspective. He believes that quantum mechanics, in the interpretation given by the Copenhagen school, prompts us to reject what he calls the appearance from reality criterion, that is, the requirement that physics must explain how the appearances are produced in reality, an attitude he attributes to Galileo. He then goes on to present his empiricist construal: “It is incumbent on the theory only to predict what its appearances will be like” (van Fraassen 2008, p. 308). And he continues:

The reality to which [theoretical science] is accountable is only the observable part of the world, and that implies for us that what it is in practice directly accountable to are the appearances – the outcomes of the measurements and observations that are actually made. (van Fraassen 2008, p. 308)

In Representing and Intervening, Ian Hacking forcefully brought the importance of practice to the attention of philosophers. He distinguishes two ideas of reality, the one pertaining to representation and the other to intervention. This leads him to state: “None of the traditional values – values still hallowed in 1983 – values of prediction, explanation, simplicity, fertility, and so forth, quite do the job” (Hacking 1983, p. 146). Hacking is not relinquishing realism, but rather arguing in favor of a different perspective, one that puts greater emphasis on the active as opposed to the contemplative character of science. This was to provide a central theme of his philosophical reflection. He takes up the issue of realism more directly in The Social Construction of What? (Hacking 1999). In this book he examines the impact of the debates stirred up by constructivist positions since the 1980s. The originality of his account is to relate realism to other relevant questions. He locates three “sticking points” that underlie the polemic between realists and social constructivists: contingency, nominalism and explanations of stability. With regard to this set of questions Hacking goes on to classify the different conceptions possible. Retrospectively, Kuhn comes out as a wholehearted constructivist. But Hacking believes it possible, without rejecting the results provided by historical and sociological studies of science, to offer an intermediate position. His own responses to the three questions raised above are balanced overall. In particular, concerning stability in science, he claims to keep an equal distance from realism and antirealism. This does not prevent him from inclining toward a moderate nominalism on the one hand and a moderate inevitabilism on the other.
In other words, there are grades of realism, and one can formulate a conception of science that makes sense of a robust fit and a resistance offered by the world, without making excessive commitments concerning the entities posited by today’s science.



Hacking’s suggestions have encouraged some thinkers to delve more deeply into the historical material. Thus, Daston and Lunbeck encounter the question of prediction as part of the activity of observing:

The growing concern between observations and table-based predictions explains in part the eagerness with which Latin Christian scholars embraced the astral science then flourishing in the Muslim world. This knowledge was part of a much broader flow of learning from East to West that began in the late tenth and accelerated through the twelfth century, borne on a tide of texts, instruments, and travelers from Muslim lands, and provided the tools for a dramatic change in observational practices in Europe. (Daston and Lunbeck 2011, p. 23)

The search for better anticipations stimulated the development of observation in a way that came to characterize European science. Such an interpretation provides an incentive to focus on the use of prediction in early modern science, as was developed in part two. Daston and Peter Galison address contemporary issues more directly in Objectivity. They claim to subject the dimension of intervention to further study. Although prediction is not the main focus, one can infer from their approach several consequences in this regard: “By the early twenty-first century, nanomanipulation, suspended between science and engineering, sidestepped the long-standing struggle between representing and intervening” (Daston and Galison 2007, p. 392). Anticipation, broadly construed, is an essential feature of scientific practice. This practice has been highly successful, thus providing arguments in favor of the robust fit of science and reality. But at the same time the manner of reaching these results has become more and more complex. There has also been a shift in focus: the challenge lies in extending anticipations to new areas of science as well as formulating new procedures and methods (see, for example, Gonzalez 2015).

6 Conclusion

Four points have been raised in the previous pages: the difficulties of 20th-century ideas concerning prediction, the use of prediction in early modern science, the philosophical debate of the turn of the 20th century, and new trends with regard to prediction. What I have been aiming at could be labeled a discriminating and pluralistic realism. Scientific entities are of various sorts and of differing ontological status. Some scientific hypotheses materialize into objects that fit in nicely with our common sense ontology: Le Verrier’s hypothesis turned



out to be a planet, Neptune, similar to those already known, and easily observed even with an amateur telescope. A second sort of entity can come to acquire a place in our everyday conceptual scheme. I am thinking of useful applications such as radiation treatment for cancer. There is no reason to limit ourselves to directly perceived, pre-industrial objects. Such applications may alter our lives. They have concrete consequences, even ethical import, that should be taken into account by the philosopher. There are still other entities that may appear further removed from our ordinary experience. These are to a greater degree relative to a certain theoretical and experimental state of affairs. But their relativity may be merely provisional, new applications bringing them into the realm of the familiar. Our ontologies have to be continually updated. One consequence of this line of reasoning is that the philosopher of science should not separate the epistemic aspect from the ethical and political. For these different aspects taken together make up our ontologies. Returning to the debate between realists and anti-realists, which has been going on for at least a century, we note a twofold danger: strict nominalism eliminates entities essential for scientific research; exuberant realism multiplies dubious and superfluous entities. By calling on a historical epistemology I wish to draw attention to a philosophical tradition that has long been neglected by analytic philosophers. Several arguments deployed in this tradition deserve to be taken into consideration, if only because a philosopher should address first and foremost those reasons that appear to go against her or his views. Furthermore, I believe that we can discover in historical epistemology insights that may help us to provide a more complete and coherent account of genuine scientific practice. Predictive power is a rational value. It is liable to vary during the course of history.
Prediction also refers to a complex act of judging. We thus need at our disposal several techniques of investigation, including those derived from history.

References

Brenner, A. (2015): “Is There a Cultural Barrier between Historical Epistemology and Analytic Philosophy of Science?”. International Studies in the Philosophy of Science, v. 29, n. 2, pp. 201–214.
Châtelet, É. Du ([1789] 1990): “Exposition abrégée du système du monde.” In: Newton, I.: Principes mathématiques de philosophie naturelle, v. 2, part 2, pp. 1–116. Paris: Gabay.
Cicero (1923): De divinatione. English translation by W. A. Falconer. Cambridge, MA: Harvard University Press.



Comte, A. (1998): Cours de philosophie positive, 2 v. Paris: Hermann.
Copernic, N. (2015): De Revolutionibus Orbium Coelestium. Paris: Les Belles Lettres. English translation by Rosen, E. (1992): On the Revolutions of Celestial Orbs. Baltimore: Johns Hopkins Press.
Daston, L. and Lunbeck, E. (eds.) (2011): Histories of Scientific Observation. Chicago: University of Chicago Press.
Daston, L. and Galison, P. (2007): Objectivity. New York: Zone Books.
Duhem, P. (1981): La Théorie physique, son objet et sa structure. Paris: Vrin. English translation by Wiener, P. (1954): The Aim and Structure of Physical Theory. Princeton: Princeton University Press.
Fischer, M. (2001): “Voraussicht.” In: Ritter, J., Gründer, K. and Gabriel, G. (eds.): Historisches Wörterbuch der Philosophie, v. 11, pp. 1180–1182. Basel: Schwabe.
Galilei, G. (1899–1909): Dialogo sopra i due massimi sistemi del mondo. In: Galilei, G.: Le Opere di Galileo Galilei, v. 7. Edited by Favaro, A. Florence: Barbera. English translation by Drake, S. (1968): Dialogue Concerning the Two Chief World Systems. Berkeley: University of California Press.
Gonzalez, W. J. (2015): Philosophico-methodological Analysis of Prediction and its Role in Economics. Dordrecht: Springer.
Hacking, I. (1983): Representing and Intervening. Cambridge: Cambridge University Press.
Hacking, I. (1999): The Social Construction of What? Cambridge, MA: Harvard University Press.
Hempel, C. (1966): Philosophy of Natural Science. Upper Saddle River: Prentice Hall.
Herodotus (1920): The Persian Wars. English translation by A. D. Godley, v. 1. Cambridge, MA: Harvard University Press.
Kuhn, T. (1970): The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lakatos, I. and Zahar, E. (1978): “Why Did Copernicus’s Research Program Supersede Ptolemy’s?” In: Lakatos, I.: The Methodology of Scientific Research Programmes, pp. 168–192. Edited by Worrall, J. and Currie, G. Cambridge: Cambridge University Press.
Meyerson, É. ([1908] 1951): Identité et réalité. Paris: Vrin.
Neurath, O. (1983): “Physicalism. The Philosophy of the Viennese Circle.” In: Neurath, O.: Philosophical Papers: 1913–1946, pp. 48–51. Translation by Cohen, R. S. and Neurath, M. Dordrecht: Reidel.
Popper, K. (1959): The Logic of Scientific Discovery. New York: Harper and Row.
Ptolemy, C. (1998): Almagest. English translation by G. J. Toomer. Princeton: Princeton University Press.
Pulte, H. (2001): “Voraussage; Vorhersage; Prognose.” In: Ritter, J., Gründer, K. and Gabriel, G. (eds.): Historisches Wörterbuch der Philosophie, v. 11, pp. 1146–1166. Basel: Schwabe.
Schlick, M. ([1931] 1979): “Die Kausalität in der gegenwärtigen Physik.” Die Naturwissenschaften, v. 19, pp. 145–162. English translation by Heath, P.: Philosophical Papers, v. 2, pp. 176–209. Dordrecht: Reidel.
Stadler, F. (2014): “History and Philosophy of Science: Between Description and Construction.” In: Galavotti, M. C., Dieks, D., Gonzalez, W. J., Hartmann, S., Uebel, Th. and Weber, M. (eds.): New Directions in the Philosophy of Science, pp. 747–767. Dordrecht: Springer.
van Fraassen, B. (1980): The Scientific Image. Oxford: Clarendon Press.
van Fraassen, B. (2008): Scientific Representation: Paradoxes of Perspective. Oxford: Clarendon Press.
Worrall, J. (1989): “Structural Realism: The Best of Both Worlds.” Dialectica, v. 43, nn. 1–2, pp. 99–124.


Thomas Nickles

Do Cognitive Illusions Make Scientific Realism Deceptively Attractive?

Abstract: Affirming the consequent is a well-known fallacy that leads naïve people to believe that a correct prediction shows that they are “on the right track,” the track of truth. Here I outline fifteen subtler forms of deception that I term ‘cognitive illusions’, intellectual perceptions that make strong realism seem more plausible than it is. Like affirming the consequent, some of these items are used as positive arguments by proponents of strong realism. I do not claim that exposing these illusions amounts to a decisive refutation of strong realism, let alone weaker forms of realism, some of which I can accept. But appreciating their force makes strong realism less attractive.

Keywords: Strong realism, weak realism, textbook realism, nonrealism, cognitive illusions, historicism

In this chapter I identify several (mainly) socio-psychological temptations to become strong realists. In my view, they encourage scientists and science analysts to be strong realists or to slide from weak realism into stronger realism. Although some of the items below can be construed as arguments in favor of strong realism, none of them are convincing in my judgment. Since they look more convincing than they are, I lump them under the term cognitive illusions.1 I identify fifteen or so of them here, without presuming to be comprehensive. Rather than make a contribution to the increasingly epicyclic discussion of inference to the best explanation (the no-miracle argument, etc.), I take the different tack of asking whether psychological and sociological factors as well as logical deception tempt all of us to be strong realists.

1 There are many other cognitive biases not covered here, such as confirmation bias. I prefer to speak of cognitive illusions in order to emphasize the perspectival aspect.

Note: Thanks to Wenceslao J. Gonzalez for the invitation to contribute. Various parts of this material were presented at: a conference on scientism organized by Massimo Pigliucci and Maarten Boudry, the 2014 &HPS Conference in Vienna, the University of Minnesota, and Indiana University. I am indebted to the participants for discussion.



The idea of a cognitive illusion is sometimes explained by analogy with that of a visual illusion, but that is misleading. For the inferential “mechanism” behind cognitive illusions can be understood so as to make the illusion disappear. Cognitive illusions can be avoided via the right sort of careful, critical thinking. By contrast, visual illusions persist even when we know they are illusions, and even when we think we understand the heuristic mechanism underlying them.2 We carefully measure the length of the arrows in the Müller-Lyer illusion and verify that they are equal, yet we still perceive the illusion (Gigerenzer 2000, ch. 12). Nonetheless, cognitive illusions do involve a sort of intellectual perception, or deception. Please note that I am not restricting the term ‘cognitive illusion’ to its use in the Kahneman and Tversky “cognitive biases” program (Kahneman, Slovic, and Tversky 1982).3 Also note that I am not restricting it to individual psychological mechanisms. Most of the illusions are produced, in part, or at least reinforced, by social contexts of learning and argument. There are other social psychological factors that also tempt us to be strong realists. For example, we feel social pressure to resist quacks and science deniers by claiming to have the truth, “the facts,” rather than by claiming, pragmatically, to have only the most reliable results yet available. And we are deeply curious about the nature of various domains of reality or we would not be in our business. We should all like to “see Plato’s Forms” before we die. Hence the temptation to regard a highly successful scientific achievement as telling us the ultimate truth about the world – or at least as giving us a glimpse of the promised land from afar. Some of the illusions described below play upon this desire, this hope. 
By weak realism I mean what some thinkers call intentional realism, the family of positions according to which scientists and commentators attempt to give true descriptions and explanations of some domain of reality. For such realists there may be other kinds of intelligibility problems (see below), but not, specifically, those deriving from narrow empiricist theories of meaning. Weak realists emphasize that their postulations are conjectural and, hence, that they may be superseded by ongoing research. The various versions of what I am calling strong realism hold that we today know that specific, mature scientific theory complexes are true, or nearly true, and that the theoretical terms of these theories succeed in referring to real

2 My neuroscientist colleague, Gideon Caplovitz, and his students have used such knowledge to create several new, prize-winning visual illusions.
3 For present purposes, I do not include the better-known Kahneman and Tversky (1982) biases (anchoring, availability, base-rate, etc.). For a challenge to some of them, see Gigerenzer (1991a).



entities and processes in nature. Psillos (1999) is an excellent account and defense of strong realism. I want to emphasize the referential point – that the scientific experts know precisely to what sorts of entities or processes their theoretical terms refer. As I understand it here, strong realism, where major theories or models are concerned, is a “deep” realism that claims to reveal the secrets of nature. The implication is that a mature theory must be intelligible to experts in the sense that they genuinely understand what is “really” going on at the theoretical level.4 Mathematical derivation of predictions and “explanations” from laws, theories, or models is not enough insofar as the latter are not fully understood. Peter Lewis (2016) writes that metaphysicians should pay attention to quantum mechanics as a revisionary empirical constraint on our usual, intuitive ways of thinking about the world, not because it provides final answers to our metaphysical questions:

The theory of quantum mechanics is notoriously difficult to interpret, to such an extent that several prominent physicists and philosophers have denied that it provides us with any description of the physical world at all. ... The notable thing about quantum mechanics is that it is remarkably silent about what the basic mathematical structures of the theory represent. (Lewis 2016, introduction)

And the major interpretations available are utterly incompatible with one another. Some of them reside almost at the folk science level. (Meanwhile, Lewis himself remains a sort of intentional realist.) My reason for this emphasis is that anything deserving the name ‘scientific realism’ must include the idea that the relevant account of the world is intelligible to experts. This was assumed to be the case in formulations of traditional scientific realism, but this condition seems to be minimized in some versions of (allegedly) strong realism today. Since the experts themselves disagree wildly about what the theory literally tells us about the constituents of reality, quantum mechanics and its later elaborations would seem to undermine rather than to support strong realist claims. That is why I placed the word ‘explanation’ in scare quotes above, for if there is no consensus among experts about what is

4 J. D. Trout (2002) rightly distinguishes genuine understanding from the illusory, subjective “sense of understanding” that often derives from hindsight and overconfidence biases or simply from familiarity. (Below I extend these biases to include illusions of probable truth. Truth is already implicated in the idea of genuine explanation, so not much extension is needed.) Henk de Regt (2004) takes Trout to reject the relevance of understanding altogether. Without entering into this controversy, I’ll simply say that we need more work on scientific understanding and intelligibility, and on the extent to which we humans can usefully get beyond “folk” understanding. See also de Regt et al. (2009), Trout (2005), and Rosenberg (2018).



really going on, how can they claim to have produced genuine explanatory understanding? Isn’t it misleading to retain the term ‘scientific realism’ for something so different from “good old” scientific realism? Popper’s realism neatly illustrates the difference between weak and strong realism. Popper was an intentional realist; but, as everyone knows, he vehemently denied that we can ever know whether a general theory is true or even probable (Popper 1963, ch. 1). Popper’s fallibilism was too extreme. But, then, isn’t the professed fallibilism of strong realists also too weak? Having distinguished weak from strong realism, I mention what we may call textbook realism. This is the strong, deep realism of scientific textbooks, a realism that presents the successes of the field in question simply as established facts, an uncritical realism that naive students are invited to embrace. After all, both the textbook and the instructor are authorities on the subject, aren’t they? Meanwhile, structural realists are (or can be) strong realists who reject deep realism as quasi-metaphysical and who are content to say that it is the formal equations that are (approximately) true. Augustin Fresnel’s equations of the 1820s concerning the behavior of light have held up, despite the fact that we no longer believe that light consists of waves in an ether medium (Worrall 1989). For structuralists, mathematical invariance under deep scientific change is a mark of (approximate) truth. It is the mathematical structure that counts – Fresnel’s theory is Fresnel’s equations, not their interpretation in terms of richly postulated entities and processes. The only sort of explanation left for structuralists is the shallow one of computing empirically observable patterns from the minimally interpreted equations. The deep descriptive and explanatory understanding of traditional scientific realism is no longer taken to be a realizable scientific goal. 
As Worrall notes, the great French mathematician, physicist, and philosopher Henri Poincaré (1952) is the precursor of today’s structural realism. Structural realism is not easy to apply to mathematically underdeveloped fields, but that difficulty is not my present concern. Where do I myself stand on these issues? The reader will already have gathered that I am not a strong realist where deeply theoretical issues are at stake. My own position is agnostic rather than realist in the case of deep theoretical claims.5 Since my focus is on understanding scientific practice and the epistemically

5 For most of my career, I ignored the realism debate. I saw it both as an overreaction to first-generation social studies of science and the bugaboo of relativism, and also as irrelevant to understanding how the sciences actually work in the trenches. Increasing interest in policy issues and other factors have dragged me (perhaps naively) into the debate. In some moments I see the debate as almost theological – strong realists defending the faith against the satanic opposition. “Oh, I could never believe in nonrealism, for I should be wretched if I did!”



warranted decision-making that can actually be made there (as opposed to the highly idealized, information-rich situations that many philosophical discussions presuppose), it makes little difference to scientific practice and to our explanatory understanding of it whether or not the high-level theoretical claims involved are true. (However, we do sometimes have good reasons for claiming them to be false, which amounts to a strong, negative realism that tells us what reality is not.) It can make little or no difference to research decisions, since no one knows the truth. However, that does not prevent our evaluating their current practical import and their heuristic fertility for ongoing research. Nonrealism, as distinct from antirealism, is another label for my position. My position is also local in the sense that, in areas where we have gained a good deal of experimental access and theoretical success as well as intelligibility, I can be a realist. In some cases, nonrealism gives way to degrees of realism in a given domain, as science progresses. While I am skeptical that we already know the true answers to our deepest questions about the universe, this does not mean that science fails to progress or that scientists need not be truthful. There is an important distinction between truth and truthfulness (Williams 2004). We should be honest, objective, and open to criticism of our decisions and positions, and energetic in our search for evidence for and against our claims. (That’s what I’m concerned with in this paper.) Those who decry nonrealism and antirealism for playing into the hands of postmodern political movements (for example) miss this point. Those who say that we must be scientific realists in order to counter these post-factual and anti-scientific developments commit the very sin that they preach against. For they are compromising truthfulness for socio-political reasons. Besides, realists’ talk of truth has hardly slowed the science deniers.
Both the science deniers and the strong realists are hung up on absolute truth and falsity, whereas what normally matters in deep scientific work is practical adequacy and future promise as a guide to further research. Concern with absolute truth carries the foundationist temptation that blocks the road to inquiry. The remainder of the paper consists of a selection of candidate illusions. The items on the list in the next section overlap and interrelate in various ways, sometimes by way of mutual reinforcement.6

6 Earlier statements of some of the illusions appear in Nickles (2016, 2017a and b, 2018a and b).



1 Some Cognitive Illusions

1.1 The Overconfidence Illusion

We all tend to rate ourselves above average, and we academics way above average. But, of course, it is impossible that nearly everyone is above average. Psychological research (e.g., Fischhoff, Slovic and Lichtenstein 1977) shows that most people are overconfident of their knowledge, ability, and degree of control. In planning, we tend to underestimate the pitfalls, including the unintended consequences. In decision-making, even technically informed people often confuse uncertainty with risk, as economists understand those terms. In the field of rational decision theory or choice theory, risky decisions are those that are a matter of probability, as in your chance of winning a fair lottery. It is assumed, for a given situation, that all action choices are laid out, and outcome probabilities known for all possible states of the world. In addition, it is assumed that the agent making the decision has a well-defined preference ranking, i.e., a distribution of utilities over the possible outcomes. Ordinary people rarely have such information available. Neither do scientists – the less so at the frontiers of research. Typically, some (or many) of the possible states of the world are unknown, not to mention the probabilities. And the preference rankings are rarely fully defined and perhaps oddly time-sensitive. There is an old pragmatic maxim that applies here: “There are two ways to solve a problem. You can either get what you want, or you can want what you get.” Where the conditions of perfect decision-making information are absent, we are thrown into the regime of uncertainty. As the name suggests, the pitfalls here are many – and worse than those of mere risk. Overconfident people underestimate both risk and uncertainty, sometimes by thinking that uncertainty itself is only risk concerning the probability of already envisaged outcomes.
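The contrast between risk and uncertainty drawn above can be stated in standard decision-theoretic notation (textbook symbols, not the author’s):

```latex
% Decision under risk: actions A, states S, known probabilities P and utilities u
% over the outcomes o(a, s); the agent maximizes expected utility:
EU(a) = \sum_{s \in S} P(s)\, u\big(o(a, s)\big), \qquad a^{*} = \operatorname*{arg\,max}_{a \in A} EU(a)
```

Under uncertainty, by contrast, part of this apparatus is missing – the state space $S$ is incompletely known, the probabilities $P$ are unavailable, or the utilities $u$ are not fully ranked – so the maximization is not even well posed.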
It would be naïve to suppose that scientists and science analysts usually escape this cognitive illusion. Even the brightest people underestimate the difficulty of completing major projects. The relatively short history of artificial intelligence provides a series of sobering examples. Many announced scientific and technological breakthroughs end up as minor advances, or they quietly fade away. As with reference letters, it is wise to discount such announcements by at least 30%. What we may call the subjective certainty illusion is a special case of the overconfidence illusion. And the a priori illusion is a special case, in turn, of the subjective certainty illusion, where a supposed faculty of Reason allegedly gives us the ability to scan the space of all logical possibilities and impossibilities in a completely reliable, historically decontextualized manner. Reason is supposed to give us god-like powers to escape from history.


Thomas Nickles

There are many conceptual and technical breakthroughs in the history of science, mathematics, and technology, where what was once declared physically or even logically impossible was later shown not to be (and vice versa). For instance, general relativity, today's leading space-time theory, tells us that space-time is non-Euclidean, whereas in 1800 almost no one thought that non-Euclidean geometry was even logically possible, let alone true of our universe. Human logical intuitions are not particularly reliable, nor are our intuitions about probabilities, especially when not expressed as relative frequencies (cf. Gigerenzer 2014). In scientific methodology this leads to intuition problems concerning the so-called "catchall" hypothesis – roughly, the set of all hypotheses in a given domain that have not yet been considered or even thought of. On what basis can we conclude that a present theory or model is probably true or probably very close to the truth, when there are a host of possibilities that are presently underdeveloped, or not yet conceived, or not yet even conceivable because beyond our current horizons of conception and imagination?7 As the rigorous philosopher of science, Wesley Salmon, remarked concerning the Bayesian probabilistic approach: What is the likelihood of any given piece of evidence with respect to the catchall? This question strikes me as utterly intractable: to answer it we would have to predict the future course of the history of science. No one is ever in a position to do that with any reliability. (Salmon 1990, p. 329)
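Salmon's worry can be written out in standard Bayesian notation (a sketch; the labels H, E, and C are mine): with H the hypothesis under test, E the evidence, and C the catchall,

```latex
% Posterior of H by Bayes's theorem, with the catchall C = "none of the
% hypotheses so far conceived":
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid C)\,P(C)}
% Salmon's point: the likelihood on the catchall, P(E | C), is intractable,
% since estimating it would require predicting the future course of science.
```

The denominator cannot be evaluated without a value for the catchall likelihood, which is exactly the quantity Salmon declares beyond anyone's reach.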

It seems to me that thinking about scientific realism is beset by folk intuitions more commonly associated with other topics, such as morality or the economy. This is ironic in the case of science, given the rigorous, hard-nosed, critical attitude that we usually associate with it.

Do Cognitive Illusions Make Scientific Realism Deceptively Attractive?

1.2 The Flat-future or End-of-history Illusion Do you, dear reader, think that the future of a given science, say physics, is likely to be as dynamic as its past? Yes or no? If you answered "no," did you take into account that "the future" of modern science is likely to be much longer than its past? After all, "the future" is a long time! Modern physics is, at best, about three hundred years old. So even if the next three hundred years look tame compared to the past three hundred, what about the next thousand years, or two thousand? Even if a science changes more slowly in the future than in the past, it can still, eventually, outdo its past, making today's science look just as wrong or limited, primitive even, to future researchers as our past looks to us. Maybe. No one knows. No one can know. I return to this idea below. We are tempted to treat the future, however distant, as a simple extrapolation of the present. Prominent social psychologist Daniel Gilbert and colleagues have described what they term "The End of History Illusion." We measured the personalities, values, and preferences of more than 19,000 people who ranged in age from 18 to 68 and asked them to report how much they had changed in the past decade and/or to predict how much they would change in the next decade. Young people, middle-aged people, and older people all believed they had changed a lot in the past but would change relatively little in the future. People, it seems, regard the present as a watershed moment at which they have finally become the person they will be for the rest of their lives. This "end of history illusion" had practical consequences, leading people to overpay for future opportunities to indulge their current preferences. (Quoidbach, Gilbert and Wilson 2013, p. 96)

7 In his (2006) and later writings, Kyle Stanford has wonderfully re-opened this topic.

As Gilbert remarked to a New York Times reporter: Middle-aged people – like me – often look back on our teenage selves with some mixture of amusement and chagrin. What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong. (quoted by Tierney 2013)

Lead author Jordi Quoidbach added: "Believing that we just reached the peak of our personal evolution makes us feel good." Decision theorist Daniel Goldstein (2011) and colleagues have developed various tricks (such as filming you to look like a dramatically older person) in order to improve decision-making with long-term life implications. Something like this illusion invites scientists and science observers to be realists. I feel the temptation myself and must actively resist it and the other illusions described below. Just as our personal futures look uneventful or flat compared with our pasts, so does the future history of science. The future is not available to us, of course, and only with the greatest difficulty, by resorting to science fiction, scenario planning, historical patterns projected onto the future, and the like, can we begin, concretely, to invent candidate future scientific knowledge transformations. Even if we succeed, the imagined scenarios often strike us as implausible because they violate current "knowledge." Who wants to read science fiction that is obviously "unscientific"? Why call this illusion the end-of-history illusion? Well, a flat future in a given field means that no more really fundamental breakthroughs will occur.



The history of major progress in that field is over. From here on out, the gains will be, at best, routine, "normal science."8 But how could anyone possibly know that? As Popper used to say, "we cannot anticipate today what we shall know only tomorrow" (see Popper 1957, preface). And, insofar as Popper and others are correct that we learn (mainly) from our "mistakes," to suppose that we are now correct means that there is little more to learn of fundamental importance (Firestein 2015). Obviously, to say that, today, we cannot seriously imagine any alternative to present mature theories does not count as evidence.9 Many first-rate scientists in the past were in the same situation. The inaccessibility of the future does not mean that strong realism wins by default. That would be to commit a fallacy of appeal to ignorance: "I am right unless you can provide a specific reason why my current theory is faulty." To the objection that future research can always continue to delve more deeply into what is now understood, so that there can be major progress without discontinuity, the reply is that, while that does sometimes happen, it often does not. Historically, many or most successful, deeper inquiries have eventually disrupted received beliefs and practices. So, supposing that it continues that long, why should we expect the science of the year 3000 to closely resemble the mature science of today? The flat-future illusion involves short-term accounting. Even if the long term can be regarded as a series of short-term extensions, it does not follow that these short terms will all be direct extensions of our short term. This point leads into the evolution illusion. Revolutionary disruption is not necessary for a major transformation.

1.3 The Evolution Illusion It is important to note that critics of strong realism are not committed to holding that future, disruptive scientific revolutions are in the offing (although that does not seem unlikely, given the dynamical past history of all of our sciences). For, as several authors have noted, slow evolution over a long enough time period – many scientific generations – can be as transformative as you please. Thus, continuity through theory and instrumental change is not enough to sustain a realist standpoint on the history of science. Again, modern science is only a few hundred years old, and most sciences are far younger, making this point less evident to us than I suspect it eventually will be. The evolution illusion is a scaling illusion that potentially undercuts both the revolutionary view and the strong realists' static view. It is a distinct, third possibility. Over a long enough period of time, we should not be surprised to find instances of all three regimes. Evolutionary change can be so slow that it is hardly visible within a single scientific generation. This is especially true when it precedes a dramatic revolution. In the wake of the relativity and quantum revolutions, even Kuhn (1962 [1970]) held an insufficiently dynamical view of so-called classical mechanics. In rejecting the evolutionary (as well as the revolutionary) scenario, strong realists are committed to denying that the conditions for evolution (any longer) apply to mature fields. Yet all it takes for evolution to occur are variations, relatively long-term selection pressures on them, and retention of some of them, which become, in turn, the basis for further variation. Darwin himself pointed out that when these conditions occur (with the right sort of linkage), evolution is not improbable; on the contrary, it is virtually unstoppable.10

8 Here and below the reader will find positive resonances to Laudan (1981), Fine (1986), Cartwright (1999), van Fraassen (2001), Teller (2001), Stanford (2006), and Giere (2006).
9 I find it increasingly awkward to use this theory-centered language, given that many sciences are not characterized by grand theory in the way that parts of physics have been, and given that modeling is where we find more of the research action. Since most models are deliberately imperfect representations of reality, the modeling "turn" itself provides a challenge to strong realism.
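The scaling point – change invisible within one generation, transformative over many – lends itself to a toy variation-selection-retention simulation. This is my illustration with arbitrary numbers; the "trait" is merely a stand-in for cumulative change.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Minimal variation-selection-retention loop. Each generation, small
# variants of the current state are generated (variation); the best is
# kept (selection pressure + retention) and seeds the next round.
def evolve(generations, step=0.01):
    trait = 0.0
    for _ in range(generations):
        variants = [trait + random.uniform(-step, step) for _ in range(10)]
        trait = max(variants)
    return trait

short_run = evolve(30)      # one "scientific generation": barely visible drift
long_run = evolve(30_000)   # many generations: change of a different order
```

Per-generation change is bounded by `step`, yet over enough generations the cumulative drift dwarfs anything visible in a short run, which is exactly the sense in which slow evolution "can be as transformative as you please."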

1.4 The Historical Maximum Illusion Our latest, greatest scientific advances put us at the top of our game. We seem to be masters and controllers of the universe. But this is a historical illusion, one that depends on our bounded historical perspective. For this illusion confuses what is really only a local maximum, as far as contemporary experts can tell, with the global maximum – whether 'global maximum' means 'the true theory' or only 'the best theory that human research will ever achieve.' With the naked eye, mountain climbers in a dense fog can only tell when they have reached a local maximum, by noting that each direction heads downhill from where they are standing. Research scientists at far frontiers are in even more of a fog, for they cannot tell whether they are at a maximum at all, local or not, since tomorrow may bring another step "upward." In fact, since the standards themselves are historically local to some degree (they change over time), researchers cannot determine "upward" or "downward" in an absolute sense (except, perhaps, where 'downward' means 'decisively refuted'). Let's say, then, that they are at a historical maximum, higher than anything in the previous history of work in that domain. They can at least tell when they have reached a historical maximum (supposing that their research holds up), but they are far from knowing that it will turn out to be the highest peak. Strong realists tend to be "Ptolemaic" (that is, "pre-Copernican") in privileging our own historical perspective as if it were absolute, thereby enabling us to determine, in favorable cases, that we are at the global maximum of correct understanding, or only a small step away (Nickles 2016). But since we cannot compare present work with either metaphysical reality or with future scientific work, such a determination is impossible. Plato's paradox of the Meno lurks here. For if we could compare our claims with the truth about reality, then we would not have to inquire, because we would already know the truth. That is one horn of Plato's Meno paradox of inquiry. The other horn of the Meno dilemma is that we would not recognize the truth even should we hit upon it accidentally. That is too extreme for many situations, but it remains plausible at the far or "wild" frontiers. If a textbook of quantum mechanics had landed at Aristotle's feet, he would not have recognized it as any sort of scientific advance, let alone the truth. But we need not resort to such extreme examples. Planck firmly resisted Einstein's postulation of free quanta in 1905 and his later derivation of an early form of wave-particle duality, even though Planck certainly understood the physics publications of his day (Kuhn 1987). For years, Planck, the so-called founder of quantum theory, failed to recognize Einstein's truth (if that's what it was) even as a significant advance.

10 The parallel in economics is the static, equilibrium theories that dominated much of the 20th century. Since evolution is driven by innovation, these static conceptions preserve the old idea that innovation is an exogenous factor, an intrusion from outside the normal working of the scientific and economic systems.
The historical maximum illusion could also be called the better-best illusion. Today's scientific achievements are better than anything before in terms of methodological criteria that researchers can use – level of detail, mathematical sophistication, predictive accuracy, real-world applicability, etc. Strong realists want to say they are not just better but the best or close to the best possible, including any that may become available in the distant future. It is as if "now," in the early 21st century, were special, a turning point in history. It is as if, with "mature" science, we have passed a nonhistorical threshold. To make that judgment would mean that we are standing outside of history and issuing an absolute judgment, a sort of "final judgment" about maturity. (See below for more on this maturity illusion.) Our scientific advances are significant enough without having to exaggerate them. To claim that they embody ultimate truths (nearly enough) is to deny that the science in question will be seriously progressive in the future. And to say that about the future implies, in turn, that today's frontiers are no longer deep and challenging. As noted above, we would all like to think that we had glimpsed the Forms before we die, or at least that we had lived at a turning point in history. And the "end" of history would be a turning point of turning points!
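The climbers-in-fog analogy of this section maps directly onto local search. In the sketch below (the landscape and all numbers are invented), the climber's test – "each direction heads downhill from here" – certifies a local maximum only; the higher peak elsewhere stays invisible.

```python
# A climber in fog on a one-dimensional ridge with two peaks: a lesser one
# near x = 2 (height ~1) and the global one near x = 8 (height 3).

def height(x):
    return (max(0.0, 1 - (x - 2) ** 2)
            + max(0.0, 3 * (1 - ((x - 8) / 2) ** 2)))

def climb(x, step=0.01):
    """Step uphill until both neighbouring steps go down."""
    while True:
        left, here, right = height(x - step), height(x), height(x + step)
        if here >= left and here >= right:
            return x, here          # "all directions head downhill"
        x = x + step if right > left else x - step

# Starting in fog on the slope of the lesser peak:
peak_x, peak_h = climb(1.5)
# The climber can certify a local maximum at peak_x, but peak_h is far
# below the unseen global maximum, height(8.0) == 3.0.
```

The climber's certificate is genuinely checkable from where she stands, just as scientists can check that current work tops everything previously tried; what no local test can deliver is the claim that there is no higher peak out in the fog.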

1.5 The Maturity Illusion Our stable, most mature sciences have made so much progress over the past centuries – how could they not be approaching the truth about reality? Here we have another illusion of historical perspective. Strong realists tend to conflate past-oriented, relative maturity judgments with absolute maturity judgments. Why does maturity enter the picture? Because strong realists must respond to the challenge that, at several points in the past, scientists have thought they had the truth; yet it later turned out that they were badly mistaken. What then makes today's realism more defensible than theirs? Because, many strong realists say, the sciences they champion (usually physics) are now mature, whereas they were not before. The flaw in this move is easy to see. Maturity is invoked to stop a negative historical induction, but maturity judgments run up against the very same problem. As in the historical maximum illusion, we usually have good reasons for thinking that a present-day science is more mature than its predecessors, but future scientists will be able to make the same judgment about their science in relation to ours. Claims about scientific maturity are only historically relative and should be historically indexed. Some realists take the maturity claim to imply that scientific frontiers have now been tamed, given the increased sophistication of our methodologies and instrumentation. Here is an example from Michael Devitt, a prominent moderate realist. [W]e have very good reason to believe that we have been getting better and better at learning about the unobservable world; good reason to believe that, aided by technological developments, there has been, over recent centuries, a steady improvement in the methodology of science. That's why our present theories are more successful. Indeed, serious sciences like physics take this improvement for granted, as methodological instruction in the classroom demonstrates.
A naturalized epistemology surely supports this confidence. And, I have argued, there seems to be no basis for a pessimistic meta-induction about this epistemology . . . . So, we have serious scientific support for the view that theories are indeed getting more successful and, hence, the realist argues, more true. (Devitt 2011, p. 292)

In my opinion, this sort of view is the product of a double whig bias (past and future – see below). It comes close to begging the question in favor of realism – not directly, by presupposing that our theories are true – but indirectly, by supposing that our methods are now mature, that is, correct and sufficient to reveal the truth (or something close). But, again, how could we possibly know that? Why think that today's research at the far frontiers is any less fraught with uncertainty than the frontiers faced by previous generations? Realists who insist on the maturity point seem to think that science has now reached a permanent Kuhnian "normal" status of filling in some gaps, resolving some anomalies, and increasing precision (Kuhn 1962 [1970]). Having studied the standard textbooks or done scientific work, we find quantum mechanics pretty normal by now. But whenever scientists attempt to move into novel domains or regimes (think Planck scale, for instance), as scientists are certainly still doing today, there is no guarantee that the old methods and practices will work well (Wolpert and Macready 1995, Wolpert 1996). Such exploratory research is replete with trial and error. (See also Section 1.12, the illusion of control.) The strong realist claim about maturity also fails far from the "wild" frontier. For, as noted above, the best experts still disagree on how to interpret the quantum mechanics of the 1920s and 1930s, let alone quantum field theory and later. Some wonder whether an intelligible realist interpretation is possible at all. So even the existence of familiar, long-stabilized regions of science does not necessarily speak in favor of realism.

1.6 The Expert Agreement or Consensus Illusion When scientists in a field agree on the equations cum solutions and predictions, we observers are tempted to believe that these experts also agree on what is really going on at deep theoretical levels of description. That is the way both textbook science and new breakthroughs are commonly presented. But that is often not the case (Brockman 2015, Lewis 2016). The debate over the interpretation of quantum theory exposed that illusion in the 20th century. However, the difficulty was apparent from the beginning of the so-called Scientific Revolution. Newton's mathematically successful treatment of causal action-at-a-distance reduced Cartesian contact action to folk science, yet was itself notoriously unintelligible and counterintuitive. A second example is that most of the 17th-century greats made a strong appearance-reality distinction, the idea being that there is a reality underlying the appearances, very unlike the appearances, that causally explains those appearances. Yet there was wide disagreement among the corpuscularians (for example), so realists should not take the fact that this distinction stimulated a lot of fruitful theoretical thinking (as well as much unfruitful work) to provide evidence in support of strong realism about the unseen world. And, clearly, to say that there is some sort of reality beneath the appearances is vague and unspecific. It is not strong realism as I have defined it. As Kuhn already pointed out, the monolithic nature of normal science does not carry over into specific "philosophical" interpretations of the nature of the referents involved. This was surely one of his reasons for rejecting strong realism. Scientists can agree on what the problems are and how to solve them (by modeling unsolved puzzles on concrete problem solutions already available) while radically disagreeing about the status and meaning of the principles and equations employed. His examples included Newton's second law (definition or deep empirical law?), his action-at-a-distance, and Maxwell's electromagnetic theory.

1.7 The Whig Fallacy, Past and Future We are surely correct to judge our science as better than that of a hundred or four hundred years ago, for even students can appreciate that the problems addressed by Galileo and Descartes were easy ones. Well, yes and no. In its familiar form, the whig fallacy interprets and evaluates past practices and products in the light of present ones. It promotes a kind of intellectual and cultural imperialism in imposing present understandings, goals, and standards on the past instead of evaluating past work on its own terms. The whig writes under the illusion that the past was flat in the sense that, at least since 1600 or so, scientists have always been scientists in our sense of the term and have always worked within the same ahistorical context of goals, standards, etc., i.e., "modern science." Applied to the realism debate, as Theodore Arabatzis remarks (2001, p. S539), "the past state of science is, on a realist reading, an imperfect version of its present state." A similar mistake (I claim) is made by strong realists in imposing their own understandings, goals, and standards on the future. They do this gratuitously and then congratulate themselves for having finally achieved a fully mature science that need not undergo much further change. How else can we conclude that the future will be flat, despite continued research? The end-of-history fallacy combines with the whig fallacy, each serving to support the other in a circle of reasoning. What about the weaker realists called structural realists? My answer is that structural realists, too, are whiggish. They tend to judge what was correct and necessary (rather than superfluous) in past science by supposing that our science furnishes the (nearly) correct standard and then calling our attention to how past work anticipated the equations in today's textbook. This practice is unhistorical, and it begs the realist question concerning the status of our presently best products of mature science. Fresnel's equations may have survived the transition to Maxwellian electromagnetic theory and beyond, but how do we know they will continue to hold up so well as science advances? Both moves – evaluation of the past in terms of the present and projection of our present understandings and intuitions onto the future – simply presuppose that our own science is basically correct. So, again, our present perspective is taken to be the fixed point in terms of which all other work is to be evaluated. Notice that the idea of absolute maturity imposes our claims on the future even when we are careful not to be whiggish about the past.

1.8 The Fish Illusion We are like fish in water in that key parts of our socio-cultural tradition and of our scientific techno-cultural environment are invisible, or at least transparent, to us. In some cases, they are not yet articulable by us. In other cases, the claims are so "obviously true" as to seem a priori, so that anyone who questions them is considered weird. Especially where science is concerned, this illusion of cultural transparency tempts us to think that we are culture-free and timelessly objective. Even we historicists are unable fully to locate ourselves historically, in the sense that future historians and culture commentators (who are also fallible, of course) will make insightful observations about us in relation to our past (and our future) that we have failed to see – and perhaps cannot see, just as they will see such things about themselves only when looking back at their earlier selves. We cannot escape from history, even though we can identify progress in overcoming previous historical blindness. To put the point more provocatively, we can overcome history incompletely and only to the extent that we become strong historicists, able to take into account deep and subtle historical perspectives.

1.9 The Textbook Illusion Most of us science studies practitioners, including science journalists, have learned most of our science from textbook-based science courses plus independent reading, rather than from a life of on-line scientific practice. (There are important exceptions.) Especially in the hard sciences, textbooks are standardly written in a style that we may call textbook realism. Standard textbooks from respected publishers are, after all, authorities on the state of the field, relative to the level of the student. They deal with material that has achieved wide and deep community consensus. If they contain an element of history, it is typically thin and whiggish.



Rarely do teachers question the textbook or suggest that significant alternative science might be in the offing, and it requires active effort on the part of the student/reader even to doubt a textbook claim, let alone to possess the scientific imagination to suggest an alternative. Thus, a reader who learns science mainly from reading textbooks, absent a study of the relevant history of science, is likely to accept what the text says uncritically (Chang 2012). In sum, the very nature of science education predisposes us to be strong realists. Kuhn (1962 [1970]) thought this was probably a necessary cultural bias in the education of good scientists, but (as he emphasized) it is a bias nonetheless.

1.10 The Puzzle-solving Illusion Mathematical and empirical scientific problems and paradoxes fascinate us and challenge us intellectually, just as crossword puzzles, jigsaw puzzles, and quiz shows do. It is too easy to regard the former as like the latter. This illusion has at least two components. One is the view that most scientific problems are well-structured. This illusion overlaps the textbook illusion in that all problems are thought to be like the normal scientific problems that Kuhn (1962 [1970], ch. 4) labeled "puzzles." The strong version of the illusion derives from assuming that scientific problems can be compared to crossword puzzles in the sense that they are so heavily constrained that there is a unique right answer. This uniqueness illusion is enhanced if the problem and solution are formally crisp and also if it is the only known solution at the time. We tend to take for granted that the problem is legitimate and correctly formulated. Working scientists as well as science students and we philosophers are invited to make this mistake insofar as we look backward whiggishly, after the problems have been solved and standardized, instead of forward to unsolved problems and open research questions at the frontiers of research, where the transformative growth of knowledge occurs. At this point Baruch Fischhoff's (1975) hindsight bias has come into play. "I knew it all along," or at least "How could it be otherwise?" The second component is that disciplinary attention is drawn to crisp, well-structured problems and solutions disproportionately to their importance, partly because they seem more definitively soluble and (relatedly) partly because such puzzles provide challenges that are especially energizing. Consider standard neoclassical economic theory with its highly idealized homo economicus.
In general, as the social sciences have balkanized into professionalized specialties addressing idealized, academic problems, they have become less influential on social policy that addresses real-world problems, at least in the USA (Desch, Ikenberry, Trachtenberg and Wohlforth 2019). In philosophy of science, far more attention is given to formal theories of confirmation than to real decision-making at research frontiers. Because they are not such brain teasers, we are tempted to neglect messy, ill-structured problems.

1.11 The Precise Prediction Illusion Wow! How could a theory yield a correct prediction to the tenth decimal place and yet not be true? Accurate predictive success tempts us to say that we must be at, or very near, the truth, the more so when the prediction is novel and surprising. This temptation has multiple sources. First, it is really just a turbocharged version of the fallacy of affirming the consequent. We forget that there are any number of logically possible theories that produce the same result, few if any of which we can actually think of. To emphasize the fact that nearly all of the ones we do succeed in constructing are ad hoc failures misses the point of the underdetermination problem. Second, it is easy to confuse numerical proximity of a prediction to the measured value with proximity to the deep, "metaphysical" truth about the world. Historically, many theories and models have enjoyed predictive success and yet been badly wrong about the world – by later theoretical and empirical lights. Newtonian mechanics is the paradigm case. Third, and relatedly, precise prediction does not automatically translate into explanatory understanding, explanatory intelligibility. Being so impressed with the ability to make extremely accurate predictions invites the illusion that one really understands the phenomenon in question, because "it was to be expected" (Hempel 1965, p. 337). This may be especially true when one is quite familiar with the calculation practices that yield the precise prediction (Gigerenzer 1991b). Being so familiar with quantum field theory or with Kant that one can anticipate what comes next in the text is not the same as genuine understanding. We might speak here of the familiarity illusion. Fourth, degree of accuracy is only relative.
Isn’t it amazing that Eratosthenes, in the third century, BCE, could measure the circumference of the earth on the basis of his assumption of its spherical form, and that, in the late 17th century, Newton could predict the return of Haley’s comet and the motions of the moon? Yet these remarkable predictions look merely approximate to us today. The number of possible decimal places is unlimited, so we must not assume that our degree of precision provides an absolute standard of accuracy, let alone one of proof or truth. “There’s plenty of room at the bottom” in this respect as well as in the

Do Cognitive Illusions Make Scientific Realism Deceptively Attractive?


nanotechnology advances that Richard Feynman was forecasting in his famous 1959 lecture (Feynman 1960). Fifth, while remarkable predictive success provides important evidence of fertility as well as some (debatable) evidence of truth, it is easy to forget that it is a double-edged sword. For the slightest empirical discrepancy can undermine the most entrenched theoretical platform. The lesson here is that the more precision we are able to achieve, the greater the chance that a small and unresolved discrepancy with current theory may be found. There is a perspectival cognition in play, something of an observer selection effect, with the precise prediction argument for strong realism. Even our most precise tools yield coarse-grained results relative to a much higher standard. Greater precision might well yield discrepancies not yet “visible” to us. There may be fish that we do not catch because our nets aren’t fine enough. The perspectival point can be generalized, I believe. Scientists and commentators naturally focus on extant theories, models, instrumentation, research designs, stated goals and standards, etc.; for, given our humanly bounded scientific imagination, there is not a whole lot else to focus on. Yet, both historically and logically, these are tiny samples of possible theories, models, instruments, techniques, etc. We are relatively blind to the alternatives potentially available to us or already available to the super-intelligent inhabitants of the planet Zork. In this respect, which theories get proposed or instruments or research designs invented at a given time is historically contingent.11 Many strong realists place particular emphasis on the epistemic value of novel predictions, those that are unexpected, given the background knowledge available to the scientists at the time. I can only touch briefly on this controversy, but I am less impressed. For one thing, I do not place epistemic value on subjective, psychological “wow” factors. 
Unlike the traditional theory of confirmation (the timeless logical relation of theory to evidence), there are Bayesian probabilistic approaches that do provide a rationale for taking novel prediction seriously. I take it seriously, too, but I take novel predictive success more as good evidence of research fertility (extending our models into new domains) than as the mark of truth. Similar points apply to empirical robustness and to theoretical unification. We rush to regard these significant indicators of scientific progress as equally strong evidence that we are on the track of truth, but such progress does not entail that we are on that track. As with prediction, there is a potential cost to such advances. Here again we hold a double-edged sword. The cost of tight

11 For a fascinating collection of articles on historical contingency in science, see Soler, Trizio and Pickering (2015).


Thomas Nickles

robustness (in the sense of independent lines of investigation converging on the same conclusion) is fragility, that is, the loss of resilience (ability to recover from epistemological buffeting). This includes the tight logico-mathematical unification of previously distinct theories or models. In both cases the tight connections of the parts provide conduits for rapid propagation of error through the system. Without buffering (“wiggle room,” “fudge factors”), the slightest discrepancy can begin a cascade of disconfirmations, via modus tollens, that overturn one established claim after another.

1.12 The Illusion of Control

Our latest scientific and technological breakthroughs convince us that we are well on our way to becoming masters of the universe. In these moments of exhilaration, we forget the old saw that every answer raises new questions (Firestein 2012). Spyros Makridakis and Nassim Taleb (2009) speak of “the illusion of control” with respect to funding agencies that resist funding potentially disruptive projects – those that challenge the status quo. Such resistance discourages innovation. Their work inspires three more general points.

(a) The success of (especially) mature science has led many people to think of the scientific frontier as now conquered. By contrast, Vannevar Bush (1945) spoke of an “endless frontier.” Informally stated, thinkers from Hume to Wolpert and Macready on the “No Free Lunch” theorems have argued convincingly that no method, including any present one, is guaranteed a priori to work in novel domains (cf. Wolpert and Macready 1995, Wolpert 1996).

(b) The textbooks and popular hype suggest that we know far more about the physical and social universes than we do. Nancy Cartwright’s (1999) metaphors of the universe as a “dappled world” and of “islands of order” provide an alternative picture of current science. That picture gains plausibility when looking in detail at some concrete applications. And, on a large scale, recall how recent are the advances in nonlinear dynamics (“chaos theory”) and complexity theory, and how little we know about so-called dark matter and (especially) dark energy.

(c) The illusion of self-containment is that part of the illusion of control that supposes that science is driven entirely by internal factors, a message unfortunately propagated by internalist historians and philosophers in earlier generations.
As historians of science and technology have shown convincingly by now, economic, political, and other interests, as well as the development of new instrumental technologies, have often opened up new frontiers for investigation. Who in 1850, absent today’s reliance on silicon wafers for



computer chips, would have thought research on sand (silicon) so important? And the current revolution in scanning technologies is driven, in part, by perceived needs ranging from medicine to metallurgy. It is no surprise to pragmatists that scientific goals reflect human purposes. In Expert Political Judgment, on hedgehogs and foxes, Philip Tetlock writes of

our shared need to believe that we live in a comprehensible world that we can master if we apply ourselves. (Tetlock 2005, p. 19)

The dominant danger remains hubris, the mostly hedgehog vice of close-mindedness, of dismissing dissonant possibilities too quickly. But there is also the danger of cognitive chaos, the mostly fox vice of excessive open-mindedness, of seeing too much merit in too many stories. (Tetlock 2005, p. 23)

In overreacting to the challenges of first- and second-generation science studies, philosophers retreated to the hedgehog position on realism. Tetlock is writing about political judgment, but there are similarities to scientific judgment about the future. Both scientists and political policy-makers are working at a moving frontier.

1.13 The Popular Science or Dead Metaphor Illusion

By contrast with popularized presentations of scientific work, including good semi-technical presentations, we are tempted to think that there is a completely literal science that makes no use of models, analogies, and other rhetorical tropes. To be sure, there is an important difference between publications in top journals and popular re-presentations. No one denies that. Nevertheless, it does not follow that mature scientific results make no use of rhetorical language, that our best, mature theories somehow capture “nature’s own language,” as Galileo already seems to have thought. It remains an open question what it takes to “kill” a scientific metaphor, so that it is now “dead,” that is, completely literal.

This point resonates with the fish-in-water illusion. Sometimes, in historical retrospect, we recognize that metaphors associated with the social culture of the day are in play that were not recognized as such by everyone at the time. An example, pointed out by Gigerenzer and Goldstein (1996), is that Charles Babbage regarded his computers as machines analogous to the punched-card-using Jacquard looms of the British Industrial Revolution, rather than as information-processing devices, to employ our own metaphor. So regarded, Babbage’s machines belonged to the machine age rather than to the information age. And



today we still talk of quantum processes in terms of “particles” and “waves,” although neither term really fits. Scientific publications are only relatively literal, for the professional papers are based on research that employs models, analogies, similes, and other tropes. Models themselves are tropes, based on one or another figurative representation. Moreover, all scientific work, including representation, has to be “popular” in the sense of fitting human cognitive capabilities (Nickles 2019).

1.14 The Simulation Illusion

What is the difference between an excellent simulation and the target system itself? The difference seems obvious, but it becomes surprisingly easy to blur the distinction. The simulation illusion is the unwarranted slide from simulation to reality, precisely the slide from research tools to ontic assertion. Nickles (2018b) provides several examples from informational biology and digital quantum physics. Underlying this slide is the ambiguity of the terms ‘model’ and ‘to model X as Y’ and, more basically, of ‘representation’ and ‘to represent X as Y.’ Simulations can be extremely valuable research tools, of course, tools that enable us to learn much about reality. Simulations are frequently used even to generate “empirical” data. (There is much discussion about how this is possible.) But a simulation is still a human product, a manufactured tool, not targeted reality itself. Simile, metaphor, and analogy are all well-known rhetorical tropes, so this cognitive illusion, like the metaphor illusion, can be regarded as a specific instance of a general rhetorical illusion – that completed science is, and must be, devoid of rhetorical force.

Simulation practices are part of what I call the modeling revolution or turn in scientific methodology. We now appreciate that modeling of various kinds pervades all sciences, from data gathering and interpretation to abstract theory. This amounts to a weakening of older conceptions of scientific method. Simply stated: the five-step scientific method that we all learned in school was essentially the method of hypothesis. Traditionally, such hypotheses were not known to be true, but nor, prior to testing, were they known to be false. So there was always the hope that one’s hypothesis might be true, to be confirmed by rigorous testing. Modeling is different in that nearly all models are known to be false from the get-go, because of deliberate simplification, abstraction, etc.
This weakening of traditional macro-methodology of science has surely stimulated enormous scientific progress. Given that models, as they stand, cannot be true representations of reality, I think realists face a challenge of how to explain the fertility of modeling.



1.15 The Truth Illusion

“You agree that tables and chairs and other people exist, that several substances possess the enduring trait of solubility in water, and even that atoms and molecules exist; so what is it about quarks or strings or alternative universes that could possibly bother you?” One form of the truth illusion is a slippery slope fallacy from everyday truth to (claimed) deep truth. We can resist it by acknowledging that our skepticism should increase by degrees as the claims concern the more esoteric and inaccessible natural structures and processes. It is not irrational to have our epistemic confidence fade out in this way.

A second form of the truth illusion confuses truth claims with claims about reliability and/or heuristic fertility. Take reliability. A high degree of reliability is not the same as truth. An important question is whether we need more than reliability, given the inaccessibility of truth in highly theoretical contexts. It is hard to see what usable leverage is gained by adding “It is true” to a scientific claim already established as reliable. How does speaking of truth provide any additional constraint on, or directive for, research? Easy, informal talk of truth invites us to think of truth as the final stage of confirmation, but, strictly speaking, a confirmation of hypothesis h2 is comparative. It shows only that h2 does better than h1. To add that h2 is true adds nothing of value to ongoing frontier research. If this is correct, then, ironically, talk of deep truth and strong realism is merely unwarranted interpretative overlay to the research enterprise. Another irony: speaking of truth is very often a proxy for speaking of high reliability and/or heuristic fertility, whereas for traditional philosophy of science (especially logical empiricism) it was exactly the other way around.

So, what of heuristic fertility? William James articulated a so-called pragmatic theory of truth.
His account was engaging but rather loose and deserved much of the criticism it received when James was taken quite literally as speaking of semantic truth in the technical, philosophical sense. But when we understand his remarks to be more about heuristic fertility than about literal truth, they begin to make more sense. For James applies the label ‘true’ to those hypotheses or gambits that have shown themselves to be highly fruitful in the sense of leading us to new experience and in reconciling new with older experience (1907, Lectures 2 and 6). In scientific research, fertility, like reliability, can be assessed far more confidently than truth. Unlike truth claims, forward-looking claims of heuristic fertility can be checked retrospectively and provide feedback on specific problem choice, methodology, instrumentation, grantsmanship, and such. Moreover, most of the value of the products of research (theories, models, methods, instruments, etc.) resides in their value for new research – rather than as representational truth claims that aim to mirror nature. Once we recognize this, we



can better appreciate the openness to future change characteristic of nonrealist stances, for even the most valuable heuristic devices tend to break down when pressed into new regions beyond the limits of their original or “normal” application (Wimsatt 1980). Where strong realists claim (approximate) truth, my bet is that, as often as not, we have not yet discovered those limits.

2 Conclusion

I do not claim that the above list is complete, although some critics will think it bloated, or worse. Nonetheless, I think the cognitive illusions presented here, especially when taken collectively, provide a powerful temptation to become a strong realist. They play on deceptive psychological and social features such as cognitive surprise that something could be so predictively accurate or could provide such a pretty problem solution yet not be true or even on the right track, “metaphysically” speaking. Please note that I am not saying that strong realists have no evidence or arguments for their positions. Nor am I saying that the above items are pure illusions, that none of them have any evidential value whatever. Clearly such items as precise prediction and impressive problem-solving ability can constitute evidence. My position is that they are not evidence enough to establish truth or nearness to the truth, and that people are overly impressed by them via sociopsychological perspectives that display the evidence in a misleading light.

Finally, I am certainly not denying that science makes theoretical progress of a realist sort. After Jean Perrin’s work, it was reasonable to conclude that atoms are real; and, given the century of additional work from then to now, it is today far more reasonable than it was then. However, given today’s greatly increased empirical (instrumental, experimental) access, that is now a fairly shallow, weak sort of realism, by contrast with strong realism about empirically remote theoretical postulations. Second, people usually exaggerate it. In Perrin’s day, it was reasonable to believe in atoms and molecules, but did anyone then really know what exactly they were? Far from it, by our present lights. Their ability correctly to describe what atoms are was very limited.12 Do we now know exactly what atoms are, or those components that we call electrons? I think not. Although we know a lot more about atoms today, we are

12 I cannot enter here into the ongoing debate about the nature and significance of Perrin’s work. See, e.g., Achinstein (2003, chs. 12 and 13), van Fraassen (2011), Psillos (2011), Hudson (2018).



also now ignorant in ways that experts in Perrin’s day could not even have formulated. Given the divergent interpretations of the quantum mechanics of the 1920s and ’30s, not to mention more recent developments, can we confidently say that our present claims are likely to stand up to the research of the next century? New science certainly giveth, but it also taketh away things that past scientists thought they knew.

References

Achinstein, P. (2003): The Book of Evidence. New York: Oxford University Press.
Arabatzis, T. (2001): “Can a Historian of Science Be a Scientific Realist?” Philosophy of Science, v. 68, pp. S531–S541.
Brockman, J. (ed.) (2015): This Idea Must Die: Scientific Theories That Are Blocking Progress. New York: Harper Perennial.
Bush, V. (1945): Science, the Endless Frontier: A Report to the President. Washington, D. C.: U. S. Government Printing Office.
Cartwright, N. (1999): The Dappled World. Cambridge: Cambridge University Press.
Chang, H. (2012): Is Water H2O? Dordrecht: Springer.
De Regt, H. (2004): “Discussion Note: Making Sense of Understanding.” Philosophy of Science, v. 71, pp. 98–109.
De Regt, H., Leonelli, S. and Eigner, K. (eds.) (2009): Scientific Understanding: Philosophical Perspectives. Pittsburgh: University of Pittsburgh Press.
Desch, M., Ikenberry, G. J., Trachtenberg, M., and Wohlforth, W. (2019): Cult of the Irrelevant: The Waning Influence of Social Science on National Security. Princeton: Princeton University Press.
Devitt, M. (2011): “Are Unconceived Alternatives a Problem for Scientific Realism?” Journal for General Philosophy of Science, v. 42, pp. 285–293.
Feynman, R. P. (1960): “There’s Plenty of Room at the Bottom.” Caltech Engineering and Science, v. 23, pp. 22–36.
Fine, A. (1986): The Shaky Game: Einstein, Realism, and the Quantum Theory. Chicago: University of Chicago Press.
Firestein, S. (2012): Ignorance: The Driver of Science. New York: Oxford University Press.
Firestein, S. (2015): Failure: Why Science Is So Successful. New York: Oxford University Press.
Fischhoff, B. and Beyth, R. (1975): “‘I Knew it Would Happen’: Remembered Probabilities of Once-Future Things.” Organizational Behavior and Human Performance, v. 13, pp. 1–16.
Fischhoff, B., Slovic, P., and Lichtenstein, S. (1977): “Knowing with Certainty: The Appropriateness of Extreme Confidence.” Journal of Experimental Psychology: Human Perception and Performance, v. 3, pp. 552–564.
Giere, R. (2006): Scientific Perspectivism. Chicago: University of Chicago Press.
Gigerenzer, G. (1991a): “How to Make Cognitive Illusions Disappear: Beyond ‘Heuristics and Biases.’” European Review of Social Psychology, v. 2, n. 1, pp. 83–115. DOI: 10.1080/14792779143000033.
Gigerenzer, G. (1991b): “From Tools to Theories: A Heuristic of Discovery in Cognitive Psychology.” Psychological Review, v. 98, pp. 254–267.



Gigerenzer, G. (2000): Adaptive Thinking. New York: Oxford University Press.
Gigerenzer, G. (2014): Risk Savvy: How to Make Good Decisions. New York: Viking.
Gigerenzer, G. and Goldstein, D. G. (1996): “Mind as Computer: Birth of a Metaphor.” Creativity Research Journal, v. 9, pp. 131–144.
Goldstein, D. G. (2011): “The Battle between Your Present and Future Self.” TED talk. (Accessed November 6, 2016.)
Hempel, C. G. (1965): Aspects of Scientific Explanation. New York: Free Press.
Hudson, R. (2018): “The Reality of Jean Perrin’s Atoms and Molecules.” British Journal for the Philosophy of Science, v. 0 (published online), pp. 1–26. axx054 (Accessed on 25 April 2019).
James, W. (1907): Pragmatism. New York: Longmans Green.
Kahneman, D., Slovic, P., and Tversky, A. (eds.) (1982): Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Kuhn, T. S. (1962 [1970]): The Structure of Scientific Revolutions. Chicago: University of Chicago Press. 2nd edition, contains “Postscript-1969.”
Kuhn, T. S. (1987): Black-Body Theory and the Quantum Discontinuity, 1894–1912. Oxford: Clarendon Press.
Laudan, L. (1981): “A Confutation of Convergent Realism.” Philosophy of Science, v. 48, pp. 19–49.
Lewis, P. J. (2016): Quantum Ontology: A Guide to the Metaphysics of Quantum Mechanics. Oxford: Oxford University Press.
Makridakis, S. and Taleb, N. (2009): “Living in a World of Low Levels of Predictability.” International Journal of Forecasting, v. 25, pp. 840–844.
Nickles, T. (2016): “Perspectivism versus a Completed Copernican Revolution.” Axiomathes, v. 26, n. 4, pp. 367–382. DOI: 10.1007/s10516-016-9316-0.
Nickles, T. (2017a): “Prospective versus Retrospective Points of View in Theory of Inquiry: Toward a Quasi-Kuhnian History of the Future.” In: Beaney, M., Harrington, B. and Shaw, D. (eds.): Aspect Perception after Wittgenstein: Seeing-As and Novelty, pp. 151–163. London: Routledge.
Nickles, T. (2017b): “Cognitive Illusions and Nonrealism: Objections and Replies.” In: Agazzi, E. (ed.): Varieties of Scientific Realism, pp. 151–163. Cham: Springer.
Nickles, T. (2018a): “Strong Realism as Scientism: Are We at the End of History?” In: Boudry, M. and Pigliucci, M. (eds.): Science Unlimited? The Challenges of Scientism, pp. 145–163. Chicago: University of Chicago Press.
Nickles, T. (2018b): “TTT: A Fast Heuristic to New Theories?” In: Ippoliti, E. and Danks, D. (eds.): Building Theories: Hypotheses and Heuristics in Science, pp. 169–189. Cham: Springer.
Nickles, T. (2019): “The Crowbar Model of Method and Its Implications.” Theoria, v. 34, n. 3, pp. 357–372.
Poincaré, H. (1952): Science and Hypothesis. New York: Dover. Originally published in French in 1902.
Popper, K. R. (1957): The Poverty of Historicism. London: Routledge and Kegan Paul.
Popper, K. R. (1963): Conjectures and Refutations. New York: Basic Books.
Psillos, S. (1999): Scientific Realism: How Science Tracks Truth. London: Routledge.



Psillos, S. (2011): “Making Contact with Molecules: On Perrin and Achinstein.” In: Morgan, M. (ed.): Philosophy of Science Matters: The Philosophy of Peter Achinstein, pp. 177–190. New York: Oxford University Press.
Quoidbach, J., Gilbert, D. and Wilson, T. (2013): “The End of History Illusion.” Science, v. 339, n. 4, pp. 96–98.
Rosenberg, A. (2018): How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories. Cambridge, MA: The MIT Press.
Salmon, W. C. (1990): “The Appraisal of Theories: Kuhn Meets Bayes.” PSA 1990, v. 2, pp. 325–332.
Soler, L., Trizio, E. and Pickering, A. (eds.) (2015): Science as It Could Have Been. Pittsburgh: University of Pittsburgh Press.
Stanford, P. K. (2006): Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. New York: Oxford University Press.
Teller, P. (2001): “Twilight of the Perfect Model Model.” Erkenntnis, v. 55, pp. 393–415.
Tetlock, P. (2005): Expert Political Judgment. Princeton: Princeton University Press.
Tierney, J. (2013): “Why You Won’t Be the Person You Expect to Be.” New York Times, January 4.
Trout, J. D. (2002): “Scientific Explanation and the Sense of Understanding.” Philosophy of Science, v. 69, pp. 212–233.
Trout, J. D. (2005): “Paying the Price for a Theory of Explanation: De Regt’s Discussion of Trout.” Philosophy of Science, v. 72, pp. 198–208.
van Fraassen, B. C. (2001): “Constructive Empiricism Now.” Philosophical Studies, v. 106, pp. 151–170.
van Fraassen, B. C. (2011): “What Was Perrin’s Real Achievement?” In: Morgan, M. (ed.): Philosophy of Science Matters: The Philosophy of Peter Achinstein, pp. 231–246. New York: Oxford University Press.
Williams, B. (2004): Truth and Truthfulness. Princeton: Princeton University Press.
Wimsatt, W. C. (1980): “Reductionistic Research Strategies and their Biases in the Units of Selection Controversy.” In: Nickles, T. (ed.): Scientific Discovery: Case Studies, pp. 213–259. Dordrecht: Reidel.
Wolpert, D. (1996): “The Lack of A priori Distinctions between Learning Algorithms.” Neural Computation, v. 8, n. 7, pp. 1341–1390.
Wolpert, D. and Macready, W. (1995): “No Free Lunch Theorems for Search.” Technical Report SFI-TR-95-02-010. Santa Fe Institute.
Worrall, J. (1989): “Structural Realism: The Best of Both Worlds?” Dialectica, v. 43, pp. 99–124.

III Logical Approaches in Realist Terms

Alan Musgrave

Against Paraconsistentism

Abstract: Once upon a time it was thought that inconsistent theories are no good – they are false and they explain nothing because “From a contradiction everything follows” (‘Explosion’). Contemporary logical wisdom (Relevantism, Dialethism, Paraconsistentism) has outgrown these traditional views – it turns out that contradictions can be true and Explosion is mistaken. I defend the traditional views.

Keywords: Paraconsistentism, contradiction, relevantism, dialethism, realism

Mark Colyvan asks “Who’s Afraid of Inconsistent Mathematics?” (Colyvan 2008). To which I answer “I am.” I am afraid, not just of inconsistent mathematics, but also of inconsistent physics, inconsistent medicine, inconsistent politics, inconsistent ethics, inconsistent theology, of any inconsistent theory quite generally. The purpose of this paper is to say why.

The chief reason is very simple. It can plausibly be maintained that the growth of human knowledge has been and is driven by contradictions. More precisely, that it has been and is driven by the desire to remove contradictions in various systems of belief. The contradictions can be of many different kinds. Empirical scientists seek to remove contradictions between the predictions of their theories and the deliverances of observation and experiment. Important developments in mathematics have arisen from the desire to remove contradictions in mathematical theories. These are obvious cases, and examples of them could be multiplied. But contradictions can arise anywhere, not just in science or mathematics. And the effort to remove them has led to progress, or at least to change, in non-scientific fields such as theology, astrology, you name it.

Why do people try to get rid of contradictions? The classical answer is straightforward enough. A contradiction is a statement of the form “P and not-P.” Such a statement must be false, and so any theory that yields a contradiction must be false as well. For example, “There is a biggest prime number” when combined with the axioms of arithmetic leads to a contradiction. Arithmeticians just took this to be a reductio ad absurdum proof of its negation, “There is no biggest prime number.”

So what went wrong? What went wrong was that logical paradoxes were discovered. Logical paradoxes are very peculiar statements indeed. If you assume a paradoxical statement true, then you can prove that it is false – and if you assume that it is false, then you can prove that it is true.
The famous example is the Paradox of the



Liar, “This statement is false.” If you assume that this statement is true, then it follows that it is false. Does this not simply prove by reductio that it is false? No, because if you assume that it is false, then it follows that it is true. Paradoxes are not ordinary contradictions but ‘double contradictions’ and the classical way out is not available to us. More desperate measures are called for.

Logical paradoxes or ‘double contradictions’ are not the most familiar kind of paradox. The most familiar paradoxes – such as Zeno’s paradoxes of motion or Sorites paradoxes – are not peculiar statements but rather peculiar arguments. The arguments are peculiar in that they are apparently valid, yet proceed from apparently true premises to an apparently false conclusion. Faced with such an argument, we have three choices: we must show that contrary to appearances either the premises are not true, or the conclusion is not false, or the argument is not valid. In fact the classical solutions to the classical paradoxes all take the first course. Nobody considers them a challenge to classical logic, as the third course would require.

For example, in the famous paradox of Achilles and the Tortoise we demonstrate that Achilles can never catch the Tortoise by assuming in the premises that Achilles cannot cover infinitely many distances in a finite time. But if space and time are infinitely divisible, as the argument assumes, then Achilles can cover infinitely many distances in a finite time. You do not need to be Achilles to perform this marvelous feat – if I walk one meter in one second, then in one second I cover infinitely many distances, half a meter, a quarter of a meter, an eighth of a meter, and so on ad infinitum. It is the same with so-called Sorites paradoxes, of which there are a great many.
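The arithmetic behind the Achilles example can be made explicit (a standard observation, added here for clarity): at one meter per second, the infinitely many sub-distances form a geometric series with a finite sum, so the total time needed is also finite.

```latex
% Walking one meter: the sub-distances 1/2, 1/4, 1/8, ... sum to a finite total,
% so covering infinitely many distances takes only one second in all.
\[
  \sum_{k=1}^{\infty} \frac{1}{2^{k}}
    \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
    \;=\; 1 .
\]
```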
They all arise from the fact that ordinary languages contain lots of vague predicates, which have clear cases where they apply, clear cases where they do not apply, and a grey area in between. Applying – or rather, misapplying – a basic principle of arithmetic to such cases gives rise to Sorites paradoxes. Here are a few examples:

A man with only 1 hair on his head is bald. If a man with n hairs on his head is bald, so is a man with n + 1 hairs on his head. Therefore, all men are bald.

A 1-year old is not an adult. If an n-year old is not an adult, neither is an n + 1 year old. Therefore, nobody is an adult.

A 1-year old is not a teenager. If an n-year old is not a teenager, neither is an n + 1 year old. Therefore, nobody is a teenager.



1 grain of sand is not a heap of sand. If n grains of sand are not a heap of sand, neither are n + 1 grains. Therefore, there are no heaps of sand.

I think that these Sorites paradoxes all have a false second premise, which arises from applying the Principle of Mathematical Induction not to natural numbers in the abstract, but to concrete numbers of things. Applied to concrete numbers of things, such as the number of hairs on a man’s head, or the number of years a person has lived, or the number of grains of sand in a heap of sand, the Principle of Mathematical Induction is often false, as Sorites paradoxes show.

So far, so good. But this means that there is some “magic number n” such that if you have n hairs on your head you are bald, whereas if you have n + 1 hairs you are not. Similarly for the other cases. Sometimes we know what the ‘magic number’ is. In the case of the third example to do with teenagers, the magic number is 13. You become a teenager on your thirteenth birthday (and cease to be one on your twentieth birthday). Similarly, perhaps, with the term “adult,” if there is a generally accepted legal definition of when a person becomes an adult. But this only shows, it may be objected, that the terms ‘teenager’ and perhaps also ‘adult’ are not genuinely vague terms after all.

What about genuine cases of vagueness? Consider the fourth example, the so-called “Paradox of the Heap.” This is a particularly stupid misapplication of the Principle of Mathematical Induction, for it assumes that whether or not we have a heap of sand depends solely upon the number of grains of sand that are involved. But a million grains of sand do not make a heap of sand if they are laid out end-wise in a straight line. A heap of sand is a three-dimensional group of grains of sand, touching one another. You can make a tiny heap of sand by having two touching grains of sand and a third one sitting on top of them.
(I have applied for a research grant to hire a sharp-eyed and delicate-fingered student to demonstrate this!) So the ‘magic number’ in the so-called ‘Paradox of the Heap’ is three.

What about the first example, concerning baldness? Here we do not know what the magic number is, because nobody has yet been bothered to specify what it is. And perhaps nobody will ever, in the entire history of the human race, be bothered to specify it. So what? The second premise of the ‘Baldness Paradox’ is still obviously false, even though we do not know and likely never will know what the magic number in this case is. (Cheer up! Not even an omniscient God knows it. Of course, she might pick on a magic number of her own. But that would not be what we mean by “bald.”)
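The diagnosis can be put schematically (this formulation is an editorial gloss, not Musgrave's own notation). Writing B(n) for “a man with n hairs on his head is bald” (or any of the vague predicates above), every Sorites argument instantiates the induction schema:

```latex
% Sorites schema: base case plus inductive step yield the absurd conclusion.
% On this diagnosis the inductive premise is false for concrete numbers of things.
\[
  \frac{B(1) \qquad \forall n \,\bigl( B(n) \rightarrow B(n+1) \bigr)}
       {\forall n \; B(n)}
\]
```

Rejecting the conclusion while accepting the base case means denying the inductive premise, i.e., conceding that there is some threshold N with B(N) true and B(N + 1) false – the “magic number,” whether or not anyone has specified it.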



Enough of paradoxes in the usual sense, which if I am right pose no challenge to classical logic. What about the logical paradoxes, which do pose such a challenge and which seem to call for desperate measures? Some logical paradoxes, like the famous Liar Paradox, have been known for ages, and for ages the most common desperate response to them was (to quote Bertrand Russell) the March Hare’s solution: “I’m tired of this – let’s change the subject.” (Russell 1997, p. 59.) They were seen just as clever tricks or party puzzles. They were on a par with the famous question “Have you stopped beating your wife yet?” A decent married bloke cannot answer this question YES or NO without incriminating himself. His only sensible course is to refuse to answer the question. Similarly with the question “Is ‘This statement is false’ true or false?” Logico-mathematical geeks like The Reverend Charles Dodgson (a.k.a. Lewis Carroll) might like to peddle such puzzles and confuse their fellow Fellows at Oxbridge dinner parties. But sensible folk should just smile and pass on to something more serious.

This relaxed attitude was all very well before Bertrand Russell discovered a paradox in set theory. Russell’s paradox was no amusing party-trick. Russell and others were trying to make out that set theory was the logical foundation for all of mathematics. Paradox at the heart of set theory had to be taken more seriously.

Another common response to paradoxes like the Liar is to say that they are meaningless, and hence neither true nor false. But that is a desperate measure as well. We understand the paradoxical sentence perfectly well – it is not meaningless as a piece of gobbledy-gook is. Moreover, it is a declarative sentence rather than a question or command where the issue of truth or falsity does not arise. And so forth.
As for Tarski’s response, he does not so much solve the Paradox of the Liar as show us how to avoid it, by providing us with rules for forming languages in which it will be impossible to state it. Dialetheists and paraconsistentists have another desperate response to logical paradoxes. They say that paradoxical statements just are both true and false, in defiance of the law of contradiction. Perhaps this is not completely crazy – after all, you can prove by reductio that a paradoxical sentence is false, and you can also prove by reductio that it is true. Paraconsistentism began from this desperate response to logical paradox. I once asked Graham Priest, the High Priest of paraconsistentism, to give me an example of a true contradiction. He gave me the example of the Paradox of the Liar. I also asked him when reductio ad absurdum worked and when it did not. This is an important question for philosophers, since reductio ad absurdum is the only real weapon in the philosopher’s armoury. Priest replied that this is a good question – I do not recall him answering it. The trouble is that the moderns seem to overlook the difference between paradoxes and ordinary contradictions. It is one thing to say that a paradox or double contradiction is both true and false. It is quite another thing to say that

Against Paraconsistentism


an ordinary contradiction is both true and false. It is one thing to preach a Gospel of Tolerance regarding paradoxes. It is quite another thing to extend the Gospel beyond that peculiar and narrow compass, and preach the same Gospel regarding contradictions more generally. Yet this is what paraconsistentists do. (One is tempted to remark that paraconsistentists do not know the difference between ‘para’ and ‘consis’.) Mark Colyvan writes a paper called “Who’s Afraid of Inconsistent Mathematics?” (Colyvan 2008). How long will it be before lesser mortals start writing companion pieces to this – “Who’s Afraid of Inconsistent Economics?” or “Who’s Afraid of Inconsistent Ethics?” or (God Forbid!) “Who’s Afraid of Inconsistent Theology?”? Before I say more about what is wrong with Colyvan’s paper, let me say a little more about what is wrong with contradictions. It is not just that an inconsistent theory must (if we accept the law of contradiction) be false. It is also that an inconsistent theory is worthless at explaining and/or predicting things. Why is that? Because a contradiction logically implies anything whatever, because of ‘Explosion’ as it is called. The proof of Explosion is very simple. Suppose we have ‘P & not-P.’ This logically implies P. And P logically implies ‘P or Q.’ But ‘P & not-P’ also logically implies ‘Not-P’. And ‘P or Q’ and ‘Not-P’ logically imply Q. Q.E.D. Explosion means that an inconsistent theory is worthless. Suppose we derive from a scientific theory a surprising prediction, which experiment reveals to be true. If the theory is inconsistent, there was no point doing the experiment in the first place. Whatever the result of the experiment, the theory will be able to predict it. Or to put the point the other way round, suppose experiment reveals that the surprising prediction is false. No problem – if the theory is inconsistent, it will also be able to get the result of the experiment right no matter what it is.
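The proof just given can also be checked semantically. A minimal sketch of my own (an illustration, not part of the text): classically, an argument is valid iff no assignment of truth values makes every premise true while the conclusion is false, and since no assignment makes ‘P & not-P’ true, Explosion comes out valid vacuously.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment: valid iff no valuation makes every
    premise true while making the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P_and_not_P = lambda v: v["P"] and not v["P"]
Q = lambda v: v["Q"]

# Explosion: a contradiction entails an arbitrary, unrelated Q ...
print(entails([P_and_not_P], Q, ["P", "Q"]))        # True
# ... whereas P alone, of course, does not entail Q.
print(entails([lambda v: v["P"]], Q, ["P", "Q"]))   # False
```

The first check succeeds only because the premise is never true: the validity is, as the text says, vacuous, which is exactly what strikes folk-intuition as odd.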
The same goes for explanation. An inconsistent theory provides no good explanation of anything, since it explains everything and can never say that something will be the case rather than something else. Explosion is, it must be admitted, a counterintuitive result. Folk are often surprised when they first encounter it. So what? Folk are surprised by lots of the results of science. Folk-intuitions are often mistaken. Why should it be any different with logic? After all, the proof of Explosion is very simple. Despite this, Relevance Logicians do not accept that their folk-intuition is mistaken – rather they think that classical logic is mistaken about a contradiction implying everything. To avoid Explosion, they reject at least one of the rules that say that ‘P & Q’ logically implies P, that P logically implies ‘Either P or Q,’ and that ‘Either P or Q’ together with ‘Not-P’ logically implies Q. The favourite candidate for rejection is the rule that ‘Either P or Q’ together with ‘Not-P’ logically implies Q. Paraconsistentists ally themselves with Relevantists on this issue, for obvious reasons. A paraconsistentist Gospel of Relaxation regarding contradictions



will seem far more plausible if you also deny that a contradiction entails anything whatever. Contradictions, as well as being both true and false, may no longer be useless at explaining or predicting things because of Explosion. Why reject Explosion because it goes against folk-intuition, rather than rejecting folk-intuition because it conflicts with logic? What is the difference between logic and other sciences in this regard? The difference is that logic is thought of as a ‘theory of thinking’. Explosion is mistaken because folk do not go around thinking that Fermat’s Last Theorem is true because it is raining and it isn’t. Addition is wrong because folk do not go around making inferences like “Grass is green, therefore grass is green or 2 + 2 = 4.” Quite so – and so what? Logic is not a branch of descriptive psychology that describes the ways folk do and do not think. After all, logicians detect fallacies, widespread but invalid ways of thinking. If it should turn out, as well it might, that most folk habitually commit the fallacy of affirming the consequent, shall we incorporate this into our logic? Of course not. [I say “Of course not” but that is a little naïve. Stephen Stich and Richard Nisbett would incorporate a fallacy into their logic, provided the folk they recognize as expert thinkers habitually commit the fallacy. More generally, nowadays people say that if our inferential practices are in ‘reflective equilibrium’ with our inferential principles, then everything is fine. Which means that everything is fine if we systematically commit the fallacy of affirming the consequent, and endorse it as an inferential principle as well. But that is a long story.] Right from the outset logicians realized, in their practice if not in their theory, that logic does not just describe the ways folk think. George Boole’s great book was called The Laws of Thought (cf. 
Boole 1952), but in its Preface he made it clear that logic does not describe how people do think. Rather, Boole said, it prescribes how they ought to think. Alas, this normative psychologism is not true either, though the relevantists and paraconsistentists think that it is. Their criticisms of Explosion all rest upon this mistaken view. It is objected, against Explosion, that people who believe that P should not waste time and mental energy deducing from P indefinitely many conclusions of the form ‘P or Q’. Quite so. But this is only a criticism of the claim that P logically implies ‘P or Q’ if it is also assumed that logic is telling people that they ought to waste time deducing from P arbitrary conclusions of the form ‘P or Q’. Logic says no such thing. All that logic says about our thinking is this – “If you produce an argument, make sure that it is a valid argument.” Logic has nothing to say about which arguments it is sensible to produce. That is a quite different question. It belongs to what, in the Middle Ages, was called the ‘Science of Rhetoric’. One might say that the relevantists and paraconsistentists are confusing Logic with Rhetoric.



To see this, consider the following. Just as people do not go around deducing arbitrary conclusions of the form ‘P or Q’ from P, so also people do not go around deducing P from P itself. The reason is obvious. People produce arguments to try to convince others of things they do not believe. Obviously, if somebody does not believe P, it is no good producing the argument “P, therefore P” to try to convince them. Yet the argument “P, therefore P” is as valid as an argument can be. (It has been called absolutely valid, because it does not depend on any analysis of the internal logical structure of the statement or proposition P.) Similarly, and slightly less trivially, if somebody does not believe P, it is no good producing the argument “P and Q, therefore P” to try to convince them. Why do the relevantists and paraconsistentists not avoid Explosion by objecting to the Rule of Simplification (“P and Q, therefore P”) on the same ground as they object to “P, therefore P or Q”? Graham Priest’s argument against Explosion comes from the same stable. He makes a song-and-dance about the fact that a maths student would not get high marks if, asked to prove that there are infinitely many prime numbers, the student derived this from a contradiction.1 Quite so. This is meant to convince us that Explosion is mistaken. But neither would the student get high marks for deriving the result from itself, or for saying “Grass is green and there are infinitely many prime numbers, therefore there are infinitely many prime numbers.” Should this convince us that Simplification is mistaken? Rejecting the Rule of Simplification would be really exciting – we could have Parasimple Logics as well as Paraconsistent ones! Neither would the student get high marks for proving the infinity of primes by saying “Either there is an infinity of primes or I’m a monkey’s uncle, but I am not a monkey’s uncle, so there is an infinity of primes.” Shall we have Paradisjunctive Logics as well? Where will it end? 
Suppose we find somebody who believes P, and also believes not-P. You point out that they believe a contradiction ‘P and not-P.’ To which they reply, “No, I do not believe this contradiction at all. You are assuming that I accept the logical Law of Adjunction, but I do not. I believe P, and I believe not-P, but I do not believe ‘P and not-P’. Problem solved!” Shall we have Para-Adjunctive Logic as well? All these arguments rest upon a simplistic and misguided interpretation of the idea that logic tells us how we ought to think, a simplistic and misguided normative psychologism. As I said earlier, they all confuse Logic with Rhetoric. Rhetoric has, since the Middle Ages, been a neglected subject. So I suppose we should be grateful to the relevantists and paraconsistentists for drawing our attention to it again, though they do not realize that this is what they are doing.
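As it happens, the question “Where will it end?” has a precise answer in at least one paraconsistent system. A sketch of my own (using Priest’s three-valued Logic of Paradox, LP, as one concrete paraconsistent semantics – an illustrative choice of mine, not a system the text names): of the rules just canvassed, only disjunctive syllogism fails, and with it Explosion; Simplification, Addition, and conjunction introduction all survive.

```python
from itertools import product

# LP truth values: 1 = true, 0.5 = both true and false, 0 = false.
# A statement 'holds' when its value is designated (1 or 0.5).
VALUES = [0.0, 0.5, 1.0]
DESIGNATED = {0.5, 1.0}

def neg(x): return 1.0 - x
def disj(x, y): return max(x, y)
def conj(x, y): return min(x, y)

def lp_valid(premises, conclusion, atoms):
    """LP validity: no valuation gives every premise a designated
    value while giving the conclusion an undesignated one."""
    for values in product(VALUES, repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) in DESIGNATED for p in premises) and conclusion(v) not in DESIGNATED:
            return False
    return True

P, Q, A = (lambda v: v["P"]), (lambda v: v["Q"]), ["P", "Q"]

print(lp_valid([lambda v: conj(P(v), Q(v))], P, A))                       # Simplification: True
print(lp_valid([P], lambda v: disj(P(v), Q(v)), A))                       # Addition: True
print(lp_valid([P, Q], lambda v: conj(P(v), Q(v)), A))                    # Conjunction introduction: True
print(lp_valid([lambda v: disj(P(v), Q(v)), lambda v: neg(P(v))], Q, A))  # Disjunctive syllogism: False
print(lp_valid([lambda v: conj(P(v), neg(P(v)))], Q, A))                  # Explosion: False
```

The counterexample to disjunctive syllogism (and thereby to Explosion) assigns P the ‘both’ value 0.5 and Q the value 0: both premises come out designated while Q does not.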

1 See, for example, Priest (1997, 2000a, 2000b, 2001); or Priest and Tanaka (2004).



Should we really be grateful, though? Perhaps rhetoric is a justly neglected subject. It is very difficult to come up with useful generalizations about what is and what is not a sensible way of arguing. That depends on all kinds of facts about the person you are talking to or arguing with. There are pretty platitudinous generalities like “If you want to convince someone of a certain conclusion C, try to find premises which that person accepts or is likely to accept, from which C follows.” But these do not add up to much. We were recently treated, by Koji Tanaka, to two so-called ‘empirical arguments for paraconsistency’ (see, for example, Tanaka, Berto, and Paoli 2012). The first argument has already been discussed, and consists of Priest’s rhetorical claim that “Anyone who actually reasoned from an arbitrary [contradictory] premise to, e.g., the infinity of prime numbers, would not last long in an undergraduate mathematics course.” (I added the word ‘contradictory’ here – it is no part of classical logic to suppose that an argument from an arbitrary premise to the infinity of prime numbers is valid.) Tanaka’s second argument is due to Robert Meyer the relevantist, and goes as follows:2

1. We are committed to the logical consequences of our beliefs.
2. If we are committed to the classical consequences of our beliefs, then we are committed to everything if we have contradictory beliefs.
3. We are committed to some contradictory beliefs.
4. We are never committed to everything.

Therefore, we are not committed to the classical consequences of our beliefs.

I grant that this argument is valid. But I think it obviously unsound. Premise 3 is obviously false. To commit oneself to a belief is, presumably, to think it true. So what premise 3 is telling us is that ‘we’ think contradictions true. But we do not do that – at least, not unless we are already paraconsistentists.
Meyer does not argue for paraconsistentism; rather, he assumes in his premises that ‘we’ are already paraconsistentists. Instead of saying that we are committed to contradictory beliefs, as in Meyer’s premise 3, we should say that folk often or always have contradictory beliefs without suspecting it. That is true. But plain folk, when it is pointed out to them that they have contradictory beliefs, are shocked and try to get rid of the contradiction. They do not say “Oh, that’s fine, who’s afraid of contradictions, I think my contradictory beliefs are both true.” This is the heuristic power of

2 See, for example, Meyer (1976); Meyer and Mortensen (1984).



contradictions, or rather of the desire to get rid of them, with which I began – and to which I now return. If simplistic ‘normative psychologism’ (if I may call it that) is mistaken, what are we to make of my own view that creative thinking is driven by the desire to remove contradictions? Does that view not imply that as soon as a contradiction is discovered in some theory or system of belief, everybody should drop everything else they are engaged in and focus upon removing that contradiction? Mark Colyvan objects to such a normative or heuristic prescription. He gives examples where mathematical theories were found to be inconsistent but where some mathematicians continued working on other problems despite being aware of the fact. One of his examples is set theory: Paradoxes such as Russell’s . . . led to a crisis in mathematics at the turn of the 20th century. This, in turn, led to many years of sustained work on the foundations of mathematics. In particular, a huge effort was put into finding a consistent (or at least not known-to-be-inconsistent) replacement for naïve set theory. The generally-agreed-upon replacement is Zermelo-Fraenkel set theory with the axiom of choice (ZFC). But the inconsistency of naïve set theory cannot be the whole story . . . After all, there was a period of some 30 odd years between the discovery of Russell’s paradox and the development of ZFC. Mathematicians did not shut up shop until the foundational questions were settled. They continued working, using naïve set theory, albeit rather cautiously. Moreover, it might be argued that many mathematicians to this day, still use naïve set theory (Colyvan 2008, p. 25).

(By the way, Colyvan makes an obvious mistake here. He says that: “Not all arguments involving contradictions (or taking contradictions as premises) are defective. Take the argument from P & not-P therefore P & not-P. Surely this is both valid and sensible” (Colyvan 2008, p. 30). Surely this argument is not sensible at all. It is never sensible to try to convince someone of the truth of P by producing the argument “P, therefore P.” I suppose Colyvan deems it sensible because he does not want to reject “P, therefore P” on the same ground as he rejects Explosion.) Colyvan’s second example concerns the early calculus. Again I quote: When the calculus was first developed in the late 17th century by Newton and Leibniz, it was fairly straightforwardly inconsistent. It invoked strange mathematical items called infinitesimals . . . The problem is that in some places these entities behave like real numbers close to zero but in other places they behave like zero. . . . The calculus was eventually, and gradually, made rigorous . . . in the 19th century. . . . The point is simply that for over a hundred years mathematicians and physicists worked with what would seem to be an inconsistent theory of calculus (Colyvan 2008, pp. 27–28).

There is a great difference between Colyvan’s two examples. Naïve set theory led to paradox, whereas the theory of infinitesimals was simply contradictory. The



calculus was made rigorous by showing that we can get by without postulating infinitesimally small quantities that both are and are not equal to zero. Colyvan overlooks the difference and preaches a Gospel of Relaxation regarding contradictory theories in general. He uses these examples to urge that paraconsistentism is the appropriate logic for mathematics. Once we become paraconsistentists, naïve set theory and naïve infinitesimal calculus can be rescued. There is no need to adopt their more mathematically sophisticated replacements: ZFC and modern calculus. There are a couple of pay-offs here. First, both naïve set theory and naive infinitesimal calculus are easier to teach and learn than their modern successors. . . . The second payoff is related to the first and concerns the intuitiveness of the theories in question. [The naïve theories are more intuitive.] (Colyvan 2008, p. 31).

Gee whizz, what happened to the search for truth? Aristotle’s theory of terrestrial motion is easier to learn and more intuitive than modern physics. I am not making this up. Educational psychologists have produced ample evidence that it is true. The trouble is that Aristotelian physics contradicts observable facts. No problem – let’s just become paraconsistentists in physics as well, and go back to the Dark Ages! All I can say is that it is a good thing that classical logic prevailed, and that science did not rest content with Aristotelian physics, infinitesimals or naive set theory. All the great advances in human thought have been inspired by the desire to get rid of contradictions and find the truth. It is, of course, no part of classical logic to say that once a contradiction in a system of thought is discovered, everybody should immediately drop everything and seek to remove it. That would be a foolish contribution to the neglected science of Rhetoric. It is not how we do think, or how we ought to think. It is better to engage in a division of epistemic labour. In urging this point, Colyvan merely echoes what Imre Lakatos colourfully said long ago – that mathematics can “progress on inconsistent foundations” (Lakatos 1978, p. 67). But of course, Lakatos does not figure in his extensive bibliography. Let me conclude with an objection to what I have been saying. There is a rhetorical puzzle regarding deductive logic. The valid argument “P, therefore P” is not sensible, because it is completely circular and question-begging. The valid argument “P and Q, therefore P” is also not sensible, being partially circular and question-begging. The maths student would not get high marks for producing either as a proof of P. But suppose the student produces a more convoluted valid argument to the same conclusion (Euclid’s proof of the infinity of primes, for example). The student will get good marks. But is the convoluted proof any more sensible? 
The conclusion will still be “contained in the premises,” as logicians say, only less obviously so. Why reward the student for lack of obviousness?



Down the ages reflections like this have convinced many philosophers that no valid deductive argument is a sensible argument, precisely because its conclusion is always contained in its premises. The only sensible arguments are non-deductive or content-increasing or ampliative or inductive arguments. After all, only such arguments can ever tell us anything new. Or so many philosophers have argued. But as I pointed out long ago, this confuses logical novelty with psychological novelty. Folk are not logically omniscient, they do not always believe (let alone know) all the logical consequences of what they believe. People can be astonished to discover the logical consequences of their premises. (The best examples of this, down the ages, are provided by mathematical theories.) The point of arguing is to remedy our lack of logical omniscience. It is not to arrive at new conclusions – logically new conclusions, I mean. If you want to come up with some new hypothesis, engage in some creative or imaginative thinking. This may involve arguing – new hypotheses do not typically result from flashes of mystical intuition or come to scientists in their dreams. But when discovery does involve argument, the arguments are deductive ones. The ‘logic of discovery,’ in so far as it exists, is plain old deductive logic. The idea that we can get logically new conclusions using valid ampliative or inductive reasoning is just an attempt to square the circle. So the solution to the rhetorical puzzle is obvious. Deductive reasoning has a point because we are not logically omniscient. The conclusion of a valid deductive argument will not be logically new – but it may very well be psychologically new. We may be astonished to discover that something we do not think true follows deductively from things we do think true. Similarly, we may be astonished to discover that things we think true have a deductive consequence that we think false. 
Deductive logic is necessary simply to overcome our lack of logical omniscience. God does not need logic – but we do. So I conclude, with the obvious truth that I am not God.

References

Boole, G. (1952): “Preface.” In: Boole, G., An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities, pp. iii–iv. La Salle: Open Court Publishing Co.
Colyvan, M. F. (2008): “Who’s Afraid of Inconsistent Mathematics?” ProtoSociology. An International Journal of Interdisciplinary Research, v. 25, pp. 24–35.
Lakatos, I. (1978): The Methodology of Scientific Research Programmes. Philosophical Papers, v. 1, ed. Worrall, J. and Currie, G. Cambridge: Cambridge University Press.
Meyer, R. K. (1976): “Relevant Arithmetic.” Bulletin of the Section of Logic of the Polish Academy of Sciences, v. 5, pp. 133–137.



Meyer, R. K. and Mortensen, C. (1984): “Inconsistent Models for Relevant Arithmetic.” Journal of Symbolic Logic, v. 49, pp. 917–929.
Priest, G. (1997): “Inconsistent Models of Arithmetic Part I: Finite Models.” Journal of Philosophical Logic, v. 26, n. 2, pp. 223–235.
Priest, G. (2000a): “Truth and Contradiction.” The Philosophical Quarterly, v. 50, n. 200, pp. 305–319.
Priest, G. (2000b): “Inconsistent Models of Arithmetic Part II: The General Case.” Journal of Symbolic Logic, v. 65, pp. 1519–1529.
Priest, G. (2001): Worlds Possible and Impossible: An Introduction to Non-Classical Logic. Cambridge: Cambridge University Press.
Priest, G. and Tanaka, K. (2004): “Paraconsistent Logic.” In: Zalta, E. N. (ed.): The Stanford Encyclopedia of Philosophy (Winter 2004 Edition), URL archives/win/2004/entries/logic-paraconsistent (accessed on 25.6.2019).
Russell, B. (1997): “Principia Mathematica: Philosophical Aspects.” In: Russell, B., My Philosophical Development, pp. 57–65. New York: Routledge, reprinted edition.
Tanaka, K., Berto, F. and Paoli, F. (2012) (eds.): Paraconsistency: Logic and Applications. Dordrecht: Springer.

Theo A.F. Kuipers

Stratified Nomic Realism

Abstract: From a realist perspective, the main target of theory-oriented empirical science may be characterized as the truth about the demarcation between nomic, e.g. physical, possibilities and impossibilities, called the ‘nomic truth’. In my “Models, postulates, and generalized nomic truth approximation” (2016) I have presented the ‘basic’ version of generalized nomic truth approximation, starting from ‘two-sided’ theories, consisting of models and postulates. Nomic truth approximation becomes, in this way, a process of revising theories, by revising their models and/or their postulates, as more evidence arises. The basic version of generalized nomic truth approximation is in several respects as simple as possible. Among other things, the basic version does not make a (theory-relative) distinction between an observational and a theoretical level. This raises the question of how, in a stratified set-up, theoretical (nomic) truth approximation relates to observational truth approximation and to increasing empirical success.

Keywords: Theoretically closer to the truth, observationally closer to the truth, truth projection, truthlikeness projection, theoretical realization principle, stratified success theorem, two-sided theories, realism, nomic realism, constructive realism, comparative realism

1 Introduction

In my realist view (Kuipers, 2000), the target of theory-oriented empirical science in general and of nomic truth approximation in particular is to characterize the boundary or demarcation between nomic possibilities and nomic impossibilities, for example the demarcation between physically possible and impossible states or trajectories of a system or between economically possible and impossible markets. In my “Models, postulates, and generalized nomic truth approximation” (2016), I presented the ‘basic’ version of generalized nomic truth approximation, starting from ‘two-sided’ theories. The main claim of that article is that nomic truth approximation can be perfectly achieved by combining two prima facie opposing views on theories:





– The traditional (Popperian) view: theories are (sets of models of) postulates that exclude certain possibilities from being realizable, enabling explanation and prediction.
– The model view: theories are sets of models that claim to (approximately) represent certain realizable possibilities.

Nomic truth approximation, i.e. increasing or otherwise improving truth-content and decreasing or otherwise weakening falsity-content, becomes in this way revising theories by revising their models and/or their postulates, as more evidence arises. My pre-2012 work on truth approximation, notably (Kuipers, 2000), was restricted to maximal theories, that is, theories in which the models are just the structures satisfying the postulates. Hence, the two-sided approach is a far-reaching generalization. However, the basic version of generalized nomic truth approximation is in many respects as simple as possible. The present article deals with the third of (at least) three plausible concretizations of the basic version: a quantitative version, a refined version, and a stratified version. Typical of many philosophers of science is the presupposition of a (theory-relative) distinction between an observational and a theoretical level, even such that Lakatosian hard core postulates, for example, only have bite on the observational level in combination with auxiliary hypotheses: hence, stratification is needed. To present the stratified version we will first clarify, in Section 2, the main target of nomic truth approximation and the nature of (true and false) two-sided theories and of empirical evidence. Then we will present the crucial definitions of the basic unstratified account and the corresponding success theorem, according to which (nomic) truth approximation entails increasing empirical success in the long run. In Section 3, the core of this paper, we will deal in the first place with the stratification of the relevant notions. We will then deal with truth approximation on the theoretical level and present its projection onto the observational level. Combined with the success theorem applied on the observational level, we get the stratified success theorem. 
Next we will discuss the strength of one of the assumptions needed for projection, the so-called Theoretical Realization (TR-) principle, in particular in relation to the (non-)reference of theoretical terms. Finally, we will review the remaining methodological steps to complete the stratified theory of nomic truth approximation. In the final Section, 4, we turn to the stratified version of Lakatos’s distinction between hardcore and auxiliary assumptions. On the ‘postulate (P-)side’ of theories it can easily be made. As a rule, both hardcore and auxiliary postulates



will either be formulated purely in theoretical terms or in both types of terms. On the ‘model (M-)side’ we can make the distinction between prototypical and tentative models. Whereas this leads on the P-side to the well-known Lakatosian playground for revising the auxiliary postulates, it leads on the M-side to a similar playground of revising (the set of) tentative models. Hence, Lakatosian empirical progress on the observational level and truth approximation on the theoretical level can then be achieved when such revisions respect the prototypical models and the hardcore postulates. At the end of this section I would like to characterize my ontological-cum-epistemological position as elaborated in Kuipers (2000 and 2019). There is a human-independent natural world, about which true claims may be (non-compellingly) justified, not merely restricted to what is observable, but also with respect to theoretical terms and statements, provided we take the comparative piecemeal perspective, notably that of ‘more successful’ and ‘closer to the truth’, being a core feature of comparative ‘theory realism’. However, as to the question of whether there is some ideal vocabulary fitting the natural world, and hence leading to a kind of essentialistic realism, my answer is negative; for this reason I speak of ‘constructive’ realism, e.g. in the title of my 2000 book. As I emphasize in the title of this paper, comparative constructive realism is a kind of nomic realism. As already suggested above, in my view theory formation and revision is not directed at truth approximation with respect to the actual world but at the nomic world, that is, the realm of what is nomically, e.g. physically, possible. In sum, comparative constructive nomic realism is the full name of the position I elaborate in this paper with respect to the distinction between an observational and theoretical level. 
In view of this focus, the title of this paper could have been ‘stratified comparative constructive nomic realism’, but I simplified it to ‘stratified nomic realism.’1 As a final terminological point, I would like to mention that in the literature ‘(nomic) truth approximation’ is also called ‘increasing (nomic) truthlikeness’, or ‘increasing (nomic) verisimilitude’, or ‘increasing legisimilitude’, to use Jonathan Cohen’s favorite term ‘legisimilitude’ for ‘nomic verisimilitude’ (Cohen 1987).

1 The core of this paper, Section 3, is based on Chap. 7 of my Nomic Truth Approximation Revisited (Kuipers, 2019). To some extent, it is the two-sided version of Kuipers (2000, ch. 9); it has never been presented before. Section 2 is based on a similar preparatory section for a separate publication of chap. 6: (Kuipers, 2020).



2 Nomic Theories, Nomic Evidence, and the Basic Account

This section provides the background and framework for the two following sections.

2.1 Nomic Theories and Nomic Evidence

In order to characterize the boundary or demarcation between nomic possibilities and nomic impossibilities,2 we need to presuppose a set, U, of conceptual possibilities in a given, bounded, context, e.g. the states or trajectories of a system or a type of systems,3 that is, the set of structures generated by a descriptive vocabulary, V, in which U and subsets of U, e.g. X, Y, M, P, R, S, are characterized (cX will indicate the complement of X, U–X). Let bold T indicate the unknown subset of U of nomic possibilities, not (yet) based on V. Hence cT indicates the subset of nomic impossibilities. In these terms, the target of nomic research is identifying, if possible, T’s boundary in V-terms, called the nomic truth, for reasons that will become clear soon. For this purpose we design theories with claims. A (two-sided) theory is a tuple of subsets of U, defined in V-terms, where M indicates a set of (specified) models and P indicates the set of models of certain Postulates (P = Models (Postulates)). The theory’s claims are:

“M ⊆ T,” the inclusion claim: all members of M are nomic possibilities
“T ⊆ P,” i.e. “cP ⊆ cT,” the exclusion claim: all non-members of P are nomic impossibilities

This combines the two views on theories: representation (or inclusion) and exclusion. The two claims are compatible, making the theory consistent, iff (if and only if) M ⊆ P, that is, assuming the chosen models satisfy the chosen postulates (i.e., are models of these postulates).
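The set-theoretic content of these definitions can be put in a toy computation (the particular sets U, T, M, and P below are hypothetical choices of mine, purely for illustration):

```python
U = set(range(10))        # conceptual possibilities generated by V
T = {0, 1, 2, 3, 4}       # nomic possibilities (unknown to the researcher)

# A two-sided theory: specified models M and the models P of its postulates.
M = {1, 2}
P = {0, 1, 2, 3, 4, 5}

inclusion_claim = M <= T   # "M ⊆ T": all members of M are nomic possibilities
exclusion_claim = T <= P   # "T ⊆ P": all non-members of P are nomic impossibilities
consistent = M <= P        # the two claims are compatible iff M ⊆ P

print(inclusion_claim, exclusion_claim, consistent)   # True True True

# Here both claims hold (M ⊆ T ⊆ P), so the theory is true;
# since M ≠ P it is non-maximal, hence weaker than the true maximal theory.
```

Note that consistency (M ⊆ P) is a purely internal matter, checkable in V-terms, whereas the truth of the two claims depends on the unknown T.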

2 Hence, ‘nomic’ is used here as a generic term. Moreover, the notion of nomic possibility, and its field specific cases, such as physical or biological possibility, function as basic or primitive ones, with corresponding laws, such as physical laws, as derivative notions. 3 Hence, U is not a set of possible worlds in the standard ‘there is only one world’ sense, but concerns so-called ‘small worlds’. They are only mutually exclusive and jointly exhaustive in the same case in the given context, e.g. a system may have several physically possible states, but has only one state at one time. Cf. the space of possible elementary outcomes in probability theory: one experiment has only one elementary outcome.



A theory is maximal if M = P; non-maximal otherwise. My pre-2012 work on truth approximation was restricted to maximal theories; hence we deal now with a far-reaching generalization. The definition of two-sided theories leaves room for two one-sided extremes: ⟨M, U⟩ and ⟨∅, P⟩, i.e. pure inclusion and pure exclusion theories, respectively, also referred to as the M- and P-theory constituting theory ⟨M, P⟩. A theory is true if both claims are true, i.e. M ⊆ T ⊆ P, false otherwise. It is easy to check that there is at most one true maximal theory, called the true theory or simply the (nomic) truth, resulting from the characterization of T in V-terms, if it exists.4 It will be indicated by ⟨T, T⟩, or simply T, i.e. non-bold ‘T’. This T is more specifically the target of (theory-oriented) research. It is also easy to check that this maximal theory