The Scientific Imagination (ISBN 0190212306, 9780190212308)

The imagination, our capacity to entertain thoughts and ideas "in the mind's eye," is indispensable in science.


English, 376 pages, 2020


Table of contents :
Cover
The Scientific Imagination
Copyright
Contents
About the Contributors
Introduction
1. Capturing the Scientific Imagination
2. If Models Were Fictions, Then What Would They Be?
3. Realism About Missing Systems
4. The Fictional Character of Scientific Models
5. Models and Reality
6. Models, Fictions, and Conditionals
7. Imagining Mechanisms with Diagrams
8. Abstraction and Representational Capacity in Computational Structures
9. “Learning by Thinking” in Science and in Everyday Life
10. Is Imagination Constrained Enough for Science?
11. Can Children Benefit from Thought Experiments?
12. Metaphor and Scientific Explanation
13. Imaginative Frames for Scientific Inquiry: Metaphors, Telling Facts, and Just-So Stories
Index


The Scientific Imagination

The Scientific Imagination: Philosophical and Psychological Perspectives
Edited by Arnon Levy and Peter Godfrey-Smith


Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2020

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Control Number: 2019033333
ISBN 978-0-19-021230-8

1 3 5 7 9 8 6 4 2

Printed by Integrated Books International, United States of America

Cover image: M.C. Escher’s “Depth” © 2019 The M.C. Escher Company-The Netherlands. All rights reserved. www.mcescher.com

Contents

About the Contributors (vii)

Introduction, Arnon Levy and Peter Godfrey-Smith (1)
1. Capturing the Scientific Imagination, Fiora Salis and Roman Frigg (17)
2. If Models Were Fictions, Then What Would They Be?, Amie L. Thomasson (51)
3. Realism About Missing Systems, Martin Thomson-Jones (75)
4. The Fictional Character of Scientific Models, Stacie Friend (102)
5. Models and Reality, Stephen Yablo (128)
6. Models, Fictions, and Conditionals, Peter Godfrey-Smith (154)
7. Imagining Mechanisms with Diagrams, Benjamin Sheredos and William Bechtel (178)
8. Abstraction and Representational Capacity in Computational Structures, Michael Weisberg (210)
9. "Learning by Thinking" in Science and in Everyday Life, Tania Lombrozo (230)
10. Is Imagination Constrained Enough for Science?, Deena Skolnick Weisberg (250)
11. Can Children Benefit from Thought Experiments?, Igor Bascandziev and Paul L. Harris (262)
12. Metaphor and Scientific Explanation, Arnon Levy (280)
13. Imaginative Frames for Scientific Inquiry: Metaphors, Telling Facts, and Just-So Stories, Elisabeth Camp (304)

Index (337)

About the Contributors Igor Bascandziev is a Visiting Assistant Professor of Psychology at Reed College. He completed his doctoral training at the Harvard Graduate School of Education and his postdoctoral training in the Department of Psychology at Harvard University. In addition, Igor spent one year as a Mind Brain Behavior Research Associate in the Department of Psychology and the Department of Philosophy at Harvard University, where he worked on questions concerning thought experiments. Some of the central questions of his research program concern the origins and development of concepts. How is it that humans, and only humans, know what the concept “1/​5th” means or what the concept “density” means? In particular, Igor is interested in the cognitive resources and the learning mechanisms that support the acquisition of such conceptual knowledge. William Bechtel is Distinguished Professor of Philosophy and a member of the Center for Circadian Biology at the University of California, San Diego. His research examines explanatory practices in molecular and cell biology, network systems biology, neuroscience, and cognitive science. In particular, he focuses on how biologists invoke mechanisms in their explanations, with an emphasis on the ways in which the conception of mechanisms has evolved over time. For several years he led a research group examining how scientists employ diagrams in their reasoning. He is the co-​author of Discovering Complexity and Connectionism and the Mind and author of Discovering Cell Mechanisms and Mental Mechanisms. Elisabeth Camp is Professor of Philosophy at Rutgers University, New Brunswick. She works in the philosophy of language, philosophy of mind, and aesthetics, focusing on thoughts and utterances that don’t fit standard propositional models. She has written extensively about the cognitive and communicative effects of cognitive perspectives, especially with figurative speech, such as metaphor and sarcasm, and with loaded language, such as slurs. She also works on the theory of concepts, on nonhuman animal cognition, and on non-​sentential representational systems such as maps. She was a Junior Fellow at the Harvard Society of Fellows and an Associate Professor at the University of Pennsylvania before moving to Rutgers. Her papers have appeared in venues including Analytic Philosophy, Midwest Studies in Philosophy, Noûs, Philosophical Perspectives, and Philosophical Studies. Stacie Friend is a Senior Lecturer in Philosophy at Birkbeck College, University of London. Her research focuses on issues at the intersection of aesthetics, language, and mind, especially in relation to our engagement with fictional narratives. She has published on the metaphysics of fictional characters, the nature and cognitive value of

viii  About the Contributors fiction, thought and discourse about the nonexistent, and emotion and imagination in response to literature and film. She has been a Mellon Postdoctoral Fellow at the University of Michigan and a British Academy/​Leverhulme Trust Senior Research Fellow, and has held visiting professorships at the University of Barcelona and the Institut Jean Nicod/​Ecole Normale Supérieure. She is the President of the British Society of Aesthetics, an organizer of the London Aesthetics Forum series of talks at the Institute of Philosophy, and a co-​investigator on the Leverhulme Trust research project “Learning from Fiction:  Philosophical and Psychological Perspectives” (2018–​2021). Roman Frigg is Professor of Philosophy in the Department of Philosophy, Logic and Scientific Method, Director of the Centre for Philosophy of Natural and Social Science (CPNSS), and Co-​Director of the Centre for the Analysis of Time Series (CATS) at the London School of Economics and Political Science. He is the winner of the Friedrich Wilhelm Bessel Research Award of the Alexander von Humboldt Foundation. He is a permanent Visiting Professor in the Munich Centre for Mathematical Philosophy of the Ludwig-​Maximilians-​University Munich, and he has held visiting appointments at the University of Western Ontario, the University of Utrecht, the University of Sydney, and the University of Barcelona. He was associate editor of the British Journal for the Philosophy of Science and a member of the steering committee of the European Philosophy of Science Association. He currently serves on a number of editorial and advisory boards. He holds a PhD in philosophy from the University of London and master’s degrees in both theoretical physics and philosophy from the University of Basel, Switzerland. His research interests lie in general philosophy of science and philosophy of physics, and he has published papers on climate change, quantum mechanics, statistical mechanics, randomness, chaos, complexity, probability, scientific realism, computer simulations, modeling, scientific representation, reductionism, confirmation, and the relation between art and science. Peter Godfrey-​Smith studied at the University of Sydney and the University of California, San Diego. He taught at Stanford, Harvard, the Australian National University, and the CUNY Graduate Center before moving to his current position as Professor of the History and Philosophy of Science at the University of Sydney. His main interests are in the philosophy of biology and the philosophy of mind, though he also works on pragmatism and various other parts of philosophy. He has written five books, including Theory and Reality: An Introduction to the Philosophy of Science (University of Chicago Press, 2003), Darwinian Populations and Natural Selection (Oxford University Press, 2009), and Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness (Farrar, Straus and Giroux, 2016). Paul L. Harris is a developmental psychologist with interests in the development of cognition, emotion, and imagination. For many years he taught at Oxford University, where he was a Professor of Developmental Psychology and a Fellow of St John’s

About the Contributors  ix College. In 2001 he migrated to Harvard, where he holds the Victor S.  Thomas Professorship of Education. He is a Fellow of the British Academy, the Norwegian Academy of Science and Letters, and the American Academy of Arts and Sciences. His book on children’s understanding of emotion, Children and Emotion, appeared in 1989 and his book on play and imagination, The Work of the Imagination, in 2000. He currently studies how young children learn about history, science, and religion on the basis of what trusted informants tell them. His latest book, Trusting What You’re Told: How Children Learn from Others, describing this research, was published by Harvard University Press in 2012. It has received the Eleanor Maccoby Book Award from the American Psychological Association and the Cognitive Development Society Book Award. Arnon Levy is Senior Lecturer in Philosophy at the Hebrew University of Jerusalem. He holds an AM in organismic and evolutionary biology and a PhD in philosophy, both from Harvard University. Levy’s research centers on modeling and explanation, especially in the life sciences. He has also worked on connection between biology (especially evolutionary theory) and moral norms. His papers have been published in a range of philosophical venues, including Nous, the Journal of Philosophy, Philosophy of Science, the British Journal for Philosophy of Science, and Philosophy and Public Affairs. He currently heads the Interuniversity Program in the History and Philosophy of the Life Sciences, sponsored by the Council for Higher Education, and is a member of the Hebrew University’s Center for Logic, Language, and Cognition. Tania Lombrozo is a Professor of Psychology at Princeton University, as well as an Associate of the Department of Philosophy. She received her PhD in psychology from Harvard University in 2006 after receiving a BS in symbolic systems and a BA in philosophy from Stanford University. Dr. Lombrozo’s research aims to address foundational questions about cognition using the empirical tools of cognitive psychology and the conceptual tools of analytic philosophy. Her work focuses on explanation and understanding, social cognition, causal reasoning, learning, and folk epistemology. She is the recipient of numerous early-​career awards, including the Stanton Prize from the Society for Philosophy and Psychology, the Spence Award from the Association for Psychological Science, a CAREER Award from the National Science Foundation, and a James S. McDonnell Foundation Scholar Award in Understanding Human Cognition. Fiora Salis is Associate Lecturer in Philosophy at the University of York and Research Associate at the Centre for Philosophy of Natural and Social Science at the London School of Economics. She has held positions at the University of Lisbon and the London School of Economics, and visiting appointments at the University of London and the University of Geneva. She received a PhD and a master’s degree in cognitive science and language from the University of Barcelona and a master’s degree in philosophy and history of ideas from the University of Turin.

x  About the Contributors Benjamin Sheredos received a joint PhD in philosophy and cognitive science from the University of California, San Diego in 2016. In addition to philosophy of science, he works in the history of philosophy and has developed a novel understanding of Brentano’s and Husserl’s accounts of the origins of intentionality in mental acts. He has also applied his research to create open-​access science education materials, doing part of this work as a postdoc with Professor Susan Golden in UCSD’s Center for Circadian Biology. He is presently a lecturer in UCSD’s Analytical Writing Program, helping many first-​year and first-​generation college students prepare for their studies. Amie L. Thomasson is the Daniel P.  Stone Professor of Intellectual and Moral Philosophy at Dartmouth College. She is the author of Ontology Made Easy (Oxford University Press, 2015), Ordinary Objects (Oxford University Press, 2007), and Fiction and Metaphysics (Cambridge University Press, 1999), and co-​editor (with David W. Smith) of Phenomenology and Philosophy of Mind (Oxford University Press, 2005). Her book Ontology Made Easy was awarded the American Philosophical Association’s 2017 Sanders Book Prize. She has also published more than seventy book chapters and articles on topics in metaphysics, metaontology, fiction, philosophy of mind and phenomenology, the philosophy of art, and social ontology. She has twice held fellowships with the National Endowment for the Humanities. She delivered the 2017 Wedberg Lectures at the University of Stockholm and the 2018 Anderson Lectures at the University of Sydney. Martin Thomson-​Jones is Professor of Philosophy at Oberlin College. Before Oberlin, he taught at Princeton and then at the University of California, Berkeley. His recent research has focused on a cluster of questions about representation in the sciences, including questions about the nature of models and modeling, and about the connections between scientific representation and “ordinary” fiction. He has also worked in the philosophy of physics and in related areas of metaphysics. Deena Skolnick Weisberg is an Assistant Professor in the Department of Psychological and Brain Sciences at Villanova University, where she directs the Scientific Thinking and Representation (STAR) Lab. She is also the co-​director of the Pennsylvania Laboratory for Understanding Science (PLUS). She earned her PhD in psychology from Yale University and received postdoctoral training at Rutgers University and Temple University. Her research interests include scientific thinking and reasoning in children and adults, the development of imaginative cognition, and the roles that the imagination plays in learning. Her work has been published in a variety of journals, including Science and Cognition, and it has been supported by the Templeton Foundation and the National Science Foundation. Michael Weisberg is Professor and Chair of Philosophy at the University of Pennsylvania. He also serves as the Editor in Chief of Biology and Philosophy and Co-​Director of the Penn Laboratory for Understanding Science and the Galápagos Education and Research Alliance. Dr. Weisberg received a BS in chemistry and a BA in philosophy from the University of California, San Diego in 1999, and continued

About the Contributors  xi graduate study in philosophy and evolutionary biology at Stanford University, earning a PhD in philosophy in 2003. His research focuses on methodological issues arising in the life and social sciences, especially the ways that highly idealized models and simulations can be used to understand complex systems. Dr. Weisberg’s research group also aims to develop a comprehensive understanding of public understanding and misconceptions of scientific issues. His group has recently completed the most comprehensive study to date of North Americans’ attitudes about and knowledge of evolutionary biology, and is working with experimental documentary filmmaking techniques to help address common misconceptions. Dr. Weisberg also co-​leads several community science and community conservation initiatives in the Galápagos archipelago. He is the author of Simulation and Similarity: Using Models to Understand the World and Galapagos: Life in Motion, with Walter Perez. Stephen Yablo is the David W. Skinner Professor of Philosophy. He earned a PhD from the University of California, Berkeley in 1986. Yablo has been at MIT since 1998, having taught previously at the University of Michigan, Ann Arbor. He works on metaphysics and the philosophy of mathematics and language. Author of Thoughts (Oxford University Press, 2009), Things (Oxford University Press, 2010), and Aboutness (Princeton University Press, 2014), he gave the Hempel Lectures at Princeton in 2008, the Locke Lectures at Oxford in 2012, and the Whitehead Lectures at Harvard in 2016.

The Scientific Imagination

Introduction
Arnon Levy and Peter Godfrey-Smith

Science is both a creative endeavor and a highly regimented one. It involves surprising, sometimes unthinkably novel ideas, along with meticulous exploration and the careful exclusion of alternatives. At the heart of this productive tension stands a human capacity typically called "the imagination": our ability, indeed our inclination, to think up new ideas, situations, and scenarios and to explore their contents and consequences in the mind's eye. Despite its centrality, the imagination has rarely received systematic attention in philosophy of science. This neglect can be attributed in part to the influence of a well-known distinction between the context of discovery and the context of justification (Reichenbach 1938), and a tendency in positivist and post-positivist philosophy of science to set aside psychological aspects of the scientific process. That situation has now changed, and a growing literature in the philosophy of science is devoted to the role and character of imagining within science. This has been especially visible in the literature on scientific modeling, but the interest now extends more broadly. One goal of this volume is to showcase current thinking about these issues and try to organize it into a coherent research agenda. This introduction will make that agenda explicit, set it in a historical context, and then outline how the chapters of this book fit together.

I.1 Themes

Two sets of issues will be central, though not exhaustive. The first is the role of the imagination in facilitating discovery in science. The second is the role and status of models, and whether and how the practice of modeling can be understood as an employment of the imagination, perhaps analogous to the construction of fictions.

2  The Scientific Imagination Innovators of all kinds—​artists, designers, policymakers—​exercise their imagination in the process of constructing and developing new ideas. But science is distinctive, compared to most other creative endeavors, in having epistemic goals. Science aims to give us knowledge, or something approaching that, of the natural world. Flights of the imagination, even if they provide raw material for the scientific process, must then be subjected to empirical testing if they are to be added to the stock of scientific knowledge. Traditionally, philosophers of science have focused on what happens after the imagination has done its job, perhaps because they assumed that that is where the philosophically significant issues are—​especially questions about evidence—​and perhaps because it was thought that nothing systematic can be said about how the imagination works. As we noted earlier, this attitude was in part due to the influence of a distinction between the context of discovery and the context of justification—​between questions about the generation of new ideas and questions about their validation. A distinction of this kind was central to logical empiricism, and also influential outside that movement. As Karl Popper put it, “The question how it happens that a new idea occurs to a man—​whether it is a musical theme, a dramatic conflict, or a scientific theory—​may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge” ([1934] 2002, 7–​8). In the latter part of the twentieth century, the constraining influence of the discovery/​justification distinction eroded. This was probably due in part to Thomas Kuhn’s critique of these separations in The Structure of Scientific Revolutions (1962), and in part due to the rise of naturalistic approaches to the philosophy of science (Hull 1988; Kitcher 1993; Quine 1969). Like many others, we think there is still a reasonable distinction to draw between logical relationships and empirical facts (psychological, sociological, historical) about the scientific process, but we see the traditional distinction between discovery and justification as largely unhelpful. Once this constraint is set aside, the initial generation, development, and critical scrutiny of scientific ideas can all be studied from a variety of points of view, and we believe that the role of the imagination is substantial in each of these phases. Flights of the imagination are important in conceiving new theoretical ideas, in exploring the explanatory resources of those ideas, and in working out how to bring theoretical ideas into contact with empirical constraints. A second topic we introduce here is very different, and it concerns “the imaginary” as a status—​as a feature of some objects and systems. This role is

Introduction  3 specific to a particular set of theoretical constructs, those seen in modeling (model building), at least of certain kinds. Science often seems to deal with “missing systems,” to use the terminology originated by Martin Thomson-​Jones (2010). Various kinds of scientific theorizing seem to be attempts to describe concrete, richly structured systems, and yet such systems cannot be found in the world around us. Examples include the ideal pendulum of physics, ecologies with just two interacting species, the “worm-​like chain” of the theory of polymers, and economic markets consisting of wholly rational agents. These and other systems are closely studied by scientists, yet they are not empirically accessible, spatiotemporally locatable entities. What are they? What status do they have? One possible answer is that they are imaginary systems—​systems that only exist “in” a scientist’s imagination. A related idea is that these are fictional systems, though they clearly have differences from other familiar kinds of fiction. Indeed, it is not uncommon for modelers to present models in language that is reminiscent of the introduction of a fictional scenario:  “Consider a two-​body system with identical masses orbiting a common barycenter . . . ,” or “Imagine a cylindrical cell with a uniformly insulating membrane . . .” As with fictions and related objects of the imagination, the activity of modeling seems to feature a distinction between internal and external perspectives: there is what is correct “in” or “according to” the model (there are two identical bodies, orbiting a common center of mass; there are two species in an ocean, one predator and one prey), and there is what is correct simpliciter (there are many bodies acting on each other through gravity; there are many thousands of species in any ocean). Similarly, there is what is correct according to Steven Spielberg’s film Lincoln (2012), and there is what actually happened in Washington in 1865. The two may coincide, but they need not. All this suggests at least a partial analogy between models and fictions. This approach to models has antecedents in the work of Nancy Cartwright (1983) and Ronald Giere (1988). Cartwright only briefly sketched an idea of this kind (“A model is a work of fiction. Some properties ascribed to objects in the model will be genuine properties of the objects modelled, but others will be merely properties of convenience” [1983, p. 153]), perhaps seeing it as ancillary to some ideas about natural laws and their contribution to explanation that were especially controversial and took more of her attention. Giere, in contrast, sketched what amounted to a view of this kind while describing

4  The Scientific Imagination it in different terms, and without talking at all about fictions. Giere offered an account of models that distinguished between the sentences and formulas used to specify a model, on one hand, and the model itself as a system that has all and only the properties attributed by the modeler, on the other. Giere called such a system “abstract” rather than “fictional” or “imaginary.” But as several authors noted, what Giere called an “abstract” system looks a lot like an imaginary or fictional system (see Frigg 2010; Godfrey-​Smith 2009; Thomasson, this volume; Thomson-​Jones 2010). Giere, however, was ambivalent about this interpretation, distancing himself from it insofar as he could (see Giere 2009; Godfrey-​Smith, this volume). Looking further back, these ideas have ancestors in the fictionalist philosophy of science of Hans Vaihinger—​his “philosophy of ‘as if ’ ” ([1911] 1924). Vaihinger thought that the use of fictions—​both non-​actual but possible constructs and impossible ones—​is indispensable to human life and cognition, both in scientific contexts and outside. (See Fine 1993 for a detailed treatment of Vaihinger in relation to more recent work.) Vaihinger himself acknowledged Jeremy Bentham and others as precursors. The chapter by Fiora Salis and Roman Frigg in this collection (Chapter 1) shows that leading scientists such as James Clerk Maxwell have often engaged in explicitly fictionalist moves. The idea of a model as something treated as a freestanding construct—​an imagined or abstract system, avowedly distinct from any empirical system while being the target of sustained scientific work by many individuals—​might be largely a twentieth-​century creation. The rise of computer simulation methods after World War II seems to have been important in the creation of “modeling” as a specific scientific strategy and skill. Even if an initial analogy can be drawn between models and fictions, a philosophical account of modeling in these terms faces several difficulties. First, the motivations cited earlier—​apparently “missing systems”—​seem to fit with only some kinds of modeling practices, and perhaps the practices with poorer fit are more scientifically central. In some kinds of highly mathematical modeling, the systems described do not appear to be treated as candidates for concrete existence. Instead they seem to be mathematical objects (Weisberg 2013, ch. 4). An ontology of abstract objects—​of a kind that is familiar, albeit problematic, from the philosophy of mathematics—​seems suited to handling such cases. Further, even when the analogy does seem compelling, merely proposing an analysis of models as fictions does not take us very far. It is not at all clear

Introduction  5 what a fiction is. What makes a text (or a model, for that matter) a piece of fiction? What is a fictional scenario? The literature on fiction includes a number of options, some of which are represented in this volume. The chapters by Stacie Friend, Thomson-​Jones, and Amie L. Thomasson, and to some extent those by Peter Godfrey-​Smith and Elisabeth Camp, engage this issue. An additional question concerning the models-​as-​fictions view is how fictional models might tell us about the actual, non-​fictional world. While writers on fiction often take it that literary fiction can be informative about real-​world matters, the kind of informativeness at issue there is typically of a relatively modest sort. Literature (and art more generally) does not supply the kind of detailed, often quantitative, sometimes highly accurate information about the world that modeling does. Though it seems as if this should be a central concern, in fact this question has received relatively little attention so far (exceptions include Frigg 2010; Levy 2015; and to an extent Toon 2012). Godfrey-​Smith’s contribution to this volume, which argues for a view of model-​based knowledge focused on conditionals, attempts to sketch a positive view in this area. We now want to outline two sets of issues that go beyond these questions about models. The first concerns what the imagination is, and how theories of our faculty of imagination bear on questions in the philosophy of science. The second, closely related, draws connections between philosophical and empirical work about the imagination.

I.2 What Is the Imagination?

An important step on the way to a better understanding of the imagination's role in science is developing a refined understanding of what the imagination is. Chapter 1, by Salis and Frigg, covers this topic in detail. Here we just outline some questions and initial points.

Suppose we describe someone as imagining the Earth to have an extra moon. In ordinary contexts, this carries two potential connotations. It may be a characterization of a type of attitude toward a scenario on which the Earth has two moons, a way of saying that the person thinks of the Earth as if it had two moons. Roughly, this points to a certain disconnect between what is imagined and what is true or believed true. But to speak of someone as exercising the imagination can also suggest that the person's thinking has a certain phenomenology—that the person is "seeing in the mind's eye" an Earth with two moons. Here, the imagination is understood in terms of a perception-like engagement with the content in question, a kind of experience. This distinction is not exclusive: one might be in both states at once.

There are at least two reasons for attending to this distinction. First, the role of the imagination in different scientific contexts may vary depending on which sense of imagination is at issue. Differences of opinion about the force and legitimacy of employing the imagination in the service of science can be traced back to this. For instance, differing views about the cognitive and epistemic powers of thought experiments, and the extent to which they can carry theoretical weight—and especially whether thought experiments play a unique role, irreducible to other epistemic tools—depend at least in part on the significance one attaches to the ability to visualize the proposed gedanken in the mind's eye (Gendler 2004; Norton 2004). More generally, one may view the perception-like aspects of the imagination as key in contexts of initial exploration of hypotheses but as dispensable when it comes to the development and testing of those hypotheses.

The distinction between two senses, or modes, of imagination may also be relevant to telling apart its (epistemic) role in science from its role in other contexts. The perception-like sense of imagination may seem important to the potential for emotional engagement with fiction and in other everyday contexts (fantasy, dream, role-play), but it seems less relevant, indeed typically absent, in the scientific case. Relatedly, noting the two senses may help when we wish to compare and contrast imagining with other mental activities or with practices that involve them—for instance, imagining may well differ from dreaming, fantasizing, and role-playing in terms of the kind of attitude that the agent displays toward whatever is being imagined (or fantasized, or utilized in role-play) but not necessarily in terms of phenomenology. (Salis and Frigg, in Chapter 1, discuss some of these questions at length.)

I.3 Cognitive Science Perspectives

Moving beyond conceptual questions, an important aspect of this volume is the effort to connect philosophical and cognitive scientific perspectives on the imagination. In this respect, the volume is guided by a naturalist perspective, one that treats science and philosophy as partly overlapping in subject matter and methods. Let us highlight several aspects of this.

Introduction  7 One important question for cognitive psychologists studying the imagination is the degree to which imaginative thinking resembles thinking about what is taken to be real. Clearly, part of the point of imagining is to depart from actuality. But if the imagination is to have cognitive utility, then it appears that there must also be some kind of systematic connection between our imaginations and how things stand in the real world. Thus we can ask: How are the contents of one’s imaginings related to one’s knowledge (or beliefs) about the non-​imaginary world? Does the imagination operate under constraints that resemble physical and causal constraints, and if so, which ones? Does our imagination work in different ways when concerned with different kinds of systems—​physical, biological, social? One reason this matters is that it may provide clues about the effectiveness of the imagination by suggesting that the rails on which it travels, so to speak, allow it to track real-​world properties and occurrences. These kinds of questions can be addressed both in children and in adults, and both are explored by Igor Bascandziev and Paul L. Harris (Chapter 11) and by Deena Skolnick Weisberg (Chapter 10). Empirical work in cognitive psychology can also help us better understand the two modes of imagination discussed earlier—​imagination as an attitude versus imagination as a perception-​like capacity. Empirical studies can look at the degree of association between these two aspects of the imagination. They can also look at the relative effectiveness of “seeing in the mind’s eye” compared to other methods of reasoning across different contexts. Cognitive psychology also has the potential to inform us about the role of imagination in scientific creativity, exploration, and discovery. What conditions facilitate creativity of the relevant sort? The connection between imagining and various other cognitive-​epistemic capacities is another interesting subject in this context. Does the imagination afford us special avenues of learning, and does such learning advance explanation, understanding, or other achievements that have been of central concern to philosophers of science? Does imagining some scenario improve our ability to generalize across different epistemic contexts, or to unify our knowledge? More generally, what kind of cognitive benefits can be reaped from engaging something in the imagination? The chapter by Tania Lombrozo addresses some of these issues. In other work, she has sought to draw broader philosophical lessons from these studies (Lombrozo 2011; Walker et al. 2014; Wilkenfeld et al. 2016).


I.4 Broader Philosophical Questions

While this volume is devoted to the role of the imagination in science, clearly its focal topics are relevant to other parts of philosophy. This holds both within the philosophy of science and in other areas such as metaphysics and the philosophy of language. Let us briefly note a few of these connections.

To begin with, there are topics within the philosophy of science that are only indirectly linked to the imagination and yet may be illuminated by the discussions in this volume. For instance, philosophers of science have been preoccupied with the nature of scientific representation—with views differing on what, if anything, is distinctive about the manner in which theories and models in science represent (Frigg and Nguyen 2016). A better understanding of how we represent things in our imagination, and related categories such as fictional representation, should benefit these discussions. Another issue in this vein can be illustrated by Michael Weisberg's contribution to this volume (Chapter 8), in which he discusses computer-based modeling, including the manner in which causal processes are represented and processed in computational simulation. In turn, a discussion of simulation may reveal illuminating similarities and contrasts with other scientific categories, such as experimentation and confirmation (Currie and Levy 2018; Winsberg 2010).

Outside of philosophy of science, perhaps the most obvious connections are to the philosophy of mind and language. The imagination and associated topics like fiction and metaphor offer a distinct window onto questions of meaning, reference, content, and communication. Thinking about fictional discourse can inform our understanding of the connection between intention and meaning, the role and character of interpretation, the problem of reference, and related issues. Such mutual connections hold irrespective of the scientific context, of course. But when we attend to the case of science we find specific kinds of imaginary activity with potentially distinctive consequences. Moreover, as noted previously, the imagination seems to play a more robust epistemic role in science, relative to other areas. The question of whether and how we can glean knowledge and understanding from non-veridical descriptions, including models and metaphors, takes on a special importance in the context of science, especially when one considers the importance and widespread nature of idealization. This topic is explored in contributions by Stephen Yablo and Arnon Levy (Chapters 5 and 12, respectively). Camp's contribution (Chapter 13) shows how one can subsume models and metaphors under a general view, thus illuminating the role of such devices both within and outside the scientific context.

Another notable area of crossover is metaphysics. One point of contact has already been discussed: the ontology of "missing systems" and the relationship to discussions in ontology that target other cases of not-straightforwardly-material things, such as mathematics. There are other connections to metaphysics, too: for instance, Godfrey-Smith's contribution draws on an approach to counterfactuals in order to offer an account of models and model-based knowledge. A successful treatment of that case might inform our handling of counterfactuals in other domains.

Finally, the issues explored in this volume exhibit various links with questions in aesthetics—perhaps first and foremost the nature of fiction and our engagement with it. Several authors, including some contributors to this volume, have adapted ideas from the philosophy of fiction to the case of models, especially the pretense-based approach of Kendall Walton (Walton 1990; see Frigg 2010; Toon 2010). Two chapters in this volume—Thomasson's (Chapter 2) and Thomson-Jones's (Chapter 3)—take a different tack, applying the so-called abstract artifact approach developed by Thomasson. It is then possible to look back to the philosophy of fiction and ask how approaches developed there fare, given lessons from the case of models and the scientific imagination. One may go as far as seeing the case of models as identical with, and in some ways paradigmatic of, the general category of fictional representation (Toon 2010). Alternatively, one may argue that the imagination's roles in science and in art differ in significant ways (Levy, forthcoming).

I.5 Overview of Chapters

The authors whose contributions appear in this volume come from different areas of philosophy and psychology. Together they cover much of the thematic landscape described above. Their work represents the state of the art in discussions of the imagination within philosophy of science, philosophy of mind, metaphysics, and cognitive and developmental psychology. We view their contributions, taken together, as painting a rich picture of the role and nature of the imagination.

Fiora Salis and Roman Frigg's "Capturing the Scientific Imagination" (Chapter 1), briefly discussed earlier in this introduction, aims to taxonomize and clarify different varieties of the scientific imagination, and on that basis it argues for a view about the kind of imagination at work in modeling and thought experimentation. Salis and Frigg distinguish imagistic from non-imagistic notions of imagination (a distinction discussed earlier) and objectual imagination (imagining an object) from propositional imagination (imagining that a certain proposition is the case). They then define a common core of propositional imagining, including freedom (from truth), quarantining, and mirroring. Different imaginative activities are seen as employing the common core in different ways and for different epistemic ends. In this fashion, a rich typology of supposition, counterfactual thinking, dreaming, and make-believe is offered. Salis and Frigg then argue that imagistic imagination is neither required nor sufficient for modeling and thought experimentation, and they suggest in a tentative fashion that these practices should be understood as forms of make-believe.

Amie L. Thomasson's "If Models Were Fictions, Then What Would They Be?" (Chapter 2) looks at the ontological status of models from a general metaphysical viewpoint, arguing that models can be understood via the "abstract artifacts" framework Thomasson develops in earlier work. She argues that existing work applying the anti-realist pretense approach cannot do justice to "external" (i.e., critical and historical) references to models qua fictions. And she suggests that absent a realist element, the fictionalist approach to models cannot accommodate the fact that models represent the world and inform us about it. There are different ways, Thomasson suggests, for the artifact approach to accommodate the practice of modeling. Finally, she aims to defuse some of the ontological qualms involved in accepting a realist view of models/fictions.

Martin Thomson-Jones's "Realism About Missing Systems" (Chapter 3) returns to the ontology of modeling, arguing for a realist version of fictionalism, not unlike the one suggested by Thomasson in Chapter 2. He begins by making a detailed argument for a strong analogy between so-called missing systems and fictions. The analogies include, according to Thomson-Jones, central aspects of the ontology, epistemology, and semantics of both practices, as well as significant similarities in terms of associated activities. He then motivates a realist approach to models as fictions and argues that the abstract artifacts approach is the best realist approach currently available. In many respects, this chapter is complementary to Thomasson's. While she argues against anti-realist views and for the general tenability of an abstract artifacts account, Thomson-Jones focuses on laying out a positive case for the artifactualist viewpoint, and addresses the particular case of modeling in

Introduction  11 detail. He considers different ways of applying the approach, explores the potential role of pretense within a realist-​artifactualist approach, and proposes answers to a number of objections. Stacie Friend’s “The Fictional Character of Scientific Models” (Chapter 4) argues for what, in some ways, is the polar opposite of the views held by Thomasson and Thomson-​Jones. She suggests that the ontological status of models-​as-​fictions is uninteresting from the point of view of philosophy of science, as the most significant issue pertaining to models—​how they serve the epistemic ends of science—​is unaffected by one’s ontological stance. In particular, Friend argues that however one views the ontological status of models, one should think of model development and analysis as a matter of figuring out what to imagine and how. Moreover, she suggests that one’s view of model-​world comparisons is unaffected by one’s ontological stance. Thus, according to Friend, although there are interesting philosophical questions about fictional models, they are all epistemological rather than ontological—​ they pertain to how models manage to say something about the world, how we assess the truth of what they say, and so on. It is in addressing these questions, she suggests, that the real work lies. Stephen Yablo’s “Models and Reality” (Chapter 5) can be seen as taking up one of the challenges delineated by Friend—​namely, “the interpretation stage: converting findings, or more generally claims, about the model into claims about the target system.” Yablo proceeds by drawing an analogy—​ perhaps more than an analogy—​with the application of mathematics to the natural world, an application that, he reminds us, ought to be explicable whether one is a realist about mathematical entities or not. In doing so he applies an approach to content developed in his 2014 book Aboutness. In particular, he adapts the notion of “partial truth” to the context of models (see also Levy 2015). Very roughly, a statement is partly true if it is true with respect to part of what it is about. Yablo develops this notion in terms of partitions of possible worlds, showing how it allows us to glean truths from statements about a model that are, by every light, at least partly false. If successful, this framework allows the fictionalist to give a story about how modeling provides information about the world. Peter Godfrey-​Smith’s “Models, Fictions, and Conditionals” (Chapter 6) approaches the nature of modeling in a way guided by the need to give an account of how this kind of work can yield knowledge of the real, non-​ imaginary world. He criticizes several other approaches, including the abstract artifact approach discussed in other chapters of this volume, because

12  The Scientific Imagination they do not help with problems of this kind. He suggests that the imaginary forms a kind of folk-​ontological category. Imaginary objects are not mere possibilities, are not abstract objects, and are not Thomasson-​like abstract artifacts, either. He approaches the utility of modeling by way of the utility of conditionals. Model-​based science can be seen as furnishing the scientist with counterfactual conditionals. Those conditionals raise their own philosophical problems, but charting a road from fictional model through counterfactual conditional to material conditional yields a promising account, according to Godfrey-​Smith, of how we can learn from models. Benjamin Sheredos and William Bechtel’s “Imagining Mechanisms with Diagrams” (Chapter  7) discusses the role of the imagination in mechanistic science, and in particular its contribution to the discovery of potential mechanistic models—​that is, models that would provide a “how possibly” mechanism for a phenomenon of interest. They highlight four different features:  visualization, which often greatly enhances the efficiency of model development; creativity, which involves going further than the existing evidence and information about the target; fictivity, by which they mean a weak (or altogether absent) commitment to a truthful depiction of the target; and the presence of constrained flexibility, allowing for the development of new hypotheses while not straying too far from empirical plausibility. Thus, Sheredos and Bechtel employ a relatively narrow notion of the imagination, which they utilize to shed light on a specific (albeit important) scientific activity. Sheredos and Bechtel’s chapter is also notable for bringing the literature on the imagination and fiction into contact with the extensive body of work on mechanisms in contemporary philosophy of science. Michael Weisberg’s “Abstraction and Representational Capacity in Computational Structures” (Chapter 8) discusses, as the title suggests, representation in computational models, which Weisberg distinguishes from other sorts of mathematical models. He sets out by stating his opposition to the fictionalist approach (developed in detail in Weisberg 2013, ch. 4) but also acknowledges that advocates of that approach have raised significant concerns about his own, non-​fictionalist view. One of these concerns pertains to whether and how computational models can represent causal structure in real-​world targets. Weisberg gives an overview of the different components of a computational structure—​inputs, outputs, the algorithm, and so on—​ and discusses associated notions, giving special attention to the notion of abstraction, which both is important in thinking about computation and differs in significant respects from the notion of abstraction operative in other

Introduction  13 contexts. Together, these resources allow him to give an account of the scope and limits of the representational powers of computational models, enhancing the plausibility of his own account and indirectly strengthening the critique of fictionalist approaches. The next three chapters are by cognitive psychologists. Tania Lombrozo’s “ ‘Learning by Thinking’ in Science and in Everyday Life” (Chapter  9) looks at a category of learning that has been largely neglected by psychologists: learning by imaginative thinking. Lombrozo discusses results showing that when subjects are prompted to explain certain outcomes (or problem solutions), even if only to themselves, this results in an enhanced ability to generalize from cases and make predictions. The reason, she suggests, is that the requirement to think about potential explanations recruits constraints that support future prediction and generalization. She further argues that, epistemically speaking, these forms of learning by thinking are essential and not replaceable (for instance, because they support induction from specimens), and this may hold even if the objects of thought are not accurate representations of the targets of one’s thinking (for example, they are simplified or idealized). Overall, Lombrozo presents a view in which the capacity to imagine contributes to knowledge acquisition via its function and interaction with other cognitive capacities, irrespective of its local, specific-​target-​ related epistemic merits. Deena Skolnick Weisberg’s contribution is entitled “Is Imagination Constrained Enough for Science?” (Chapter  10). She answers the titular question in the affirmative. Weisberg reviews experimental results showing that in constructing imaginary scenarios, children and adults hew quite closely to principles gleaned from empirical reality, especially regarding causal interactions. She discusses results suggesting that while there are, as many suspect, biases involved in imagining—​things and scenarios that we find harder to imagine—​these can nevertheless be corrected for, especially within a formal institutional setting such as scientific education. Using the imagination involves risks, too, according to Weisberg, but in the other direction:  we somewhat too readily tend to export (i.e., to apply to reality) information gleaned in the imagination. While there are ways of placing a check on these tendencies, such inclinations represent a weak link in the process of learning by imagination. In Weisberg’s overall picture, the imagination is constrained “on the inside,” but there is often a danger of uncritical exportation of the imaginary into our thinking about the real world.

14  The Scientific Imagination Igor Bascandziev and Paul L.  Harris contribute “Can Children Benefit from Thought Experiments?” (Chapter 11). Like Weisberg, they argue for a positive answer to the question posed in their title. Bascandziev and Harris set out by describing the limitations of reasoning from empirical findings, noting the ways in which observations tend to be ignored, both by children and by adults, when they clash with a strongly held theory (e.g., an impetus-​ like theory of the motion of physical objects). They then look at the role and power of imaginative thinking—​ thought experimentation in rudimentary form—​that children engage in and the ways in which it allows them to overcome biases and other barriers to learning from empirical information. These abilities are associated with higher scores on tests of executive function. Moreover, Bascandziev and Harris discuss results showing that this kind of learning produces long-​term change in the children’s understanding of the types of scenarios involved. Overall, then, this chapter dovetails with Lombrozo’s contribution in suggesting that the psychological fruits of imaginative learning are real and indeed run deep and far back in development. Arnon Levy’s “Metaphor and Scientific Explanation” (Chapter  12) argues for a view of explanation in which imaginary devices, and in particular metaphors, can serve a genuine explanatory role in science. Levy begins by outlining an account of understanding that sees it as a capacity to make successful counterfactual inferences on the basis of a representation of the understanding’s target. He then argues that this notion of understanding can serve as the basis of a view of explanation: explanations are to be judged by their contribution to understanding. If this view is accepted, suggests Levy, then explanations need not be true or detailed with respect to the explanans, so long as they facilitate understanding. Metaphors that facilitate understanding can, therefore, carry explanatory weight. Levy illustrates this account with the example of information in biology, drawing on his previous work on the topic. He goes on to discuss the role of explanatory considerations in theory choice and inductive inference in light of this picture of explanation and understanding. The final essay in the volume is Elisabeth Camp’s “Imaginative Frames for Scientific Inquiry:  Metaphors, Telling Facts, and Just-​ So Stories” (Chapter 13), which argues for a different take on models, one that retains the focus on the imagination but embeds models, metaphors and fictions in the broader family of frames. A frame is a conceptual-​communicative device that imposes a perspective on a subject matter, highlighting some features while suppressing others, facilitating specific ways of integrating new

Introduction  15 information, and prompting novel angles on existing knowledge. Camp suggests that metaphors (in general) are best seen as framing devices; that this way of viewing metaphors is substantially different from treating them as instances of fiction; and that metaphors and other frames serve important epistemic roles, including in scientific inquiry.

References

Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.
Currie, A., and Levy, A. (2018). "Why Experiments Matter." Inquiry. https://doi.org/10.1080/0020174X.2018.1533883
Fine, A. (1993). "Fictionalism." Midwest Studies in Philosophy 18, no. 1: 1–18.
Frigg, R. (2010). "Models and Fiction." Synthese 172, no. 2: 251–268.
Frigg, R., and Nguyen, J. (2016). "The Fictions View of Models Reloaded." Monist 99, no. 3: 225–242.
Gendler, T. S. (2004). "Thought Experiments Rethought—and Reperceived." Philosophy of Science 71, no. 5: 1152–1163.
Giere, R. N. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Giere, R. N. (2009). "Why Scientific Models Should Not Be Regarded as Works of Fiction." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by Mauricio Suárez, 248–258. New York: Routledge.
Godfrey-Smith, P. (2009). "Models and Fictions in Science." Philosophical Studies 143, no. 1: 101–116.
Hull, D. (1988). Science as a Process. Chicago: University of Chicago Press.
Kitcher, P. (1993). The Advancement of Science. New York: Oxford University Press.
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172, no. 3: 781–798.
Levy, A. (forthcoming). "Models and Fictions: Not So Similar After All?"
Lombrozo, T. (2011). "The Instrumental Value of Explanations." Philosophy Compass 6: 539–551.
Norton, J. D. (2004). "On Thought Experiments: Is There More to the Argument?" Philosophy of Science 71, no. 5: 1139–1151.
Popper, K. ([1934] 2002). The Logic of Scientific Discovery. London: Routledge.
Quine, W. V. O. (1969). "Epistemology Naturalized." In Ontological Relativity and Other Essays, 69–90. New York: Columbia University Press.
Reichenbach, H. (1938). Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge. Chicago: University of Chicago Press.
Thomson-Jones, M. (2010). "Missing Systems and the Face Value Practice." Synthese 172, no. 2: 283–299.
Toon, A. (2012). Models as Make-Believe. London: Palgrave Macmillan.
Vaihinger, H. ([1911] 1924). The Philosophy of "As If": A System of the Theoretical, Practical and Religious Fictions of Mankind. Translated by C. K. Ogden. London: Routledge and Kegan Paul.
Walker, C. M., Lombrozo, T., Legare, C., and Gopnik, A. (2014). "Explaining Prompts Children to Privilege Inductively Rich Properties." Cognition 133: 343–357.
Walton, K. (1990). Mimesis as Make-Believe. Cambridge, MA: Harvard University Press.
Weisberg, M. (2013). Simulation and Similarity. New York: Oxford University Press.
Wilkenfeld, D. A., Plunkett, D., and Lombrozo, T. (2016). "Depth and Deference: When and Why We Attribute Understanding." Philosophical Studies 173, no. 2: 373–393.
Winsberg, E. (2010). Science in the Age of Computer Simulation. Chicago: University of Chicago Press.

1 Capturing the Scientific Imagination Fiora Salis and Roman Frigg

1.1  Introduction Maxwell, when investigating lines of force, sets himself the task of studying “the motion of an imaginary fluid,” which he conceives as “merely a collection of imaginary properties” (1965, 159–​160). Einstein explains the principle of equivalence by inviting the reader to first “imagine a large portion of empty space” and then “imagine a spacious chest resembling a room with an observer inside” (2005, 86). Maynard Smith asks us to “imagine a population of replicating RNA molecules” (quoted in Odenbaugh 2015, 284). In his study of the growth of an embryo Turing notes that “the matter of the organism is imagined as continuously distributed” (quoted in Levy 2015, 782). And in his investigation into the nature of contractual relations Edgeworth proposes to “imagine a simple case—​Robinson Crusoe contracting with Friday” (quoted in Morgan 2004, 756). These are examples of leading scientists appealing to the imagination. They do so talking about either a scientific model (SM) or a thought experiment (TE). So the imagination is seen as crucial to the performance of both. Philosophers concur. Brown presents one of Newton’s TEs as asking the reader to “imagine the universe completely empty” (2004, 1127). Laymon paraphrases TEs as “imagined but truly possible experiments” (1991, 192). And Gendler describes them as “imaginary scenarios” (2004, 1154). Weisberg reports that Volterra in his model “imagined a simple biological system” (2007, 208) and accepts that “modelers often speak about their work as if they were imagining systems” (2013, 48). Godfrey-​Smith suggests we “take at face value the fact that modelers often take themselves to be describing imaginary biological populations, imaginary neural networks, or imaginary economies” (2006, 735), and he sees modeling as involving an “act of imagination” (2009, 47). Harré sees models as things that are “imagined” (1988, 121). Sugden regards models as “imaginary” worlds (2009, 5). Fiora Salis and Roman Frigg, Capturing the Scientific Imagination In: The Scientific Imagination. Edited by: Arnon Levy and Peter Godfrey-Smith, Oxford University Press (2020). © Oxford University Press. DOI: 10.1093/oso/9780190212308.003.0002

18  The Scientific Imagination Cartwright understands modeling as offering “descriptions of imaginary situations or systems” (2010, 22). Frigg (2010), Levy (2015), and Toon (2012) present analyses that place acts of imagination at the heart of the practice of scientific modeling, and Levy submits that “the imagination has a special cognitive role in modeling” (2015, 783). This enthusiasm notwithstanding, philosophers of science typically do not offer explicit analyses of imagination. It is, however, common to associate imagination with mental imagery.1 This is not surprising given that the word “imagination” derives from the Latin imago, which means “image,” “portrait,” “icon,” and “sculpture.” In this vein Levy observes that “imagining typically involves having a visual or other sensory-​like mental state—​a ‘seeing in the mind’s eye’ ” (2015, 785). Brown regards performing a TE as “a case of seeing with the mind’s eye” (2004, 1132), he characterizes TEs as being “visualizable” (1991, 1), and he regards being “picturable” as a “hallmark of any thought experiment” (1991, 17). Gendler emphasizes that “the presence of a mental image may play a crucial cognitive role” in a TE (2004, 1154). Likewise, Harré sees the “imagining of models” as providing scientists with a “picture of mechanisms of nature” (1970, 34–​35). And Weisberg attributes to Godfrey-​Smith the view that scientists form a “mental picture” of the “model system” (2013, 51). Those who hoped that this was going to be a rare occasion of philosophers agreeing with each other have gotten their hopes up too quickly. The veneer of harmony unravels as soon as we probe the nature of imagination and the role it plays in TEs and SMs. While some authors, most notably Gendler (2004) and Nersessian (1992, 1999, 2007), affirm the imagistic character of the imagination and see it as an asset in explaining how TEs and SMs work, most scientists and philosophers draw back as soon as the imagination is linked to mental imagery. Norton thinks that TEs “are merely picturesque argumentation” (2004, 1142). And Weisberg dismisses a view of SMs based on imagination as “folk ontology” (2013, ch. 3). Talking about the necessary statistical treatment of atomic phenomena within quantum mechanics, Bohr recognized “the absolute limitation of the applicability of visualizable conceptions of atomic phenomena” ([1934] 1961, 114). And Dirac famously proclaimed that “the object of physical science is not the provision of pictures” (1958, 10). 1 An exception is Odenbaugh (2015, 287), who explicitly recognizes a propositional variety of imagination.

Capturing the Scientific Imagination  19 We now find ourselves in a paradoxical situation. On the one hand, the imagination is widely seen as having an important role to play both in TEs and SMs. On the other hand, the imagination is dismissed because of its allegedly imagistic character. But one cannot both dismiss the imagination as ill-​suited for scientific reasoning and see it as being crucial to TEs and SMs. The way out of this predicament, we submit, is an investigation into the character of the imagination. Fortunately, such an investigation does not have to start from zero. There is a rich and intricate literature in aesthetics and philosophy of mind about the notion of imagination. But there has been little, if any, contact between that body of literature and debates in the philosophy of science. We therefore review this literature in a way that makes it relevant to TEs and SMs, and we propose a novel taxonomy of varieties of imagination that helps philosophers of science to orient themselves in this jungle of positions. One of the core messages emerging from this review is that the association of imagination with mental imagery has been too quick: there are propositional kinds of imagination that aren’t in any way tied to mental images. This indicates the way for a resolution of the paradox mentioned previously: we argue that SMs and TEs involve a specific kind of propositional imagination, namely, make-​believe. We begin the chapter by reflecting on the relationship between TEs and SMs. So far we have mentioned TEs and SMs in one breath, thereby suggesting that they can be treated side by side. First we argue that TEs and SMs indeed involve the same kind of imagination. Then we present the main arguments for and against the involvement of the imagination in TEs and SMs:  Norton’s on the con side, and Gendler’s and Nersessian’s on the pro side. Following that, we review the positions on the imagination in aesthetics and philosophy of mind and propose a classification of these positions. We analyze the arguments previously introduced with the instruments subsequently developed. We argue that imagistic imagination is unnecessary for the performance of TEs and use of SMs, and that a propositional kind of imagination is necessary. We examine what the different kinds of propositional imagination introduced earlier offer for an analysis of SMs and TEs, and we tentatively suggest that this imaginative activity is best analyzed in terms of make-​believe. Then we briefly summarize our results and draw some general conclusions. Before delving into the discussion, a number of caveats are necessary. The term “imagination” has many meanings. To avoid getting started on

20  The Scientific Imagination the wrong foot, let us set aside those meanings that are not relevant to our questions. First, “imagination” is often used as a synonym for “creativity.” Something is said to be “imaginative” if it is new, original, groundbreaking, or innovative. Needless to say, great scientific achievements are imaginative in this sense. Yet not all imaginative activities involve creativity, and not all creative activities involve imagination. A student who studies field lines, the principle of equivalence, or the nature of contracts has to engage in imaginative activities, but these aren’t creative because she is merely asked to retrace the steps outlined by Maxwell, Einstein, or Edgeworth. The creative imagination emerges when our imaginative abilities intersect with creativity to produce a novel output of any kind.2 The imaginary acts we are interested in can be creative but need not be. Second, “imagination” is often used to refer to false beliefs and misperceptions. This popular figure of speech is of no systematic interest because there is no specific ability to falsely believe or misperceive something. Rather, there is an ability to believe and an ability to perceive, both of which can go wrong.3 There are two corollaries to this point. First, imagination can be about real objects. We can imagine of Putin that he is a gambler to explore certain underlying features of his personality. In this case Putin is the focus of imaginative activities that are directed at improving our understanding of him. Second, imagination is independent of truth and belief. As Walton points out, “imagining something is entirely compatible with knowing it to be true” (1990, 13). So, for example, when reading Tolstoy’s War and Peace, we imagine that Napoleon was ruined by his great blunders, which is something that we also know to be true. Finally, a terminological comment. As it is common in the literature on imagination, we take “imagination” to refer to the mental attitude of the person who imagines something; we use the noun “imagining” for an act of imagination and “imaginings” as the plural for several such acts.

2 See Gaut 2003, 2010 and the contributions in Gaut and Livingston 2003 for current discussions on the relation between creativity and imagination. 3 See Currie and Ravenscroft 2002, 9, for a similar remark on imagination and false belief.

1.2  Models and Thought Experiments

Is there a force needed to keep an object moving with constant velocity? In a classic TE Galileo argued that the answer to this question was no (Sorensen

Capturing the Scientific Imagination  21 1992, 8–​9). Galileo asked us to imagine a U-​shaped cavity, imagine we put a ball on the edge of one side, and imagine we let the ball roll down into the cavity. What is the trajectory of the ball? Galileo argued that it would have to reach the same height on the other side irrespective of the shape of the cavity. This is Galileo’s law of equal heights. Of course Galileo realized that the ball’s track was not perfectly smooth and that the ball faced air resistance, which is why the ball in an actual experiment does not reach equal height on the other side. So Galileo suggested considering an idealized situation in which there is neither friction nor air resistance and argued that the law was valid in that scenario. Galileo then asks us to continue the TE and derive the law of inertia from the law of equal heights. The law of inertia says that a body either stays at rest or moves at constant velocity if no force acts on it. Now imagine a situation in which the U-​shaped cavity is bent downward on the right side so that the cavity becomes flatter on that side while the height is still the same on both sides. According to the law of equal heights, a ball starting on top of the left side still eventually reaches the top of the right side, no matter how much you bend the cavity. We can now imagine a series of variations of this thought experiment in which the right side of the cavity is bent ever more—​and in each of them the ball reaches the top of the right side. If we continue this series indefinitely, we reach a scenario in which the right side is bent down all the way so that it becomes horizontal. The law of equal heights still applies, and so the ball should eventually reach the height at which it started on the left. However, since the right side of the cavity is horizontal now, the ball can’t move upward, and so it keeps moving forever. From this Galileo drew the conclusion that no force is needed to keep a ball moving with constant velocity, which is the law of inertia. Now consider a variation of this situation. Our protagonist is Malileo, a presumed mechanical philosopher of the nineteenth century. Malileo masters Lagrangean mechanics and can solve even difficult equations. He doesn’t trust any result that isn’t proven mathematically, and so he’s suspicious of Galileo’s informal reasoning. To get a mathematically rigorous justification of the law of inertia he assumes, with Galileo, that the cavity is frictionless and that there is no air resistance. He assumes that the ball is a perfect sphere with a homogenous mass distribution and with a radius that is much smaller than the width of the cavity. He further assumes that the only force acting on the ball is linearized gravity (that is, he screens off electromagnetic forces, etc.). He then conceptualizes the cavity as a

22  The Scientific Imagination conjunction of two half-​segments of a parabola that meet at the vertex. The right segment’s equation contains a parameter a controlling the inclination of the half-​parabola (the smaller a is, the flatter the parabola). He then uses the machinery of Lagrangean mechanics to write down the equation of motion of a ball moving under the constraint of the cavity. He solves the equation. The solution still depends on the parameter a. He then takes the limit for a → 0 and finds that in the limit the trajectory tends toward constant linear motion. This is formal proof of Galileo’s result. Malileo constructed a model of the cavity and the ball’s motion. In fact, when telling Malileo’s story it was difficult to avoid the word “model.” It would have been more natural to say that he models the ball as an ideal sphere with homogenous mass distribution, that he models the cavity as a parabola, and so on. His construct is a bona fide SM, similar to other SMs such as the logistic growth model of a population or the ideal chain model of a polymer. This observation matters because the kind of imaginings that Malileo entertains are the same as Galileo’s. Both imagine cavities and the motion of balls. For sure, Malileo also adds a mathematical description and uses a background theory (Lagrangean mechanics). But this does not detract from the fact that he imagines the same sort of objects in the same way as Galileo, who doesn’t have the additional formal apparatus. The conclusion we draw from this little scientific fairy tale is that insofar as imaginings are involved when a scientist performs a TE, these imaginings are of the same kind as the ones she has when working with a SM (and vice versa). Of course, the exact mental content is typically different. Malileo’s mathematical expressions are not on Galileo’s mind, but when Galileo and Malileo think about a cavity that can be flattened on one side and about a ball moving in it, they engage in the same kind of imaginative activity. This observation generalizes: TEs and SMs involve the same kind of imagination. The imaginative activities involved in SMs and TEs can be analyzed together.4 Views gesturing in the same direction have been voiced before. Harré submits that a “model is imagined and its behavior studied in a gedanken-​experiment” (1988, 121–​122), thereby putting SMs and TEs in the same category. Cartwright urges that models “are often experiments in thought” (2010, 19). Del Re, commenting on Galileo, observes that in 4 We here set aside reconstructions of SMs in terms of set theoretical structures (for a discussion of this view, see Frigg 2010). We agree with Weisberg (2013) that even those who think that the model-​ world relation is ultimately purely structural will have to admit fictional objects such as perfect spheres and unbounded populations at least as “folk ontology” into their understanding of models.

Gedankenexperimente we explore objects of an ideal world, and adds that “‘physical models’ applies to the objects of which this ideal world is made” (Del Re 2000, 6).5
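Before moving on, it may help to make Malileo's construction concrete. What follows is a minimal sketch of the kind of calculation described above, not the chapter's own derivation: the explicit parabola and the symbols m and g are ours, chosen for illustration, while a is the chapter's inclination parameter, and we read "linearized gravity" as a uniform field of strength g. Treat the ball as a point mass m (its radius being much smaller than the cavity) and let the right half-segment of the cavity be the curve y = ax² for x ≥ 0. The Lagrangian of the constrained motion is then

\[
L(x,\dot{x}) = \tfrac{1}{2}\, m \left(1 + 4 a^{2} x^{2}\right) \dot{x}^{2} - m g a x^{2},
\]

and the Euler–Lagrange equation gives the equation of motion

\[
m \left(1 + 4 a^{2} x^{2}\right) \ddot{x} + 4 m a^{2} x \dot{x}^{2} + 2 m g a x = 0.
\]

In the limit a → 0 the cavity flattens and the equation reduces to

\[
\ddot{x} = 0,
\]

whose solutions are motions at constant velocity: a formal counterpart of the reasoning that leads Galileo, informally, to the law of inertia.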

1.3  Exorcism and Veneration As we have seen, there are diametrically opposed positions on the nature and role of the imagination in philosophy of science. In this section we review in some detail the most explicit pronouncements on either side of the divide. Norton advances a view of TEs as devoid of imagination. He characterizes TEs as picturesque arguments that “(i) posit hypothetical or counterfactual states of affairs, and (ii) invoke particulars irrelevant to the generality of the conclusion” (1991, 129). Condition (i) gives TEs their thought-​like character, otherwise they would be mere descriptions of real states of affairs. Condition (ii) gives them their experiment-​like character. The claim that TEs are arguments is motivated by Norton’s empiricism, the view that knowledge of the physical world derives from experience. Because TEs do not involve any new experimental data, “they can only reorganize or generalize what we already know from the physical world. . . . The outcome is reliable only insofar as our assumptions are true and the inference valid” (1996, 335). Norton introduces two related theses. According to the reconstruction thesis (ReT), “the analysis and appraisal of a thought experiment will involve reconstructing it explicitly as an argument” (1991, 131). According to the elimination thesis (ET), “thought experiments are arguments which contain particulars which are irrelevant to the generality of the conclusion” (1991, 131), but “these elements are always eliminable without compromising our ability to arrive at the conclusion,” and therefore “any thought experiment can be replaced by an argument without the character of a thought experiment” (1996, 336). Norton’s ET can be interpreted in two ways. According to a weak interpretation, ET is a thesis about the nature of the conclusion of a TE, which is a general proposition that does not involve any reference to the specific elements of a TE. According to a strong interpretation, the irrelevant particulars can also be eliminated from the argument itself. 5 Gedankenexperiment is the German word for TE; sometimes it’s also spelled Gedanken​Experiment.

24  The Scientific Imagination What is the role of the imagination in this framework? Norton barely mentions the word “imagination” and never explores the notion. However, when he talks about the “picturesque” character of TEs (1996, 2004), he seems to associate imagination with mental imagery. On other occasions he also seems to condemn imagination as irrational thinking, as when he writes that “empiricist philosophers of science . . . must resist all suggestions that one of the principal foundations of science, real experiments, can be replaced by the fantasies of the imagination” (1996, 335, italics added). So he seems to regard imagination as irrelevant both to the derivation of the outcomes of TEs and to their analysis and assessment. Nersessian and Gendler defend different versions of the imagistic view against the idea that TEs are mere logical arguments involving propositional reasoning. While they do not discuss Galileo’s TE, their proposals entail that when performing this TE we form a perception-​like representation of a U-​ shaped cavity and a ball rolling down into the cavity. Gendler claims that some TEs crucially require imagistic reasoning and that “the presence of a mental image may play a crucial cognitive role in the formation of the belief in question” (2004, 1154). To lend support to these claims she presents a series of examples from problem-​solving contexts where similar imagistic abilities would be crucial. For example, she asks the reader to imagine whether four elephants would fit comfortably in a certain room and suggests that “presumably . . . you called up an image of the room, made some sort of mental representation of its size . . . , called up proportionately-​sized images of four elephants, mentally arrayed them in the room, and tried to ascertain whether there was space for the four elephants within the confines of the room’s four walls” (2004, 1157). Nersessian develops this approach to TEs by appealing to the literature on mental modeling and mental simulation.6 On her view, the performance of a TE involves the manipulation of a mental model within the constraints of a specific domain of scientific inquiry. A mental model (which is distinct from a SM) is a mental analogue of a real-​world phenomenon. Accordingly, much of the work in Nersessian’s account goes into articulating the nature of mental analogues. She appeals to the distinction between two different kinds of mental representations enabling two different kinds of cognitive processes. On the one hand, there are linguistic and formulaic representations that enable logical and mathematical operations, which are rule-​based and truth-​ preserving. These representations “are interpreted as referring to physical 6 See especially Johnson-​Laird 1980, 1982, 1983, 1989.

Capturing the Scientific Imagination  25 objects, structures, processes, or events descriptively” (2007, 132). Their relationship to what they refer to “is truth, and thus the representation is evaluated as being true or false” (2007, 132). On the other hand, there are iconic representations, which include analogue models, diagrams and imagistic representations. They “involve transformations of the representations that change their properties and relations in ways consistent with the constraints of the domain” (2007, 132). For example, Nersessian asks the reader to think about how to move a sofa through a doorway and writes that “the usual approach to solving the problem is to imagine moving a mental token approximating the shape of the sofa through various rotations constrained by the boundaries of a doorway-​like token” (2007, 128). Iconic representations enable the latter sort of processing operations, or simulative model-​based reasoning. They “are interpreted as representing demonstratively” (2007, 132). And their relationship to what they represent “is similarity or goodness of fit. Iconic representations are similar in degrees and aspects to what they represent, and are thus evaluated as accurate or inaccurate” (2007, 132). Mental models are mental analogues of real-​world phenomena. And mental analogues are iconic representations that cannot be reduced to a set of propositions. In the next section we discuss positions on the imagination found in aesthetics and philosophy of mind, and based on the insights gained in this discussion we evaluate the positions introduced in this section. We argue that Gendler and Nersessian overstate the importance of the imagistic imagination, which we find to be unnecessary for the performance of TEs and the use of SMs. Norton’s account, by contrast, underplays the importance of the imagination. We argue that construing TEs as arguments presupposes a propositional kind of imagination, which we argue is necessary for the performance of TEs and SMs.

1.4  Varieties of Imagination This section provides tools for a reevaluation of the role of the imagination in TEs and SMs by presenting positions from the rich and intricate literature on imagination in aesthetics, philosophy of mind, and cognitive science in a way that makes them applicable to problems in the philosophy of science. In doing so we also offer a novel taxonomy of imaginative abilities. Central to accounts of imagination is the distinction between the content of a mental state and the attitude an agent takes toward this content. Different mental states can have the same content. One can believe that

[Figure 1.1: Varieties of imaginative abilities. The figure is a tree: imagination divides into objectual imagination (with imagistic and non-imagistic varieties) and propositional imagination, whose common core is MCPI and whose varieties include make-believe, counterfactual reasoning, dreaming, supposition, and others.]

there is a tree in the garden and one can imagine that there is a tree in the garden. Imagination and other states must therefore differ at the level of attitude. This said, a crucial distinction pertains to the kind of content toward which an imaginative attitude is taken. We can imagine that there is a tree in the garden, and we can imagine a tree in the garden. Whether we imagine a proposition7 or an object leads to the distinction between the two main varieties of imagination: propositional imagination and objectual imagination. Figure 1.1 shows the different accounts that we will discuss in this section along with their logical relations to each other to aid orientation.

7 Philosophers of language disagree about the nature of propositions. For the purpose of this chapter it suffices to say that propositions are the intersubjective objects of propositional attitudes, that they are the bearers of truth-​values, and that they are expressed by using syntactically well-​ formed sentences.


1.4.1 Objectual Imagination The objectual imagination is a mental relation to a representation of a real or nonexistent entity. One can imagine London or the fictional city Macondo, Napoleon or Raskolnikov, a tiger or a unicorn. Yablo characterizes objectual imagination as having referential content of the kind “that purports to depict an object” (1993, 27). Yet he emphasizes that depicting an object does not require forming a mental image of it, which is why we can imagine objects that are hard (or even impossible) to visualize. We can imagine a chiliagon (a thousand-​sided polygon) even if we cannot form a mental image of it (1993, 27 n. 55). However, if we cannot form a mental image of a chiliagon, how can we imagine it without imagining that it is so-​and-​so? Yablo does not consider this issue, but Gaut offers a natural solution: “Imagining some object x is a matter of entertaining the concept of x, where entertaining the concept of x is a matter of thinking of x without commitment to the existence (or nonexistence) of x” (2003, 153). Imagining a chiliagon simply amounts to entertaining the concept of a chiliagon. In contrast with this somewhat minimalist view, a long philosophical tradition characterized objectual imagination as a kind of imagery: a relation between a subject and an image-​like representation of an object (real or nonexistent). Different varieties of imagery experiences correspond to different sensory modalities. The most common is visual imagination, often referred as “seeing in the mind’s eye,” “visualizing,” or “imagining seeing.” Other modalities give rise to “imagining hearing,” “imagining feeling,” and so on. Colloquially, the term “mental image” is used to denote the phenomenal character of the imagery experience—​that is, what it feels like to form a mental image. Scientists use the term in this pre-​theoretical way when they report certain imagery experiences as the source of scientific discoveries. Kekulé’s famous introspective report of a reverie involving a snake-​like figure closing in a loop as if seizing its own tail involves a mental image of this kind.8 The contemporary debate on mental imagery is vast, and there is disagreement on many foundational issues.9 Most of these issues can be set aside safely in the context of a discussion of SMs and TEs. Two issues are pertinent for our discussion: the nature of the representational format of mental images and the role of imagery in cognition. 8 See Shepard 1978 for more paradigmatic examples. 9 See Nigel 2014 for an excellent review.

28  The Scientific Imagination Within the debate on the representational format, Kosslyn’s (1980, 1983, 1994, 2005) quasi-​pictorial theory of visual imagery, or analogical theory, has been influential in recent debates about TEs, and we therefore concentrate on it here. According to the quasi-​pictorial theory, visual mental images have intrinsic spatial representational properties: they represent in a way that is analogous to the way in which pictures represent. But what is meant by a mental image having spatial representational properties? To pump intuitions, consider an example taken from an important experiment (Shepard and Metzler 1971). Subjects were presented with pairs of images showing three-​dimensional objects from different angles, and they had to say whether the two objects were in fact identical. The experiment showed that the reaction time was a linearly increasing function of the angular difference in the orientations of the objects. Subjects reported that they had to form mental representations with spatial properties that allowed them to rotate the object in their mind and check whether some rotation would yield a view that was congruent with the second picture. Kosslyn takes this to show that mental images have much in common with perceptual images.10 He offers the following analogy: perception is like filming a scene with a camera while at the same time watching the scene on-​ screen; mental imagery is like playing back on-​screen what has been recorded previously. This view is backed by the fact that visually imagining something with our eyes closed activates 92% of the regions of the brain that are also activated when we visually perceive something similar. However, Kosslyn is quick to add that the analogy is not perfect in one crucial respect: imagistic imagination is not just a passive playback process. In fact, images are put together actively. This allows us to vary the setup we have perceived. For instance, we can move around, in our mind, the pieces of furniture in a room and imagine the room arranged differently. So imagistic imagination is informed but not constrained by what we perceive. A time-​ honored philosophical tradition attributed a central role to mental imagery in all cognitive processes. This idea is usually traced back to Aristotle’s claim that “the soul never thinks without an image” (1995, iii 7, 431a15–​17), and it lived on in classical British empiricism. It was largely abandoned in the wake of influential objections by Frege ([1884] 1953), Ryle (1949), and Wittgenstein (1953). The dominant view nowadays is that 10 See “PhotoWings Interview:  Stanford Cognitive Scientist Stephen Kosslyn—​Mental Imagery and Perception,” Vimeo, uploaded December 7, 2012, by PhotoWings, https://​vimeo.com/​55140759.

Capturing the Scientific Imagination  29 most thinking is sentential—​or propositional—​and non-​imagistic. Fodor (1975, 174–​194) recognizes that mental images play some role in cognition, but submits that their meaning—​what they are images of or what they represent—​must be determined by a description in a language of thought, or mentalese. Even modern proponents of Kosslyn’s view do not attribute a central cognitive role to imagery, which is seen as deriving most or all of its semantic content from mentalese. A dissenting voice is Barsalou’s (1999), which has been influential in recent discussions about TEs. He proposes an alternative theory of perceptual symbols according to which cognition uses the same representational systems as perception. He distinguishes between what he calls “amodal” and “modal” symbols.11 Amodal symbols are the not imagistic language-​like symbols of mentalese. They are akin to words in that they are “linked arbitrarily to the perceptual states that produced them . . . Just as the word ‘chair’ has no systematic similarity to physical chairs, the amodal symbol for chair has no systematic similarity to perceived chairs” (1999, 578–​579). Modal symbols, by contrast, are subsets of perceptual states stored in long-​term memory. They are analogical because “the structure of a perceptual symbol corresponds, at least somewhat, to the perceptual state that produced it” (1999, 578). Barsalou emphasizes that modal symbols should not be identified with mental images,12 but he conceives of modal symbols as closely related to traditional conceptions of imagery and as involved in our conscious imagery experiences. Unlike proponents of the quasi-​pictorial view, Barsalou attributes a crucial role to modal symbols and claims that they are involved both in perception and in cognition. Returning to our earlier distinction between attitude and content, it should be emphasized that objectual imagination cannot be defined in terms of the presence of mental images because mental images can accompany episodes of memory, belief, desire, hallucination, and more. What makes the deployment of a mental image an instance of imagination is the attitude we take toward the mental image. We may, for instance, suspend belief and not react to images (imagining a fighter jet flying at us does not make us run to the bomb shelter). What exactly the relevant attitudes are is an interesting 11 The use of the term “modal” in this context has nothing to do with the use of the same term in modal logic. A modal symbol is one that pertains to the relevant sensory modality (e.g., visual modality, haptic modality, olfactory modality). 12 His reason for this is that mental states may sometimes be active even when the agent is not conscious of them. Paivio (1986), however, suggests that mental images can be active even when we are not consciously aware of them.

question. However, an answer to this question does not matter for the discussion of TEs and SMs to come, and so we set it aside here (yet we do pay attention to attitudes in the context of the propositional imagination, and some of the insights gained there could be carried over, mutatis mutandis, to the context of objectual imagination).

1.4.2 Propositional Imagination The propositional imagination is a relation to some particular proposition (or propositions). We analyze propositional imagination by first individuating a minimal core of propositional imagination (MCPI), which provides necessary and sufficient conditions for something to be an instance of propositional imagination. Different varieties of propositional imagination can then be distinguished by the further conditions they satisfy. Hence, each kind X of propositional imagination can be characterized by filling in the blank in the scheme

X = MCPI & ___ .

Three main features of the propositional imagination emerge from the literature. Taken together, these form MCPI. First, we are not free to believe whatever we want, but typically we are free to imagine whatever we want.13 To believe that p is to hold p as true at the actual world, and whether the actual world makes p true or false is not up to us. To imagine that p does not commit us to the truth of p. We can decide freely what to imagine, and we can engage in spontaneous imaginative activities such as daydreaming where our imagination is not guided consciously. We refer to this feature as freedom.14 Second, propositional imagination carries inferential commitments that are similar to those carried by belief, hence manifesting mirroring.15 If we believe that Anna is human and that humans have blood in their veins, we infer that Anna has blood in her veins irrespective of whether Anna is real or fictional. The inferences we make may depend on background assumptions 13 See, e.g., Currie and Ravenscroft 2002; Nichols and Stich 2000, 2003; and Velleman 2000. 14 We here set aside the issue of imaginative resistance (Walton 1994), which is fraught with controversy. 15 See, e.g., Gendler 2003; Leslie 1987; Nichols 2004, 2006; Nichols and Stich 2000; and Perner 1991.

Capturing the Scientific Imagination  31 and on the specific aims and interests that direct our reasoning, but this is true in both cases. Third, imagining that p does not entail believing that p. Typically, imagined episodes are taken to have effects only within the relevant imaginative context, hence manifesting quarantining.16 More generally, mental states of propositional imagination do not guide action in the real world. When watching a stage performance of Othello we may not want Desdemona to die, but only a hopeless country bumpkin would jump onstage to save the heroine. Quarantining does not imply that nothing of “real-​world relevance” can be learned from an act of pretense. Dickens’s Oliver Twist mandates us to imagine that many orphans in London in the mid-​nineteenth century were cruelly treated. We may well also believe that this was true. Such “exports” are, however, one step removed from the imagination. In sum, MCPI consists of freedom, mirroring, and quarantining. We are now in position to discuss specific varieties of propositional imagination. We consider supposition, counterfactual reasoning, dreaming, daydreaming, and make-​believe. There is no claim that this list is exhaustive, but we submit that it contains the main varieties needed to discuss SMs and TEs. Supposition Scientists often introduce SMs and TEs via the use of expressions such as “suppose,” “assume,” and “consider.” These are typically used interchangeably and so we regard them as synonyms, at least in the context of SMs and TEs. If a description of a model starts with “Suppose that three point masses move quantum-​mechanically in an infinite potential well . . . ,” then we are invited to engage in a particular imaginative activity. So when scientists introduce TEs and SMs by inviting us to suppose something, they typically invite us to imagine something without any commitment to its truth. The same use of the term can also be found in formal logic, where we sometimes assume a proposition in a process of inferential reasoning without any commitment to its truth—​for example, when we suppose that p in a proof by reductio. Supposition satisfies the three features of MCPI. We can suppose that most sentient life in the universe will soon be destroyed by an asteroid hitting the earth (freedom). The inferences we draw from this are similar, in relevant ways, to the ones we would make if we were to assume an attitude of belief



16 See, e.g., Gendler 2003; Leslie 1987; Nichols and Stich 2000; and Perner 1991.

(mirroring). Yet we do not take action to protect the well-being of our family and friends (quarantining). There are two standard features of supposition that typically distinguish it from other varieties of propositional imagination: epistemic purpose (EP) and rational thinking (RT). These features fill the blank in our schema:

Supposition = MCPI & EP & RT.

Supposition is typically associated with ratiocinative activities aimed at specific epistemic purposes. By “ratiocinative activities” we mean the sort of activities wherein a consequence is derived from certain premises via deductive or inductive reasoning. By “epistemic purpose” we mean that supposition is usually aimed at gaining knowledge. Some might doubt that supposition is a species of propositional imagination. In this vein Peacocke claimed that imagination is a “phenomenologically distinctive state whose presence is not guaranteed by any supposition alone” (1985, 20) because “to imagine something is always at least to imagine, from the inside, being in some conscious state” (1985, 21). This distinction is artificial since many of our imaginings do not involve any imagining from the inside, as when we imagine that Anna Karenina is in love with Vronsky without having any sort of love-​like experience ourselves. And some paradigmatic cases of supposition may involve a phenomenologically distinctive experience, as when we are invited to engage in hypothetical reasoning about being in such-​and-​such state or having this or that experience.17 Hence, supposition is a variety of propositional imagination, and one that is typically associated with ratiocinative activities aimed at specific epistemic purposes. Counterfactual Reasoning Counterfactual reasoning involves thinking about alternative scenarios and possible states of affairs via the use of counterfactual conditional statements of the form “If A were the case, then C would be the case,” or “A→ C ”in the standard formal notation. Counterfactual reasoning satisfies MCPI and therefore qualifies as a variety of propositional imagination. This ties in with 17 Another argument against regarding supposition as a kind of imagination is Gendler’s (1994) argument from imaginative resistance. Arguments pulling in the same direction have also been offered by Moran (1994) and Goldman (2006). We agree with Nichols (2006) that these arguments remain inconclusive.

Capturing the Scientific Imagination  33 the fact that Williamson recently advanced an account of counterfactual reasoning in terms of propositional imagination. He writes: “When we work out what would have happened if such-​and-​such had been the case, we frequently cannot do it without imagining such-​and-​such to be the case and letting things run” (2005, 19). On this view, if King Lear thinks, “If only I had not divided my kingdom between Goneril and Regan, Cordelia would still be alive,” he imagines a relevant situation in which he does not divide the kingdom between his two older daughters and from this he further imagines that Cordelia would still be alive. In order to do this, imagination must be constrained in specific ways. Stalnaker (1968) and Lewis (1973) advanced semantic analyses of counterfactuals that offer implicit criteria for how imagination should be constrained in counterfactual reasoning. The leading idea of both analyses is that a counterfactual A→ C is true if and only if in the closest possible world where A is true C is also true (we discuss differences between Stalnaker’s and Lewis’s development of this idea in section 1.6). It is important that the notion of closeness in the phrase “closest possible world” means closeness to the actual world, or to reality. Let us call a possible world in which A is the case an A-​world. The counterfactual conditional A→ C is then true if and only if C is true in the A-​world that is closest to the actual world. The truth conditions for counterfactuals provide the essential clues for the analysis of counterfactual imagination. The first essential feature is selectivity (S). When King Lear imagines what would have happened if he had not divided his kingdom between his two older daughters, he selects an antecedent that is contrary to a relevant fact in a very specific way. When thinking counterfactually one does not merely ponder that things could have been different. One selects a particular manner in which things could have been different (specified in A) and then reasons about a world in which this difference is the case (the A-​world). The second feature is reality orientation (RO). There could be many possible worlds in which A is true, and one could check for the truth of C in any of them. But those conditions don’t treat all A-​worlds on par. They single out an A-​world (or, as we shall see, a class of A-​worlds) that is closest to reality as the one that determines the truth of the counterfactual conditional. When King Lear pondered what would have happened if he had divided his kingdom differently, he wondered how things would be in a world that is just like the real world apart from the distribution of property in his family. Minimal departure from the actual world is an essential constraint on counterfactual reasoning.
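For reference, the two truth conditions behind this leading idea can be stated compactly; this is a standard textbook rendering rather than the chapter's own formulation, and we keep the chapter's arrow notation for the counterfactual conditional:

Stalnaker: A → C is true at a world w if and only if C is true at f(A, w), where the selection function f picks out the unique A-world closest to w.

Lewis: A → C is true at w if and only if either there is no A-world at all, or some A-world at which C holds is closer to w than any A-world at which C fails.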

We can then fill the blank in our schema as follows:

Counterfactual reasoning = MCPI & S & RO.

Contemporary work on counterfactual reasoning in empirical psychology backs the idea that when people evaluate counterfactual conditionals their imaginings are constrained in a reality-​oriented way. Byrne (2005) presents a series of experiments suggesting that people tend to imagine worlds with the same natural laws, with alternatives to more recent events rather than earlier events, and with alternatives to events that they can control rather than events that they cannot control.18 This is consonant with the reality orientation that emerges from Stalnaker’s and Lewis’s analyses. We note, however, that a more fine-​grained analysis of RO faces important issues. Stalnaker appeals to the “intuitive idea that the nearest, or least different, world in which [the] antecedent is true is the one that should be selected” (1981, 88) but provides no guidance as to what counts as “least different.” Lewis assumes a notion of similarity of worlds that is taken as a primitive, which, as Arló-​Costa and Egré notice, “leaves the notion of similarity unconstrained and mysterious” (2016, sec. 6.1). Dreams Scientists sometimes refer to their dreams as a source of inspiration for their discoveries, as in Kekulé’s introspective report mentioned earlier. Dreams satisfy MCPI to the extent that they are free, they usually mirror standard inferential mechanisms of reasoning, and they are quarantined since their content does not export to real-​world contexts. The individuating features of dreams are that they are solitary imaginative activities (SIA) that are performed while asleep (SI). These will fill the blank in the scheme:

Dream = MCPI & SIA & SI.

Walton describes dreams as also being “spontaneous, undeliberate imaginings that the imaginer not only does not but cannot direct (consciously)” (1990, 16), and so one might be tempted to add these features to the list of 18 Johnson-​Laird 1983 and Roese and Olson 1995 offer further empirical evidence that counterfactual reasoning is constrained in a reality-​oriented way. See also Weisberg 2016 for a discussion of philosophical and psychological treatments of how much of the real world is imported in counterfactual scenarios.

Capturing the Scientific Imagination  35 conditions. However, Ichikawa (2009) points out that those of us who can engage in lucid dreaming (which involves the subject’s awareness that he or she is dreaming) are able to consciously guide and explore their dreams. Dreams are often thought to involve some variety of imagery, but forming a mental image while dreaming is not necessary: we can dream conversations, jokes, philosophical arguments, and so on.19 Make-​believe Make-​believe emerges as a specific theoretical notion within Walton’s (1990) theory of fiction. Walton characterizes make-​believe as “the use of (external) props in imaginative activities” (1990, 67). Anything capable of affecting our senses can become a prop in virtue of there being a prescription to imagine something—​that is, a social convention either explicitly stipulated or implicitly understood as being in force within a certain game. Props are generators of fictional truths. Fictional truth is a property of those propositions that are among the prescriptions to imagine of a certain game. Walton’s notion of fictional truth is intrinsically normative and objective to the extent that the statement “it is fictional that p” is to be understood as “it is to be imagined that p.” Walton thinks that works of fiction are props in games of make-​believe. When reading the Sherlock Holmes stories we imagine that Holmes lives at 221B Baker Street in virtue of Conan Doyle’s prescription to imagine that this is the case. We can imagine that Holmes lives in Paris, but this does not conform to the story. Fictional truths divide into primary truths and implied truths, where the former are generated directly from the text while the latter are generated indirectly from the primary truths via general principles and standard rules of inference. These are called principles of generation. Sometimes implicit fictional truths are generated according to the so-​called reality principle, which keeps the world of the fiction as close as possible to the real world. For example, from the primary fictional truth that Sherlock Holmes lives in Baker Street and our knowledge of London’s geography we can infer the implied fictional truth that Holmes lives nearer to Paddington Station than to Waterloo Station. Depending on the context of interpretation, however, implied truths can also be generated according to the mutual belief principle, which is directed toward the mutual beliefs of the members of the community in which the story originated. Many of the implied truths of Dante’s Divine Comedy

19 Closely related to dreaming is daydreaming. For a discussion, see Walton 1990, 13.

are generated from the primary truths of the story and the medieval belief in the main tenets of the Ptolemaic geocentric system. Two main features of make-believe emerge from Walton’s characterization: make-believe is a social activity (SA) and it involves props that convey a normative aspect (NA) to its content. It obviously satisfies the MCPI conditions, and so we obtain:

Make-believe = MCPI & SA & NA.

Some might question the characterization of make-​believe as a variety of propositional imagination. Walton himself distinguishes between “imagining a proposition, imagining a thing, and imagining doing something—​ between, for instance, imagining that there is a bear, imagining a bear, and imagining seeing a bear” (1990, 13). In particular, he develops the latter notion as imagining de se, as opposed to mere propositional imagination, and further claims that games of make-​believe involve a sort of participation that crucially requires de se imagining. The motivation for Walton’s claim is that on his view literary fictions have a specific cognitive purpose in granting us insight into ourselves, which requires imagining things from a participatory perspective.20 However, Currie (1990) argues, rightly in our view, that make-​believe, just like belief and desire, is a propositional attitude. He does not think of make-​believe as a phenomenologically distinctive attitude, although he does accept that make-​believe, like belief and desire, “is a kind of state that can be accompanied by or give rise to introspectible feelings and images” (1990, 21). This, however, is not necessary and hence not a defining feature of make-​believe. According to this characterization of make-​believe, episodes of supposition and counterfactual reasoning are also episodes of make-​believe if they involve props and are therefore constrained by the prescriptions to imagine in a game of make-​believe. In this way, they also satisfy NA and SA. Dreams, by contrast, cannot be interpreted in a similar way. Dreaming is a solitary activity that does not satisfy SA and NA because it does not involve props.



20 Cf. Currie 1990, sec. 1.4, 7.5.


1.5  Reconsidering the Scientific Imagination We now return to the views we introduced in section 1.3. As we have seen, Norton puts forward ET, suggesting that the picturesque character of a TE can be eliminated. However, at the same time condition (i) claims that TEs posit hypothetical or counterfactual states of affairs. As we have seen, counterfactual reasoning constitutes a variety of propositional imagination, which would suggest that conducting a TE involves propositional imagination. This suspicion firms up when we look at Norton’s reconstructions of TEs. Consider Galileo’s falling bodies, which Norton (1996, 341–​342) reconstructs as a reductio ad absurdum: 1. Assumption for reductio proof: The speed of fall of bodies in a given medium is proportionate to their weights. 2. From 1: If a large stone falls with 8 degrees of speed, a smaller stone half its weight will fall with 4 degrees of speed. 3. Assumption: If a slower falling stone is connected to a faster falling stone, the slower will retard the faster and the faster will speed the slower. 4. From 3: If the two stones of 2 are connected, their composite will fall slower than 8 degrees of speed. 5. Assumption: The composite of the two weights has greater weight than the larger. 6. From 1 and 5: The composite will fall faster than 8 degrees of speed. 7. Conclusions 4 and 6 contradict. 8. Therefore, we must reject Assumption 1. 9. Therefore, all stones fall alike. This argument satisfies ReT and the weak interpretation of ET since (9) is a general claim about all falling stones. However, it does not conform to the strong interpretation of ET because it does posit imagined states of affairs involving imagined particulars. Steps (2), (4), (5), and (6) explicitly involve reference to the objects described in Galileo’s original TE. None of the situations specified by these statements actually obtains in the real world. We assume them in the imagination for the purpose of drawing the relevant inferences. This does not mean that the general laws and principles reached via TEs could not be reached via some other means. But in TEs the arguments leading to the general conclusions involve imagined scenarios and particulars.

38  The Scientific Imagination We have pointed out that the propositional imagination is characterized by MCPI, positing an ability to ponder and evaluate alternative scenarios that is deliberate, mirrors the inferential mechanisms of belief, and quarantines content. This is exactly the sort of imagination required by TEs. Galileo deliberately imagines a certain hypothetical scenario, he develops a deductive reasoning leading to a contradiction, and he quarantines its content since he explicitly invites us to imagine a non-​actual situation. We conclude that TEs involve propositional imagination.21 The remaining question is, which kind of propositional imagination? We come back to this issue in section 1.6. Let us now consider the view that the imagistic variety of objectual imagination is crucial to the performance of TEs. We focus on Nersessian’s proposal because she offers the most detailed defense of this view. As we have seen, her account is based on the notions of mental analogues and iconic representations. She develops these concepts by appealing to Barsalou’s distinction between modal and amodal symbols, which we discussed in section 1.4.1. Mental models are iconic representations that can be composed of either modal or amodal symbols. So, for example, a cat-​like representation on a plane-​like representation is a mental model constituted by modal symbols (modal iconic). A circle resting on a square for a cat being on a plane is a mental model constituted by amodal symbols (amodal iconic).22 Iconic representations (be they modal iconic or amodal iconic) are imagistic according to the currently dominant notion of imagery, which, as we have seen, rejects the identification of mental images with pictures in the mind.23 Figure 1.2b is not a picture. The circle and the square are arbitrarily linked to what they represent, yet they preserve the spatial relations that Figure 1.2a has. Figure 1.2b is more abstract than Figure 1.2a, but it is an image nevertheless. The main problem with Nersessian’s proposal, as well as with other accounts produced within the literature on mental models, is that there is no general consensus on many foundational issues of this framework, a point that Nersessian (2007, 129ff.) herself acknowledges. In particular, the appeal to similarity and goodness of fit as the kind of relationship that characterizes iconic representations is controversial. As we have pointed out, most 21 This admission is also implicit in Sorensen’s (1992, 202–​203) discussion of supposition. 22 Thanks to Nancy Nersessian for suggesting these two examples to us in personal communication. 23 In fact, Nersessian rejects the old pictorial notion of imagery. See Nersessian 1992, 294; 2007, 133 and 149 n. 6. She declares, however, that iconic mental models are imagistic in the contemporary sense of the term (cf. 2007, 137).


[Figure 1.2a: Modal iconic. A cat-like shape resting on a plane-like shape.]
[Figure 1.2b: Amodal iconic. A circle (CAT) resting on a square (PLANE).]

cognitive scientists nowadays recognize that mental images have a specific representational format. Yet the standard view is that the relationship between a mental image and the object it represents is determined by a description couched in mentalese. Mental images might share some properties with what they represent, but this is not what makes them representations of what they represent. As long as these basic issues remain unresolved, Nersessian’s claim that TEs are iconic representations and that the execution of a TE consists merely in the manipulation of such representations remains in need of clarification. However, even if we assume, for the sake of argument, that these issues can be resolved in a satisfactory manner, two concerns about the imagistic view remain. The first is whether imagistic reasoning is sufficient to the derivation of the outcome of a TE. The problem is that not all factors that matter to the successful performance of a TE seem to have sensory-​like correlates. When considering Galileo’s cavity we do not seem to have a perception-​like representation of the cavity being frictionless or of the lack of air resistance. Likewise, we cannot form a perception-​like representation of the concept of force without having a theoretical definition, which is usually given in linguistic and formulaic symbols. Similarly, Malileo’s SM assumes these concepts, but he also requires theoretical knowledge of Lagrangean mechanics,

general principles and laws, mathematical abilities, and logical inferential abilities. We could not even begin to reason about the model and its domain of inquiry without the relevant theoretical, mathematical, and logical abilities. So it is not surprising that Nersessian admits that "information deriving from various representational formats, including language and mathematics, plays a role in scientific thought experimenting" (2004, 147). However, this form of reasoning is, by her own lights, fundamentally different from reasoning with iconic representations, and so it is difficult to see how it fits into a view that places iconic representations at the heart of TEs. Imagistic reasoning therefore seems insufficient for the performance of TEs and the use of SMs.

The second concern is whether imagistic reasoning is essential (or necessary) to the performance of TEs. Our abilities to form mental images and perform the relevant kinds of operations are highly subjective and idiosyncratic. Yet it would be implausible to argue that individuals with a poor imagistic ability could not derive the correct outcome of Galileo's TE (or, for that matter, of any TE).24 Presumably, one could perform the TE and draw the relevant conclusion by understanding the propositional content of the argument underlying it. When performing the TE we do not have to form a mental image of the U-shaped cavity and the series of transformations we described in section 1.2. We need to grasp the relevant concepts, with or without forming a mental image of the objects and transformations they stand for. The problem becomes even more apparent when we consider SMs. Malileo's SM could be illustrated with figures that facilitate a scientist's reasoning by making it more vivid, and some of us might form a mental image of the parabola and the ball. However, this is not necessary. We can calculate the trajectory of the ball by going through the relevant mathematical calculations and by deploying the mathematical and theoretical notions that are relevant for this specific domain of inquiry.

1.6  Analyzing the Scientific Imagination

We have argued that while TEs and SMs do not require imagery, the propositional imagination is crucial to them. But what sort of propositional

24 As Arnon Levy pointed out to us, this would be an interesting empirical question.

imagination is required? In section 1.4.2 we individuated supposition, counterfactual reasoning, dreaming, and make-believe as different varieties of propositional imagination. Scientists sometimes report their dreams as a source of inspiration for scientific discoveries. But these imaginative activities are typically subjective and unconstrained, and, more to the point, they are not involved in the performance of a TE or the exploration of an SM. So we can safely set dreams aside. This leaves the other three varieties as contenders. They are genuine options and deserve to be taken seriously. We now discuss what it would take to analyze TEs and SMs in terms of each of these options and make the challenges that emerge explicit. Our tentative conclusion is that SMs and TEs are most naturally explained in terms of make-believe. The conclusion is tentative because we don't claim to present a complete account of the scientific imagination, and a final analysis may well end up incorporating elements from all three accounts.

Let us begin with supposition. Often scientists introduce TEs and SMs by explicitly inviting us to suppose that some (real or non-actual) objects are endowed with certain properties and that they behave in certain ways. To perform a TE or use an SM would then amount to supposing a number of things and deriving consequences from them with the aim of gaining knowledge. Unfortunately, this is too weak. Supposition, as we have characterized it, is not an essentially social activity (since it can be purely private), and as such, it does not account for the social character of scientific activities. Furthermore, it does not have a normative element to it, and such elements seem to be characteristic of scientific thought. One can suppose anything, and as long as no further restrictions are imposed, one can conclude almost anything from certain assumptions. The notion of supposition imposes no constraints on inferences beyond those that follow from mirroring, which is part of MCPI. This is too little. First, mirroring alone is too weak to capture the way in which the imagination is constrained in TEs and SMs. Second, mirroring only provides a thin inferential structure that consists primarily of logical operations, but it doesn't offer the kind of principles that would guide a process of investigation to the kind of inferred truths that the study of TEs and SMs aims to uncover. For these reasons supposition does not offer a satisfactory analysis of the propositional imagination in TEs and SMs.

Let us now consider counterfactual reasoning. From this point of view the performance of a TE or the use of an SM amounts to evaluating the counterfactual M → C, where "M" is a description of the SM or TE. A claim C is

42  The Scientific Imagination then true in the TE or SM if the counterfactual M → C is true. For instance, it is true in Newton’s model of the solar system that planets move in elliptical orbits if the counterfactual “if planets were perfect spheres gravitationally interacting with each other and nothing else, then they would move in elliptical orbits” is true. A first challenge for this analysis of TEs and SMs is the issue of completeness. Possible worlds are complete. Intuitively, a possible world is complete when the principle of the excluded middle holds and for any proposition p it is the case that either p or not-​p holds.25 But models are not complete in this sense. Claims about the date of the Battle of Waterloo, the height of the tallest building in London, and the average rainfall in China last year are neither true nor false in, say, Einstein’s elevator TE or a mechanical model of the atom simply because battles, buildings, and levels of rainfall are not part of these TEs and SMs. However, the closest possible world in which M is true is one in which there are matters of fact about these things (because possible worlds are complete), and so the counterfactual M → C may have a truth-​value for claims that have nothing to do with the model. For instance, the counterfactual “if planets were perfect spheres gravitationally interacting with each other and nothing else, then the height of the tallest building in London would be 310 meters” could come out true. But in fact the truth-​ value of this counterfactual should be indeterminate (i.e., M → C should be neither true nor false). So the worry is that the standard semantics for counterfactuals would make TEs and SMs complete. Whether this worry is a real problem depends on the details of the account. The crucial question is whether the account one adopts accepts the so-​called principle of conditional excluded middle (CEM).26 CEM says that for all C either M → C is true or M →  C is true (where “ C” stands for “not-​C”). Stalnaker’s semantics works with a selection function that picks a unique nearest world w, and hence the truth-​value of M → C is simply the truth-​value of C in w. Since C is either true or false in w, either M → C or M →  C is true and CEM holds. Stalnaker (1981) has defended CEM, and a 25 See Van Inwagen 1986 for a critical discussion of the notion of completeness and the metaphysics of possible worlds, and Priest 2008 for a discussion of the notion of completeness in modal logic. Stalnaker (1986, esp. 117–​118) further discusses the notion of completeness and its role in framing the distinction between possible world semantics and situation semantics (e.g., Barwise and Perry 1983, 1985), where completeness applies only to possible worlds as total states that include everything that is the case, while situations can be construed as partial worlds or small parts of worlds and therefore cannot be complete. 26 We are grateful to Timothy Williamson, Matthieu Gallais, and Sonia Roca-​Royes for helpful discussions about CEM.

number of recent authors have followed suit (see, e.g., Cross 2009; Williams 2010). But CEM conflicts with the incompleteness of TEs and SMs, and defenders of CEM have to find a way around this problem.

In contrast with Stalnaker's, Lewis's semantics works with a comparative similarity relation, which defines a weak total ordering of all possible worlds with respect to each possible world. When several possible worlds tie for similarity, the truth of M → C requires the truth of C in all the nearest M-worlds. If C is true in some of the nearest M-worlds but not in others, then both M → C and M → ¬C are false in the actual world and CEM fails. The failure of CEM is a step in the right direction, but by itself this is insufficient to solve the problem of incompleteness. For a solution to this problem requires not only that for some C neither M → C nor M → ¬C is true, but also that this be the case for all Cs that don't belong to the TE or SM. This implies that for all Cs about which the TE or the SM remain silent, it must be the case that there are some M-worlds in which C is true and some other M-worlds in which C is false that are at the same distance from the actual world. Since the set of Cs that belongs to the TE or SM is different from case to case, this approach requires that we give up on the notion of a universal similarity metric between possible worlds and postulate that each TE or SM comes with a tailor-made cross-world similarity metric that ensures that M → C has no determinate truth-value for all the right Cs.

The next issue is how we acquire counterfactual knowledge. Roca-Royes submits that "how capable we are of counterfactual knowledge depends on how capable we are of tracking the similarity order" (2012, 154). In agreement with Kment (2006), she also holds that our capability for counterfactual knowledge "needs to be based on rules that permit us to determine which propositions are cotenable with a given antecedent" (Kment 2006, 288). Thus, any epistemology of counterfactuals needs to identify the relevant rules. This, however, is no easy feat. A rule that merely states that we shouldn't go beyond considering possible worlds that are maximally similar to the actual world needs an indication of what counts as a maximally similar world. Kment (2006) offers a metaphysical account of different types of similarity facts and of their relative weights. However, there is no general agreement on these issues. These problems are inherited by a counterfactual epistemology for TEs and SMs. As previously noted, the set of Cs that belongs to a TE and an SM is different from case to case. Thus, we need a tailor-made cross-world similarity metric for each case, or perhaps we can identify a series of overarching types of metrics for different types of TEs and SMs.
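For ease of reference, the contrast between the two semantics can be summarized schematically as follows (this is only a compact restatement, in standard notation, of the truth conditions just described; it adds nothing beyond them, and "f" merely labels the selection function mentioned above):

\[
\begin{aligned}
\textbf{CEM:}\quad & (M \rightarrow C) \lor (M \rightarrow \neg C) \quad \text{for every claim } C\\
\textbf{Stalnaker:}\quad & M \rightarrow C \text{ is true at } w \text{ iff } C \text{ is true at the single selected } M\text{-world } f(M, w)\\
\textbf{Lewis:}\quad & M \rightarrow C \text{ is true at } w \text{ iff } C \text{ is true at all the nearest } M\text{-worlds to } w
\end{aligned}
\]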

A tenable account of counterfactual imagination will have to address these issues.

Let us finally turn to make-believe. Analyses of SMs in terms of make-believe have been suggested by Frigg (2010), Levy (2015), and Toon (2012), and of TEs by Meynell (2014). On this view, to perform a TE or use an SM amounts to exploring a fictional scenario that is defined by the primary truths and the principles of generation. In doing so, the scientist discovers things about the scenario and finds out what holds and what doesn't hold in it. Make-believe is a highly constrained form of imagination. The constraints come from the use of props and the principles of generation that are constitutive of a game of make-believe. These constraints capture well how TEs and SMs work. When performing Galileo's TE we imagine that so-and-so is the case in virtue of Galileo's prescriptions. We could imagine that instead of a ball we put a toothpick on the edge of the cavity. But this is a violation of the prescriptions to imagine in force within Galileo's TE. Furthermore, we derive the law of inertia from the law of equal heights (a general principle of generation) and the appropriate variations of the TE setting as further prescribed by Galileo. Likewise, when working with Malileo's model we could imagine that the ball is oval and has an inhomogeneous mass distribution that causes it to wobble inside the cavity. But this is a violation of the rules of Malileo's game of make-believe. To use the model properly, we have to engage in the official game and derive the outcome from Malileo's prescriptions in combination with the mathematical equation and theoretical principles of Lagrangean mechanics.

Not only is make-believe constrained due to its reliance on props and socially sanctioned principles of generation, but it is also an essentially social imaginative activity. It has an objective content that is normatively characterized in terms of social conventions implicitly or explicitly understood as being in force within the relevant game. The social character and objectivity of make-believe are typical of the sort of imaginative activities involved in TEs and SMs. The props in the game are the linguistic descriptions, graphs, and mathematical formulae used by scientists in the performance and communication of TEs and in the development and exploration of SMs. In this way, we can explain the notion of truth in a TE and truth in an SM in terms of fictional truth. The latter carries over to TEs and SMs simply by interpreting the propositions that are true in a TE and true in an SM as being among the prescriptions to imagine specified in their original assumptions, either explicitly or implicitly. In contrast with possible worlds, the content generated by a game of make-believe is incomplete. Propositions that do not belong to

the game of make-believe of a certain TE or SM are neither mandated to be imagined nor mandated not to be imagined, and hence they are neither fictionally true nor false.

Make-believe also accounts for the mechanisms of generation of the implicit truths of TEs and SMs. The performance of a TE and the exploration of an SM consist in finding out what is true according to a TE and what is true according to an SM, which goes beyond what is explicitly stated in the original assumptions. These implicit fictional truths can be inferred according to certain principles of generation. This also provides an epistemology for fictional truths: we investigate a TE or an SM by finding out what follows from the primary truths of the model and the principles of generation. This is in line with scientific practice, where a significant part of the work goes into studying the consequences of the basic assumptions of the TE or SM. Eventually this leads to the generation of hypotheses about the real world that can be tested for genuine truth or falsity.27

What principles of generation constrain the contents of TEs and SMs? We have presented the reality principle and the mutual belief principle as those constraining the generation of implicit fictional truths in stories. While these principles can be at work in certain TEs or SMs, other options may be possible. Meynell (2014, 4162–4163) points out that different kinds of TEs make use of different principles, and which ones are chosen depends on disciplinary conventions and interpretative practices. Specifically, she points out that "which principles of generation a physicist brings most automatically to a TE will tend to reflect her beliefs about reality as well as the various theories and projects upon which she currently works" (2014, 4163). For this reason neither the reality principle nor the mutual belief principle is in any way privileged, and different principles may be needed in specific domains of scientific inquiry. It is an advantage of the framework of make-believe that it has the flexibility to accommodate such context-specific principles. Make-believe is at once constrained (due to its reliance on props and principles of generation) and flexible (due to the freedom of choosing different principles). This renders make-believe a promising analysis of the kind of imaginative activity at work in TEs and SMs.

27 See Salis 2016 for a discussion of theoretical hypotheses generated in SMs in connection with make-​believe and for different analyses.


1.7  Conclusion

This chapter investigated the nature of imaginative activities involved in TEs and SMs. We find ourselves in the seemingly paradoxical situation that the imagination is at once deemed crucial and dismissed because of its purportedly intrinsic imagistic character. This tension can be resolved, we submit, by recognizing that there is a propositional variety of imagination. A discussion of both imagistic and propositional kinds of imagination leads us to the conclusion that while propositional imagination is crucial to the performance of TEs and the use of SMs, imagistic imagination is neither sufficient nor necessary. We then tentatively suggest that the imaginative activities in SMs and TEs are most naturally analyzed in terms of make-believe, leaving open the possibility that a final analysis may well end up incorporating elements from other varieties of propositional imagination.

Acknowledgments

We would like to thank Nancy Nersessian for an extremely helpful email exchange, and Peter Godfrey-Smith, Arnon Levy, and Mike Stuart for comments on earlier drafts. Thanks to Alisa Bokulich, Ruth Byrne, Greg Currie, Stacie Friend, Matthieu Gallais, Manuel García-Carpintero, Sonia Roca-Royes, Alberto Voltolini, Michael Weisberg, and Tim Williamson for helpful discussions. Previous versions of this chapter were presented at workshops at the Van Leer Institute in Jerusalem, at the Institute of Philosophy in London, and at the Centre for Philosophy of Natural and Social Science at the London School of Economics. We would like to thank the audiences of these events for their comments. Frigg would like to acknowledge financial support from the Spanish Ministry of Science and Innovation (MICINN) through grant FFI2012-37354. Salis would like to acknowledge financial support from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement no. 654034.

References

Aristotle. (1995). "De Anima." In The Complete Works of Aristotle: The Revised Oxford Translation, Vol. 1. Bollingen Series LXXI. Princeton, NJ: Princeton University Press.

Capturing the Scientific Imagination  47 Arlo-​Costa, H., and Egré, P. (2016). “The Logic of Conditionals.” In The Stanford Encyclopedia of Philosophy (Winter 2016 ed.), edited by E. N. Zalta. http://​plato.stanford.edu/​archives/​win2016/​entries/​logic-​conditionals. Barsalou, L. W. (1999). “Perceptual Symbol Systems.” Behavioral and Brain Sciences 22: 577–​660. Barwise, J., and Perry, J. (1983). Situations and Attitudes. Cambridge, MA: MIT Press. Barwise, J., and Perry, J. (1985). “Shifting Situations and Shaken Attitudes.” Linguistics and Philosophy 8: 103–​161. Bohr, N. ([1934] 1961). Atomic Theory and the Description of Nature: Four Essays with an Introductory Survey. Cambridge: Cambridge University Press. Brown, J. R. (1991). The Laboratory of the Mind. Cambridge: Cambridge University Press. Brown, J. R. (2004). “Peeking into Plato’s Heaven.” Philosophy of Science 71, no. 5: 1126–​1138. Byrne, R. (2005). Rational Imagination. How People Create Alternatives to Reality. Cambridge, MA: MIT Press. Cartwright, N. (2010). “Models: Parables v. Fables.” In Beyond Mimesis and Convention: Representation in Art and Science, edited by R. Frigg and M. C. Hunter, 19–​32. Berlin: Springer. Cross, C. (2009). “Conditional Excluded Middle.” Erkenntnis 70: 173–​188. Currie, G. (1990). The Nature of Fiction. Cambridge: Cambridge University Press. Currie, G., and Ravenscroft, I. (2002). Recreative Minds: Imagination in Philosophy and Psychology. Oxford: Clarendon Press. Del Re, G. (2000). “Models and Analogies in Science.” Hyle 6: 5–​15. Dirac, P. A. M. (1958). Principles of Quantum Mechanics. 4th ed. Oxford: Clarendon Press. Einstein, A. (2005). Relativity: The Special and the General Theory. New York: Pi Press. Fodor, J. A. (1975). The Language of Thought. New York: Thomas Crowell. Frege, G. ([1884] 1953). The Foundations of Arithmetic. Translated by J. L. Austin. Oxford: Blackwell. Frigg, R. (2010). “Fiction and Scientific Representation.” In Beyond Mimesis and Nominalism: Representation in Art and Science, edited by R. Frigg and M. Hunter, 97–​ 138. Berlin: Springer. Gaut, B. (2003). “Imagination and Creativity.” In The Creation of Art:  New Essays in Philosophical Aesthetics, edited by B. Gaut and P. Livingston, 148–​ 173. Cambridge: Cambridge University Press. Gaut, B. (2010). “The Philosophy of Creativity.” Philosophy Compass 5, no. 12: 1034–​1046. Gaut, B., and Livingston, P. (Eds.). (2003). The Creation of Art: New Essays in Philosophical Aesthetics. Cambridge: Cambridge University Press. Gendler, T. S. (2004). “Thought Experiments Rethought—​and Reperceived.” Philosophy of Science 71: 1154–​1163. Godfrey-​Smith, P. (2006). “The Strategy of Model-​Based Science.” Biology and Philosophy 21: 725–​740. Godfrey-​Smith, P. (2009). “Models and Fictions in Science.” Philosophical Studies 143: 101–​116. Goldman, A. (2006). “Imagination and Simulation in Audience Responses to Fiction.” In The Architecture of the Imagination, edited by S. Nichols, 41–​ 56. Oxford: Clarendon Press. Harré, R. (1970). The Principles of Scientific Thinking. London: Macmillan. Harré, R. (1988). “Where Models and Analogies Really Count.” International Studies in the Philosophy of Science 2: 118–​133.

48  The Scientific Imagination Ichikawa, J. (2009). “Dreaming and Imagination.” Mind and Language 24, no. 1: 103–​121. Johnson-​Laird, P. N. (1980). “Mental Models in Cognitive Science.” Cognitive Science 4: 71–​115. Johnson-​Laird, P. N. (1982). “The Mental Representation of the Meaning of Words.” Cognition 25: 189–​211. Johnson-​Laird, P. N. (1983). Mental Models. Cambridge, MA: MIT Press. Johnson-​Laird, P. N. (1989). “Mental Models.” In Foundations of Cognitive Science, edited by M. Posner, 469–​500. Cambridge, MA: MIT Press. Kment, B. (2006). “Counterfactuals and the Analysis of Necessity.” Philosophical Perspectives 20: 237–​302. Kosslyn, S. M. (1980). Image and Mind. Cambridge, MA: Harvard University Press. Kosslyn, S. M. (1983). Ghosts in the Mind’s Machine: Creating and Using Images in the Brain. New York: Norton. Kosslyn, S. M. (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press. Kosslyn, S. M. (2005). “Mental Images and the Brain.” Cognitive Neuropsychology 22: 333–​347. Laymon, R. (1991). “Thought Experiments by Stevin, Mach and Gouy:  Thought Experiments as Ideal Limits and as Semantic Domains.” In Thought Experiments in Science and Philosophy, edited by T. Horowitz and G. J. Massey, 167–​191. Savage, MD: Rowman and Littlefield. Leslie, A. (1987). “Pretense and Representation:  The Origins of ‘Theory of Mind.’” Psychological Review 94, no. 4: 412–​426. Levy, A. (2015). “Modeling Without Models.” Philosophical Studies 172, no. 3: 781–​798. Lewis, D. K. (1973). Counterfactuals. Oxford: Basil Blackwell. Maxwell, J. C. (1965). The Scientific Papers of James Clerk Maxwell. Edited by W. D. Niven. Mineola, NY: Dover Publications. Meynell, L. (2014). “Imagination and Insight: A New Account of the Content of Thought Experiments.” Synthese 191: 4149–​4168. Moran, R. (1994). “The Expression of Feeling in the Imagination.” Philosophical Review 103: 75–​106. Morgan, M. (2004). “Imagination and Imaging in Model Building.” Philosophy of Science 71: 753–​766. Nersessian, N. J. (1992). “In the Theoretician’s Laboratory:  Thought Experimenting as Mental Modeling.” Philosophy of Science 2: 291–​301. Nersessian, N. J. (1999). “Model-​Based Reasoning in Conceptual Change.” In Model-​ Based Reasoning in Scientific Discovery, edited by L. Magnani, N. J. Nersessian, and P. Thagard, 5–​22. New York: Kluwer Academic/​Plenum. Nersessian, N. J. (2007). “Thought Experimenting as Mental Modeling:  Empiricism Without Logic.” Croatian Journal of Philosophy 7, no. 20: 125–​154. Nichols, S. (2004). “Imagining and Believing: The Promise of a Single Code.” Journal of Aesthetics and Art Criticism 62: 129–​139. Nichols, S. (2006). “Just the Imagination: Why Imagining Doesn’t Behave Like Believing.” Mind and Language 21, no. 4: 459–​474. Nichols, S., and Stich, S. (2000). “A Cognitive Theory of Pretense.” Cognition 74: 115–​147. Nichols, S., and Stich, S. (2003). Mindreading:  An Integrated Account of Pretence, Self-​ Awareness, and Understanding Other Minds. Oxford: Clarendon Press.

Capturing the Scientific Imagination  49 Nigel, T. J. T. (2014). “Mental Imagery.” In The Stanford Encyclopedia of Philosophy (Fall 2014 ed.), edited by E. N. Zalta. http://​plato.stanford.edu/​archives/​fall2014/​entries/​ mental-​imagery. Norton, J. (1991). “Thought Experiments in Einstein’s Work.” In Thought Experiments in Science and Philosophy, edited by T. Horowitz and G. J. Massey, 129–​148. Savage, MD: Rowman and Littlefield. Norton, J. (1996). “Are Thought Experiments Just What You Thought?” Canadian Journal of Philosophy 26, no. 3: 333–​366. Norton, J. (2004). “On Thought Experiments: Is There More to the Argument?” Philosophy of Science 71: 1139–​1151. Odenbaugh, J. (2015). “Semblance or Similarity? Reflections on Simulation and Similarity.” Biology and Philosophy 30: 277–​291. Paivio, A. (1986). Mental Representations: A Dual Coding Approach. New York: Oxford University Press. Peacocke, C. (1985). “Imagination, Experience, and Possibility.” In Essays on Berkeley, edited by J. Foster and H. Robinson, 19–​35. Oxford: Oxford University Press. Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA: MIT Press. Priest, G. (2008). “Many-​Valued Modal Logics: A Simple Approach.” Review of Symbolic Logic 1, no. 2: 190–​203. Roca-​ Royes, S. (2012). “Essentialist Blindness Would Not Preclude Counterfactual Knowledge.” Philosophia Scientiae 16, no. 2: 149–​172. Roese, N. J., and Olson, J. M. (1995). What Might Have Been: The Social Psychology of Counterfactual Thinking. Mahwah, NJ: Lawrence Erlbaum Associates. Ryle, G. (1949). The Concept of Mind. London: Hutchinson. Salis, F. (2016). “The Nature of Model-​World Comparisons.” Monist 99, no. 3: 243–​259. Shepard, R. N. (1978). “The Mental Image.” American Psychologist 33, no. 2: 125–​137. Shepard, R. N., and Metzler, J. (1971). “Mental Rotation of Three-​Dimensional Objects.” Science 171: 701–​703. Sorensen, R. (1992). Thought Experiments. New York: Oxford University Press. Stalnaker, R. (1968). “A Theory of Conditionals.” In Studies in Logical Theory, edited by N. Rescher, 2:98–​112. American Philosophical Quarterly Monograph Series, vol. 2. Oxford: Blackwell. Stalnaker, R. (1981). “A Defense of Conditional Excluded Middle.” In Ifs: Conditionals, Belief, Decision, Chance, and Time, edited by W. L. Harper, R. Stalnaker, and G. Pearce, 87–​102. University of Western Ontario Series in Philosophy of Science vol. 15. Dordrecht: D. Reidel. Stalnaker, R. (1986). “Possible Worlds and Situations.” Journal of Philosophical Logic 15, no. 1: 109–​123. Sugden, R. (2009). “Credible Worlds, Capacities and Mechanisms.” Erkenntnis 70: 3–​27. Toon, A. (2012). Models as Make-​ Believe:  Imagination, Fiction, and Scientific Representation. New York: Palgrave Macmillan. Van Inwagen, P. (1986). “Two Concepts of Possible Worlds.” Midwest Studies in Philosophy 11: 185–​213. Velleman, J. D. (2000). “The Aim of Belief.” In The Possibility of Practical Reason, 244–​281. Oxford: Oxford University Press. Walton, K. L. (1990). Mimesis as Make-​Believe: On the Foundations of the Representational Arts. Cambridge, MA: Harvard University Press.

50  The Scientific Imagination Walton K.  L. (1994). “Morals in Fiction and Fictional Morality.” Proceedings of the Aristotelian Society 68: 27–​50. Weisberg, D. S. (2016). “How Fictional Worlds Are Created.” Philosophy Compass 11, no. 8: 462–​470. Weisberg, M. (2007). “Who Is a Modeler?” British Journal for the Philosophy of Science 58: 207–​233. Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. New York: Oxford University Press. Williams, R. (2010). “Defending Conditional Excluded Middle.” Noûs 44, no. 4: 650–​668. Williamson, T. (2005). “Armchair Philosophy, Metaphysical Modality, and Counterfactual Thinking.” Proceedings of the Aristotelian Society 105: 1–​23. Wittgenstein, L. (1953). Philosophical Investigations. Edited by G. E. M. Anscombe and R. Rhees. Translated by G. E. M. Anscombe. Oxford: Blackwell. Yablo, S. (1993). “Is Conceivability a Guide to Possibility?” Philosophy and Phenomenological Research 53, no. 1: 1–​42.

2
If Models Were Fictions, Then What Would They Be?
Amie L. Thomasson

Models have come to play an increasingly important role in the sciences, from physics and economics to biology and the earth sciences. But talk of models raises a metaphysical question: what are these models? We must first distinguish between model descriptions (the kinds of descriptions appearing in scientific papers, textbooks, and diagrams) and the model systems described (ideal pendulums, systems of purely rational self-​ interested agents, or infinite populations of animals). The metaphysical puzzles arise for the model systems, given that there are no (concrete) frictionless pendulums, systems of perfectly rational purely self-​interested agents, or infinite populations of animals. Recently, there has been an increasing interest in the idea that model descriptions should be thought of as similar to stories, and model systems should be thought of as akin to fictional characters; as Nancy Cartwright puts it, “A model is a work of fiction” (1983, 153). I will not argue for this idea, though others have done so. There are certainly good prima facie reasons for thinking of models this way (see Frigg 2010b, 102–​103). First, although there is typically nothing (concrete) that matches the descriptions in fictional stories or scientific model descriptions, there are things we can apparently say truly (and falsely) about the characters or model systems. Moreover, as Peter Godfrey-​Smith has emphasized, “scientific modelers often treat model systems in a ‘concrete’ way that suggests a strong analogy with ordinary fictions” (2006, 739). That is, scientists often think of themselves as describing “imaginary biological populations, imaginary neural networks, or imaginary economies” where an “imaginary population is something that, if it was real, would be a flesh-​and-​blood population, not a mathematical object” (2006, 735). And as in the literary case, we are concerned not just with what is explicitly attributed to the objects in the


52  The Scientific Imagination model description but also (and to a greater extent) with what can be inferred from that basis, using the relevant rules at issue. But if model systems were (like) fictional characters, what would they be? One might hope to make some progress by looking at the various philosophical theories of fiction on the market. Traditionally, there were two dominant approaches to understanding fictional discourse. Neo-​Meinongians (Parsons 1980; Rapaport 1978; Wolterstorff 1980; Zalta 1983) hold that there is something that the stories correctly describe, and since there is no real, concrete object, they conclude that it must be a nonexistent or abstract object that (in some sense) has properties that fit the descriptions in the story. Their opponents, anti-​realists, deny that there are fictional characters at all. Any discourse that appears to refer to them, the anti-​realists hold, either can be paraphrased in a way that avoids the apparent reference or can be taken as in the context of a game of pretense, so we needn’t posit fictional objects. Some early approaches to scientific models parallel the neo-​Meinongian views of fiction—​taking model systems to be abstract objects that (in some sense) fit the model descriptions. But the idea that stories or models describe “description-​fitting” objects runs into complications and difficulties that have dimmed the initial appeal of such views, in the case of both fictions and models. In reaction against this, recent work on understanding scientific models as fiction has taken an anti-​realist turn, influenced by Kendall Walton’s pretense theory of fiction (1990). Indeed, both Roman Frigg (2010a, 2010b, 2010c) and Adam Toon (2010, 2012) adopt Walton’s theory wholesale, arguing that the pretense approach provides the basis for a good account of scientific modeling that avoids “ontological costs” (Frigg 2010c, 274; cf. Toon 2012). But while the pretense view does a great deal to advance our understanding of talk and thought about fictional characters and models alike, there are also problems with a pure pretense theory that are well known to those working in the philosophy of fiction. Walton treats not only discourse about what goes on within the content of a story but all discourse—​including “external” critical and historical claims—​as implicitly involving games of pretense. As a result, he gives us an unnecessarily convoluted and implausible reading of external discourse about fictional characters, with only dubious ontological benefits to show for it. I will argue that these problems for a pure pretense theory of fiction carry over as problems for a pure pretense theory of models. While the pretense approach provides a persuasive account of the “internal” discourse about

If Models Were Fictions, Then What Would They Be?  53 what goes on in a model, there are also many critical, historical, and theoretical contexts in which we seem to refer, without pretense, to model systems themselves. Indeed, scientific discourse itself requires these “external” ways of speaking about models. In the fiction literature, the problems with neo-​Meinongian realist views, on the one hand, and anti-​realist views of fictional characters, on the other, have led to the development of an increasingly popular alternative. That alternative is what I have elsewhere called an “artifactualist” view of fiction. Artifactualist approaches of various forms have been suggested or developed by, for example, Saul Kripke (2013), John Searle (1979, 71–​72), Nathan Salmon (1998), Stephen Schiffer (1996), and myself (Thomasson 1999, 2003a, 2003b, 2010). On the artifactualist view one can allow, with the pretense theorist, that talk about the content of a story is within the context of a pretense, and also allow that there are (typically) no objects that fit the descriptions in stories. Nonetheless, one can also allow that in writing such stories and introducing such games of pretense, authors thereby create fictional characters, understood as a kind of abstract cultural artifact. Artifactualist views thus preserve the advantages of pretense views without the costs that come from treating all talk about fictional characters as implicitly in the context of a pretense. But strangely, despite their popularity in fiction circles, artifactualist approaches to fiction have largely been overlooked by those aiming to understand scientific models on analogy with fiction. I will argue, however, that those who hope to treat scientific models as fictions would do better to abandon the pure pretense approach in favor of an artifactualist view. For models as for fiction, an artifactualist approach can retain the advantages of the pretense view while giving a far more straightforward account of external historical, theoretical, and critical discourse about models. The main perceived drawback to artifactualist views is their supposed “ontological costs.” In closing I will all too briefly suggest why ontological qualms of this sort should be discounted.

2.1  Troubles for Description-Fitting Objects

The literary works we read seem to refer to people, places, and activities and say things about them. Call the discourse within works of literature "fictionalizing discourse." In discussing works of fiction, we often—perhaps most

54  The Scientific Imagination often—​speak of fictional characters as they appear in the content of the story, or as they are described in the story. Call this “internal” fictional discourse. In such contexts, we will speak of Hamlet as a man—​a prince, born to Queen Gertrude and the now deceased King Hamlet. But we don’t just speak of the properties the character is directly ascribed in the story; we also speak of and discuss features of the character that are not directly mentioned. Although it’s never mentioned, it seems true to say that Hamlet has two legs, and literary critics might also make other inferences—​say, that he suffers from depression or an Oedipal complex. In giving an analysis of this sort of (internal) fictional discourse, we need some way to distinguish those attributions that seem true (Hamlet is a prince, Hamlet is Danish) from those that are false (Hamlet is a plum pudding, Hamlet is Italian). A natural move is to take fictionalizing and internal discourse literally, as describing objects of a certain kind. And in the first phase of philosophical theories of fiction, the dominant approach was to think of attributions like these as literal descriptions of objects that (in some sense) fit the descriptions (Parsons 1980; Wolterstorff 1980; Zalta 1983). Yet in most cases no such (concrete) objects that possess the properties ascribed in the stories really exist. Thus, those who take such statements descriptively tend to take them to describe not real, concrete objects but rather Meinongian nonexistent objects or abstract objects. Even if we accept that there are objects described in such attributions, however, problems arise. For a nonexistent object cannot be thought to literally have all of the properties it is ascribed in the work of literature: Hamlet (in contrast with the ghost of old King Hamlet) is ascribed the property of being a real or existent man. But the nonexistent man Hamlet can’t have the property of existing. If one thinks of fictional characters as abstract entities rather than as nonexistent objects, other problems arise. For abstract objects can’t be thought to literally have properties like walking down the street or breaking a leg. Indeed, for most of the properties commonly ascribed to the characters of a story, it would seem to be a category mistake to think abstracta could have those properties. In response to the first difficulty, Meinongian realists about fictional characters have taken one of two routes. Some, with Terence Parsons (1980), distinguish different types of properties: nonexistent objects may really possess the nuclear properties they are ascribed in the story (like being a man and being a prince) but not “extra-​nuclear” properties (like existence, completeness [in the sense of, for every property P, either having or lacking P],

If Models Were Fictions, Then What Would They Be?  55 etc.)—​although they may have “watered-​down nuclear” simulacra of these properties. Others, such as Edward Zalta (1983), treat fictional characters as abstract objects and distinguish two modes of predication. On Zalta’s view, fictional characters encode the properties ascribed to them in the story but do not exemplify those properties as regular things do; Hamlet, on this view, encodes but does not exemplify existence. This distinction also resolves the second problem: while abstracta can’t exemplify the property of walking down the street, they can encode it. The moral here is not that such views cannot be made coherent—​they can, as Parsons and Zalta have both shown. Instead, the moral is that difficulties in unraveling ways we think and talk about fictions are not easily resolved just by holding that there are description-​fitting objects. In work on models, as in work on fiction, one historically prominent approach was to take the model descriptions literally as about description-​fitting objects. Ronald Giere, for example, proposes taking the objects described in model descriptions, such as the simple harmonic oscillator, as “abstract entities having all and only the properties ascribed to them in the standard texts” (1988, 78). Martin Thomson-​Jones argues at length, however, that various attempts at treating models as abstract objects that have the properties they are described as having run into trouble. One problem is that the theories describe the objects of the model system as having properties that abstract entities cannot have. The simple pendulum described in a model, say, is said to have a certain length and to move through space over time in a certain way, but abstracta of course cannot do that (Thomson-​Jones 2010, 291). One could (analogously to Zalta’s move) take the relevant abstracta to bear some other relation to their properties (encoding them, or having them as parts). But that would undermine the original attractions of the view, which lay in the idea that models can be straightforwardly compared for similarity with the real-​world target systems (2010, 293). Thomson-​Jones concludes, “We should learn to do without description-​fitting entities corresponding to descriptions of missing systems” (2010, 298). The preceding is not meant as a refutation of or even original argumentation against views that take stories or model descriptions to be about description-​fitting objects. Instead, it is intended just to serve as a reminder of difficulties that are known to arise if we take that route. Such difficulties have motivated looking differently at fictional discourse—​and, similarly, at scientific discourse about model systems. Perhaps we were wrong to take these kinds of internal statements about fictional characters or model systems

56  The Scientific Imagination literally, as accurate descriptions of a certain (kind of) object. Perhaps instead we should see the statements in works of fiction not as descriptions of abstract, nonexistent, or other sorts of objects at all, but rather as props in games of make-​believe. This reaction against traditional forms of realism has led, in turn, to the popularity of anti-​realist views of fictional characters and models.

2.2  The Promise of Pretense

In recent years, the most popular anti-realist alternative to traditional realism about fictional characters has been to adopt a pretense view. On the pretense view, the text of Hamlet should not be taken literally, as describing a (nonexistent or abstract) prince. Instead, it should be seen as enjoining us to make believe that there was such a prince. On the pretense view, works of literature are taken as "props" in games of make-believe, which make certain things fictional-in-the-game. Whatever is directly stated (and not retracted or undermined) in the story is something we are instructed to imagine by the official game of make-believe. But that's not all. "Principles of generation" may also license us to infer what else we are to imagine, even when it is not explicitly stated. So, for example, for realistic works of fiction, we are entitled to infer (unless indicated otherwise) that the people described have the usual number of limbs and are psychologically similar to real people. Internal claims about the story's content are counted as true if (given the principles of generation and the features of the "prop"—in this case, what the story says) someone who utters them pretensefully makes it fictional of herself that she speaks truly in the make-believe game authorized for the work (Walton 1990, 400).

The pretense view offers a marked improvement over neo-Meinongian views in treating discourse within and about the content of works of fiction. It is intuitively plausible that we are engaged in something like imagining or pretending when we write works of literature and discuss their content. Acknowledging that pretense means that we needn't think of internal discourse descriptively, and so needn't accept that there are nonexistent or abstract objects that (in some sense) have the properties described. This in turn saves us from other tangles: we needn't distinguish nuclear from extra-nuclear properties or distinguish two different modes of predication to capture the sense in which statements like "Hamlet is a prince" are true: that

If Models Were Fictions, Then What Would They Be?  57 statement is true not if there is a nonexistent or abstract object that is (or encodes) being a prince, but rather if someone who says “Hamlet is a prince” makes it fictional of herself that she speaks truly in the game authorized by Hamlet. Frigg (2010a, 2010b, 2010c) and Toon (2010, 2012) develop views of scientific models based on Walton’s treatment of fiction. Frigg argues that model descriptions should be understood (parallel to stories) as props in games of make-​believe (2010a, 260). So to say that the simple pendulum has no frictional forces at the point of suspension is not to (falsely) report that there is a concrete object like that, nor is it to describe an abstract object as having properties it could not have (a location of suspension). Nor do we need to consider some alternative way in which an abstract object may have a property (or alternative, perhaps mathematical properties we can attribute to it). Instead, according to the pretense view, the model description of the simple pendulum serves as a prop in a game in which we are to imagine that there is a concrete object suspended in a frictionless way from a given point (and so on). Attributing concrete properties to models in internal discourse “is explained as it being fictional that the model system possesses these properties” (Frigg 2010a, 261). It is fictional that the model system has these properties, roughly, if the model description together with the appropriate rules of generation enjoin us to imagine that there is a system that has these properties (Frigg 2010c, 268). Generally, for a statement p about what is the case within a model system to be true, the model description together with the relevant principles of generation must prescribe that p be imagined (Frigg 2010c, 262).
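Put schematically, the truth condition just stated might be glossed as follows (this is only a rough reconstruction of the condition Frigg and Walton describe, not notation that either of them uses):

\[
p \ \text{is true in a model system} \iff \text{the model description, together with the relevant principles of generation, prescribes imagining that } p.
\]

The right-hand side is what the pretense theorist offers in place of a description-fitting object of which p would be literally true.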

2.3  Problems for Pure Pretense for Fiction

But while the pretense view brings certain clear advantages in handling fictionalizing and internal discourse, it also faces well-known problems. For we have more ways of talking about fictional characters than just participating in the pretense licensed by the story. Historical and literary critical discussions, in particular, often speak of fictional characters in what I have called "external" contexts (Thomasson 2003b, 207): speaking of them not as people but as fictional characters, discussing the circumstances of their creation, the sources for them and their further influences on literary history, their appearance in various stories or other media, their development or novelty, and

58  The Scientific Imagination so on. So, for example, we might explain to a puzzled child that Gregor Samsa is just a fictional character and that no real person could turn into an insect. Works of literary history may say things like “Hamlet is one of Shakespeare’s most famous creations,” “Hamlet also appears in Stoppard’s Rosencrantz and Guildenstern Are Dead,” or “The earliest portrayal of Hamlet onstage was by Richard Burbage.” Or we might speak of Jane Austen’s character Emma Woodhouse as drawn with a wealth of detail hitherto unseen in the English novel, or as being a core source drawn on for the character Cher in the movie Clueless, and so on. Intuitively, all of these are cases in which we step outside the pretense we participate in when reading the story or discussing its content. In these contexts we speak not of a person but of a character, a literary figure created by an author, in a given historical context (typically not the same as the circumstances of creation attributed to the person in the story), with distinctively literary attributes, sources, and influences. Walton, however, takes all discourse apparently involving reference to fictional characters to be best understood as (at least implicitly) in the context of a game of make-​believe. Having uncovered the role of make-​believe in internal statements about fiction, Walton goes on to see it everywhere. “When realists claim with a straight face that people refer to and talk about fictional entities,” he writes, “they are overlooking or underemphasizing the element of make-​believe that lies at the heart of the institution. They mistake the pretense of referring to fictions, combined with a serious interest in this pretense, for genuine ontological commitment” (1990, 390). As a result, he gives us an anti-​ realist view of fictional characters: on his view there are no such things, and so there is no ontology of them to give. We just pretend that there are. I will call this a “pure pretense” view, since it holds that pretense plays a prominent role in understanding both internal and external fictional discourse. But how can we give a pretense-​based reading of external discourse about fiction? Clearly we cannot understand external statements like those cited earlier as things that the text authorizes us to pretend. The text of “The Metamorphosis” does not authorize us to pretend that Gregor Samsa is a creation of Kafka’s, but rather that he is a man of woman born. The text of Hamlet certainly does not authorize us to pretend that Hamlet appears in Rosencrantz and Guildenstern Are Dead. Nor does the text of Emma authorize us to pretend that Emma Woodhouse is drawn with an unusual wealth of detail and psychological insight, or that she is the source for a later character in Clueless.

In order to hold that all such external statements are nonetheless in the context of a pretense, Walton suggests that those who utter them are involved in unofficial or ad hoc games of make-believe (1990, 406). For example, where statements like "Gregor Samsa is a fictional character" are concerned, Walton writes:

Walton similarly treats claims about characters being created by authors as implicitly invoking an unofficial game “in which to author a fiction about people and things of certain kinds is fictionally to create such” (1990, 410). What should we do with talk about characters appearing in other works—​ treat them as invoking unofficial games in which we pretend that works are places and characters are people who go there? I am not even sure how to start thinking of claims about characters being drawn with an unusual wealth of detail in the terms that a pure pretense view would require. Certainly these seem to be straightforward claims about literary figures rather than pretenseful claims about imagined people. In short, while it seems right that pretense is involved in our internal claims about fictional characters (in the context of discussing the content of works of literature), it is far more of a stretch to think that all talk about fictional characters should be understood as in the context of a pretense. First, it is psychologically implausible—​those who are engaged in literary-​ historical discussions about the number of stories a character appeared in, about its historical sources and influence, about the techniques the author uses in developing the character, and so on seem to be involved in a straightforward literary-​historical discussion that involves stepping outside of the pretense. Even if the psychological plausibility point can be discounted, treating both internal and external fictional discourse as implicitly in the context of a pretense leads to an ad hoc theory. The analyses of apparently true external sentences must each be concocted by supposing there is some new ad hoc game of make-​believe implicitly invoked, according to which one who makes the relevant utterance speaks truly. There are no rules for detecting the

60  The Scientific Imagination presence of games, their presence is often not intuitively plausible, and they are a disparate lot. So it seems more like an idle hope of pretense theorists to analyze all fictional discourse in this way, in order to avoid accepting that there are fictional characters, than like a principled solution driven by linguistic or psychological evidence. Moreover, as I  have argued elsewhere (Thomasson 2003b), treating all apparent reference to fictional characters as pretenseful does double the revisionary work needed to offer an account of fictional discourse. Some revisions to a face-​value understanding of fictional discourse are essential—​ for there are apparent contradictions that arise, for example, between saying “Hamlet is a man” and “Hamlet is a fictional character,” or “Frankenstein’s monster was created by Dr. Frankenstein” and “Frankenstein’s monster was created by Mary Shelley,” and so on (2003b, 205). We can avoid the contradictions by treating the first statement in each case as implicitly in the context of a pretense—​even if we take the second to be literally true. But the pretense theorist proposes that we understand both as in the context of a pretense (two different pretenses) and thereby does double the revisionary work needed to make sense of fictional discourse. It seems a reasonable principle that we need justification for interpreting discourse in a revisionary way—​but what is the justification here for doing double the revisionary work needed? Pure pretense views of fiction have been well received and popular largely because they are thought to offer the key to avoiding “postulating” fictional characters, offering a sparer and less “mysterious” ontology. This is not prominent among Walton’s own explicit goals, however, and he denies being motivated by a goal of avoiding abstract entities in general (1990, 390). He does, however, think that there are “grounds for being wary of fictional entities that are not readily applicable to abstractions generally”—​in particular, that in many ordinary contexts we naturally claim that fictions do not exist (1990, 390). Walton also takes pride in giving paraphrases that don’t threaten “to force fictional entities on us” (1990, 416) and excoriates some sorts of fictional realists for engaging in “voodoo metaphysics” (1990, 385). I will return to discuss the alleged ontological advantages of avoiding fictional characters later. But if we take Walton at his word, that his primary interests lie not in parsimony but simply in giving a better account of the discourse, we might begin to question the grounds for thinking that pretense is always involved in talk apparently about fictional characters.

If Models Were Fictions, Then What Would They Be?  61

2.4  Problems with Pure Pretense for Models Anti-​realist accounts of models have been developed by Roman Frigg (2010a, 2010b, 2010c), Adam Toon (2012), and Arnon Levy (2015), in ways inspired by Walton’s pure pretense account of fiction. As in the fiction case, this approach appeals to many as a way of accounting for the discourse while avoiding ontological commitments. As Frigg puts it: What metaphysical commitments do we incur by understanding models in this way? The answer is: none. Walton’s theory is antirealist in that it renounces the postulation of fictional or abstract entities, and hence a theory of scientific modeling based on this account is also free of ontological commitments. (2010a, 264)

Elsewhere Frigg again emphasizes the fact that “this account is ontologically parsimonious; we have not incurred ontological commitments to fictional entities” (2010c, 274). Toon writes, “If we were to understand model systems in the same way that Walton understands fictional characters then it seems that we would conclude that there are no model systems” (2012, 58), and Levy notes that on his view “models need not be seen as genuine objects” (2015, 784). At points Frigg seems to embrace this anti-​realism and even present it as an advantage, urging that “we need to know what kind of commitments we incur when we understand model systems along the lines of fiction, and how these commitments, if any, can be justified” (2010b, 113), and giving voice to worries that “fictional entities are beset with philosophical problems so severe that avoiding fictional entities altogether would appear to be a better strategy” (2010b, 101). Toon similarly presents his anti-​realism as an advantage, writing, “The make-​believe view has an advantage over existing accounts of scientific representation, since it is able to accommodate models without objects” (2012, 82). Yet, like Walton, Frigg does not present ontological parsimony as his primary motivation. Indeed, at certain points he suggests that parsimony is not really important to him: It is not, in my view, a condition of adequacy that the account we propose be metaphysically parsimonious. As a matter of fact, the account I develop

below eschews commitment to fictional entities, but this is accidental, as it were. To say it a different way, it just so happens that the theory that provides the most convincing answers [to five questions about models] is also metaphysically parsimonious; but if it had turned out that a metaphysically substantial theory (i.e. one that is committed to fictional entities) had provided the best answers, then we should have chosen that theory. (2010b, 113)

Instead of taking his view to give a knockdown case against realism about models (or fictional characters), he puts it more moderately: The point to emphasize here is just that whatever these reasons may be [for preferring a realist view], the needs of science are not one among them. (2010a, 264)

I will argue, however, that we have reason to think that the “needs of science” do give us reason to accept that there are models, and that our scientific discourse sometimes refers to them—​that our talk of models is not always in the context of a pretense. The well-​known problems with pure pretense views of fiction carry over to the parallel views of models. While the pretense approach gives us a good way of understanding internal talk of model systems as prescriptions to imagine, just as in the case of literary fictional characters, this is not the only way we talk about model systems. Nor is it the only way we need to talk about them in the practice and study of science. Attending to external discourse about models raises problems for a pure pretense theory of discourse about models that parallel the problems for pure pretense approaches to fictional discourse. In their primary use, we use models to learn about the world—​the target systems described. But we also, as Godfrey-​Smith points out, come to talk about models themselves, as objects of study (indeed, objects of study that can themselves be further represented in toy models). Thus Godfrey-​Smith writes: It may then happen that this fictionalizing becomes more systematic, giving rise to a tradition in which fictional objects are studied as topics in their own right. Scientists in the field get used to discussing how such systems behave, get used to talking of what is true or false of them—​get used to treating a fictional model system as an object in itself. (2009, 19)

If Models Were Fictions, Then What Would They Be?  63 The parallels with literary fiction are again striking. For just as we engage in “external” historical and critical discussion about fictional characters—​ where that discourse cannot be taken to be in the context of pretending what the story authorizes—​so is there a great deal of historical, theoretical, and critical talk about models that cannot be understood as implicitly in the context of a pretense authorized by the model description. So, for example, we speak historically when we discuss the sources and influence of a model, saying that the quantized shell model of the atom was proposed by Niels Bohr in 1913, on the basis of modifying the Rutherford model, and was in turn modified and enhanced by the Sommerfeld model. But the imagined atoms themselves are not pretended to have been proposed or created by Bohr or to have influenced the Sommerfeld model. Moreover, the truth of such external sentences, as Contessa (2010, 223) points out, is settled largely by empirical (in this case historical and archival) evidence—​ not by working out what follows from a model description. Theoretical discourse about models—​including philosophical discourse—​ also frequently requires us to say things about model systems that cannot be construed as implicitly in the context of what the model description instructs us to pretend. Thus, for example, Frigg himself (with Stephan Hartmann) describes model systems as capable of yielding results where theories remain silent, as capable of being good models even if they are false, and so on (Frigg and Hartmann 2012). But the model descriptions certainly don’t authorize us to imagine that infinite populations of rabbits are capable of yielding results or of being good even if false (how do you imagine a false rabbit?). Another key theoretical claim about models is that model systems represent a worldly target system (Frigg 2010b, 121). But this claim is also problematic on a pure pretense view that eschews all reference to models and treats all talk of models as implicitly in a pretense. For again, this is not something the model description instructs us to pretend—​and it seems like a straightforward theoretical claim about the model system. Frigg engages in a long discussion of how we can represent nonexistent objects (2010b, 123). But the more difficult question is how we can truly say that model systems represent target systems in the world if, as he apparently holds, there are no model systems. Frigg accepts a denotationist view according to which X represents Y if and only if X denotes Y and X comes with a key that specifies how facts about X are to be translated into facts about Y (2010b, 126). But how can we then attribute representation where X does not exist? As Toon puts it, “If there are no model-​ systems then there can be no facts about them and we cannot establish an

object-to-object [representation] relation between model-systems and the world” (2012, 58; cf. Levy 2015, 789–790). Toon (2012, 58–59) argues that Frigg thus must either provide a different account of the way model systems represent target systems (one that does not make reference to model systems) or become a realist after all.

Moreover, scientists themselves do not only develop and use models—it is also essential to the work of science itself that they critically examine models, discussing their relation to the real-world target phenomena and their usefulness as providing a means of knowledge of the target phenomena. Thus the ability to make sense of external discourse about such models is arguably crucial for scientific models, even more so than for literary fictions. Such critical discourse might include claims that certain economic models that treat agents as fully rational and self-interested are based on faulty psychological assumptions or are incapable of giving the desired information about real-world target systems. We also comparatively evaluate models, saying, for example, that one model of hurricane development has higher resolution or greater historical accuracy than competing models. But, again, it would be a category mistake to think we are instructed to imagine that the interacting agents are based on faulty assumptions, or that the hurricanes represented in the model are to be imagined as high-resolution or historically accurate (while models might be high-resolution or accurate, giant windstorms are not the right sort of thing to be that).

In short, the history, theory, and even internal critical work of science itself seem to require us to talk about models in external ways, quite distinct from the ways we talk of models while engaged in pretending what they authorize us to pretend. But it is far from obvious how to understand this external talk on a pretense model. Could we, following Walton, suggest that we are involved in some ad hoc game of pretense when we say that Bohr developed the quantized shell model of the atom, or that traditional economic models involve psychologically unrealistic assumptions? Toon (2010, 2012) explicitly suggests adopting Walton’s unofficial game strategy, writing: “When scientists appear to talk about theoretical models as objects . . . we should not take this talk too seriously” (2012, 131–132). So suppose we say that Lopez developed a model of the bouncing bob as a simple harmonic oscillator that enabled better predictions of the movements of the actual bob. How can a pure pretense view understand this? Toon suggests:

Walton’s notion of unofficial games allows us to understand theoretical hypotheses as acts of pretence. Our theoretical hypothesis invokes an unofficial game in which it is fictional that there exists both the bob and an entity called “the model bob” which, fictionally, has all the properties attributed to the bob by the model. (2010, 314)

Yet it seems even more implausible here than in the fiction case to think that we are engaging in some unofficial game of pretense. If we say Lopez developed a model of the bob as a simple harmonic oscillator, are we really pretending that Lopez created a point mass subject to a uniform gravitational field? It seems far more plausible to think of it as straightforward reporting on scientific work. It is even less clear how to use the “unofficial games” strategy to understand claims that an economic model is based on faulty psychological assumptions, or that one model of hurricane development is higher-​resolution than another. One could, of course, try using various paraphrase techniques to avoid reference to model systems. We might try to paraphrase some external statements into talk about model descriptions rather than model systems:  we might, say, aim to paraphrase talk about creation in terms of talk of the relevant scientists writing certain model descriptions. But this itself is not straightforward:  a Nobel Prize–​winning scientist might develop a model, while the relevant model description is written by a graduate student; or a new model description may be written without a new model being created, since the same model may be described in many different ways. As Frigg and Hartmann argue (2012), we can’t on the whole replace objectual talk about model systems with talk of model descriptions: the same model system may be described in many different ways (even in different media—​e.g., verbally, diagrammatically). Moreover, much is true of the model description that is not true of the model system itself (e.g., that it consists of 5,325 words, is in French, is written by a graduate student), and vice versa. Those who are committed to avoiding all reference to model systems might succeed at devising, for each external sentence, a way in which we can understand it as pretenseful or paraphraseable. But, as in the case of fiction, there is risk of this becoming a very ad hoc procedure driven solely by ontological worries (worries that, on my view, are misplaced and misguided). Moreover, as in the case of fiction, it will involve giving revisionary interpretations of what seem like straightforward historical, critical, and theoretical statements. While some revisionism may be necessary to interpret

model talk, the revisions that are needed can be handled by treating internal discourse pretensefully. As in the case of fiction, those who aim to treat internal and external discourse about models pretensefully do double the revisionary work necessary, attributing pretense even where we appear to have straightforward historical, theoretical, or critical claims. Those who aren’t antecedently committed to finding a way to avoid reference to model systems might find these increasingly ad hoc moves both unconvincing and unnecessary. The question to press on those who aim to paraphrase all apparent reference to model systems is this: What is the linguistic and psychological evidence (in each case) that the statement should not be interpreted literally? If the motivation comes (in any case) not from linguistic or psychological evidence relevant to interpreting the discourse but rather from “ontological concerns,” and if we can give grounds for dismissing those ontological concerns, then the paraphrases should be rejected and the simpler view of the discourse adopted.

2.5  An Overlooked Option: Artifactualism

While neo-Meinongian realism and anti-realism were once the main contenders among views of fiction, those are not the only options. The problems arising for both of those traditional approaches have made an alternative view popular. That alternative is an artifactual theory of fictional characters—an approach that is well known in the fiction literature but which seems to have been largely overlooked in the literature on scientific models (with the important exception of work by Thomson-Jones, this volume).

The basic idea is this: It makes sense to think of authors, in writing works of fiction, as engaging in a certain kind of pretense: that there were such-and-such people, that certain events happened, and so on. So it may well be, as the pretense theorist has it, that the primary use of fictional names like “Emma Woodhouse” is a pretending use. Nonetheless, as Stephen Schiffer has put it, on the basis of these pretending uses we may become entitled to introduce a “hypostatizing” use of fictional names that refer (in external contexts) to fictional characters. As Schiffer puts it: “Whenever one of us uses a name in the fictional way . . . then that use automatically enables any of us to use the name in the hypostatizing way, in which case we are referring to an actually existing fictional entity” (1996, 156; see also Searle 1979, 71–72). Just as we normally assume in our literary discussions, all it takes for a fictional

If Models Were Fictions, Then What Would They Be?  67 character (like Emma Woodhouse) to be created is for an author in the right context to write in a way that pretends to be about real individuals (but using a name that doesn’t refer back to any extant individual). When Austen pretended to assert various things about the young woman Emma Woodhouse, she thereby created a fictional character—​an abstract literary creation that we can go on to seriously refer to in the context of critical and historical discussions (see Thomasson 2003a, 147–​153). The artifactual approach has the advantage of enabling us to take external discourse about fictional characters at face value. We can take it to be a straightforward truth that Austen created the character Emma Woodhouse and that the character was a source for Cher of Clueless. Such characters are understood not as special kinds of (imaginary or nonexistent) people but rather as abstract artifacts—​cultural creations similar in kind to stories, theories, and laws. Thus when we talk about the history and development of a character we can take names for the character to refer not to an object that matches the descriptions in the story but rather to an abstract artifact (see Thomasson 2009 for an overview of the literature). There are various choice points about how to develop this basic idea. The key point of the artifactualist approach is to allow that singular terms for fictional characters do really refer (to abstract artifacts) in external discourse. But a great deal is left open about how to treat internal and fictionalizing discourse. One thing is clear: it is not to be read straightforwardly as about abstract or nonexistent objects. (The abstract artifacts do not have properties like being a woman or being handsome, clever, or rich.) But internal discourse might be paraphrased (as discussing what is true according to the story [see Thomasson 1999]) or treated along the lines suggested by pretense theorists—​that is, as participating in the game authorized by the story. As Thomson-​Jones (this volume, 85) suggests, there is room to develop an abstract artifacts view of models, paralleling the artifactualist view of fiction: “Missing systems such as simple pendula are abstract artifacts, created by physicists at a certain point (or over a certain period) in the history of classical mechanics.” As Godfrey-​Smith observes, scientific papers often begin with phrases such as “imagine a population of self-​replicating molecules  .  .  .  ,” “assume a three-​layer neural network learning by back-​ propagation  .  .  .  ,” or “consider a collection of agents playing one-​shot prisoner’s dilemmas at random . . .” (2009, 2). Such sentences we may take to enjoin us to imagine certain scenarios, and may legitimate us in counting certain further claims as true, if they involve or follow appropriately from

68  The Scientific Imagination what we are prescribed to imagine. But utterances or inscriptions of sentences like these, made in the appropriate theoretic context and in a way that enables us to follow implicit rules for determining further features of the model system with the aim of aiding in acquiring knowledge of a target system, may also entitle us to introduce reference to the model system itself. For that, according to ordinary and scientific standards, is all it takes to develop a model. We may then refer to it without pretense as a model, developed by a certain group of scientists with certain theoretical goals, intended to represent a certain target system, influenced in its design by prior models and improved by later models, and so on. Accepting that external discourse may refer to model systems brings significant advantages for the artifactualist theory over a pure pretense approach. For historical, theoretical, and critical discourse about model systems can then be read straightforwardly, outside of any pretense operators, as claims about the relevant model system, considered as an abstract artifact. Discussions about how it was developed, what its sources and influence have been, what techniques and assumptions were used in its development, what uses and failings it has turned out to have, what it represents, and so on can all be understood straightforwardly, without appeal to unofficial games of make-​believe or ad hoc paraphrases. This enables the artifactualist to give a much more straightforward, plausible, and less revisionary approach to external discourse than pure pretense views can. But how should an artifactualist read fictionalizing discourse and internal discourse about what goes on in the model system? Even if we accept with the pretense theorist that scientists are engaged in a kind of pretense when they write a model description, options remain open for the artifactualist. Do we take scientists to be engaged in a de re pretense about that very abstract artifact (created performatively in the initial description), and so to be referring back to the abstract artifact and pretending things of it (for example, that it moves sinusoidally)? Or do we take the scientists to be merely engaged in a de dicto pretense that there is such a pendulum, which moves sinusoidally, and simply take that to entitle others, in external contexts, to refer to the abstract artifact? Similar questions arise about how the artifactualist should interpret internal (metafictive) discourse about the objects as described in the models. When we say “the simple pendulum moves sinusoidally,” should this be understood (in de re mode) as referring to the abstract artifact and saying of it that it is fictional that that abstract artifact moves sinusoidally? Or should

If Models Were Fictions, Then What Would They Be?  69 we take this (in de dicto mode) as saying that it is fictional that there is a pendulum that moves sinusoidally? There is no need to settle these issues here. I would rather leave it to those who know better the particular challenges of understanding scientific modeling to see which options would serve best. But it’s worth noting that the artifactualist has options (treating discourse of either of these sorts as a de re pretense) that pure pretense theorists lack. Either that option is better than the de dicto pretense option or it is not. If it is, then the artifactualist has an advantage in handling internal and fictionalizing discourse—​as well as in handling external discourse. If it is not, then she should simply adopt the de dicto pretense strategy. Admittedly, accepting that there are abstract artifacts (and identifying these with the fictional characters or model systems) is not a panacea to solve all the puzzles of discourse about fictions or models. One sort of discourse remains tricky for both the artifactualist and the pure pretense view: discourse that compares features of the fiction/​model system with features of the real-​ world/​target system. If we comparatively say “Letoya is as smart as Holmes,” we cannot take this to be literally true on either view: on the pretense view, there is no Holmes to compare; on the artifactualist view, Holmes is an abstract artifact that cannot literally be smart. Similarly, if we say that the sun in the model system is more perfectly spherical than the real sun, we again cannot be taken as uttering a literal truth about the sun of the model system on either a pure pretense or artifactualist approach. Pure pretense theorists have tended to handle these difficulties by appealing to properties in a paraphrase (Frigg 2010a, 263): we can paraphrase such comparative sentences as saying, of a certain degree of smartness, that the degree of smartness we are enjoined to imagine Holmes has is the same as that degree of smartness Letoya has. Or we can say, of sphericality, that sphericality is approximated to a certain degree by the sun, and that the model system enjoins us to imagine that it is approximated to a closer degree by the model sun. Godfrey-​Smith criticizes Frigg on this score, suggesting that those who have qualms about accepting the existence of nonexistent objects should be similarly hesitant to appeal to uninstantiated properties (as one must to apply this strategy across the board) (2009, 113–​114). As he puts it: “It is not clear that giving an explanation of modeling in terms of uninstantiated properties is more down-​ to-​earth than giving one in terms of non-​existent objects” (2009, 114). In my view, however, these cautions about uninstantiated properties are unnecessary. As I have argued elsewhere (Thomasson 2015, following Schiffer 1994,

2003), we can get easy arguments for the existence of properties by making pleonastic inferences from “the shirt is red” to “the shirt has the property of redness” to “the property of redness is possessed by the shirt.” Similar arguments can lead us to accept uninstantiated properties, as we can move from “the wand is not magical” to “the property of magicalness is not possessed by the wand” to infer that there is a property of magicalness that the wand (indeed everything) lacks. But there is no analogously compelling easy inference to claims that there are nonexistent objects: moving from “there is no round square” to “there is a nonexistent round square” is not licensed by our ways of introducing object talk. So, for somewhat independent reasons, I think neither the pure pretense theorist nor the artifactualist need have any qualms about appealing to properties in their analyses of comparative statements—even where these are uninstantiated. Their common analyses of comparative statements might raise eyebrows simply for their cumbersomeness, but in my view they should not be thought to raise ontological worries.

So where do we stand in comparing the artifactual view and the pure pretense view? Neither is perfect, and the difficulties of giving a smooth analysis of these areas of discourse are well known. Nonetheless, there is no reason that the artifactualist can’t take over everything the pretense theorist says about fictionalizing and internal discourse and preserve all of the advantages a pretense theory brings in handling discourse of these kinds. The artifactualist also, as I noted earlier, can take on board the pretense theorist’s approach to handling comparative statements. In that case, the two views end up on a par for internal and fictionalizing discourse, as well as for comparative discourse. However, as I have argued, the artifactualist view has advantages over pure pretense views in handling external discourse. Looked at from the point of view of understanding the discourse alone, then, the artifactualist approach looks likely to be preferable overall. At the very least, those interested in a fiction view of models would do well to consider an artifactual view as an option, instead of remaining confined by the old options of anti-realism or neo-Meinongian realism.

2.6  Ontological Qualms

Despite its considerable attractions in understanding the discourse, some may be inclined to resist an artifactual approach on ontological grounds. For the artifactual approach apparently has ontological commitments a pure

pretense approach lacks: it accepts that there are fictional characters/models, and that we sometimes (in external contexts) refer to them and say true things about them. The idea that “problematic ontological commitments can be avoided” is undoubtedly behind the attraction pure pretense views hold for many metaphysicians (though it ostensibly is not a primary motivation for Walton and Frigg themselves). So it may be worth saying something in closing about these qualms.

Questions of how seriously we should take these alleged problems of “ontological commitment” and how much weight we should give to parsimony in choosing a metaphysical theory are themselves major issues in meta-metaphysics and cannot be resolved here. I have addressed these issues extensively elsewhere, and refer interested readers there for fuller discussion (Thomasson 2003a; 2007, ch. 9; 2015). Nonetheless, it is worth making a few closing remarks about three different types of ontological concerns that might arise.

First, some might resist accepting that there are model systems on grounds of worries that they would be somehow problematic entities—involving us in contradictions, implausible empirical commitments, difficulties with identity conditions, and so on. But as soon as we stop thinking of model systems as imaginary or abstract objects that (in some sense) have the properties attributed to them by the model description, many of these worries melt away. In any case, I have addressed worries of these kinds elsewhere for fictional characters (Thomasson 1999; 2003b, 219–222). There is every hope that parallel solutions would carry over to model systems understood as abstract artifacts.

Second are concerns about admitting a “strange kind” of object to our ontology: an abstract artifact. But again, as I have argued extensively elsewhere (Thomasson 1999; 2003b, 220–222), abstract artifacts are extremely commonplace. Entities such as theories, stories, laws of state, symphonies, and so on all seem best understood as abstract artifacts. If you are prepared to accept that we refer to any of these, then there should be no barrier to accepting fictional characters and model systems—considered as abstract artifacts.

Finally, there are vague neo-Quinean qualms about really “accepting such objects into our ontology,” about being unparsimonious, and the like. The neo-Quinean approach to existence questions is one I have argued against extensively elsewhere (Thomasson 2007, ch. 9; 2015), and there isn’t space to repeat those arguments here. But think of it this way. On the version of artifactualism developed here, when scientists write a certain description,

beginning with a phrase like “consider a collection of agents playing one-shot prisoner’s dilemmas at random . . . ” (Godfrey-Smith 2009, 2) or “consider a frictionless plane . . . ,” and go on to describe what would be the case in such a scenario, following certain implicit rules for generating further information about the imagined system (that then is supposed to help us gain knowledge of a real-world target system), that is what it is to create a model. (Not just a model description, but also a model system—just as authors who engage in the proper pretense in writing a text not only create a story but also fictional characters.) This seems entirely in accord with how we treat scientists and their productions, and with how we speak of the development of models in talking about science and its history. As a result, it seems that we can get “easy” arguments for the existence of models, so understood, by starting with premises about the relevant modeling activities of scientists. Why deny this face-value view, accepting that all of the relevant activities take place (descriptions are written, predictions made) but denying that there are model systems? What more should one think it would take for a model system to be created than for scientists to engage in certain kinds of modeling activities and to provide certain model descriptions? (Not for there to really be a world with frictionless planes or an isolated population of rabbits!) Once the relevant qualms are put to the side, there seems no reason at all.

2.7  Conclusion

The idea that model systems (and discourse about them) can be understood on analogy with fictional characters (and discourse about them) has been increasingly popular, with good reason. But if we take the analogy seriously, we should bear in mind not only the ways we have of speaking of fictional characters and model systems “internally” as imaginary people, populations, or economies but also the ways we have of speaking of them from an external perspective. The analogies again hold up well. But considering the full range of fictional discourse gives reason for accepting that there are fictional characters we sometimes refer to—and that these are a kind of abstract artifact. Similarly, bearing in mind the full range of discourse about models gives us reason to accept that there are model systems, where these too are a kind of abstract artifact. Perhaps surprisingly, taking seriously the idea that models are fictions also gives us good reason to take models

themselves seriously—and to think that when we speak of them, we are not always pretending.

References

Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.
Contessa, G. (2010). “Scientific Models and Fictional Objects.” Synthese 172: 215–229.
Frigg, R. (2010a). “Models and Fiction.” Synthese 172: 251–268.
Frigg, R. (2010b). “Fiction and Scientific Representation.” In Beyond Mimesis and Convention, edited by R. Frigg and M. C. Hunter, 97–138. Boston Studies in the Philosophy of Science 262. Dordrecht: Springer.
Frigg, R. (2010c). “Fiction in Science.” In Fiction and Models: New Essays, edited by J. Woods, 247–287. Munich: Philosophia.
Frigg, R., and S. Hartmann. (2012). “Models in Science.” In The Stanford Encyclopedia of Philosophy (Fall 2012 ed.), edited by Edward N. Zalta. http://plato.stanford.edu/archives/fall2012/entries/models-science.
Giere, R. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Godfrey-Smith, P. (2006). “The Strategy of Model-Based Science.” Biology and Philosophy 21: 725–740.
Godfrey-Smith, P. (2009). “Models and Fictions in Science.” Philosophical Studies 143: 101–116.
Kripke, S. (2013). Reference and Existence: The John Locke Lectures. New York: Oxford University Press.
Levy, A. (2015). “Modeling Without Models.” Philosophical Studies 172, no. 3: 781–798.
Parsons, T. (1980). Non-existent Objects. New Haven, CT: Yale University Press.
Salmon, N. (1998). “Nonexistence.” Noûs 32, no. 3: 277–319.
Schiffer, S. (1994). “A Paradox of Meaning.” Noûs 28: 279–324.
Schiffer, S. (1996). “Language-Created Language-Independent Entities.” Philosophical Topics 24, no. 1: 149–167.
Schiffer, S. (2003). The Things We Mean. Oxford: Oxford University Press.
Searle, J. (1979). Expression and Meaning: Studies in the Theory of Speech Acts. Cambridge: Cambridge University Press.
Thomasson, A. L. (1999). Fiction and Metaphysics. Cambridge: Cambridge University Press.
Thomasson, A. L. (2003a). “Fictional Characters and Literary Practices.” British Journal of Aesthetics 43, no. 2: 138–157.
Thomasson, A. L. (2003b). “Speaking of Fictional Characters.” Dialectica 57, no. 2: 207–226.
Thomasson, A. L. (2007). Ordinary Objects. New York: Oxford University Press.
Thomasson, A. L. (2009). “Fictional Entities.” In A Companion to Metaphysics, Second Edition, edited by Jaegwon Kim, Ernest Sosa, and Gary Rosenkrantz, 10–18. Oxford: Blackwell.
Thomasson, A. L. (2010). “Fiction, Existence and Indeterminacy.” In Fictions and Models: New Essays, edited by John H. Woods and Nancy Cartwright.
Thomasson, A. L. (2015). Ontology Made Easy. New York: Oxford University Press.

Thomson-Jones, M. (2010). “Missing Systems and the Face Value Practice.” Synthese 172: 283–299.
Toon, A. (2010). “The Ontology of Theoretical Modeling: Models as Make-Believe.” Synthese 172, no. 2: 301–315.
Toon, A. (2012). Models as Make-Believe: Imagination, Fiction and Scientific Representation. New York: Palgrave Macmillan.
Walton, K. (1990). Mimesis as Make-Believe. Cambridge, MA: Harvard University Press.
Wolterstorff, N. (1980). Works and Worlds of Art. Oxford: Clarendon Press.
Zalta, E. (1983). Abstract Objects: An Introduction to Axiomatic Metaphysics. Dordrecht: Reidel.

3  Realism About Missing Systems
Martin Thomson-Jones

3.1  The Problem I’m Trying to Solve

My starting point is this question: How does scientific modeling work? Given the roles modeling plays in a wide range of sciences—in experimental design, prediction, the evaluation of evidence, the construction of explanations, and more—any inquiry into the epistemology and methodology of the sciences will have to concern itself with this question at some point.1 But my topic in this chapter is not scientific modeling of all kinds; that would be far too large a focus. The particular kind of modeling I am concerned with here is what I will call missing-systems modeling.

It is by now a familiar fact about scientific practice (familiar to philosophers of science, anyway, and not only to them) that scientists of many stripes devote considerable time and energy to describing and imagining systems that cannot be found in the world around us. Physicists, for example, regularly invoke “the simple pendulum,” which comprises a mass attached to a perfectly rigid rod or cord, subject to a perfectly uniform gravitational field, and experiencing neither friction at the point of suspension nor air resistance. Similarly, neoclassical economics is based on the analysis of systems of exchange in which perfectly rational agents with perfect information trade goods with no transaction costs. It is natural to call such systems “imagined” or “imaginary” (cf. Frigg 2010a, 253; Godfrey-Smith 2006, 734–736). I have called them missing systems in an attempt to leave open, at least initially, questions about whether such things exist, and if so, what sorts of things they are (Thomson-Jones 2007, 2010), and I shall stick with that term here. But this choice of terminology should not obscure the fact that when scientists are engaged in the activities we are

1 The question is also one part of a larger investigation into the nature of scientific representation (an investigation that, I take it, is motivated in just the same ways). In saying this, I am assuming only that some modeling involves representation, not that all modeling does—although, in fact, I think the latter is quite likely.

inclined to describe as “describing the simple pendulum” and “studying systems of exchange involving perfectly rational agents,” they are performing acts of the imagination, and often quite elaborate ones.2

At least two kinds of modeling involve the describing and imagining of missing systems. First, scientists often model actual, concrete systems as missing systems. Physicists might model a particular bridge swaying in high winds as a simple pendulum; economists might model the time-series of prices for certain commodities as arising from a system of exchange between perfectly rational and well-informed agents trading without transaction costs.3 The concrete system being modeled as a missing system in such cases is standardly called the target system; accordingly, we can call this sort of modeling targeted missing-systems modeling. Second, scientists sometimes imagine and study missing systems without reference to any further target: one example is Poincaré’s famous description of a spherical, thermally varying world with an apparently non-Euclidean geometry.4 This activity is in itself sometimes classed as a sort of modeling; we might call it untargeted missing-systems modeling. My focus in this chapter will be on developing an account of targeted missing-systems modeling.5 But I will take it that constructing a parallel account of untargeted missing-systems modeling would largely be a simple matter of subtracting parts of the account of targeted missing-systems modeling I will present.6

We face a number of problems in attempting to construct an account of how missing-systems modeling works. Here are two (the first of which has two parts):

The semantic problem: Scientists routinely treat as true sentences that make claims about the features of missing systems—the sentence “Simple

2 The term “system” should not be read as heavily freighted in what follows. I am using it merely as a catchall, intended to cover the objects, events, processes, and so on (missing or otherwise) that scientists investigate in their various domains. 3 The first example, at least, is perhaps too simplified to be very realistic. I will persist in the use of such examples, however, for the sake of brevity and clarity, and in the conviction that we can make progress on the issues I mean to address without getting entangled in the complex details of more sophisticated examples. 4 Poincaré 1952, 64–​68. For more discussion of this example, see Achinstein 1968, 218‒219 and n. 18; Thomson-​Jones, 2007, 2, 9–​10; Thomson-​Jones 2010, 284–​285. 5 One reason for this is that I am interested in seeing whether there is a viable account of missing-​ systems modeling that holds on to the “indirect” picture of modeling that has formed the backdrop to so much philosophical work on modeling more generally. See section 3.3. 6 For different accounts of untargeted missing-​systems modeling, see Toon 2012, ch. 3, sec. 3.3 (esp. 77, para. 2); Levy 2015, sec. 4.4.

pendula move sinusoidally,” for example (that is: approximately, at small angles).7 Yet scientists also say that the systems in question don’t exist. How, then, can such sentences even (i) manage to be meaningful, let alone (ii) express true claims?

The knowledge problem: Scientists seem to study missing systems, and to discover things about them. If the systems in question don’t exist, how can that be?

A third problem about missing systems arises when we consider the philosophical literature on scientific modeling: The use problem: Many philosophers claim that scientists use missing systems as models. Given, again, that the systems in question don’t exist, and given that it is a necessary condition on one’s using something that it exist, how could such a claim be correct?

These are the problems I want to address. I will begin by characterizing a general approach to the problems I’ve just outlined, the fiction approach. Then I will present a specific implementation of that approach, the abstract artifacts account (or abstract artifacts view) of missing-​systems modeling, and show how the account solves these three problems. Finally, I will consider some objections to the abstract artifacts account, and do my best to defuse them.
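The semantic problem above turns on the textbook claim that simple pendula move sinusoidally, approximately and at small angles. For readers who want the formula behind that example, here is a minimal sketch of the standard small-angle result, in generic LaTeX notation; the symbols \theta, \theta_0, g, L, and T are ordinary textbook labels, not anything introduced by this chapter:

  \ddot{\theta} = -\frac{g}{L}\sin\theta \;\approx\; -\frac{g}{L}\,\theta \quad (\text{for small } \theta),
  \qquad \theta(t) = \theta_0 \cos\!\left(\sqrt{g/L}\;t\right) \quad (\text{released from rest at } \theta_0),
  \qquad T = 2\pi\sqrt{L/g}.

Linearizing the sine turns the equation of motion into that of a simple harmonic oscillator, which is the precise sense in which the motion is sinusoidal only approximately, and only for small initial angles.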

3.2  The Fiction Approach

A first step toward the sort of account I want to develop is provoked by the observation that there is another, very familiar kind of discourse in which we treat as true sentences that seem to make claims about various objects while, at the same time, we display an inclination to deny that the objects in question exist. This is discourse about what I’ll call ordinary fiction—that is, fiction in the ordinary sense of the term, including novels, short stories, plays, fiction films, and the like.8 Suppose, for example, that Eliane and Karlsefne

7 I will omit this qualification in what follows.
8 Some of which, of course, is not ordinary at all; a famous remark of Strawson’s on the term “folk psychology” comes to mind (1985, 56). A standard qualification is called for, too: the term “ordinary fiction” is intended to bring to mind works that are purely fictional, or at least more nearly so, as opposed to, say, novels about the lives of historical figures that contain much factual detail (e.g., David

78  The Scientific Imagination are having coffee and discussing Madame Bovary, which Karlsefne has just finished and Eliane read some time ago. At one point in their conversation, Karlsefne, being an astute sort of fellow, says, “But Emma is unhappy.” Here Karlsefne is treating a sentence of the form “Emma is P” as meaningful; at the same time, being a typical reader, and free from confusion about the sort of book Madame Bovary is, Karlsefne would say that Emma Bovary does not exist. Eliane would agree that Emma does not exist; nonetheless, her response to Karlsefne’s utterance of the sentence “But Emma is unhappy” is to say, with just a hint of impatience, “Yes, of course that’s true.” So Karlsefne and Eliane are treating a sentence of the form “X is P” as both meaningful and true despite their shared inclination to deny that X exists. In these respects, the parallels to the behavior of scientists around talk of the simple pendulum are exact. Thus discourse about ordinary fiction gives rise to a problem that perfectly parallels the semantic problem about missing-​systems modeling. It might seem at first that the parallels do not extend to the knowledge problem or the use problem. It does, after all, sound unnatural to describe someone as “studying the properties of Emma Bovary,” or “discovering that Emma Bovary has certain features,” and perhaps downright odd to say that someone is “using Emma Bovary as a model of [so-​and-​so].” Nonetheless, there are analogies in these places, too. For the knowledge problem, compare such activities as “debating Emma Bovary’s motivations,” “making a case that Emma Bovary had an Electra complex,” and “working out that Emma must have been twenty-​four when she began the first affair” with the activity of studying the simple pendulum and discovering that it has certain features. And for the use problem, compare the activity Eliane is talking about when she says to Karlsefne, “Thinking about Emma helps me understand my sister” with the activity of using the simple pendulum as a model of a bridge swaying in the wind. Although these parallels are not quite as neat as the semantic parallels seem to be, they are close enough to add momentum to the fiction approach to missing-​systems modeling.9 Lodge’s novel about Henry James, Author, Author, which aims to be true to the historical evidence about dates and locations, and which interweaves a significant number of quoted passages from actual letters, reviews, and the like). 9 It is an interesting question just how far the second and third parallels go; I will limit myself to a few tentative and preliminary remarks here.   Certainly there are potentially significant disanalogies. With respect to the knowledge problem, one apparent disanalogy between ordinary fiction and scientific modeling has to do with our attempts to come to know things about fictional characters and about missing systems that are not explicit in the text—​in the novel Madame Bovary, say, or the initial description of the simple pendulum. One difference between the two cases would seem to be that we are often in a position to have

Realism About Missing Systems  79 I am using the term “fiction approach” to encompass a collection of ideas put forward by a number of authors who differ both in what they emphasize and in how they fill in the details.10 Nonetheless, we can get a good fix on the general thrust of the approach by considering four theses: Activities: The initial imagining and describing of a missing system in the sciences is an instance of fiction-​making (an activity, that is, of the same sort as any standard instance of the construction of a work of ordinary fiction) and subsequent episodes in which scientists think about the missing system in question are episodes of imaginative engagement with a work of fiction (just like, say, a reader’s interaction with a novel). Ontology: Missing systems are ontologically on a par with fictional characters, both with respect to whether they exist and, if they do, with respect to the sort of thing they are. (So: the simple pendulum exists if and only if Emma Bovary does, and if it exists, then it is the same sort of thing as Emma Bovary.) much greater confidence in such claims about the properties of missing systems than we are in similar claims about fictional characters—​compare the claim that simple pendula move sinusoidally, for example, with some claim about Emma Bovary’s motivations. There would seem to be disanalogies with respect to the use problem, too. Those who are willing to indulge talk of “using” missing systems will say that by using the simple pendulum as a model of some actual, concrete system in the world around us, we can make predictions about the behavior of those target systems—​even good predictions, when we know what we are doing. But the idea that Eliane might come to be in a position to make predictions about her sister’s behavior by thinking about Emma Bovary sounds less plausible.   At least in the second case, however, there are things we might say to narrow the gap a little. The claim that Eliane might come to understand her sister better by thinking about Emma Bovary has much more initial plausibility than the claim about prediction (at least to my ear). But it also seems plausible that if Eliane comes to understand her sister better, she will be better placed to know what to expect from her. And then we are at least in the neighborhood of talk about prediction. Perhaps the lesser degree of plausibility that attends talk of prediction in the case of ordinary fiction has to do with the connotations of precision carried by that term. If that is right, then perhaps the gap might narrow even further if we compare the ordinary fiction case to cases of purely qualitative, non-​mathematical missing-​systems modeling in the sciences. Looking at scientific modeling of that sort might also lead to a narrowing of the gap with respect to the knowledge problem, too, if it is true that the “principles of generation” tend to be less clear in cases of non-​mathematical scientific modeling than they are in mathematical cases. See, for example, Thomasson, this volume, on principles of generation; and see Thomson-​Jones 2012 for some discussion of non-​mathematical modeling in the sciences, though in a somewhat different context.   For the present discussion, in any case, it will not matter too much if there are some important disanalogies between the cases of ordinary fiction and scientific modeling. 
Even if so, there are also some significant parallels, as I have pointed out, and that is enough to suggest that it might be worth taking the fiction approach to missing-​systems modeling seriously. At this point in the discussion, in other words, we are in the context of discovery rather than the context of justification. On the other hand, see note 11. (Thanks to Arnon Levy for pressing this issue. And see Frigg 2010a, 257–​258, for another discussion of some parallels between ordinary fiction and scientific modeling.) 10 I  am thinking primarily of Gabriele Contessa (2010), Roman Frigg (2010a, 2010b), Peter Godfrey-​Smith (2006), Arnon Levy (2011, 2012, 2015), Adam Toon (2010a, 2010b, 2012), and myself (Thomson-​Jones 2007, 2010).

Language: Scientific discourse about missing systems has the same semantics and pragmatics as our discourse about fictional characters.

Epistemology: The epistemology of claims about missing systems and claims about fictional characters is the same.

These are relatively bold and unqualified versions of the theses; one could weaken them in various ways, and perhaps one should, but these simpler versions do a better job of getting the gist of the fiction approach across concisely.11 I think these ideas are interesting and suggestive, but they will not carry us very far until we combine them with accounts of how ordinary fiction works in the relevant respects—​accounts of the ontology, semantics, pragmatics, and epistemology of ordinary fiction, of our imaginative engagement with it, and of how we make it. I will devote the rest of this chapter to doing some of that work (but certainly not all of it). There are other ways to do it, and I and others have explored some of them—​in some cases very fully—​but I will not be able to undertake a detailed discussion of those other ways here.12 Two general points before we dive in. First, it is important not to confuse the fiction approach, as I’ve just characterized it, with various kinds of fictionalism that have been proposed in the philosophy of science:  Vaihinger’s, for example, or the kind Ernest Nagel discussed in The Structure of Science, or the kind van Fraassen’s constructive empiricism can be seen as embracing.13 Carefully delineating the relationships between those varieties of fictionalism, and between them and the fiction approach, is 11 One tempting weakening is this: shift to the proposal that missing-​systems modeling works the way some philosophical account of ordinary fiction says ordinary fiction does, while refraining from any claim about whether the philosophical account in question is, in fact, right about ordinary fiction. This means dropping the outright claim that missing-​systems modeling and ordinary fiction work the same way, and the even more intriguing suggestion that missing-​systems modeling just is a variety of fiction-​making. Such a weakening will, inevitably, make the fiction approach to missing-​ systems modeling easier to defend, but it also draws our attention away from the interesting larger question of whether it might be possible to have a unified account of ordinary fiction and missing-​ systems modeling. And insofar as unification is a theoretical virtue in philosophy, that larger question is a potentially important one.   Alternatively, we might simply replace the talk of sameness in activities, language, and epistemology with talk of similarity in certain respects. The seeming disanalogies between ordinary fiction and scientific modeling discussed in note 9 might provide some reason for thinking that we should weaken those theses in that way—​especially epistemology and, perhaps, activities. But again, it might be possible to limit the degree of weakening that is required by looking more closely at non-​ mathematical modeling in the sciences. Such an investigation might also lead us to conclude that there is no single epistemology of claims about missing systems. 12 See the work cited in note 10. 13 See Fine 1993; Kalderon 2005a; Nagel 1961, 134; Rosen 2005, 14–​18; Vaihinger 1924; van Fraassen 1980.

not a trivial task, but I take it to be clear enough that those fictionalisms are distinct from the fiction approach, and nothing good can come of confusing the fiction approach with any of them.

Second, let me admit at the outset that this discussion will not evoke a vivid sense of the fine-grained texture of scientific practice. What’s worse, some of this is metaphysics, and some of it is philosophy of language. But it is metaphysics and philosophy of language in the service of the epistemology and methodology of science. And although it would be a bad thing if all philosophy of science were like this, it would also be a bad thing if none of it were.14

3.3  The Abstract Artifacts Account

Implementations of the fiction approach to missing-systems modeling can be divided into two kinds: those according to which such terms as “the simple pendulum” do refer to something, and those according to which they don’t.15 I will call these “realist” and “anti-realist” accounts, respectively. This follows a usage in the philosophical literature on ordinary fiction, in which views postulating that names such as “Emma Bovary” refer (that is, views according to which there are such things as fictional characters) are sometimes called realist, and views that deny the existence of fictional characters are sometimes called anti-realist. In my view, the most attractive implementations of the fiction approach to date are those developed by Roman Frigg, Adam Toon, and Arnon Levy, each of which draws on Kendall Walton’s account of ordinary fiction.16 These Waltonian accounts are avowedly anti-realist.17 My interest here is in

14 As I argue in Thomson-Jones 2017.
15 That is, according to the stand different implementations take on one of the ontological issues in play in the ontology component of the fiction approach.
16 See Frigg 2010a, 2010b; Levy 2015; Toon 2010a, 2010b, 2012; Walton 1990.
17 There are several claims in Frigg 2010a that are in some apparent tension with a reading of his account as anti-realist. Using the term “model system” as a generic term for such things (if such things there be) as simple pendula and economic systems involving perfectly rational agents, Frigg writes, “The view of model systems that I advocate regards them as imagined physical systems, i.e. as hypothetical entities that, as a matter of fact, do not exist spatio-temporally but are nevertheless not purely mathematical or structural in that they would be physical things if they were real” (253). The presence of the qualifier “spatio-temporally” might reasonably be taken to suggest that Frigg does not mean to deny existence to model systems altogether. Moreover, this sentence seems to conjure a picture in which “purely mathematical or structural” objects are other kinds of things that “do not exist spatio-temporally,” just like model systems, and so the suggestion that model systems exist nonetheless is strengthened by the fact that Frigg embraces an ontological commitment to mathematical and structural objects elsewhere in that work (“Structures themselves are assumed to be Platonic entities in that they exist independently of human minds” [265]). In addition, a couple of pages later

82  The Scientific Imagination developing an equally compelling realist alternative. If we have decided to take the fiction approach to missing-​systems modeling seriously, then there are at least two reasons, I think, for attempting to construct a realist implementation of the approach. The first is provided by a central part of the argument of Amie Thomasson’s contribution to this volume (sections 2.4 and 2.5), and it focuses on the challenges of providing a satisfying account of “external” discourse about missing systems—​discourse about the simple pendulum, say, as a model, such as when we say “The simple pendulum is just a model” or “Imagining the simple pendulum helps us to understand the behavior of bridges swaying in the wind.”18 Thomasson presents a case for concluding that, in the attempt to offer an account of such external discourse, the Waltonian approach is forced into a number of seemingly ad hoc maneuvers, and into making claims that lack psychological plausibility; even then, it is unclear how the account should be extended to cover all the relevant sorts of external modeling discourse. In contrast, Thomasson argues, an account that does posit such entities as the simple pendulum—​a realist account—​would seem capable of accounting for such discourse more straightforwardly.19 Frigg writes that “hypothetical systems are an important part of the theoretical apparatus we employ” (255), and his note 7 is devoted to the difficulties of avoiding ontological commitment to the relevant hypothetical systems. And the hypothetical systems in question are clearly intended just to be the model systems—​see both the sentence quoted from 253 and this one: “My suggestion is that these hypothetical systems in fact are the models [sic] systems” (255). Of course, the sentence quoted from 253 also implies that model systems are not “real,” so there is a certain amount of tension internal to this claim. Separately, Frigg’s inclusion of the relations of “p-​representation” and “t-​representation” (264; cf. fig. 1, on 266) also strikes a realist note, given that in both cases one of the intended relata is a model system.   Despite all this, when Frigg addresses the question of realism head-​on, he says quite emphatically that his account is anti-​realist: “What metaphysical commitments do we incur by understanding models in this way? The answer is: none” (264). I think there is something of a puzzle about how exactly we should reconcile these different parts of Frigg’s account. (See also Toon 2012, 58–​59, on Frigg’s talk of p-​representation and t-​representation, and Thomasson, this volume.) But for present purposes I will simply assume that they can be reconciled, perhaps by way of judicious paraphrase, in such a way as to yield an unambiguously anti-​realist account, given that Frigg seems clearly to be aiming at such an end result.   Incidentally, Godfrey-​Smith employs a formulation very similar to Frigg’s talk of imagined physical systems:  “Roughly, we might say that model systems are often treated as ‘imagined concrete things’—​things that are imaginary or hypothetical, but which would be concrete if they were real” (2006, 734–​735). Given this, some of the same puzzles arise in interpreting Godfrey-​Smith’s account. 18 It is tempting to characterize external discourse by saying that it involves dropping the pretense we seem to be engaging in when we speak as though there are simple pendula. 
This would be tendentious on more than one front, however, not least because it is in tension with the account the Waltonian wants to give of such utterances. 19 I will have little to add about external modeling discourse in what follows. I should immediately note, however, that the realist account I am about to present will also need to address a worry about psychological plausibility; see section 3.5.

Realism About Missing Systems  83 A second reason for developing a realist implementation of the fiction approach is that a realist account will enable us to hold on to a basic picture of modeling, or at least of targeted modeling, that many have found very compelling. On that picture, modeling is indirect, to borrow a term from Michael Weisberg:  we use language, in the first instance, to pick out some sort of non-​linguistic object that is intermediate between our utterances and the target system, and which then represents the target in some non-​linguistic way.20 One central question that arises when we employ this picture is: What sort of thing is the intermediate object? Another, related question is: What is the representational relationship between the intermediate object and the target? The best-​known and most influential answers to the first question are perhaps that the intermediate object is a mathematical structure of one sort or another, such as a set-​theoretical tuple (Suppes 1957, 1960), or a state space with a trajectory running through it (van Fraassen 1987), or that it is an abstract object of a different sort, one that has all and only the properties mentioned in some relevant bit of scientific discourse (Giere 1988).21 Answers to the second question, about the nature of the non-​linguistic representational relationship between intermediate object and target system, have varied in their details, depending in part on how the first question is answered, but they have most often involved talk of similarity of one sort or another. The view that the intermediate object is a mathematical structure of some sort has often been accompanied by the idea that the crucial non-​ linguistic representational relationship is grounded in relations of isomorphism, and isomorphism is just perfect structural similarity (at least on one notion of structure). For Giere and, following him, Godfrey-​Smith, on the other hand, the crucial relation seems to be similarity of a more common or garden-​variety sort (Thomson-​Jones 2010, 291–​292).22 20 Weisberg 2007, 209–​210. Note that the language we use to pick out the intermediate object can include mathematical language; and that, in addition to language, we might use pictures, graphs, and the like. (Weisberg notes these things, too: 2007, 217.) Weisberg’s characterization of the picture is not exactly the same as mine, but both formulations center on the idea that targeted modeling involves a non-​linguistic object that serves as an intermediary between language and the target system. (See Weisberg 2007, 216–​217, for more emphasis on the non-​linguistic nature of the intermediate object.) Another important part of Weisberg’s characterization of the picture is that targeted modeling takes place in three stages (2007, 209–​210); that seems to me true, at least in many cases, and important, but not of crucial relevance to my project here. 21 Of course, modeling that employs a concrete physical object as model fits this indirect picture, too. (In that sort of modeling, our picking out of the intermediate object might involve simple acts of pointing.) See Sterrett 2006 and Weisberg 2013 for careful treatments of this sort of modeling. 22 R. I. G. Hughes offers an account of modeling that explicitly rejects the widespread emphasis on similarity (1997, S330). As I understand it, however, Hughes’s account upholds the indirect picture of modeling.

84  The Scientific Imagination There are good reasons for attempting to hold on to the indirect picture as a picture of modeling in general. For one, it makes for a smooth fit with the “surface grammar” of much modeling discourse (including much of the discourse involved in missing-​systems modeling). For another, it seems perfectly plausible for some varieties of modeling, at least—​when modeling employs a concrete object as a model of the target system (Crick and Watson’s tin-​plate model of DNA, say), and when modeling simply (!) involves characterizing a mathematical structure and then treating it as a representation of the target system23—​and the prospect of a picture of modeling that is unified, at least to some degree, is an appealing one, I take it. We thus have some reason to aim for an account of targeted missing-​systems modeling in which it is indirect. I have argued elsewhere, however, that if missing-​systems modeling involves an intermediate object, it is neither a mathematical structure nor an abstract object of the sort Giere postulates.24 So can the fiction approach provide us with a way of understanding missing-​systems modeling that is true to the indirect picture? The Waltonian implementations of the fiction approach to missing-​ systems modeling are, in many respects, very attractive, but they reject the indirect picture of targeted missing-​systems modeling. Targeted missing-​ systems modeling then becomes a matter of purely linguistic representation.25 That is not in itself a problem, in my view, despite the long-​standing push away from a focus on linguistic representation in the philosophy of science. (Indeed, I have argued elsewhere that we should put more emphasis on linguistic representation of a certain sort; see Thomson-​Jones 2012.) But I am interested nonetheless in seeing whether an account of 23 At least, the indirect picture is plausible in the second case provided we are willing to countenance mathematical structures in our ontology. 24 Thomson-​Jones 2010, in which I also consider and reject a third view suggested by Paul Teller (2001). (And note that the points I make there about Giere’s proposal mean that it is unworkable as a part of an account of any sort of modeling.) See also Thomson-​Jones 2007 for an examination of yet more options. In that work I explore several versions of what I then called the “little fictions approach,” including a view that draws on the account of truth in fiction given in Lewis 1978 to flesh out the idea that the intermediate objects might be concrete possibilia, and the view that the intermediate objects might be the sorts of things van Inwagen (1983) takes fictional characters to be. As will become clear later, it seems to me that Thomasson’s (1999) account of ordinary fiction improves on van Inwagen’s in ways that enable us to construct from it a much more appealing account of missing-​ systems modeling. 25 Frigg’s explicit inclusion of mathematical structures in his overall picture complicates the story here. But note two things: First, if we take Frigg’s anti-​realism seriously and elide the model system from his fig. 1 (2010a, 266) but leave the mathematical structure in place, we would seem to return to a picture of modeling in the tradition of Suppes and van Fraassen. Second, not all missing-​systems modeling is mathematical, and when we elide both the mathematical structure and the model system from Frigg’s picture, linguistic representation is all that is left.

Realism About Missing Systems  85 targeted missing-​systems modeling can be made out that hews more closely to the indirect picture. And to do that under the aegis of the fiction approach fairly clearly means seeing whether a realist account can be made to work.26 If we wish to be realists about the simple pendulum while deferring to physicists on the question of whether there are any concrete simple pendula, then the obvious option is to take the simple pendulum to be an abstract object. In the philosophical literature on ordinary fiction, Peter van Inwagen (1983) was one of the first exponents of the analogous view about fictional characters. The core of van Inwagen’s view is that a fictional character is an abstract object that “holds” the properties mentioned in the fiction, rather than having them. The holding relation in question is understood to be sui generis; we grasp it simply by grasping the sense of the word “is” in which the sentence “Emma Bovary is an unhappy woman,” for example, says something true.27 I think a more promising way of developing the idea that missing systems are abstracta, however, is to draw on Amie Thomasson’s more extensively elaborated account of the semantics and ontology of ordinary fiction, as laid out in her 1999 book Fiction and Metaphysics.28 On that account, fictional characters are abstract artifacts, brought into existence at specific times by the creative activities of authors. The account also includes proposals about the semantics of utterances composing works of fiction, and of various kinds of utterance about works of fiction and fictional characters. The semantic proposals relevant to our purposes are those concerning just two kinds of utterance: fictive utterances, those that go to make up a work of fiction (when it is wholly or partly made up of utterances), and metafictive utterances, those that make claims about the content of a work of fiction without explicitly acknowledging that it is a work of fiction (such as Karlsefne’s utterance of the sentence “But Emma is unhappy” over coffee).29

26 See Contessa 2010 for a different realist implementation of the fiction approach. Unfortunately, I  will have to postpone discussion of the relationship between the account I  present below and Contessa’s, and my reasons for preferring the former; my apologies to Contessa. 27 Section 3.5 of Thomson-​Jones 2007 is devoted to a discussion of the idea of basing an account of missing-​systems modeling on van Inwagen’s account of fictional characters. 28 See Thomasson, this volume, for references to related work on ordinary fiction by others. 29 I am borrowing the terms “fictive” and “metafictive” from Greg Currie, without meaning to import his views about how such utterances should be understood. (See Currie 1990, 30ff., 158.) In Thomasson’s terminology, fictive utterances are part of “fictionalizing discourse,” and metafictive utterances are part of “internal discourse”; see Thomasson 2003, 207.

1. When we utter the sentence "Emma is unhappy" as part of a conversation about Madame Bovary (a metafictive utterance), the name "Emma" refers to an abstract artifact created by Flaubert in writing his novel. Let's give that abstract artifact another name, and call it "A1."
2. Utterances of the name "Emma" in the novel Madame Bovary (fictive utterances of the name) refer to the same abstract artifact, A1.30
3. The proposition expressed by a metafictive utterance of "Emma is unhappy" is the proposition

A1 is such that, according to the novel Madame Bovary, it is unhappy

Thus, the proposition expressed by the metafictive utterance comes out true (given what's true in the fiction Madame Bovary), just as we would ordinarily take it to. The account of missing-systems modeling we get by adapting this account of ordinary fiction is the view I will call the abstract artifacts account. On the abstract artifacts account of targeted missing-systems modeling, then, missing systems such as simple pendula are abstract artifacts, created by physicists at a certain point (or over a certain period) in the history of classical mechanics. An initial presentation of the notion of the simple pendulum—in a textbook, for example—is a presentation of what we might call the simple pendulum fiction.31 Utterances involving the term "simple pendula" that occur as part of the simple pendulum fiction are fictive utterances; certain other utterances involving that term, making claims that are at least in part about the content of the simple pendulum fiction and about the "fictional characters" it introduces (without explicitly acknowledging that the simple pendulum fiction is a fiction), are metafictive utterances. More specifically, we have the following three semantic proposals:
30 One qualification here: On the view in question, the first use of the name in the work is a "sort of performative." Nonetheless, "later [uses] by the author within the novel simply refer back to the character [that is, the abstract artifact] and ascribe it certain properties" (Thomasson 2003, 211; see also Thomasson 1999, 46–49).
31 By "initial presentation," I do not mean to single out the first presentation of the notion of the simple pendulum in history. Rather, I mean the first presentation of the simple pendulum fiction along with what we might call any "retelling" of the fiction. A presentation of the notion of the simple pendulum fiction in a brand-new textbook written this year, or in a university lecture given yesterday, thus counts as an initial presentation of that notion; I take such presentations to be analogous to my retelling of the story of Hansel and Gretel to my daughter before bed. Such initial presentations and retellings are, like first presentations and storytellings, composed of fictive utterances. The contrast I intend by using the term "initial presentation," then, is with discussions of the simple pendulum (or the notion of the simple pendulum), which are composed of metafictive utterances.

1. When we utter the sentence "Simple pendula move sinusoidally" metafictively—say, as part of the discourse involved in modeling a swaying bridge as a simple pendulum—the term "simple pendula" picks out a class of abstract artifacts created by physicists at a certain point in the history of classical mechanics. Let's give those abstract artifacts another name, and call them "the AA1's."
2. Utterances of the term "simple pendula" in an initial presentation of the notion of the simple pendulum (i.e., fictive utterances of the term) pick out the same class of abstract artifacts.32
3. The metafictive utterance makes the claim that those abstract artifacts are such that, according to the simple pendulum fiction, they move sinusoidally.

Given the content of the simple pendulum fiction, it follows that typical utterances of "Simple pendula move sinusoidally," occurring as bits of modeling discourse, say something true—just as physicists ordinarily take them to. With these elements of the abstract artifacts account of modeling in place, we can see how it solves our three problems.
First, part (i) of the semantic problem. Let "S" denote the sentence form < Simple pendula φ >, where "φ" is some predicate. Then there is no difficulty in the fact that scientists utter meaningful sentences of form S while also telling us that there are no such things as simple pendula. When they say there are no simple pendula, they mean that there are no concrete simple pendula in the spatiotemporal world around us; the claim they make when they utter the sentence "Simple pendula move sinusoidally," however, is the claim that certain abstract artifacts are such that, according to the simple pendulum fiction, they move sinusoidally. Understanding the two halves of the puzzle this way resolves the initial appearance of a tension between them.33
32 See note 30 for a small qualification.
33 Two further comments: (i) I take the proposal about what scientists mean when they say that there are no simple pendula to be intuitively plausible. But there is certainly more to be said about how this proposal is to be fleshed out so as to give a detailed semantics for such utterances that coheres with the other semantic proposals involved in the abstract artifacts account. One option might be to apply the idea that domains of quantification can vary from one context to another. Alternatively, perhaps this proposal should be replaced by another that would do the same work: that scientists' utterances of the sentence "There are no simple pendula" should be taken metalinguistically, as (possibly preemptive) comments on uses of the term "simple pendula" by speakers intending to pick out concrete objects in the spatiotemporal world around us. (Cf. Thomasson 2003, sec. 2.2.) Perhaps it is wrong to say that the utterances in question mean that there are no simple pendula in the spatiotemporal world around us on this second option, but those utterances would still convey as much, and (presumably) would be intended to do so. In any case, I leave open the question of exactly how the details should go here, as it requires further work. The reader may thus prefer to regard the simple formulation appearing in the text as a temporary placeholder. (ii) There is still the general question of how it is that we manage to refer to abstracta. This, however, is a problem shared by any account of modeling that embraces mathematical structures. And as Thomasson argues (1999, ch. 4), it is a problem that might more easily be solved for abstract artifacts than for Platonic abstracta.

Second, part (ii) of the semantic problem. The abstract artifacts account can also explain the further fact that scientists are willing to characterize some claims of form S as true, despite their insistence that there are no simple pendula. We need only be willing to assume that when scientists say "Now, of course it's true that simple pendula move sinusoidally," for example, the claim they are classifying as true is the claim expressed by a metafictive utterance of the sentence "Simple pendula move sinusoidally"—that is, the claim that a certain class of abstract artifacts are such that, according to the simple pendulum fiction, they move sinusoidally. That claim is true, and scientists know that it is true, so it is hardly surprising that they are willing to characterize it as true. And all of that is perfectly compatible with their claim that there are no concrete simple pendula in the spatiotemporal world around us.
Third, the knowledge problem. We can give an account of what is going on when we describe scientists as "studying the properties of simple pendula" and "discovering that simple pendula have certain features" that coheres with the scientists' claim that there are no simple pendula. The scientists' claim that there are no simple pendula is, again, to be understood as the claim that there are no concrete simple pendula. When scientists are engaged in the activity we call "studying the properties of simple pendula," however, their attention is focused on the abstract artifacts that the term "simple pendula" picks out in an initial presentation of the simple pendulum fiction; what they are studying at such times is the properties that, according to the simple pendulum fiction, those abstract artifacts have. Similarly, when scientists are engaged in the activity we call "discovering that simple pendula have certain features," they are discovering that the abstract artifacts in question are such that, according to the simple pendulum fiction, they have certain features. In other words, the activities in question are the activities of studying and discovering things about the content of the simple pendulum fiction.34
34 Here I am assuming that some satisfactory account can be given of how we can come to know what is true in such fictions—an account, moreover, that is compatible with the various elements of the abstract artifacts account laid out thus far. Note, too, that I am not pretending to offer a complete account here of how it is that we can come to know things about our target systems by modeling them as missing systems. I take it, though, that when we are engaged in missing-systems modeling, we learn things about our targets in part by learning things about the missing systems we model them as. Solving the problem that, for the purposes of this discussion, I have called "the knowledge problem"—the problem of understanding how it could be that scientists study missing systems and learn things about them when (as we, and they, are inclined to say) the missing systems do not exist—is thus one important step in the direction of a complete epistemology of missing-systems modeling.

Note, relatedly, that the abstract artifacts account can also accommodate the fact that scientists are sometimes mistaken about the truth-values of claims of the form , where "M's" is a term for some kind of missing system (or the form , where "R" is a term for some particular missing system): such mistakes are mistakes about the content of the relevant fiction. Similarly, scientists can be uncertain about the truth-values of such claims by being uncertain about the content of the fiction. Neither of these cases is likely when the fiction in question is the simple pendulum fiction and the scientist considering the claim has training in classical mechanics, but both can arise with respect to fictions that are less elementary, or more novel.
Finally, the use problem. The abstract artifacts account can reconcile the claim that scientists use the simple pendulum as a model—a claim many philosophers would make—with the scientists' own claim that there are no simple pendula. The reconciliation is straightforward, and probably quite obvious at this point: certainly there are no concrete simple pendula in the spatiotemporal world around us, but this is no obstacle to claiming that scientists use certain abstract artifacts as models. The abstract artifacts scientists use as models are the ones picked out by the term "simple pendula" in an initial presentation of the simple pendulum fiction, and in a typical metafictive utterance of the sentence "Simple pendula move sinusoidally." There may be a problem about how we can manage to use abstract objects at all, but that is an entirely general problem, and one that must be faced just as squarely by any account of modeling according to which modeling involves the use of mathematical structures.
In the remaining sections of this chapter, I will consider a number of objections to the account I have just presented. Before moving on, however, we should note that there are other possible ways of drawing on the notion of an abstract artifact to develop a realist implementation of the fiction approach to targeted missing-systems modeling. I have drawn on the account of ordinary fiction Thomasson gave in Fiction and Metaphysics (1999). Thomasson herself, however, presented a variant of that account in "Speaking of Fictional Characters" (2003), one that, she argued, has certain advantages over the earlier version. In section 3.5, I will consider some of the reasons Thomasson gave for hesitating over the earlier version of her account of

ordinary fiction, and ask whether they should give us reason to hesitate over the corresponding account of missing-systems modeling. For now, though, it is enough to see that there is at least one reason for basing our account of missing-systems modeling on the earlier version of Thomasson's account rather than the later.35 The crucial difference between the earlier and later versions for our purposes is that, whereas the earlier version offers a de re reading of metafictive utterances, the later version offers a de dicto reading (Thomasson 2003, 211). So, for example, on the earlier, de re version of the account, when Karlsefne utters the sentence "Emma is unhappy" metafictively, the name "Emma" refers to an abstract artifact, A1, brought into existence by Flaubert in the act of writing Madame Bovary, and Karlsefne's utterance expresses the proposition that A1 is such that, according to the novel Madame Bovary, it is unhappy

On the later version of the account, fictional characters are still abstract artifacts, and we refer to them when we are engaged in external discourse. So, for example, the name “Emma Bovary” refers to an abstract artifact when we utter the sentence “Emma Bovary is a very finely sketched character.” There is no reference to that abstract artifact when Karlsefne uses the name “Emma” in his metafictive utterance, however; instead, the sentence he utters is given a de dicto reading, and is taken to express the proposition that According to Madame Bovary, there is a woman called Emma who is unhappy

If we were to base our account of targeted missing-​systems modeling on the de dicto version of Thomasson’s account, then we would be committed to saying that a large part (at least) of our modeling discourse makes no reference to abstract artifacts (as, for example, when we utter the sentence “Well, of course simple pendula move sinusoidally” on our way to drawing some conclusion about a swaying bridge). It follows that we would be leaving the indirect picture of modeling behind. The reasons I gave earlier for attempting

35 Thomasson (this volume) also draws attention to the fact that the two versions of her account of ordinary fiction provide (at least) two options for developing what she calls an “artifactualist” account of scientific modeling, and she leaves the question open.

to hold on to the indirect picture of modeling—which were among my reasons for attempting to develop a realist implementation of the fiction approach—are thus reasons for basing our account of targeted missing-systems modeling on the de re version of Thomasson's account of ordinary fiction rather than the de dicto version.36

3.4  Objection One: Too Many Falsehoods
Let us turn now to consider some objections to the abstract artifacts view. The first is this: On the abstract artifacts view, the initial description of the simple pendulum, or of any other missing system, is composed largely or entirely of falsehoods. That is because the initial description of any missing system in the natural and social sciences will, on the abstract artifacts view, be composed largely or entirely of claims to the effect that some abstract artifact has various properties which no abstract object has, or could have. So, for example, suppose the initial description of the simple pendulum contains the sentence "Simple pendula move through perfectly uniform gravitational fields." According to the abstract artifacts view, when it appears as part of the initial description of the simple pendulum, this sentence expresses the claim that the AA1's—the abstract artifacts brought into existence by the act of writing down the initial description for the first time—move through perfectly uniform gravitational fields.37 As the AA1's are abstract objects, that is false (and necessarily so).
This is indeed a consequence of the abstract artifacts view. But why see it as a problematic consequence? One might think it a problem for the abstract artifacts view that it has this consequence because we treat the claims in question as true. But do we? Certainly we treat many utterances of the sentence "Simple pendula move through perfectly uniform gravitational fields" as making true claims. To dispel the sense that there is a problem here, however, all the defender of the abstract artifacts view needs to do is claim that all those utterances of the sentence that we treat as true are metafictive utterances. Metafictive utterances of that sentence are true on the abstract artifacts view, because they express claims about the properties ascribed to
36 If precision were the only consideration, a better name for the account I have presented would be "the de re abstract artifacts account." The shorter title should risk no confusion in the present discussion, however.
37 Again, see note 30 for a small qualification.

the AA1's by the simple pendulum fiction—and one of those properties is the property of moving through a uniform gravitational field. It seems to me entirely plausible to claim that the utterances of the sentence "Simple pendula move through perfectly uniform gravitational fields" that we treat as true are all metafictive, and so I see no problem for the abstract artifacts view here.38

3.5  Objection Two: Claims About Abstract Artifacts—​Really? The second objection I want to consider focuses not on what the abstract artifacts view implies about the truth-​values of various utterances but on what the view says those utterances are about. On the abstract artifacts view, both the fictive utterances making up an initial description of the simple pendulum and the metafictive utterances produced by scientists in their subsequent modeling activities are about abstract artifacts. The objection is that this is simply implausible. Surely scientists understand what they’re saying when they say such things as “The simple pendulum moves sinusoidally,” and surely they don’t understand themselves to be making claims about abstract artifacts. So, the objection goes, it is implausible that such utterances are about abstract artifacts. This objection is reminiscent of a worry Amie Thomasson raises about the earlier, de re version of her account of ordinary fiction in “Speaking of Fictional Characters” (2003), a worry that provides one of her motivations for presenting a revised set of semantic proposals for both fictive and metafictive utterances.39 On those revised proposals, as we have seen, fictional characters are still abstract artifacts, and we still refer to them when engaged in external discourse, but fictive and metafictive utterances are now given a de dicto 38 Part of the idea here, then, is that when an utterance of the sentence in question appears as part of the initial description of the simple pendulum, we do not treat it as true. (This is compatible with our treating it as correct in a different way—​namely, as properly part of an initial description of the simple pendulum.) I mean to take no stand here on the question of what scientists are doing, exactly, with the false claims making up an initial description of a missing system when they write the initial description down. Perhaps they are pretending to assert them; perhaps they are merely presenting them as having some particular value (“Consider this [cluster of claims]—​I think you’ll find it useful [in such-​and-​such way]”); perhaps they are performing some other sort of speech act. I leave this open as a matter for further thought. 39 One worry is reminiscent of the other, but they are not quite the same: the apparently implausible consequence that catches Thomasson’s eye is that “we must take works of literature to invoke the pretense, of some abstract object, that it is a detective, is a man, solves crimes, etc.” (2003, 212). For an objection that parallels this worry more closely, see the end of this section.

Realism About Missing Systems  93 reading and, as a result, are not about the abstract artifacts in question.40 One way of responding to this objection would thus be to amend the abstract artifacts account so that it mirrors the de dicto version of Thomasson’s account of ordinary fiction, rather than the de re version. As I explained earlier, however, the price of such a maneuver would be the abandonment of the indirect picture of modeling, at least in some significant measure, and an important part of my project here is precisely to see whether we can hold on to that picture when developing an account of targeted missing-​systems modeling.41 I will thus propose a different way of responding to the objection.42 So: notice that there is a parallel here to a worry one might have about certain views in the philosophy of mathematics. Consider, for the purposes of illustration, a naive variety of nominalism about numbers according to which, for example, the number 2 is just the set of all two-​membered sets of physical objects. One might object to such a view as follows: surely the folk understand the arithmetical sentence “2 + 3 = 5,” and yet it seems quite implausible that ordinary people in ordinary contexts have always understood themselves to be making the rather complex claim about relationships between sets of sets of physical objects that such a nominalism would insist this sentence expresses. The nominalist must thus be wrong about what the sentence says, and, correspondingly, about what numbers are. Note, furthermore, that various other views about the ontology of mathematics are open to this objection, too. The standard-​issue Platonist, for example, says that the sentence “2 + 3 = 5” is about the relationship between two mind-​independent, necessarily and eternally existing, causally impotent, non-​spatiotemporal objects. How plausible is it that the folk, who surely understand the sentence in question, understand it to be a claim about such objects? Thinking about the mathematics case helps us see a line of response we can employ on behalf of the abstract artifacts account of missing-​systems

40 Thomasson 2003, 211. On at least one way of filling in the details, metafictive utterances do nonetheless involve reference to entities that Thomasson regards as abstract artifacts—​novels, stories, and the like. (See section 3.3 and Thomasson 2003, 220.) So perhaps a version of the objection I have just sketched lingers even on the de dicto version of Thomasson’s account: surely Karlsefne understands what he is saying when he utters the sentence “Emma is unhappy,” and surely he does not take himself to be making a claim that is in part about an abstract artifact (Flaubert’s novel). Even if this seems like a worry at first sight, it seems to me that one could quite satisfactorily adapt the response I am about to present for the scientific modeling case. 41 I say “in some significant measure” because some modeling discourse is external discourse, and at that point, at least, abstract artifacts would make a reappearance, even on the de dicto approach. 42 In what follows I am indebted to Paddy Blanchette and Bruno Whittle for helpful discussion.

94  The Scientific Imagination modeling. The key, I would suggest, is the thought that one can understand a claim, or at least count as understanding it sufficiently well for various purposes, without having a complete grasp of the nature of the objects the claim is about. Perhaps understanding such claims requires some grasp of the nature of the objects it concerns, but then perhaps a case can be made that the folk do have some grasp of it. In the mathematics cases, the nominalist might argue that ordinary people understand that the claim expressed by the sentence “2 + 2 = 4” is at least tightly connected to claims about grouping and counting physical objects; the Platonist might argue, similarly, that ordinary people understand that the numerals appearing in the sentence in question don’t refer to objects you can touch or see, and so on.43 To defend the abstract artifacts view against this objection, then, perhaps it is enough to point out that the scientists who understand what they are saying when they produce fictive and metafictive utterances about simple pendula understand enough:  they understand that the term “simple pendulum” doesn’t pick out any concrete object in the spatiotemporal world around us, that the simple pendulum was introduced at a certain point in the history of physics, and so on.44 They understand, in other words, that the simple pendulum has certain characteristics, and as those characteristics are characteristics of abstract artifacts—​indeed, central characteristics of abstract artifacts—​scientists can be said to have at least a partial grasp of the nature of the objects the claims in question are about. And, we can plausibly add, a partial grasp is all that is required for the scientists’ successful engagement with such claims in a variety of scientific contexts. After all, what they lack a complete grasp of on this picture is just the metaphysical nature of missing systems. Here is another objection along similar lines:  When scientists are engaging in missing-​systems modeling, they are imaginatively engaging with the initial description of the missing system, or kind of missing system, in question. But imaginative engagement with the initial description is a matter 43 For that matter, consider the scientist who claims that water is H2O: surely there is a mistake involved in objecting to such a claim on the grounds that there are lots of people who understand claims about water without even having the concept of hydrogen, or oxygen, or a molecule. As Arnon Levy has suggested (personal communication), reflection on this particular parallel might lead one to make some Putnam-​style remarks about the division of semantic labor. And perhaps it is philosophers who carry the burden of being experts on the metaphysics of modeling. 44 The facts that the scientists understand, and the fact of their understanding them, are compatible with other accounts of missing-​systems modeling, of course. That might be a problem if I were trying to adduce evidence in favor of the abstract artifacts account at this point, but my aim here is only to defuse an objection.

Realism About Missing Systems  95 of imagining that the claims making up the initial description are true (along with at least some of the other claims that are true in that fiction, presumably). Thus, on the abstract artifacts account, missing-​systems modeling involves imagining (or trying to imagine) that abstract artifacts have various properties they do not and could not have, such as the property of moving through a uniform gravitational field. But this is a highly implausible claim about the phenomenological character of missing-​systems modeling.45 A plausible response here is simply that the imagining we are required to do when we imaginatively engage with the initial description of a missing system is all de re, and that we are only required to imagine to be true (de re) claims that are true in the fiction. To put the emphasis on the de re/​de dicto distinction: although we are required to imagine of certain things that are in fact abstract artifacts—​the AA1’s—​that they move through a uniform gravitational field, we are not asked to imagine the proposition to be true. (Cf. Thomasson 2003, 212, who cites a relevant discussion by Nathan Salmon [1998].) Or, to put the emphasis on the fact that we are asked to imagine to be true only claims that are true in the fiction: because it is true in the fiction that the AA1’s are spatiotemporal objects, but (though true simpliciter) not true in the fiction that they are abstract artifacts, imaginative engagement with the initial description may involve imagining that there are spatiotemporal objects that have the relevant features (and probably will), but will not involve imagining that there are abstract artifacts that do. On closer inspection, then, it is far from clear that there is a problem for the account here.

3.6  Objection Three: Fungibility To find our way to the final objection I want to consider, recall one of the primary motivations I cited for developing the abstract artifacts account: that, as a realist implementation of the fiction approach, it would provide us with a way of seeing targeted missing-​systems modeling as indirect. The hope, in other words, was that the abstract artifacts account would enable us to understand targeted missing-​systems modeling as an activity in which we use language (often including mathematical language, and sometimes along 45 This objection comes closer to paralleling Thomasson’s worry about the nature of the pretense invoked by a work of literature on the de re version of her account of ordinary fiction (2003, 212; see also note 39).

96  The Scientific Imagination with diagrams and the like) to pick out some sort of non-​linguistic object that is intermediate between our utterances and the target system, and which then represents the target in some non-​linguistic way. How exactly does that go on the abstract artifacts account? The first part of the story, obviously, is that we use language to pick out some abstract artifact or other. But how does the abstract artifact then represent the target system? Well, suppose we’re using one of the AA1’s—​one of the abstract artifacts that were created when the initial description of the simple pendulum was first laid out—​as a model of a bridge swaying in the wind. As a first stab, we might be tempted to propose that, in a simple case, the representation relation that holds between the abstract artifact—​a simple pendulum—​and the bridge is in part grounded in certain similarities between the sinusoidal motion of the simple pendulum and the motion of the bridge.46 But this is clearly not right: being an abstract object, the abstract artifact we are using as a model—​the intermediate object—​does not move sinusoidally, or in any other way. Instead, it seems, we should say that on the abstract artifacts account the non-​linguistic relation of representation between abstract artifact and target is grounded in similarities between properties the target has and properties that are ascribed to the abstract artifact by the relevant fiction (most or all of which the abstract artifact doesn’t and couldn’t have). Once we have spelled this out, a final objection to the abstract artifacts account arises: surely, as the indirect picture of modeling is usually understood, the idea is that the particular intermediate object we use to represent the target as being a certain way is supposed to be particularly well suited to representing the target as being that way, and it is supposed to be so in virtue of its intrinsic properties. Think, for example, of a mathematical structure used to represent: it is well suited to do the particular representational job it does because it has a certain structure.47 Or think of a physical model: it has, for example, a certain shape. But this is clearly not how things work on the abstract artifacts account: the property of having sinusoidal motion ascribed 46 Or, perhaps better, in certain similarities between the two objects, similarities that obtain in virtue of their respective motions.   Note, by the way, that the talk here is of representation being grounded in relations of similarity, in part: even this first stab is not so naive as to take representation to be similarity. And it is entirely compatible with the idea that representation of the target system by the intermediate object has a pragmatic component—​the idea that the intermediate object represents the target system at least partly due to our using the intermediate object in certain ways. (Recall that one of the problems the abstract artifacts account is intended to solve is the use problem—​see section 3.1.) 47 Its having that structure may involve the holding of relations between its parts, but it is still an intrinsic property of the mathematical structure.

to it by the simple pendulum fiction—an example of what we might call an "ascription property"—is not an intrinsic property of the abstract artifact we are using to represent the target. To get at the worry another way, the abstract artifacts seem completely interchangeable or, to use an unlovely legal term, fungible. As far as their own intrinsic properties go, the abstract artifact that is in fact used to model the bridge could just as well have been used to model some economic system, and vice versa—merely switch their ascription properties. Does this not diverge too far from the indirect picture of modeling to count as a successful implementation of that picture?
I think it is true that something of the original picture has been lost here. But, first, I doubt that there is any good full-blooded way of holding on to every aspect of the original picture.48 Second, the force of this objection can be lessened by reflecting on a component of Thomasson's account of fictional characters that has gone unmentioned thus far. Briefly, the thought we will come to is this: it is true that the ascription properties of our abstract artifacts—properties of the form having P ascribed to it by the fiction—play a crucial role in the representational work they do for us (or, if you prefer, the representational work we do with them), and it is true that their ascription properties are not among their intrinsic properties. It can, however, also plausibly be maintained that the ascription properties of our abstract artifacts are among their essential properties.
To see how this might go, we will use one of Thomasson's examples: the fictional character Emma Woodhouse, created by Jane Austen in writing the novel Emma. Suppose that we use "A2" as a name for the abstract artifact that is the fictional character in question. Then it is a feature of Thomasson's view that it is an essential property of A2 that it was created by Austen, and, moreover, by the particular act of Austen's that in fact resulted in the creation of A2.49 Given this, we can plausibly claim that at least some of A2's ascription
48 See Thomson-Jones 2007, 2010 for some arguments to support this claim.
49 See Thomasson 1999, 35, 38–39, and (on the creation of fictional characters) 5–7, 12–13. Although Thomasson does not use the term "essential property" in these places (or much at all), it seems clear enough that she would accept this paraphrase, given that all I mean in saying "P is an essential property of X" is that X has P in every possible world in which X exists. Thomasson writes, "Because a fictional character is rigidly dependent on its author for coming into existence, any possible world containing a given character as a member is a world containing that very author and his or her creative acts" (1999, 39). It does not follow from that claim alone that it is an essential property of the fictional character that it is created by its actual creator, or by the acts that actually created it, but that seems to be Thomasson's intention, as her next sentence helps to make clear: "Indeed we might add that because a character cannot exist before it is created by such acts, for any time and world containing a character, that world must contain the author's creative acts at that time or at some prior time" (1999, 39).

98  The Scientific Imagination properties are essential properties—​namely, those ascription properties A2 has in virtue of the act of its creation. The act in question is the one Austen performed when she wrote the sentence that begins “Emma Woodhouse, handsome, clever, and rich, with a comfortable home and happy disposition, seemed to unite some of the best blessings of existence” (Thomasson 1999, 12). So, on the view we are considering, having the property of being rich ascribed to it is plausibly an essential property of A2.50 A world in which Austen instead writes a novel in which the main character is poor is one that does not contain the creative act that occurred in our world when Austen wrote Emma, and so does not contain A2 (even though it might contain some other abstract artifact referred to, in that world, by the name “Emma Woodhouse”).51 No doubt there are differences between ordinary fiction and scientific modeling here. For one thing, it does not seem intuitively plausible that the AA1’s—​the simple pendula—​exist only in those worlds in which the particular scientists who created them in this world exist, and create them. Nonetheless, once we are taking the abstract artifacts account seriously, it does seem plausible that the existence of the AA1’s in a world depends on there occurring a creative act of a certain sort in that world—​namely, a creative act that counts as a constructing of the simple pendulum fiction.52 Assuming that the content of a fiction is essential to it, the properties ascribed to the AA1’s by the simple pendulum fiction in the actual world will then be ascribed to them by the simple pendulum fiction in every world in which they exist. Thus the ascription properties of the AA1’s—​the properties that do the crucial representational work when we use the AA1’s as models—​are essential properties. And this means that the abstract artifacts we use as models are not fungible after all. The AA1 we use as a model of the bridge is particularly well suited to representing the bridge as being such-​and-​such a way 50 We can, despite this, happily make sense of any inclination we might have to say that Emma Woodhouse could have been poor. The idea will just be that what we are most plausibly saying when we say such a thing is that it is true in Emma that Emma Woodhouse could have been poor—​or, in other words, that A2 is such that, according to Emma, it has the modal property of having possibly been poor. The claim < A2 could have failed to have the property of being such that, according to Emma, it is rich > is a distinct claim, and does not follow from the first. 51 I am relying here on the assumption that it is an essential property of the act of A2’s creation that it involves the ascription of the property of being rich to A2; I take that assumption to be a plausible one. But note that what matters for my ultimate point in this section is the plausibility of certain roughly parallel claims about the case of scientific modeling (discussed later). 52 I am allowing for the possibility that two initial descriptions might be presentations of the same fiction even though they make explicit mention of different sets of properties—​the content of a fiction is in general not exhausted by what is explicitly stated in a given presentation of it.

in virtue of its ascription properties, and the distinct abstract artifact we use to represent a certain economy as being a certain other way is particularly well suited to that different representational task in virtue of its quite distinct ascription properties. But because the ascription properties involved in representation are essential properties, the two abstract artifacts could not have switched places and done each other's jobs equally well. And this is so even though the properties of an abstract artifact that make it particularly well suited to a certain representational task, being ascription properties, are relational rather than intrinsic.53

3.7  Conclusion
There is a lot more to say than I have said here about how the abstract artifacts account compares to other accounts of missing-systems modeling.54 My hope, nonetheless, is that I have shown that there is a realist version of the fiction approach to that puzzling variety of scientific modeling that is worth taking seriously.

Acknowledgments
I presented the core ideas of the abstract artifacts account (in its de re form) and the motivations for pursuing it in talks at Harvard University and the University of Washington, Seattle, in the spring of 2010. (The talk at Harvard was part of the conference "Model-Building and Make-Believe"; thanks to Peter Godfrey-Smith for organizing that.) More recently, I presented something closer to this chapter in its present form in talks at "Models and Simulations 6" at the University of Notre Dame and at the annual conference of the British Society for Aesthetics at Oxford University in 2014, and at the Department of Logic and Philosophy of Science at the University of California, Irvine, in 2015. Thanks to audiences at these talks for some very helpful discussion. Special thanks to David Davies and Adam Toon, my co-symposiasts at the BSA, and, alphabetically, to Arthur Fine, Stacie Friend,

Roman Frigg, Peter Godfrey-Smith, Arnon Levy, Kyle Stanford, Mauricio Suárez, Amie Thomasson, Andrew Wayne, and Andrea Woody. I have had multiple conversations about the fiction approach to scientific modeling with many of the people on that list, and to them I am especially grateful. Thanks also to Peter Godfrey-Smith and Arnon Levy for excellent editorial feedback, to Paddy Blanchette and Bruno Whittle for conversation and correspondence about the philosophy of mathematics, and to Fabian Lange for a useful conversation about missing systems in economics.
53 Thanks to David Davies for a very helpful discussion of these issues.
54 For further discussion of the Waltonian approach, and an argument that an implementation of the fiction approach that treats missing systems as abstract artifacts fares better on the semantical front at no real ontological cost, see Thomasson, this volume.

References
Achinstein, P. (1968). Concepts of Science: A Philosophical Analysis. Baltimore: Johns Hopkins University Press.
Contessa, G. (2010). "Scientific Models and Fictional Objects." Synthese 172: 215–229.
Currie, G. (1990). The Nature of Fiction. Cambridge: Cambridge University Press.
Fine, A. (1993). "Fictionalism." In Midwest Studies in Philosophy, vol. 18, Philosophy of Science, edited by P. A. French, T. E. Uehling Jr., and H. K. Wettstein, 1–18. Notre Dame, IN: University of Notre Dame Press.
Frigg, R. (2010a). "Models and Fiction." Synthese 172: 251–268.
Frigg, R. (2010b). "Fiction and Scientific Representation." In Beyond Mimesis and Convention: Representation in Art and Science, edited by R. Frigg and M. C. Hunter, 97–138. Boston Studies in the Philosophy of Science vol. 262. Dordrecht: Springer.
Frigg, R., and Hunter, M. C. (Eds.) (2010). Beyond Mimesis and Convention: Representation in Art and Science. Boston Studies in the Philosophy of Science vol. 262. Dordrecht: Springer.
Giere, R. N. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Godfrey-Smith, P. (2006). "The Strategy of Model-Based Science." Biology and Philosophy 21: 725–740.
Hughes, R. I. G. (1997). "Models and Representation." Philosophy of Science 64: S325–S336.
Kalderon, M. E. (2005a). "Introduction." In Fictionalism in Metaphysics, edited by M. E. Kalderon, 1–13. Oxford: Oxford University Press.
Kalderon, M. E. (Ed.) (2005b). Fictionalism in Metaphysics. Oxford: Oxford University Press.
Levy, A. (2011). "Information in Biology: A Fictionalist Account." Noûs 45: 640–657.
Levy, A. (2012). "Models, Fictions, and Realism: Two Packages." Philosophy of Science 79: 738–748.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172: 781–798.
Lewis, D. K. (1978). "Truth in Fiction." American Philosophical Quarterly 15: 37–46.
Nagel, E. (1961). The Structure of Science: Problems in the Logic of Scientific Explanation. New York: Harcourt, Brace & World.
Poincaré, H. (1952). Science and Hypothesis. New York: Dover.
Rosen, G. (2005). "Problems in the History of Fictionalism." In Fictionalism in Metaphysics, edited by M. E. Kalderon, 14–64. Oxford: Oxford University Press.

Salmon, N. (1998). "Nonexistence." Noûs 32: 277–319.
Sterrett, S. G. (2006). Wittgenstein Flies a Kite: A Story of Models of Wings and Models of the World. New York: Pi Press.
Strawson, P. F. (1985). Skepticism and Naturalism: Some Varieties. London: Methuen.
Suppes, P. (1957). Introduction to Logic. Princeton, NJ: Van Nostrand.
Suppes, P. (1960). "A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences." Synthese 12: 287–301.
Teller, P. (2001). "Twilight of the Perfect Model Model." Erkenntnis 55: 393–415.
Thomasson, A. L. (1999). Fiction and Metaphysics. Cambridge: Cambridge University Press.
Thomasson, A. L. (2003). "Speaking of Fictional Characters." Dialectica 57: 205–223.
Thomson-Jones, M. (2007). "Missing Systems and the Face Value Practice." Retrievable from http://philsci-archive.pitt.edu/3519.
Thomson-Jones, M. (2010). "Missing Systems and the Face Value Practice." Synthese 172: 283–299.
Thomson-Jones, M. (2012). "Modeling Without Mathematics." Philosophy of Science 79: 761–772.
Thomson-Jones, M. (2017). "Against Bracketing and Complacency: Metaphysics and the Methodology of the Sciences." In Metaphysics in the Philosophy of Science, edited by M. Slater and Z. Yudell, 229–250. Oxford: Oxford University Press.
Toon, A. (2010a). "The Ontology of Theoretical Modelling: Models as Make-Believe." Synthese 172: 301–315.
Toon, A. (2010b). "Models as Make-Believe." In Beyond Mimesis and Convention: Representation in Art and Science, edited by R. Frigg and M. C. Hunter, 71–96. Boston Studies in the Philosophy of Science vol. 262. Dordrecht: Springer.
Toon, A. (2012). Models as Make-Believe: Imagination, Fiction and Scientific Representation. Houndmills, Basingstoke: Palgrave Macmillan.
Vaihinger, H. (1924). The Philosophy of "As If": A System of the Theoretical, Practical, and Religious Fictions of Mankind. Translated by C. K. Ogden. New York: Harcourt, Brace.
van Fraassen, B. C. (1980). The Scientific Image. Oxford: Clarendon Press.
van Fraassen, B. C. (1987). "The Semantic Approach to Scientific Theories." In The Process of Science, edited by N. J. Nersessian, 105–124. Dordrecht: Martinus Nijhoff.
van Inwagen, P. (1983). "Fiction and Metaphysics." Philosophy and Literature 7: 67–77.
Walton, K. L. (1990). Mimesis as Make-Believe: On the Foundations of the Representational Arts. Cambridge, MA: Harvard University Press.
Weisberg, M. (2007). "Who Is a Modeler?" British Journal for the Philosophy of Science 58, no. 2: 207–233.
Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. Oxford: Oxford University Press.

4
The Fictional Character of Scientific Models
Stacie Friend

4.1  Introduction

Many philosophers have drawn parallels between scientific models and fictions, from Vaihinger's ([1911] 2009) invocation of the "as-if" to Cartwright's construal of models as "work[s] of fiction" (1983) or "fables" (1999) and other theorists' noting of the role of idealization, invention, and imagination in modeling.1 In this chapter I will be concerned with a recent version of the analogy, which compares models to the imagined characters of fictional literature. Roman Frigg summarizes the approach this way:

The core of the fiction view of model-systems is the claim that model-systems are akin to places and characters in literary fiction. When modeling the solar system as consisting of ten perfectly spherical spinning tops physicists describe (and take themselves to be describing) an imaginary physical system; when considering an ecosystem with only one species biologists describe an imaginary population; and when investigating an economy without money and transaction costs economists describe an imaginary economy. These imaginary scenarios are tellingly like the places and characters in works of fiction like Madame Bovary and Sherlock Holmes. These are scenarios we can talk about and make claims about, yet they don't exist. (Frigg 2010a, 101; cf. Godfrey-Smith 2006, 735)

1 See, for example, the papers in Suárez 2009a and the bibliography in Magnani 2012. These discussions invoke fiction in a wide variety of ways, but only some draw the analogy with imagined characters that is my focus.

Though versions of the position differ, the shared idea is that modeling essentially involves imagining concrete systems analogously to the way that we imagine characters and events in response to works of fiction. I will call this view the account of models as imagined systems, or MIS for short.2 Philosophers in other domains who assimilate puzzling entities—properties, possible worlds, numbers, and so forth—to fictional or imaginary characters are usually motivated by a desire for ontological parsimony, with the appeal to fiction designed to undermine commitment to the entities in question. In the debate over scientific realism, for instance, fictionalism is a deflationary position (Fine 1993). By contrast, the goal of MIS is to capture a central feature of scientific practice. What matters to advocates of this position is the cognition or methodology of modeling, not the ontology.3 They argue that imagining concrete systems plays an ineliminable role in the practice of modeling that cannot be captured by other accounts. The approach thus leaves open what we should say about the ontological status of scientific models, or more accurately about the status of model systems, the hypothetical scenarios described by modelers. (To avoid ambiguity I will distinguish model specifications—broadly construed to include texts, equations, diagrams, and so forth—from model systems in what follows; I use the term "model" for the combination of these elements.) The analogy with literature does not settle the matter, for the status of fictional entities is the subject of ongoing debate (Friend 2007; Kroon and Voltolini 2011). According to some fictional realists, fictional characters are the inhabitants of possible worlds; according to others, they are nonexistent concrete objects; for still others they are abstracta of one sort or another. Antirealists deny that there are any fictional characters at all, understanding claims apparently about them in alternative ways. Correspondingly, some MIS advocates adopt realism about model systems, others propose interpretations of those practices designed to avoid realism, and still others remain neutral but consider the ontological issue a challenge. In defending their positions these philosophers take for granted that the ontological debate has implications for how to construe scientific practice.
2 Advocates of MIS who take model systems to be akin to fictional characters include Contessa (2010), Frigg (2010a, 2010b), and Godfrey-Smith (2006, 2009). Levy (2012, 2015) and Toon (2012) defend versions of MIS according to which model descriptions sometimes (Toon) or always (Levy) invite imaginings directed at real entities. Morgan's (2014) account of economic modeling is similar to Toon's, though her purpose is not to defend MIS.
3 Suárez (2010) also makes this point in discussing a fictions approach to modeling. For a similar point from advocates of other approaches to modeling, see French 2010 and Weisberg 2013, 19–20.

In this chapter I argue that the debate over the ontological status of model systems is misguided. If model systems are the kinds of objects fictional realists posit, they can play no role in explaining the epistemology of modeling for an advocate of MIS.4 So they are at best superfluous. Defenders of MIS should focus on developing an account of the epistemological role of imagining model systems. In what follows I describe MIS and the motivations behind it in more detail and outline the epistemological challenges it faces. I then consider several realist accounts of fictional characters, arguing that none helps MIS meet those challenges. I conclude by looking at more promising approaches to the epistemology consistent with MIS. Though I do elaborate MIS in the way I find most plausible, my goal is not to defend it against critics of the general approach. Rather, it is to move the discussion among advocates of MIS away from issues of ontology.

4.2  Motivating the Theory

Before looking at MIS in more detail, I note two important qualifications. First, the analogy between model systems and fictional characters is limited; it does not imply, for example, that books or articles containing model specifications are works of fiction rather than nonfiction.5 Second, the imagining involved need not involve images. Although many discussions of scientific modeling seem to identify imagination with imagery, the kind of imagining engaged by fiction is standardly taken to be propositional (see, e.g., Currie 1990; Walton 1990).6 On any plausible version of MIS, modeling requires imagining systems to possess concrete properties; it does not require picturing those properties.7 Advocates of MIS motivate their position by arguing that more traditional accounts, which construe models as set-theoretic or mathematical structures, neglect the significance of imagining concrete systems such as biological populations. Peter Godfrey-Smith is explicit about this motivation: "An
4 This is not to deny that different realist positions would interpret the epistemological issues differently, but rather that postulating models as fictional characters does not itself resolve epistemic questions.
5 Giere 2009 raises this concern.
6 My own view is that imagining in response to fiction involves a combination of propositional and objectual imagining in Yablo's (1993) sense, yielding the possibility of analogical representations, which may or may not involve imagery (Friend n.d.).
7 Though imagery may play an important role in some cases (Morgan 2004).

The Fictional Character of Scientific Models  105 imaginary population is something that, if it was real, would be a flesh-​and-​ blood population, not a mathematical object” (2006, 735). In this respect it is similar to fictional characters and places. The issue here is not ontological. Godfrey-​Smith rejects the identification of models with mathematical objects, not because of doubts about the existence of such objects, but because the identification neglects an essential aspect of modeling. MIS advocates thus aim to capture what Martin Thomson-​Jones calls the face-​value practice of modeling, essential to which are “descriptions of missing systems” (2010, 284). Modelers may describe a pendulum as a point mass attached to a massless string and subject to no external forces such as friction, or populations of predators and prey that interact with each other but with nothing else. The model specifications that specify the simple pendulum or the Lotka-​Volterra model of predator-​prey interaction are satisfied by no actual pendula or populations. But if there were such pendula or populations, they would be concrete rather than abstract. Of course, many models can be identified with actual concrete objects:  for instance, Crick and Watson’s double-​helix model of DNA and the Phillips-​Newlyn hydraulic model of the British economy. Though these material models exist in space and time, they share with theoretical models—​the ones involving “missing systems”—​their purposeful misrepresentation of real-​world target systems. My primary focus will be on theoretical models. There are other features of the face-​value practice that parallel our engagement with fiction. One is the way that both readers and scientists systematically “fill in the gaps” to elaborate scenarios on the basis of relatively little explicit information. For instance, we know that it is “true-​in-​the-​ fiction” that Anna Karenina has a liver and a heart but no wings even if this is never made explicit.8 Although model specifications provide only a limited number of details, scientists achieve widespread agreement in inferring many other properties of model systems. “It is, for instance, true that the Newtonian model-​system representing the solar system is stable and that the model-​earth moves in an elliptic orbit; but none of this is part of the explicit content of the model-​system’s original specification” (Frigg 2010a, 102). A related feature of practice is that we can take internal and external perspectives on the model system, analogous to the duality that characterizes our engagement with fiction (Contessa 2010, 223; Levy 2012, 739). From a perspective internal to Tolstoy’s novel, Anna Karenina is a flesh-​and-​blood 8 For an overview of accounts of truth-​in-​fiction, see Woodward 2011.

106  The Scientific Imagination human being, born to human parents; from an external perspective, Anna is a fictional character created by Tolstoy (see Friend 2007). The internal perspective generates the statements that are true-​in-​the-​fiction, whereas external statements appear to be true simpliciter. The same applies to discourse about models. An internal statement such as “electrons orbit the nucleus at discrete intervals” is “true” within the Bohr model of the atom, whereas an external statement such as “the Bohr model of the atom represents the hydrogen spectrum” seems to be true simpliciter. This duality of perspective contrasts with our way of thinking about such abstracta as numbers or sets. Now, one might accept that the face-​value practice reflects scientists’ “folk ontology” of models without accepting that imagining concrete systems plays a role in the epistemology of modeling.9 Defenders of MIS offer several reasons that such imagining plays an essential role. First, many important models appear to be entirely non-​mathematical. Godfrey-​Smith cites Maynard Smith and Szathmáry’s models of the origin of membranes and compartmentalization and models of memory in cognitive science (2006, 736; see also Levy 2015). Another example is the standard diagram of a eukaryotic cell given in biology textbooks, which presents an idealized model of no particular kind of cell (Downes 1992, 145). Plausibly, the diagram specifies an imaginary concrete cell.10 The thought is that if we can learn from such models, it is not by engaging with mathematical structures. Second, proponents of MIS argue that understanding how a model relates to reality seems to depend on grasping concrete but imaginary features of a model system. Frigg gives the example of Fibonacci’s model of the rate at which a pair of rabbits would breed in one’s garden (2010a, 105–​106). The model specification provides an equation that applies only to “rabbits that never die, a garden that is infinitely large and contains enough food for any number of rabbits, and rabbits that procreate at a constant rate at constant speed” (2010a, 106). To understand the implications of the model, Frigg argues, the mathematics is insufficient.11 We must recognize that the fictional scenario differs from the actual situation, in which the rabbits are mortal, the garden and food supply are limited, and so on. This contrast leads us to 9 The term is Deena Skolnick Weisberg’s, cited in Godfrey-​Smith 2006, 735, and Weisberg 2013, 68. The view stated in this sentence is defended by Weisberg (2013, ch. 4). 10 Here I disagree with Weisberg’s interpretation (2013, 19). 11 This is not to say that opponents of MIS cannot accommodate this observation. Weisberg (2013), for example, says that models are not merely structures but interpreted structures.

The Fictional Character of Scientific Models  107 conclude that the Fibonacci model is appropriate for only a certain length of time. A third advantage, according to some advocates of MIS, presupposes that scientists learn from modeling by comparing model systems to target systems. The argument is that treating model systems as imagined concrete entities rather than abstract mathematical structures illuminates the sense in which models can resemble targets. For example, the simple pendulum is said to swing sinusoidally, but an abstract object cannot have this property, at least not in the same sense (Thomson-​Jones 2010, 291). The difficulty applies not just to claims about isomorphism between mathematical structures and concrete targets but also to Giere’s (1988) account of models as description-​ fitting abstracta. Thus Giere is often criticized for leaving unexplained the similarity relations between abstract models and concrete target systems. Godfrey-​Smith contrasts the situation with fiction, where “we have an effortless informal facility with the assessment of resemblance relations. . . . We often assess similarities between two imagined systems (Middle Earth and Narnia), and between imagined and real-​world systems (Middle Earth and Medieval Europe)” (2006, 737). If this is right, taking the analogy with fiction seriously has the potential to clarify how scientists learn from models through comparison.12 The idea that we learn from modeling through comparison takes for granted that models represent phenomena indirectly. This is the standard position in the literature on modeling, and some take indirectness to define modeling as a distinctive scientific practice (Weisberg 2007). Godfrey-​Smith adopts the assumption explicitly: Model-​based science is fundamentally a strategy of indirect representation of the world. In understanding a real-​world system, the modeler’s first move is the specification and investigation of a hypothetical system, or structure. The second is consideration of resemblance relations between this hypothetical system and the real world “target system” that we are trying to understand. (Godfrey-​Smith 2006, 730; see also Frigg 2010b, 252)

Though this approach is motivated by function rather than ontology, it is hard to see how scientists can use models to represent target systems or 12 In fact, it is not clear that comparisons in the fiction case are any more (or less) straightforward than comparisons in the modeling case (Giere 2009), as will be discussed later in the chapter.

engage in comparisons between model systems and relevant phenomena if model systems do not exist. In addition, scientists seem to investigate models and thereby discover new features of model systems, again implying a real object of investigation. And scientists make apparently true statements about model systems that seem to presuppose that there are such things. The first two aspects concern the epistemic significance of modeling, whereas the last concerns the semantics of the associated discourse. Much of the discussion about the ontological commitments of MIS, like the debate over fictional characters, focuses on semantic issues. However, in my view semantic considerations provide little motivation for realism. I have argued elsewhere that realism about fictional characters offers little advantage over anti-realism concerning the semantics of fictional discourse (Friend 2007). I would say the same about the semantics of model discourse. I return to this point later. The far more interesting question is whether we should adopt realism about model systems to explain the epistemology of modeling. I propose to address this question by considering deflationary forms of MIS—those that resist postulating model systems as real entities—and asking what, if anything, these accounts leave out.

4.3  Deflationary MIS

A deflationary approach accords nicely with the idea behind MIS that model systems are imagined concrete systems. Ordinarily, when we describe something as "imagined" we imply that it does not exist. Moreover, philosophers of science are typically inclined toward a naturalistic picture of the world, incompatible with treating a model system "as a shadowy additional graspable thing, either of an abstract platonist kind or a modally-realist kind" (Godfrey-Smith 2009, 108). The concern is just that a deflationary account is insufficient to explain the epistemic value of modeling. To address this concern I will look at the deflationary versions of MIS advocated by Frigg (2010a, 2010b), Levy (2012, 2015), and Toon (2012). Though they differ in other ways, all take as their starting point Kendall Walton's (1990) account of fiction. Walton proposes a theory of fiction or (equivalently) representation, a category that encompasses far more than works of fictional literature. Walton defines representations as objects that have the function of serving as "props in games of make-believe." The allusion to children's games, where the props

The Fictional Character of Scientific Models  109 may be dolls or toy trains, is intentional, for there is continuity between the two kinds of game. The basic idea is that representations prescribe imaginings about their content; for example a story invites us to imagine that certain people did certain things. Imagining what is prescribed is participating in the official game of make-​believe authorized by the prop. In a game some moves are licensed and others are not. Though nothing prevents me from imagining that Clarissa Dalloway zips around town on a small drone, for example, there is a clear sense in which this would be an unauthorized response to Woolf ’s novel. This explains why it is not true-​in-​the-​fiction or (in Walton’s terminology) “fictional” that Clarissa travels by drone. Significantly, this approach rejects the traditional association between representation and denotation.13 A work can be a representation without referring to anything, so long as it is a prop in certain kinds of games of make-​believe. Some props are reflexive, prescribing imaginings about themselves (Walton 1990, 117). A doll does not simply prompt Alex to imagine a baby; it prompts him to imagine, of the doll itself, that it is a baby. When Alex places the doll in the bath, he imagines himself to be placing the baby in the bath. Although some fictional texts are reflexive props—​such as Swift’s Gulliver’s Travels—​more typically they prescribe imaginings directed at something else. This may be a real entity, such as a historical person or actual place; when fictions refer to such individuals, they are objects of representation (Walton 1990, ch. 3). Here the prescription is to engage in singular thought, that is, de re imagining about the object. Orwell’s 1984 prescribes imagining of London that it is the capital of Airstrip One, and Tolstoy’s War and Peace prescribes imagining of Napoleon that Pierre wants to assassinate him. Fictions also prescribe imaginings apparently about nonexistent individuals. Orwell’s novel invites us to imagine such invented characters as Winston and Julia, and Tolstoy’s novel not only Pierre but Natasha, Prince Andrei, and so on. For Walton, these prescriptions involve only the pretense of reference or existence; in reality they are not directed at anything, and they express no propositions (1990, ch. 10). But this does not mean that an account of the prescriptions cannot be given. We certainly imagine something when we imagine that Clarissa Dalloway plans a party, and explicating this content is a task for a more general semantics of intentionality.14 Whatever the explanation, it will make no appeal to a real Clarissa. 13 The importance of this point is not always appreciated by advocates of MIS. An exception is Toon (2012). 14 I develop an account consistent with a deflationary approach in Friend 2011.

110  The Scientific Imagination For deflationary MIS, model specifications constitute fictions or representations in Walton’s sense: they function as props in games of make-​believe. For instance, descriptions of the Lotka-​Volterra model prescribe imagining a population of predators and a population of prey whose growth rates and death rates are specified by a set of equations. John Maynard Smith makes the invitation to imagine explicit in presenting a model of RNA replication:  “Imagine a population of replicating RNA molecules. There is some unique sequence, S, that produces copies at a rate R; all other sequences produce copies at a lower rate, r” (Maynard Smith 1989, 22, quoted in Weisberg 2013, 48). Advocates of MIS do not deny that in these and many other cases mathematics plays an essential role. Instead, the equations that are part of many model specifications serve to specify the features of the model system to be imagined, which may include instantiating certain mathematical structures. The key to modeling remains the role of models as props in games of make-​believe. From this perspective, material models are analogous to model specifications, since both are representations that function as props prescribing imaginings. A scale model of a bridge that is one meter long might prescribe imagining a bridge that is one thousand meters long (where the scale functions as a principle of generation), just as a specification of a simple pendulum prescribes imagining a pendulum whose bob is a point mass (Toon 2012, 37–​40). And like children’s toys, material models may be reflexive props. A ball-​and-​stick model of a water molecule does not just prescribe imagining that there is a water molecule; given the conventional understanding of such models, it prescribes imagining, of itself, that it is a water molecule.15 When we manipulate the model, we imagine ourselves to be manipulating the molecule. Theoretical models are more akin to written fictions that prescribe imaginings about something other than themselves. Toon argues that in many cases, this something is real. Instead of taking a specification of the Newtonian solar system model to represent a hypothetical system, for example, he argues that we should treat it “as prescribing us to imagine things about the sun and earth themselves” (Toon 2012, 56; emphasis in original). The sun and earth are thus objects of representation in Walton’s sense; we are to imagine, of the actual celestial bodies, that they are homogeneous spheres and so on. If this is right, the Newtonian solar system model represents the solar system

15 Toon (2012, ch. 5) provides empirical evidence that this is how we treat such models.

The Fictional Character of Scientific Models  111 directly, rather than via a hypothetical model system, so the imaginings are about the real system. Toon and Levy take this kind of direct representation to be central to modeling.16 When scientists want to understand a real-​world phenomenon, they construct model specifications that invite imagining the target in ways contrary to fact. Direct representation presents no difficulties for a deflationary approach, since the targets are ordinary real-​world entities. Like fictions, though, many model specifications seem to invite imaginings about systems that do not exist. The Bohr model of the atom is not meant to describe a particular real-​world atom, and a description of the simple pendulum need not designate a particular real-​world pendulum. The same may be said of mechanical models of the ether, architectural models for buildings never constructed, models of generalized phenomena such as evolution, and models of unrealized scenarios such as three-​sex biology.17 None of these models has an object in Walton’s sense. For Toon, the relevant model specifications simply do not prescribe imaginings about any real thing. The puzzle of how to understand imaginings directed at what does not exist is, he points out, a general problem of intentionality rather than a problem specific to scientific modeling (Toon 2012, 82). Whatever anti-​realist explanation works in other cases should also work for model systems. Levy argues, by contrast, that apparently objectless model specifications should be reinterpreted as inviting imaginings about real-​world targets: “All we have are targets, imaginatively described” (2015, 791). If Levy is right, deflationary MIS is straightforward, since the objects of imagining are always ordinary existents. However, this view is controversial. More importantly, even if Levy is correct that there are no targetless models, there is a contrast between an “object of representation” in Walton’s sense—​a particular referent of singular imaginings—​and a target, which may be more general. This is clear in the fiction case. Works of fiction that invite imaginings about fictional characters can also represent or “target” (without referring to or inviting singular imaginings about) real-​world individuals. For example, in Oliver Twist Dickens uses the title character to represent real orphans in Victorian England. It is by inviting readers to imagine Oliver’s travails that Dickens throws light on the plight of real orphans. So if the analogy with fiction holds, we should expect model specifications that prescribe singular imaginings about real entities and model specifications that do not. 16 Weisberg counters that this view reduces all modeling to idealization and thus fails to recognize modeling as a distinctive practice (2013, 64). See Levy 2013 for a reply. 17 The first three examples come from Toon 2012 and the last from Weisberg 2013.

The important question in either case is how imagining as prescribed results in knowledge of target systems. I consider how deflationary MIS addresses this challenge in the next section.

4.4  The Epistemological Challenge

Recall that there are two epistemically significant aspects of modeling that appear to imply realism: First, scientists seem to investigate models and thereby discover new features of model systems. Second, they seem to compare model systems to real-world phenomena. These two elements of the practice are closely related, since the point of investigating the model is to shed light on the target system. According to deflationary MIS, there are no model systems; there are only representations prescribing imaginings, often about what does not exist. How can engaging in such imaginings generate discoveries about model systems and knowledge of real-world phenomena? For deflationary MIS the starting point for answering this question is understanding how readers of fiction discover features of a "fictional world." As noted previously, much of what a representation makes fictional is implicit, so readers must fill in the gaps. To do this they rely on what Walton calls principles of generation. In some games these are stipulations: "Anyone whose feet touch the floor has landed in the sea." With complex representations such as works of literature they are the conventions of the practice, which competent participants have internalized. Although some general principles can be identified—such as the Reality Principle, which authorizes reliance on real-world truths to fill in the background of a story—Walton argues that the principles of generation are too varied to subsume under a few rules (1990, ch. 4). For example, experienced readers of whodunits immediately recognize that the obvious suspect is not guilty, and experienced viewers of screwball comedies know that cleverly insulting banter signals love. The ability to deploy such principles appropriately constitutes competence with the relevant sort of representation. Correspondingly, for deflationary MIS investigating the model system is not investigating a real system. Instead, it is making inferences from the explicit content of the model specification to conclusions about what is fictional (prescribed to be imagined) by deploying appropriate principles of generation. Like fictions, model specifications provide only a limited number of details explicitly, so scientists must infer many other properties of model

The Fictional Character of Scientific Models  113 systems. Frigg uses his example of “the solar system is stable,” which is fictional according to the Newtonian model because it is implied by a combination of the model specification and “the laws and principles assumed to hold in the system (the laws of classical mechanics, the law of gravity, and some general assumptions about physical objects)” (2010a, 118). Exactly how to spell out the principles of generation that determine what is fictional in a model will be at least as complex as the corresponding task for fiction (see section 4.6). Still, there is systematic agreement about the features of model systems among practitioners, suggesting that scientists have internalized certain rules and conventions that enable them to make the relevant inferences. A worry about this picture is that different scientists may fill in the gaps in different ways, and a high degree of variation is inconsistent with the epistemic role models play in science (Weisberg 2013, 57). However, Walton invokes prescriptions to imagine in explaining what is fictional—​what is to be imagined, not what anyone actually imagines. These are distinct. First, readers rarely imagine everything that is fictional; few readers will ever consider the proposition that Anna Karenina has a liver. A  prescription to imagine should not be construed as an unconditional mandate. Rather, a work prescribes imagining P if, given the choice between imagining P and imagining not-​P, one should imagine the former.18 At the same time, readers imagine a great deal that is not fictional, that is specific to their own games. For instance, I may imagine Clarissa one way while you imagine her a different way, both ways compatible with the text. If no specific way of imagining certain features of the character is prescribed for every authorized game, none is fictional and the matter is left indeterminate. Applied to modeling, the upshot is that there are facts of the matter about the features of the model system independent of particular scientists’ imaginings. These are determined by the model specification in combination with appropriate principles of generation. For example, part of the specification of the Lotka-​Volterra model is a set of equations. It will be fictional in any version of the model that the model system conforms to these equations, whether anyone actually works out the mathematical conclusions or not. Note that this applies both when we construe the model as inviting imaginings directly about actual target populations—​such as predator and prey species in the Adriatic, Volterra’s original concern—​or imaginary populations. In either case scientists, by using computers to calculate results, discover

18 See Friend 2014b; 2017.

114  The Scientific Imagination what is to be imagined. Other aspects of the model system will be determined in other ways, just as inferences about non-​mathematical models must be made deploying non-​mathematical principles of generation. The important point is that what is fictional in the model is not determined by the inferences scientists actually make but by the inferences that are licensed by the model specification and principles of generation. At the same time, much will be left indeterminate. A specification of the Lotka-​Volterra model may be silent as to the species of the predators and prey. Some scientists may imagine them to be sharks and fish, foxes and rabbits, or what have you. Although there is indeterminacy, it is not the case that anything goes. Given that this is a model of predator-​prey interaction, scientists are not authorized to imagine that the predators are rabbits and the prey sharks. The principles of generation determined by the model constrain what is fictional. The next question is how to explain applications to the real world. On the indirect picture suggested by Godfrey-​Smith and Frigg, knowledge about real phenomena is acquired by comparing model systems to target systems, but if there are no model systems, such comparisons are impossible. Correlatively, comparative statements such as “This population of rabbits is dying off faster than the population in the Lotka-​Volterra model” cannot literally be true. Frigg offers a solution to the semantic problem that can be applied to the epistemological concern. He claims that we can “rephrase” apparent comparisons between model systems and target systems as comparisons between properties—​specifically, the features we imagine the model system to have and the corresponding features possessed by the target system (2010b, 263).19 The implication is that modeling does teach by comparison, though of properties rather than objects. Godfrey-​Smith objects to Frigg’s account that many of the properties attributed to model systems will be uninstantiated, such as perfect sphericality for planets or infinity for populations, and therefore equally mysterious (Godfrey-​Smith 2009, 113).20 Indeed some properties attributed to model systems may be metaphysically impossible, such as being a point mass for the bob of the simple pendulum. In that case it is not clear what is being compared. 19 Frigg (2010b) appears to suggest a fictional operator account of comparisons. Presumably his proposal could be reformulated with a pretense account, which is preferable to an operator approach (see Everett 2013, esp. 46‒53). 20 The point is not that the model system fails to instantiate the properties—​which is entailed by the deflationary assumption that there are no model systems—​but rather that the properties are uninstantiated by anything.
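For concreteness, the set of equations that the Lotka-Volterra specification is said to include can be written out explicitly. The chapter does not fix a particular variant, but the model is standardly specified as

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where x is the size of the prey population, y the size of the predator population, and α, β, γ, δ are positive parameters governing growth, predation, and death rates. On the view sketched above, these equations do not describe any real pair of populations; they specify what is to be imagined, and further properties of the imagined populations (for instance, the periodic rise and fall of the two population sizes) are among the fictional truths that scientists work out from the specification together with the principles of generation.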

The Fictional Character of Scientific Models  115 Advocates of the direct view of modeling eschew any appeal to comparisons. Levy (2015) instead invokes Yablo’s (2014) concept of “partial truth” to explain how imagining in response to model specifications produces knowledge about target systems. For example, a model that describes the motion of a body as sliding down a frictionless plane can represent truly the relationship between mass and velocity in the actual system even if it falsely represents the lack of friction (Levy 2015, 794). The Newtonian model can represent truly the relationship between the mass and motion of the planets despite falsely representing the distribution of mass. However, even if the concept of partial truth can be elaborated sufficiently to explain these examples, it does not appear suited to cases where the imagining seems to be directed at nonexistent model systems. Here there are no partial truths in the relevant sense. We still want to know how, by inviting us to imagine such systems, representations manage to teach us about the world. Godfrey-​Smith casts doubt on the prospects for a deflationary explanation of the epistemology of modeling, on the grounds of the “unreasonable effectiveness” it shares with mathematics: “By means of modeling we learn a great deal about how things do and can work in the world. A description of the coordination and elaboration of imaginings cannot be a complete explanation” (Godfrey-​Smith 2009, 109). In the mathematics case, epistemic success is taken as an argument for realism about mathematical objects. In the next section I consider whether the same argument can be made on behalf of fictional realism about model systems.
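Levy's frictionless-plane example can be made concrete with a minimal sketch; the particular setup is my illustration rather than Levy's own working. Suppose a block of mass m is released from rest and slides a distance d down a plane inclined at angle θ. The idealized, frictionless model and the more realistic model with sliding friction (coefficient μ, assuming tan θ > μ so that the block slides) yield

\[
v_{\text{ideal}} = \sqrt{2 g d \sin\theta}, \qquad v_{\text{real}} = \sqrt{2 g d\,(\sin\theta - \mu \cos\theta)}.
\]

Neither expression contains m, so the idealized model misrepresents the friction while truly representing the fact that the final speed does not depend on the mass. This is one simple way in which such a model can count as "partially true" of its target.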

4.5  Fictional Realism About Model Systems

There are two reasons someone might think that deflationary MIS is inadequate to the epistemology. First, the view entails that we are not discovering genuine features of a model but only finding out what we are supposed to imagine; this does not appear sufficiently objective to do justice to the practice. Second, we require an explanation of how imagining in response to model specifications generates knowledge applicable to the real world. The standard explanation of knowledge from modeling invokes comparison, but on a deflationary approach there is nothing to compare. For these reasons treating model systems as real may be tempting. However, we will find that no form of fictional realism illuminates the epistemology. Whether or not there are

116  The Scientific Imagination model systems, advocates of MIS must provide a distinct account of how we learn about the real world through modeling. Versions of fictional realism can be distinguished by the sort of entities they postulate, in particular whether they are concrete or abstract.21 Given the emphasis of MIS on imagining concrete systems, the place to start is with a version of realism according to which fictional characters are concrete entities. Neo-​Meinongians such as Parsons (1980) who take this approach construe characters as nonexistent concreta, whereas modal realists such as Lewis (1983) take them to be non-​actual inhabitants of possible worlds. The appeal of these theories is that they postulate entities that seem to have properties in the same sense as existent or actual individuals do. If model systems fit into one of these categories, the idea of learning through comparison would be vindicated. Comparisons between concrete objects in respect to their ordinary properties appear straightforward. There are, however, reasons to resist concrete realism. The most obvious is the ontological commitment, either to an infinity of possible worlds or to an infinite number of nonexistent concrete objects. In fact, the commitment is stronger than this, since many model specifications, like works of fiction, describe impossible scenarios, so the commitments may include impossible objects or worlds, consistent with some accounts of fiction (e.g., Priest 2005). A second concern is that concrete realism fails to do justice to the very features of practice that motivate MIS. In particular, concrete realism has difficulty accommodating the external perspective. According to modal realism, for example, Emma Bovary is not a fictional character at all; she is a possible person (or set of possible persons) who possesses the properties determined in some way by the novel (Lewis 1983, 263).22 Similarly, the Bohr model of the atom is not a model or model system but a possible atom (or set of possible atoms) that conforms in the right way to the model specification (Contessa 2010, 222). A similar challenge faces the neo-​Meinongian approach.23 Suppose, however, that we set aside these worries. Let us assume that we have at our disposal the full plenitude of nonexistent concreta and an infinity of possible or impossible worlds. The question before us is what role any such 21 Contessa (2010) claims that both abstract entities and concrete (possible) individuals play a role in modeling. The criticisms that follow apply equally (perhaps doubly) to this account. 22 In the fiction case the determination in question is not simply a matter of satisfying the descriptions in the novel, but something more complex (see Lewis 1983 and Woodward 2011). However, for model specifications the simpler formulation is commonly assumed. 23 Neo-​Meinongians address the external perspective by distinguishing between kinds of properties or ways of having properties. I discuss this proposal later.

The Fictional Character of Scientific Models  117 entities might play in the epistemology of modeling for an advocate of MIS. The answer is none. The first reason can be traced to an assumption common to both forms of concrete realism: that whenever authors or scientists specify a set of properties, there is an entity—​whether a nonexistent concrete object or a set of (im)possible individuals—​individuated by those properties. (I use the term “entity” for all of these options.) In fact, according to these views, there is an entity corresponding to every set of properties. When authors attribute various properties to a character in writing, they merely specify one such entity among the plenitude. Had the author set down different descriptions, she would pick out a different entity. Suppose we wish to know which entity is Emma Bovary. The answer for a concrete realist is whichever nonexistent object or inhabitant of an alternative world has the properties specified in Madame Bovary. To discover Emma’s properties—​which is the same as discovering which of all the available entities is Emma—​requires figuring out which properties are specified by the novel. To do this, readers must make inferences from the explicit text using relevant principles of generation. The same applies in modeling. For any model specification, there is a model system individuated by all and only the properties specified.24 So for concrete realists, “investigating a model system” is just interpreting the model specification using principles of generation, thereby simultaneously determining which of the available entities counts as the model system. With respect to discovering the features of the model system, concrete realism has no advantage over deflationary MIS. On the other hand, concrete realism seems to have an advantage in applying these discoveries to real-​world phenomena. Because the simple pendulum is a concrete entity, it appears to swing in just the same sense as an actual pendulum, allowing for straightforward comparisons of motion. In fact, however, it is not at all clear that this is so. The following argument is from Anthony Everett (2013, 171–​172).25 Consider such properties as being colored or having a non-​zero mass. If the entities postulated by concrete realists are colored, they should reflect light, and if they have non-​zero mass, they should be detectable by their gravitational fields. Needless to say, they do not 24 An implication is that different specifications pick out distinct model-​systems. There is not a single Lotka-​Volterra model, but a different one corresponding to each specification. I  set this problem aside for the sake of argument. 25 Everett’s argument is directed against neo-​Meinongianism but can be extended to modal realism.

118  The Scientific Imagination reflect light and are not so detectable. The neo-​Meinongian could reply that nonexistent objects reflect nonexistent light and produce nonexistent gravitational fields, and the modal realist that possible individuals reflect possible light and produce possible gravitational fields.26 But these properties are not the same as those possessed by actual, existent concrete objects. An alternative is to distinguish ways of possessing properties. Some neo-​Meinongians draw a distinction between encoding properties and exemplifying them, so that nonexistent objects might encode the ordinary property of having non-​ zero mass while existent objects exemplify (have in the ordinary sense) the same property (Zalta 1983). Whatever the unexplained primitive notion of encoding turns out to be, the entities fail to possess properties in the same sense as existent, actual objects. Either way, comparisons between model systems and target systems are not straightforward. Everett suggests a way of construing encoded properties consistent with MIS: they are the properties a fiction invites us to imagine exemplified by a character (2013, 173). But that is no different from the deflationary position. The advocate of MIS has little reason to postulate nonexistent objects or nonactual worlds. We turn next to realist accounts that take fictional characters to be existent abstract objects. Versions of this position differ in how they individuate the objects, whether by internal or by external properties—​that is, the properties the character has from each perspective. From an internal perspective, Emma Bovary has such properties as “being a woman,” “being married to a doctor,” and “being an adulteress”; these are the same properties that concrete realists would attribute to her. From an external perspective, she has such properties as “being created by Flaubert” and “being Flaubert’s most famous character.” For realists who individuate characters by internal properties, fictional characters are person-​kinds, roles, or character-​types (respectively, Wolterstorff 1980, Currie 1990, and Lamarque 2010; see also Zalta 1983). Call this type realism. Applied to model systems, the position is similar to Giere’s (1988) proposal that models are description-​fitting abstract objects. It is therefore subject to the same objection:  namely, that abstract objects cannot possess internal properties in the same sense as concrete objects. Just as an abstract Emma Bovary is not a woman in the same sense as Marie Curie, an abstract pendulum does not swing in the same sense as the concrete 26 Compare the distinction between nuclear and extra-​nuclear properties invoked by some neo-​ Meinongians (Parsons 1980).

The Fictional Character of Scientific Models  119 pendulum. Paul Teller addresses this objection by distinguishing ways in which objects possess properties; whereas “concrete objects HAVE properties . . . properties are PARTS of models” (2001, 399). This is essentially the distinction between exemplifying and encoding, and equally mysterious. But we could say that if model systems are construed as roles or types, the idea might be that the character-​type encodes concrete properties in the sense that any concrete individual who possesses the properties thereby fills the role or instantiates the type. In reading Madame Bovary we imagine a concrete human being filling the role delineated by Flaubert, and in grasping the simple pendulum model we imagine a concrete pendulum satisfying the model specification.27 Even if this solution is correct, though, it remains the case that neither the character nor the model system, qua abstract, has the specified properties in the way that concrete objects do. More importantly, anti-​realists could simply accept the existence of roles or character-​ types while denying that these are identical to characters or model systems. According to MIS, we treat model systems as concrete, so they instantiate types rather than being identical to them. A more popular version of realism maintains that fictional characters are existent abstract objects individuated by external properties, in the same ontological category as novels, plots, theories, and laws. These are real, constructed entities that are not identical to, or instantiated by, any concrete objects. According to most proponents of the view, fictional characters come into existence through the creative acts of authors; Thomasson (1999) thus describes them as abstract artifacts. On this view, there is no literal sense in which Emma possesses the properties attributed to her in Madame Bovary, such as being a woman or being an adulteress. Rather, readers are invited to pretend or imagine that she does. Thomasson proposes that such imaginings are directed at the real abstract object Emma.28 Just as War and Peace prescribes imagining, of the real Napoleon and Russia, that the one invades the other, Madame Bovary prescribes imagining, of the real Emma Bovary, that she (it?) commits adultery. This form of fictional realism—​call it external realism—​accords with the motivations behind MIS. Take the Bohr model of the atom. For the external 27 This interpretation was inspired by some points in Giere 2009, though I cannot attribute the view to him. 28 Others hold that the imaginings are not directed at anything; they involve the pretense that there are such concrete individuals, with characters playing a role only from the external perspective (e.g., Kripke 2013).

realist, this is an abstract object created by Niels Bohr's modifications of Rutherford's model, just as Emma Bovary is an abstract object created by Flaubert in writing Madame Bovary. Or the model could be construed as socially constructed, the product of the collective effort of scientists (thus the Rutherford-Bohr model), but still a created abstract entity (Giere 2009, 251). Scientists make straightforwardly true claims about the model—for instance, that it is used to make successful predictions about the absorption spectrum of the hydrogen atom. From the internal perspective, though, Bohr's model specification prompts us to imagine a concrete atom, not an abstract object, one constituted by a nucleus surrounded by orbiting electrons. Thus when we say "electrons follow classical trajectories," we make a claim that is merely fictional, merely true-in-the-model, but not actually true. In short, external realism captures the motivating idea of MIS, that imagining concrete systems is central to modeling, while taking the external perspective to specify what is genuinely true. Though it captures these aspects of the face-value practice, however, external realism will not help advocates of MIS to explain the epistemology of modeling. The difficulty is simple. According to external realism, model systems possess only external properties; but the properties that are epistemically relevant are the internal ones. For instance, when scientists draw conclusions about the rate at which prey are consumed by predators within the Lotka-Volterra model, they are not discovering features of a real model system, for these features are merely imaginary. There are no prey or predators, and thus no rate at which the former are consumed by the latter. So for external realists, "investigating the model system" cannot be construed as anything other than interpreting the model specification according to relevant principles of generation to determine what should be imagined—just as deflationary MIS maintains. External realism also has no advantage in providing an account of comparisons between model systems and target systems. The properties that provide the respects of comparison are, once again, internal. Since according to external realism these features are merely imagined, the account is in exactly the same position as deflationary MIS. Notice that in consequence, external realism offers little advantage over a deflationary approach when it comes to the semantics of comparative statements. "The rabbits in the Lotka-Volterra model are dying off faster than the rabbits in my garden" cannot literally be true, since the model does not contain any rabbits. The intuition that the statement is true must ultimately be explained by what the model specification prescribes imagining. It is worth

The Fictional Character of Scientific Models  121 pointing out that other forms of realism fare little better. On these views the statement turns out to be ambiguous. Either the rabbits in the model encode dying off whereas the rabbits in the garden exemplify it, or the model rabbits have a nonexistent or merely possible property of dying off whereas the garden rabbits have the existent or actual property. So realism does not necessarily offer a smoother semantic account than a deflationary approach (Friend 2007). Even if it did, though, there would be no epistemological argument in favor of adopting any of these versions of fictional realism about model systems. This is not to deny that there may be other reasons either for or against accepting possible worlds, nonexistent concreta, or various sorts of abstract objects. The claim is just that it would make no difference to the epistemology of modeling for an advocate of MIS.

4.6  Moving Forward

If this is right, advocates of MIS should set aside the debate over the ontology of model systems. Even if these are real entities of one sort or another, they are epistemologically superfluous.29 Instead, the focus should be on the key claim of MIS, that the practice of modeling essentially involves imagining concrete systems in response to model specifications analogously to the way that we imagine characters and events in response to works of fiction. Defending MIS against other accounts of modeling requires answering the question of how this sort of imagining produces knowledge of the real world. The upshot of the previous section is that model systems, thought of as real fictional entities, can play no role in the answer. By way of conclusion I briefly sketch the resources available to MIS in addressing the epistemological challenge. As we have seen, for any version of MIS, investigating a model system involves determining what a model specification prescribes imagining given appropriate principles of generation—whether or not the imagining is about anything real. To defend this position, advocates of MIS must provide detail about how this kind of determination works. What are the principles of generation? How do they take us from the model specification to conclusions
29 This is compatible with their having implications for the epistemology if they are posited for other reasons; the point is just that postulating them is not motivated by the epistemological worries raised by MIS.

about the model system? What justifies scientists in deploying the principles they do? Although arguments explicitly defending MIS have not addressed these questions in detail, there is plenty of material in related work upon which they can draw. For example, Morgan (2014), relying on her previous work (Morgan 2012), identifies two sources of principles of generation for the games of make-believe that economists play with models: "the medium or language of the model (whether it is made up of equations or diagrams or is an hydraulic model)" and "the economics subject knowledge which acts both as a constraint on, and a prompt for questions about, the kinds of things that are imagined to happen within that world in the model" (2014, 232). A specification of the Lotka-Volterra model might simply postulate a population of predators and a population of prey related by the relevant differential equations, without further explicit information. Ecologists know that the only solutions to the equations that are relevant are those that assume an integer number of members of each population, though this restriction is not explicit in the model specification (Weisberg 2013, 59).30 Relying on something like the Reality Principle, the biologists import information about real populations—which have no fractional members—in drawing their conclusions, because their purpose is to explain precisely those populations. Justifying the principles deployed and the inferences made in any particular instance of modeling requires detailed analysis of the specific case. For this reason Morgan defends her position by looking at how the principles operate for different economic models and why the models are successful. Notice, though, that the same is true whatever one's account of modeling. Weisberg, who takes the Lotka-Volterra model to be an interpreted mathematical structure, points out that the mathematical equations by themselves do not determine that the populations have an integer number of members. This fact must come from the interpretation, and the same question of how the interpretation is justified arises for Weisberg as for the advocate of MIS. Justification can only be given on a case-by-case basis, looking at the details of particular models and their fruitfulness. The more difficult challenge to MIS is explaining how determining these fictional truths and imagining as prescribed generates knowledge of the real world. If the argument in the preceding section is correct, the explanation cannot ultimately appeal to comparisons between model systems

30 Weisberg takes this example to be a problem for fictionalists, but I do not agree.

The Fictional Character of Scientific Models  123 and target systems, or indeed any other relation that presupposes the reality of model systems. In this respect advocates of a direct approach to modeling are right. However, if the direct approach relied only on the notion of partial truth invoked by Levy (2015), it would not be sufficiently general to account for the epistemological power of model specifications that invite imaginings about nonexistent model systems. There is, though, a more general account of how modeling yields knowledge of the world available to advocates of MIS. Bokulich (2011), Frigg (2010a), and Morgan (2014) each appeal to a “translation key” that takes one from information about the model to information about the world.31 This proposal is promising but requires development. First, the notion of a translation key must be articulated in the right way. For example, Frigg takes the translation to occur between model systems and targets (2010a, 128), but given the argument in section 4.5, it cannot be model systems playing this role. Bokulich says instead that “the translation key is from statements about the fictions to statements about the underlying structures or causes of the explanandum phenomenon” (2012, 735). If we treat “statements about the fictions” to mean fictional truths, this construal is on the right track. However, I suggest that a focus on statements is too narrow.32 As with maps, information need not be contained in sentence-​like structures. We should think of the translation key as taking us from representations of the model system to representations of the target. The next question is which representations are the ones between which we translate. In an instance of learning by an individual, the answer is straightforward:  mental representations. Consider a toy explanation of how we learn by reading works of fiction—​for instance, how we learn from Oliver Twist about the plight of real Victorian orphans.33 In reading the novel we first form a mental representation of what obtains in the “fictional world.” This mental representation constitutes the content we imagine in response to the fiction (Friend n.d.). The next step is selectively exporting aspects of this mental representation into our beliefs about Victorian orphans. For example, from our imagined representation of Oliver Twist as having been sent to a work farm we may form a belief that Victorian orphans were at least sometimes sent to work farms. If the belief is true and has been formed in the right 31 Though he does not use the same terminology, Suárez’s (2009b, 2010) proposal that scientific fictions function as rules of inference is closely related. 32 Nothing Bokulich says commits her to disagreeing with this point. 33 The following is the basic account given in Friend 2006. I elaborate on it in Friend n.d.

124  The Scientific Imagination way, we will have acquired knowledge (Friend 2014a). Part of what defines “the right way” is the utilization of appropriate principles of export, which determine how we move from the representation of the fictional world to beliefs about the real world. In the case of Oliver Twist, the principles derive from our knowledge of Dickens’s methods and purposes and his reliability in describing the contemporary situation. Principles of export can be compared to the translation keys discussed by philosophers of science. And I think something like the picture in the previous paragraph can be used by advocates of MIS when we focus on the individual scientist, who learns by deploying principles of generation to determine the features of the model system to be imagined, and then infers features of the target by using principles of export. However, although learning is a psychological process, the epistemological value of modeling is not restricted to the mental states of the individual scientist. For the knowledge acquired through modeling to play a role in the social practice of science, it must be available through public representations. From this perspective a key enables translation between the content of a model specification—​ thought of abstractly as what is to be imagined by anyone engaged with the model—​and the content of collective beliefs about the target. Making sense of this claim requires accounts of collective beliefs and of semantic content, and no such accounts will be uncontroversial.34 But this is a general problem, not one specific to modeling. For either the intrapersonal or interpersonal case the question is how to justify the translation keys. As with the justification of principles of generation, the answer turns on the particularities of the model. For some models considerations similar to those in the Dickens case may be important, such as historically oriented economic models (Morgan 2014). In many other cases, the factual reliability of the model’s creator will be irrelevant and different considerations will play the justificatory role. For example, Bokulich (2009) details the experimental evidence that has convinced physicists of a semiclassical explanation of absorption spectra for certain elements, which combines the (true) assumption that electrons in highly excited atoms function as waves exhibiting interference with the (false) assumption that they travel in classical trajectories. No other explanation generates predictions as

34 For instance, semantic content is sometimes identified with sets of possible worlds. This view is unlikely to appeal to the advocate of MIS unless the reference to possible worlds can be given a deflationary explanation.

precise or as much in agreement with experimental data. Once again, the justificatory work is in the detailed analysis of particular cases. To summarize, the advocate of MIS need not appeal to model systems construed as real entities in order to explain the epistemological value of modeling. What is necessary instead is a detailed examination of principles of generation and translation keys across different instances of modeling, with attention to the ways in which concrete imaginings play a role in elaborating fictional truths about models and selectively exporting them to beliefs about the real world. For purposes of this project the ontology is simply irrelevant.
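To fix ideas, here is a deliberately schematic toy sketch—my own illustration, not anything proposed in the chapter—of the two-step pattern just summarized: principles of generation settle what is fictionally true in the model, and principles of export (a translation key) turn selected fictional truths into claims about the target. Every name in the snippet (ModelSpec, generate, export, and the strings standing in for contents) is a hypothetical placeholder.

```python
from dataclasses import dataclass, field

# A schematic toy of the two-step pattern described above (all names are
# illustrative placeholders): principles of generation fix the fictional
# truths; a translation key exports selected ones as claims about the target.

@dataclass
class ModelSpec:
    postulates: set = field(default_factory=set)   # what the specification states
    background: set = field(default_factory=set)   # imported, e.g. via the Reality Principle

def generate(spec: ModelSpec) -> set:
    """Principles of generation: fictional truths = explicit postulates + imported background."""
    return spec.postulates | spec.background

def export(fictional_truths: set, key: dict) -> set:
    """Principles of export / translation key: map selected fictional truths to target claims."""
    return {key[s] for s in fictional_truths if s in key}

# The Lotka-Volterra case discussed earlier, in caricature.
spec = ModelSpec(
    postulates={"predator and prey populations obey the Lotka-Volterra equations"},
    background={"population sizes are whole numbers"},
)
key = {"population sizes are whole numbers":
       "only integer-valued solutions bear on real predator-prey populations"}

print(export(generate(spec), key))
```

The point of the sketch is only structural: nothing about real populations is read off the equations alone; it is the imported background (here, the whole-numbered-populations constraint) together with the key that licenses the exported claim, and justifying both remains a case-by-case affair.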

Acknowledgments I would like to thank the audience at the Conference on Fiction and Scientific Models at the Institute of Philosophy (June 2015) for feedback on an earlier version of the paper; Robert Northcott, Mauricio Suárez, and Adam Toon for illuminating discussion of relevant ideas; and the editors of this volume for helpful comments on a previous draft.

References
Bokulich, A. (2009). "Explanatory Fictions." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by M. Suárez, 91–109. New York and Oxford: Routledge.
Bokulich, A. (2011). "How Scientific Models Can Explain." Synthese 180, no. 1: 33–45.
Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.
Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. New York: Cambridge University Press.
Contessa, G. (2010). "Scientific Models and Fictional Objects." Synthese 172, no. 2: 215–229.
Currie, G. (1990). The Nature of Fiction. Cambridge: Cambridge University Press.
Downes, S. M. (1992). "The Importance of Models in Theorizing: A Deflationary Semantic View." PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1992: 142–153.
Everett, A. (2013). The Nonexistent. Oxford: Oxford University Press.
Fine, A. (1993). "Fictionalism." Midwest Studies in Philosophy 18, no. 1: 1–18.
French, S. (2010). "Keeping Quiet on the Ontology of Models." Synthese 172, no. 2: 231–249.
Friend, S. (n.d.). Matters of Fact and Fiction. Oxford: Oxford University Press.
Friend, S. (2006). "Narrating the Truth (More or Less)." In Knowing Art: Essays in Aesthetics and Epistemology, edited by M. Kieran and D. Lopes, 43–54. Dordrecht: Springer.
Friend, S. (2007). "Fictional Characters." Philosophy Compass 2, no. 2: 141–156.
Friend, S. (2011). "The Great Beetle Debate: A Study in Imagining with Names." Philosophical Studies 153, no. 2: 183–211.

Friend, S. (2014a). "Believing in Stories." In Aesthetics and the Sciences of Mind, edited by G. Currie, M. Kieran, and A. Meskin, 227–248. Oxford: Oxford University Press.
Friend, S. (2014b). "Walton, Kendall." In The Encyclopedia of Aesthetics, 2nd ed., edited by M. Kelly, 253–257. New York: Oxford University Press.
Friend, S. (2017). "The Real Foundation of Fictional Worlds." Australasian Journal of Philosophy 95, no. 1: 29–42.
Frigg, R. (2010a). "Fiction and Scientific Representation." In Beyond Mimesis and Convention: Representation in Art and Science, edited by R. Frigg and M. Hunter, 97–138. Dordrecht: Springer.
Frigg, R. (2010b). "Models and Fiction." Synthese 172, no. 2: 251–268.
Giere, R. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Giere, R. (2009). "Why Scientific Models Should Not Be Regarded as Works of Fiction." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by M. Suárez, 248–258. New York: Routledge.
Godfrey-Smith, P. (2006). "The Strategy of Model-Based Science." Biology and Philosophy 21, no. 5: 725–740.
Godfrey-Smith, P. (2009). "Models and Fictions in Science." Philosophical Studies 143, no. 1: 101–116.
Kripke, S. A. (2013). Reference and Existence: The John Locke Lectures. New York: Oxford University Press.
Kroon, F., and Voltolini, A. (2011). "Fiction." In The Stanford Encyclopedia of Philosophy (Fall 2011 ed.), edited by Edward N. Zalta.
Lamarque, P. (2010). Work and Object. Oxford: Oxford University Press.
Levy, A. (2012). "Models, Fictions, and Realism: Two Packages." Philosophy of Science 79, no. 5: 738–748.
Levy, A. (2013). "Anchoring Fictional Models." Biology and Philosophy 28, no. 4: 693–701.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172, no. 3: 781–798.
Lewis, D. (1983). "Truth in Fiction." In Philosophical Papers, 1:261–280. New York: Oxford University Press.
Magnani, L. (2012). "Scientific Models Are Not Fictions." In Philosophy and Cognitive Science, edited by L. Magnani and L. Ping, 1–38. Dordrecht: Springer.
Morgan, M. S. (2004). "Imagination and Imaging in Model Building." Philosophy of Science 71, no. 5: 753–766.
Morgan, M. S. (2012). The World in the Model: How Economists Work and Think. Cambridge; New York: Cambridge University Press.
Morgan, M. S. (2014). "What If? Models, Fact and Fiction in Economics." Journal of the British Academy 2: 231–268.
Parsons, T. (1980). Nonexistent Objects. New Haven, CT: Yale University Press.
Priest, G. (2005). Towards Non-Being: The Logic and Metaphysics of Intentionality. Oxford; New York: Oxford University Press.
Smith, J. M. (1989). Evolutionary Genetics. Oxford: Oxford University Press.
Suárez, M. (Ed.). (2009a). Fictions in Science: Philosophical Essays on Modeling and Idealization. New York: Routledge.
Suárez, M. (2009b). "Scientific Fictions as Rules of Inference." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by M. Suárez, 158–178. New York: Routledge.

Suárez, M. (2010). "Fictions, Inference, and Realism." In Fictions and Models: New Essays, edited by J. Woods, 225–245. Munich: Philosophia Verlag.
Teller, P. (2001). "Twilight of the Perfect Model Model." Erkenntnis 55, no. 3: 393–415.
Thomasson, A. L. (1999). Fiction and Metaphysics. New York: Cambridge University Press.
Thomson-Jones, M. (2010). "Missing Systems and the Face Value Practice." Synthese 172, no. 2: 283–299.
Toon, A. (2012). Models as Make-Believe: Imagination, Fiction, and Scientific Representation. Basingstoke: Palgrave Macmillan.
Vaihinger, H. (2009). The Philosophy of "As If." Mansfield Center, CT: Martino Fine Books.
Walton, K. L. (1990). Mimesis as Make-Believe: On the Foundations of the Representational Arts. Cambridge, MA: Harvard University Press.
Weisberg, M. (2007). "Three Kinds of Idealization." Journal of Philosophy 104, no. 12: 639–659.
Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. New York: Oxford University Press.
Wolterstorff, N. (1980). Works and Worlds of Art. Oxford: Clarendon Press.
Woodward, R. (2011). "Truth in Fiction." Philosophy Compass 6, no. 3: 158–167.
Yablo, S. (1993). "Is Conceivability a Guide to Possibility?" Philosophy and Phenomenological Research 53, no. 1: 1–42.
Yablo, S. (2014). Aboutness. Princeton, NJ: Princeton University Press.
Zalta, E. (1983). Abstract Objects: An Introduction to Axiomatic Metaphysics. Dordrecht: Springer.

5
Models and Reality
Stephen Yablo

The title of this chapter comes from a well-​known paper of Putnam’s (1980), but the content is very different. Putnam uses model theory to cast doubt on our ability to engage semantically with an objective world.1 For him, the role of mathematics is to prove this pessimistic conclusion. I, on the other hand, am wondering how models can help us to engage semantically with the objective world. Mathematics functions for me as an analogy. Numbers, among their many other accomplishments, boost the language’s expressive power; they give us access to recondite non-​numerical facts. Models, among their many other accomplishments, do the same thing. They provide a way of expressing facts about motion due to gravity, say, or the stabilization of sex ratios. Such, anyway, is the analogy to be explored in this chapter.

5.1  Applications Mathematics is useful in physics. Frege was hugely impressed by this: “It is applicability alone that raises arithmetic from the rank of a game to that of a science.”2 Wigner found it mysterious, which is why he speaks of “the unreasonable effectiveness of mathematics” in physics.3 The mystery has only deepened with the attention in recent years to the ways in which math can be effective. Why should objects causally disconnected from the physical be so helpful in representing physical phenomena and making physical theories tractable? Why should math be such a good source of physical hypotheses? Why should it shed light on physical outcomes?

1 Specifically, Putnam uses the Löwenheim-Skolem theorem. 2 Frege 1997, 366. 3 Wigner 1960, 1. Frege refers near the passage quoted to the "miracle of number."

Models and Reality  129 Before digging into these questions, consider models of the type appealed to in the natural sciences. They too are helpful in representing physical phenomena. They too make complex theories tractable. They too suggest hypotheses, and are apt to be cited in explanations. Why is there not a problem of the unreasonable effectiveness of models, as there is for mathematical objects? The simplest answer is that we are dealing with a selection effect. Of all the technically eligible models that could be invoked, we focus, naturally, on the useful ones. But if that solved the problem for models, why not also for mathematical objects? The same selection effects are at work with them.4 Numbers are important because of their relation to counting and cardinality. Geometry grew out of land measurement problems, as the name suggests. Real numbers owe at least some of their prominence to being “complete” in the way space and time are thought to be complete. Calculus came to the fore in connection with Newtonian mechanics. Cantor’s theory of the infinite grew out of calculus problems to do with integrability. Add to this that scientific models are causally independent, in most cases, of the phenomena they model, and the contrast is hard to make out. The utility of models begins to seem as puzzling as that of mathematical objects.

5.2  Isms The dialectic is not so different, either. One popular theory of mathematical applications is instrumentalism. Numbers are useful, according to instrumentalists such as Field, not for what they let us say, but for what they let us do: Even someone who doesn’t believe in mathematical objects is free to use mathematical existence-​assertions in a limited context:  he can use them freely in deducing nominalistically-​ stated consequences from nominalistically-​stated premises. (Field 1980, 14)

What about models? They too can be used in a purely instrumental way. There are
4 Balaguer 1998; Pérez Carballo 2014.

"probing models," "developmental models," "study models," "toy models," [and] "heuristic models." The purpose of such model-systems is not to represent anything in nature; instead they are used to test and study theoretical tools that are later used to build representational models. (Frigg 2010b, 123)

Another leading approach to applications is structuralism. Structuralism about applications has been advocated in the philosophy of mathematics by Shapiro (1983), and in the theory of modeling by van Fraassen (2006) and others (e.g., Andreas and Zenker 2014). Fictionalism is an old standby where mathematical applications are concerned (Balaguer 1996; Leng 2010; Papineau 1988). It has recently made the jump to models: A natural first description of [frictionless planes] is as fictions. . . . They do not exist, but at least many of them might have existed, and if they had, they would have been concrete, physical things, located in space and time and engaging in causal relations. Though imaginary, these things are often the common property of a community of scientists. They can be investigated collaboratively. Surprising properties might be uncovered by one investigator after being denied by another. In their status, though not their role, they seem analogous to the fictions of literature. (Godfrey-​Smith 2009, 102)

Fictionalism has morphed in recent years into figuralism, which sees numbers as creatures of metaphor or (in Walton’s version) of prop-​oriented make-​believe (Yablo 2002, 2005). The make-​believe approach has been tried for models too. For Frigg, models are Waltonian games of make-​believe. A set of equations or a mechanism sketch is a prop that, together with the rules relevant for the scientific context, determines what those engaging with the model—​the game’s participants—​ought to imagine. . . . [The] text and equations aren’t, in this view, a description of an imaginary entity but a prescription to imagine a ring-​shaped embryo with the specified chemical makeup. Thus, there is no object . . . to which [the] equations somehow correspond. There are only inscriptions on a page which function as instructions for the imaginations of modelers. (Levy 2015, 789)

Levy too thinks of models as rules for the imagination. But the props, in his view, are the real-world target phenomena we are trying to understand. The role of the model is to portray a target as simpler (or just different) than it actually is. The goal of this special mode of description is to facilitate reasoning about the target. In Levy's picture, modeling doesn't involve an appeal to an imaginary concrete entity over and above the target. All we have are targets, imaginatively described (Levy 2015). Of course, different applications may call for different approaches; we may want to be instrumentalists here and fictionalists there. More on this in a moment. Let's try to bear the above analogies in mind as we turn from theories of applicability to models as entities in their own right.

5.3  Types of Model What is a model? If you think that this is not the most important question to be asking, you are probably right. But we need to say something about it, for the word is seriously ambiguous. Model citizens are paragons or exemplars of good citizenship. Role models are figures worthy of emulation. Fashion models are, well, you know. A certain Joseph Bell was reportedly the model for Sherlock Holmes. Car models are things like the Ford Cortina and Fiat Panda. So far, these are irrelevant to scientific modeling. Model cars are a bit more like it. These stand in for real cars, and serve as a guide—​in wind tunnel experiments, for instance—​to real cars’ properties. Likewise, the wind-​up models of the solar system encountered in science class stand in for, and are a guide to the properties of, the actual solar system. Model solar systems and the like are valued for the light they shed on whatever it is that they model. Models serving as a guide to the properties of real systems are called representational. They will be our main focus. Not all models are representational, as already noted. Some may be for playing around with, to get a feel for certain real systems. Some may be valued for the hypotheses they suggest. Some may play a proof-​of-​concept role. Morrison and Morgan note a number of further possibilities: Just as we use tools as instruments to build things, we use models as instruments to build theory. . . . Models are often used as instruments for exploring or experimenting on a theory that is already in place. . . . Models are

instruments that can both structure and display measuring practices. . . . The [class] of models as instruments includes those that are used for design and the production of various technologies. (Morrison and Morgan 1999, 18–23)

Given all these alternatives, why the focus on representational models? First, because they’re central to the scientific project. Science aims, so it is said, at the accurate representation of real systems. Second, because there is work in the philosophy of mathematics we’d like to draw on, which construes numbers (among other things) in representational terms. Representational items go hand in hand with things represented. These, oddly enough, are apt to be called models too. Think, for instance, of artist’s models, or the solar system as the model for that wind-​up gadget in science class.5 So although representational models are the focus, room will also have to be made for models of the thing-​represented sort; the key question about the former is, after all, how they relate to the latter. Models of the thing-​ represented sort are sometimes called targets or target systems since they are what representational models are aimed or directed at. Models will be representational unless otherwise indicated.

5.4  Types of Truth Representational models, like models in general, come in lots of varieties. There are scale models, like the wind-up solar system or the balsa-wood wing in the wind tunnel. These are actual concrete particulars. The Bohr model of the atom is a type of concrete particular, a type that is not actually instantiated. Ideal models, frictionless planes and the like, are would-be concrete particulars; that is what they would be if they existed (Godfrey-Smith 2009). The National Weather Service's climate models are computer simulations. Models of computing, like Turing machines or pushdown automata, might be seen as abstract particulars, or types of abstract particular. The model of electric current as water flow is an analogy. The Lotka-Volterra model of predator-prey relations is a set of equations.6 If there is anything tying
5 I do not include Joseph Bell here, because Holmes is not supposed to be true to Joseph Bell, nor does one function as a guide to the other's properties. 6 The models employed in philosophical logic, like Kripke's fixed point model of a semantically closed language or the Bayesian model of belief update, are constructions or construction

these various types of model together, it is not their ontological category. That being said, we can without too much violence force most of them into the hypothetical-concrete-particular mold. The role of the Lotka-Volterra model is played by concrete populations described by the equations. The models associated with a computer simulation of El Niño are the concrete meteorological processes that satisfy the simulation's assumptions. In many cases this puts models into the same metaphysical category as target systems, which doesn't matter now but will come in handy later. A better bet for the common element would be how they function—that is, what they do for us. Once again, models in general have lots of functions. They are used for testing and prediction, as aids to calculation and visualization, to manage complexity, and to facilitate understanding. One can say more about the function of representational models. These are meant to improve our access to the reality being modeled—the target system—by providing an epistemically accessible substitute, information about which translates into information about the target. Models aspire, if the characterization is right, to be somehow a reliable guide to—I will say "true to"—the facts. One of the fundamental issues about models is to see what "true to the facts" could possibly mean here. Balsa-wood wings and computer simulations are not even apt for truth, it would seem, for they don't say anything. Truth is a property of statements or claims, not pieces of wood or programs. But let us push a little further. The property reserved to statements is declarative truth, the kind Aristotle and Tarski talked about. You may say that declarative truth is not the only kind of truth out there. If we ask for a true copy of some document, or call a portrait true to its subject, we seem to be talking about accuracy or lifelikeness or fidelity. How far copies and likenesses are to be counted non-idiomatically true seems open to doubt. (Rooms with more and better portraits in them do not seem to contain more truths.) But we don't need to worry about that. It is enough for us that (i) declarative truth is a kind of truth, and (ii) declarative truths are at least part of what we hope to gain from our (representational) models. The question either way is, how will this be possible if models are not candidates for declarative truth? But you might equally wonder how truths can be learned from maps, or by inference, or by asking for directions. These things are not apt for declarative
techniques. This may apply to philosophical models more generally (Godfrey-Smith 2006; Paul 2012; Williamson 2016).

truth either. Of course the answer is obvious. There are declarative truths in the neighborhood, to which newspapers (inferences, requests) provide access. One has, for example, the mapmaker's claim that things are laid out as depicted, or your informant's claim on behalf of the directions she offers that they will get you downtown. With models too, there is a candidate for declarative truth in the neighborhood. We have the theorist's claim on behalf of the model that it is faithful in such-and-such ways to its target. All of this is roughly in line with R. I. G. Hughes's theory of how models function (Hughes 1997), characterized here by Frigg and Hartmann: According to [the] DDI account of modeling, learning takes place in three stages: denotation, demonstration, and interpretation. One begins by establishing a representation relation (denotation) between the model and the target. Then one investigates the features of the model in order to demonstrate certain theoretical claims about its internal constitution or mechanism; i.e., one learns about the model (demonstration). Finally, these findings have to be converted into claims about the target system; Hughes refers to this step as "interpretation." (Frigg and Hartmann 2005, 744–745)7
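The DDI schema just quoted describes a procedure, so a toy walk-through may help. The snippet below is my own illustration, not Hughes's formalism: the simple-pendulum example, the numbers, and every name in it are assumptions chosen only to make the three stages visible.

```python
import math

# Denotation: pair a representational model with its target.
model = {"kind": "ideal simple pendulum", "length_m": 3.0, "g": 9.81}
target = "a 3-metre playground swing"

# Demonstration: derive a result inside the model
# (small-angle period formula T = 2*pi*sqrt(L/g)).
period_in_model = 2 * math.pi * math.sqrt(model["length_m"] / model["g"])

# Interpretation: convert the model finding into a hedged claim about the target.
claim = (f"{target} should swing with a period of roughly {period_in_model:.1f} s, "
         "insofar as the ideal-pendulum idealizations hold of it")
print(claim)
```

The philosophically loaded work is all in the last step: saying what the interpreted claim is about, and why the model's being a certain way should license it.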

Our concern is mainly with the interpretation stage:  converting findings, or more generally claims, about the model into claims about the target system. First we must ask what it means for a claim to be about the target system. Is it enough that it mentions the target inter alia?

5.5  Targeting I: The Role of Math in Real Content "The model matches in such-and-such respects the target" is a declarative truth, all right. But I wonder if it is the kind of declarative truth we were after. To say that the target system resembles the model is to speak in part of the model. And we wanted a claim about the target, or more broadly the world. Why should it not be about both? I will give some reasons in a minute, but the problem in a nutshell is that although inquiry avails itself of models,
7 Hughes is not aiming here for an analysis: "I am not arguing that denotation, demonstration, and interpretation constitute a set of speech acts individually necessary and jointly sufficient for an act of theoretical representation to take place. I am making the more modest suggestion that, if we examine a theoretical model with these three activities in mind, we shall achieve some insight into the kind of representation that it provides" (Hughes 1997, 329).

it should not be about models. There should be the possibility, at least, of wringing truths entirely about the target out of properties of the model. This is nothing special about models. Self-directedness is unwelcome with all kinds of representational devices. Take graphs, or barometers. Inquiry employs them, but does it aim at truths about graphs and barometers? These would be truths like "The graphs in Feynman's Lectures are largely accurate" or "Air pressure as measured by barometers falls in a thunderstorm." Facts even partly about the graph are not what the inquirer is after. Representational devices are supposed to be a means of access to information wholly about the world. This is so far just an appeal to intuition. But we can do better, for a parallel issue comes up in the literature on mathematical applications. A common idea there is that math-infused science is hyperbolic. One quasi-asserts an S that is directed in part at numbers, in order to really assert a weaker claim t(S) that is not about numbers at all. I might quasi-assert that the number of cells in this petri dish is doubling every day, in order to really assert that there are always twice as many cells as the day before—which is, to anticipate a little, the part of S about the dish and the cells. What S says about concreta is S's "real content" in a setting where we are talking about concreta. Why should the real content have to be wholly about concreta? Representational devices, including numbers but not only them, are "out of place" in certain contexts. The real content has to be number-free to have, in those contexts, the right truth-conditional effects. Numbers are out of place because allowing them into the real content winds up falsifying a larger claim that ought to come out true.8 For a sense of how this might work, consider the context of causal explanation. Field's case for nominalism in Science Without Numbers relies on the idea that explanations ought to be "intrinsic": If we need to invoke some real numbers like 6.67 × 10⁻¹¹ (the gravitational constant in [SI units]) in our explanation of why the moon follows the path that it does, it isn't because we think that that real number plays a role as a cause of the moon's moving that way. . . . The role it plays is as an entity extrinsic to the process to be explained, an entity related to the process to be explained only by a function (a rather arbitrarily chosen function at that). (Field 1980, 43)
8 Arguments of this type are developed at greater length in Yablo 2001, 2002, 2017a.

The real reason the moon follows that path has to do with the strength of the forces acting on it, not their numerical representation. Suppose Field is right that explanations should confine themselves to the entities actually doing the work. Allusions to numbers could then make an explanation defective—not to split hairs, false—in roughly the way that the allusion to God casts doubt on "The patient recovered because God knows she was given antibiotics." Numbers are unwelcome in these contexts because they are extrinsic to the causal scene. The bag ripped because it had too many apples in it, not because a certain number (the number of apples in it) was too large. There is nothing wrong with numerals appearing in "P because Q" to fix the real contents of the flanking sentences.9 The claim is only that numbers should not participate in the real contents themselves, at least not when this would violate some version of the intrinsicness constraint. If numbers are objectionable in causal/explanatory contexts, perhaps the real content should treat them as existing only according to a certain story, the story of standard math. But the story of standard math is just as extrinsic to the causal scene as the numbers of which it treats. The bag didn't rip because a certain number (the number of apples) was too large according to standard math, any more than it ripped because a certain number was too large.10 Or consider nomological contexts. Galileo's law of falling bodies "says" that d(t)—the distance a dropped object falls in t seconds—is proportional to t². Suppose we are convinced for broadly Fieldian reasons that the real content of d(t) ∝ t² in this setting does not involve numbers or numerical operations (like squaring); the law treats of concrete objects, not mathematical ones. Matters are not improved by putting the numbers under a story prefix, for natural laws know nothing of stories either. If Galileo's law is really to be a law, the real content of d(t) ∝ t² should not involve the story of standard math. Now contexts of understanding. I may need to know some math to understand Galileo's law in its standard formulations. I do not, however, need to know what standard math is in order to understand the law.
9 For example, "Members of Congress cannot be paired off one-to-one because the number of them is odd." 10 I am using "the story of standard math" loosely to allow the importation of truths about non-mathematical objects. Incorporation of real or apparent truths into the content of a fiction is standard operating procedure. See Walton 1990, 144ff., on the reality principle and the mutual belief principle.

Models and Reality  137 Since I apparently do need to know which math is standard to understand “Assuming standard math, distance fallen is proportional to the square of the time elapsed,” the latter is not a very good candidate for the real content of Galileo’s law. Agreement contexts are similar. People agree, I take it, on Galileo’s law; those who believe it believe the same thing, or at least there is no obvious obstacle to their believing the same thing. A potential obstacle emerges, though, if the law has the story of standard math in its real content. For who is to say that you and I agree on the content of that story? Modal contexts put a different kind of pressure on real content. The number of (F∨G)s is bound to be even if the number of Fs = the number of Gs and the number of (F∧G)s = 0; it could not be otherwise. But it could (perhaps) have been otherwise according to standard math, for standard math could have been different. Consider finally epistemic contexts. It is a priori that if the number of Fs = the number of Gs, and the number of (F∧G)s = 0, then the number of (F∨G)s is even. But do we know a priori that this is so according to standard math? No, because we do not know the content of standard math a priori.11 Again, we seem to know a priori that a set’s subsets outnumber its members. But this holds only on a combinatorial conception of set. And it is somewhat of a historical accident—​it was not in any way inevitable—​that set theory developed in that particular direction.

5.6  Targeting II: The Role of Models in Real Content These are some of the problems that arise if representational devices are written into the real content of mathematically formulated statements. It would be surprising if similar problems did not sometimes arise when representational models are written into the real content of statements of model-​ based science. Suppose we are working with a purely gravitational model of the solar system in which planets interact exclusively with the sun. (I will use α for actual systems and ω for models.) And suppose that the hypothesis about α that we access by quasi-​asserting S—​asserting it in reference to ω—​is not entirely

11 Just as to know what is true in the Holmes stories, one has to look at the stories.

138  The Scientific Imagination about α but involves also ω. It is, let’s say, the hypothesis that ω is similar in respect R to α. To use our earlier terminology, ωRα is the real content t(S) of our quasi assertion that S. What kind of trouble is caused, in what contexts, by the real content’s alluding not only to the target system α but also to the model ω? Start again with causal/​explanatory contexts. The “effect” is that planets speed up on approaching the sun. We’d like to explain it with Kepler’s second law: a planet always sweeps out the same area in the same amount of time. This is not strictly true of α. The real content t(S) of Kepler’s law is true, however, and we look to it for the explanation. We look in vain if the real content is of the form ωRα, because the model does not participate in the Earth’s reasons for speeding up when it approaches the sun; it figures only in the representation of those reasons.12 A law’s holding without exception in ω is meant to tell us something lawful about actuality. So it reflects a robust fact about the solar system α that planetary orbits are elliptical in ω. But it reflects no deep fact about the solar system that ω bears R to it. This is for two reasons, one pertaining to R and one to ω. Why would an astronomical law of this world bring in a system ω that exists only in other worlds? And why would it bring in α’s relations to this merely possible structure? Again, if the real content is ωRα, then it is accessible only to those acquainted with ω. To agree on the real content of S, we have to be working with the same ω. If the real content is ωRα, information gleaned from multiple models does not paint a unified picture; the most we can say is that α resembles this model in one respect, that one in another, a third in a third respect, and so on.13

12 Bokulich 2011. 13 See Weisberg 2007, 2012 for multiple models of the same target, and multiple targets for the same model (e.g., a compressed spring governed by Hooke’s law stands in for harmonic oscillators generally). Admittedly, a unified picture is not always desirable, or possible; one of the glories of model-​based theorizing is supposed to be that it takes this in stride. Weisberg quotes Levins: “The multiplicity of models is imposed by the contradictory demands of a complex, heterogeneous nature and a mind that can only cope with few variables at a time; by the contradictory desiderata of generality, realism, and precision; by the need to understand and also to control; even by the opposing aesthetic standards which emphasize the stark simplicity and power of a general theorem as against the richness and the diversity of living nature. These conflicts are irreconcilable. Therefore, the alternative approaches even of contending schools are part of a larger mixed strategy. But the conflict is about method, not nature, for the individual models, while they are essential for understanding reality, should not be confused with that reality itself ” (Levins 1966, 431).


5.7  Actuality The target system α is supposed in most cases to be real; it is part of the actual world. It simplifies matters to treat it as identical to the actual world, on the understanding that truths S about the model translate into truths (true real contents) t(S) that pertain only to the bits of actuality that are being modeled. The model itself is presumed not to be real; it is part of a counterfactual world w. I propose again to treat it as identical to that world, on the understanding that S speaks only to the bits that do the modeling. (There are other options here. The target system could be a mini-​world, or a situation, or a set of worlds with only the mini-​world in common; and similarly for the model.) I am not going to fret too much about the ontology of models and target systems, because the action is really elsewhere. Truths about ω are supposed to translate into truths t(S) about α. As Frigg puts it, Models not only represent their target; they do so in a clearly specifiable and unambiguous way, and one that allows scientists to “read off ” features of the target from the model. . . . [W]‌e study a model and thereby discover features of the thing it stands for. We do this by first finding out what is true in the model-​system itself, and then translating the findings into claims about the target itself. (Frigg 2010a)

But where are we to look for the translation manual? It should be possible, on Frigg’s picture, to identify t(S) on the basis of S, α, ω, and ω’s relation to α (namely, R). His schematic solution is that ω comes with a key K specifying how facts [S] about ω are to be translated into claims [t(S)] about α. (Frigg 2010a)

This is not very informative, as Frigg acknowledges: There is much more to be said  .  .  .  than is contained in [the conditions given]—​they are merely blanks to be filled in every particular instance. Thus, the claim that something is a representation amounts to an invitation to spell out how exactly ω comes to denote the target system α and what K is. (Frigg 2010a)

The key, of course, is none other than the sought-after translation manual. How it might be found is to be judged on a case-by-case basis. Contessa tries to say something more general. Someone using a model adopts an interpretation of the [model] in terms of the target . . . and this interpretation provides the user with a set of systematic rules to "translate" facts about the vehicle into (putative) facts about the target. (Contessa 2011, 126)

The interpretation sounds at first like a way of seeing the model that treats certain aspects as representational and others as adventitious. This is tantamount in Contessa’s view to adopting a set of systematic rules that translate facts about the vehicle into facts about the target. But seeing the model a certain way does not itself provide systematic translation rules. If the interpretation involves more than a way of seeing—​if it is defined so as to provide rules—​then it is not clear that actual users of models ever adopt interpretations, or, if they do, how they find the ones that are worth adopting. Adam Morton pushes back further, to the reasons for our initial choice of model. A candidate model, to attract our attention, must (among other things) give predictions that are reliable in specific but rarely explicitly specified respects (Morton 1993, 663). And now one might argue as follows.14 The person choosing the model must have some idea of the type of reliability that motivated her choice. Isn’t knowing the relevant type of reliability tantamount to knowing a translation manual mapping truths S about ω to truths t(S) about α? Again we face a dilemma. Either a “type of reliability” is tantamount to a translation manual or it falls short, providing, as it might be, an indication of the range over which the model is trustworthy without a specification of the message worthy of trust. If it falls short, then the argument never gets off the ground. If it suffices for a translation manual, then while someone choosing a model may have “some idea” of how the model is reliable, and “some idea” of how to wring predictions out of it, the chooser does not normally know details; the predictions licensed by a property of the model are, as Morton says, not explicitly specified. To narrow the search space, it might help to consider the form that reliable predictions or claims must take. A number of proposals have been made, or



14 Morton does not argue this way. We are mining his work for constraints on a translation manual.

hinted at, about the implication for α of the fact that ω satisfies S. The implication might be that
(1) α has features analogous to the features that ω needs, to satisfy S
(2) models ω of S have a part ("appearances," e.g.) isomorphic to part of α15
(3) α is such as to make S true in a certain story, the "story of ω"16
(4) an analogue for α of S answers an analogue for α of the question S addresses
These, however, all allude in one way or another to ω, which we saw earlier to cause trouble. A t(S) partly about ω is vulnerable to the objections raised previously to a real content that portrays α as resembling ω in a certain respect. Look, for instance, at (1). No one can understand the features that (1) attributes to α if they are not acquainted with ω. (1)-type information is shareable only between theorists working with the same ω. Possession of features analogous to those by which ω satisfies S does not cause possession of features analogous to those by which ω satisfies S′. The model's role should be to induce a content in which it does not itself figure.

5.8  Translation The problem as we’ve been conceiving it so far (but not much longer) is how to translate a truth S about the model into a truth t(S) about the world—​that is, a truth full stop. In schematic form: 15 Van Fraassen 1980, 64. 16 Frigg (2010b), Godfrey-​Smith (2009), Levy (2012, 2015), and Toon (2010) all explore the idea of treating truth in the model as a kind of fictional truth, or pretense-​worthiness in a Walton-​style prop-​oriented make-​believe game (Walton 1993). But although these authors cite Walton, they seem mostly—​Levy (2015) is an exception—​to ignore his picture of how make-​believe can be used in the cause of real-​world representation. The point of prop-​oriented make-​believe for Walton is to give information about the props—​the real-​world items determining what is to be pretended. To utter S in the context of a game is supposed to be a way of representing the props as in a condition to make S pretense-​worthy in that game. If we’re trying to represent the solar system, the prop should be the solar system. Suppose we were to identify props with model descriptions, as Frigg appears to; this in Walton’s scheme means that the point of uttering S is to give descriptions of model descriptions. Likewise if the props are the models themselves. The point of uttering S in connection with a representational device is not to give information about the device; that would be like treating “Crotone is in the arch of the Italian boot” as a guide to the model of Italy whereby Italy is a boot. It’s to give information about the world. (To be sure, one can follow Walton in his theory of make-​believe entities without following him on the representational point of such entities. As far as I know, only Levy goes all the way.)

N: S holding in a model suitably related to α testifies not to the truth of S itself in α but to the truth in α of a hypothesis t(S) suitably related to S.

This way of setting the problem up requires us to identify t(S), however—which has been proving difficult. We should ask ourselves whether a translation of S into α-ese is really necessary. You might think it clearly is necessary, since S untranslated is (normally) false of α. But there are other alethic commendations we can give to a sentence besides truth. Perhaps all that S aspires to is to be partly true in α—true apart from an issue we're now ignoring. Or perhaps it aspires only to be true about a certain aspect of α. Or it might aim to be true about α where a subject matter of particular interest is concerned. The idea more generally is that, rather than attempting to translate S into a claim that is wholly true, we might try to scale "wholly true" back to a compliment that S is worthy as it is.17 How would this work in practice? Let S be Kepler's first law: the planets trace elliptical orbits. This law, although false in α, is true in a model with gravitational forces only, and a single planet revolving around a massive central body like the Sun.18 Now, what is the fact about α that is indicated by Kepler's law holding in the model? That Kepler's law is true in α about the matter of planetary motion due to centripetal gravitational forces. Another fact indicated is that Kepler's law is roughly true in α, or true about the matter of planetary motion give or take a certain fudge factor. Schema N thus gives way to:
M: S holding in a model suitably related to α testifies not to S's truth simpliciter in α but to its truth in α about a certain subject matter m, or where m is concerned.
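For a concrete sense of what "true in ω" comes to here, the single-planet gravitational model can be put on a computer. The sketch below is my own minimal illustration, not anything in the chapter; the units, initial conditions, and integration scheme are arbitrary assumptions of the sketch. It integrates one body moving under an inverse-square central force and checks that the areal velocity stays (numerically) constant—Kepler's second law holding, up to numerical error, in ω.

```python
# A minimal one-planet model ω: a single body under an inverse-square central
# force, integrated with semi-implicit Euler. Units and initial conditions are
# arbitrary illustrative choices.

GM = 1.0                       # gravitational parameter of the central mass
x, y = 1.0, 0.0                # initial position
vx, vy = 0.0, 0.8              # initial velocity (bound, non-circular orbit)
dt, steps = 0.001, 20000

areal = []
for _ in range(steps):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3     # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt     # update velocity first ...
    x, y = x + vx * dt, y + vy * dt         # ... then position
    areal.append(abs(x * vy - y * vx) / 2)  # area swept per unit time

print(min(areal), max(areal))  # near-identical: equal areas in equal times
```

Nothing about the actual solar system is asserted by running this; what schema M licenses is only that, where planetary motion due to the sun's gravity is concerned, things in α are as the model has them.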

Continuing along the same lines, gases are modeled as collections of randomly moving, perfectly elastic, point-sized, non-interacting particles trapped in a perfectly smooth and rigid container. Pressure rises in such
17 Much as van Fraassen's constructive empiricist, rather than trying to translate the realist's theories into some imagined language of observation, leaves the theory as it is while presenting it only as true about observables: "According to the anti-realist, the proposer does not assert the theory to be true; he displays it, and claims certain virtues for it. These virtues may fall short of truth" (van Fraassen 1980, 10). Van Fraassen in our terms uses T-worlds w to model observational aspects of our world @. T is true about observation in @ by virtue of being fully true in an observationally equivalent T-world. (The T-world is probably simpler than @; we're in no position to tell.) 18 See Cartwright 1983 for the idea that laws hold only in simplified models, and Yablo 2014, 84–85, for discussion.

Models and Reality  143 a model with the number and speed of the particles colliding with the container’s wall. This cannot be said of real gases, since gas particles are not literally point-​sized, and they interact; the wall is not really smooth or rigid, it is made of atoms; the particles do not really collide with these atoms but are repelled by them electromagnetically. So what is going on here? I am told that the number and speed of collisions stand in for the kinetic energy of the impinging particles; the particles are moving randomly, so the mean kinetic energy of impinging particles is constant throughout; they are imagined as point-​sized lest some of this energy be lost to rotation. If anything like that is right, then although the original statement is not true overall in α, it is true of an aspect of α, namely, pressure as a function of mean translational kinetic energy. Consider next Schelling’s “grid” model of housing preferences. Families relocate, on this model, if and only if six or more of their eight nearest neighbors are of a different race. They are content, in other words, to have three same-​race neighbors along with five of a different race. What is interesting is that racial segregation results after a few iterations of this tolerant-​seeming process. The ease with which grids become racially divided shows, it is said, how segregation can arise “innocently,” out of a desire not to be outnumbered three to one in the immediate neighborhood (Schelling 2006). What is the lesson of Schelling’s model? Not that racial segregation does result from dislike of extreme racial isolation. This is true in the model, but not necessarily in the world. The lesson is that racial segregation could result just from dislike of isolation, as far as the statistical evidence goes. Or, to say it a bit differently, the original statement gets a certain aspect of our world right—​namely, how much racial animus is required for segregation. It tells us how segregation can arise through the operation of relatively accepting attitudes. A final example is Fisher’s equilibrium model of one-​to-​one sex ratios. There are forces at work, he suggests, that lessen disparities if the numbers get out of whack. These forces operate, in his model, by exerting selective pressure on a hypothetical gene that biases the sex of offspring toward males. Suppose that females are in the majority at some point in evolutionary history. Newborn boys will then have better mating prospects than newborn girls. So their parents, with the male-​favoring gene, will have on average more grandchildren. To be sure, the male-​biased grandparent will not have more children (only grandchildren). But there will be more boys among

those children. These, on account of their rarity in the population, will mate more frequently, putting copies of the male-tending gene into more grandchildren. And so on and so on, for as long as males have the mating advantages conferred by being in the minority. (Mutatis mutandis, if males were more common.) Fisher's story does not have to be true in all details (or at all!) to shed light on actual sex ratios. It serves at the very least as a proof of concept for the idea of selectional pressures favoring genes that de-skew unequal populations. But it could conceivably be more than that, for we might get similar dynamics if sex ratios got out of whack for other reasons. Suppose that some resource (food, say) is divided equally between the sexes. Then as the male population dwindles, each man becomes better fed, which increases longevity and thus the number of males. Or perhaps men protect women from predators, while women protect men from poisonous fruit. As males become uncommon, women are increasingly preyed upon, bringing their portion of the population down to male levels. As females become uncommon, men increasingly die of poisoning, bringing their representation down to female levels. These are just-so stories, of course. But there would, if any such story worked, be a compliment we could pay to Fisher's theory even if it was false: the theory is true about the tendency of unequal sex ratios to correct themselves. The structure of all these cases seems broadly similar. We have a statement S that is true in ω but false in α. S's truth in ω signals its truth in α about a subject matter m that ω and α agree on. Altogether, then: for S to be true in ω indicates its truth in α where m is concerned.
(1) "The Earth traces an elliptical orbit" is true in ω and false in α. But it is true in α about planetary motion due to central gravitational forces.
(2) "Pressure rises as point particles collide harder with the wall" is true in ω and false in α. But it is true in α about how mean translational kinetic energy relates to pressure.
(3) "Desire for more than two same-race neighbors out of eight neighbors leads to seven such neighbors out of eight neighbors" is true in ω and false in α. But it is true in α about how fear of racial isolation can lead to segregation.
(4) "Selective pressures on sex-bias genes make uneven ratios unstable" is true in ω and false in α. But it may be true in α about the instability and self-correctingness of uneven sex ratios.
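Because Schelling's grid model is itself an algorithm, the dynamics behind case (3) are easy to exhibit directly. The following is a minimal simulation sketch of my own; the grid size, vacancy rate, random seed, and number of rounds are arbitrary assumptions, and only the relocation rule comes from the text: a family seeks to move when six or more of its eight nearest neighbors belong to the other group.

```python
import random

# A minimal sketch of the grid dynamics described above. Grid size, vacancy
# rate, seed, and number of rounds are arbitrary choices of this sketch; the
# relocation rule is the one in the text (move if >= 6 of 8 neighbors differ).

random.seed(1)
N = 25
n_families = int(N * N * 0.45)                     # per group; the rest are vacancies
cells = ["A"] * n_families + ["B"] * n_families + [None] * (N * N - 2 * n_families)
random.shuffle(cells)
grid = [cells[i * N:(i + 1) * N] for i in range(N)]

def neighbors(i, j):
    return [grid[(i + di) % N][(j + dj) % N]
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def wants_to_move(group, i, j):
    return sum(1 for n in neighbors(i, j) if n is not None and n != group) >= 6

def same_group_share():
    """Average share of same-group families among occupied neighboring cells."""
    shares = []
    for i in range(N):
        for j in range(N):
            occupied = [n for n in neighbors(i, j) if n is not None]
            if grid[i][j] is not None and occupied:
                shares.append(sum(n == grid[i][j] for n in occupied) / len(occupied))
    return sum(shares) / len(shares)

print("same-group share before:", round(same_group_share(), 2))
for _ in range(50):                                # rounds of relocation
    movers = [(i, j) for i in range(N) for j in range(N)
              if grid[i][j] is not None and wants_to_move(grid[i][j], i, j)]
    vacancies = [(i, j) for i in range(N) for j in range(N) if grid[i][j] is None]
    for i, j in movers:
        random.shuffle(vacancies)
        for k, l in vacancies:
            if not wants_to_move(grid[i][j], k, l):  # content at the new spot
                grid[k][l], grid[i][j] = grid[i][j], None
                vacancies.remove((k, l))
                vacancies.append((i, j))
                break
print("same-group share after:", round(same_group_share(), 2))
```

When run, the average same-group share among neighbors tends to drift upward from its initial near-even level, which is the point made above: noticeably clustered grids can emerge from families who would happily stay put with five of eight neighbors unlike themselves.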


5.9 Subject Matter The role of subject matter for us is to be the kind of thing sentences can be true about in a world, even if they’re not true outright in that world. It doesn’t matter for this purpose whether subject matters are “of sentences,” or whether there is such a thing as “the subject matter of A” for particular sentences A.19 Subject matters, for our purposes, can be entities in their own right. David Lewis initiated this approach in “Statements Partly About Observation” (Lewis 1988). The nineteenth century, for instance, is a kind of subject matter for Lewis. It’s the kind he calls parts-​based. A subject matter m is parts-​based if for worlds to be alike with respect to m is for corresponding parts of those worlds to be intrinsically indiscernible. The nineteenth century is parts-​based because worlds are alike with respect to it if and only if the one’s nineteenth century is an intrinsic duplicate of the other’s nineteenth century. Note that the historical period known as “the nineteenth century” is not to be confused with the nineteenth century. The first is a part of one particular world (ours), or of its history. The second is a way of grouping worlds according to what goes on in their respective nineteenth centuries. This approach is not sufficiently general, Lewis observes. Take the matter of how many stars there are. The universe has no “star-​counter” in it, such that worlds agree on how many stars they contain if and only if one world’s counter is an intrinsic duplicate of the other world’s counter. Facts about how many stars there are are not stored up in particular spatiotemporal regions. Or consider the subject matter of observables, which van Fraassen uses to define an empirically adequate theory.20 This may seem at first a parts-​ based subject matter, like the Sun. Worlds are observationally equivalent just if their observables—​whatever in them can be seen, felt, heard, et cetera—​are intrinsically alike. But that cannot on reflection be right. Dirt is observable, yet among dirt’s intrinsic properties are some that are highly theoretical—​for instance, the property of being full of quarks. It is not supposed to count against a theory’s empirical adequacy that it gets subatomic structure wrong. Observables—​what an empirically adequate theory should get right—​ is best regarded as a non-​parts-​based subject matter, like the number of

19 I think there is such a thing, but that’s another story (Yablo 2014). 20 Lewis calls it “observation.”

stars. Worlds are alike with respect to observables if they're observationally indistinguishable—that is, if they look and feel and sound (etc.) the same.21

5.10  Partitions A parts-​based subject matter, whatever else it does, induces an equivalence relation on, or partition of, “logical space.”22 Worlds are equivalent, or cellmates, if corresponding parts are intrinsically alike. A non-​parts-​based subject matter, however, also induces an equivalence relation on logical space: worlds are equivalent, or cellmates, just in case they are indiscernible where that subject matter is concerned. The number of stars is the relation one world bears to another just if they have equally many stars. But then, if one wants a notion of subject matter that works for both cases, let them be not parts but partitions. The second notion subsumes the first while exceeding it in generality. So, to review: One starts out thinking of subject matters as parts of the world, like the Western Hemisphere or Queen Victoria or the nineteenth century. These then give way to world-​partitions, which are ways of grouping worlds. Should the grouping be on the basis of goings-​on in corresponding world-​parts, we get the kind of subject matter that, although still thoroughly partitional, looks back to world-​parts for its identity conditions. A subject matter (or topic, or matter, or issue) on this view is a system of differences, a pattern of cross-​world variation.23 Where the identity of a set is given by its members, the identity of a subject matter is given by how things are liable to change where it is concerned: SM:  m1 = m2 iff worlds differing on the one differ also with respect to the other. 21 What becomes of the idea, seemingly essential to constructive empiricism, that T need only be true to the observable part of reality, if “observables” does not correspond to a part of reality (Yablo 2014, ch. 1)? 22 Lewis 1988. An equivalence relation is a binary R that’s reflexive (everything bears R to itself), symmetric (if x bears R to y, then y bears R to x), and transitive (if x bears R to y and y bears R to z, then x bears R to z). A partition is a decomposition of some set into mutually disjoint subsets, called cells. Equivalence relations are interdefinable with partitions as follows: x’s cell [x] is the set of all y such that y is equivalent to x; x is equivalent to y if they lie in the same cell. 23 Linguists have their own notion of topic; a sentence’s topic/​focus structure is something like its subject/​predicate structure. Topics in the linguist’s sense may or may not be reflected in a sentence’s subject matter. (See Fine 2018 for a different and, he argues, better way of conceiving subject matter.)

Models and Reality  147 This might seem too abstract and structural. To know what m1 is as opposed to m3 doesn’t seem to tell us what goes into a world’s m1-​condition, as opposed to its m3-​condition. Surely, though, I do not grasp a subject matter m if I can never tell you what the proposition m(w) is that specifies how matters stand in w where m is concerned. But subject matters, as just explained, do tell us what w is like where m is concerned. The proposition we’re looking for should be true in all and only worlds in the same m-​condition as w; on an intensional view of propositions, it is the set of worlds in the same m-​condition as w. That proposition is already in our possession. To be in the same m-​condition as w is to be m-​ equivalent to w, and the set of worlds m-​equivalent to w is just w’s cell in the partition. A world’s m-​cell is thus the proposition saying how matters stand in it m-​wise. Suppose, following Lewis, we write nos for the number of stars. How does one find the proposition specifying how matters stand in a world where nos is concerned? Well, w has a certain number of stars, let’s say a billion. Its nos-​ cell is the set of worlds with exactly as many stars as w. The worlds with exactly as many stars as w are thus the ones with a billion stars. The worlds with a billion stars comprise the proposition that there are a billion stars. That it contains a billion stars sums up w’s nos-​condition quite nicely. And so its nos-​ cell sums up w’s nos-​condition quite nicely.

5.11  Ways to Be Where does this leave us? The subject matters discussed in the last section seemed at first too abstract and structural to tell us what is going on m-​wise in a given world. But each m determines a function m(...) that encodes precisely that information. It works in the other direction, too; one can recover the equivalence relation from the function by counting worlds as m-​equivalent if they are mapped to the same proposition.24 Thus m can be conceived either as (i) an equivalence relation (that’s what it is “officially”), or (ii) a partition, or (iii) a specification for each world of what is going on there m-​wise. (The number of stars, for instance, can be construed as a function taking each k-​ star world w to the proposition that there are exactly k stars.) 24 This won’t work with just any old function from worlds to sets of them. The proposition associated with w must be true in it; the propositions associated with distinct worlds should be identical or incompatible.
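Continuing the same toy example (the worlds and numbers are again mine), the sketch below shows conception (iii) at work and checks the two constraints that note 24 places on a specification function: the proposition assigned to a world must be true in it, and the propositions assigned to distinct worlds must be identical or incompatible.

```python
# Illustrative sketch, continuing the toy example: the subject matter as a
# specification function m(w), taking each world to the proposition (here, a
# set of worlds) saying how matters stand in w where m is concerned.

worlds = ["w1", "w2", "w3"]
stars = {"w1": 3, "w2": 3, "w3": 5}

def m_of(w):
    """m(w): the set of worlds in the same m-condition as w."""
    return frozenset(v for v in worlds if stars[v] == stars[w])

def m_equivalent(w, v):
    """Recover the equivalence relation: worlds mapped to the same proposition."""
    return m_of(w) == m_of(v)

# The two constraints from note 24:
# (a) the proposition associated with w is true in it (w belongs to it);
# (b) propositions associated with distinct worlds are identical or incompatible.
assert all(w in m_of(w) for w in worlds)
assert all(m_of(w) == m_of(v) or m_of(w).isdisjoint(m_of(v))
           for w in worlds for v in worlds)

print(sorted(m_of("w1")))        # ['w1', 'w2']: the proposition that there are 3 stars
print(m_equivalent("w1", "w3"))  # False: w1 and w3 differ where m is concerned
```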

148  The Scientific Imagination The problem may seem to recur at a deeper level. How are we to get an intuitive handle on the function m(...) taking worlds to their m-​conditions? It’s one thing if m(...) is introduced in the first place as specifying how many stars a world contains. But all we know of specification functions considered in themselves is that they are mathematical objects (sets, or partial sets, presumably) built in such-​and-​such ways out of worlds. It is not clear how we are to think about sets like this, other than by laying out the membership tree and describing the worlds at terminal nodes as best we can. But remember, each specification function m(...) has associated with it a set of propositions, expressing between them the various ways matters can stand where m is concerned. (A proposition goes into the set if it is m(w) for some world w.) The operation is again reversible: to find m(w), look for the set-​of-​worlds proposition to which w belongs. A subject matter can also be conceived, then, as (iv) a set of propositions. Sets of this type function in semantics as what is expressed by sentences in the interrogative mood. Questions, as they are called, stand to interrogative mood sentences Q as propositions stand to sentences S in the indicative mood.25 To find a Q expressing a particular set of propositions, look for one to which those propositions are the possible answers. This Q provides an immediately comprehensible designator for the set of propositions at issue.26 What, for instance, is the Q to which “There are exactly k stars,” for specific values of k, is the possible answer?27 It is “How many stars are there?” We are dealing, then, with the issue or matter of how many stars there are. What is the question addressed by “She said X to Francine” for specific values of X? It is “She said what to Francine?” Thinking how to answer that is reviewing the matter of what she said to Francine. The question addressed by “Cats paint” is “Do cats paint?” The corresponding subject matter is whether cats paint. The question addressed by “Cats paint to relieve tension” is “Why do cats paint?” Pondering that question is considering the matter of why cats paint.28

25 I will sometimes use "question" sloppily as standing also for the sentences. 26 By pointing us to the corresponding indirect question. The indirect question corresponding to "Do cats paint?" is "Whether cats paint." The indirect question corresponding to "Why do they paint?" is "Why cats paint." 27 Sans-serif A stands for the set of A-worlds. 28 See Silver and Busch 2006.
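Conception (iv) admits the same toy treatment. In the sketch below, which is mine and only illustrative, the question "How many stars are there?" is encoded as the set of its possible answers over the three worlds used earlier, and m(w) is recovered by locating the answer to which w belongs.

```python
# Illustrative sketch of conception (iv): the subject matter as a set of
# propositions, the possible answers to "How many stars are there?"

worlds = ["w1", "w2", "w3"]
stars = {"w1": 3, "w2": 3, "w3": 5}

# The question: for each candidate answer k, the proposition that there are exactly k stars.
how_many_stars = {frozenset(w for w in worlds if stars[w] == k)
                  for k in set(stars.values())}

print([sorted(p) for p in how_many_stars])
# [['w1', 'w2'], ['w3']] in some order: "exactly 3 stars" and "exactly 5 stars"

def m_of(w):
    """To find m(w), look for the answer to which w belongs."""
    return next(p for p in how_many_stars if w in p)

print(sorted(m_of("w3")))   # ['w3']: the answer "there are exactly 5 stars"
```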


5.12  Targeting III: Directed Truth That S holds in a certain model ω tells us, or can tell us, that S is true, period, where a certain subject matter is concerned. How does it do this? Recall that we earlier reconceived the target system α as a world or mini-​ world, and decided to think of the model as a world or mini-​world ω. This makes both into the kinds of thing that can be alike, or different, where a subject matter is concerned. And now we can elaborate schema M in two different ways: M∃ :  S is true in α about m iff it is true simpliciter in a world ω that is m-​wise indiscernible from α.

For S to be true about m in α means (according to M∃) that S, should it be false in α, is at any rate not false because of how matters stand there with respect to m. This admits of a simple test: S is not false about how matters stand m-​wise iff one can make it true, without changing how matters stand m-​wise. One role a model ω can play is to witness this possibility: the possibility of morphing the actual world into an S-​world without disrupting the state of things m-​wise. Now, what kind of compliment are we paying S when we call it true about m? Does truth about the subject matter under discussion make S “as good as true” for discussion purposes? Does “true about m” function in descriptions of α the way truth simpliciter does? One has to be careful here. Truth about m, considered as a modality, is possibility-​like: A is true about m in a world just if it could be true, for all that that world’s m-​condition has to say about it. The logic of directed truth can, on the present view, be read off the logic of possibility. This is fine for some purposes. Sometimes all we want from truth about m in α is that S could be true in the same m-​conditions as obtain in α. This is how it works, for instance, with models of the solar system that replace planets with point masses stationed at each planet’s center of gravity, and (as far as I can understand) the models in solid-​state physics that have “quasi particles,” such as phonons, standing in for diffuse, large-​scale vibrations.29 The possibility-​like notion of truth about m seems to suffice when, roughly speaking, there is only one ω for a given α, or S is satisfied by all the models of interest if any.

29 See Falkenburg 2015 and Gelfert 2003. Thanks to Jay Hodges for the example.
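The schema M∃ and the simple test above lend themselves to a small computational illustration. Everything specific in the sketch below is an invented stand-in of mine: a fish count plays the role of the physical subject matter, and a flag records whether a world contains the extra furnishings (numbers, say) that S needs in order to be true outright.

```python
# Illustrative sketch of M-exists (M∃): S is true in alpha about m iff S is true
# simpliciter in some world omega that is m-wise indiscernible from alpha. Every
# particular below (the fish counts, the "has_numbers" flag) is an invented stand-in.

worlds = [
    {"name": "alpha", "fish": 1000, "has_numbers": False},  # stand-in for the target
    {"name": "omega", "fish": 1000, "has_numbers": True},   # stand-in for the model
    {"name": "w3",    "fish": 50,   "has_numbers": True},
]

def m(world):
    """How matters stand physically (here: fish-wise) in a world."""
    return world["fish"]

def S(world):
    """A statement that needs the model's extra furnishings to come out true outright."""
    return world["has_numbers"] and world["fish"] == 1000

def true_about(S, alpha, m, worlds):
    """M-exists: some m-wise indiscernible world makes S true outright."""
    return any(m(w) == m(alpha) and S(w) for w in worlds)

alpha = worlds[0]
print(S(alpha))                         # False: S is not true outright in the target
print(true_about(S, alpha, m, worlds))  # True: omega witnesses the possibility
```

The witnessing world plays just the role described above: it shows that S could be made true without disturbing how matters stand m-wise.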

150  The Scientific Imagination A particularly vivid example is invoking mathematical objects to increase expressive power. Take “The rabbit population in Australia was 27 × 2n on the nth day of 1866.” This is not true outright in α if α lacks numbers, as maintained by most Australians. But it would still be true about how matters stood physically if the rabbit population allowed the statement to be true outright. That it could also be false under such conditions is not a problem. There is essentially only one way of fitting a physical world out with numbers. Sometimes, though, a diamond-​like logic seems just wrong for truth-​ about-​m. A hypothesis and its negation can be possible at the same time. Similarly, there is nothing to stop them from both being true about m as that notion was defined. That a world’s m-​condition permits each of A and ~A to be true doesn’t mean it permits them both to be true together. Call this the phenomenon of quasi contradiction. How much of a problem is it if truths where m is concerned contradict each other? That depends on whether the contradiction is on-​topic. (There is a problem only if A and ~A say contradictory things about m.) Take the statement “The author of Principia Mathematica taught at Harvard.” This gets something right, in that Whitehead taught there. Its negation, however, also gets something right, since Russell did not teach at Harvard. There is nothing contradictory about only Whitehead teaching at Harvard. That A and ~A can both be true about m, as long as they are consistent in what they say about m, is a nice outcome. But we pay a heavy price for it: truth about m is not closed under conjunction.30 To obtain a notion of truth-​about that is closed under conjunction, we need to put a universal quantifier in for the existential in M∃. This can be done along Kratzerian lines (Kratzer 1977) as follows: M∀:  S is true in α about m iff it is true simpliciter in all the “best” ωs that are m-​wise indiscernible from α.

Here we are imagining subject matters fitted out with a better-​than relation; one of two m-​equivalent worlds is better than another just if it better illuminates what is going on m-​wise—​for example, by containing fewer distorting influences or irrelevant complications. The best motion due to gravity-​equivalents of α, for instance, will have gravity as their only force and

30 Dorr 2010; Fine 2018.

gravitationally induced motion as their only motion. This prevents "Objects never move" coming out true in our world about motion due to gravity, thanks to a world where gravitational forces are exactly canceled out by other forces.
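The contrast between the existential and the universal schema can be exhibited directly. In the sketch below, the worlds, the star-counting subject matter, and the "fewest off-topic complications" ordering are all my own choices, made only to display the logical point: under M∃ a statement and its negation can each be true about m, so truth about m is not closed under conjunction, while restricting attention to the best m-equivalent worlds, as M∀ does, removes the quasi contradiction.

```python
# Illustrative sketch: quasi contradiction under the existential schema, and the
# Kratzer-style repair. All particulars (worlds, subject matter, "best" ordering)
# are invented for the illustration.

worlds = [
    {"stars": 3, "comets": 2},   # treat this world as alpha
    {"stars": 3, "comets": 7},
    {"stars": 5, "comets": 2},
]
alpha = worlds[0]
m = lambda w: w["stars"]                               # the subject matter
A = lambda w: w["stars"] == 3 and w["comets"] == 7     # a partly off-topic statement
not_A = lambda w: not A(w)
A_and_not_A = lambda w: A(w) and not_A(w)

def true_about_exists(S):
    """M-exists: S holds in some world m-wise indiscernible from alpha."""
    return any(m(w) == m(alpha) and S(w) for w in worlds)

def true_about_all(S):
    """M-for-all: S holds in all the 'best' worlds m-wise indiscernible from alpha."""
    mates = [w for w in worlds if m(w) == m(alpha)]
    least_clutter = min(w["comets"] for w in mates)    # "best" = fewest irrelevant complications
    return all(S(w) for w in mates if w["comets"] == least_clutter)

print(true_about_exists(A), true_about_exists(not_A), true_about_exists(A_and_not_A))
# True True False: A and its negation are each true about m; their conjunction is not
print(true_about_all(A), true_about_all(not_A))
# False True: the universal version no longer certifies both
```

Closure under conjunction returns because whatever holds in all the best m-equivalent worlds continues to hold when statements are conjoined.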

5.13  Conclusion An analogy has been suggested between how numbers boost (can boost) expressive power and how models do. It runs as follows. Sometimes the best way to get our point across is to put a sentence S forward as true, not in full but where a certain subject matter is concerned. S’s truth here in α about m is analyzed as its truth simpliciter in worlds ω that are just like ours where m is concerned. This much applies to numbers and models both. In the number case m is concreta, and in the model case m is, perhaps, motion due to gravity. An interesting difference is that ω in the number case is (assuming nominalism) more complicated than our world. It contains both a duplicate of α and a bunch of abstracta that are missing from α. Whereas ω in the model case is typically simpler than our world, that being the point of working with models. Our main idea about models has been that (i) translating truths in the model into truths about the target system is difficult, but (ii) there’s an alternative: rather than trying to morph S into a truth about α, we can morph “true” into “true as far as m goes.”31 This provides a format for representational model talk. It also raises a question: Where do we look for a subject matter m such that it’s illuminating to know that S gets our world right where m is concerned?32

31 If you like, what S says about m is true. 32 Let Q be a question about ω that is addressed by S, conceived as a partition of logical space. We are given that ω belongs to the S-​cell of Q. We cast about for a question Q′ about α one of whose cells (the one with α in it) overlaps the S-​cell of Q. If one is found, this tells us that α resembles the S-​worlds in the area of overlap in how it answers Q′.   Now, let m′ be the subject matter corresponding to Q′—​the one such that the states of things m′-​ wise are all and only the answers to Q′. (The number of stars relates in this way to “How many stars are there?”) That the area of overlap is non-​empty means that S is true in α about m′—​in the weak sense that is not closed under conjunction. If the area of overlap contains, moreover, all the best worlds in α’s m′-​cell, then S is true in α about m′ in the strong sense that is closed under conjunction. The trick is to find a question whereof these are interesting things to know. See Pérez Carballo (2014, 2018) and Yalcin (2016) for discussion.


References Andreas, H., and Zenker, F. (2014). “Basic Concepts of Structuralism.” Erkenntnis 79, supp. 8: 1367–​1372. Balaguer, M. (1996). “A Fictionalist Account of the Indispensable Applications of Mathematics.” Philosophical Studies 83, no. 3: 291–​314. Balaguer, M. (1998). Platonism and Anti-​Platonism in Mathematics. New York: Oxford University Press. Bokulich, A. (2011). “How Scientific Models Can Explain.” Synthese 180, no. 1: 33–​45. Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Clarendon Press. Contessa, G. (2011). “Scientific Models and Representation.” In The Continuum Companion to the Philosophy of Science, edited by S. French and J. Saatsi, 120–​137. London: Continuum. Dorr, C. (2010). “Of Numbers and Electrons.” Proceedings of the Aristotelian Society 110: 133–​181. Falkenburg, B. (2015). “How Do Quasi-​Particles Exist?” In Why More Is Different, edited by B. Falkenburg and M. Morrison, 227–​250. Berlin: Springer. Field, H. (1980). Science Without Numbers:  A Defense of Nominalism. Princeton, NJ: Princeton University Press. Fine, K. (2017). “A Theory of Truthmaker Content II: Subject-​Matter, Common Content, Remainder and Ground.” Journal of Philosophical Logic 46, no. 6: 675–​702. Fine, K. (2018). “Yablo on Subject Matter.” Philosophical Studies. https://​doi.org/​10.1007/​ s11098-​018-​1183-​7. Frege, G.  (1997). “Notes for Ludwig Darmstaedter.” In The Frege Reader, edited by M. Beaney, 362–​367. London: Blackwell. Frigg, R. (2010a). “Fiction and Scientific Representation.” In Beyond Mimesis and Convention, edited by R. Frigg and M. C. Hunter, 97–​138. Dordrecht: Springer. Frigg, R. (2010b). “Models and Fiction.” Synthese 172, no. 2: 251–​268. Frigg, R., and Hartmann, S. (2005). “Scientific Models.” In The Philosophy of Science: An Encyclopedia, edited by S. Sarkar et al., vol. 2. New York: Routledge. Gelfert, A. (2003). “Manipulative Success and the Unreal.” International Studies in the Philosophy of Science 17, no. 3: 245–​263. Godfrey-​Smith, P. (2006). “Theories and Models in Metaphysics.” Harvard Review of Philosophy 14, no. 1: 4–​19. Godfrey-​Smith, P. (2009). “Models and Fictions in Science.” Philosophical Studies 143, no. 1: 101–​116. Hughes, R. I.  G. (1997). “Models and Representation.” Philosophy of Science 65, no. 4: S325–​S336. Kratzer, A. (1977). “What ‘Must’ and ‘Can’ Must and Can Mean.” Linguistics and Philosophy 1, no. 3: 337–​355. Leng, M. (2010). Mathematics and Reality. Oxford: Oxford University Press. Levins, R. (1966). “The Strategy of Model Building in Population Biology.” American Scientist 54, no. 4: 421–​431. Levy, A. (2012). “Models, Fictions, and Realism: Two Packages.” Philosophy of Science 79, no. 5: 738–​748. Levy, A. (2015). “Modeling Without Models.” Philosophical Studies 172, no. 3: 781–​798. Lewis, D. (1988). “Statements Partly About Observation.” In Papers in Philosophical Logic, 1:125–​155. Cambridge: Cambridge University Press.

Models and Reality  153 Morrison, M., and Morgan, M. S. (1999). “Models as Mediating Instruments.” Ideas in Context 52: 10–​37. Morton, A. (1993). “Mathematical Models: Questions of Trustworthiness.” British Journal for the Philosophy of Science 44, no. 4: 659–​674. Papineau, D. (1988). “Mathematical Fictionalism.” International Studies in the Philosophy of Science 2: 151–​174. Paul, L. A. (2012). “Metaphysics As Modeling:  The Handmaiden’s Tale.” Philosophical Studies 160, no. 1: 1–​29. Pérez Carballo, A. (2018). “Good Questions.” In Epistemic Consequentialism, edited by K. Ahlstrom-​Vij and J. Dunn, 123–​146. Oxford: Oxford University Press. Pérez Carballo, A. (2014). “Structuring Logical Space.” Philosophy and Phenomenological Research 92, no. 2: 460–​491. Putnam, H. (1980). “Models and Reality.” Journal of Symbolic Logic 45, no. 3: 464–​482. Schelling, T. C. (2006). Micromotives and Macrobehavior. New York: W. W. Norton. Shapiro, S. (1983). “Mathematics and Reality.” Philosophy of Science 50, no. 4: 523–​548. Silver, B., and Busch, H. (2006). Why Cats Paint:  A Theory of Feline Aesthetics. New York: Random House. Toon, A. (2010). “The Ontology of Theoretical Modelling:  Models as Make-​Believe.” Synthese 172, no. 2: 301–​315. van Fraassen, B. C. (1980). The Scientific Image. New York: Oxford University Press. van Fraassen, B. C. (2006). “Representation: The Problem for Structuralism.” Philosophy of Science 73, no. 5: 536–​547. Walton, K. (1990). Mimesis as Make-​Believe: On the Foundations of the Representational Arts. Cambridge, MA: Harvard University Press. Walton, K. (1993). “Metaphor and Prop Oriented Make-​Believe.” European Journal of Philosophy 1, no. 1: 39–​57. Weisberg, M. (2007). “Three Kinds of Idealization.” Journal of Philosophy 104, no. 12: 639–​659. Weisberg, M. (2012). Simulation and Similarity: Using Models to Understand the World. New York: Oxford University Press. Wigner, E. (1960). “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Communications on Pure and Applied Mathematics 13, no. 1: 1–​14. Williamson, T. (2016). “Model-​Building in Philosophy.” In Philosophy’s Future:  The Problem of Philosophical Progress, edited by R. Blackford and D. Broderick, 159–​162. Oxford: Wiley Blackwell. Yablo, S. (2001). “Go Figure: A Path Through Fictionalism.” Midwest Studies in Philosophy 25, no. 1: 72–​102. Yablo, S. (2002). “Abstract Objects: A Case Study.” Noûs 36, supp. 1: 220–​240. Yablo, S. (2005). “The Myth of the Seven.” In Fictionalism in Metaphysics, edited by M. Kalderon, 88–​115. Oxford: Clarendon Press. Yablo, S. (2014). Aboutness. Princeton, NJ: Princeton University Press. Yablo, S. (2017a). “If-​Thenism.” Australasian Philosophical Review 1, no. 2: 115‒132. Yablo, S. (2017b). “Fine on Subject Matter.” Philosophical Studies (online): 1‒18. Yalcin, S. (2016). “Belief as Question-​ Sensitive.” Philosophy and Phenomenological Research 97, no. 3: 23–​47.

6 Models, Fictions, and Conditionals Peter Godfrey-​Smith

The logical simplicity characteristic of the relations dealt with in a science is never attained by nature alone without any admixture of fiction. —Frank Ramsey

6.1  Introduction A philosophical outlook associated with Quine (1960) champions science and is distrustful of modality—of possibility and necessity, essences, counterfactuals, and so on. This combination shows some tension, however; scientific language and practice seem alarmingly imbued with the modal. In response to Quine himself, this challenge was made by pointing at science's use of causal modalities (Føllesdal 1986). Another aspect of everyday modal thinking is treating the actual as surrounded by a cloud of possibilities—other ways things could be or could have gone. In science, the cloud acquires more structure; it can become a state-space. One of the ways nature has been mathematicized is by means of organized spaces of possibilities.1 Once the properties of things are seen as the values of variables, it's natural to recognize other values, including those never instantiated. Laws and the like usually relate variables with respect to all those values. This scientific organization of possibilities is akin to the philosophical idea of an array of possible worlds, with comparative nearness relations between the worlds. In the philosophical project of Lewis (1986), the two roles for modality I've mentioned so far are linked: the basis for the causal push, to the extent that such a thing exists, is nearness relations between worlds. 1 See Williamson (2016, n.d.) for discussions of modality and state space models.

Models, Fictions, and Conditionals  155 The most obvious role for something beyond the actual in science, though, is the kind of model-​building in which the merely possible is overtly on the table—​the science of infinite populations, frictionless planes, and wholly rational economic agents.2 In this kind of model-​based science, we see attempts to understand complex actual systems by analyzing simplified analogues of those systems, analogues that do not, as far as anyone knows, actually exist. Many useful scientific tools can only be applied, or can only yield comprehensible results, if you apply them to a modified analogue of the system you are trying to understand. The push of causation, the organization of clouds of possibilities, and overt detours through fiction can all be understood as some sort of scientific employment of the merely possible. Science takes possibility seriously. That is not to say that science takes the non-​actual to be in some way real, but it treats possibility in an apparently weight-​bearing way. In contemporary metaphysics, questions about what might be going on in talk about the merely possible are embraced, but in philosophy of science there’s sometimes a reluctance to attempt a general story about it. The metaphysical road is seen as one to avoid. Arthur Fine, commenting on recent discussions of the role of fictions in scientific modeling, says that this work is looking for a theory where one is not needed: “I am suggesting that there is no genuine problem for anyone to look into” (2009, 124).3 Science makes use of fictions, and this sometimes helps us to understand things, and that’s the end of the story. I think this is a mistake, and the attempt to give a general account of how fictions work in modeling is appropriate. The problem is analogous to a well-​known family of problems about mathematics, especially what Wigner (1967) called the “unreasonable effectiveness” of mathematics in science. Fictions are not as unreasonably effective as mathematics, but their effectiveness does raise questions. Various chapters in this volume also link questions about scientific modeling to questions about the psychological faculty of imagination. In this chapter I’ll offer—​eventually—​a way of thinking about the role of fictions in model-​building. I’ll get there by a road that visits several other proposals that have been offered on this topic, and also embeds 2 Lewis made a brief comment about this in his 1986 book about modal realism—​he saw this as another application of his framework. 3 “I am suggesting that there is no genuine problem for anyone to look into. There is no need for the philosophical equivalent of an ‘impetus theory’ to explain how a gap is bridged. Properly understood, nothing about a gap calls for philosophical explanation” (Fine 2009, 124).

scientific modeling within a fragmentary but general treatment of fiction, possibility, and the imaginary.

6.2 The Imaginary A set of central cases pose the problems clearly. In these cases, scientists want to understand a particular complex system, and they make what seems to be a detour and investigate a simplified analogue of their target. An example used by Weisberg (2007, 2013) is Vito Volterra’s attempt to understand fish populations in the Mediterranean. The work starts with a real system (a sea), populated by unproblematic entities (fish), but the system is intractably complicated, and that motivates the detour. There’s an apparent similarity between cases like this and those where the “model” is a physical object—​say, a scale model such as the San Francisco Bay model (also used by Weisberg). In the latter case the simplified analogue is built; in the former the model is merely imagined. This practice is related to several others. Salis and Frigg, in this volume, discuss Galileo’s remarkably effective use of thought experiments. Thought experiments need not involve a path through the deliberately fictional; they may be concerned merely to bracket questions of actuality, and if what is imaged turns out to be real, that would be fine. Imagination can function in the service of “direct” representation; a scientist freely imagines, but the goal is literal truth.4 The situation is different when what is imagined is known to be non-​actual but remains scientifically useful despite this. Salis and Frigg use the example of Maxwell, who made use of an “imaginary” fluid in his study of lines of force. Maxwell’s discussion makes explicit the fact that he does not see his construct as a “hypothetical” fluid, but as “merely a collection of imaginary properties”—​Maxwell distinguishes the hypothetical (but perhaps actual) from the avowedly fictional. Maxwell also says this side of his work is merely “heuristic.” But some constructs of this kind are not merely heuristic in any ordinary sense of that term. Consider large-​scale climate models and the like. Here, idealized models appear to be a central element in theorizing, a large part of our means for theoretical understanding.5 4 “Direct” here does not mean entirely unmediated or infallible. It just means not mediated by a deliberate introduction of fiction. 5 For the case of climate models, see Parker (2014) and Katzav and Parker (2015).

Models, Fictions, and Conditionals  157 As I have set things up, the detour is a choice, something optional. It is possible to reach a related view by saying that all scientific work has, and must have, some of this character. That is the view expressed in the Ramsey quote I used as the epigraph for this chapter: “The logical simplicity characteristic of the relations dealt with in a science is never attained by nature alone without any admixture of fiction” (1931, 168). One tradition holds that manageable scientific ideas must have certain features, including simplicity and unity in forms the world may be reluctant to furnish. (That is a theme in Kuhn 1962, though it can be seen as going back to Kant.) If the world is that way, and the scientist recognizes that it is, then an attitude like Ramsey’s is motivated. But Ramsey’s “never” is something I deny, and I’ll assume in this chapter that the description of fictional analogues of real systems is an optional matter, one scientific strategy among several. The style of work I invoked earlier—​“Imagine a population . . .”—​is one category of cases within model-​based science, but there are others. There is also a style of work in which mathematical structure itself is central. The first move is not imagining a physical system but specifying a mathematical structure—​a space, parameters, functions—​and then giving an interpretation of this structure that relates it to a real system. The “semantic view of theories,” in some forms, was an account of science that made this style of work central and tried to understand other kinds of theorizing in those terms (Suppes 1960). Instead, I think this kind of work is real and somewhat different from modeling that makes use of imagined economies, populations, and planets. Work that moves directly to consideration of a mathematical structure need not have significant dealings with fictions. The philosophical problems it poses are instead related to Platonism and the status of mathematical objects. That suggests a view in which there are two distinct phenomena, the scientific use of imaginary systems and the scientific use of apparently freestanding mathematical objects, with distinct metaphysical questions arising in each case. I think the situation is more complicated, as each practice shades into the other. The “imagined” systems that matter in science are usually not single systems but classes of them, related by their mathematical properties. Modeling involves a lot of semi-​schematic imagining, moving between more abstract and more quasi-​concrete structures. In this chapter I am concerned mostly with the style of work in which imaginary systems are evident features of the practice.

158  The Scientific Imagination One option that has been explored is the view that a model is a representation—​not just a structure that might be used as one, but one that has, as an inherent feature, a key to its interpretation. This is the view that guides Yablo’s chapter in this volume. Yablo acknowledges Frigg’s view and treats it as similar—​models “not only represent their target; they do so in a clearly specifiable and unambiguous way” (Frigg 2010a, 275).6 This can be distinguished from a situation in which a model system is a resource with which one can do as one chooses, and which has no natural or “unambiguous” interpretation. I think model systems do not carry with them their interpretations; they are “ambiguous” in that sense. This issue may be partly terminological—​“model” might refer to a model system plus something that determines its interpretation. Either way, I think progress has been made by distinguishing between model systems and the commentaries that bring them to bear on empirical systems—​what Giere (1988) called “theoretical hypotheses” and I called “construals” (Godfrey-​Smith 2006). One kind of work is exploring the model system itself—​this is true of physical scale models, imaginary systems, and purely mathematical structures—​and a separate activity is working out what the structure might tell us about empirical goings-​on. Models are often developed in one domain and used as a resource in another. Spin-​glass models were taken from physics into biology. Game theory went from economics into biology—​and then went back to economics again, after it had been evolutionized by biologists. Places such as the Santa Fe Institute specialize in encouraging work of this kind—​formal but multidisciplinary, with a continual eye to the exchange of models between fields. As Weisberg (this volume) discusses, computer models raise special problems of interpretation: which aspects of the computational processing are to be treated in a representational way, and which are to be regarded just as how a set of procedures are being physically implemented? These questions about interpretation have connections to the more vexed ontological questions about modeling. Some views oppose the idea of model systems as objects—​they oppose this even as the right way to see things prima facie, at face value. Modelers, on this view, do not talk about model systems 6 I think that Frigg’s view is different on this point, though the passage Yablo quotes does support what he attributes to Frigg. I am not sure how some parts of his view fit together. Frigg says, “Taking model-​systems to be intrinsically representational is a fundamental mistake. Model-​systems, first and foremost are objects of sorts, which can, and de facto often are, used as representations of a target-​system” (2010c, 99). He also says scientific “models not only represent their target; they do so in a clearly specifiable and unambiguous way” (2010b, p. 275). I think that if models are not inherently representational, they are not “unambiguous.”

Models, Fictions, and Conditionals  159 that are distinct from real-​world targets, but merely engage in make-​believe about real-​world systems. I think views of that kind have an element of truth, and I’ll discuss them later, but I think they have to massage the phenomena quite a lot. It is partly the actual reification of model systems by scientists—​ who write as if constructing a model is one thing and determining its application is another—​that contributes to the putative reifiability of model systems in a philosophical context. Suppose we do take seriously the apparent treatment of model systems as objects, as topics of discussion. What sort of thing are those systems taken to be? There are two standard options, one of which acts mostly as a foil in discussion. That one is the view that model systems might be non-​actual but real and concrete entities, in the style of Lewis (1986). The other option is that they are abstract objects like numbers and other mathematical entities. Those views need not exhaust the options—​“abstract artifacts” of the sort described by Thomasson do not have the familiar features of abstract objects like numbers (such as necessary existence). I will discuss Thomasson’s view shortly. First I want to add something to the list of options. I think that “the imaginary” is a folk-​ontological category, somewhat different from the categories philosophers usually work with. If something is imaginary, then it is not actual and concrete, but the imaginary is not merely the possible; it has a made-​up or constructed side to it. Everyday thinking has it that imaginary things are contingent products of human activity (they have to be imagined to get their imaginary status), but they are not abstract in the way numbers and sets are. They are unsuccessful candidates for concreteness, and this makes them different from things that were never countenanced at all, and also different from abstract objects. Imaginary things are candidates for occupying space, though they don’t actually do so, and candidates for entering into causal relations with real objects. So they are not like numbers, which could not occupy space, and not like the occupants of Lewisian possible worlds, which don’t have to be thought up. Once imagined, they can be talked about by many different people. (Many of the quotations from scientific modelers used by Salis and Frigg in their chapter in this volume strongly suggest a conception of the imaginary like this.) You might say at this point that “the imaginary” in this sense is a dubious ontological category, one that falls between the good ones: if things are real but not in space, then they are abstract and should be like numbers. In that case, they are not contingent products of human action. I agree that imaginary things are dubious in this sense. I am trying to capture a folk-​ontological

160  The Scientific Imagination category, not a professional-​ontological one. The idea is indeed a bit of a mess. But I think the way people think and talk involves a mild commitment to this category despite its instability. Metaphysicians spend their time coming up with ontological categories that are better-​behaved. I think that the objects described in literary fictions are imaginary also. This claim, again, does not determine how those fictions will be treated in a fully considered, non-​folk ontology. Noting this role for the imaginary is just a first step. Some years ago Ron Giere entitled a paper “Why Scientific Models Should Not be Regarded as Works of Fiction” (2008). The paper was a response to the literature linking scientific models with fiction. Several people, including me, who had been influenced by Giere’s 1988 book thought that, although Giere did not put things in these terms, the only way to make sense of what he said was to see Giere’s “model systems” as fictional objects (see Thompson-​Jones 2010 and Thomasson, this volume, for these arguments). Giere said in the title of that paper that scientific models should not be regarded as works of fiction. In the paper itself, though, he said this: It is widely assumed that a work of fiction is a creation of human imagination. I think the same is true of scientific models. So, ontologically, scientific models and works of fiction are on a par. They are both imaginary constructs. (2008, 249)

Giere goes on to say: “In spite of sharing an ontology as imagined objects, scientific models and works of fiction function in different cultural worlds” (2008, 251). This talk of “cultural worlds” is a picturesque way of saying that they have different roles in our lives. Of course they have different roles; no one had suggested otherwise. Literary fictions function in recreation, in artistic endeavor, and in allegorical exposition; scientific models are part of an attempt to understand the workings of natural systems. The question being grappled with in this literature is about the “ontological status” of model systems. On this point, Giere in his 2008 paper endorses the same view as some of the authors he said he was opposing. Giere said he does not want models-​ as-​fictions talk to help anti-​scientific movements such as creationism and postmodernism. That is understandable, and it helps explain the tension in Giere’s discussion. But it is true, and Giere accepts it as true, that modeling makes use of imaginary constructs, and somehow this does help us understand the actual world.

Models, Fictions, and Conditionals  161

6.3  Make-​Believe, Abstract Artifacts, and Easy Ontology I’ll look now at two families of ideas developed in response to these questions, and will offer some arguments against both. The arguments are not intended to be decisive, but they help motivate my positive view. I’ll be looking mostly at proposals due to Thomasson and Levy, with input also from Frigg and Toon. All make use of a notion of make-​believe, influential within theories of fiction elsewhere in philosophy, and they augment this idea in different ways. The idea of make-​believe is central to Kendall Walton’s account of fiction (1990). Make-​believe is a psychological attitude that (in some forms) is part of a social practice guided by “props” that writers and other artists produce. Acts of make-​believe are constrained by “principles of generation” associated with the practice. This analysis aims to avoid the wrong kind of reification of fictional objects. For example, the text of Hamlet should not be taken to describe a “nonexistent or abstract” prince. Instead, it should be seen as “enjoining us to make-​believe” that there was such a prince (Thomasson, this volume). The same view can be applied to modeling in science: we pretend that there is a system with such-​and-​such features. And then: “For a statement p about what is the case within a model system to be true, is for the model-​description together with the relevant principles of generation to prescribe p as to be imagined” (Frigg 2010a, 262). This is, so far, a partial account that does not address several topics, such as comparisons scientists make between model systems and real-​world targets. I’ll look at two views that augment this first move, developed by Thomasson and Levy.7 Thomasson holds that make-​believe gives a good analysis of talk that is “internal” to model-​building but works less well for talk that compares models with real systems. This, she says, parallels the situation with literary fictions. She uses the term “pure pretense” for views that use a psychological state of make-​believe to analyze both talk within a fiction and “external” talk about fictions, such as talk about fiction/​world relations. She thinks that make-​believe does not do a good job of making sense of external discourse in this sense, and “well-​known problems with pure pretense views of fiction carry over to the parallel views of models” (this volume). 7 I use the terms “pretense” and “make-​believe” interchangeably.

162  The Scientific Imagination Responding to the case of literary fictions, Thomasson developed a view that treats fictional objects as abstract artifacts. Make-​believe is the first step in fictionalizing, but once socially organized make-​believe has been undertaken, it can be seen as giving rise to an abstract artifact that can be talked about as an object. The idea of an abstract artifact she sees as independently motivated; laws (not scientific laws but positive, societal ones) and symphonies are also abstract artifacts.8 Recognition of abstract artifacts enables a “far more straightforward account of external historical, theoretical, and critical discourse”—​about literary fictions and scientific models alike. Thomasson thinks that it is only general ontological qualms that might make us reluctant to say that model systems are real in this way. She argues that abstract artifacts are not ontologically costly in either scientific or literary cases: “What more should one think it would take for a model system to be created than for scientists to engage in certain kinds of modeling activities and to provide certain model descriptions?” (this volume). My reservations about this view don’t primarily involve ontological costs. They have to do with what role abstract artifacts can play in making sense of modeling, with how much they might explain. I’ll start with a general idea. I think there are at least two roles that talk of objects (object-​talk) can play in a description of a practice of this kind, a minimal role and a richer one. The minimal role is one where objects function as little more than intersection points in a certain kind of pattern of talk. Participants in a practice (perhaps novelists and readers, mathematicians, scientists) use nouns in a certain way, either casually or on the basis of some kind of real commitment, and we as commentators follow them, but do so in a deliberately low-​key spirit. We don’t think (or necessarily deny) that if we were constructing a pattern of first-​order talk in this area from scratch, we would use nouns in the same way they do. Once playwrights and actors talk of Hamlet or the ideal pendulum as a thing, we follow them. (I think this is close to what Thomasson [2009] in other work has called the “covering” use of object-​talk.) The richer kind of object-​talk that commentators might engage in is one in which we are committed to objects as a certain kind of source of constraint on what is said within the practice. We take them to be targets of the talk, which can somehow constrain what is said. This talk of “constraint” is not 8 Thomasson: “Abstract artifacts are extremely commonplace. Entities such as theories, stories, laws of state, symphonies, and so on all seem best understood as abstract artifacts. If you are prepared to accept that we refer to any of these, then there should be no barrier to accepting fictional characters and model systems—​considered as abstract artifacts.”

Models, Fictions, and Conditionals  163 merely a relabeling of constraints that are internal to the practice. To talk of Hamlet as an object is not to see that object as exerting any constraint over what is said about him; more exactly, such an object exerts no constraint over and above those inherent in the pattern of talk itself. Ordinary physical objects, on the other hand, are sources of constraint in their own right. They affect what we say about them in ways that outrun decisions we might make internally to the practice. Mathematical objects are also supposed to be a bit like this—​at least, many would say so, and Platonism is an expression of that attitude. Mathematical objects, such as the real numbers, seem to be a source of constraint. This may be an illusion, or something that invites a revisionary analysis (see section 6.4), but the prima facie case is there. What about imaginary systems of the kind that figure in modeling? The problem might be expressed by saying they seem a bit like Hamlet, but also a bit like numbers, and the category of abstract artifact only captures their Hamlet-​like role, their role as mere points of intersection in practices of talking and imagining. My reservations about what abstract artifacts contribute can be illustrated by looking more closely at one kind of “external” discourse about models, the kind that involves comparisons of model systems to empirical targets. In his paper “Models and Fictions” (2010a), Frigg moves some distance away from a “pure” pretense view (in Thomasson’s sense) in order to make sense of these comparisons. Frigg says that talk of model/​world matches involves comparison of properties. Building a model puts on the table certain properties, and a good model puts on the table properties similar to those actually possessed by the target. To talk about the empirical value of a model (over and above its predictive role) is to talk about this relation between properties. I objected (Godfrey-​Smith 2009) to Frigg that the realism about nowhere-​ instantiated properties this approach requires is not so much different from realism about non-​actual objects. Thomasson, on behalf of Frigg, responds by defending “easy ontology” for nowhere-​instantiated properties. She does not say that these properties are abstract artifacts, or similar; she thinks such properties are real for different reasons. They have a direct licensing (by way of negative predications) that is not applicable in the case of non-​actual objects. I am not saying this move fails—​perhaps it is fine. But it does show how little is being done by the abstract artifacts themselves. If the most important “external” questions about models are those that involve world-​target relations, then abstract artifacts are not playing much of a role. The move Frigg

164  The Scientific Imagination made did not require introduction of abstract artifacts, and Thomasson’s development of Frigg’s view does not employ them either. Thomasson says that abstract artifacts are not very demanding as posits. I reply that this may be fine, but the most important problems with “external” discourse about models do not seem to be helped much by them. We don’t pay much to introduce them, and don’t seem to gain much either. When I say this, I assume that the most important “external” talk about scientific models is talk about model/​world relations. In the case of literary fictions, those kinds of comparisons are not usually seen as especially important, certainly no more important than talk about the history of a literary work and its relations to other works. As Arnon Levy noted in comments on a draft of this chapter, abstract artifacts do seem to help us handle those other kinds of external talk, in scientific as well as literary cases. A different approach to all these questions is to be firmer on the worldly side (“hard” as opposed to “easy” ontology) and then be prepared for a complicated story about how the practice relates to the world. That is the approach I’ll take in the next section. A different view that uses make-​believe in company with extra arguments has been defended by Levy (2015). He thinks a wrong move was made early on when people accepted the face-​value appearance that model systems are distinct from their targets. Instead, models are “directly about the world.” My suggestion is that we treat models as games of prop oriented make-​ believe—​where the props, as it were, are the real-​world target phenomena. To put the idea more plainly: models are special descriptions, which portray a target as simpler (or just different) than it actually is. The goal of this special mode of description is to facilitate reasoning about the target. In this picture, modeling doesn’t involve an appeal to an imaginary concrete entity, over and above the target. All we have are targets, imaginatively described. (2015, 791)

The make-believe a modeler engages in is always directed on real-world objects, and the fictional side of modeling is imagining modifications to those objects. There are no extra objects we need to make sense of; there are only unusual things being said about ordinary target systems. As Levy notes, Adam Toon (2012) has developed a similar view. This approach is in a sense the opposite of Thomasson's. She thinks it is important to recognize that model systems are additional objects that people can

Models, Fictions, and Conditionals  165 talk about, and argues that this is not as ontologically costly as people have thought. Levy and Toon think there is no need to recognize such objects at all, whether they are cheap or expensive. Let’s move immediately to the hard questions about the utility of modeling. For Levy, models are not merely predictive instruments. They have a descriptive goal that includes truth. You can compare what you are prescribed to imagine about a target with what you may consider believing about the same thing. Model descriptions, because of idealization, are not in the simplest sense true of their targets, but they can be “partly true” of them: “Partial truth (of a sentence, or a collection of sentences) is . . . understood as truth of a part (of the sentence)” (2015, 792). Levy’s position draws on Yablo (2014), who develops the view further in his chapter in this volume. The case of numbers provides a model: “The number of planets in the solar system is nine” equates the number of planets with the number nine. Its truth or falsity supervenes in part on facts about numbers, and in part on the composition of the solar system. Even if we assume that there are no numbers, it would still seem that this sentence says something true about the solar system. (Levy 2015, 792–​793)

To find the partial truth in a description, we subtract some of what it says. Similarly, what we learn from a model can be seen by setting aside its idealizing features. The ideal gas model, for example, can be seen as saying some true things about real gases, as well as saying some false things (because of its simplifying assumptions). This is not an account in terms of approximate truth, and Levy explicitly distances himself from that path.9 Later I’ll give a treatment that does make approximation central. For Levy, you engage in make-​believe about a target system, and you are enabled by this exercise to say some things that are partly true. Levy’s is an attempt at a very parsimonious view. Its downside is that it insists that all cases fit a certain mold even when they seem not to. There seems to be modeling that is not about actual systems. And many modeling practices seem to be more indirect; a model system is explored first, in its own right, and 9 Levy: “It should be emphasized that partial truth is not approximate truth: it is not that “the number of planets in the solar system is nine” is more or less true. Rather it has a distinct part that is true, i.e. the part concerning the solar system and a distinct part that is false, i.e. the part concerning numbers (at least if we suppose that there are no numbers)” (2015, 793).

comparisons are made to targets later. I think that Levy might want to cut those cases loose, as ultimately not entirely coherent, and say that his account captures the part of the practice that works. I will develop my own view next and then compare it further with Levy's.

6.4  Models, Conditionals, and Truth This section will outline a positive view of scientific modeling. I’ll approach it by way of a general picture of the imagination and modal thinking, and will also return to the case of mathematics at the end. I begin by taking a practical and psychological approach to the idea of possibility. Plausibly, the idea of possibility has a primitive association with action: the world at large determines how things are; we determine what to do, and in these episodes we take ourselves to choose from possibilities. From there, a sense of possibility projects backward and sideways. We see other events, including past events, as embedded in a cloud of ways-​things-​might-​ have-​been.10 (As Alison Gopnik says, regret is the evolutionary price paid for planning.) Action gives us the idea of possibility, and also an accompanying idea of dependence:  if I  do this, things will go like that. The forward models used in planning can also be applied to testing (if I do this, I expect things to look like that—​unless I am wrong). The sense of possibility thus gains an epistemic role. On top of this deep-​rooted family of skills of useful modalizing are laid additions that depend on language and the social organization of science. In a central class of scientific cases, you specify a setup and try to see what follows. In work of this kind, computers are now a powerful aid to the scientific imagination, as they enable complex dependence relationships to be broken down into many smaller ones. Large numbers of small and well-​understood procedures can be combined to enable us to imaginatively explore a larger system. 10 This modal orientation linked to action might be neurobiologically deep and seen outside of humans. Neurobiological work on internal “spatial maps” in rats has progressed so far that it is possible to read some things off their neural activity, and one study reports that as rats make a spatial decision, they activate a collection of neural paths that sweep ahead of the animal’s representation of its current position, running “first down one path and then the other,” apparently representing future possibilities (Johnson and Redish 2007). It’s interesting to think in this context about the neglect of ordinary action in Quine, foe of modality.
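A very small example of the "specify a setup and see what follows" pattern may help fix ideas. The sketch below stands in for no particular model discussed in this chapter: it stipulates a crude predator-prey setup in the Lotka-Volterra family, with invented parameter values, and runs it forward by repeating one small, well-understood update step, the kind of combination of simple procedures just described.

```python
# Illustrative sketch only: stipulate a setup and run it forward. The equations are
# a crude discrete-time (Euler) rendering of Lotka-Volterra-style dynamics; the
# parameter values and starting point are invented for the illustration.

def step(prey, pred, a=0.1, b=0.002, c=0.2, d=0.0005, dt=0.05):
    """One small, well-understood piece of the imagined dependence structure."""
    prey_next = prey + dt * (a * prey - b * prey * pred)
    pred_next = pred + dt * (-c * pred + d * prey * pred)
    return prey_next, pred_next

prey, pred = 450.0, 45.0          # the stipulated setup
history = [(prey, pred)]
for _ in range(4000):             # many small steps combined into one imagined run
    prey, pred = step(prey, pred)
    history.append((prey, pred))

# What follows from the setup: neither population dies out, and each rises and falls in turn.
print("prey stays between", round(min(p for p, _ in history)), "and", round(max(p for p, _ in history)))
print("predators stay between", round(min(q for _, q in history)), "and", round(max(q for _, q in history)))
```

Nothing in the run depends on the setup being actual; what it delivers is a claim about what a system meeting the stipulation would do, which is the conditional form taken up next.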

Models, Fictions, and Conditionals  167 In a wide range of cases, the output of a piece of scientific modeling can be expressed as a conditional, a statement of the form “if A, then C” (or a collection of these conditionals).11 The upshot of modeling need not always be stated in this form, but I think it is generally pretty close to the surface. Philosophy recognizes several kinds of conditionals. Material conditionals are simple: “if A then C” is treated as equivalent to “either C or not A.” The others, including how many others there are, are more controversial. An example is the indicative conditional: “if it rains, the show will be called off.” Is that conditional truth-​functional, and if so, is it a material conditional (true in any case where it does not rain, as well as any case where the show is called off)? More overtly problematic are subjunctive conditionals: If it were to rain, the show would be called off. If Jones had taken arsenic, he’d show these symptoms. When a subjunctive conditional has an antecedent that is known or assumed false, the result is a counterfactual conditional: If it had rained, the show would have been called off. If Oswald had not killed Kennedy, someone else would have. Unlike the Kennedy and Nixon cases often used in discussions of counterfactuals, the conditionals generated by model-​building often have generalities in antecedent and consequent: if there was a setup like this, it would do that. This might be expressed instead as: any setup like this would do that. Now it looks like a statement of a law (or a “law in situ” [Millikan 1984]). You might then simply say that modeling yields laws, except that natural laws are not supposed to have false (never satisfied) antecedents. A law with a false antecedent could be said to apply in the actual world if it was a material conditional, but then it would not tell you what to expect on the consequent side.12 Conditionals are both philosophically controversial and practically indispensable, especially in planning and determining responsibility. As I said, the products of modeling can often be seen as conditionals. Re-​expressing some familiar cases:

11 A first sketch of this view is in Godfrey-​Smith 2014.

12 This is a way of expressing some of Cartwright’s (1983) insights.

168  The Scientific Imagination If a pair of populations had features F, then it would have/​do G (Lotka-​ Volterra behaviors). If an object had the features of an ideal pendulum, then it would do this. These look like counterfactual conditionals. They are subjunctive with respect to the kind of link asserted between antecedent and consequent, and their antecedents specify arrangements that are assumed to be never realized in the actual world. Not all cases need fit this pattern—​you might model without knowing whether the antecedent is satisfied. The use to which these products are applied also varies. You might want to make predictions for use in guiding behavior (if you are confident in the model) or for testing (see what happens and give the model credit or blame). I’ll focus here on cases where the modeler’s aim is working out what a system will do. In describing the scientific role of these conditionals, there are two features to consider. One is the status of the conditional itself—​whether it is true, for example. The other is the conditional’s relation to actual events, something that depends both on the status of the conditional and on whether the conditions specified in the antecedent actually obtain. I’ll come to the status of the conditionals in a moment. Assume for the moment that some of them are true. In the case of antecedents, I’ll talk of satisfaction as the truth-​like feature that some of them have (this is supposed to be more general than truth, as antecedents need not be propositions). Let’s assume that the antecedent of some particular conditional is never satisfied. How could such a conditional help us with the actual world? If conditionals of this kind are the exported products of modeling, how can they bear on real-​world targets? The key is approximation. Suppose a model is built and a conditional is extracted at the end. The antecedent of the conditional specifies all the assumptions that went into the model, and this is a fictional setup. The condition in the antecedent is fictional, but you might think it is close to situations that do actually obtain. Then we can tell a possible story about utility, by way of the idea of approximation. I’ll work through what I take to be a typical sort of case. By modeling you learn that “if A then C” (A is the antecedent, C is the consequent).13 You also know, of a target system, “approximately A.” That is, there is approximate 13 I think that formal analysis of a model system can only lead us to a conditional of this kind in the context of a construal (Godfrey-​Smith 2006). In this discussion I assume that a construal has either explicitly or implicitly been introduced.

Models, Fictions, and Conditionals  169 satisfaction of the antecedent. In this chapter I won’t say more about what that amounts to. Suppose you know “if A then C” and “approximately A.” Can you say “if approximately A, then C”? No, as there are many cases where only A suffices (consider a credit card number). How about “if approximately A, then approximately C”? This is still not okay in general, even if “approximately C” makes sense. But sometimes you do have reason to move from “if A then C” to the version with “approximately” on both sides. Being able to do this in the right cases is an important skill. If this object approximates a simple pendulum, then its swinging will have an approximately constant frequency (as long as the amplitude is small). Sometimes slight variation in A leads to slight variation in C. Then you can use the conditional as a guide to behavior. You can’t derive the exact behavior of the real, very complex setup, but you can work out the behavior of a simpler analogue, and that can be a guide. How much can we say about when it is okay to infer from “If A then C” to “If approximately A, then approximately C?” Particularities and case-​ specific skills probably rule to a large extent, but they are not the whole story. Robustness analysis, often discussed in connection with model-​based science, has some of its utility explained here.14 If you can show that many models that make different idealizing assumptions all lead to the same outcome, exactly or approximately, that can be very valuable. If an outcome is robust over many different antecedents, all of which simplify actuality differently, then the outcome might be found (at least approximately) in the actual world as well. (We learn that “if A then C,” and “if A* then C,” and “if A** then C . . . ,” where C is coarse-​grained, not too detailed). The “spread” achieved over a range of antecedents may be such as to suggest that the actual world is included in the set of worlds that leads to C. In other cases you might not need robustness in this sense. You might have some other basis for going from the subjunctive to something directly usable. Either way, a characteristic sequence of steps may run like this: 1. If A then C (subjunctive conditional with an unsatisfied antecedent, hence a counterfactual, determined by modeling). 2. If approximately A, then approximately C (also a subjunctive, inferred invalidly but perhaps reasonably from (1)).



14 See Levins 1966; Weisberg and Reisman 2008.

3. Either approximately A does not hold, or approximately C (a material conditional, or perhaps something a bit more complicated [Edgington 2014], but something aimed squarely at the actual, and as dependent as possible on truth-functional features. From (2)).
4. Approximately A (via other information).
5. Approximately C (a conclusion about the actual world, from (3) and (4)).

I think this sequence is often a useful one to walk down, even though some of its steps are not deductively valid.15 What is the status of the counterfactual conditionals at the top? Are they true when all goes well? I don't think that is a required part of the account. It should be possible to have a view along the lines I am describing that includes a non-factualist treatment of counterfactuals. I won't try to resolve these questions here, but will briefly sketch some ideas.

A great deal of ingenious work has tried to find rules that assign truth-conditions to familiar kinds of counterfactuals, and does so in such a way that the most uncontroversial counterfactuals come out as true. This has proven a difficult task. For example, Hájek (n.d.) argues that familiar counterfactual conditionals are almost all false. "If A then C" can always be defeated by "If A, it might not be that C." The role of probability in physics generally tells us that the "might" claim is true, so the counterfactual is false, strictly speaking. Hájek thinks that false counterfactuals can still be useful, and in many cases they can also be carefully re-expressed (perhaps probabilistically) to yield a truth. I think considerations like these suggest that non-factualism about counterfactuals may be appropriate: there are good and bad counterfactuals, but truth might be the wrong thing to ask for.

There is reason to reconsider approaches like the one developed by John Mackie (1974), an approach regrettably sidelined by the elegant and more technical work of Lewis and Stalnaker around the same time. Mackie thought that in counterfactual thinking we stipulate setups, mentally and/or verbally, with varying degrees of detail, and try to run the scenarios forward in a way guided by what we take to be laws and other true generalizations. The results of this exercise are often useful, and some counterfactual claims are better than others, but they are not determinately true or false. Our sketches of the initial conditions are

15 Note also that this sort of chain can be expressed with "approximately C" or just a weak and coarse-grained C.

Models, Fictions, and Conditionals  171 never complete enough for a particular outcome to be guaranteed. Hájek’s argument makes this thought more precise. You might object that this view only applies to informal counterfactual thinking, and the use of mathematics and computers ensures that the conditionals established by modelers have a tighter connection between A and C. I think this thought is partly right but not in a way that makes a difference to the essential features of the situation. Though in modeling the aim is (sometimes) to push conditional claims as far as possible toward a situation where the relation between antecedent and consequent is mathematically guaranteed, in the kind of modeling we’re talking about here—​a kind where the product is a conditional about a physical system—​it’s not (ever? generally?) possible to get all the way there. Consider a very simple case: “7 + 5 = 12” is mathematically necessary, but “if you put seven marbles on a table and add five there will be twelve marbles on the table” is not mathematically necessary (and the problem is not fixable by means of a consequent that is probabilistic). Whether you will find yourself with twelve marbles depends on the physical characteristics of marbles and tables, and initial conditions. The same applies to conditionals about what will happen in an ecological system where a certain kind of predator eats a certain kind of prey. Mathematics may be where the work is done, but the conditionals that result are not merely mathematical statements. They are statements about physical systems, dependent on physical regularities and the details of initial conditions. These ideas about counterfactuals are not essential to the view of modeling defended here. The story would be easier to develop if the goodness of a counterfactual was a matter of simple truth. How does my position relate to Levy’s view, discussed in the previous section? One difference is my use of approximation rather than partial truth—​ truth with respect to some of what is said. There might be some hidden equivalence between the two approaches, but I think approximation is an important and somewhat neglected element in this area. The role of approximation is also important in consideration of scientific progress. Many claims made on the basis of old theories and models are better seen as approximately true (close to the truth) than as saying literally true things about part of what they describe (McMullin 1984; see also later in this chapter). The role given to approximation also seems to better capture the role of assessments of model/​ world similarity within the face-​value practice of modeling. The view I’ve outlined also does not make much of the difference between modeling-​talk

172  The Scientific Imagination that is aimed at a particular target and modeling-​talk that is not. A person might say, “If the Adriatic Sea contained just two species of fish . . .” That is a Levy-​Toon-​style antecedent. But the person could also just say, “If there was a sea like this . . .” and build a similar model. Target systems as objects of make-​believe need not be in the picture at the model-​building stage. I’ll mention a few other connections. Suárez (2009) has developed a view in which models are tools for inference, and the specific role for fictional assumptions in models is to furnish us with conditional statements. He does not make use of approximation in the way I did earlier, but this might be added to his view. Bokulich (2011) has argued that the way models explain is by being a basis for counterfactuals: models and targets must have isomorphic counterfactual structure. This seems a strong requirement, perhaps appropriately loosened with an appeal to approximation. Bokulich also discusses McMullin, and says that her view draws on his. McMullin was indeed on the right track, as seen in this quote: The fact that the Bohr model worked out so remarkably indicates that the structure it postulated for the [hydrogen] atom had some sort of approximate basis in the real.  . . .  Later [quantum mechanics] would modify this simple model in all sorts of fundamental ways. But a careful consideration of the history of the model . . . strongly suggests that the guidance it gave to theoretical research in quantum mechanics for an immensely fruitful fifteen years must ultimately have derived from a “fit” of some sort, however complex and however loose it may have been, between the model and the structure of the real it so successfully explained. (McMullin 1968, 396)

My view is intended to be not far from this. I’ll also say more about the analogy to mathematics, especially the problems of ontology and efficacy seen in that older and more famous case. In mathematics, a Platonist attitude is often expressed by practitioners—​I accept that as a sociological fact. The puzzles this attitude raises are made more acute by the “unreasonable effectiveness” of mathematics as a tool for dealing with the world. I  see the analogy with imagined-​system modeling—​more than an analogy—​as follows. In both cases there is a constrained inference practice, prompted by real-​world questions (questions about magnitudes and counts in the case of mathematics, questions about the tendencies of complex systems in the case of modeling). The inferential practice gives rise to claims whose fit to

Models, Fictions, and Conditionals  173 their targets is not straightforward in some respects (abstraction, idealization). The practice also comes to include reifying moves—​there comes to be talk of objects of a special kind, distinct from concrete actual objects (numbers, imaginary systems). There is then an expansion of the inferential practice in the light of this reification (an expansion much more elaborate in the mathematical case). As an outsider, one might then try to explain what is going on. You might argue that the Platonic mathematical objects or imaginary systems have a kind of reality that fits the ways these things are talked about within the practice, or you might give a more revisionary treatment. That revisionary treatment may be one that takes the story back to concrete actual-​world objects, where it all started. A sample theory of this kind in the case of mathematics is nominalist structuralism. The philosophy of mathematics is an intricate field, and I am no expert. I introduce nominalist structuralism here mostly for the way it illustrates a strategy of analysis. Nominalist structuralism denies that there are any special mathematical objects. Mathematical claims are claims about concrete, actual-​world structures. When you assert mathematical propositions, you say things of this form: any system with property F has property G as well. Following the sketch given by Horsten (2016), if p is a sentence within real analysis, and RA are the principles of real analysis, the content of p is (roughly) given by “Every concrete system S that makes RA true also makes p true.” All that is recognized in this interpretation are concrete systems (and the problems with this view partly concern whether concrete systems of the right kind really exist). There are no mathematical objects, even though mathematicians routinely talk as if there were, saying “there is an infinite number of primes,” and so on. The face-​value practice is object-​based and Platonist, but the outside observer can look at mathematical claims differently. Mathematicians are Platonists, but we don’t have to be Platonists when we describe what they do and what they achieve. Similarly, modelers are a bit like modal realists—​or modal realists and Platonists, given what I have said about more purely mathematical modeling. Modelers think that imaginary systems can be topics of discussion and assessed for similarity to non-​imaginary systems.16 But the outside observer 16 When I have engaged in modeling of the relevant kind, mostly in collaboration with Ben Kerr and Manolo Martínez, I have certainly thought that way.

can give an interpretation of what is going on, and what is achieved, that is different. An example is the interpretation I gave earlier using subjunctive conditionals and approximation.

The two cases have interesting similarities and differences. Both have a role for conditionals. That role is somewhat hidden in the mathematics case (assuming nominalist structuralism is true), and only barely hidden in the modeling case. In the modeling case there is a role for idealization and approximation, and some of the conditionals might lack truth values. In the mathematics case, the antecedents of the crucial conditionals are supposed to be satisfied, and there is not supposed (at least at this first stage) to be a role for approximation, though an account of how mathematics usefully applies to actual systems might include this. In both cases, the practice is admitted to include reifying moves, moves that commit to special objects, and there is a structuralist explanation of why the practice works despite the nonexistence of the objects thereby introduced.

In a sketch given by John Burgess (2005) of how nominalist structuralism fits into a larger picture, real numbers are initially seen in a physically grounded way—Newton said that he saw them as "abstracted ratios" of magnitudes such as lengths, and Burgess says this view was common. As mathematical practice matures, it comes to include reification—there is the ordered field of real numbers, R. But what is this thing and how could it help us? Thus the nominalist structuralist interpretation: there is no such thing, and we should re-express mathematical claims conditionally. The case of model-based science is less elaborate and the reifying moves are less integrated. Similarly, though, the story starts with reasoning about tendencies in complex systems with the aid of imagined simplifications, and then there comes to be reification: I am studying the two-locus system with inbreeding, or the ideal pendulum. We can then look on, from the outside, and tell a story in terms of conditionals and approximation, a story in which imaginary systems do not figure.

I can also make a connection here to my earlier discussion of objects, in response to Thomasson. I distinguished between object-talk of a minimal kind and a richer kind. In the richer kind, objects are seen as sources of constraint. I am denying the second role for model systems here, just as the nominalist structuralist does for mathematical objects. If someone wants to hold on to talk of model systems or numbers as objects in the more minimal way, then I don't think this has to be resisted.
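The parallel can also be put schematically. The display below is only a rough gloss of the paraphrases just described, not a formal proposal; the box-arrow is the usual symbol for the subjunctive conditional.

\[
\text{mathematics (nominalist structuralism):}\qquad \text{content of } p \;\approx\; \forall S\,\bigl( S \models \mathrm{RA} \;\rightarrow\; S \models p \bigr)
\]
\[
\text{modeling (this chapter):}\qquad A \;\Box\!\!\rightarrow\; C, \quad \text{then from ``approximately } A\text{'' infer, defeasibly, ``approximately } C\text{.''}
\]

In the first case a generalization over concrete systems does the work that talk of mathematical objects appears to do; in the second, a subjunctive conditional put to use via approximation does the work that talk of imaginary systems appears to do.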


6.5  Conclusion

The imagination is a psychological faculty with several roles—a role seen in planning, a recreational role, and a set of epistemic roles that include a place in science. The scientific role for the imagination, in turn, includes at least three aspects: (i) consideration of epistemic possibilities—ways things might be, (ii) modal organization of the actual with the possible (state spaces, etc.), and (iii) consideration of tractable analogues of complex real systems in model-based science.

In modeling of the kind discussed here, we imagine scenarios—construct the if side of the conditional—and have to work out what would ensue, how the system would behave. A central difference between this scientific activity and recreational cases is that in science, there has to be a highly constrained way of working out what would follow from the setup imagined. The aim is to set up the if side in a way that lends itself to the determination of what follows using methods of known reliability. Otherwise the model is just a game. If your if is an empirically irrelevant if, then the model also has reduced value as science . . . but we should not be too quick there, as what looks like an empirically unimportant if may become important later.

In scientific modeling, the face-value practice is also invested in imaginary systems as subjects of discussion. The view I have defended is one whose first step is to acknowledge the role played by the imaginary in the practice of modeling, but this view then gives an account of modeling that concedes no role to imaginary objects. Instead, the account involves a route from subjunctive conditionals through approximation to indicative conditionals that can be used to draw conclusions about empirical systems. Regarding the general shape of the explanation that results, I am encouraged by the mathematical case and the analogy with nominalist structuralism. In both cases, in mathematics and model-based science, these problems are ones to address with positive accounts, rather than matters to pass over quietly.

References

Bokulich, A. (2011). "How Scientific Models Can Explain." Synthese 180: 33–45.
Burgess, J. P. (2005). "Review of Charles S. Chihara, A Structural Account of Mathematics." Philosophia Mathematica 3, no. 13: 78–113.

Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Oxford University Press.
Edgington, D. (2014). "Indicative Conditionals." In The Stanford Encyclopedia of Philosophy (Winter 2014 ed.), edited by Edward N. Zalta, https://plato.stanford.edu/archives/win2014/entries/conditionals.
Fine, A. (2009). "Science Fictions: Comment on Godfrey-Smith." Philosophical Studies 143: 117–125.
Føllesdal, D. (1986). "Essentialism and Reference." In The Philosophy of W. V. Quine, edited by L. Hahn and P. A. Schilpp, 97–113. Library of Living Philosophers vol. 18. Chicago: Open Court.
Frigg, R. (2010a). "Models and Fiction." Synthese 172: 251–268.
Frigg, R. (2010b). "Fiction in Science." In Fictions and Models: New Essays, edited by J. Woods, 247–287. Munich: Philosophia Verlag.
Frigg, R. (2010c). "Fiction and Scientific Representation." In Beyond Mimesis and Nominalism: Representation in Art and Science, edited by R. Frigg and M. Hunter, 97–138. Boston Studies in the Philosophy of Science. Berlin: Springer.
Giere, R. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Giere, R. (2008). "Why Scientific Models Should Not Be Regarded as Works of Fiction." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by M. Suárez, 248–258. London: Routledge.
Godfrey-Smith, P. (2006). "The Strategy of Model-Based Science." Biology and Philosophy 21: 725–740.
Godfrey-Smith, P. (2009). "Models and Fictions in Science." Philosophical Studies 143: 101–116.
Godfrey-Smith, P. (2014). Philosophy of Biology. Princeton, NJ: Princeton University Press.
Hájek, A. (n.d.). "Most Counterfactuals Are False." http://philrsss.anu.edu.au/people-defaults/alanh/papers/mcf.pdf. Accessed June 12, 2014.
Horsten, L. (2016). "Philosophy of Mathematics." In The Stanford Encyclopedia of Philosophy (Summer 2016 ed.), edited by Edward N. Zalta, http://plato.stanford.edu/archives/sum2016/entries/philosophy-mathematics.
Johnson, A., and Redish, A. D. (2007). "Neural Ensembles in CA3 Transiently Encode Paths Forward of the Animal at a Decision Point." Journal of Neuroscience 27: 12176–12189.
Katzav, J., and Parker, W. S. (2015). "The Future of Climate Modeling." Climatic Change 132: 475–487.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Levins, R. (1966). "The Strategy of Model-Building in Population Biology." American Scientist 54: 421–431.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172, no. 3: 781–798.
Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell.
Mackie, J. (1974). The Cement of the Universe: A Study of Causation. Oxford: Clarendon Press.
McMullin, E. (1968). "What Do Physical Models Tell Us?" In Proceedings of the Third International Congress for Logic, Methodology and Philosophy of Science, edited by B. van Rootselaar and J. Staal, 385–396. Amsterdam: North Holland.

McMullin, E. (1984). "A Case for Scientific Realism." In Scientific Realism, edited by J. Leplin, 8–40. Berkeley: University of California Press.
Millikan, R. M. (1984). Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.
Parker, W. S. (2014). "Simulation and Understanding in the Study of Weather and Climate." Perspectives on Science 22: 336–357.
Quine, W. V. O. (1960). Word and Object. Cambridge, MA: MIT Press.
Ramsey, F. (1931). "Truth and Probability." In The Foundations of Mathematics and Other Logical Essays, edited by R. B. Braithwaite, 156–198. London: Kegan, Paul, Trench, Trubner.
Suárez, M. (2009). "Scientific Fictions as Rules of Inference." In Fictions in Science: Philosophical Essays on Modeling and Idealization, edited by M. Suárez, 158–178. London: Routledge.
Suppes, P. (1960). "A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences." Synthese 12: 287–301.
Thomasson, A. (2009). "Answerable and Unanswerable Questions." In Metametaphysics: New Essays on the Foundations of Ontology, edited by D. Chalmers, D. Manley, and R. Wasserman, 444–471. Oxford: Oxford University Press.
Thompson-Jones, M. (2010). "Missing Systems and the Face-Value Practice." Synthese 172: 283.
Toon, A. (2012). Models as Make-Believe. London: Palgrave Macmillan.
Walton, K. (1990). Mimesis as Make-Believe: On the Foundations of the Representational Arts. Cambridge, MA: Harvard University Press.
Weisberg, M. (2007). "Who Is a Modeler?" British Journal for the Philosophy of Science 58: 207–233.
Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. Oxford: Oxford University Press.
Weisberg, M., and Reisman, K. (2008). "The Robust Volterra Principle." Philosophy of Science 75: 106–131.
Wigner, E. (1967). "The Unreasonable Effectiveness of Mathematics in the Natural Sciences." In Symmetries and Reflections: Scientific Essays, 222–237. Bloomington: Indiana University Press.
Williamson, T. (2016). "Modal Science." Canadian Journal of Philosophy 46, nos. 4–5: 453–492.
Williamson, T. (n.d.). "Objective Possibilities." Saul Kripke Lecture, CUNY Graduate Center Philosophy program, March 31, 2016.
Yablo, S. (2014). Aboutness. Princeton, NJ: Princeton University Press.

7  Imagining Mechanisms with Diagrams
Benjamin Sheredos and William Bechtel

7.1  Introduction

Discovery of mechanisms has figured prominently in accounts of mechanistic explanation (Bechtel 2006; Bechtel and Richardson [1993] 2010; Craver and Darden 2013). Much of the discussion has focused on the experimental procedures researchers use to delineate the phenomenon to be explained and to characterize the parts and operations included in a mechanism. Craver and Darden (2013) emphasize a variety of evidential constraints (many generated through experiments of various types) concerning the location and structure of a mechanism's components, the abilities of the components and the activities in which they participate, temporal features of activities, and the need to maintain productive continuity between activities. They also identify a variety of inference strategies such as employing a schema type, invoking an analogy, or forward/backward chaining. But behind all this, there is a further critical aspect of the discovery process—the activities of the scientists in putting the pieces together into a mechanistic hypothesis. As important as the constraints and strategies are, they typically do not completely dictate the design of the mechanism. Crucially, it is the scientists who impose the constraints, and the constraints are imposed upon their own understanding of the mechanism, embodied in what Craver and Darden call mechanism schemas (2013, ch. 7).

A robust philosophy of mechanistic science should not only assume that scientists fill this lacuna between available constraints (often derived from experiments) and proposals of mechanisms but also offer an account of the cognitive activity involved in constructing mechanistic proposals. This activity produces hypotheses of possible mechanisms. Craver and Darden make much of the distinction between how-possibly and how-actually accounts of mechanisms. In their treatment, how-possibly accounts are regarded as valuable to the extent that they serve as heuristics and facilitate the

Imagining Mechanisms with Diagrams  179 design of experiments, which eventually enable the development of how-​ actually accounts. Our focus lies elsewhere: in understanding the success involved in attaining how-​possibly accounts, without treating them simply as means to the ultimate end of how-​actually accounts. Often, scientists advance how-​possibly explanations before they are in any position to evaluate experimentally what is actual. Here we provide an account of this epistemic activity. Its central features are that it often involves visualization, it is creative in going beyond the given evidence and existing accounts, it is fictive in the sense of not entailing a commitment at the outset to the actuality of the results, and it allows for constrained flexibility in generating a design. In light of these features, we say that such reasoning processes are imaginative. Although researchers might aspire to an account of the actual mechanism responsible for a phenomenon, a fundamental part of their reasoning involves the imaginative generation of possible mechanisms. Imagining a possible mechanism that coheres with available evidence and is hypothetically capable of producing an explanandum phenomenon is a kind of success in scientific reasoning. We call this imaginative success. It is another step, subject to distinct norms of success, to evaluate whether the envisioned mechanism is “actually” the one responsible. A failure to find “the actual mechanism” is not a failure tout court. Here we are mainly interested in understanding imaginative success. If one views imagination as involving private “flights of fancy” operating exclusively in the heads of scientists and not directly accessible by others, then one might dismiss as impossible any investigation into scientific imagination. This is not our view. External representations (sometimes words, but typically diagrams) provide public expressions of imaginative reasoning. Even when all we have are published diagrams, we can view them as traces of the imaginative processes the researchers went through in developing hypothetical mechanistic explanations. But, more strongly, imaginative reasoning is often performed interactively with external representations (cf. Kirsh and Maglio 1994). Rather than trying to keep an internal visualization “in the mind’s eye,” scientists frequently report drawing diagrams of hypothetical mechanisms as they are reasoning, and not as a post-​hoc expression. In many cases, the draft diagrams are lost from the record, but they are sometimes available, and provide evidence of intermediate steps in scientists’ imaginative reasoning (cf. Burnston et al. 2014). Regarding the design of external representations as often integral to scientists’ imaginative success in developing a mechanistic explanation, we treat diagrams as entrees into their imaginative reasoning.

180  The Scientific Imagination We begin in section 7.2 by articulating our working conception of imagination. A key aspect of the diagrams that provide our window into the activity of imagination is that they are visual representations that use space. Sometimes the space in the diagram represents physical space, but often it does not, and we will discuss the different spaces used in imagining mechanisms. To make our discussion of the roles of imagination in the design of mechanisms more concrete, in the remainder of the chapter we examine the role imagination has played in developing mechanistic accounts in the scientific field concerned with the generation of circadian rhythms in cyanobacteria. In section 7.3 we focus on imaginative mechanism design, involving prototypical diagrams in which the organization of the components is presented. In section 7.4 we focus on how researchers imagine these mechanisms operating to generate the phenomenon. In the simplest case this involves mental simulation, but often the designs biologists have constructed are too complex (involving non-​sequential execution of non-​linear operations) to be simulated mentally. In such circumstances, researchers turn to simulations using computational models to determine how the mechanism will behave and sometimes to understand why it produces the phenomenon (Bechtel and Abrahamsen 2010). In constructing simulations, researchers often begin with a mechanism diagram, identifying the properties of parts as variables and of operations as parameters, and use this to guide the construction of mathematical equations (Jones and Wolkenhauer 2012). We consider this one of the clearest cases in which imaginative success involves the use of external visualizations.
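As a preview of that last point, the step from a mechanism diagram to a simulation can be made concrete with a deliberately generic sketch. What follows is our own illustrative example, not a model drawn from the circadian literature discussed below: a three-variable negative feedback loop of the Goodwin type, in which each box in a diagram becomes a variable, each arrow becomes a rate term, and every parameter value is hypothetical.

# Illustrative sketch only: a generic Goodwin-type negative feedback loop
# (gene -> mRNA -> protein -> repressor, with the repressor inhibiting
# transcription). Variables stand for the diagram's boxes, rate terms for
# its arrows; all parameter values here are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def feedback_loop(t, state, k=1.0, d=0.1, K=1.0, n=10):
    m, p, r = state                         # mRNA, protein, active repressor
    dm = k * K**n / (K**n + r**n) - d * m   # transcription, repressed by r
    dp = k * m - d * p                      # translation
    dr = k * p - d * r                      # conversion to the repressing form
    return [dm, dp, dr]

# Integrate the imagined setup forward and check whether it oscillates.
sol = solve_ivp(feedback_loop, (0, 500), [0.1, 0.1, 0.1], max_step=0.5)
peaks, _ = find_peaks(sol.y[0])
if len(peaks) > 1:
    print("approximate period:", np.mean(np.diff(sol.t[peaks])))
else:
    print("no sustained oscillation found for these parameters")

Classical analyses of loops of this form indicate that sustained oscillation requires sufficiently steep repression (a high value of n when the degradation rates are equal); running the sketch simply makes visible whether the imagined organization can, under the stated assumptions, produce the rhythmic behavior that is the explanandum.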

7.2  A Working Characterization of Scientific Imagination We characterize the scientific reasoning involved in positing hypothetical mechanisms (or in creating how-​possibly models) as imaginative. In saying this, we mean to communicate that such reasoning exhibits features that are paradigmatically associated with “imagination” as it is understood by the folk, by philosophers, and in science. A “definition” of imagination is unlikely to be forthcoming (cf. Thomas 1999), and we do not propose that what we call “imaginative reasoning” has all the features one might attribute to some variety of imagination or other. But by most accounts, something will count as worthy of the title “imagination” if it has the following four features.

Imagining Mechanisms with Diagrams  181 First, as the name itself suggests, imagination has historically been regarded as imagistic in that it involves sensory representations of objects in some modality or other. In this it is like perception, and perhaps unlike abstract thought.1 Second, it is also paradigmatically fictive, in that imagined objects are not presumed to be actual. Although what one imagines may turn out to be the case, in the first instance an imaginative act is not itself an act that necessarily carries any ontic import. In this it is unlike belief about the way the world is. Third, imagination is paradigmatically creative. In memory, one simply “calls to mind” something one has experienced before. One may not even seek to do so; instead a memory may simply “pop up.” In contrast, in imagination one paradigmatically tries to envision something new. Fourth, it is paradigmatically freely variable within some range of freedom. One cannot help but remember (or fail to remember) what has occurred in the past as one (perhaps mistakenly) remembers it. In contrast, in imagination one can paradigmatically vary features of the envisioned scene at will. This does not mean that every feature is freely variable, however. Some basic constraints inform all imagining (one cannot imagine a visible surface that has no apparent color). And some forms of imagining, as we shall see, are even more determinately constrained. Later we will demonstrate that what we call imaginative reasoning has all of these intuitive features, and this is what we have in mind by calling it imaginative. We presume that the foregoing features are components of the folk view.2 We do not, however, intend to be offering a kind of “ordinary-​language argument” regarding scientific reasoning. The features reviewed above are also common in philosophical views of imagination, even if not all of them are always centrally present. Thus Aristotle’s De Anima clearly upholds the imagistic and fictive character of imagination. Kant’s first Critique contains a conception of (productive) imagination as having all four features. Husserl’s phenomenology wields imagination as part of its methodology to study consciousness, emphasizing the last three features. Sartre’s early work challenges Husserl on the imagistic character of imagination (though his view is not inconsistent with our gloss); he uses the term more widely than others, but clearly holds that it is in all cases fictive and, in paradigmatic cases at 1 We are thus not pursuing a conception of “imagination” in terms of pretense, perspective-​taking, or what is sometimes called “recreative imagination” (Curie and Ravenscroft 2002; Liao and Gendler 2010). We fully acknowledge that this is one sense of “imagination” and a worthy topic of study; we only ask proponents of it to extend us the same semantic courtesy. 2 We grant, of course, that the folk conception may cover a surplus of components beyond this. See note 1.

182  The Scientific Imagination least, creative and freely variable. Strawson (1970) identifies all four features, lumping creativity and free variability together. Similarly, all four features are countenanced in scientific research on imagination. In scientific research, folk categories are often split into distinct targets of investigation. This is especially so in the case of folk psychological categories such as “memory” (cf. Bechtel 2007), and the case is the same with “imagination.” Thus Boden’s (2004) analysis of creative reasoning treats imagination mainly with regard to our features of creativity and free variability but is not centrally concerned with other features. Byrne (2005) distinguishes such “creative imagining” from “everyday imagining,” regarding the latter as centrally involving what we have called the fictive character of imagination; she refers to this as “counterfactual imagination.” Some researchers hold that any theory of mental imagery ought to provide a theory of imagination, upholding our claim that imagination is often regarded as imagistic (cf. Thomas 1999). We are not denying that there is significant debate about the nature of imagination (in general or in any of its more specific varieties), or claiming that there is a universally accepted taxonomy of forms of imagination, or claiming that it is widely held that all forms of imagination must involve all four of the features we have named. We acknowledge that other things one might wish to call “imagination” might involve features we have not listed here. For our purposes, what matters is only that if a reasoning process does involve all four of these features, it can aptly be called imaginative. This is what we seek to show regarding scientific reasoning involving diagrams as they posit hypothetical mechanisms. Insofar as we are appealing to diagrams in understanding scientific imagination, we are concerned specifically with visual representations. A diagram typically involves glyphs (shapes, arrows) situated in a space (Tversky 2011). Neither the glyphs nor the space needs to resemble what is being represented—​resemblance is not required by our construal of imagination as imagistic. Although sometimes an iconographic shape is used to represent a particular object, at least as often an arbitrary shape (e.g., an oval) is used. The lack of resemblance is even clearer in the case of space. Sometimes the two dimensions of a visual representation do correspond to physical space, as in maps that respect a “natural mapping” to worldly space. A visual representation of an ecosystem may show where different resources such as water are located and how different organisms are distributed through the space of the environment. But in reasoning about mechanisms, researchers often find

Imagining Mechanisms with Diagrams  183 it useful to employ abstract “spaces” whose dimensions are not anchored to the space of the physical world. A graph can be used, for example, to represent a range of values that can be assigned to mathematical variables in scientific accounts. A “location” in such an abstract space simply denotes that a represented object has certain quantitative properties within some range of possible quantities. Such an abstract form of space is employed in the state-​ space plots we discuss later. In other cases, especially the mechanism diagrams we will be discussing, the spatial dimensions of ink on a page are not to be interpreted as depicting any objective space, or even as depicting an abstract quantitative space. Rather, space on the page is used merely to situate glyphs representing distinct components of a mechanism, and arrows are used to indicate functional connections between them.

7.3  Imagining a Mechanism In this section we will examine how diagrams figure in the construction of mechanistic hypotheses. The dominant twentieth-​century accounts of explanations treated explanation as involving subsumption of a phenomenon under laws, but laws don’t figure prominently in many domains of biology. Rather, biologists offer accounts of mechanisms when they attempt to explain phenomena. In recent decades, several philosophers of science (Bechtel and Abrahamsen 2005; Bechtel and Richardson [1993] 2010; Machamer et al. 2000) have characterized mechanisms as consisting of parts performing operations organized to generate the phenomenon under appropriate conditions, and have characterized many of the strategies through which biologists search for the parts and operations of a mechanism. Once researchers think they have characterized parts and operations, their challenge is to figure out how they are organized so as to generate the phenomenon. Here diagrams play a crucial role as external aids that enable researchers to imagine a mechanism by representing entities and relating these representations with arrows or other symbols whenever the operation of one is thought to produce or affect another. To illustrate the role of diagrams in reasoning about mechanisms, we turn to research on the mechanism responsible for circadian rhythms in cyanobacteria. Circadian rhythms are endogenously generated oscillations that recur with a period of approximately twenty-​four hours in the physiology or behavior of an organism, and which can be entrained to the light/​dark

184  The Scientific Imagination cycle of the local environment. Often the basic circadian oscillations involve concentrations of molecules, but these then regulate a host of other biological functions by determining the time for the expression of relevant genes. The occurrence of such rhythms was only demonstrated in cyanobacteria in the early 1990s, opening the way for inquiry into the responsible mechanism. Through screens of mutant bacteria with altered rhythms, Ishiura and colleagues identified three genes that seemed to figure in the generation of these rhythms—​kaiA, kaiB, and kaiC. By that time, researchers investigating circadian rhythms in eukaryotic cells (especially in fruit flies and mice) had proposed that the core mechanism involved a transcription-​translation feedback loop (TTFL) in which the proteins produced from a gene feed back to inhibit their own expression. Oscillation would result from the fact that when the protein is in low concentration, its synthesis is enhanced, increasing its concentration, but with increased concentration, synthesis is inhibited, causing its concentration to drop. Ishiura et al. summarize the evidence that led them to propose a similar mechanism for the prokaryotic cyanobacteria Synechococcus elongatus: We suggest a feedback-​ loop model for the circadian oscillator of Synechococcus. The following four sets of data—​(i) mapping of various clock mutations to the kai cluster, (ii) rhythmicity in the expression of the kai genes, (iii) alteration of the rhythmicity of kai expression by the mutations mapped to the kai cluster, and (iv) elimination of rhythms caused by inactivation or overexpression of each kai gene—​all support a model in which the kai genes are essential to the circadian clock, and the feedback regulation of the expression of the kai genes by their gene products generates the circadian oscillation in cyanobacteria. (Ishiura et al. 1998, 1521)

The passage references the diagram reproduced in Figure 7.1, which fleshes out how the researchers imagine the mechanism. We suspect that for many readers, it will be apparent how useful this diagram is for thinking about the hypothetical mechanism, and what aid it provides beyond the linguistic description given in the quoted passage. In this diagram the rectangle on the right represents the stretch of DNA where the kai genes reside, while the proteins are shown in the box on the left. Arrows running right to left from the genes are labeled “transcription” and “translation,” and feedback loops, excitatory and inhibitory, are shown running left to right from the protein box to the DNA.

[Figure 7.1 appears here: a box-and-arrow diagram in which the kai gene cluster (kaiA, kaiB, kaiC, with promoters PkaiA and PkaiBC) is linked by transcription and translation arrows to the KaiA, KaiB, and KaiC proteins; hypothesized interactions, unidentified components (X, Y, α, β), and a process for time delay are marked with question marks; and clock output leads to clock-controlled genes and circadian rhythms.]
Figure 7.1  Ishiura et al.’s proposal of a TTFL mechanism for generating circadian rhythms in cyanobacteria based on analogy with the mechanisms that had been identified in eukaryotic cells. In their caption, the authors comment: “Hatched box at left represents an unknown part of the feedback loop. X and Y are unidentified clock components. α and β are unidentified DNA binding proteins.” Note also the use of question marks to underscore this point. From Ishiura, M., Kutsuna, S., Aoki, S., Iwasaki, H., Andersson, C. R., Tanabe, A., Golden, S. S., Johnson, C. H., & Kondo, T. (1998). Expression of a gene cluster kaiABC as a circadian feedback process in Cyanobacteria. Science, 281, 1519–​1523, Figure 5. Reprinted with permission from AAAS.

With this example, we can provide a preliminary illustration of how each of the four features of imagination are present, and why the scientific reasoning involved in this diagram (both in crafting the diagram to convey a hypothetical mechanism and in reading the diagram to understand that mechanism) is in our sense imaginative. • The reasoning is imagistic insofar as it relies upon the drawn diagram. In this case, the diagram’s own spatiality (space on the page) is not systematically utilized to convey claims about worldly spatiality. For example, the

186  The Scientific Imagination shaded box at the left is not meant to depict that the Kai proteins reside in some intracellular compartment (there are no discrete intracellular compartments in prokaryotes). Rather, it is being used to make a categorical claim:  everything represented within the shaded box, including by the text, is regarded as poorly understood, and is set as a target of future investigation. Here the caption helps to specify the represented content, telling the reader how the image is to be understood. But often, ambiguities are left in place. For example, the diagram is not representing that the kai gene cluster is located “to the right” of the proteins. Likewise, the glyphs used to represent entities are abstract, and their shapes do not represent the shapes of those entities. Novices often misunderstand this (incorrectly inferring the location of the gene cluster or the shapes of entities) because reading the diagram does involve imagistic reasoning. These errors arise when one misunderstands how imagistic reasoning is to be deployed. A caption can clarify how to reason imagistically using the diagram, but the use of the diagram demands that one must reason imagistically to understand the proposal. Moreover, as we have already suggested, the creation of the diagram surpasses mere verbal presentation in framing the hypothesis. • It is fictive in that the authors are imagining (and inviting readers to imagine) how a mechanism might work in cyanobacteria, not simply reporting on results that have been established. While some of the parts and their relations are known, putting them together to form a mechanism that is capable of producing the phenomenon required a fair bit of speculation. The researchers are not asserting that the mechanism does in fact work this way, but rather showing how it would be functionally organized if it did. • A great deal of creativity is involved insofar as there was, at the time of publication, no solid evidence that any prokaryote has a TTFL mechanism of the sort that had been discovered in eukaryotic cells. The researchers employ this diagram to formulate a novel, testable hypothesis. Moreover, creativity is involved in choosing arbitrary shapes and locations to convey the proposal and in selectively foregrounding certain information and backgrounding other information (e.g., location, shape). Finally, this diagram is inspired by similar box-​and-​arrow diagrams showing the organization of eukaryotic TTFLs, but the researchers must make creative alterations to fit the known data about cyanobacteria, reconceive the functional organization with new parts and players, and hypothesize unknown relations between them.

• To illustrate the constrained free variability involved here, we highlight the use of question marks in this figure. These indicate uncertainty about the details of the operations being proposed. Why, then, posit those details at all? In most cases, they are posited because if the TTFL model is to be applied to cyanobacteria, then something must play the assigned roles: there must be some process of time delay, some additional clock components, and some DNA binding elements. The authors are constrained to posit such entities and activities in order to illustrate their creative hypothesis at all. By marking these posits with question marks, the authors indicate precisely where free variability is permitted in imagining a mechanism that fits this model. Where free variability is permitted in thinking of a mechanism, the mechanism's parts and operations remain unknown, and thus serve as targets for future investigation. Of special note is the question mark after the word "interactions," as the relationships between the Kai proteins became the focus of research in the subsequent fifteen years.

Figure 7.1 represents the original proposal to apply the TTFL model to cyanobacteria. But the imaginative character of scientists' reasoning is not limited to their initial formulations of the hypothesis. It is interesting to compare this figure with Figure 7.2, published just two years later by two of the same authors, Iwasaki and Kondo (2000). Many newly discovered parts (proteins) are shown in this diagram, such as CikA and SasA, which became focal objects of subsequent research. The question marks involving the interaction of the Kai proteins are gone. This reflects the growth of evidence during this period that KaiC binds ATP and autophosphorylates. KaiA was determined to increase the rate of KaiC autophosphorylation by binding to KaiC, whereas when KaiB binds KaiC, it was found to reduce the rate of phosphorylation and, if KaiC was already phosphorylated, to facilitate dephosphorylation. This cycle of phosphorylation and dephosphorylation of KaiC itself takes twenty-four hours, leading Ditty et al. (2003, 524) to conclude that these post-translational activities "are central to the timekeeping ability of the Kai oscillator." What is important for our purposes here is the manner in which this diagram rules out whole domains of free variability in thinking about how the hypothetical mechanism might operate.3 Figure 7.1 permits a viewer

3 We cannot discuss it here, but the manner in which the "growth of evidence" enables researchers to import new constraints into their understanding of the mechanism relies on further details of how scientists reason using diagrams. Prototypically, a variety of data graphics in many formats must be simultaneously deployed to work out the constraints before these can be used to construct a new mechanism diagram. Compare the role of data graphics in computational modeling, discussed later in the chapter.

[Figure 7.2 appears here: an expanded box-and-arrow diagram of the cyanobacterial clock, in which the kaiA, kaiB, and kaiC genes (with promoters PkaiA and PkaiBC) and the KaiA, KaiB, and KaiC proteins are connected by transcription, translation, and post-translational control (protein-protein interaction, phosphorylation, etc.); input components (light, a photosystem, CikA, CpmA, Pex) and output components (SasA, a response regulator, RpoD2, overt rhythms) surround the core loop, with possible feedback to inputs.]
Figure 7.2  Iwasaki and Kondo’s representation of the cyanobacterial clock mechanism. See text for details. Reprinted from Iwasaki, H., & Kondo, T. The current state and problems of circadian clock studies in cyanobacteria. Plant Cell Physiology, 2000, vol. 41, 1013–​1020, Figure 1 by permission of Oxford University Press.

to wonder whether the Kai proteins interact; Figure 7.2 constrains a viewer to imagine that they do. Simultaneously, however, whole new domains of free variability are opened up in Figure 7.2: the viewer is permitted to imagine any number of mechanisms whereby clock output might provide feedback to input mechanisms. While imagining a mechanism of cyanobacterial rhythmicity had become more constrained, it remained imaginative. This is clear if we attend to an important reorientation of later research.

The focus on the phosphorylation and dephosphorylation of KaiC and its interactions with KaiA and KaiB became even more central as a result of two papers from the Kondo group in 2005. In the first, Tomita et al. (2005) demonstrated sustained circadian rhythms when bacteria were maintained in darkness, or in the presence of transcription or translation inhibitors—that is, contexts in which no protein synthesis occurs. In the second, Nakajima et al. (2005) reported circadian rhythms in a preparation containing only

Imagining Mechanisms with Diagrams  189 the three Kai proteins and ATP. These studies compellingly demonstrated the sufficiency of operations involving the proteins to sustain circadian rhythms:  the TTFL model, previously borrowed from eukaryotic systems and applied to cyanobacteria, appeared to be false. Neither transcription nor translation was necessary for circadian rhythms in cyanobacteria. Concerted focus was now directed at the post-​translational processes by which the Kai proteins formed complexes and how KaiC was phosphorylated and dephosphorylated over a twenty-​four-​hour period. In short, the hypothetical mechanisms posited in Figures 7.1 and 7.2 had been shown not to be the actual mechanisms centrally responsible for circadian rhythmicity in cyanobacteria. This underscores the fictive aspects of Figures 7.1 and 7.2. Despite the fact that these how-​possibly models did not pan out, a great deal of scientific reasoning was involved not only in initially proposing the TTFL model for cyanobacteria but also in incorporating into that model parts and operations suggested by new evidence, as well as in repeatedly wielding the model to identify new targets of research. In our presentation, one can think of research as proceeding by relentlessly closing off domains of free variability and opening up others. We regard this as a notable form of ongoing success in scientific research: it is no simple feat to take a mechanistic model built for one class of organisms, apply it wholesale to another, and provide an articulate depiction of how the resulting hypothetical mechanism could actually be constituted so as to produce the target phenomenon. Likewise, it is no simple feat to adapt such a model in the face of new data. While those initial models proved to be factually inaccurate, we regard the researchers as having attained a kind of imaginative success simply by constructing the diagrams in Figures 7.1 and 7.2. The success consists in integrating known data regarding cyanobacteria, fitting these into a generalized hypothesis regarding TTFLs as the mechanisms of circadian rhythms, generating a new, specific model of how such a mechanism could work in this case, and identifying the gaps in this new model as a way of driving research forward. This kind of success is common in scientific research, wherein piecemeal discoveries are synthesized into cohesive, mechanistic hypotheses that have only provisional standing as how-​possibly models. The success we are highlighting does not consist in reaching an endpoint of scientific explanation, at which scientific reasoning about this topic can cease and be diverted elsewhere. The success consists rather in advancing beyond limited data, instead of stagnating, by sketching out a model that both suggests unidentified parts and operations as targets of future research and suggests new ways

190  The Scientific Imagination of experimenting on known parts and operations to determine whether the model fits. In attaining imaginative success, scientists are actively succeeding in their ongoing endeavors. Such imaginative success is presupposed in any case where a mechanistic hypothesis is put forth and subsequently thought to be correct—​any time a how-​possibly model gets polemically upgraded to the status of a how-​actually model, and scientific reasoning is retroactively reified as a finished success. Turning the focus to the Kai proteins generated a new set of challenges for research on the cyanobacterial clock: TTFL mechanisms had been sufficiently studied, including with computational models, that researchers felt they understood how they could generate sustained circadian oscillations. They lacked such understanding for how phosphorylation and dephosphorylation of KaiC could generate sustained oscillations. As a result, the question of how the central clock in cyanobacteria could function was essentially a wide-​open domain of relatively free variability. Two basic constraints that all researchers came to accept were that phosphorylation could occur at two loci on KaiC—​serine residue 431 (S) and threonine 432 (T)—​and it would take twenty-​four hours to complete the cycle of phosphorylation and dephosphorylation. The problem, as illustrated in Figure 7.3b, is that, as is typical of biochemical reactions, each of these steps is in principle reversible. This suggested an additional constraint: without something driving the mechanism to carry out only the clockwise sequence, the mechanism would not oscillate but settle into a steady state. The other diagrams in Figure 7.3 originated with two research groups that proposed different mechanisms that could meet these constraints. As a result of their electron micrograph studies showing the differential binding of KaiC with KaiA and KaiB at different times of day, Mori et al. (2007) advanced the hypothesis represented in Figure 7.3a. Lavender arrows show the progression of KaiC through its phosphorylation cycle. A key aspect of their proposal, symbolized by the asterisk by KaiC at the bottom and on the left of the figure, is that as a result of phosphorylation and binding with KaiB, KaiC changes its conformation to a form that inhibits phosphorylation and promotes dephosphorylation. This conformational change was hypothesized to help drive the phosphorylation cycle in its observed progression. There was little evidence for this imaginative proposal, but Mori et al. included a computational model to show that such a mechanism could generate sustained oscillations. (There was more evidence for the operations, shown in the center of the figure, of monomer exchange, in which monomers from

[Figure 7.3 appears here, in three panels: (a) Mori et al.'s proposed cycle, in which KaiA promotes phosphorylation of KaiC until it is hyperphosphorylated, binding of KaiB and a conformational change to KaiC* lead to dephosphorylation and a return to the original conformation, with monomer exchange between hexamers; (b) the phosphorylation and dephosphorylation steps of KaiC among its U, T, ST, and S phosphoforms; (c) the same steps with KaiA and KaiB added.]
Figure 7.3  Early proposals for a mechanism producing oscillations through phosphorylation and dephosphorylation of KaiC. Mori et al.’s proposal involving a conformation change from KaiC to KaiC*. Reprinted from Mori, T., Williams, D. R., Byrne, M. O., Qin, X., Egli, M., McHaourab, H. S., Stewart, P. L., & Johnson, C. H. (2007). Elucidating the Ticking of an In Vitro Circadian Clockwork. PLoS Biology, 5, e93 under Creative Commons Attribution (CC BY) license. Rust et al.’s (2007) proposal based on phosphorylation at different sites on KaiC. From Rust, M. J., Markson, J. S., Lane, W. S., Fisher, D. S., & O'Shea, E. K. (2007). Ordered phosphorylation governs oscillation of a three-​protein circadian clock. Science, 318, 809–​812, Figures 2c and 4a. Reprinted with permission from AAAS.

different hexamers exchange, enabling different hexamers to synchronize with each other.) Figures 7.3b and 7.3c, from Rust et al. (2007), reflect these authors' discovery that there is a specific sequence of phosphorylation and dephosphorylation at the S and T loci—T is the first site phosphorylated, followed by S, and T is the first site dephosphorylated, followed by S. As a result, given the current phosphorylation state of KaiC, there is no ambiguity as to what stage of the phosphorylation rhythm KaiC is in. This allowed researchers to close off one domain of free variability, imposing a clear constraint on how to think of the cyclical progression of KaiC's phosphorylation rhythm. Beginning with the representation in Figure 7.3b, in Figure 7.3c Rust et al. inserted representations of KaiA and KaiB, using arrows to represent their hypothesis: until KaiB binds to KaiC in the S-phosphoform, KaiA drives the system toward phosphorylation, but once enough of a concentration of the S-phosphoform emerges, KaiB interacts to repress KaiA, and these interactions between the Kai proteins drive the one-way progression of the phosphorylation rhythm. Like Mori et al., the authors devised a computational model, showing that their mechanism not only could produce sustained oscillations but also replicated precise quantitative dynamics of the abundance of KaiC phosphoforms exhibited by an in vitro oscillator. Given that Rust et al. started with the data about the order of phosphorylation at the S and T sites, and with data about all the entities involved, one may question whether their hypothesis counts as imaginative. In particular, one may question whether it is fictive. What is fictive in this case is not the order of phosphorylation and dephosphorylation at the two loci, or the roles of KaiA and KaiC in affecting KaiC's phosphorylation. Rather, the hypothesis that these activities, working in concert, sufficed to generate circadian (~24h) oscillations is fictive. Here the fictive character of scientists' reasoning concerns not the structure of the mechanism but its precise dynamics. As we shall discuss further, the researchers relied heavily on imaginative reasoning to develop a computational model to show that the set of operations identified would suffice to account for the precise quantitative dynamics of circadian oscillations observed in vitro. The fact that the proposal was soon widely embraced does not detract from its creativity. Moreover, the proposal left many domains of free variability, especially concerning the exact time course of each operation. These are still being pursued, in part by detailed submolecular inquiries. None of these issues is taken up by Mori et al. or

by Rust et al. That research can proceed apace without settling such questions does not mean they have been adequately settled. Likewise, that a fictive inference strategy commands widespread consensus as a heuristic does not make it any less fictive. What Rust et al.'s case illustrates is that the norms of imaginative success may be so widely shared within a field that they go without much explicit mention at all. It goes without saying that more work must be done to determine whether the hypothetical mechanism is actually present within a cyanobacterium. So far we have focused on published diagrams that reflect how the authors of the papers imagined the interaction of operations constituting the mechanism when they published the paper. We conclude this section with an example in which we had access to drafts of diagrams the researchers generated in the course of formulating their proposed mechanism. With attention to this case, we show that the researchers imaginatively explored diagrammatic possibilities in a self-conscious attempt to attain the type of imaginative success we have characterized in discussing published diagrams. With the discovery that the Kai proteins alone could oscillate in vitro, many researchers temporarily restricted their investigations, operating on the fictive assumption that an understanding of the in vitro oscillator would carry over, somehow, to the in vivo case. When it came time to resituate the Kai oscillator in the context of a living cell, a pressing question was how to understand the link between KaiC's phosphorylation rhythms and clock output. In cyanobacteria, the clock regulates transcription of the entire genome, with one class of genes achieving maximal expression at dusk and another at dawn. As the activity at the promoters upstream of genes governs their expression, researchers accordingly differentiated "class I" and "class II" promoters. Paddock et al. (2013) investigated which phosphoform of KaiC affected gene transcription. They advanced evidence that the S phosphoform is involved in both the inhibition of class I promoters (such as the one governing the kaiB and kaiC genes) and the activation of class II promoters. Notably, Paddock et al. could neither cite nor offer clear data regarding how the S phosphoform has this downstream effect. The most direct relationship would be for the S phosphoform to simply bind DNA and regulate gene expression, but there was (and remains) little evidence to support this view, since KaiC does not possess any known DNA binding domain. Paddock et al. ingeniously devised a measure of "oscillator output activity" (OOA) that circumvented these questions. The measure takes the expression

of class I and class II genes (measured by bioluminescence) that are observed in cyanobacteria containing a mimetic of one of KaiC's phosphoforms, and subtracts from this the expression observed in a total kaiC-knockout (lacking all phosphoforms). The result is a set of measurements showing how much the expression of each class of genes can be ultimately attributed to each KaiC phosphoform. Without knowing the mechanism of how the S phosphoform regulates clock output, Paddock et al. were able to demonstrate convincingly that it did, and that, in their preparation in which native KaiC was knocked out, no other phosphoform did. A whole domain of free variability regarding the previously known effects of KaiC in inhibiting class I and activating class II promoters could thus be closed off, as these were now shown to be the effects of the S phosphoform. Yet simultaneously, the details of this newly identified S phosphoform output pathway remained a domain of free variability. As they were working out their account of the S phosphoform output pathway, the authors produced numerous diagrams that appeared in different drafts of the paper from January to April 2013. Three of these are shown in Figure 7.4. Figure 7.4a appeared in the January drafts of the paper when the authors focused just on the effects of the S-phosphoform in inhibiting the class I and activating the class II promoters. (In these figures the S phosphoform is labeled "KaiC-pST," meaning that while the serine site is phosphorylated [hence the "p" preceding the "S"], the threonine site is not [hence, no "p" preceding the "T"]). Subsequently, the authors expanded the focus of their paper to consider the relation between their newly discovered output pathway from the S-phosphoform and a previously identified output pathway involving SasA and RpaA (shown in Figure 7.2, where RpaA is simply designated "response regulator"). Figure 7.4b appeared in drafts in March and early April. Note the bifurcated use of space. On the left side, space on the page is not to be interpreted as systematically depicting any worldly space: the S phosphoform, shown at the bottom, is not to be regarded as "below" the T phosphoform, shown at the top. Despite that, the proximity of the glyphs for each of the Kai proteins is meant to convey spatial proximity and entanglement (binding) of the represented proteins: KaiA and KaiB are shown bound to the S phosphoform of KaiC. Meanwhile, on the right side of the panel, there is a mock-up of a quantitative graph showing observed oscillations. To a researcher of circadian rhythms, this simple waveform is iconic, and would be taken immediately to represent peaks and troughs of gene expression over several days.

Figure 7.4  Three figures developed by Paddock et al. in which they explore possible ways of relating the S-phosphoform and RpaA to the two classes of gene promoters in cyanobacteria. Only the last appeared in the published paper. Panel (c) reprinted from Paddock, M. L., Boyd, J. S., Adin, D. M., & Golden, S. S. (2013). Active output state of the Synechococcus Kai circadian oscillator. Proceedings of the National Academy of Sciences, 110, E3849–E3857, Figures 7a and 7b, with permission of the National Academy of Sciences.

196  The Scientific Imagination And yet, since no axes are specified, this “abstract” space cannot be systematically interpreted and assigned any clear quantitative values. Running from left to right across the panel are several lines. The solid arrows, running from the S phosphoform to peaks on the pseudo-​graph, indicate that output from KaiC is at its peak when the abundance of the S phosphoform is at its peak. The dotted lines, running from the other phosphoforms to the trough on the pseudo-​graph, indicate that output from KaiC when the S phosphoform is not abundant remains at the same low level as in the KaiC knockout.4 The challenge was to imagine how the newly identified output pathway involving the S phosphoform interacts with the previously known output pathway involving SasA and RpaA. Both SasA and RpaA exhibit circadian cycles in their own phosphorylation (they are considered active when phosphorylated); hence RpaA is shown with a circle. The relation between KaiC oscillation and RpaA oscillation is purposely left vague, since the relative timing of these oscillations is not known. In their experimental work, Paddock et al. had found that RpaA knockouts exhibited the opposite effect on OOA as KaiC knockouts. Hence, in these diagrams, they show RpaA inhibiting the effect of the KaiC phosphoforms. Figure  7.4b only shows RpaA opposing the output from the S-​ phospho­form, not the specifics of what it does. The authors flesh out their idea in April when they produced a first draft of the diagram in Figure 7.4c, which eventually appeared in the published paper. They now hypothesize that in the case of class I promoters, activated RpaA both inhibits the inhibitory pathway from the S-​phosphoform and directly activates the promoter. In the case of class  II promoters RpaA is presented as simply inhibiting the activating role of the S-​phosphoform. Even this figure is challenging to understand, since an important factor is that phosphorylation of RpaA peaks out of phase with the S phosphoform, so that when phosphorylated, RpaA is inhibiting already reduced output from the S phosphoform, and when the S phosphoform is most affecting the output, RpaA is relatively unphosphorylated and exerting minimal effect. For our purposes here, what is striking is the diversity of diagrammatic formats that the authors constructed in the attempt to articulate and communicate their hypothetical mechanism, and the amount of work that went into devising a diagram that was adequate to this task. We have provided here only a small sampling of the formats the authors drafted. Even with this 4 The measure of oscillator output activity does not distinguish between the S phosphoform’s role in inhibiting Class I genes and activating Class II genes: both of these are simply “output activity.”

Imagining Mechanisms with Diagrams  197 small sample, we can see the kinds of difficulties that make imaginative success so highly prized. Figure 7.4a draws upon previously established conventions for depicting the KaiC oscillator (compare Figure 7.3a). To these it adds a concise addendum to indicate which phosphoform of KaiC is critical to output. Unfortunately, it does not include information about RpaA’s role in output. In this regard, Figure 7.4a was regarded by its designers as what we would call a (partial) imaginative failure: it fails to put forth a hypothesis that incorporates all known constraints, and it fails to integrate all parts of the mechanism that are known to be relevant into a cohesive proposal. There is still imaginative success in this case, but it is limited. A  how-​possibly mechanism is imagined that can direct research forward by identifying targets of inquiry. However, it fails to integrate all known data, and thus it is known in advance that the hypothesis is incomplete and selective, and that it likely will not provide an understanding of the whole mechanism. To overcome this limitation, Figure 7.4b attempts to incorporate RpaA’s role in a conservative manner. It employs the same conventions for depicting the KaiC oscillator, adds a schematic cycle for RpaA, and combines these with an iconic representation of oscillating gene expression. The result, however, is unduly cognitively taxing, in part due to the bifurcated use of space on the page. This is another example of a (partial) imaginative failure: both the importance of the S-​phosphoform for KaiC oscillator output and the influence of RpaA are successfully shown, but the authors regard the graphic as leaving unwanted free variability, as it still does not sufficiently incorporate constraints about the precise effects of RpaA in modulating the expression of class I and class II promoters. Figure 7.4c abandons the standard representation of the KaiC oscillator, reducing it to a schematic cycle, and in this way foregrounds the relative contributions of KaiC and RpaA to output. Yet the entire graphic must be duplicated in order to make clear the differential effects of KaiC and RpaA on both class I promoters (top) and class II promoters (bottom). It is just this kind of creative deployment of visual representations that often serves as a prerequisite in putting forth any clear mechanistic hypothesis, and it helps to illustrate the utility of external representations in pursuing imaginative success. One would be hard-​pressed to keep a private “image in the mind’s eye” of both of the two images shown in Figure 7.4c, or to alternate between imagining each of them, hoping to compare them. Utilizing space on the page and outsourcing these representations, one attains a visualization that is sufficient to incorporate all relevant constraints and to advance a clear hypothesis.


7.4  Imagining Mechanisms in Operation

Although the mechanisms represented in the figures in the previous section are dynamic, the diagrams are static. To understand how the operations portrayed can result in the generation of the phenomenon (in the case of circadian rhythms, sustained oscillation), the viewer must imagine the execution of the operations shown in the diagrams. Sometimes this can be done by mentally animating the diagram, visualizing each operation and taking into account the changes in which it results. Using a task in which participants were required to determine the effect of pulling a rope on parts of a pulley system (Figure 7.5) while reaction times and errors were measured and eye movements tracked, Hegarty (1992; see Schwartz and Black 1999 for related results) demonstrated that people sequentially animate individual parts of the pulley system in causal order. Although the operations

Figure 7.5  Pulley systems used in Hegarty’s (1992) experiments. Reprinted, with permission, from Hegarty, M. (1992). Mental animation: Inferring motion from static displays of mechanical systems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, Figure 1.

in the mechanism diagrams in the previous section (synthesizing proteins or binding phosphates) are less readily imagined visually, when the diagram is simple enough those expert in the types of operations proposed are often able to rehearse mentally the effects of these operations to determine how the mechanism generates the phenomenon. Still, it is rare for a diagram to explicitly encode all the background knowledge (e.g., regarding the relative phasing of different activities) that is required. Mental animation works reasonably well when the operations occur sequentially so that one does not have to try to keep track of the effects of multiple operations at the same time and when the operations can be described in linear equations. But when multiple operations are viewed as interacting with each other at the same time in a non-linear fashion, as in the diagrams in the previous section, mental animation ceases to provide a reliable indication of the mechanism's behavior. Instead, researchers often appeal to computational models to determine how (and when) the mechanism's parts will behave. For example, the mechanism Rust et al. imagined in Figure 7.3c involves the continuous transformation between the four phosphoforms shown around the perimeter. These transformations are modulated by the effects of KaiA and KaiB, which are assumed to affect the rates in a non-linear fashion. This far exceeds the ability of humans to animate mentally. Accordingly, Rust et al. developed a computational model in which three equations specified how concentrations of the T, S, and doubly phosphorylated (ST) phosphoforms would change as a result of the operations represented by the arrows connecting them to other phosphoforms. Each term relating two phosphoforms includes a rate parameter $k_{XY}$, which itself changes according to the following formula:

$$k_{XY} = k_{XY}^{0} + \frac{k_{XY}^{A}\, A(S)}{K_{1/2} + A(S)}$$

where $k_{XY}^{0}$ is the rate in the absence of KaiA, $k_{XY}^{A}$ is the maximal influence of KaiA on the rate, and $A(S)$ is the concentration of active KaiA. The resulting differential equations were integrated numerically, resulting in a pattern of oscillation for each phosphoform. To understand these modeled data, the researchers visualized them in an abstract, quantitative space (Figure 7.6b) and compared them to a similar visualization of actual experimental data from a cyanobacterium (Figure 7.6a).
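The structure of such a model can be made concrete with a brief sketch. The following Python fragment is not Rust et al.'s published model or code; it is a minimal three-variable system of the same general form, in which each transition among the U, T, doubly phosphorylated (here "D"), and S phosphoforms is modulated by active KaiA according to the formula above. Every numerical constant is an illustrative placeholder rather than a fitted parameter value.

# A minimal sketch (placeholder parameters, not Rust et al.'s fitted values) of a
# three-variable phosphoform model with KaiA-modulated rate constants.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

KAIC_TOTAL = 3.4   # total KaiC concentration (placeholder, arbitrary units)
KAIA_TOTAL = 1.3   # total KaiA concentration (placeholder)
K_HALF = 0.43      # half-saturation constant for the KaiA effect (placeholder)

# (k0_XY, kA_XY) placeholder pairs for each transition; each neighboring pair of
# phosphoforms on the square U, T, D (= ST), S has a forward and a reverse rate.
PARAMS = {
    "UT": (0.0, 0.48), "TD": (0.0, 0.21), "SD": (0.0, 0.51), "US": (0.0, 0.05),
    "TU": (0.21, 0.0), "DT": (0.0, 0.17), "DS": (0.31, 0.0), "SU": (0.11, 0.0),
}

def rate(name, active_kaiA):
    # k_XY = k0_XY + kA_XY * A(S) / (K_half + A(S)): the KaiA-modulated rate law.
    k0, kA = PARAMS[name]
    return k0 + kA * active_kaiA / (K_HALF + active_kaiA)

def rhs(t, y):
    T, D, S = y
    U = KAIC_TOTAL - T - D - S
    A = max(KAIA_TOTAL - 2.0 * S, 0.0)   # the S phosphoform (with KaiB) sequesters KaiA
    k = lambda name: rate(name, A)
    dT = k("UT") * U + k("DT") * D - (k("TU") + k("TD")) * T
    dD = k("TD") * T + k("SD") * S - (k("DT") + k("DS")) * D
    dS = k("US") * U + k("DS") * D - (k("SU") + k("SD")) * S
    return [dT, dD, dS]

sol = solve_ivp(rhs, (0.0, 80.0), [0.5, 0.5, 0.5], dense_output=True, max_step=0.1)
t = np.linspace(0.0, 80.0, 400)
T, D, S = sol.sol(t)
for series, label in [(T, "T-KaiC"), (D, "ST-KaiC"), (S, "S-KaiC")]:
    plt.plot(t, 100.0 * series / KAIC_TOTAL, label=label)
plt.xlabel("Time (h)")
plt.ylabel("% KaiC")
plt.legend()
plt.show()

With parameters of the right magnitudes, simulating such a system yields curves of the kind shown in Figure 7.6b; the point of the sketch is only to show how the rate law and the arrows of the diagram translate into a system of equations that a machine, rather than the mind's eye, can animate.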

Figure 7.6  Comparison of empirical data and results of a simulation using the computational model Rust et al. developed based on the mechanism shown in Figure 7.3c. Both panels plot the percentage of KaiC in each phosphoform (Total, T-KaiC, ST-KaiC, and S-KaiC) against time in hours. From Rust, M. J., Markson, J. S., Lane, W. S., Fisher, D. S., & O'Shea, E. K. (2007). Ordered phosphorylation governs oscillation of a three-protein circadian clock. Science, 318, 809–812, Figures 1a and 4b. Reprinted with permission from AAAS.

Imagining Mechanisms with Diagrams  201 In this instance, the computational simulation served as a means of extending the ability of humans to imagine the operation of a mechanism. As discussed earlier, the basic structure and functional arrangement of the mechanism Rust et al. proposed was well supported by experimental data. But they also needed to support the claim that the dynamics of such a system could produce the target phenomenon of circadian rhythmicity. It was to accomplish this that the authors turned to a computational simulation. But the raw output of the simulation is a set of numbers: the values assigned to variables at continuous time-​points. To make sense of these numbers, the researchers graphed them over time just as they did the data from actual experiments. The fact that the graph closely resembles that of the actual data supports the contention that the imagined mechanism could account for the phenomenon—​it provides the epistemic grip needed to convincingly put forth a how-​possibly mechanism. In this case, there is a relatively close connection between the variables deployed in the model and biologically realistic parts and activities. It might seem that this suffices to explain how the mechanism works. But surprisingly, given the intuitive idea that completeness is a virtue in mechanistic explanations (Kaplan and Craver 2011), researchers often take a different approach to understanding why the mechanism behaves as the simulation shows that it does. To this end, researchers find it useful to abstract from actual components and search for general design principles (Levy and Bechtel 2013; Green et al. 2014). Jolley et al. (2012) illustrate such a strategy for imagining a mechanism of rhythmicity that aims not to track its actual parts and operations in full specificity but rather to reveal the key design principles behind its operation. They began by imagining the skeleton mechanism shown in Figure 7.7a. The basic functional arrangement here ought to be familiar from our previous discussion of the KaiC oscillator. What is being modeled is a simple post-​ translational oscillator that relies upon a single substrate (S) that can be phosphorylated (P) at two different sites under the influence of one of two enzymes, a phosphatase (the oval labelled “F” in Figures 7.7b and 7.7c) and a kinase (the oval labelled “E” in Figures 7.7b and 7.7c). While this closely resembles Figure 7.4b, they have annotated each arrow with two parameters (k1. . . k8 are rate parameters specifying the number of substrate molecules converted to product molecules in a given reaction per enzyme per minute and Km1. . . Km8 are binding parameters that determine the substrate concentrations at which each reaction reaches half its maximum rate). After generating the appropriate

Figure 7.7a  Simplified model of the oscillator used by Jolley et al. showing the sixteen parameters whose values they investigated.

Figure 7.7b  A schematic representation of the two clusters of parameter values Jolley et al. identified that generated sustained oscillations. Thickness of the arrows around the perimeter indicates relative magnitude (speed) of the parameter values that produced sustained oscillations. The flat-ended arrows in the center of these panels are explained in the text. Reprinted from Cell Reports, Vol. 2, C. C. Jolley, K. L. Ode, and H. R. Ueda, A design principle for a posttranslational biochemical oscillator, Figures 1b and 2d, © 2012, with permission from Elsevier.

equations, Jolley et al. used numerical simulation to discover which sets of parameter values would sustain oscillations with the hope that they could identify a systematic pattern among the successful parameter values. Even directing their search to those sets of parameter values they thought likely to produce oscillations, Jolley et al. found that only ~0.1% of the sets of parameter values they checked generated sustained oscillations. This low “hit rate” indicates that their model was not overly permissive in generating

Imagining Mechanisms with Diagrams  203 oscillations. By randomly generating over a billion sets of parameter values, they identified approximately a million “hits” that did generate sustained oscillations and then employed a clustering algorithm to determine if there were any common patterns among the hits. They found two major clusters that accounted for 70% of the hits and that most of the remaining hits were very similar to those in the clusters. Abstracting from specific values and using thickness of arrows to indicate the speed of reactions determined by the rate parameter, Figure 7.7b shows that both clusters employ a motif in which the rates in the clockwise direction are higher than in the reverse direction. They also exhibit a motif of having low binding parameters for two of the reactions, which result in much of the enzyme being tied up in one of the reactions and thus unavailable to catalyze other reactions until it is liberated. The authors refer to this as “enzyme sequestering,” which they represent by flat-​ended arrows from one phosphorylation step to a subsequent arrow symbolizing a reaction that will be slowed by the unavailability of the enzyme. The researchers supported the hypothesis that these two motifs are what generate the oscillations in the model by creating new parameter sets that conformed to these motifs; the result was a much higher hit rate for producing sustained oscillations in models that fit the motifs. The free variability permitted here is extensive. While the KaiC oscillator was the first post-​translational oscillator discovered, it is now widely believed that many organisms, including animals, have a circadian clock that includes post-​translational oscillators in addition to central TTFLs. There are other oscillators that are not circadian but also involve phosphorylation and dephosphorylation. Jolley et al.’s model is potentially applicable to a huge number of these cases. On the other hand, it is not known how many such post-​translational oscillators limit themselves to the simple model here and how many might involve further parts and activities. Insofar as the model is offered as a general model of oscillators—​it is supposed that this model may fit all cases, until it is shown otherwise—​the model is fictive.5 Meanwhile, the authors’ decision to encode the speed of reactions using the thickness of arrows in panels B and C is a creative choice that facilitates easy comprehension of how the schematic model depicted in panel A must have its parameter values fixed if oscillations are to be attained. One way to understand the distinctive value of the imaginative practice shown here, in contrast to those 5 Offering the model is thus a step in the dynamic practice of heuristic category-​negotiation (Sheredos 2015).

discussed above, is that Jolley et al. are attempting to clarify the decisive constraints that any post-translational oscillator must satisfy. The graphic attains imaginative success in putting forth a hypothesis regarding near-universal features of any such oscillator; this hypothesis then recommends certain standards for assessing imaginative success elsewhere, laying down general constraints that (if the hypothesis is a good one) any future bout of imaginative reasoning (i.e., any future positing of a hypothetical mechanism) must satisfy to attain success. In Figures 7.7a and 7.7b above, the structure and operations of the modeled system are foregrounded. Some temporal information (about rates of reactions) is included, but it remains difficult to mentally animate this graphic to understand how the whole system will behave over time. Rust et al.'s strategy for addressing this, seen in Figure 7.6, was to employ an abstract space in which one dimension of the page represents time, and the other represents a quantitative value (abundance of each KaiC phosphoform). An alternative strategy is to employ an abstract "state-space" in which every dimension defines the range of values on a variable, and then plot the state of the system at successive instants of time as a trajectory through that abstract space. Jolley et al. plotted successive states of their imagined system in a space defined by two variables, S00 and S10. Three partially overlapping trajectories of the mechanism are shown using lighter arrows in Figure 7.8, with arrowheads showing the direction of successive values. By following

Figure 7.8  Jolley et al.'s phase space plot showing how the functioning of their imagined oscillator results in a limit cycle. The axes are log10(S00) and log10(S10), in μM. Reprinted from Cell Reports, Vol. 2, C. C. Jolley, K. L. Ode, and H. R. Ueda, A design principle for a posttranslational biochemical oscillator, Figure 1e, © 2012, with permission from Elsevier.

Imagining Mechanisms with Diagrams  205 many such trajectories, researchers can identify abstract patterns of a complex system’s dynamics, since they are imagined as visual patterns in that space. For example, all three trajectories shown in Figure 7.8 asymptote on the black closed form known as a limit cycle. The fact that the form is closed indicates that as the system approximates the represented values, it will continue to cycle through the sequence of values indefinitely—​that is, it exhibits sustained oscillations. By constructing this abstract space, an exceedingly complex dynamic pattern in a system’s operations can be creatively displayed using a simple loop. While this graphic does not provide precise temporal information (e.g., it does not make clear how much time it takes for the system to move from one point on the trajectory to another), it can be used to provide general information about the system’s dynamics. Further information (which they present in other figures) is required to ensure that the oscillations are circadian in their time course, with one oscillation occurring roughly every twenty-​four hours. As before, this extremely abstract representation puts forth a hypothesis that can be taken to impose constraints upon any future imaginative success. The amount of background knowledge involved in understanding these constraints and the difficulty of invoking them “in the mind’s eye” illustrate again the utility of scientists’ reliance on external visualizations in attaining imaginative success.
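Both strategies can be made concrete with a short, deliberately generic sketch. The Python code below does not implement Jolley et al.'s model: the Brusselator, a textbook two-variable chemical oscillator, stands in for the imagined mechanism. The sketch randomly samples candidate parameter values, classifies each set by whether the late portion of its simulated trajectory still shows a rhythm, reports the resulting hit rate, and then plots state-space trajectories for one successful set, which wind onto a limit cycle. All ranges and thresholds are arbitrary, illustrative choices.

# A minimal sketch of a random parameter search for sustained oscillations followed
# by a state-space plot of one "hit." The Brusselator is used only as a stand-in.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def brusselator(t, state, a, b):
    x, y = state
    return [a - (b + 1.0) * x + x**2 * y, b * x - x**2 * y]

def simulate(a, b, start=(1.0, 1.0), t_end=200.0):
    return solve_ivp(brusselator, (0.0, t_end), list(start), args=(a, b),
                     t_eval=np.linspace(0.0, t_end, 2000))

def sustains_oscillations(a, b):
    # Crude test: does the last quarter of the trajectory still show a rhythm?
    late = simulate(a, b).y[0][-500:]
    return late.max() - late.min() > 0.1

# 1. Randomly sample candidate parameter sets and record the "hits."
#    (One could then cluster the hits, e.g. with scipy.cluster.vq.kmeans2.)
samples = np.column_stack([rng.uniform(0.5, 2.0, 200),   # candidate a values
                           rng.uniform(0.2, 5.0, 200)])  # candidate b values
hits = np.array([p for p in samples if sustains_oscillations(*p)])
print(f"hit rate: {len(hits) / len(samples):.1%}")

# 2. For one hit, plot trajectories from three initial conditions in state space;
#    all of them converge on the same closed curve, the limit cycle.
a, b = hits[0]
for start in [(0.5, 0.5), (3.0, 1.0), (1.0, 4.0)]:
    sol = simulate(a, b, start=start, t_end=60.0)
    plt.plot(sol.y[0], sol.y[1], lw=0.8)
plt.xlabel("x (first state variable)")
plt.ylabel("y (second state variable)")
plt.title("Trajectories converging on a limit cycle")
plt.show()

The design choice worth noting is the division of labor: the search loop externalizes the question of which parameter values could possibly sustain a rhythm, and the state-space plot externalizes the question of what the successful dynamics look like, just as in the cases discussed above.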

7.5  Conclusions: How-Possibly Mechanisms as Imaginative Successes

We have examined the epistemic activities through which scientists construct how-possibly accounts of mechanisms, that is, accounts of possible mechanisms that could explain a phenomenon of interest. Although how-possibly accounts have often been valued only as means to attain how-actually accounts, they represent important successes in science. Even in cases in which the account turns out to correspond to what is actual, the how-possibly account plays a critical role in showing that that mechanism could produce the phenomenon. Our interest has been in understanding the epistemic success of providing a how-possibly account. The epistemic activities that go into providing how-possibly accounts can be regarded as involving imagination, characterized in terms of involving visualization, creatively going beyond the evidence, being fictive by not

206  The Scientific Imagination entailing a commitment to actuality, and allowing for constrained flexibility in generating a mechanism design. We have regarded diagrams as the external traces of scientists’ imaginative reasoning, enabling us to examine how their imaginative reasoning proceeds. We have suggested that it is fruitful to regard the construction of an account of a how-​possibly mechanism as an imaginative success, where the success consists in advancing a scientific field by integrating data to provide an intelligible explanation of the phenomenon. This is an achievement that must be understood independently of (and is in fact presupposed by) the kind of success that would be achieved by a how-​actually account, and which would herald the completion of research regarding a phenomenon. To provide this account, we have relied on diagrams as the traces of imaginative reasoning. Diagrams figure prominently in scientists’ construction and presentation of how-​possibly accounts of mechanisms. We examined a variety of published diagrams as well as drafts generated before publication from research on circadian rhythms in cyanobacteria. Diagrams clearly involve visualization, are fictive in that they do not themselves assert that the proposed mechanism is actual, involve creativity in putting together the components of a mechanism in a way that could produce the phenomenon, and exhibit constrained free variability, often signaled by the role of question marks in a diagram. Since they are supported by data, diagrams are not entirely fictive, but instead operate under constraints. By comparing successful diagrams we showed how further research both restricts the free variability and opens up new avenues for variation. The discovery that post-​translational processes suffice for circadian rhythms revealed that the diagrams in Figures 7.1 and 7.2 were clearly fictive in that they did not correspond to the actual mechanism. The diagrams shown in Figure 7.3 are also fictive in that they present competing accounts of how the mechanism might work, going beyond the data. It is through this interplay of fact and imagination that all of these authors achieve imaginative success by offering accounts of mechanisms that could intelligibly generate circadian rhythms, thereby providing explicit targets for future research. Such success is not simple and automatic, as shown by a case in which we had access to drafts of the mechanism the researchers were proposing to explain how the circadian oscillator regulates gene expression. Here, not surprisingly, early drafts, while in part successful, also exhibited imaginative failure. The first draft exhibited the basic finding of the researchers that the S-​phosphoform of KaiC drives gene expression but did not show how it

interacted with another, already established pathway. We presented one of many drafts that the researchers constructed in the process of exploring how the two pathways interacted. While these did include both pathways, and in that sense were more successful, the attempt to incorporate detail about the interaction led these to become increasingly complicated. Finally, the researchers chose to simplify the diagram by leaving out details not pertinent to the question of the interaction of the two pathways and constructed separate diagrams for the two classes of gene promoters involved. This proved far more successful and led to the diagram they published. The unpublished record gives an indication of the value that scientists assign to imaginative success, and the difficulties they often face—and work to overcome—in attaining it. Unique difficulties are involved in imagining a possible mechanism's operations over time. With relatively simple mechanisms that are organized sequentially and do not involve significant non-linearities, people can successfully imagine them in operation and draw conclusions about how the proposed mechanism would behave. But many of the mechanisms being proposed in contemporary biology, such as those involving the cyanobacterial circadian clock, are not sequentially organized (employing sometimes multiple feedback loops) and involve non-linear reactions. We showed that in the case of one of the accounts of possible mechanisms considered earlier, the researchers turned to computational modeling to determine how their proposed mechanisms would behave. This activity itself was grounded in the diagram in which they had imagined the mechanism. We finished with a case in which, in the effort to understand the basic principles operative in a proposed mechanism, researchers abstracted from details of the mechanism and explored through a computational model what parameter values were required to yield the phenomenon. In both these cases, the researchers represented the results of their modeling in graph representations that enabled them to present the results of the simulation of the imagined mechanism. These cases illustrate again how diagrams are employed to facilitate imaginative success. We conclude that an important part of the epistemic project when scientists develop mechanistic explanations is imagining possible mechanisms by representing the parts, operations, and organization of the mechanism in diagrams and then putting those diagrams to work in further imagining (often using computational simulations) how those mechanisms will operate. Putting components together and showing that the possible

208  The Scientific Imagination mechanism could generate the phenomenon are important imaginative successes in the development of intelligible mechanistic explanations, and visualizing computational models is a crucial component of this success. Notably, imaginative success serves to generate norms—​as researchers develop diagrams and computational models that they regard as establishing requirements on mechanisms that can account for a phenomenon, they lay down basic constraints that must be satisfied by any future imagining of this type of mechanism. In this respect, norms of imaginative success exhibit a kind of self-​regulation, which we are unlikely to understand until we treat imaginative success on its own terms, disentangling it from the norms that govern successful attainment of a how-​actually explanation.

References

Bechtel, W. (2006). Discovering Cell Mechanisms: The Creation of Modern Cell Biology. Cambridge: Cambridge University Press.
Bechtel, W., and Abrahamsen, A. (2005). "Explanation: A Mechanist Alternative." Studies in History and Philosophy of Biological and Biomedical Sciences 36: 421–441.
Bechtel, W., and Abrahamsen, A. (2010). "Dynamic Mechanistic Explanation: Computational Modeling of Circadian Rhythms as an Exemplar for Cognitive Science." Studies in History and Philosophy of Science Part A 41: 321–333.
Bechtel, W., and Richardson, R. C. ([1993] 2010). Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Cambridge, MA: MIT Press.
Boden, M. A. (2004). The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge.
Burnston, D. C., Sheredos, B., Abrahamsen, A., and Bechtel, W. (2014). "Scientists' Use of Diagrams in Developing Mechanistic Explanations: A Case Study from Chronobiology." Pragmatics and Cognition 22: 224–243.
Byrne, R. M. J. (2005). The Rational Imagination: How People Create Alternatives to Reality. Cambridge, MA: MIT Press.
Craver, C. F., and Darden, L. (2013). In Search of Mechanisms: Discoveries Across the Life Sciences. Chicago: University of Chicago Press.
Currie, G., and Ravenscroft, I. (2002). Recreative Minds: Imagination in Philosophy and Psychology. Oxford: Oxford University Press.
Ditty, J. L., Williams, S. B., and Golden, S. S. (2003). "A Cyanobacterial Circadian Timing Mechanism." Annual Review of Genetics 37: 513–543.
Green, S., Levy, A., and Bechtel, W. (2014). "Design Sans Adaptation." European Journal for Philosophy of Science 5: 15–29.
Hegarty, M. (1992). "Mental Animation: Inferring Motion from Static Displays of Mechanical Systems." Journal of Experimental Psychology: Learning, Memory, and Cognition 18: 1084–1102.
Ishiura, M., Kutsuna, S., Aoki, S., Iwasaki, H., Andersson, C. R., Tanabe, A., Golden, S. S., Johnson, C. H., and Kondo, T. (1998). "Expression of a Gene Cluster KaiABC as a Circadian Feedback Process in Cyanobacteria." Science 281: 1519–1523.
Iwasaki, H., and Kondo, T. (2000). "The Current State and Problems of Circadian Clock Studies in Cyanobacteria." Plant Cell Physiology 41: 1013–1020.
Jolley, C. C., Ode, K. L., and Ueda, H. R. (2012). "A Design Principle for a Posttranslational Biochemical Oscillator." Cell Reports 2: 938–950.
Jones, N., and Wolkenhauer, O. (2012). "Diagrams as Locality Aids for Explanation and Model Construction in Cell Biology." Biology and Philosophy 27: 705–721.
Kaplan, D. M., and Craver, C. F. (2011). "The Explanatory Force of Dynamical and Mathematical Models in Neuroscience: A Mechanistic Perspective." Philosophy of Science 78: 601–627.
Kirsh, D., and Maglio, P. (1994). "On Distinguishing Epistemic from Pragmatic Action." Cognitive Science 18: 513–549.
Levy, A., and Bechtel, W. (2013). "Abstraction and the Organization of Mechanisms." Philosophy of Science 80: 241–261.
Liao, S.-Y., and Gendler, T. S. (2011). "Pretense and Imagination." Wiley Interdisciplinary Reviews—Cognitive Science 2: 79–94.
Machamer, P., Darden, L., and Craver, C. F. (2000). "Thinking About Mechanisms." Philosophy of Science 67: 1–25.
Mori, T., Williams, D. R., Byrne, M. O., Qin, X., Egli, M., McHaourab, H. S., Stewart, P. L., and Johnson, C. H. (2007). "Elucidating the Ticking of an In Vitro Circadian Clockwork." PLoS Biology 5: e93.
Nakajima, M., Imai, K., Ito, H., Nishiwaki, T., Murayama, Y., Iwasaki, H., Oyama, T., and Kondo, T. (2005). "Reconstitution of Circadian Oscillation of Cyanobacterial KaiC Phosphorylation in Vitro." Science 308: 414–415.
Paddock, M. L., Boyd, J. S., Adin, D. M., and Golden, S. S. (2013). "Active Output State of the Synechococcus Kai Circadian Oscillator." Proceedings of the National Academy of Sciences 110: E3849–E3857.
Rust, M. J., Markson, J. S., Lane, W. S., Fisher, D. S., and O'Shea, E. K. (2007). "Ordered Phosphorylation Governs Oscillation of a Three-Protein Circadian Clock." Science 318: 809–812.
Schwartz, D. L., and Black, T. (1999). "Inferences Through Imagined Actions: Knowing by Simulated Doing." Journal of Experimental Psychology: Learning, Memory, and Cognition 25: 116–136.
Sheredos, B. (2015). "Re-reconciling the Epistemic and Ontic Views of Explanation (or, Why the Ontic View Cannot Support Norms of Generality)." Erkenntnis 81, no. 5: 919–949.
Sheredos, B., Burnston, D., Abrahamsen, A., and Bechtel, W. (2013). "Why Do Biologists Use So Many Diagrams?" Philosophy of Science 80: 931–944.
Strawson, P. F. (1970). "Imagination and Perception." In Experience and Theory, edited by L. Foster and J. W. Swanson, 31–54. Amherst: University of Massachusetts Press.
Thomas, N. J. T. (1999). "Are Theories of Imagery Theories of Imagination? An Active Perception Approach to Conscious Mental Content." Cognitive Science 23: 207–245.
Tomita, J., Nakajima, M., Kondo, T., and Iwasaki, H. (2005). "No Transcription-Translation Feedback in Circadian Rhythm of KaiC Phosphorylation." Science 307: 251–254.
Tversky, B. (2011). "Visualizing Thought." Topics in Cognitive Science 3: 499–535.

8
Abstraction and Representational Capacity in Computational Structures
Michael Weisberg

Scientific models are often introduced using a verbal narrative, and sometimes are presented as narrative alone, without giving any mathematical, computational, or physical structure along with words. These practices have prompted several philosophers of science to develop accounts of models as fictions (Frigg 2010; Godfrey-Smith 2009; Toon 2012). In such accounts, the mathematics or computations accompanying narratives are descriptions of the model, but the fictional scenario itself is the model. When a biologist says "imagine a population of self-replicating RNA molecules," she should be taken literally. Her subsequent mathematical description is a description of this population, which is literally the model. I have argued against this position in Simulation and Similarity (Weisberg 2013), and I continue to believe that even when described in narrative form, mathematical models are best understood as interpreted mathematical structures. However, an important insight of the fictions position is that mathematical structures may not have sufficient representational capacity to do the representational work of models. In particular, Matthewson (2012) and other proponents of the fictions view have argued that many models are specially focused on causal or mechanistic aspects of their targets. While it is easy to see how an imagined scenario can have a causal structure, it is very difficult to see how a mathematical structure can be causal. Although I think proponents of this argument are too quick to conclude that mathematical structures cannot represent causal structures (Glymour 2003; Pearl 2000), this argument raises an important point. The kinds of mathematical structures philosophers of science have traditionally associated with models are trajectory spaces, and it isn't obvious how these can be used to represent causes. Part of my response is that trajectory spaces alone

Abstraction and Representational Capacity  211 don’t represent causes; all mathematical structures require interpretation in order to represent targets. Modelers’ interpretations are what tie particular parts of mathematical structures to particular features of targets. This is true for both causal and non-​causal features. However, this response isn’t sufficient. Many models are more thoroughly causal than interpreted trajectory spaces. Many kinds of models in the natural and social sciences describe the behavior of individuals (particles, people, cells, plants, etc.) as they transition through some process and interact with others. Although it may be possible to represent such scenarios using trajectory spaces, it is not always straightforward to do so. This is one reason it is tempting to see these kinds of mechanistic models as imagined scenarios, but I think that a much better way of understanding them is as computational models. Such models consist of interpreted computational structures, and these structures potentially represent features of target systems. This chapter explores this suggestion in more detail, examining what computational structure consists of, the resources it offers modelers, and why attempting to re-​describe computational models as imaginary concrete systems fails even more dramatically than it does for mathematical models.

8.1  Computational Models and Computational Structure

Computers are a ubiquitous feature of contemporary science. Every aspect of research, including experimental design, data acquisition, simulation, analysis, preparation of research for publication, publication, and even reading published research, is conducted with the aid of computers. Moreover, a large amount of theoretical scientific research is conducted almost entirely using computers. Although hand calculations remain important, most research mathematics involves extensive use of computers for checking, exploring, and simulating. More relevant for this chapter is that when theorists create and explore mathematical models, they often do so using computers. Although everything I have described up to this point is computationally intensive, none of this is computational modeling in the sense I will discuss here. Computational modeling is not just a matter of using computers, although it almost always requires their use. What makes computational

modeling distinct from other kinds of modeling is the use of interpreted computational structures as models. What constitutes a computational structure? I will complicate this a little further along in the chapter, but for now, let's understand a computational structure as a procedure. Computational models are distinct from mathematical and physical models in that the causal structure of a target system (or possible target system; I will drop this caveat for clarity) gets mapped onto the procedural structure of the model. A computational model does not necessarily have to be run on a computer; conversely, not every model that is run on a computer is a computational model. What is required is that its computational structure is used to represent a target. A simple, canonical computational model is Schelling's model of segregation. In this model, a set of transition procedures potentially represents the utility functions, thoughts, and behaviors of people in a real city. Schelling initially instantiated his model on a chessboard with dimes and nickels, but modern versions are instantiated in computers. However it is instantiated, Schelling's model consists of a procedure used to represent a causal process. The model consists of agents of two types (A and B) distributed on a lattice. Their initial distribution is random, but in each cycle of the model, the agents have the possibility of moving according to a utility function and a movement rule. The utility function says that each individual prefers that at least 30% of its neighbors be of the same type. So the As want at least 30% of their neighbors to be As, and likewise for the Bs. Schelling's neighborhoods were defined as standard Moore neighborhoods, a set of nine adjacent grid elements. An agent standing on some grid element can have anywhere from zero to eight neighbors in the adjoining elements. The model is made dynamic by a simple movement rule. In each cycle of the model, its agents choose either to remain in place or to move to a new location. When it is an agent's turn to make a decision, that agent determines whether its utility function is satisfied. If the function is satisfied, the agent remains where it is. If the function is not satisfied, then the agent moves to the nearest empty location. This sequence of decisions continues until all of the agents' utility functions are satisfied. When the movement rule and utility function are implemented in a simulation, a cascade is observed, which leads from integrated neighborhoods to highly segregated neighborhoods. In a modern computer implementation of this model on a 51 × 51 grid (shown in Figure 8.1), a preference for 30% like neighbors usually leads to agents having 70% like neighbors.


Figure 8.1  Computer simulation of Schelling's segregation model. On the left is shown a random distribution of the agent types. As time moves forward, large clusters of the two agent types form.
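The procedure just described can be rendered as a short program. The following Python sketch is not Schelling's original implementation or anyone's published code: it simplifies the movement rule by relocating an unsatisfied agent to a randomly chosen empty cell rather than to the nearest one, and the grid size, vacancy rate, and similarity threshold are illustrative settings.

# A minimal Schelling-style sketch: 0 = empty cell, 1 and 2 = the two agent types.
import numpy as np

rng = np.random.default_rng(0)
SIZE, EMPTY_FRACTION, SIMILARITY_THRESHOLD = 51, 0.1, 0.3
cells = rng.choice([0, 1, 2], size=(SIZE, SIZE), p=[EMPTY_FRACTION, 0.45, 0.45])

def neighbors(grid, i, j):
    # Values of the occupied cells in the Moore neighborhood of (i, j).
    block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    vals = block.flatten().tolist()
    vals.remove(grid[i, j])                   # drop one instance of the agent's own value
    return [v for v in vals if v != 0]

def satisfied(grid, i, j):
    # The utility function: at least SIMILARITY_THRESHOLD of occupied neighbors match.
    nbrs = neighbors(grid, i, j)
    if not nbrs:
        return True
    return nbrs.count(grid[i, j]) / len(nbrs) >= SIMILARITY_THRESHOLD

for cycle in range(200):                      # cycles of the model
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE)
              if cells[i, j] != 0 and not satisfied(cells, i, j)]
    if not movers:
        break                                 # every utility function is satisfied
    for i, j in movers:                       # movement rule (simplified: random empty cell)
        empties = np.argwhere(cells == 0)
        ti, tj = empties[rng.integers(len(empties))]
        cells[ti, tj], cells[i, j] = cells[i, j], 0

# Average share of like neighbors once the model settles:
like = [neighbors(cells, i, j).count(cells[i, j]) / max(len(neighbors(cells, i, j)), 1)
        for i in range(SIZE) for j in range(SIZE) if cells[i, j] != 0]
print(f"stopped at cycle {cycle}; mean like-neighbor share = {np.mean(like):.2f}")

Even in this stripped-down form, the conditional, procedural character of the model is explicit: the program checks each agent's utility function and, only if it is unsatisfied, executes the movement rule.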

Schelling’s model of segregation nicely illustrates how computational structures can be used to represent causal relations. Say we find a pattern of segregation in a city that looks much like what Schelling predicted. If his model is an explanation of that pattern of segregation, then after interpretation, the model’s procedure must be similar to what is happening in the city. Model agents follow a two-​step procedure: determine whether they have enough similar neighbors to satisfy their utility function, and if their function is unsatisfied, move to a new position. The model will explain the segregation in a real city if an analogue to that procedure characterizes the real agents in that city. One reason that a computational structure is so well suited for Schelling’s model is the conditional structure of the model. In English, we say that the model agents assess whether or not their utility function has been maximized. If it has, they remain in the same state. If it has not been maximized, then they move to a new position, creating a new state of the model. While this could perhaps be modeled using piecewise functions, it is much more naturally represented using a computational procedure. In this case we can clearly see how the procedure of the model is used to represent a causal process. Further, we can see how the representational adequacy of the model would be assessed. Up to some standard of fidelity, we would determine the extent to which the primary procedural features of the model—​its movement rule and utility function—​are instantiated in some real-​world  city. Schelling’s model is an incredibly simple example of the representational power of computational structure. In the next section I  will take a more detailed look at computational structure, arguing in subsequent sections that this gives computational modelers enormous representational resources.


8.2  Aspects of Computational Structure

In the previous section I described computational models as interpreted procedures. The aim of the rest of this chapter is to begin making the idea more precise, considering the nature of computational structure in more detail. The precise nature of a computation and a computational structure is, of course, a vast topic, so my discussion here will be preliminary and incomplete. That said, even a cursory look at some of the subtleties will reveal how computational structure generates massive representational capacity for modelers. Computations are procedures operating on information. More formally, we can say that a computation consists of an algorithm acting on data (Abelson et al. 1985). A computational model will specify the initial state of the information (called the input) and one or more algorithms that transform the input through a series of state changes, ultimately yielding an output. Even with the further complication of separating algorithms from data, the distinctive feature of computational models is that procedures are the core components of the model, and they are the structures in virtue of which some features of a target can be represented (Kimbrough 2003). What does it mean to say that computational structures separate algorithms from data, and how does this differ from other types of representations such as concrete representations and mathematical representations? This is the first place abstraction enters our discussion. Computational procedures or algorithms are designed to be abstractions over specific operations. Very simple algorithms are familiar arithmetic procedures described in algebra, where symbols represent numbers. But the notion of an algorithm is more general: it is any procedure that can operate on symbols representing data of any sort—numbers, more complex mathematical objects, words, lists, databases, and so forth. We now turn to several core properties of computational structure that are relevant to computational modeling: abstraction, scheduling, and how probability is simulated.

8.2.1  Abstraction

The concept of abstraction is said to permeate the entire field of computer science (Dahl et al. 1972). All of the core concepts of computation rely on

Abstraction and Representational Capacity  215 the notion of abstraction, even concepts as seemingly familiar as variables. The idea is that the way things change can be independent of what things are changing. Well-​constructed computational structures separate the change from what is changing, or the procedure from the data. The data can thus be treated as opaque (it is “black-​boxed”), something simply transformed by the procedure. Here is a simple example. Let’s say we are interested in building a machine that can solve summation problems. To keep things simple, let’s assume we already have a procedure that can take two numbers and add them together. But what we want to do is to create a way to take any list of numbers and give the sum of those numbers. Even with a procedure for adding two numbers, summing a list requires some additional thinking. If the list only has two numbers, we don’t need anything beyond what we already have. But once we have more than two numbers, we no longer have a function that can add them. It is helpful to think about the simplest case first. Call our adding function ADD, and assume it takes two numbers x and y and returns their sum, which we will call z. Say we have the list [1 2]. We can apply our function ADD(1,2), and it returns 3. Now let’s say we have a list of three numbers, [1 2 5]. Intuitively, we might take our function, apply it to the first two elements, replace those elements with the output, and then apply the function again. Step by step (and assuming we can substitute two numbers with one number on our list), it would look like this: Input

Memory State

[1 2 5] ADD ADD

[1 2 5] [3 5] [8]‌

It is clear that the procedure could be simplified by telling the computer to do the ADD step twice. Of course, that doesn’t generalize; what if there were four numbers to add? So one option would be to have the user input how many times the addition procedure should be run (something like “repeat n ADD”). But the information about how many times it should be run is already present—​it’s the length of the list minus one. So an even simpler option is to make a procedure that adds together the first two elements of a list so

long as there is more than one element on the list. Once the list has a single element, we are done. This becomes a very general procedure for summation starting from the simple procedure of addition. And because the halting condition has to do with the length of the list, not some further input or assumptions about the list, the procedure is independent of the data it acts on.
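To make the worked example concrete, the following is a minimal sketch of the general summation procedure built from a two-argument ADD (my own rendering; the function names are invented). Note that the halting condition looks only at the length of the list, never at what the list contains.

# A sketch of the summation procedure described above.
def add(x, y):
    """The primitive we take for granted: add exactly two numbers."""
    return x + y

def sum_list(numbers):
    """Repeatedly replace the first two elements with their ADD result;
    halt as soon as only a single element remains."""
    state = list(numbers)
    while len(state) > 1:                       # halting condition: list length only
        state = [add(state[0], state[1])] + state[2:]
    return state[0]

print(sum_list([1, 2, 5]))                      # [1 2 5] -> [3 5] -> [8] -> 8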

8.2.2  Layers of Abstraction and Encapsulation

So far I have discussed abstraction in an extremely simple way. While I can't even come close to giving the subject the attention it deserves in this chapter (it is one of the central notions of computer science), we need to delve a little deeper here. The computational procedure I just described generalizes the primitive function of addition, which is a function handled at a very low level in a computer processor—meaning that the processor itself can add two numbers (represented in binary). In order to get a more general function for summing up a list, we started by saying "Let's assume we have a function that can add two numbers together" and then didn't think any more about what the function was. All we had to know is that it took two numbers and added them together.

Now that we have a function for summing lists, we can also abstract away the details of how that works and just think about the function. Say we wanted to use our summation function (sum) to figure out a philosophy department's total salary pool and then multiply that number by the cost-of-living increase offered by the university. We would create a new function that took a salary list (called currentSalary) and an increase rate (increaseRate) as inputs and returned a number. It would apply the summation function to the list, then multiply this number by the increase rate. In pseudo-code, it would look like this:

def salarypool(currentSalary, increaseRate):
    return sum(currentSalary) * increaseRate
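The pseudo-code above happens to run as written in Python, where the built-in sum plays the role of the summation abstraction. A small usage example (the salary figures are invented for illustration) makes the layering visible: salarypool never looks inside sum, just as sum never looks inside the processor's primitive addition.

# salarypool sits on top of the summation abstraction; each layer treats the
# one below as a black box (illustrative data only).
def salarypool(currentSalary, increaseRate):
    return sum(currentSalary) * increaseRate

faculty_salaries = [91_000, 88_500, 102_000]        # hypothetical salary list
print(salarypool(faculty_salaries, 1.03))           # pool under a 3% cost-of-living increase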

This idea of functional abstraction scales up to very high levels; complex software design would be impossible without it. Complex software projects will have some design work done at the “10,000-​foot” level of abstraction—​ where the basic processes and the ways that they interface with each other are

sketched. Then substantial attention will be paid to the interfaces between these parts—what information one process expects from another, how a process can be called, what kind of output it is expected to provide, what its errors will look like, and so forth. Of course, at some point the details will have to be filled in. But even on very detailed, complex projects, it is rare to get all the way down to filling in the details of primitive functions like arithmetic operations, moving data in and out of memory, and so forth. The implementation of low-level functions will have been determined when the programming language was designed, or even when the computer chips were designed.

One way to think about this is hierarchically from the lowest levels of abstraction, which are very close to what is actually happening in the processor, through the abstractions required to build programming languages, through specialized libraries of functions that programs call and their associated data structures, through modules of a program, and all the way to the 10,000-foot view of a large program. There are reasons to not take the notion of a hierarchy too seriously here—for example, it is hard to make out what "higher" means in every case.1 But the idea that we can encapsulate details under more abstract structures is centrally important. By encapsulate, I mean that we hide details of how things work, only knowing that a certain kind of input is expected, a procedure will be applied that will make such-and-such transformation, and then the transformed output will arrive in a particular form. Once we begin thinking in terms of encapsulated procedures interfacing with one another, we can see why computer scientists describe computer programs as consisting of embedded abstractions, where some procedures hide other procedures inside them.

I began my discussion about the representational capacities of computational structures by noting that such structures can represent mechanistic and other causal relationships very naturally. The causal structure of a target system can be represented by the procedural structure of a computer program. But if computational structures consist of a hierarchy of abstraction, we need to ask where in the abstraction hierarchy the model lies. Two simple answers immediately suggest themselves: all of the computational structure is the model's structure, or only the lowest level is the model's structure. Call these the inclusive and reductive theses, respectively.

1 I am grateful to Ehud Lamm for pressing this point.

The attraction of the inclusive thesis is twofold. First, as I have emphasized, the complexity of computational structures provides very rich representational resources to modelers. Why should any particular level be the singular structure associated with models? Moreover, despite the way that computer scientists talk about abstraction as a kind of black-boxing or hiding of the underlying structure, abstractions can leak (Spolsky 2004). Pushing a function beyond the limits it was designed to work at can often remove guarantees about inputs and outputs, which can be thought of as the details underlying the function leaking upward. This suggests that there isn't a single privileged level. In favor of the reductive answer is the simple fact that all computational structure is ultimately instantiated in the computational structure's lowest level. No structure can be missed here, nor can leaks happen, because this represents how computations are actually instantiated in the processor.

Neither of these answers is satisfactory. While it is correct to think that structure from any part of the abstraction hierarchy can provide representational resources for modeling, it is certainly not the case that all of this structure is relevant in every instance of modeling. For example, very few, if any, computational models depend in any way on how arithmetic operations are carried out. Many models don't depend on the details of how random numbers are generated, even if some do. And so forth. At the same time, the reductive position is far too limiting, and even strangely irrelevant. At the lowest level, which programs ultimately compile down to, is a series of instructions in machine code. These instructions will involve very simple operations like writing a number to a particular address in memory, retrieving a number from memory, adding two numbers together, and so forth. No matter how complex the code, ultimately all processing is carried out with such instructions. So there is a very real sense in which any higher-level representational capacities of the program must have some very tight relationship with what is happening here. Nevertheless, the idea that representation is best understood at this level suffers from the same kind of epistemic problems that plague reductive accounts in other contexts. Although molecules are ultimately composed of fundamental particles, it is very difficult to understand the molecules' behavior if only the properties of those fundamental particles are considered. Only the molecule-level properties appropriately partition possibility space for the chemists' purposes. Similarly, a higher-level description of a computational structure is almost always needed both to understand what the

computer is doing and to tie computation to the properties of a target, the essential task of computational modeling.

These considerations suggest that a singular answer to our question may be unavailable. A better way to frame the question draws on material I have developed in Simulation and Similarity (Weisberg 2013). Parts of a computational structure become representational when theorists have the appropriate construals. As a consequence, construals will help us answer the question of which parts of the hierarchy are represented. For our purposes, the most important part of a modeler's construal is her intended scope and her assignment. Intended scopes specify which aspects of a target phenomenon are intended to be represented by a model, while assignments tell us which parts of the model are intended to represent which parts of a real or imagined target. Taken together, they allow the modeler to specify which part of the model should be understood as representational and which is just part of the non-representational infrastructure of the model.

What do I mean by this? Models typically have structure not present in real-world targets and not intended to be representational. For example, consider a canonical mathematical model: the Lotka-Volterra model of predation. This model represents population sizes using state variables, and the connection between them using a set of coupled differential equations. Since differential equations are defined over real numbers, the mathematics of the model can describe transitions between states where populations have a fractional number of individuals, or even an irrational number. No biologist who uses the model intends such values to be representational. They are only present because it is part of the price of using differential equations to construct a model. So we could say that the modeler's construal excludes any non-integer number of individuals, only assigning whole numbers representational power (or relations of denotation between model and target).

Something very similar should be said about computational models. Reflecting on the abstraction hierarchies of computer programs shows us that the complete structure of a computer program is enormous. And while all of this structure has representational capacity, much of it isn't going to be used to represent a target. While the concept of abstraction in programming was originally designed to make complex software design possible, the presence of abstraction allows the modeler to find an appropriate part of the abstraction hierarchy whose functional relationships mimic the causal relationships being described, and whose state transitions mirror the state transitions in nature being described.
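A rough sketch (the parameter values and the crude integration scheme are my own, purely illustrative choices) shows how non-representational structure arises: a simple Euler integration of the Lotka-Volterra equations happily produces fractional "individuals," and on the construal just described only the coarser, whole-number features are assigned representational power.

# A sketch of the Lotka-Volterra predation model (illustrative parameters):
#   dx/dt = alpha*x - beta*x*y      (prey)
#   dy/dt = delta*x*y - gamma*y     (predators)
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
x, y, dt = 10.0, 5.0, 0.01                       # initial populations, step size

for _ in range(2000):                            # integrate for 20 time units
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy

# The state variables now hold fractional "individuals." On the construal
# described above, only whole-number (or qualitative) features are assigned
# representational power; the fractional remainder is non-representational
# infrastructure that comes with using real-valued mathematics.
print(x, y, "->", round(x), round(y))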

This is a long way of saying that a computational model's location in an abstraction hierarchy depends on the modeler's intentions. If the precise way that probability or concurrency is implemented is part of her representational aims, then those functions or libraries are part of the model. If, on the other hand, her intention is only to have a simulated parallel or random process, then only an abstraction of these operations represents anything in the model.

Reflecting on the abstraction hierarchy and its representational capacity also sheds light on a common critique of computational models. Although it seems to be receding, this criticism holds that they are less rigorous than mathematical models, and that their internal opacity makes it difficult to know what is really going on inside of them (for a discussion and reply, see Humphreys 2004; Muldoon 2007). One way to think about this criticism and how it might be addressed is in terms of abstraction. Even when a computational model is understood perfectly well, something lower-level in the abstraction hierarchy might actually be driving a result. For example, one can imagine a model that is very sensitive to the way continuous functions are approximated, but where the procedures doing the approximation are buried in a low-level library and hence not accessible to the modeler as she designs the construal for the model. These problems can arise, of course, in mathematical models as well—some results depend on properties of underlying functions, numerical domains, and so forth. Modelers have a prescription for deciding the issue, and it is the same in both mathematical and computational models: robustness analysis (Levins 1966; Weisberg 2006; Weisberg and Reisman 2008; Wimsatt 1981).
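One way to picture the robustness prescription is the following rough sketch (the model, parameters, and step sizes are all invented stand-ins): rerun the same toy model under different low-level numerical choices and check whether the result of interest survives.

# A sketch of robustness analysis across low-level implementation choices:
# integrate the same toy predator-prey model with different step sizes (a
# stand-in for different buried approximation routines) and compare outcomes.
def final_prey_population(dt):
    alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5
    x, y, t = 10.0, 5.0, 0.0
    while t < 20.0:
        x, y = x + (alpha * x - beta * x * y) * dt, y + (delta * x * y - gamma * y) * dt
        t += dt
    return x

results = {dt: round(final_prey_population(dt), 2) for dt in (0.02, 0.01, 0.005)}
# If these disagree wildly, the "result" is an artifact of the approximation,
# not a robust feature of the modeled dynamics.
print(results)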

8.2.3  Lexical Scoping and Data Abstraction

In previous sections, I have talked about abstraction of procedures, but computational structure also deploys abstraction to deal with data. In the context of computer programs, "data" are whatever pieces of information a program operates over. Variables are used to represent data, but computer programs are often written only knowing that a variable represents, say, an integer, or a string. Sometimes operations can be performed without even knowing what the variable represents, just as inferential rules can be applied in first-order logic without knowing the meanings of variables and predicates.

Abstraction and Representational Capacity  221 Given that variables will represent data, computer programs require ways of binding data to variables. To bind data to a variable simply means to assign a value to the variable, although the value needn’t be a number. And for any given variable, there are contexts or scopes in which a given datum is bound to a variable and ones in which it is not. Lexical scope are those contexts in which the binding of a variable to its value is valid. It is important for a programmer to recognize in what scope a variable is defined. For example, one needs to keep track of whether values are local to particular functions or available to all computations that can take place while the program is being executed. As we will see in later sections, the fact that variables can have one value in one context and a different value in another is an important computational resource for modelers.

8.2.4  Scheduling

Another interesting difference between computational structures and mathematical structures is that in a computational structure, procedures happen in an order; they take place in time, unfolding in a specific series of steps. This means that a fully specified computational model requires a schedule of its steps and events (a topic called scheduling in the modeling and computer science literatures). While it might seem like scheduling is a mere pragmatic detail, it turns out to be a very important part of a computational model's structure. Changes to a structure's schedule can generate different properties, and these different properties can, in turn, significantly change the model's behavior.

For the kind of multi-agent models that I have been discussing in this chapter, an important scheduling issue concerns whether the computational process is parallel, serial, or a serial approximation of parallel. Schelling's original version of his segregation model was serial, and he made no serious attempt to simulate a parallel process. Agents in the upper left corner of the grid calculated their utility and moved (or not); he then swept across the row, down to the next row starting from the left, and so forth. In a real city, movement is either parallel, serial and random, or some combination thereof (clusters moving at the same time, but randomly). Schelling's method is a low-fidelity way to handle scheduling for a segregation model. So perhaps a truly parallel model would be a better way to represent a real city.

222  The Scientific Imagination It is easy to describe a fully parallel model but hard to implement one. Since most computational models are studied on computers with a small number of processors, true parallelization is rarely possible except in simple cases. In these cases, or when models are implemented in supercomputers that have many hundreds or thousands of processors, steps in the model’s procedure can be made to happen simultaneously. Even this, however, requires elaborate attention so that the processes, or threads, execute at precisely the same time. Given the nature of movement in Schelling’s model, one approach to simulating parallel movement would be to divide the computational procedure into two explicit steps and wait for all the agents to complete the first step before the second step begins. In the first step, all the agents could calculate their utility and make a movement decision to stay or to go. This decision could be recorded in a private variable, so that the information wasn’t available to neighbors. After each individual agent had decided to stay or to move, the movements themselves could be executed in any order. This would correctly simulate a parallel decision-​making process, but it could still create scheduling artifacts depending upon how ties and movements to the same locations are handled. Because of these complications, most implementations of Schelling’s model simply randomize the order of movements and change the order in each cycle of the model. If there are sufficiently many runs of the model, one hopes that order effects are washed out. Although randomizing turn-​taking is the norm when one wants to simulate a presumed parallel process, it is not essential. If there is some natural order to be represented, then a very different schedule might be relevant. My point is simply that a computational structure has a representational dimension of scheduling.
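As a rough illustration of the two-step schedule described above, here is a deliberately stripped-down, one-dimensional toy (my own construction, not Schelling's actual model; the thresholds, sizes, and vacancy rate are arbitrary): all agents decide privately first, and only then are the moves executed in a randomized order.

import random

# A toy "decide, then move" schedule on a one-dimensional ring of two agent types.
rng = random.Random(0)
city = [rng.choice(("X", "O")) for _ in range(30)]
city = [None if rng.random() < 0.2 else t for t in city]      # None marks a vacant site

def unhappy(i):
    """An occupant is unhappy if fewer than half its occupied neighbors match its type."""
    if city[i] is None:
        return False
    neighbors = [n for n in (city[i - 1], city[(i + 1) % len(city)]) if n is not None]
    return bool(neighbors) and sum(n == city[i] for n in neighbors) / len(neighbors) < 0.5

# Step 1: every agent decides before anyone moves (a simulated parallel decision).
movers = [i for i in range(len(city)) if unhappy(i)]

# Step 2: execute the moves; randomizing the order guards against schedule artifacts.
rng.shuffle(movers)
for i in movers:
    vacancies = [j for j, site in enumerate(city) if site is None]
    if vacancies:
        j = rng.choice(vacancies)
        city[j], city[i] = city[i], None

print("".join(site or "." for site in city))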

8.2.5 Simulating Probability Another important aspect of computational structure in modeling involves probability. Many computational models, including the vast majority of computational models used in the physical, biological, and social sciences, contain probabilistic transitions. These can be included because the modeler’s intended target actually behaves genuinely probabilistically. Or because the intended target’s transitions are too complex to be represented deterministically because of informational or computational limitations. Or because we

need to study an ensemble of cases to understand how a micro theory relates to a macro theory. In such cases, modelers can deploy probabilistic transition rules and then study distributions of outputs starting from sets of randomly generated values.

It is important to emphasize from the beginning that probabilistic model transitions, or something statistically indistinguishable from them, form an essential part of a model's computational structure. A concrete event in the natural world involves a particular sequence of steps. Similarly, a single run through a probabilistic procedure will also generate a particular event sequence. However, just as the abstract elements of a computational structure have representational resources beyond the specific computation being performed, when probabilistic transitions are added to models, these are part of the computational structure of the model and are important computational resources that can be drawn on.

A very simple example of what I am talking about can be seen in computational models of genetic drift. In the long run, when two alleles in a one-locus system are evolving by drift alone, one or the other will drift into fixation. This result can be proved mathematically, but more important for our purposes, it is easily demonstrated with a computational model. Although we know one allele or the other will eventually become fixed in the population, you actually have to run the model to find out which one will drift to fixation. And even more important for our purposes, the long-term behavior of such a system is often not what the modeler wants to look at. Long before alleles drift into fixation, the population may change such that selection takes over, niche construction reinforces one of the alleles, the allele is broken up by crossing over, or something else entirely. In these and related cases, it is the short- and medium-term transients in allele frequency that are often what is of interest to the modeler, and these frequencies are represented probabilistically.

I have been speaking as if probabilities are literally represented in computers, but this is almost never the case. Probabilities are calculated using random number generators, but with a few very special exceptions, these are actually pseudo-random number generators. Such generators implement procedures that generate numbers nearly indistinguishable from strings of random numbers. The quality of pseudo-random number generation is measured, among other ways, by the period of the sequence it generates—in other words, how many digits can be generated before returning to the same sequence.
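To make the drift example concrete, here is a minimal sketch of one-locus, two-allele drift (a Wright-Fisher-style resampling scheme; the population size and seed are arbitrary choices of mine). Note that the run is driven by a seeded pseudo-random number generator of the kind discussed next.

import random

# A sketch of genetic drift at one locus with two alleles, A and a. Each
# generation resamples N gene copies from the current allele frequency.
N = 200                               # gene copies in the population
count_A = N // 2                      # start at frequency 0.5
rng = random.Random(42)               # seeded pseudo-random number generator

trajectory = [count_A / N]
while 0 < count_A < N:                # run until one allele drifts to fixation
    freq = count_A / N
    count_A = sum(rng.random() < freq for _ in range(N))
    trajectory.append(count_A / N)

print("fixed allele:", "A" if count_A == N else "a",
      "after", len(trajectory) - 1, "generations")
print("early transient frequencies:", [round(f, 2) for f in trajectory[:6]])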

There are lots of different ways to generate pseudo-random numbers, and these methods have different properties and different qualities. For example, a very early method, proposed by von Neumann, is the middle-square method. You take a number (the seed), square it, pull out the middle digits as your random number, then use this as the next seed. A common modern pseudo-random number generator is called the Mersenne Twister, which has a period of 2^19937 − 1 and passes standard statistical tests for randomness. Such a procedure ensures that unless you purposely begin with the same seed, two simulations will behave as if they were probabilistically independent.

Different ways of generating pseudo-random numbers yield, strictly speaking, different computational structures. Does this mean that changing the pseudo-random number generator yields a different model? This is an important question and brings us back around to the question of abstraction. As I have already discussed, computational structures often have a hierarchical structure, where procedures can abstract over procedures. To implement the simple algorithm for summation I discussed, we don't need to know exactly how the computer performs the primitive addition operation. Similarly, in most cases of modeling, the details of exactly how pseudo-random numbers are being generated aren't relevant, so long as they are "good" random numbers.
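A rough sketch of von Neumann's middle-square scheme (the digit width and seed are arbitrary) shows how simple such a generator can be, and why its short cycles compare so unfavorably with the Mersenne Twister's enormous period.

# A sketch of von Neumann's middle-square method with four-digit seeds.
def middle_square(seed, n):
    """Yield n pseudo-random four-digit numbers starting from `seed`."""
    draws = []
    for _ in range(n):
        squared = str(seed * seed).zfill(8)    # pad the square to eight digits
        seed = int(squared[2:6])               # take the middle four digits
        draws.append(seed)
    return draws

print(middle_square(5735, 8))
# Sequences like this fall into short cycles quickly; by contrast, the Mersenne
# Twister's period of 2^19937 - 1 is astronomically long.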

8.3  Representational Resources of Computational Models In the previous section I discussed some aspects of computational structure that are relevant to thinking about models, ending with a discussion of where in an abstraction hierarchy a model’s structure lies. I argued that this is modeler dependent and is determined by the modeler’s intention. In this section, I will try to deepen that discussion, looking at further aspects of computational structure that are relevant to models and modeling. Let’s begin with a consideration of the way probabilistic transitions are represented in a model. I  already mentioned that probability enters computational models not only to model genuinely probabilistic processes but also as a measure of uncertainty and in simulating parallel processes. There is another way that probability enters computational modeling that is a little less straightforward: probability can enter as part of an ensemble approach to computational modeling.

Abstraction and Representational Capacity  225 What do I mean by this? There are lots of cases where we roughly know the causal structure of a system but have great uncertainty about the details and/​or the initial conditions of the system. One approach to such a case is to make a bunch of very similar models and see how robust particular behavior is across slight changes to the model. But another approach is to describe parts of the system probabilistically. Causal links can be described probabilistically, as can initial distributions of materials, agents, or organisms. The approach is called an ensemble approach because one executes the same computational model repeatedly, drawing different random numbers into the probabilistic representations. Outcomes can then be described in distributions just as one does in robustness approaches. Although it is tempting to treat this as identical to the robustness approach, I think that this misses subtle differences and underestimates the representational resources of simulated probability inside of a computational structure. It is really only one interpreted computational structure that is being used—​ each run of the model looks different because different random numbers are being drawn. But all of them are generated by the same underlying computational structure. This structure has the representational resources to support the ensemble approach. So when we get a distribution of outcomes, we say that this distribution is associated with a particular probabilistic model, not that the distribution is associated with a set of models. Another representational resource of a computational model is the structure of abstraction. In the previous section I discussed some of the ways that abstraction makes computer programming possible—​it would be nearly impossible for a human to write a modern computer program in machine instructions. But beyond convenience, the very structure of abstraction can be used as a representational resource. In the kinds of models I have been discussing in this chapter, one way that abstraction can be a resource has to do with how the relationship between properties at different scales can be studied. In order to see this, it is helpful to compare computational models to mathematical models. Many traditional models in the social sciences and in ecology focused on populations. Models were constructed to study the dependence of aggregate properties on one another. For example, the Lotka-​Volterra model gives the dynamics for a simple predator-​prey system. Its dependent variables are the size or density of a generic predator population and a generic prey population. The underlying mathematical structures are represented by coupled differential equations, whose variables represent population properties. The model is completely

silent on the properties of individuals that make up the population. However, when the model is directed at a target population, aggregate properties would ultimately be instantiated by individuals. But there is no way to study this part/whole relationship of instantiation in these models. Hence these models are not useful for learning about how, say, small changes to individual-level properties change population-level properties.

Computational structures allow considerably more flexibility. Since abstraction hierarchies naturally fall out of computational structure, models can be designed whose abstraction hierarchies model the compositional relations of their targets. In agent-based analogues of traditional models, individual organisms can be represented by one type of computational object, and the aggregate properties of populations can be represented by another object. The one that represents populations can abstract over the kind that represents individuals, so that the abstraction hierarchy mirrors compositional relationships in nature. If one is interested in the way that individual-level properties aggregate to compose population properties and dynamics, this can easily be studied within the model. If one is interested in the way that population-level relations constrain underlying properties or histories of individuals, this can also be studied. The abstraction hierarchy itself becomes a computational resource.

Lexical scope provides another representational resource for computational modeling. Recall that lexical scope is the part of the computational structure in which the binding of a variable to its value is valid. Very simple examples of lexical scoping were seen in my discussion of a summation function that took the first element of a list (call this a) and the second element of a list (call this b) and replaced a and b with a single value c that is a + b. I just described a computational procedure using variables. For a given call of this function, the variables a, b, and c will get temporarily assigned to particular numbers. As soon as the function is done executing, these assignments (or bindings) will no longer make any sense, so the computer needn't hold them in memory. Lexical scoping is thus useful in all kinds of simple contexts.

Lexical scoping can also be a representational resource in more scientifically interesting contexts. Consider an agent-based model such as Schelling's model. In this model, each agent is very simple but still has an identity. In this case, an identity means that it is numerically distinct, has a location and a utility function, and has a current level of utility. All of this information should not be available to all of the other agents. In other words, the scope of these agent variables is restricted to the agent and sometimes the agent's

Abstraction and Representational Capacity  227 neighbors. In a slightly more sophisticated version of the model that had strategic interactions, this would be even more relevant. One agent may need to “guess” what another is going to do, but if it had access to the internal state of all the agents, there would be no need to guess; all the information would be available. Lexical scoping is part of what makes the representation of distinct agents possible. And while this isn’t unique to computational structures—​ something similar can be done with modal logics—​it is very hard to get this kind of representational resource in mathematical models. This is only a partial list of the representational resources of computational structures. I have discussed these factors because I think they are some of the most important ones that arise in computational models, but also to point out how these factors allow the structure to be used in many common representational circumstances.

8.4  Why I Am (Still) Not a Fictionalist In earlier work, I  raised three objections to the models-​as-​fictions view. Specifically, I argued against the view that we should think of mathematical models as fictional scenarios that would be concrete if they were real. I gave three main arguments. The first is called the problem of variation. Insofar as models are fictions, there may be considerable differences in the way these are conceived of by different scientists. This is true of all fiction, of course, but it rarely poses a problem in understanding fiction, and it is often part of the fun. In science, the issue is potentially more serious because a scientific representation, even an idealized one, needs to be genuinely shared among its users. I also argued that mathematical and computational models had different kinds of representational capacity than concrete scenarios and that we should be cautious about appealing to the way scientists talk and even think about their models. All three of these objections apply to computational models as well as mathematical models. Given the increased representational resources of computational models, there is the possibility for even more variation in how models are imagined. It is unclear how all of the representational resources of computational models can be fleshed out as concrete scenarios, and the way scientists talk and think about computational models varies tremendously. The initial conditions of some computational models like Schelling’s can be

imagined, although in many cases the outcomes cannot be. Other models cannot even be imagined at all.2

The rich representational capacity of computational structures extends my second objection, providing additional support. Imagined, concrete scenarios are rich in properties, and this is one of the attractions of understanding modeling in terms of them. But ironically, they are impoverished with respect to some key computational properties. For example, consider how the complexities of abstraction would be addressed on the fictions view. In a computational model, abstraction hierarchies are themselves sources of representational capacity. In virtue of these hierarchies, a single model is capable of representing multiple target scenarios, abstracting over details of these scenarios, and even compressing multiple scenarios into a single model. On the other hand, fictional scenarios are concrete and singular. Just like concrete targets, imagined concrete systems have a single set of instantiated properties.

I could continue giving examples, but my main point is that the representational parts of interpreted computational structures have properties and resources that concrete systems do not have. This is one of the main reasons that modelers have largely abandoned concrete models in favor of mathematical and computational models. Conceiving of the metaphysics of models as imagined and concrete unnecessarily shackles modelers to the limitations of concrete systems.

2 Some philosophers (e.g., Thomasson, this volume) advocate the view that fictional scenarios and imagination should not be tightly linked. This objection loses its force on such an account.

8.5  Conclusion

This chapter explored the significance of abstraction and other computational structures for modeling. I argued that the best argument for the fictions view—that mathematical structures do not naturally represent causal processes—can be answered not by insisting that mathematical structures easily represent causal processes but rather by pointing out that computational models do. Once we begin thinking seriously about computational structures, in ways merely introduced in this chapter, we can see the great depth of computational resources that they provide. As computational models increasingly become important in the life and social sciences, far more philosophical attention will be required in order to fully understand their resources, their explanatory power, and the ways that they can be tested.

References

Abelson, H., Sussman, G. J., and Sussman, J. (1985). Structure and Interpretation of Computer Programs. MIT Electrical Engineering and Computer Science Series. Cambridge, MA: MIT Press.
Dahl, O.-J., Dijkstra, E. W., and Hoare, C. A. R. (1972). Structured Programming. London: Academic Press.
Frigg, R. (2010). "Models and Fiction." Synthese 172, no. 2: 251–268.
Glymour, C. (2003). "Learning, Prediction and Causal Bayes Nets." Trends in Cognitive Sciences 7, no. 1: 43–48.
Godfrey-Smith, P. (2009). "Models and Fictions in Science." Philosophical Studies 143, no. 1: 101–116.
Humphreys, P. (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. New York: Oxford University Press.
Kimbrough, S. O. (2003). "Computational Modeling and Explanation: Opportunities for the Information and Management Sciences." In Computational Modeling and Problem Solving in the Networked World: Interfaces in Computing and Optimization, edited by H. K. Bhargava and N. Yepp, 31–57. Boston: Kluwer.
Levins, R. (1966). "The Strategy of Model Building in Population Biology." American Scientist 54, no. 1: 421–431.
Matthewson, J. (2012). "Generality and the Limits of Model-Based Science." Ph.D. dissertation, Australian National University.
Muldoon, R. (2007). "Robust Simulations." Philosophy of Science 74, no. 5: 873–883.
Pearl, J. (2000). Causality. New York: Cambridge University Press.
Spolsky, J. (2004). "The Law of Leaky Abstractions." In Joel on Software, 197–202. Berkeley, CA: Apress.
Toon, A. (2012). Models as Make-Believe: Imagination, Fiction and Scientific Representation. Basingstoke: Palgrave Macmillan.
Weisberg, M. (2006). "Robustness Analysis." Philosophy of Science 73: 730–742.
Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. New York: Oxford University Press.
Weisberg, M., and Reisman, K. (2008). "The Robust Volterra Principle." Philosophy of Science 75: 106–131.
Wimsatt, W. (1981). "Robustness, Reliability, and Overdetermination." In Scientific Inquiry and the Social Sciences, edited by M. B. Brewer and B. E. Collins, 124–163. San Francisco: Jossey-Bass.

9

"Learning by Thinking" in Science and in Everyday Life

Tania Lombrozo

Two models of learning have dominated both research in human cognition and accounts of scientific progress. The first involves learning from observations, be it everyday experience or the results of systematic research. The second involves learning from testimony, be it the statements of relevant experts or the scientific canon on which new research is based. In both cases, learning is based on new evidence acquired "outside the head." But some of the time, everyday learning and scientific progress depart from these familiar forms. Consider a child who puzzles through a tricky riddle: What has a single eye but cannot see? When she finally reaches the answer—a needle—she has learned something new. Consider Einstein's well-known thought experiments involving elevators and moving trains, which likewise taught him (and the world) something new. In both cases, the new insight occurred in the absence of novel empirical observations or novel testimony. I refer to such cases of learning as learning by thinking.

Learning by thinking contrasts with the most familiar forms of learning, learning from observation and learning from testimony (itself a special kind of observation). In cases of learning by thinking (LbT), new insight is achieved in the absence of novel observations obtained "outside the head." Such cases naturally raise questions about how such novel insight is possible, in what sense it is really new, and whether we're justified in believing the conclusions delivered by LbT.

In the present chapter, my aim is to review some of what we've learned about LbT from recent research in cognitive development and cognitive psychology and, based on this research, to argue for a new take on the epistemic role of LbT. I'll begin by considering the most widely discussed case of LbT: thought experimentation. The literature on thought experiments

“Learning by Thinking” in Science and in Everyday Life  231 within philosophy raises a useful comparison between thought experiments and arguments, which structures the three sections that follow. These sections discuss whether LbT is formally reducible to argumentation (yes), psychologically reducible to argumentation (no), and epistemically reducible to argumentation (no). I ultimately suggest that psychological irreducibility explains the apparent novelty of the conclusions reached through LbT, and I point to a novel take on the epistemic value of LbT processes as practices with potentially beneficial epistemic consequences, even when the commitments they invoke and the conclusions they immediately deliver are not themselves true.

9.1  Thought Experiments, Arguments, and Three Kinds of Reduction Thought experiments are canonical examples of learning by thinking. Within philosophy, both scientific and philosophical thought experiments have been the targets of careful analysis, with the challenge being to explain how we seem to learn something new in the absence of novel observations. Articulating this challenge, Kuhn writes:  “How, then, relying exclusively upon familiar data, can a thought experiment lead to new knowledge or to a new understanding of nature?” (Kuhn [1964] 1977, 241).1 Working in psychology and education, John Clement asks:  “How can findings that carry conviction result from a new experiment conducted entirely within the head?” (Clement 2009, 687). One approach is to reduce thought experiments to more familiar forms of learning. For instance, John Norton argues that thought experiments are truly arguments, perhaps disguised in picturesque, narrative form (Norton 1996). On this view, thought experiments generate something “new” in the sense that they derive a novel conclusion from known premises by applying deductive or inductive rules of inference.2 A nice feature of this view is that the conclusions delivered through thought experimentation can potentially

1 Philosophers vary in how they articulate the puzzle of thought experimentation, some focusing on new knowledge, others on new understanding, etc. For the most part, the discussion has not focused on new learning, which is the main focus in the discussion that follows. I am grateful to Mike Stuart for bringing this point to my attention. 2 There’s still something interesting to be said about the sense in which deduction (or induction) generates something “new.” For relevant discussion, see Powers 1978.

232  The Scientific Imagination be justified—​this will transpire just in case (and to the extent that) the corresponding argument justifies its conclusion. Thought experiments might also share properties with learning through observation. For example, Mach suggested that thought experiments reflect “instinctive knowledge” gathered through experience (Mach 1897, 1905)—​a body of implicit (but potentially justified) beliefs that can be accessed through thought experimentation, yielding novel insights directly or as premises in further arguments. Within psychology, Clement suggests that mental simulations can “draw out implicit knowledge” contained in mental schemata that “the subject has not attended to and/​or not described linguistically before” (2009, 694). There are a variety of alternative proposals, including some with more rationalist (e.g., Brown 1991) and evolutionary commitments (e.g., Shepard 2008). For present purposes, the comparison between thought experiments and arguments is useful in framing a set of related questions about whether—​ and in what sense—​learning by thinking is reducible to argumentation. In a paper on thought experiments in science, for example, Tamar Gendler asks whether “any conclusion reached by a good thought experiment will also be demonstrable by a non-​thought-​experiment argument” (1998, 399), and she goes on to differentiate three readings of “demonstrable” that correspond to three questions about reducibility. Specifically, she asks whether thought experiments can be reconstructed as arguments from the perspective of a mature science, whether they nonetheless have heuristic value, and whether they are epistemically equivalent within a developing science. Her answers are, respectively, (trivially) yes, (trivially) yes, and (controversially) no. In the present chapter, I take up a related set of questions. First, are thought experiments formally reducible to arguments?3 That is, is there some argument, with appropriate premises, rules of inference, and conclusions, that delivers the conclusions of the thought experiment? Second, are thought experiments psychologically reducible to arguments, in the sense that any conclusions reached through thought experimentation by a given person could, under the same circumstances, have also been reached through an explicit argument? And finally, are thought experiments epistemically reducible to 3 It is worth clarifying that the notion of “argument” used throughout the chapter is deliberately broad. I include as arguments any inferences that can be represented in terms of premises, conclusions, and rules of inference, even if the premises or rules of inference are not ones we would typically offer in a verbal argument. For example, an application of Bayes’s rule could feature in an argument. This notion is therefore broader than that employed, for instance, in the argumentative theory of reasoning (Mercier and Sperber 2011). See also Stuart 2016.

“Learning by Thinking” in Science and in Everyday Life  233 arguments, in the sense that the conclusions of thought experiments derive their epistemic force entirely and exclusively from the force of the corresponding argument? My answers (yes, no, and no) will mirror Gendler’s, but my analysis differs from hers in two important ways: in diagnosing what it is that makes the conclusion of a thought experiment “new,” and in differentiating the epistemic roles of thought experiments and arguments. These questions, while formulated here in terms of thought experimentation, arise for any LbT process. In the three sections that follow, I consider each question in turn, relying most heavily on research that involves learning by explaining to oneself.

9.2  The Case for Formal Reduction Within psychology, there has been little research on thought experimentation as such (for exceptions, see Clement 2009). However, psychologists have studied mental simulations (Hegarty 2004), which are very much like thought experiments, as well as other processes that involve LbT, such as explaining to oneself (Fonseca and Chi 2011; Lombrozo 2012, 2016) and engaging in analogical reasoning (Gentner and Smith 2012). Based on this work, my colleagues and I have argued that LbT processes effectively recruit constraints on reasoning that deliver conclusions that might not otherwise be reached (Lombrozo 2012, 2016), where these constraints play a role akin to premises or rules of inference within an argument. To better appreciate the basis for these ideas, consider a typical experiment involving learning by explaining. In such experiments, participants are presented with a task, such as learning to categorize novel objects or learning what activates a machine. Half the participants are prompted to explain to themselves at key points in the experiment. For example, they might be asked to explain why a particular object belongs to a particular category, or to explain why a particular object activated a machine. Importantly, participants never receive feedback on the content or quality of their explanations. In the control condition, participants are instead asked to engage in a task that’s comparably demanding, such as thinking aloud, describing category members, or reporting whether or not a given object activated the machine. Participants are then probed to assess whether those who explained differ from those in the control condition in terms of the inferences they draw or the information they recall. If the former group outperforms the latter, this

234  The Scientific Imagination constitutes evidence for LbT, as participants were all presented with the same evidence and the same probes; the differences can be attributed to the kind of thinking in which they engaged.4 Using experiments that follow this basic form, we have found that relative to participants in control conditions, both children and adults who are prompted to explain are more likely to discover and to generalize patterns that support broad and simple explanations (Kon and Lombrozo in press; Walker, Bonawitz, and Lombrozo 2017; Walker and Lombrozo 2017; Walker et al. 2014, 2016, 2017; Williams and Lombrozo 2010, 2013; Williams et al. 2013) and to privilege causal information over superficial perceptual properties (Legare and Lombrozo 2014; Walker et al. 2014). For example, in one study, participants studied eight novel robots, four of which belonged to one category and the remaining four to another (Williams and Lombrozo 2010). The examples were constructed such that the two groups of robots could be differentiated by a salient but imperfect rule: three of the four robots in one category had round bodies, and three of the four robots in the other category had square bodies. The two groups could also be classified perfectly by discovering and using a more subtle rule: all of the robots in one group had feet that were flat on the bottom, and the remaining four had feet that were pointy on the bottom (despite otherwise variable foot shapes). Participants who were asked to explain why each robot might belong to its respective category were significantly more likely to discover this more subtle basis for classification, and to use it subsequently in classifying novel robots. Why might explaining have these effects? Williams and Lombrozo (2010, 2013)  propose what they call the “subsumptive constraints” account, according to which the process of explaining recruits an important explanatory constraint: to identify an explanans that explicitly or implicitly invokes an explanatory generalization that subsumes the explanandum. In so doing, participants will be driven to seek and favor broad patterns—​that is, those that they believe apply to many cases—​over idiosyncratic ones. This makes the potentially counterintuitive prediction that prompts to explain might actually impair learning when there is no broad pattern to be found, and indeed, this is what we’ve found (Williams et al. 2013). 4 One might worry that requesting an explanation is itself a kind of evidence. For example, it might bring with it the implication that there is something that can easily be explained, such that the experimenter expects the participant to have discovered it. Various experiments have aimed to equate such pragmatic inferences across conditions (Williams et al. 2013) or to determine whether participants do draw such inferences (Williams and Lombrozo 2010). The research to date suggests that effects of explanation cannot be attributed to these factors.

“Learning by Thinking” in Science and in Everyday Life  235 The subsumptive constraints account is one piece of a larger story about explanation (Lombrozo 2011, 2012, 2016)  that has affinities to “inference to the best explanation” (IBE) in philosophy (Harman 1965; Lipton 2003). In the case of IBE, the core idea is that explanatory virtues—​such as scope and simplicity—​can inform an inference to which explanation is true. When it comes to our account of learning by explaining, the core idea is that the process of engaging in explanation recruits explanatory virtues as evaluative criteria, and these in turn act as constraints on learning and inference by leading learners to seek and privilege hypotheses that support those virtues. In the language of argument structure, the explanatory virtues are like premises or rules of inference (inductive constraints) that favor some conclusions over others.5 To make these claims more concrete, it helps to consider another example, this time drawn from work with five-​year-​old children (Walker, Bonawitz, and Lombrozo 2017). We know from prior work that adults favor explanations for two effects that are “simple” in the sense that they appeal to a common cause over those that appeal to two independent causes (Lombrozo 2007), and that this is driven by a preference for explanations that invoke the fewest unexplained causes, not the fewest causes per se (Pacer and Lombrozo 2017). The preference for common-​cause over independent-​cause explanations has also been found for preschool-​aged children (Bonawitz and Lombrozo 2012). If the process of engaging in explanation recruits explanatory virtues such as simplicity, then we should expect to see a greater role for simplicity as a constraint on inference when children engage in explanation than when they do not. Walker and colleagues tested this prediction by presenting five-​year-​old children with an illustrated garden from which carrots could be sampled, revealing which were healthy and which were “sick.” Children initially saw two sick carrots, one sampled after the other, and were asked either to explain why the plants were sick (i.e., “Why do you think these plants are sick?”) or, in a control condition, to report what they observed (i.e., “Were these plants healthy or sick?”). Crucially, these observations were consistent with two explanations: one appealing to a common cause (both were sick because they 5 One can potentially align explanatory virtues (such as a preference for broad scope or greater simplicity) with either premises or rules of inference, and either approach is consistent with the data reported here. Determining which kind of process or representation in fact governs human behavior suffers from especially acute problems of underdetermination (Anderson 1978). In part for this reason, I often refer to explanatory virtues as “constraints” on learning and inference, as this locution is neutral with respect to the underlying representation or process.

236  The Scientific Imagination were in the area with red soil) and the other to two plausible but independent causes (one was sick because it was in the shade of a tree, another was sick because it was near a broken sprinkler). The five-​year-​olds who were prompted to explain were significantly more likely than those in the control condition to make subsequent inferences in line with the simple explanation.6 It appears that engaging in explanation increased the extent to which they recruited simplicity as an inductive constraint, and that this accounts for the effects of “mere thinking” on learning. In sum, research with both children and adults has documented systematic effects of engaging in explanation on learning and inference—​even in the absence of feedback on the accuracy or quality of explanations. This form of self-​explaining is an instance of LbT that, like thought experimentation, occurs in the absence of evidence obtained outside the head. While effects of explanation on learning are almost certainly driven by multiple mechanisms, the research highlighted here points to one particular facet of learning by explaining with close parallels to IBE: the idea that engaging in explanation recruits inferential constraints (namely, scope, simplicity, and other explanatory virtues) that affect subsequent learning and reasoning. If this account is right, learning by explaining is formally reducible to a kind of argumentation, with explanatory constraints featuring as premises or implicitly in inferential rules.

9.3  The Case Against Psychological Reduction The research reviewed in the previous section suggests that the consequences of learning by explaining can be modeled as an inferential process that weights explanatory considerations—​such as scope and simplicity—​more heavily than they’re weighted when engaged in other processes, such as passively observing or thinking aloud. This naturally raises the question of why explaining is necessary to reach particular conclusions. That is, are LbT processes “psychologically dispensable” in the sense that they can readily be replaced by alternative forms of reasoning, such as explicit argumentation? 6 The study tested four-​year-​olds and six-​year-​olds as well. However, the four-​year-​olds responded at chance, while the six-​year-​olds tended to draw inferences in line with the simpler explanation regardless of whether they were prompted to explain. While these developmental changes are interesting in their own right, and discussed in Walker, Bonawtitz, and Lombrozo 2017, they are not relevant to the point made here.

“Learning by Thinking” in Science and in Everyday Life  237 The answer seems to be no. Most generally, LbT processes are uniquely powerful precisely because they deliver conclusions that appeal to premises or inferential rules that are not otherwise available. In her discussion of Galileo’s famous thought experiment involving falling bodies, Gendler (1998) suggests that engaging in a mental simulation brings in implicit commitments concerning which properties are physically determined. Endorsing aspects of Mach’s view, she writes: We have stores of unarticulated knowledge of the world which is not organized under any theoretical framework. Argument will not give us access to that knowledge, because the knowledge is not propositionally available. Framed properly, however, a thought experiment can tap into it, and—​ much like an ordinary experiment—​allow us to make use of information about the world which was, in some sense, there all along, if only we had known how to systematize it into patterns of which we were able to make sense. (Gendler 1998, 415)

Based on analyses of scientifically trained experts reasoning aloud through novel problems, Clement relatedly suggests that mental simulations begin from "implicit physical intuitions apprehended via imagistic simulations, rather than explicit linguistic propositions or axioms" (2009, 704). Because the bases for thought experimentation need not be represented linguistically, they may not be accessible via other forms of reasoning, such as explicit argumentation (see also Miščević 1992; Nersessian 2007).

These views rest on substantive—but plausible—commitments concerning human cognitive architecture. In particular, they rest on the idea that different mental representations are available to different mental processes. Linguistic representations may be available to the processes involved in explicit argumentation, while other forms of mental content—such as perceptual and motor memories, or explanatory virtues—may only emerge as constraints on reasoning when a thinker is engaged, respectively, in mental simulation or explanation.7 In support of these claims, consider two examples: one involving motor and perceptual simulation, the other explanatory virtues.

7 Note that this is a more radical form of pluralism than that endorsed by popular "two-systems" approaches within psychology (e.g., Evans and Stanovich 2013; Kahneman and Frederick 2002; Sloman 1996), in that many representational formats and processes must be differentiated. However, this form of pluralism need not take on the additional commitments associated with dual systems approaches, e.g., that systems are either automatic or controlled. Moreover, the effects of engaging in LbT processes, such as explanation, should not be equated with a shift from System 1 to System 2. In most of the experiments on explanation reported here, explanation is contrasted with a control task that is similarly deliberative and that requires the use of language. On most taxonomies, both the explanation condition and the control condition fall on the more controlled and deliberative side of the dichotomy.

In a 1999 paper, Schwartz and Black report an experiment in which participants were invited to imagine a narrow cylindrical cup and a wide cylindrical cup of equal heights, each filled with water to the same height. Participants were asked what would happen as the two cups are tilted: would they begin to pour water when tilted to the same angle, or at different angles? And if they would pour at different angles, which would require a greater tilt? When asked explicitly, a minority of participants (18.8%) gave the correct answer: that the narrow cup would need to be tilted farther. When asked to actually tilt the cups, with eyes closed, to the point at which imaginary water would begin to pour, 100% of participants correctly tilted the narrower glass to a greater degree.8 In a subsequent experiment that involved visualizing this motion—without actually holding a glass or moving one's hands—participants were again more accurate than their explicit judgments. This study provides evidence that motor and perceptual simulations can offer information that isn't otherwise available to inform judgments.

8 Schwartz and Black (1999) conducted three versions of this task using differently shaped cups: rectangular, cylindrical, and cone-shaped. The numbers reported here correspond to performance with the cylindrical cup. In all three cases, participants' explicit judgments were considerably less accurate than their tilting behavior.
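For readers who wonder why the narrow cup needs the greater tilt, a minimal geometric sketch may help (a reconstruction offered here, not drawn from Schwartz and Black 1999, and assuming the tilt stays small enough that the water surface still meets both walls of the cup):

\[
h + \frac{d}{2}\tan\theta = H
\qquad\Longrightarrow\qquad
\theta_{\mathrm{pour}} = \arctan\!\left(\frac{2(H - h)}{d}\right),
\]

where \(d\) is the cup's inner diameter, \(H\) its height, \(h\) the fill depth, and \(\theta_{\mathrm{pour}}\) the tilt at which water first reaches the rim. With \(H\) and \(h\) equal across cups, a smaller \(d\) yields a larger \(\theta_{\mathrm{pour}}\), so the narrower cup must indeed be tilted farther.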

As a second example, consider a finding from Pacer and Lombrozo (2017). In a series of experiments, participants were asked to provide the most satisfying explanation for an alien's two symptoms, where the viable options contrasted two plausible metrics for simplicity in causal explanations: "node" simplicity, according to which the simpler explanation is the one that invokes fewer causes, and "root" simplicity, according to which the simpler explanation is the one that invokes fewer unexplained causes. Participants reliably chose explanations that were simpler by the root metric but not by the node metric, and treated root simplicity as a virtue commensurate with probabilistic information. Yet when asked to justify their explanation choices, participants almost never appealed to a notion like simplicity or parsimony, and never identified the virtue that seemed to actually guide judgments: reducing the number of unexplained causes. This suggests that the explanatory constraint invoked through explanation—in this case a preference for explanations with fewer unexplained causes—was not available to explicit reason in a way that would likely inform explicit argumentation or other explicit forms of inference.
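To make the two metrics concrete, here is a small illustrative sketch in Python; the causal structures below are invented for illustration and are not the stimuli used by Pacer and Lombrozo (2017).

# Each candidate explanation is a toy causal graph: every cause maps to the
# causes that explain it, and an empty list marks an unexplained "root" cause.

def node_simplicity(graph):
    """Node metric: how many causes the explanation invokes."""
    return len(graph)

def root_simplicity(graph):
    """Root metric: how many of those causes are left unexplained."""
    return sum(1 for parents in graph.values() if not parents)

# Explanation A: two independent diseases, one per symptom.
explanation_a = {"disease_1": [], "disease_2": []}

# Explanation B: the same two diseases, both produced by one underlying condition.
explanation_b = {"condition": [], "disease_1": ["condition"], "disease_2": ["condition"]}

for label, graph in [("A", explanation_a), ("B", explanation_b)]:
    print(label, "causes:", node_simplicity(graph), "unexplained roots:", root_simplicity(graph))

# A has 2 causes and 2 roots; B has 3 causes but only 1 root. B is worse by the
# node metric and better by the root metric, and the root metric is what
# participants' choices tracked.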

These examples support the psychological commitments implicit in proposals by Mach, Gendler, and others. They also suggest a modest sense in which LbT processes, such as mental simulation and self-explanation, can offer something new: they create a representation with novel affordances, one that's newly available to processes of explicit reasoning and argumentation, no matter that in some sense the relevant knowledge was there all along.9

9 Gendler argues for a stronger sense in which scientific thought experiments can yield something new. Regarding Galileo's thought experiment involving falling bodies, she writes: "The thought experiment that Galileo presents leads the Aristotelian to a reconfiguration of his conceptual commitments of a kind that lets him see familiar phenomena in a new way. What the Galilean does is provide the Aristotelian with conceptual space for a new notion of the kind of thing natural speed might be: an independently ascertainable constant rather than a function of something more primitive (that is, rather than a function of weight). It is in this way, by allowing the Aristotelian to make sense of a previously incomprehensible concept, that the thought experiment has led him to a belief that is properly taken as new" (1998, 412).

9.4  The Case Against Epistemic Reduction

So far I've argued that there is a sense in which learning by thinking is formally reducible to argumentation, but that psychological reality is such that LbT processes can sometimes deliver conclusions that could not have been reached through explicit argumentation. In brief, LbT processes provide access to constraints on learning and inference—what can be thought of as premises or inference rules—that are not available through explicit argumentation. These aspects of the chapter correspond, respectively, to the case for formal reduction and against psychological reduction. We can now turn to epistemic reduction. That is, do the conclusions delivered through LbT processes have the same epistemic status as those of the corresponding formal arguments?

Philosophers have debated this question for the case of thought experiments. Advocates of the "argument view," such as Norton (1996), naturally endorse epistemic reduction. Good thought experiments correspond to good arguments, bad thought experiments to bad arguments. A thought experiment is precisely as epistemically powerful as its corresponding argument. Others, such as Gendler (1998), argue that some epistemic force is lost in translation. For Gendler, this is in part because psychological reduction fails. She writes: "Even if it could be replaced with an equally effective argument, the justificatory force of a thought experiment might still be based on its capacity to make available in a theoretical way those tacit practical commitments which enable us to negotiate the physical world" (1998, 415). To the extent those tacit commitments are themselves justified, they offer some justificatory force we can't otherwise achieve, as it's the very process of thought experimentation that makes those tacit commitments available.

Although the two perspectives just described differ considerably, they share a basic assumption. On both views, thought experiments are justified to the extent the commitments they invoke (however implicitly) are justified—and, presumably, to the extent the inference rules they use are truth-preserving. This seems intuitive enough, but it isn't the only way to approach the epistemic value of LbT processes. In particular, LbT processes could potentially yield justified conclusions even when the premises they invoke are false. More radical still, engaging in certain forms of LbT could be epistemically beneficial (in the sense that they foster justified beliefs as downstream consequences) even when the immediate conclusions they deliver are false.

As a candidate instance of this first possibility, consider an account of thought experiments offered by Hayley Clatterbuck (2013). Clatterbuck's account rests on a type of inductive inference that she calls "Dewey induction," following a distinction articulated by Peter Godfrey-Smith (2011). In Dewey inductions, generalizations from known to unknown cases derive their force not from the statistical properties of a sample (for instance, that it is large and that sampling was random) but from the characteristics of the known case: it must be representative of its kind. Clatterbuck writes that some thought experiments are "instances par exemplar of Dewey inductions," where their force derives from their "ability to generate an inductive argument that does not depend on enumerative induction" (2013, 320). To generate thought experiments that support Dewey inductions, the reasoner first simulates a phenomenon known from experience, and then idealizes the case to remove contingent details, thereby (in a successful thought experiment) rendering it representative of its kind, and a good basis for generalizing to novel instances. The idealization step is central to Clatterbuck's argument, and it also provides the crucial link to the point I aim to make here, as idealizations, in an important sense, are fictions.10 If Clatterbuck's account is right, then thought experiments can sometimes yield justified conclusions, even though some of the commitments they import depend on a process of idealization that deliberately distorts what we've actually observed from direct experience. Their epistemic value might not derive—at least not directly—from the truth of implicit commitments they invoke.

10 It's not clear whether Clatterbuck herself takes idealizations to be fictions. Her paper assumes that some idealizations can be "better" than others, but this kind of evaluation is consistent with the view that idealizations deliberately mis-describe the world. She certainly does suggest that idealization involves removing information from experientially familiar cases. Others who take idealizations or scientific models to incorporate fictional elements include Frigg 2010, Godfrey-Smith 2009, Levy 2015, and Toon 2010.

Consider now the more radical possibility alluded to before: that in some cases LbT could be epistemically beneficial not only when the commitments invoked are false but also when the immediate conclusion supplied is false. To see how, let's return to the case of simplicity in explanation. One justification for favoring simpler explanations comes from Newton, who writes, in the Principia Mathematica, that "we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances . . . for Nature is pleased with simplicity, and affects not the pomp of superfluous causes" (Newton [1687] 1964, 398). In other words, we should favor explanations that involve fewer causes, and this is justified because the world is itself simple. The constraint invoked through explanation—effectively, "simpler is more likely"—is epistemically warranted (so Newton seems to imply) because it is true. This defense of simplicity is consistent with epistemic reduction: the justification for an LbT conclusion derives from the justification for the premises (implicitly) invoked, as in an argument.

Contrast this approach to simplicity with that developed by Kevin Kelly (2007). Kelly formalizes a different metric for simplicity, and he demonstrates that under appropriate assumptions, favoring simpler hypotheses will lead to the right answer with a smaller number of mind-changes. On this view, there's epistemic value to favoring simplicity: it gets us to true beliefs more efficiently. But insofar as there's an epistemic justification for favoring simplicity, it doesn't require an assumption that simpler hypotheses are more likely. Instead, the benefits are further downstream: favoring simplicity helps us get to true beliefs . . . eventually. A psychological mechanism that implements this process could therefore guide us to true beliefs, even though the commitments embedded in the inferential process that generates those beliefs—"simpler is better"—need not be themselves "true" in the sense that they directly describe or resemble the world, and even though the outcome of favoring simpler explanations will often be a false (but temporary) belief.
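Kelly's formal results are well beyond the scope of this chapter, but a toy simulation can convey the flavor of "getting to the truth with fewer mind-changes." The setup below is invented for illustration and is not Kelly's framework: hypotheses say how many effects a process will ever produce, and the Ockham learner always conjectures the simplest hypothesis consistent with the data, namely the count observed so far.

def ockham_learner(stream):
    """Yield the learner's conjecture after each observation (1 = new effect seen)."""
    count = 0
    for observation in stream:
        count += observation
        yield count  # conjecture: "exactly this many effects, ever"

def retractions(conjectures):
    """Count how many times the learner changes its mind."""
    changes, previous = 0, None
    for conjecture in conjectures:
        if previous is not None and conjecture != previous:
            changes += 1
        previous = conjecture
    return changes

# A world in which three effects eventually turn up, scattered in time:
stream = [0, 1, 0, 0, 1, 0, 1, 0, 0]
print(retractions(ockham_learner(stream)))  # -> 3: one retraction per surprise

# The Ockham learner's conjectures are false for long stretches (it says "one
# effect, ever" while a second is still to come), yet it converges to the truth,
# and on Kelly's analysis leaping ahead of the data can only add worst-case
# retractions. The point gestured at here: the simplicity preference pays off
# downstream without assuming the world itself is simple.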

Clatterbuck's and Kelly's positions help sketch out the possibility that LbT processes could have positive epistemic consequences even when the premises they invoke are false, and even when the conclusions they deliver (at least in the short term) are false. The proposal has some empirical support as well. Here, again, the most compelling evidence comes from the case of learning by explaining. In many cases, learning by explaining has beneficial effects because the constraints invoked through explanation accurately mirror the structure of what's being learned (e.g., Williams et al. 2012). Explaining thus helps a learner arrive at the correct explanation, and having the correct explanation accounts for many of the beneficial consequences of having engaged in explanation. But in some cases there are benefits to engaging in explanation even when the explainer fails to generate an explanation, or generates an explanation that is false. How could this be?

One example of this phenomenon comes from research reported by Chi et al. (1994). In their experiment, eighth-grade students studied a text about the human circulatory system, with some students prompted to explain to themselves (without feedback) after each line of the text and others prompted to read through the materials twice. The researchers documented learning benefits for those prompted to explain, even though the explanations were often incorrect. They suggest that generating an explanation "objectifies" the incorrect commitments it embodies in a way that allows learners to recognize a conflict between those commitments and the accurate text they're simultaneously reading. Recognizing the conflict can, in turn, initiate a process of belief revision. Interestingly, this proposal seems to presuppose a kind of psychological irreducibility, as the commitment that conflicts with the text becomes available for scrutiny (and rejection) when a learner engages in explanation, but not when a learner engages in a control task. It also shares characteristics with accounts of "destructive" thought experiments, which help render inconsistencies apparent (e.g., Brown 1991). The critical point for our discussion of epistemic irreducibility is this: the benefits of engaging in an LbT process need not derive from the immediate conclusion that the LbT process renders available (the correct or incorrect explanation). Epistemic benefits can also occur as downstream consequences, in this case a metacognitive awareness of inconsistency that triggers belief revision, eventually leading to more accurate beliefs.

As a second example, consider findings from Walker et al. (2014). In their first study, three- to five-year-old children were presented with sets of three blocks, where a target block in each set had a causal property (it made a toy play music when placed on top of it) and a perceptual property (e.g., a yellow exterior). The remaining two blocks each shared one property with the target: the "causal match" made the toy play music but was a different color; the "perceptual match" was the same color but did not make the toy play music. Children saw each block go on the toy, with half the children prompted to explain why the block did (or did not) make the toy play music, and the other half, in a control condition, prompted to report whether the block did (or did not) make the toy play music. After all three blocks had been placed on the toy, one after another, the experimenter revealed that the target block had a hidden internal part (a red pin). Children were asked to indicate which of the other blocks—the causal match or the perceptual match—was more likely to share the internal part. Replicating prior work (Sobel et al. 2007), the study showed that the older children were more likely than younger children to generalize the internal part to the causal match over the perceptual match. In addition, however, those children who had been prompted to explain were significantly more likely than those in the control condition to generalize to the causal match over the perceptual match.

Here's one account of these results. When asked to explain why blocks did or did not make the toy play music, children were more likely to posit unobserved causal mechanisms, and therefore to expect similarities in internal structure that tracked causal affordances. In fact, many children did generate explanations that appealed to internal parts or mechanisms (e.g., "because it has something inside of it"; "because it has batteries"), and children who generated such explanations were more likely than those in the control condition to generalize the internal part on the basis of causal rather than perceptual similarity. But even children who produced other kinds of explanation—such as those appealing to appearance ("because it's purple") or kind membership ("because it's a music-maker")—were more likely than children in the control condition to generalize on the basis of causal over perceptual similarity. What was explanation doing in such cases? It seemed to generate a more "adult-like" pattern of generalization, no matter that the explanations themselves didn't point to internal parts.

Wilkenfeld and Lombrozo (2015) identify a variety of mechanisms that could be operating in such cases. Beyond the broadly metacognitive benefits suggested by Chi and colleagues, explaining could engage other processes that have positive downstream consequences, such as comparison (Edwards et al. 2019) and abstraction (Walker and Lombrozo 2017; Walker et al. 2014; Williams and Lombrozo 2010), both of which facilitate the extraction and application of rules and general schemata (Gentner and Medina 1998). These processes could in turn affect reasoning, even if the immediate output of the LbT process—the explanation—is not itself veridical or the basis for an appropriate inference.

Wilkenfeld and Lombrozo coin the term "explaining for the best inference" (EBI) in characterizing a practice that encompasses such cases. Unlike inference to the best explanation (IBE), EBI focuses on the downstream consequences of engaging in explanation, not the immediate inferential consequences of privileging particular explanations. EBI therefore suggests a kind of epistemic question different from that traditionally posed in the case of thought experimentation. Rather than focusing on whether the conclusions delivered by LbT processes are justified, where their justification derives from the epistemic status of the premises and inference rules involved in their generation, we can instead ask whether the practice of engaging in LbT processes is, on the whole, epistemically valuable in the sense that, downstream, it leads us to a better suite of beliefs.

The shift from thinking about the epistemic status of LbT commitments to LbT practices has parallels in the literature on modeling in science. Specifically, Levy (2012) introduces a useful distinction between two approaches to scientific models. His aim is to explain how models can be fictions while operating with realist commitments. Toward this end, he introduces "indirect realism" and "modeling as metaphor." Levy's first option—indirect realism—holds that scientific models should be understood as wholly fictional: the entities and relations they posit are imaginary, not real. The model is thus an object of scientific study in its own right, but comparing the model to the system it targets offers "a way of converting knowledge about the model to knowledge about the world" (2012, 742). For instance, one might regard models as sharing a similarity relation to their targets (e.g., Weisberg 2012), such that we can generalize features of the model to the world when the appropriate similarity relations obtain.

On Levy's second option, "modeling as metaphor," models aren't wholly fictional: they are about real entities and relations. However, we know that models often simplify and idealize target systems: they deliberately "mis-describe," and are in this weaker sense fictional. The interesting move comes in reconciling this approach to modeling with a form of realism. Levy suggests that rather than regarding the aim of a realist picture of science to be the production of "true" theories and models, we can shift to a picture in which the aim is the production of true beliefs. He writes:

In most formulations of realism the locus of the doctrine is seen as the content of the theory or model. The view is that scientists aim to attain true models. But we might also view realism as a doctrine concerning true beliefs. The idea would be, roughly, that realism is the doctrine that science aims to allow us to acquire knowledge about the world. . . . [I]f realism is a doctrine about knowledge, then theoretical science can be successful, from the realist's point of view, even if its immediate products, e.g. models, are false. Deliberate distortions of the truth are fine, so long as models allow us to form (and justify) correct beliefs about the world. (2012, 743)

In other words, we can shift from thinking about models as epistemically valuable to the extent they accurately describe or approximately resemble the world to instead considering their epistemic value in terms of their role in supporting the acquisition of true beliefs. A model can be false, but a downstream consequence of engaging in the process of modeling can be the production of true beliefs.

Not all instances of scientific modeling involve LbT: models are often updated in light of observations "outside the head," and they're often employed in simulations implemented on computers, not human minds. Nonetheless, learning from models and learning by thinking share obvious parallels, and focusing on cases where these practices are beneficial—despite fiction, idealization, or inaccuracy—makes Levy's suggested account of realism attractive for the account of LbT sketched here. Just as "metaphorical" models can play a role in scientific progress, LbT processes might improve our epistemic potential, even when the commitments they invoke and the conclusions they deliver aren't strictly true. Instead, the practice of engaging in LbT might "allow us to form (and justify) correct beliefs about the world."

In this section, I've sketched a view according to which LbT processes are not epistemically reducible to arguments. Specifically, engaging in LbT can have positive downstream epistemic consequences, but because LbT processes are not psychologically reducible to argumentation, these consequences will not, as a rule, be achieved by substituting explicit argumentation for LbT. For example, engaging in explanation seems to promote comparison and abstraction, and benefits learners even when they fail to arrive at an accurate explanation (Wilkenfeld and Lombrozo 2015); it's doubtful that the corresponding arguments would generate the same effects. What I haven't done is show that LbT processes are guaranteed or even likely to have positive effects. In part this is because each LbT process recruits a unique set of constraints, and each will correspondingly require a custom argument for why those constraints will tend to yield particular epistemic consequences in particular contexts. Developing and testing such accounts is beyond the scope of this chapter, but doing so is an important direction for future work.

9.5  Conclusions

Learning by thinking is pervasive in science and in everyday life. While the most celebrated examples—such as Galileo's and Einstein's thought experiments—are rare indeed, their more mundane counterparts, including mental modeling and simulation, explaining to oneself, and engaging in analogical reasoning, occur on a regular basis. Drawing on philosophical work on thought experimentation and empirical work on learning by explaining, I've suggested answers to three questions about the reducibility of LbT to argumentation. Specifically, I've argued that LbT processes are formally reducible to their corresponding arguments, but that they are neither psychologically nor epistemically reducible to their corresponding arguments. The case against psychological reduction offers a modest sense in which LbT processes offer something "new": they make commitments available to new cognitive processes, such as explicit verbal reasoning. The case against epistemic reduction, while more tentative, offers a new way of approaching the epistemic value of LbT practices. Rather than focusing on whether particular commitments or conclusions are warranted, we can consider whether particular practices are warranted by virtue of their downstream consequences. Further developing and testing this proposal will surely require more thinking and more argumentation—with observations generated both inside and outside the head.

Acknowledgments

This work was supported by a McDonnell Scholar Award in Understanding Human Cognition, as well as NSF grant DRL-1056712. I am also grateful to Peter Godfrey-Smith and Arnon Levy for helpful comments on a draft from September 2015, and to Mike Stuart for helpful conversation and comments in 2016.


References

Anderson, J. R. (1978). "Arguments Concerning Representations for Mental Imagery." Psychological Review 85: 249–277.
Bonawitz, E. B., and Lombrozo, T. (2012). "Occam's Rattle: Children's Use of Simplicity and Probability to Constrain Inference." Developmental Psychology 48: 1156–1164.
Brown, J. R. (1991). Laboratory of the Mind: Thought Experiments in the Natural Sciences. 2nd ed. London: Routledge.
Brown, J. R., and Fehige, Y. (2014). "Thought Experiments." In The Stanford Encyclopedia of Philosophy (Fall 2014 ed.), edited by E. N. Zalta. http://plato.stanford.edu/archives/fall2014/entries/thought-experiment.
Chi, M. T. H., De Leeuw, N., Chiu, M., and LaVancher, C. (1994). "Eliciting Self-Explanations Improves Understanding." Cognitive Science 18, no. 3: 439–477.
Clatterbuck, H. (2013). "The Epistemology of Thought Experiments: A Non-eliminativist, Non-Platonic Account." European Journal for Philosophy of Science 3, no. 3: 309–329.
Clement, J. J. (2009). "The Role of Imagistic Simulation in Scientific Thought Experiments." Topics in Cognitive Science 1, no. 4: 686–710.
Edwards, B. J., Williams, J. J., Gentner, D., and Lombrozo, T. (2019). "Explanation Recruits Comparison in a Category-Learning Task." Cognition 185: 21–38.
Evans, J. St. B. T., and Stanovich, K. E. (2013). "Dual-Process Theories of Higher Cognition: Advancing the Debate." Perspectives on Psychological Science 8, no. 3: 223–241.
Fonseca, B. A., and Chi, M. T. (2011). "Instruction Based on Self-Explanation." In Handbook of Research on Learning and Instruction, edited by R. E. Mayer and P. A. Alexander, 296–321. New York: Routledge.
Frigg, R. (2010). "Models and Fiction." Synthese 172, no. 2: 251–268.
Gendler, T. S. (1998). "Galileo and the Indispensability of Scientific Thought Experiment." British Journal for the Philosophy of Science 49, no. 3: 397–424.
Gentner, D., and Medina, J. (1998). "Similarity and the Development of Rules." Cognition 65, no. 2: 263–297.
Gentner, D., and Smith, L. (2012). "Analogical Reasoning." In Encyclopedia of Human Behavior, edited by V. S. Ramachandran, 1: 130–136. Oxford: Elsevier.
Godfrey-Smith, P. (2009). "Models and Fictions in Science." Philosophical Studies 143, no. 1: 101–116.
Godfrey-Smith, P. (2011). "Induction, Samples, and Kinds." In Carving Nature at Its Joints: Natural Kinds in Metaphysics and Science, edited by M. H. Slater, M. O'Rourke, and J. K. Campbell, 33–52. Cambridge, MA: MIT Press.
Harman, G. H. (1965). "The Inference to the Best Explanation." Philosophical Review 74, no. 1: 88–95.
Hegarty, M. (2004). "Mechanical Reasoning by Mental Simulation." Trends in Cognitive Sciences 8, no. 6: 280–285.
Kahneman, D., and Frederick, S. (2002). "Representativeness Revisited: Attribute Substitution in Intuitive Judgment." In Heuristics and Biases: The Psychology of Intuitive Judgment, edited by T. Gilovich, D. W. Griffin, and D. Kahneman, 49–81. Cambridge: Cambridge University Press.
Kelly, K. T. (2007). "A New Solution to the Puzzle of Simplicity." Philosophy of Science 74, no. 5: 561–573.

Kon, E., and Lombrozo, T. (in press). "Scientific Discovery and the Human Drive to Explain." In Advances in Experimental Philosophy of Science, edited by R. Samuels and D. Wilkenfeld. New York: Bloomsbury Press.
Kuhn, T. ([1964] 1977). "A Function for Thought Experiments." In The Essential Tension: Selected Studies in Scientific Tradition and Change. Chicago: University of Chicago Press.
Legare, C. H., and Lombrozo, T. (2014). "Selective Effects of Explanation on Learning During Early Childhood." Journal of Experimental Child Psychology 126: 198–212.
Levy, A. (2012). "Models, Fictions, and Realism: Two Packages." Philosophy of Science 79, no. 5: 738–748.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172, no. 3: 781–798.
Lipton, P. (2003). Inference to the Best Explanation. London: Routledge.
Lombrozo, T. (2007). "Simplicity and Probability in Causal Explanation." Cognitive Psychology 55: 232–257.
Lombrozo, T. (2011). "The Instrumental Value of Explanations." Philosophy Compass 6, no. 8: 539–551.
Lombrozo, T. (2012). "Explanation and Abductive Inference." In Oxford Handbook of Thinking and Reasoning, edited by K. J. Holyoak and R. G. Morrison, 260–276. Oxford: Oxford University Press.
Lombrozo, T. (2016). "Explanatory Preferences Shape Learning and Inference." Trends in Cognitive Sciences 20, no. 10: 748–759.
Lombrozo, T., and Walker, C. M. (n.d.). "Learning by Thinking." Manuscript.
Mach, E. (1897). "Über Gedankenexperimente." Zeitschrift für den physikalischen und chemischen Unterricht 10: 1–5.
Mach, E. (1905). "Über Gedankenexperimente." In Erkenntnis und Irrtum, 181–197. Leipzig: Johann Ambrosius Barth. Translated by J. McCormack in Knowledge and Error, 134–147. Dordrecht: Reidel.
Mercier, H., and Sperber, D. (2011). "Why Do Humans Reason? Arguments for an Argumentative Theory." Behavioral and Brain Sciences 34, no. 2: 57–74.
Miščević, N. (1992). "Mental Models and Thought Experiments." International Studies in the Philosophy of Science 6, no. 3: 215–226.
Nersessian, N. J. (2007). "Thought Experimenting as Mental Modeling: Empiricism Without Logic." Croatian Journal of Philosophy 7, no. 20: 125–161.
Newton, I. ([1687] 1964). The Mathematical Principles of Natural Philosophy (Principia Mathematica). New York: Citadel Press.
Norton, J. D. (1996). "Are Thought Experiments Just What You Thought?" Canadian Journal of Philosophy 26, no. 3: 333–366.
Pacer, M., and Lombrozo, T. (2017). "Ockham's Razor Cuts to the Root: Simplicity in Causal Explanation." Journal of Experimental Psychology: General 146, no. 12: 1761–1780.
Powers, L. H. (1978). "Knowledge by Deduction." Philosophical Review 87, no. 3: 337–371.
Schwartz, D. L., and Black, T. (1999). "Inferences Through Imagined Actions: Knowing by Simulated Doing." Journal of Experimental Psychology: Learning, Memory, and Cognition 25, no. 1: 116.
Shepard, R. N. (2008). "The Step to Rationality: The Efficacy of Thought Experiments in Science, Ethics, and Free Will." Cognitive Science 32: 3–35.
Sloman, S. A. (1996). "The Empirical Case for Two Systems of Reasoning." Psychological Bulletin 119, no. 1: 3–22.

Sobel, D. M., Yoachim, C. M., Gopnik, A., Meltzoff, A. N., and Blumenthal, E. J. (2007). "The Blicket Within: Preschoolers' Inferences About Insides and Causes." Journal of Cognitive Development 8, no. 2: 159–182.
Stuart, M. T. (2016). "Norton and the Logic of Thought Experiments." Axiomathes 26, no. 4: 451–466.
Toon, A. (2010). "Models as Make-Believe." In Beyond Mimesis and Convention: Representation in Art and Science, edited by R. Frigg and M. C. Hunter, 71–96. Dordrecht: Springer.
Walker, C. M., Bonawitz, E., and Lombrozo, T. (2017). "Effects of Explaining on Children's Preference for Simpler Hypotheses." Psychonomic Bulletin & Review 24, no. 5: 1538–1547.
Walker, C. M., and Lombrozo, T. (2017). "Explaining the Moral of the Story." Cognition 167: 266–281.
Walker, C. M., Lombrozo, T., Legare, C., and Gopnik, A. (2014). "Explaining Prompts Children to Privilege Inductively Rich Properties." Cognition 133: 343–357.
Walker, C. M., Lombrozo, T., Williams, J. J., Rafferty, A. N., and Gopnik, A. (2017). "Explaining Constrains Causal Learning in Childhood." Child Development 88, no. 1: 229–246.
Weisberg, M. (2012). Simulation and Similarity: Using Models to Understand the World. Oxford: Oxford University Press.
Wilkenfeld, D. A., and Lombrozo, T. (2015). "Inference to the Best Explanation (IBE) Versus Explaining for the Best Inference (EBI)." Science & Education 24, nos. 9–10: 1059–1077.
Williams, J. J., and Lombrozo, T. (2010). "The Role of Explanation in Discovery and Generalization: Evidence from Category Learning." Cognitive Science 34: 776–806.
Williams, J. J., and Lombrozo, T. (2013). "Explanation and Prior Knowledge Interact to Guide Learning." Cognitive Psychology 66: 55–84.
Williams, J. J., Lombrozo, T., and Rehder, B. (2013). "The Hazards of Explanation: Overgeneralization in the Face of Exceptions." Journal of Experimental Psychology 142: 1006–1014.

10
Is Imagination Constrained Enough for Science?
Deena Skolnick Weisberg

Science essentially involves imagination. This statement will probably come as a surprise to most people, who are used to thinking of science and imagination as being in tension. Common wisdom tells us that science deals with cold, hard facts, while imagination lets us build castles in the sky. But this is a false dichotomy, and the first section of this chapter will explain why the imagination is an essential part of science. Despite these arguments, there is a major objection to this view—​namely, that the imagination is too unconstrained to be used in serious scientific reasoning. The remainder of this chapter will respond to this objection, using empirical studies of how the imagination works to demonstrate the ways in which imaginative activities are indeed constrained enough for use in science.

10.1  Science and Imagination

In order to fully explain the connection between science and the imagination, it is necessary to begin with a brief working definition of each. Although there is naturally some debate within philosophy and psychology about the precise nature of each definition, nothing in the rest of the discussion turns on these details. For the purposes of this chapter, we can consider science to be a practice that is aimed at discovering the way the world works, a practice that uses tools such as experiments and models to achieve this goal (Godfrey-Smith 2003). Note that this definition emphasizes science as an activity rather than as a set of content areas, which will help it to align more clearly with the activity of imagination.


The imagination is the mental ability that allows us to represent possible worlds (Lewis 1978) or any entities or events that are not currently present to our senses. As such, imagination is essentially a form of counterfactual reasoning (Gopnik and Walker 2013; Weisberg and Gopnik 2013). Again, the tension between science and imagination seems to arise within these definitions: science is about discovering the workings of the real world, and the imagination is concerned with issues that are, by definition, outside of the real world.

Nevertheless, the imagination is used in the service of science in some cases, the most prominent of which is thought experiments. Sometimes, when we want to know why things happen or when we want to figure out what will happen in a given circumstance, experimentation is not an option. This can happen because we want to investigate something that is impossible to observe, such as extremely long stretches of time. It can also happen because the time and resources are not available to implement the experiment, as when we want to investigate the psychological effects of alternative social structures. When these circumstances arise, it is necessary to use a thought experiment—to engage with possibilities rather than with reality. When we do so, we take on a counterfactual premise, such as the existence of a frictionless plane or a world in which no one can tell a lie. Psychologically adopting such a premise necessitates the mental creation of a possible world in which the premise holds true. Once we have created this world, we can manipulate it in various ways and use it to shed light on real-world structures. Considered in this way, what thought experiments require is the deployment of the imagination in the service of science.

But thinking about this example seems to merely recapitulate the problem that we began with, that science and imagination are in tension. To be sure, thought experiments are sometimes useful tools, but for the most part the practice of science involves carefully controlled experiments. Scientists usually turn to thought experiments only when live experimentation is impossible or impractical. Given that, how can we best characterize the use of imagination in science so that it will apply more generally? In order to explain the essential connection between science and imagination for the majority of scientific practice, it is important to note that we use the imagination to think about possible worlds regardless of whether those possible worlds are highly similar to reality or very different. So we use imagination when we think about fantastical fiction, as when we visit Middle Earth, but we also use imagination when we think about something we should have done yesterday or what might happen in the near future.

Although imagination can be used to think about possible worlds that might not exist, it is also the tool we use when we think about possible worlds that are extremely close to reality, as when we make plans for the future or regret a single past action (e.g., Beck et al. 2006; Byrne 2005). The imagination is also sometimes necessary in order to conceptualize aspects of reality that are true but conflict with our intuitive beliefs: We naturally experience the Earth as flat, so it takes some imagination to think of it as round, and in general to transcend our restricted epistemic viewpoints. Realizing that imagination can be used for representing these more mundane possibilities is the key to understanding how it is used in science.

For example, scientists often form hypotheses for what might happen as the result of a particular experiment or intervention, or think of possible explanations for a pattern of data. These activities involve imaginative thinking, since these are all possibilities and do not necessarily fully reflect reality or our experience of it. A scientist designing an experiment, for example, might want to work through the possible results of several types of study designs. If she manipulates the environment of one population of bacteria by making their incubator two degrees warmer, then this would put a different sort of environmental pressure on the bacteria than making their incubator two degrees colder, or ten degrees warmer, and so on. This scientist uses her imagination to think about what might happen under these different circumstances, and then uses what she has learned from this process to choose her experimental design.

This is also the case for predictions about uncertain future events, which can be thought of as little statements that describe possible worlds. The scientist in our example does not yet know exactly what will happen in any of these experiments, but she has some idea about what might happen. As in the case of thought experiments, this scientist can adopt a counterfactual premise (i.e., construct a possible world in which her hypothesis holds true) and work through the possible results of each experiment within this world. This premise may not be as far-fetched as those in the thought experiments, but the process is the same: create a representation that does not necessarily match reality and work productively through the causal chains that would obtain in that world. This process requires imaginative thinking because these mental representations may not match reality. Additionally, their users know that they may not match reality; there is no delusion involved and the scientist is not confused about whether the possible outcome has really happened.

These arguments are meant to show that there is a deep connection between imaginative and scientific reasoning. Many aspects of science rely on the imagination, including the capacity to make suppositions, form hypotheses, and construct predictions that may not reflect how things really work. Without the ability to represent these kinds of possibilities or possible hidden structures of reality, the practice of science would be impossible.

10.2  The Problem of Constraint

So far, I've argued that imagination is a key mental tool that is necessary for doing science. But this statement comes with a problem, one that gets us back to the intuitive dichotomy between science and imagination that we began with. The problem is one of constraint—namely, the imagination seems to be too unconstrained a tool for drawing scientific conclusions. Indeed, some philosophers have argued against the use of thought experiments for this reason: our intuitions seem to be too unreliable for us to draw scientifically sound conclusions from them (Wilkes 1988; see discussion in Brown and Fehige 2014). In general, it seems to be a problem that the same tool that needs to be precise and accurate for constructing scientific hypotheses can be used for constructing magic rings, time travel, and teleportation.

One response to this worry is that this kind of unrealism can sometimes be useful in science. As noted previously, sometimes the information we want to obtain about the world cannot be accessed through physical experiments, as when we want to know what might have happened if a key evolutionary transition had turned out differently, or what kind of language might emerge if a group of babies was raised without linguistic input. In these cases, the premises of the thought experiments are unrealistic, and yet we might be able to draw interesting conclusions from them, which could in turn inform experiment and theory.

More strongly, one could argue that unrealistic thought experiments are sometimes necessary in science. Two theories might make the same predictions for all the cases it is possible to observe, and only by thinking of highly unrealistic or unobserved possibilities could we distinguish them. One example of such a case might be the difference between Newtonian and Einsteinian theories of physics, which generally match for the usual sets of objects but which diverge when considering extremely large objects or alternative timescales (see Weisberg and Gopnik 2013).

Finally, the imagination's ability to represent fantastical possibilities is often necessary in science because the truth of the world diverges so radically from our intuitive sense of how the world works. It does not seem to be the case, at first blush, that most solid objects are actually made of empty space, or that our planet is hurtling through space in its orbit around the sun at 30 kilometers per second. But these are the correct answers, and accessing them requires imagining things as being different from how they seem. In general, then, being able to think of fantastical possibilities is helpful in science. The correct answer often doesn't match our intuitive understanding of the structure of the world, and we must seek somewhat unrealistic explanations for why things are the way they are.

As noted previously, however, most of science does not look like this. These cases showing the utility of unrealistic thought experiments or wildly divergent possibilities are dramatic, but they are much more rare than the everyday business of running experiments and considering alternative hypotheses. For most practicing scientists, these possibilities do need to be tied firmly into the structure of reality in order to make sure that the hypothesis has a chance of matching at least part of what's really going on. So the big problem remains—the imagination seems to be an unreliable tool for science. When we imagine, we may create worlds that are too unconstrained or too underspecified to really serve a useful purpose.

10.3  An Empirical Refutation

The problem of constraint is a worry about the use of imagination in science because the imagination does not seem to respond appropriately to the structure of the real world, as science demands. The response to this worry comes from empirical evidence showing that the imagination does respond appropriately (and, in some cases, excessively) to the structure of the real world. In a series of experiments, my collaborators and I have shown that the possible worlds that we create through the use of our imaginative capacities are constrained in ways that make them appropriate tools for deployment in scientific argumentation. These studies examine adults' and children's responses to fictional stories, another type of imagined activity, and ask how participants' real-world background knowledge combines with the premises of a story text to create a full representation of a possible world.

In one study (Weisberg and Goodstein 2009), adult participants read stories that varied in their degree of realism: one was entirely realistic although fictional, one incorporated a few fantastical elements but generally still took place in a realistic world, and one contained many fantastical elements. After reading each story, participants were asked to rate whether a set of real-world facts were true in the story world. The key facts in this set were true in the real world but were not referred to in the stories that participants read. Far from showing fantastical tendencies, subjects judged that these real-world facts generally held true in the stories, even ones with fantastical elements. This response pattern was stronger for mathematical facts and scientific facts having to do with the structure of the world (e.g., the sun rises in the east and sets in the west) than for social conventions or facts that were contingent on particular real-world events (e.g., Washington, DC, is the capital of the United States). In addition, participants tended to import fewer real-world facts into fantastical stories than into realistic ones, demonstrating sensitivity to the fictional context. These results indicate that our fictional representations rely heavily on our real-world knowledge. More important for the current purpose, these results strongly suggest that imaginative scenarios look mostly like reality, hence are constrained enough to serve a useful role in scientific theorizing.

Developmental work confirms that this reality-proneness is a default assumption that children make. Surprisingly, their constructed worlds tend to be even more tied to reality than those of adults (Weisberg et al. 2013; Weisberg 2014). In a version of the adult study described earlier, preschool-aged children were presented with a story that was either realistic or fantastical. At various points in the story, children were told that the next page of the book had fallen out and that they had to choose which event should come next: an ordinary one with no violations of reality (e.g., the character walks to the ice cream store) or an impossible one that violated the structure of reality in various ways (e.g., the character teleports to the ice cream store). Children in this study overwhelmingly chose to fill in the stories with ordinary events, even if the story they heard already contained many fantastical elements. Children even tend to choose the ordinary events when invited to construct their own stories, with no need to match events to prior story structure (Sobel and Weisberg 2014).

Additionally, once within the context of a fictional scenario, children tend to adopt the premises that govern that scenario and draw the appropriate inferences on the basis of these premises. For example, one study showed two- to four-year-olds a pretend scenario in which cats did not say "meow" but instead made the sound of the animal they were addressing (e.g., saying "woof" to a dog and "moo" to a cow). Children readily generalized this rule to a new cat (Van de Vondervoort and Friedman 2014; see also Dias and Harris 1988, 1990). These results show that children understood how this fictional world operated and could accept some reality-inconsistent premises within it. But they did not extrapolate from the existence of a single fantastical rule to assume that this world would be generally fantastical. This body of work demonstrates that even children, who are thought to be wildly imaginative and unconstrained in their thinking, construct imagined worlds that are tied closely to the structure of reality. This in turn provides evidence that imagined worlds are indeed constrained enough for use in science.

While these results strongly suggest that both children and adults would prefer to imagine in realistic ways, they do not yet show that either children or adults are good at doing so. In the studies described earlier, the realistic options were obvious, since they matched the structure of reality. But in science, it is often the case that the structure of reality is opaque, and so imagining in accordance with realistic principles is not nearly as easy. How do our imagined possibilities stick closely to reality for these kinds of cases?

There is as yet little empirical work that bears on this question, but studies by Thomas Ward and his colleagues can begin to suggest how our imaginative faculties are constrained by our real-world knowledge in cases where it is not clear what would be realistic (Brédart et al. 1998; Ward and Sifonis 1997; Ward 1994). To do so, these researchers asked adult participants to draw alien creatures from distant planets. Despite encouragement from the researchers to be as wildly imaginative as possible, most of the aliens tended to look a lot like Earth creatures and to preserve important features of real-life animals, such as bilateral symmetry and sense organs like eyes. Perhaps more surprising, and more relevant to the current issue, the imagination can be constrained in ways that recapitulate the structure of reality. When told to draw creatures with feathers, people also drew wings, beaks, and other bird-like features, even though these hadn't been mentioned in the prompt (Ward 1994, study 2). This result suggests that reality constrains our imagination for novel cases by imposing real-world structures and causal relations. Even when individual facts may be in doubt (e.g., participants did not know what the aliens should look like; biologists may not know how a species will respond to a change in environment), reality sets guidelines for where the imagination should travel.

In fact, these results suggest that the pendulum swings the other way: rather than being too unfettered in our thinking about possibilities, we are actually too tightly constrained by our knowledge of reality. Rather than needing to rein in our imagination and struggle to make sure our imagined scenarios are realistic enough, the real challenge is in being creative, since it takes extra effort to imagine unrealistic scenarios. Indeed, if the scenario is too unrealistic, we might not be able to imagine it at all, a phenomenon known as imaginative resistance (Gendler 2000). This is good news for the use of imagination in science: our knowledge of reality appropriately (and perhaps too tightly) constrains what we're able to imagine.

Although these arguments show that imagination is tied closely enough to reality for science, there is still a more subtle worry about the ways in which imagination is tied to reality. That is, the constraints on imagination are not uniform, and people find certain kinds of counterfactual scenarios easier to imagine than others. Careful work by Ruth Byrne (2005) has demonstrated several ways in which general reasoning biases shape the ways in which people construct counterfactual scenarios. Byrne writes about these tendencies as exploiting natural "joints" or "fault lines" in reality—places where people are particularly likely to see opportunities to construct counterfactuals. For example, people are more likely to change the last event in a causal sequence, even though any change to the sequence would disrupt the outcome. People are also more likely to regret actions that they have taken, rather than inactions: "If only I hadn't . . ." rather than "If only I had . . ." The imagination thus obeys a series of rational principles that govern which types of counterfactuals come to mind most easily for a given situation. Byrne sees this as an advantage: even though we might envision only some counterfactuals and not others, this process follows a limited set of rules that are empirically tractable.

But even if we can learn about the types of constraints people generally place on counterfactual thinking, this raises a problem for the use of imagination in science. The imagination may be constrained, but if these constraints limit the types of counterfactuals we consider, they might interfere with careful scientific reasoning. If there are whole classes of counterfactual or hypothetical scenarios that scientists do not consider, then imagination becomes considerably less useful as a tool for making scientific progress.

To respond to this worry, it is important to note that the imagination is only one of many tools that scientists have at their disposal. The results of imagined scenarios can be checked against real-world data or the outputs of simulations, and additional information can help to show where our intuitive thought processes might be limited. It is also important to note that the imagination is a tool for doing science, and like all tools, it needs to be calibrated. No particular instrument starts off as a direct path to scientific truth without some assumptions about how it works or the kinds of observations it can provide (Hacking 1983); imagination is not unique in this regard. In fact, one might think of the training that is provided in graduate programs as honing one's scientific imagination to think about the kinds of possibilities generally useful for a particular field—that is, calibrating this tool so that it will work properly within a certain body of background knowledge. Finally, Byrne's point is that some imagined scenarios are easier or more natural to imagine than others. But it's not impossible to think of different kinds of counterfactuals. It just takes practice, which enculturation into the scientific enterprise can provide.

10.4  Moving from Imagination to Reality

Even though the imagination may be constrained enough to serve as a precise scientific tool, or can be developed to have the right constraints for this task, its use in the lab still presents a different kind of worry. The studies previously described have shown that when we move from reality to an imagined world, we take a lot of reality with us and use it as the basis for constructing the possible world. Even children base their imagined scenarios on reality, to a large extent. But a second thing that we must do in science is use these imagined representations, like thought experiments and hypothetical scenarios, to tell us something about reality. That is, we need to export information from our imagined world back into the real world to see if this information fits with our observations or to use it to inform experimentation or analysis. Do we do this appropriately?

Research suggests that this process is far more promiscuous than the reverse: we tend to import lots of information from imagination back into reality. For example, one study gave participants fictional stories to read, in which were embedded some true statements and some false ones. Participants were explicitly warned that some of the information in the stories was false.

Despite this warning, they tended to incorporate all of the information in the story into their background knowledge and report these facts on a later memory test (Marsh and Fazio 2006; Potts et al. 1989; Prentice et al. 1997; Wheeler et al. 1999). Related work has shown that people are equally persuaded by claims made in speeches, whether those speeches are labeled as fact or as fiction (Green et al. 2006). On the one hand, this is good news, since it means that we can learn productively from our mental simulations of possible worlds. On the other hand, this could mean that information from imagined scenarios is just incorporated into background knowledge without necessarily being marked as being potentially untrue.

In response to this possibility, there is some evidence that information we receive from fictional sources is encapsulated to a certain extent. That is, although people are more likely to report that a false fact is true after reading it in a story, reaction times indicate that they are faster to do so when the false fact is embedded in a memory test that reminds participants of the story context and slower to do so when it is embedded in a memory test that contains questions about information that did not appear in the story (Potts et al. 1989; see also Green and Donahue 2011). So information presented in imagined contexts might not interfere with our real-world knowledge as much as one might fear. More important, part of the point of being able to imagine is the ability to learn from experiences that we cannot actually have in reality, to expand our scope of experiences to include possible worlds and not just the real world. The occasional confusion of imagined information for truth seems like a mild price to pay for this amazing tool.

10.5  Conclusion The common picture of imagination and science as opposing processes obscures the important fact that these two capacities are deeply linked. Science crucially involves thinking about possibilities, hypotheses, and other scenarios that may or may not match the truth of reality. The imagination is precisely the ability to engage in this kind of thinking, and it is a necessary mental tool for doing science. Although some have worried that this tool is too unconstrained to serve this function well, empirical work from developmental and cognitive psychology shows that this is not the case. Both children and adults tend to imagine worlds that are closely tied to reality and governed by the same laws,

260  The Scientific Imagination although both children and adults can accept the premises of an imagined world and extend them in appropriate ways. And while the imagination is subject to some biases in the types of scenarios it naturally considers, this issue can be recognized and overcome, as with other types of scientific tools. The imagination is thus constrained by our background knowledge and psychological biases, making it an appropriate, and necessary, tool for the practice of science.

References Beck, S. R., Robinson, E. J., Carroll, D. J., and Apperly, I. A. (2006). “Children’s Thinking About Counterfactuals and Hypotheticals as Possibilities.” Child Development 77, no. 2: 413–​423. Brédart, S., Ward, T. B., and Marczewski, P. (1998). “Structured Imagination of Novel Creatures’ Faces.” American Journal of Psychology 111, no. 4: 607–​625. Brown, J. R., and Fehige, Y. (2014). “Thought Experiments.” In The Stanford Encyclopedia of Philosophy (Fall 2014 ed.), edited by Edward N. Zalta. http://​plato.stanford.edu/​ archives/​fall2014/​entries/​thought-​experiment. Byrne, R. M. J. (2005). The Rational Imagination: How People Create Alternatives to Reality. Cambridge, MA: MIT Press. Dias, M. G., and Harris, P. L. (1988). “The Effect of Make Believe Play on Deductive Reasoning.” British Journal of Developmental Psychology 6: 207–​221. Dias, M. G., and Harris, P. L. (1990). “The Influence of the Imagination on Reasoning by Young Children.” British Journal of Developmental Psychology 8: 305–​318. Gendler, T. S. (2000). “The Puzzle of Imaginative Resistance.” Journal of Philosophy 97, no. 2: 55–​81. Godfrey-​Smith, P. (2003). Theory and Reality: An Introduction to Philosophy of Science. Chicago: University of Chicago Press. Gopnik, A., and Walker, C. M. (2013). “Considering Counterfactuals: The Relationship Between Causal Learning and Pretend Play.” American Journal of Play 6, no. 1: 15–​28. Green, M. C., and Donahue, J. K. (2011). “Persistence of Belief Change in the Face of Deception: The Effect of Factual Stories Revealed to Be False.” Media Psychology 14, no. 3: 312–​331. Green, M. C., Garst, J., Brock, T. C., and Chung, S. (2006). “Fact Versus Fiction Labeling: Persuasion Parity Despite Heightened Scrutiny of Fact.” Media Psychology 8: 267–​285. Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge University Press. Lewis, D. (1978). “Truth in Fiction.” American Philosophical Quarterly 15: 37–​46. Marsh, E. J., and Fazio, L. K. (2006). “Learning Errors from Fiction:  Difficulties in Reducing Reliance on Fictional Stories.” Memory and Cognition 34, no. 5: 1140–​1149. Potts, G. R., John, M. F. S., and Kirson, D. (1989). “Incorporating New Information into Existing World Knowledge.” Cognitive Psychology 21, no. 3: 303–​333. Prentice, D. A., Gerrig, R. J., and Bailis, D. S. (1997). “What Readers Bring to the Processing of Fictional Texts.” Psychonomic Bulletin and Review 4, no. 3: 416–​420.

Is Imagination Constrained Enough for Science?  261 Sobel, D. M., and Weisberg, D. S. (2014). “Tell Me a Story: How Children’s Developing Domain Knowledge Affects Their Story Construction.” Journal of Cognition and Development 15, no. 3: 465–​478. Van de Vondervoort, J. W., and Friedman, O. (2014). “Preschoolers Can Infer General Rules Governing Fantastical Events in Fiction.” Developmental Psychology 50, no. 5: 1594–​1599. Ward, T. B. (1994). “Structured Imagination: The Role of Category Structure in Exemplar Generation.” Cognitive Psychology 27, no. 1: 1–​40. Ward, T. B., and Sifonis, C. M. (1997). “Task Demands and Generative Thinking: What Changes and What Remains the Same?” Journal of Creative Behavior 31, no. 4: 245–​259. Weisberg, D. S. (2014). “The Development of Imaginative Cognition.” Royal Institute of Philosophy Supplement 75: 85–​103. Weisberg, D. S., and Goodstein, J. (2009). “What Belongs in a Fictional World?” Journal of Cognition and Culture 9, no. 1: 69–​78. Weisberg, D. S., and Gopnik, A. (2013). “Pretense, Counterfactuals, and Bayesian Causal Models: Why What Is Not Real Really Matters.” Cognitive Science 37, no. 7: 1368–​1381. Weisberg, D. S., Sobel, D. M., Goodstein, J., and Bloom, P. (2013). “Young Children Are Reality-​Prone When Thinking About Stories.” Journal of Cognition and Culture 13, nos. 3–​4: 383–​407. Wheeler, S. C., Green, M. C., and Brock, T. C. (1999). “Fictional Narratives Change Beliefs: Replications of Prentice, Gerrig, and Bailis (1997) with Mixed Corroboration.” Psychonomic Bulletin and Review 6, no. 1: 136–​141. Wilkes, K. V. (1988). Real People:  Personal Identity Without Thought Experiments. Oxford: Oxford University Press.

11 Can Children Benefit from Thought Experiments? Igor Bascandziev and Paul L. Harris

The metaphor of the young child as a scientist who constructs and revises theories about the world has played an important role in the study of cognitive development. Piaget offered such a portrait of cognitive development, and despite emerging doubts about his overall account of cognitive development, his emphasis on theory change is endorsed by a variety of contemporary researchers (e.g., Carey et al. 2015). Moreover, this metaphor gave rise to a large research program focused on children's ability to construct and revise theories based on incoming data, acquired via exploration and firsthand observations (e.g., see Gopnik and Schulz 2007). Interestingly, however, despite the wide recognition of the importance of cases where thought experiments and other rationalistic practices have played a critical role in theory construction and theory change in the history of science, very few empirical studies in psychology have systematically explored the role of thought experiments in learning in young children and in scientifically naive adults. In this chapter, we do not question the value of the research program investigating how children learn from data. Instead, we argue that it should be extended to include other factors that have been implicated in the process of theory construction and theory revision in the history of science. The history of science offers several examples of intellectual progress being made in the context of thought experiments rather than actual experiments. We ask if young children might similarly profit from engaging in thought experiments. More specifically, we ask if young children will modify their conceptualization of a given phenomenon if they are invited to engage in a thought experiment—to imagine outcomes that prompt reflection on, and revision of, their hitherto stable assumptions or expectations. In the sections below, we first emphasize the entrenched character of theory-driven beliefs and why thought experiments might be beneficial in

motivating theory revision. Next we examine two sets of findings that lend some feasibility to the speculation that children can benefit from thought experiments. We review evidence showing that even toddlers deploy their imagination in a disciplined fashion. When they are prompted to imagine a given sequence of events, especially in the context of pretend play, their imagination is guided by their existing knowledge of causal constraints on what can happen, whether drawn from naive ideas about physics, biology, or psychology. Making the same point differently, there is little evidence—contrary to popular ideas about the rich fantasy life of young children—that young children's imagination is fanciful or poorly disciplined. Indeed, they find it difficult to imagine a sequence of events that is impossible in the sense that it violates known causal constraints. We then review studies that, in effect, encouraged young children to engage in a thought experiment. These studies were not explicitly designed to assess whether children can benefit from a thought experiment, but they provide encouraging preliminary evidence of such a benefit. Somewhat unexpectedly, they hint at the possibility that children's engagement in a thought experiment might yield a greater cognitive benefit than their direct observation of the imagined outcomes.

11.1  The Relative Value of Empirical Data Consider the apparatus depicted in Figure 11.1. Young children commit a gravity-​based error when they are asked to find a ball dropped down one of the opaque corrugated tubes connecting the three chimneys above with the three containers below. Instead of taking account of the shape of the tube in which the ball has been dropped, two-​and three-​year-​olds persistently search in the container that is directly below the chimney in which they have seen the ball disappear, as if they expect gravity to make objects fall straight down regardless of other constraints (Hood 1995). The tubes apparatus appears to be a big scientific puzzle for the metaphorical child scientist. Two-​and three-​year-​olds continue to make gravity errors even after receiving multiple trials with visual feedback about the correct location of the ball. That is, they continue following the same heuristic—​search for the ball directly below the place where they’ve seen it dropped—​even after they have repeatedly been presented with evidence that their theory that objects always fall in a straight line (Hood 1995) yields incorrect predictions. Thus, at least in this case study, the metaphorical child scientist ignores


Figure 11.1  Picture of the tubes apparatus

the data going against her theory. She continues to make the same, incorrect predictions. An even more striking example of young children’s tendency to ignore observable data comes from studies that introduced trials with transparent tubes just before the trials with opaque tubes. For example, in a study by Hood (1995, experiment 3), two-​and-​a-​half-​and three-​year-​olds received five trials with transparent tubes so that they could clearly see the movement of the ball inside the tubes. Almost all children passed this task, thereby confirming that children indeed saw the movement of the ball and acted accordingly. However, when a further five trials with opaque tubes were presented, almost all children reverted to making the gravity error. Having seen the non-​vertical movement of the ball inside the transparent tubes did not help them. Children continued to maintain that objects always fall in a straight line despite visible, acknowledged evidence to the contrary. Joh and colleagues (2011) observed a similar inability to learn from visible evidence in a slightly different procedure. Instead of asking children to find the ball after the ball was dropped, the researchers asked children to predict where the ball would fall by placing a cup beneath one of the tubes. Half of the children received trials with transparent tubes first and trials with opaque tubes second, and the other half received the trials in reverse order. There was no order effect, which means that seeing the trajectory of the ball in trials with transparent tubes did not help children to do better on subsequent trials

Can Children Benefit from Thought Experiments?  265 with opaque tubes. Again, data contradicting children’s theory about the trajectory of falling objects did not lead them to revise their theory. Is the failure to react to incoming data specific to young children only? In other words, are adults better at adjusting their theories to accommodate new data? We believe not (e.g., Clement 1982; McCloskey 1983; McCloskey et al. 1980). For example, McCloskey et al. (1980) showed that many university students, including some who have taken college-​level physics, mistakenly believe that, in the absence of external forces, objects move in curved lines. McCloskey et al. (1980) showed students diagrams of curved tubes and asked them to imagine a ball traveling inside the tube. Next, students were asked to draw the trajectory of the ball after it exits the tube. Many students, most likely driven by their impetus theory of physics, drew curved trajectories, as if the object traveling inside the tube receives a curvilinear impetus that determines the trajectory of the object after it exits the tube. Importantly, it is plausible to assume that all students had had at least some experience with a curved water slide or a coiled water hose, where the motion of the person or the water exiting the tube is straight rather than curvilinear. Yet despite the wealth of everyday data going against the prediction that in the absence of external forces objects travel in a curvilinear trajectory, students still held on to their impetus theory and made that prediction. Indeed, if adults are asked to think about episodes in which they witnessed that in the absence of external forces objects travel in a straight line (e.g., memories of hosing down a car or watering a lawn), they do not fall prey to their impetus theory to the same extent. For example, Kaiser et al. (1986) introduced participants to two types of problems. In the first problem, modeled after McCloskey et al. (1980), they were asked to predict the motion of a ball bearing after it exits a curvilinear tube. The second problem used the same model of a curvilinear tube, but participants were asked to imagine a water hose being attached to one of the ends of the curvilinear tube and to then imagine the motion of the water after it exits the tube. Half of the participants received the water hose problem first and the ball bearing problem second, and the other half received the problems in reverse order. Many more participants gave correct answers to the water hose problem compared to the ball bearing problem. This means that they were able to reason more accurately about the motion of objects when they were basing their predictions on their memory of familiar events such as watering the lawn or washing a car. A different study confirmed that it is indeed the mention of the word “hose,” and not the kind of material that is traveling through the tubes or the

266  The Scientific Imagination speed at which it travels, that drives the improved performance on this task (Catrambone et al. 1995). However, simply reminding participants of familiar events that falsify their impetus theory does not seem to have an overwhelming influence on their theory. For example, Kaiser et al. (1986) found no order effects. Of the 53 participants who answered the water question correctly, 27 received the ball bearing problem first, and 10 of them (37%) answered it correctly. The other 26 participants received the ball bearing problem second, and 15 of them (58%) answered it correctly. Although this difference is in the right direction and it appears to be substantial, it was not statistically significant, possibly because of low statistical power. What is important to note, however, is that answering the water question correctly just a few moments before answering the ball bearing question did not guarantee correct answers on the ball bearing question: 42% answered it incorrectly, and when asked to justify their diverging responses, participants explained away the discrepancy in their answers by appealing to various irrelevant factors such as the material of the body traveling through the tube, the speed at which it travels, its weight, and so on. In many ways, these findings with adults parallel Hood’s (1995) finding that children ignore the non-​vertical trajectory of the ball falling down transparent tubes when they are subsequently presented with trials involving opaque tubes. Seeing the ball falling down a transparent tube has a local effect that is reflected in children’s correct searches on trials with transparent tubes. Similarly, imagining water exiting a hose helps adults to correctly predict that it will move in a straight line even if it emerges from a coiled hose. In both cases, the effect of seeing or imagining the relevant data does not transfer to other structurally very similar problems (i.e., trials with opaque tubes and a ball bearing exiting a curved tube). The justifications provided by subjects in the work by Kaiser et al. (1986) suggest that adults do not consider the conflicting data as being in conflict with their theory-​driven predictions. Rather, they consider it to be a special case. The history of science offers other examples of maintaining a theory despite evidence to the contrary. For example, Aristotle’s physics did not differentiate instantaneous speed from average speed, even though at some level he must have known that objects accelerate and decelerate (Kuhn 1977). Similarly, a misconception held by many adults and children alike (Dunbar et al. 2007) is that the natural speed of objects in free fall is directly proportional to their weight. On this view, if two objects with different weights were

dropped in a vacuum at exactly the same time from a platform above the ground, the heavier object would reach the ground first. If the heavier object was ten times the weight of the lighter object, it would fall at ten times the rate of the lighter object. Admittedly, there are examples in everyday life that (because of air resistance) appear to be consistent with this belief (e.g., a feather in free fall vs. a brick in free fall). However, there are many examples that are inconsistent with that theory. For example, no one has ever observed a boulder weighing 10 pounds falling 10 times faster than a rock weighing 1 pound or 160 times faster than a pebble weighing 1 ounce. If true, these large differences would have been very noticeable. Yet the fact that no one had observed such large differences escaped Aristotle, who believed that the natural speed of objects in free fall is directly proportional to their weight. Indeed, this belief persisted for almost two thousand years (Gendler 1998). Why is this so? Why do children and adults maintain theory-driven beliefs despite the availability of evidence to the contrary? There are various interrelated reasons the learner's theory might be unfazed by such evidence. One possibility is that the learner cannot recognize the relevance of the evidence and how it is related to her theory. Another possibility is that the learner lacks a capacity to work through the logical implications of the evidence. A third possibility is that the learner might fail to notice the discrepancy between those logical implications and her theory. A fourth possibility is that the learner might understand the relevance of the evidence, work through the logical implications of the evidence, and notice the discrepancy between those implications and her theory, but not know what to do about it. Although understanding that one's theory is threatened might motivate theory revision, in and of itself that understanding does not give specific guidelines about how to revise the theory. Thought experiments, and other rationalistic strategies such as making predictions before data collection or using extreme cases to reason about a particular phenomenon, might benefit the learner by helping her overcome some of the difficulties just listed. This is because the rationalistic strategies can draw the learner's attention to the discrepancy between the evidence and her theory, isolate the problematic aspects of the learner's theory, and bring them to the forefront of the learner's consciousness. Before reviewing evidence in support of this speculation, we briefly review some views about what thought experiments are and why they work. There is no single, universally accepted definition of thought experiments (Brown and Fehige 2014; Clement 2009). Nevertheless, most scientific

268  The Scientific Imagination thought experiments share some common features: (a) they are presented as a narrative that invites the reader to imagine a scenario where (b) the reader applies her concepts as she usually does in the real world. However, (c) rather than depicting all the unnecessary details of a real experiment, thought experiments depict only relevant abstractions (Nersessian 1992). The imagined scenario (d) runs in the reader’s mind, and she can “see” what the outcome is. Then the reader (e) draws a conclusion, which is based on the outcome of the experiment (Brown and Fehige 2014). The goal of a scientific (as opposed to mathematical, philosophical, etc.) thought experiment is to confirm or disconfirm a hypothesis or a theory about the physical world (Gendler 2004). But how can thought experiments confirm or disconfirm a hypothesis or a theory about the world without any new data from the world? This question was raised by Thomas Kuhn (1977, 275) (and many others), who asked how, “relying exclusively upon familiar data, can a thought experiment lead to new knowledge or to new understanding of nature?” Kuhn proposed two possibilities: (a) thought experiments do not provide an understanding of nature, instead merely providing an understanding of the scientist’s conceptual apparatus by virtue of revealing conceptual confusions and contradictions; and (b) thought experiments provide an understanding of nature, and in that sense they are quite similar to real experiments. Kuhn argued for the latter and against the former. He argued that the concepts themselves are not contradictory in the sense that the concept “square-​circle” is, but they could be “wrong” or “false,” and for that reason a person who holds such concepts is more “liable to become confused” (1977, 287). Such confusions have been documented in historical cases of previously held (and replaced) concepts and theories (e.g., the phlogiston theory). To put it in Kuhn’s words, there isn’t any “intrinsic defect in the concept by itself. Its defects lay not in its logical consistency but in its failure to fit the full fine-​structure of the world to which it was expected to apply. That is why learning to recognize its defects was necessarily learning about the world as well as about the concept” (1977, 287). Accepting this role for thought experiments, we still need to answer the question of how it is that thought experiments—​which necessarily rely on familiar data—​can lead to learning something new about the world. Kuhn’s answer to this question is that “thought experiments give the scientist access to information which is simultaneously at hand and yet somehow inaccessible to him” (1977, 289, emphasis added). The information that is inaccessible to the scientist is usually ignored or suppressed and very rarely presented to her by nature. The role of thought experiments, then, is to bring the ignored

Can Children Benefit from Thought Experiments?  269 or suppressed information to the front and center of the scientist’s attention. This, in Kuhn’s view, would reveal the exact way in which nature does not agree with currently held beliefs, and it would also suggest ways in which the theory and concepts need to be revised. More recently, these ideas have inspired a wider discussion within philosophy about the source of the justificatory power of scientific thought experiments. The proposals range from Platonic accounts (e.g., thought experiments offer a Platonic perception of a priori knowledge of nature) (Brown 1986) to strong empiricist accounts (e.g., thought experiments are nothing but arguments in a picturesque disguise) (Norton 2004), with proposals that fall between those two extremes (e.g., proposals that put an emphasis on the imagistic aspects of thought experiments [Gendler 2004] and proposals that put an emphasis on mental models [e.g., Nersessian  1992]). Surprisingly, despite this wide discussion in philosophy about the justificatory power of thought experiments, there are no empirical studies in psychology asking whether thought experiments can lead to new justified beliefs. Furthermore, even though the history of science has documented many scientific thought experiments where the primary thinkers are adults, we know very little about how children engage with thought experiments and whether they can benefit from them. Here we raise these questions and begin to answer them.
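As an aside, the order comparison from Kaiser et al. (1986) reported earlier in this section (10 of 27 versus 15 of 26 correct answers on the ball bearing problem) can be checked with a back-of-the-envelope analysis. The sketch below is not from the original study; it is a minimal illustration, assuming standard two-sided tests and the SciPy and statsmodels libraries, of why a difference of this apparent size in a sample of 53 participants need not reach significance.

```python
# Illustrative re-analysis of the 2 x 2 table described in the text (hypothetical;
# not necessarily the analysis Kaiser et al. actually ran). Rows are order conditions,
# columns are correct vs. incorrect answers on the ball bearing problem.
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

table = [[10, 17],   # ball bearing problem first: 10 of 27 correct (37%)
         [15, 11]]   # ball bearing problem second: 15 of 26 correct (58%)

chi2, p_chi2, _, _ = chi2_contingency(table)   # chi-square test (with Yates correction)
_, p_fisher = fisher_exact(table)              # two-sided exact test
print(p_chi2, p_fisher)                        # both p-values come out above .05

# Approximate power to detect a 37% vs. 58% difference with groups of 27 and 26:
h = proportion_effectsize(15 / 26, 10 / 27)    # Cohen's h for two proportions
power = NormalIndPower().power(effect_size=h, nobs1=27, ratio=26 / 27, alpha=0.05)
print(power)                                   # about one-third under these assumptions, well below 0.8
```

On this reading, a non-significant result would not be surprising even if the order effect were genuine, which is consistent with the appeal to low statistical power above.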

11.2  The Young Child’s Imagination The first question is whether young children can imagine scenarios in a disciplined way that can lead to reliable imagined outcomes. Many experimental studies show that even very young infants can represent objects that are out of view. In studies that use the violation-​of-​expectancy paradigm, some infants are invited to watch a sequence of events that turns out to be consistent with routine causal constraints. For example, having witnessed two objects placed in succession behind a screen, the infants watch as a screen is raised to reveal that there are indeed two objects behind it, as might be expected. By contrast, other infants, having witnessed the same initial hidings, watch as the screen is lifted to reveal, for example, a single object rather than two. A large set of findings consistently shows that infants and other nonhuman animals will stare longer at this unexpected outcome than at the expected outcome. A plausible conclusion from such studies is that even in the course of the first year, infants are capable of keeping track of semi-​visible

270  The Scientific Imagination displacements—​the successive disappearance of first one object and then another object behind the screen—​and are puzzled if the scene that becomes visible when the occluding screen is raised does not match the presumed outcome of these displacements. This means that both human infants and nonhuman animals can reason about events that are out of view but are happening here and now. They can do this by either predicting or postdicting the events that are taking place behind the screen. For example, one possibility is that before the screen is lifted to reveal either one or two objects, infants actively imagine the outcome of the semi-​visible displacements they have seen. Thus, before the screen is lifted, they imagine that it is occluding two objects, not one. Another possibility is that infants retrospectively work out what ought to be visible once the screen is lifted. Seeing the screen lifted to reveal the scene that was hitherto occluded—​and seeing either one or two objects in place—​they work out whether or not those outcomes are consistent with what they have observed up to that point. This kind of experiment, however, does not tell us anything about the infant’s ability to imagine events that go beyond the here and now. When we turn to other types of studies with older children, however, we do find evidence of an imaginative capacity that goes beyond the here and now. In the course of the second year, toddlers start to produce simple acts of pretense. For example, they lay their head on a pillow and close their eyes as if in sleep, or they lift an empty cup to their lips and pretend to drink from it. Toward the end of the second year, toddlers also start to engage in joint pretense. For example, if a play partner hands them an empty cup, they will readily lift it to their lips and engage in pretend drinking. This receptivity to joint play can be used to probe children’s imagination (Harris and Kavanaugh 1993). For example, in one study, toddlers watched as an experimenter engaged in one of two pretend actions: the experimenter either picked up an empty milk carton and “poured” pretend milk into a container or, alternatively, picked up an empty talcum powder can and “shook” pretend talcum powder into the container. In either case, the experimenter then carried the container over to a toy horse and, holding it above the horse, turned it upside down. Toddlers were invited to say what had happened to the toy horse. By thirty months, they appropriately distinguished between two plausible outcomes. In the case of the milk carton, they described the horse as “wet” or “milky,” whereas in the case of the talcum can they described the horse as “powdery.” Notice that these appropriately distinctive descriptions of the outcome presuppose various abilities on the part

Can Children Benefit from Thought Experiments?  271 of the toddlers: the ability to infer what substance would emerge from the milk carton or the can of talcum powder; the ability to imagine the gravity-​ driven descent of that substance into the container; and the ability to realize that when the container was subsequently moved horizontally and held above the toy horse, its contents would be carried inside it until it was inverted, at which point they would fall onto the horse. In short, to understand the outcome of the experimenter’s pretend actions, children needed to imagine successive displacements, guided by their grasp of naive physics. Notice also that the final step—​the lateral displacement of the container and its contents, together with the eventual inversion of the container—​ remained identical for all toddlers. Despite this, children described its consequences differently depending on the preceding step (i.e., which pretend substance had been tipped into the container). More generally, in these studies of pretend play, there is no visible outcome that either matches or violates the child’s prediction or postdiction. Instead, the toddler is invited to imagine what has happened in the absence of any visible outcome and to offer a description. Based on studies like these, it is plausible to argue that even two-​and three-​year-​old children are able to imagine outcomes guided by their grasp of naive physics. More generally, these studies undermine the standard portrait of the child’s imagination as a zone in which all sorts of things can happen unchecked by any consideration of real-​world constraints (Harris 2000). Not only can three-​year-​olds use their imagination to correctly simulate an event that did not occur in reality, but they can also use their imagination to reason about counterfactual conditionals. For example, in one study, three-​to five-​year-​old children were told a story in which A caused B. When children were asked what would have happened if A had not occurred, they correctly answered that B would not have occurred. When they were given two different antecedents A′ and A″, where A′ would have caused B and A″ would not have caused B, they correctly differentiated between the two antecedents. Finally, when they were told a story about a protagonist who made some choices that led to a minor mishap, they correctly reasoned about what the protagonist could have done differently to prevent the mishap from happening (Harris et al. 1996). In a recent follow-​up to this work (Lane et al. 2016), we have probed young children’s imagination by means of a standardized interview. Children aged four through eight years were asked if they were able to imagine a variety of outcomes, some improbable but not excluded by ordinary causal constraints,

272  The Scientific Imagination and others downright impossible in the sense that they do violate such constraints. In addition, children were asked to say what they had visualized as they engaged their imagination. For example, children were asked: “Close your eyes, and imagine a person walking through a fire. Can you imagine that or not? What do you see when you try to imagine that?” Similarly, they were asked: “Close your eyes, and imagine a person walking through a brick wall. Can you imagine that or not? What do you see when you try to imagine that?” Finally, children were invited to say how confident they were that the phenomenon in question could or could not happen in the real world. For all age groups, a close relationship emerged between the statements of confidence in real-​world likelihood and their reports of being able to imagine the phenomenon in question. When they expressed more confidence that the phenomenon could actually happen, they were also more likely to report being able to imagine it. By implication, when children are asked to imagine what might happen, the scenario that they proceed to imagine is unlikely to be something that they judge to be impossible. Contrary to popular belief, young children’s imagination seems to be grounded in reality rather than prone to flights of fancy. So the act of imagining a scenario might teach them something about the effects of real-​world constraints. More specifically, children might come to anticipate hitherto unacknowledged implications of those constraints and take them into account in their subsequent expectations. Next we provide some evidence in support of this speculation.

11.3  Overcoming the Gravity Error Recall children’s gravity errors (see Figure 11.1), which persist even after the children receive overwhelming visual evidence that objects do not always fall in a straight line. Would asking children to imagine the constraints of the tubes help them override their prepotent hypothesis that objects always fall in a straight line? In other words, would bringing the critical role of the tubes to the center of children’s consciousness help them overcome the gravity error? Two different studies suggest that the answer to this question is yes. In one study, Joh et al. (2011) tested the idea that prompting children to use their imagination can help them to overcome the gravity error. Children were asked to predict where the ball would land by putting a cup beneath one of the three intertwined opaque tubes. Children assigned to the Imagine

Can Children Benefit from Thought Experiments?  273 condition heard the following prompt on each trial: “Can you imagine the ball rolling down the tube?” Children assigned to the Wait condition were told: “The ball is going to roll down the bumpy tube.” Children in the control condition received no instructions. Even though this study was not designed to test the effectiveness of thought experiments, it did in fact invite children to engage in a thought experiment. The prompt in the experimental condition asked them to perform a thought experiment in their heads where they were asked to imagine the ball traveling down the tube and “see” what the outcome of that experiment is. Indeed, receiving the question “Can you imagine the ball rolling down the tube?” was surprisingly effective. Children who received this question performed better than children assigned to the other two conditions. In addition, children assigned to this condition improved over the course of repeated trials. However, this finding does not tell us whether children’s participation in a thought experiment leads to a stable change—​a new insight into the way the world works—​or simply to a transient adjustment that occurs only when children are prompted to use their imagination. Note that children in the Imagine condition were prompted to imagine the ball rolling inside the tube on each and every trial. So we do not know what they would do if they were given follow-​up trials in which they were left to their own devices. Having been alerted to the constraints imposed by the walls of the tube in the Imagine prompt, would they simply revert to making the gravity error again if that prompt were withdrawn? In a different study, Bascandziev and Harris (2010) asked if verbal information concerning the role of the tubes could help children overcome the gravity error not just on training trials but also on subsequent test trials. All children received four pre-​test trials, two training trials, and four post-​test trials. As usual, children’s task was to search for the ball that was dropped down one of the three intertwined, opaque tubes (see Figure 11.1). After completing the pre-​test trials, children received different training instructions during the two training trials depending on which condition they were in. Children in the No Escape condition heard the experimenter say: “Look! I dropped the ball in this tube. And you know what? The ball could not escape from that tube. It rolled inside that tube!” Children in the Eye Movements condition heard the experimenter say: “Look! I dropped the ball in this tube. And you know what? What you need to do is to watch which tube the ball goes in and you need to follow that tube with your eyes. Okay?” Finally, children in the Attention condition heard the experimenter say: “Look! I dropped the

ball in this tube. And you know what? You have to pay attention to the tubes in order to find the ball immediately." The results are shown in Figure 11.2.

Figure 11.2  Mean percentage of trials on which children searched in the correct, gravity, or other cup as a function of condition and test

Inspection of Figure 11.2 reveals that only children in the first two conditions (i.e., the No Escape and the Eye Movements conditions) improved from pre-test to post-test. More specifically, they stopped making so many gravity errors and were more likely to search in the correct tube. Children in the Attention condition showed no such improvement—they continued to make many gravity errors. Just like the results of Joh et al. (2011), the results of Bascandziev and Harris (2010) showed that receiving visual feedback about the correct location of the ball did not help children improve in post-test trials. On the other hand, receiving testimony about the causal role of the tubes or receiving a behavioral rule about how to find the ball did help. Why were these two

interventions successful? Again, even though this study was not designed to test the effectiveness of thought experiments, it is likely that, as in the study of Joh et al. (2011), children assigned to the experimental conditions and especially the children assigned to the No Escape condition were prompted to imagine the ball rolling in a constrained fashion inside the tube. More specifically, each of these two interventions served to remind children of—to bring to the center of their consciousness—something that they knew already, which is that one solid object cannot pass through another. Hence, if the ball is dropped inside the tube, the solid walls of the tube will constrain its downward trajectory over and above any constraint imposed by the force of gravity. By implication, the interventions prompted a change in the way that children thought about the behavior of the ball once it was dropped into the tube, and this change led to improved performance on post-test trials even though children received no verbal instructions on those trials. In a follow-up study (Bascandziev et al. 2016) we asked whether a suite of executive function skills (i.e., inhibitory control abilities, working memory, and cognitive flexibility [Diamond 2013; Miyake et al. 2000]) would predict which children would benefit from the intervention. Recall that the learner faces many difficulties when she is confronted with evidence that goes against her theory, be it in the form of raw data or an outcome of a thought experiment. As noted earlier, the learner needs to understand the relevance of the evidence, work through the logical implications of that evidence, notice the discrepancy between those logical implications and her theory, and then work on the theory revision process. It is very likely that these mental activities require executive function skills. Indeed, a growing literature shows that these skills are associated with conceptual development across different domains of cognition (for children's theory of mind, see, e.g., Carlson and Moses 2001; Frye 1999; Frye et al. 1995; Sabbagh et al. 2006; for naive biology, see, e.g., Bascandziev et al. 2018; Zaitchik et al. 2013). This raises the question of whether the learning that takes place in the context of the tubes task is also related to executive functioning. More specifically, is children's learning from the provided testimony akin to associative learning—a process that appears to be automatic, with low demands on executive functioning—or is it akin to processes that appear to put heavy demands on executive functioning and that are implicated in many episodes of causal learning and conceptual change (e.g., see Carey 2009)? To investigate this question, Bascandziev et al. (2016) tested a hundred children who received a pre-test on the tubes task, followed by testimony about the causal role of the tubes and then by a

276  The Scientific Imagination post-​test on the tubes task, along with several executive functioning measures. Children who scored higher on executive functioning measures and on a fluid IQ test showed greater improvement than children with lower executive functioning and fluid IQ scores. These results suggest that children’s ability to construct a new understanding of the physical world—​by engaging in a thought experiment—​is associated with executive functions and with fluid IQ.

11.4  Is Empirical Feedback Necessary? The studies described in the previous section demonstrate that young children display a conceptual advance following a brief experimental intervention. The findings bolster the claim that direct empirical feedback is sometimes inadequate. Children in the control conditions of the above experiments received potentially helpful empirical feedback but showed no improvement. Can we conclude, then, that young children can be led to a conceptual change if they are prompted to engage in a thought experiment? Such a conclusion would be premature. Having received verbal instruction, children eventually went on to receive visual feedback about the final location of the ball. Strictly speaking, the fact that visual feedback was given along with verbal instruction means that children were engaged in a thought experiment plus a real experiment, rather than a thought experiment pure and simple. Still, the finding that visual feedback alone was not helpful and that verbal instruction helped children to overcome the gravity error suggests that, at least under some circumstances, inviting children to think about a physical phenomenon is more helpful than simply providing them with “clear” empirical evidence alone. The next step in this research program is to invite children and adults to engage in a thought experiment but to provide no empirical feedback concerning the outcome of such an experiment. In ongoing research, we are investigating this across several different domains. More specifically, we are investigating the following questions: (a) Can thought experiments lead to new knowledge in children and in scientifically naive adults? (b) Are the effects of a thought experiment similar to the effects of a real experiment when the two are structurally equivalent? (c) What are some of the critical features of thought experiments that could trigger belief revision? (d) What are some of the domain-​general cognitive resources that support learning from real

Can Children Benefit from Thought Experiments?  277 experiments and from thought experiments? Future research should address these and other related questions.

11.5  Conclusions The long-​established research program focusing on children’s ability to construct and revise theories based on incoming data, acquired via exploration and firsthand observations, has been very successful (Gopnik and Schulz 2007). Here, we argue that factors other than new empirical data could also drive theory construction and theory revision. One such factor is the human imagination or the ability to conduct thought experiments in the imagination. Previous research has shown that young children apply their concepts when imagining what will or might happen, just as they usually apply them in their observation of the world. Rather than having an unchecked imagination that is prone to flights of fancy, young children apply causal principles to an imagined sequence of events (Harris and Kavanaugh 1993; Lane et al. 2016). Furthermore, several studies have suggested that, at least under some circumstances, inviting children to reason about the critical aspects of a physical mechanism is more beneficial than inviting them to observe how the mechanism operates (Bascandziev and Harris 2010; Joh et al. 2011). Finally, we have sketched the kind of questions that one could ask in future research about the role of thought experiments in theory change. Positive results in those studies would be the first empirical results showing that thought experiments alone can advance knowledge.

References Bascandziev, I., and Harris, P. L. (2010). “The Role of Testimony in Young Children’s Solution of a Gravity-​Driven Invisible Displacement Task.” Cognitive Development 25: 233‒246. Bascandziev, I., Powell, L. J., Harris, P. L., and Carey, S. (2016). “A Role for Executive Functions in Explanatory Understanding of the Physical World.” Cognitive Development 39: 71‒85. Bascandziev, I., Tardiff, N., Zaitchik, D., and Carey, S. (2018). “The Role of Domain-​ General Cognitive Resources in Children’s Construction of a Vitalist Theory of Biology.” Cognitive Psychology 104: 1‒28. Brown, J. R. (1986). “Thought Experiments Since the Scientific Revolution.” International Studies in the Philosophy of Science 1: 1–​15.

278  The Scientific Imagination Brown, J. R., and Fehige, Y. (2014). “Thought Experiments.” In The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta. Stanford, CA: Stanford University. Carey, S. (2009). The Origin of Concepts. New York: Oxford University Press. Carey, S., Zaitchik, D., and Bascandziev, I. (2015). “Theories of Development: In Dialogue with Jean Piaget.” Developmental Review 38: 36‒54. Carlson, S., and Moses, L. (2001). “Individual Differences in Inhibitory Control and Children’s Theory of Mind.” Child Development 72, 1032‒1053. Catrambone, R., Jones, C. L., Jonides, J., and Seifert, C. (1995). “Reasoning About Curvilinear Motion:  Using Principles of Analogy.” Memory and Cognition 23: 368‒373. Clement, J. (1982). “Students’ Preconceptions in Introductory Mechanics.” American Journal of Physics 50: 66‒71. Clement, J. (2009). “The Role of Imagistic Simulation in Scientific Thought Experiments.” Topics in Cognitive Science 1: 686‒710. Diamond, A. (2013). “Executive Functions.” Annual Review of Psychology 64: 135‒168. Dunbar, K., Fugelsang, J., and Stein, C. (2007). “Do Naive Theories Ever Go Away? Using Brain and Behavior to Understand Changes in Concepts.” In Thinking with Data, edited by M. C. Lovett and P. Shah, 193‒205. Mahwah, NJ: Lawrence Erlbaum. Frye, D. (1999). “Development of Intention: The Relation of Executive Function to Theory of Mind.” In Developing Theories of Intention: Social Understanding and Self Control, edited by P. D. Zelazo, J. W. Astington, and D. R. Olson, 119‒132. Mahwah, NJ: Lawrence Erlbaum. Frye, D., Zelazo, P. D., and Palfai, T. (1995). “Theory of Mind and Rule-​Based Reasoning.” Cognitive Development 10: 483–​527. Gendler, S. T. (1998). “Galileo and the Indispensability of Scientific Thought Experiments.” British Journal of the Philosophy of Science 49: 397‒424. Gendler, S. T. (2004). “Thought Experiments Rethought—​and Reperceived.” Philosophy of Science 71: 1152‒1163. Gopnik, A., and Schulz, L. E. (Eds.). (2007). Causal Learning: Psychology, Philosophy, and Computation. Oxford: Oxford University Press. Harris, P. L. (2000). The Work of the Imagination. Oxford: Blackwell. Harris, P. L., German, T., and Mills, P. (1996). “Children’s Use of Counterfactual Thinking in Causal Reasoning.” Cognition 61: 233‒259. Harris, P. L., and Kavanaugh, R. D. (1993). Young Children’s Understanding of Pretense. Monographs of the Society for Research in Child Development, vol. 58, no. 1, serial no. 231. Chicago: University of Chicago Press. Hood, B. (1995). “Gravity Rules for 2-​to 4-​Year-​Olds.” Cognitive Development 10: 577‒598. Joh, A. S., Jaswal, V. K., and Keen, R. (2011). “Imagining a Way out of the Gravity Bias: Preschoolers Can Visualize the Solution to a Spatial Problem.” Child Development 82: 744‒750. Kaiser, M. K., Jonides, J., and Alexander, J. (1986). “Intuitive Reasoning About Abstract and Familiar Physics Problems.” Memory and Cognition 14: 308‒312. Kuhn, T. S. (1977). “A Function for Thought Experiments.” In Thinking:  Readings in Cognitive Science, edited by P. N. Johnson-​ Laird and P. C. Wason, 274‒292. New York: Cambridge University Press. Lane, J., Ronfard, S., Francioli, S., and Harris, P. L. (2016). “Children’s Imagination and Belief: Prone to Flights of Fancy or Grounded in Reality?” Cognition 152: 127‒140.

Can Children Benefit from Thought Experiments?  279 McCloskey, M. (1983). “Naive Theories of Motion.” In Mental Models, edited by D. Gentner and A. Stevens, 229‒324. Hillsdale, NJ: Lawrence Erlbaum. McCloskey, M., Caramazza, A., and Green, B. (1980). “Curvilinear Motion in the Absence of External Forces: Naive Beliefs About the Motion of Objects.” Science 210: 1139‒1141. Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., and Wager, T. D. (2000). “The Unity and Diversity of Executive Functions and Their Contributions to Complex ‘Frontal Lobe’ Tasks:  A Latent Variable Analysis.” Cognitive Psychology 41: 49‒100. Nersessian, N. J. (1992). “In the Theoretician’s Laboratory:  Thought Experimenting as Mental Modeling.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1992, 291‒310. Norton, J. D. (2004). “Why Thought Experiments Do Not Transcend Empiricism.” In Contemporary Debates in the Philosophy of Science, edited by C. Hitchcock, 44‒66. Oxford: Blackwell. Sabbagh, M. A., Xu, F., Carlson, S. M., Moses, L. J., and Lee, K. (2006). “The Development of Executive Functioning and Theory of Mind.” Psychological Science 17: 74‒81. Zaitchik, D., Iqbal, Y., and Carey, S. (2013). “The Effect of Executive Function on Biological Reasoning in Young Children: An Individual Differences Study.” Child Development 85: 160‒175.

12 Metaphor and Scientific Explanation Arnon Levy

12.1  Introduction In modern philosophy of science, metaphor has not received a warm welcome. With few exceptions, metaphors have been treated as, at best, unimportant heuristic devices. In particular, the idea that metaphors can serve an explanatory function, or that they can carry any theoretical weight, has raised the hackles of philosophers of science from Hempel onward: "it's a metaphor" typically serves as a quick dismissal. This paper aims to rehabilitate metaphors and show how and why they can serve serious theoretical roles. It is also a complement to earlier work (Levy 2011) that discusses information in biology. In that work, I argued that biological information should be construed as a metaphor, albeit as one that carries explanatory weight. But the focus was on the role of information within the relevant biological contexts, and less attention was devoted to the broader question of explanation via metaphors. Here I will focus on those broader issues, using biological information for illustrative purposes. Resistance to the idea of explanatory metaphors stems from deep-seated assumptions about explanation. Many see the very idea of a non-literal explanation as wrong, or even downright absurd. Therefore I will try to show how this idea can be placed into an overall view of scientific explanation, somewhat skimping on details in order to foreground the big picture. I view this big picture as plausible independent of the connection to metaphor, and I'll discuss general considerations in its favor. With the big picture at hand I will look at how explanation via metaphor works. I have just cast the issue in terms of the explanatory role of metaphor. But I'll begin, and in fact spend a larger portion of the chapter, discussing the notion of understanding and anchoring the role of metaphor to it. That is primarily because, as I will suggest, we ought to view understanding as a more fundamental concept, accounting for explanation in terms of its relationship

to understanding. Relatedly, I think it is easier to make sense of the role of metaphors when approached this way. So the next section sketches an account of understanding. In a nutshell, its core is the idea that understanding something is having a representation of it that allows one to draw inferences about its counterfactual behavior. After expanding on this idea and describing some considerations in its favor, I will suggest that we can treat explanation in terms of what is conducive to understanding. It is in this sense that explanation is a derivative, less fundamental notion. Next, drawing on my earlier work, I will look at information in biology as an illustration of a metaphor that supplies understanding and serves in explanation. Then I briefly discuss the relationship between metaphors and models. The final portion of the chapter revisits the big picture, discussing why the idea of explanatory metaphors has not been especially popular and responding to some potential concerns philosophers may have about it.

12.2  Understanding I begin, as noted, by outlining an account of scientific understanding. We speak of understanding phenomena (such as climate change or sympatric speciation) and events (such as the Cambrian explosion or the Big Bang) but also of understanding people, understanding a language, and understanding a work of art. Some philosophers have tried to give an account that covers all these cases, and more (Kvanvig 2003; Wilkenfeld 2013). But I will focus on scientific understanding. More particularly, I focus on what is sometimes called “objectual” understanding—​understanding objects, or “things in the world,” as opposed to understanding a theory or a model. Partly this is because I am unsure there is a single general concept of understanding. The account I’ll give isn’t one that can be applied, at least not as it stands, to extra-​scientific contexts previously mentioned, and its application to understanding theories requires modifications that I will not discuss here. In essence, the view I will argue for is that understanding something—​ call it the understanding’s target—​consists in possessing a representation of it that allows one to make inferences about its behavior and properties under various conditions—​typically, conditions not previously observed, primarily counterfactual ones. That is to say, one understands T when one can use

282  The Scientific Imagination one’s representation of T to say what would happen to the target if this or that change were made to it. I see this view as a synthesis of work by a variety of authors over the last two decades or so, but it bears an especially close affinity with a view developed quite recently by Wilkenfeld (2013) concerning understanding and, in respect to explanation, to an account put forward by Bokulich (2011).1 I will not be able to delineate and defend the view in full, or to discuss specific differences between my account and others. But I do hope to say enough to explain its basic thrust and to motivate it. On the proposed view, to understand, for instance, cellular respiration is to have a representation of the process of respiration such that, using this representation, one can say what would happen to a respiring cell if, say, one or another of the components in this process were absent or altered, or if this or that environmental condition were modified. It is natural to extend this idea and treat understanding as a graded matter: one’s understanding is stronger (deeper, more comprehensive) the better one is at inferring what would happen to the target if conditions were altered—​better in terms of the range of relevant conditions, the precision of one’s inferences an so on. We can summarize the proposal, therefore, as follows: For Subject S to understand target T amounts to S’s being able to use a representation R of T to in order to draw sufficiently accurate and general inferences about T’s (actual and) counterfactual behavior.2

I will say more about some of the elements of this statement in a moment, but first I wish to highlight the core idea: the proposal links a property of the understander (namely, a capacity to draw inferences) with a set of properties of the target (namely, its behavior under counterfactual circumstances). Understanding, on this view, is a two-factor concept, somewhat like knowledge. One component of understanding has a more internal and subject-relative character, in that it involves a capacity of the subject, something the subject can (or cannot) do. The other component is more external and subject-independent, in that it concerns what will happen to the target if alternative circumstances come about. This
1 de Regt (2018) contains the best-developed version of this approach, though it does not emphasize counterfactuals.
2 My focus is on counterfactual inferences. But often—for instance, when performing an experiment to test some hypothesis—a claim that appears counterfactual at one point is "actualized" at a later point. This is the main reason I say "(actual and) counterfactual." Thanks to Peter Godfrey-Smith for highlighting this point.

I regard as a matter of fact, external to and independent of the subject's thoughts and beliefs (but see my comments later on the connection to modal realism). What links these two components of understanding is the possession of a representation of the target: a structure that encodes information about the target, allowing an agent to use it to infer how the target will behave. Let me highlight one significant consequence of this view right away: the account requires that the understander have a representation of the target (I will say some more about representations later), but it does not place any specific constraints on the representation's content. It leaves considerable freedom in terms of how an understander represents the target, so long as she can reliably draw the right inferences. In particular, there is no requirement that the understander employ a correct or accurate representation. Oftentimes, of course, having an accurate representation will advance understanding, as it will facilitate good inferences. But this won't always be the case: sometimes a less accurate representation can prove just as helpful, if not more helpful, than an accurate one. This feature of the account, as may already be apparent, is key to my treatment of the role of metaphors in providing understanding. Now to some of the elements of the proposal I have just made. First, the proposal requires that the subject possess a representation of the target. What do I have in mind? I won't attempt, of course, to give a proper account of representation here. But I think we can settle for the following (very rough, but useful) schema: a representation is something produced by an agent in order to communicate something to an audience (the audience may be oneself). Much more by way of filling in this schema is required if we are to have a bona fide account of representation. But the key point for our purposes is the following: an agent can designate anything she wishes as a representation of some given target, and then use that to communicate and reason about the target. To borrow an example from Cohen and Callender (2006), who advocate a similar view, insofar as representation is concerned, one can use a salt shaker to represent the solar system. However, and this is a key point as well, some representations will be better, given the context, than others.3 And it is here that much of the action concerning representation (at least in science) lies: how ought one best to represent a phenomenon, given one's aims? One
3 Cohen and Callender accept this but do not highlight it in their 2006 paper. I think this has generated considerable misunderstanding of and resistance to their view.

284  The Scientific Imagination may use various methods involving an association of elements of the representation and elements of the target, from qualitative similarity to formal mappings. One may require more or less accuracy, more or less detail, more or less simplicity from one’s representation, depending on the purposes and the agents involved. The means of representation may be verbal, mathematical, or graphical. These choices may matter greatly in terms of how useful the representation is relative to a given end such as prediction, understanding, control, or classification. But all such choices are choices pertaining to how to represent the target. The question of whether we have a representation depends only on whether an agent is using it to communicate something to an audience. So the account of understanding can (and should, I think) rely on a very low-​key notion of representation. But it does pose some constraints on the role of relevant representations, and hence, indirectly, also on their content and character. Primarily, the account requires that an understander use her representation so as to arrive at satisfactory counterfactual inferences. Now, short of a miracle, this would require the representation to embody information about the counterfactual properties of the target, and to do so in a way that is accessible to the representation’s user—​that is, the user must know how to extract the relevant information, apply it to new contexts, and so on. Let me now make a few remarks to clarify the phrase “sufficiently accurate inferences about the target’s counterfactual behavior,” which appears in the characterization given earlier. First, in speaking of the target’s counterfactual behavior I am assuming that there are right and wrong answers to questions about a system’s counterfactual behavior. The assumption I am making, to be clear, falls somewhat short of proper modal realism. While the view is compatible with modal realism, and while I am somewhat attracted by realism, I  am not assuming it here. Instead, I  assume only that questions such as “What would system X do under circumstances C?” are answerable in ways that are not substantially relativized to a person’s or a community’s beliefs, state of knowledge, or interests. How much of a realist, so to speak, one must be to accept such an assumption is an issue I will not address here. Perhaps, for instance, one can maintain a deflationary attitude toward counterfactuals, of the sort expressed by Godfrey-​Smith (this volume), while accepting that there are right and wrong answers to counterfactual questions. But I am not certain of that. At any rate, this is an issue that is largely independent of the concerns of this chapter.

Now, about the "sufficiently accurate and general" part: when does an inference count as sufficiently accurate for the purposes of understanding? The answer is: it depends. I have said that the measure of one's understanding is the breadth and accuracy of one's counterfactual inferences. But while we sometimes speak of someone as understanding (or lacking understanding) simpliciter, I do not think there are context-independent measures of how accurate or general such inferences must be in order for there to be understanding (here I am in agreement with Wilkenfeld, de Regt, and other recent writers on understanding cited earlier). We judge one's degree of understanding on the basis of the subject matter, one's expected level of expertise, the context of evaluation, and other factors. So while the kind of ability one must exhibit in order for one to understand is objective and specifiable independently of context, the threshold—if there is a threshold—for what counts as having understanding is context sensitive.4 Having outlined my view of understanding, I would now like to discuss a few considerations in support of it. I start with two points connected with the account's focus on counterfactual inferences. First, connecting understanding with an ability to anticipate how things in the world behave, especially under conditions that we have not yet experienced, makes it easy to explain the value of understanding (cf. Woodward 2003, ch. 1; Lombrozo 2011). We want to control and manipulate our environments in various ways and for a multitude of reasons. Understanding, on the present account, is a first and vital step toward achieving this. So we have a simple and straightforward answer to an important question about understanding—why it is valuable. Second, I think this account fits well with the ways in which we ordinarily attribute understanding in pedagogy and in other contexts, as I've already mentioned. The case of pedagogy is especially telling, I think, as it involves explicit and careful assessments of understanding. Consider a student who can solve a problem on an exam only if the details of the problem are identical to those that have been rehearsed in class or in a textbook. Ordinarily, we would consider such a student as having little or no understanding. I think we employ similar criteria to assess the degree of understanding in other contexts (think about what underlies claims about "the state of understanding" in some field or discipline). If this observation is right (it would be possible, and informative, to test the matter
4 This raises the question of whose understanding is at issue (and what the context is) when making a connection between understanding and explanation. I return to this point later.

empirically), then it suggests a close connection between an ability to answer counterfactual-style questions and understanding. Furthermore, the present view of understanding is recommended by the fact that it is appropriately internalist while not inappropriately subjectivist. Let me explain what I have in mind. On the one hand, it is clear that understanding is an internal psychological state or property. It is a cognitive achievement and centrally involves what is "in our heads." On the other hand, understanding isn't merely a subjective matter—it is not a matter of feeling a certain way or having some distinctive experience (a kind of "aha").5 Nor is an agent necessarily correct about whether she understands: one can have an illusion of understanding. The proposed account respects both the internal and external aspects. While it requires that one have a representation of the target that one can use to make inferences about its counterfactual behavior, those inferences are evaluable vis-à-vis an external target and with regard to an objective set of facts. This matters especially in the context of scientific understanding. Writers from Hempel onward were concerned that any appeal to understanding would introduce a large measure of subjectivity into our assessments of scientific achievements—that "understanding in the psychological sense of a feeling of empathic familiarity" (Hempel and Oppenheim 1948, 17) would (mis)guide theory choice and judgments about explanatory power (see also Trout 2002). On the other hand, if understanding is construed along externalist, fact-based lines only, or if it provides no more than an inspirational starting point for an account of explanation (cf. Friedman 1974; Kitcher 1989), then it does not play a distinctive role and its contribution to our thinking about explanation appears small. Thus the fact that a line can be drawn between the overly internal and the overly external represents an important step toward rehabilitating the notion of understanding and finding a place for it in philosophy of science. Finally, I think that an account of understanding of the sort presented here can bear fruit that most other views of understanding cannot. In particular, if linked to explanation in more or less the way I describe in a moment, such an account can allow us to make sense of the role of idealization in explanation, a problem that current writing on explanation has struggled with.

5 The “aha” feeling may often accompany understanding. It may even have a function with respect to understanding (Gopnik 1998). But it is not, on the present account, constitutive of understanding.


12.3  Explanation
Having laid out a view of understanding, let me make the connection to explanation. It is, in fact, fairly simple: explanations are vehicles of understanding, and an explanation is successful to the extent that it succeeds in generating understanding. To make this a little more precise, we may distinguish three common senses of the term "explanation." Sometimes it denotes a communicative act, an attempt to give an account of some event or phenomenon to an audience within a particular communicative context (a lecture, a textbook, a lab meeting). We may call this the explanatory episode. Other times, "explanation" refers to impersonal facts or things-in-the-world. We can call these the explanatory facts—those cited in the course of an explanatory episode. Much of the discussion over scientific explanation has surrounded explanatory facts: Are they laws of nature? Causes and mechanisms? Probabilities? Finally, an "explanation" may be a representation, such as a text, a figure, or a set of equations. These I will refer to as explanatory vehicles—the means by which information about the explanatory facts is conveyed in the course of an explanatory episode. Philosophical discussions of explanation have not always been careful about distinguishing these three senses. I think it is best to focus on vehicles. It is vehicles that we have in mind when assessing the explanatory power of a theory or when discussing inferences to the best explanation. Thus, when setting out a view of explanation as grounded in understanding, I am making a claim about the success conditions of a vehicle—a representation of explanatory facts.6 It may still be asked, however, "Whose understanding?" I said earlier that understanding is a state of an individual or a community. Which individual or community ought we to take into account when assessing an explanation? My answer, in brief, is that we should appeal to a somewhat idealized agent—what we might call a well-placed expert on the relevant subject. We should ask: Can an expert on the phenomenon being explained, who

6 It may be objected that it is hard, if not impossible, to separate the explanatory vehicle from its communicative context—​that is, from the episode(s) in which it figures. I accept that there is a tight connection between vehicles and episodes. For instance, it will often be difficult to interpret a vehicle absent knowledge of the participants and other details of the communicative contexts in which it plays a part. But I think that with respect to many vehicles we can, to a first approximation, settle on an interpretation of the vehicle that is stable across a range of explanatory episodes, and in that sense treat the vehicle as independent of the episodes in which it figures. I thank Marie Kaiser for drawing my attention to this point.

288  The Scientific Imagination is provided with the explanation in appropriate conditions (it is being communicated clearly, the expert is in a position to take up the explanation, etc.), come to understand the phenomenon (or understand it better)? If the answer is positive, then the explanation is a good one, and the more understanding the explanation generates in such a suitably placed expert, the greater its explanatory power. That is, the more counterfactual inferences the expert is able to perform on the basis of the explanation (and the stronger and more precise those inferences are), the better the explanation is. What counts as a suitably placed expert? To a significant extent, that will depend on the context. Roughly speaking, an expert is someone who is acquainted with the state of the art in the relevant area, can use some of the relevant tools, and, typically, is part of a relevant epistemic community. Of course, such relativization to experts means that an explanation’s quality is not an agent-​independent matter—​it is tied to the abilities and epistemic state of science and scientists. This may raise some concerns, which I address later. What about the explanatory facts? Does the account I have been proposing have anything to say about the type of facts that are relevant to explanation? Indirectly, it does. Recall that the account of understanding requires an understander to be able to make good inferences about the target’s counterfactual behavior. If an explanation is a vehicle that enhances understanding, then it ought to convey information about the kinds of facts that underlie counterfactual behavior and that inform judgments about what the target would do if this or that condition were altered: facts about what makes a difference to the target’s behavior, as we may put it. In many scientific contexts these will be causal facts (Strevens 2008; Woodward 2003). But there is no reason to restrict explanatory facts to causes—​information about the constitution of a system may also allow one to make inferences about its counterfactual behavior, sometimes in combination with causal facts (Craver and Bechtel 2007). Other types of facts, like mathematical ones, may also at times allow one to make inferences, and would count as explanatory facts on the present proposal too (Jansson and Saatsi 2017). So the overall view, as developed so far, is this: Understanding consists in representing a target such that one can draw appropriate inferences about its counterfactual behavior. Explanations (i.e., explanatory vehicles) are successful to the extent that they contribute to understanding. To do so they should allude to difference-​making facts (i.e., causal and other dependence relations). But the kind of facts alluded to is not the only aspect that determines the explanatory credentials of a vehicle. It must also make the relevant

Metaphor and Scientific Explanation  289 facts cognitively accessible, allowing the understander to actually draw the relevant inferences. A good explanation, in other words, represents the relevant explanatory facts in a usable way. This is where metaphors enter the picture.

12.4  Information in Biology
I will approach the topic of explanatory metaphors via an example: information in biology. The notion that macromolecules such as DNA and hormones, as well as various cellular structures and processes (such as synapses and so-called positional information), store, transmit, and process information is very common. But it is also puzzling, since the application of intentional notions to things that are not agents—indeed, that appear far removed from agency—is unusual and its justification uncertain. In earlier work I argued that serious challenges arise if we take information talk literally, and that it is best to construe such notions as metaphors (Levy 2011). I will not rehearse that argument here. Instead, I'll take for granted that informational notions are used metaphorically within biology, at least in cellular and molecular biology, and discuss how such metaphors contribute to understanding and hence play an explanatory role. I should note that the overview in this chapter applies to the basic notions of information transfer (that is, communication or signaling) and does not apply as such to related notions such as information storage and processing. I think a similar metaphor-based treatment can be given to these as well, but I will not cover that here. Informational metaphors rely on our familiarity with the process of communication in the macroscopic, cognitively sophisticated, agential domain and apply the patterns and thought habits typical of such contexts to the less familiar domain of molecular and cellular biology. Ordinarily, an informational characterization appears in biological contexts in which one element regulates or exerts control over another—such as a gland exerting control over metabolic activity in some remote tissue via a hormone, or DNA exerting control over protein synthesis. Typically, such control occurs over a certain spatial or temporal gap (e.g., across the cell membrane) or between two separate structures (e.g., between two organelles). An informational characterization highlights certain properties of such a control process, primarily three:

(1) The directionality of the process—what is controlling what—via a designation of one element as the sender and another as the receiver. This is a matter of both spatial and temporal directionality. The sender sends messages and the receiver deciphers them. The sender acts before the receiver. The sender can influence the receiver, but the receiver cannot influence the sender.
(2) The relative stability of the message, given changes occurring at both ends of the communicative chain—the sender end and the receiver end. The sender is typically responding to some condition in its environment. It does that by sending a signal that communicates the new condition to the receiver. The fact that the signal's structure is stable across this interaction, whereas the sender and the receiver are active and changing, is an important aspect of communicative processes.
(3) It is not merely that sender and receiver are active, while the signal is passive. In communicative exchanges, there is typically a code or mapping, a kind of interpretation rule, generating a correspondence between states of the sender and states of the receiver. The existence of such a mapping is also highlighted by the informational description. It calls our attention to how variant states at the sender end correspond to variant states at the receiver end. Thinking in terms of an interpretation rule allows one to focus on the connection between the changes at the ends of the causal chain while de-emphasizing intermediate links.

The activity of hormones illustrates this picture nicely. Typically, hormonal signaling molecules are sent by a gland in one part of the body—say, in the brain or the liver. The signaling molecule is carried by the bloodstream to its destination. Once bound to the recipient, it either activates a secondary messenger or enters the cell itself, up- or downregulating cellular activity. Hormone molecules remain relatively unchanged in the process, whereas the gland and the target tissue change states. Describing this as if it were a case of signaling singles out the variation in the state of the sender (or the bodily parameter it is sensitive to, such as nutrient level) and how a corresponding metabolic activity occurs on the receiving end. By highlighting these kinds of features, an informational metaphor makes accessible various sorts of inferences about the system under description. To illustrate with a simple case, suppose a certain gland, such as the pituitary gland, has released a hormone, such as human growth hormone (HGH). The hormone enters the bloodstream and travels around in the body. It reaches,

Metaphor and Scientific Explanation  291 say, a muscle cell in the arm and binds to a receptor on its membrane. Through a series of intermediates, this causes sarcomere hypertrophy (the addition of muscle mass via an enlargement of its subunits). In many biological contexts, however, this kind of description is replaced by an informational metaphor, in which the gland is treated as sending a message to the muscle, which interprets it and executes a certain instruction. To put the matter vividly, the gland tells the muscle cells: “Commence sarcomere hypertrophy.” This calls to our imagination a familiar kind of interaction. It asks us to see the biological process in terms of two agents communicating via signals, using a code, where one directs the actions of the other. Such a gloss sidesteps the complex chain of events, moving directly from the release of HGH to muscle growth—​the former is a signal (sent by the pituitary gland) and the latter is the message it carries, to be delivered at the muscle, which interprets the signal according to a pre-​specified code to recover the message. Viewed this way, one can readily see what would happen if the signal were not sent or if it were altered, or what would occur if the state of the sender were different, and so on. This, as noted, is a fairly simple case. In cases in which the mapping is richer and the messages longer and more structured, as in DNA processing, the informational gloss will be correspondingly more subtle, and it will provide correspondingly more traction on the phenomenon. The foregoing is meant as a summary of one central use of informational language in cellular and molecular biology. I should note that, as I see it, a similar situation obtains in some other biological contexts—​for instance, when enzymes and other cellular actors are described as miniature machines. I won’t discuss these other cases, as that would take me too far afield (but for a detailed discussion of “machine-​likeness,” see Levy 2014). I mention this merely to indicate that biological information, in my view, is not a unique or isolated case of metaphor playing an explanatory role, at least in biology.
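The inferential payoff of the sender-receiver gloss can be made concrete with a small sketch. The state names and mappings below are invented for illustration and are not drawn from the chapter; the point is only that, once a sender, a receiver, and a code are designated, counterfactual queries about the system have definite answers.

```python
# Toy sender-receiver reading of hormonal signaling. All states, signals, and
# responses are invented placeholders used purely for illustration.

SENDER_CODE = {           # gland state -> signal released into the bloodstream
    "low_muscle_mass": "HGH_released",
    "normal": "no_signal",
}

RECEIVER_CODE = {         # signal received at the muscle -> response
    "HGH_released": "commence_sarcomere_hypertrophy",
    "no_signal": "maintain_current_state",
}

def muscle_response(gland_state, intercepted_signal=None):
    """What the receiver does, actually or counterfactually.

    Passing `intercepted_signal` overrides the sender's signal, which lets us
    ask what would happen if the signal were blocked or altered."""
    signal = intercepted_signal if intercepted_signal is not None else SENDER_CODE[gland_state]
    return RECEIVER_CODE.get(signal, "maintain_current_state")

print(muscle_response("low_muscle_mass"))                                  # actual case
print(muscle_response("low_muscle_mass", intercepted_signal="no_signal"))  # blocked signal
```

Reading the interaction through the code makes the directionality and the sender-to-receiver mapping explicit, while the intermediate biochemical steps drop out of view, which is just the simplification the metaphor is meant to buy.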

12.5  Metaphors and Explanation
Consider again the overall picture of understanding and explanation I've sketched: understanding something is representing it in a way that allows you to draw solid inferences about its actual and counterfactual behavior. An explanation is a representation that is geared toward understanding, and it is successful to the extent that it enhances understanding. A metaphorical description can be explanatory to the extent that it succeeds in enhancing

292  The Scientific Imagination understanding; in this respect it is just like any explanation. But metaphors do have special features, which affect how they function in explanation. So far I have only gestured at these features in the course of discussing informational metaphors. Let me now address this in more explicit and general terms. I will not be relying on a full-​fledged account of metaphor. That is partly because I think there may be different types of metaphors and I doubt there is a single account that covers them all, and partly to allow my overall story to be more ecumenical. In particular, I will not be presupposing a stance with regard to what is often seen as the central question surrounding metaphor—​ whether metaphors have a distinct kind of meaning (metaphorical meaning) and how it relates to literal meaning. The use to which I will put metaphor should be compatible with different answers to this question. Instead I highlight several (relatively uncontroversial) features of metaphor—​ enough to show how metaphor fits into the overall picture of understanding and explanation. The first feature, already noted in connection with information, is that metaphors typically work by juxtaposing the unfamiliar with the familiar. They illuminate something we do not know much about by treating it as if it were something we know quite well. Such is the case in the paradigmatic example of Juliet and the sun, and, as we have seen, in the case of hormonal activity and signaling. In some discussions the familiar object or notion is called the metaphor’s secondary subject and the unfamiliar object, which the metaphor is intended to illuminate, is called the primary subject.7 Second, metaphors engage the imagination. They are a type of figurative device, imposing an imaginative description on a real-​world target. To explain a little further what I mean, let me say a few words about the relevant notion of imagination. Frigg and Salis (this volume) lay out different conceptions of the imagination and then assess their relevance to modeling and thought experimentation. They especially emphasize a distinction between imagining as imagistic or perception-​like versus imagining as a propositional attitude—​belief-​like in its inferential character but less tightly anchored to empirical truth. Frigg and Salis seem to regard this as an exclusive distinction—​one can either regard a proposition as imaginary or entertain a mental image but not both. But I think that in some cases, among them the case of scientific metaphors, both modes of thinking 7 Older texts on metaphor sometimes follow I. A. Richards (1936) in speaking of a tenor versus vehicle distinction instead of primary and secondary subjects, respectively.

Metaphor and Scientific Explanation  293 may be present, and may interact in significant ways. One can entertain a mental image and use it to highlight important propositions; one may reason through the consequences of a given proposition by appealing to corresponding imagery. At the first-​person level, I can report that the two modes of imagination often seem to work jointly in my own thinking. (This is surely an area that would reward empirical study.) At any rate, I will be assuming that in metaphorical thinking both propositional and imagistic imagining are present and important—​and indeed that they can be mutually reinforcing. These two features of metaphor—​reliance on the familiar and the role of the imagination—​are central components of metaphor’s ability to frame a target, as it is sometimes put (Camp 2009, this volume; Lepore and Stone 2014). A frame (or a framing effect), as I am using the term, is a way of conceiving the metaphor’s primary subject (i.e., the target) that is striking and illuminating. Specifically, a frame directs one’s thinking toward certain properties and patterns of the primary subject via their association with the more familiar secondary subject. In this fashion, they allow one to utilize existing knowledge and reasoning skills. The understanding associated with metaphor therefore stems from the way in which it recruits preexisting cognitive resources to new tasks and domains. (This is connected to the often, unjustly disparaged idea of understanding as familiarity.) Metaphors frame a target and thereby enhance our ability to think about it, including in particular to draw inferences about its behavior. That is how they contribute to understanding—​that is, explain. Finally, a further property of metaphors is that they often form families or networks: a collection of metaphorical descriptions that draw on the same resources, often mutually enhancing one another. This idea is central in the work of George Lakoff (Lakoff 1993; Lakoff and Johnson 1980), who sometimes speaks of metaphors as cross-​domain maps. Such a map, says Hills (2016), “is a standing pervasive culture-​wide disposition to conceive one fixed sort of thing (e.g. love affairs), as and in terms of another fixed sort of thing (e.g. journeys).” For instance, Lakoff notes that English has many everyday expressions that are based on a conceptualization of love as a journey . . . : Look how far we’ve come. It’s been a long, bumpy road. We can’t turn back now. We’re at a crossroads. We may have to go our separate ways. The relationship isn’t going anywhere. We’re spinning our wheels. Our relationship

294  The Scientific Imagination is off the track. The marriage is on the rocks. We may have to bail out of this relationship. (1993, 206)8

Such a network or family effect expands the reach and utility of a metaphorical discourse while also making it more readily interpretable—​the associations between the target and the vehicle are reinforced and more readily discerned. When a metaphor is embedded in such a network, this too enhances its contribution to understanding. For the network may contain further, and perhaps better, patterns of inference associated with other members of the family, or it may sharpen and highlight features that occur across different nodes in the network (members in the family). To this extent metaphors that belong to a broader family may be able to recruit a richer set of cognitive resources and may lead to better understanding. Indeed, I think we see this effect in the context of informational metaphors in biology, discussed earlier. The ideas of signaling and communication belong to a larger set of concepts having to do with information—​information storage, the combining and processing of information, methods of preventing information from getting corrupted or lost, and so on. Biologists have made use of this extended network of informational concepts in both heuristic and explanatory roles. Overall, then, a metaphor frames a target by imaginatively juxtaposing it with a familiar subject matter. In this way it highlights certain properties and makes accessible certain patterns of reasoning, thus enhancing understanding. This is how metaphors play an explanatory role.

12.6  Metaphors and Models
Before looking at some potential objections, let me comment on the connection between metaphor and models—an issue that is especially pertinent in this volume. In both cases there is substantial use of the imagination, and in both there is typically a significant mismatch with reality. Thus some of the issues raised, in particular regarding understanding and explanation, are similar. One way to think about models and metaphors is in terms of surrogative representation. In surrogative representation we regard one thing—a target
8 I should clarify that the fact that metaphors often form families is not specifically tied to Lakoff's well-known views on metaphor, which I do not endorse.

Metaphor and Scientific Explanation  295 system—​as if it were something else, often something simpler.9 A number of authors have suggested a treatment of models centered around the idea of surrogative reasoning (Godfrey-​Smith 2006; Weisberg 2013). When one models a gas as a large collection of inelastic point masses, one is in effect thinking of the gas in terms of a different system—​one in which particles are simpler and “better behaved” than they actually are. Thinking this way affords a variety of epistemic advantages, such as tractability, highlighting certain effects or factors, and facilitating the communication of ideas and results. Metaphors and models are thus members of a broader family—​they are both forms of surrogative representation. Now, in both modeling and the use of metaphors we can understand what is going on in terms of an employment of the imagination. This will come as no surprise to readers of this volume (and of philosophical writing on models and metaphors more generally). It may be that surrogative reasoning more generally can be understood in terms of imaginative thinking; I will not take a definite stance on this point. Be that as it may, the connection to the imagination raises significant issues. For instance, while both modeling and metaphor simplify investigation and analysis and facilitate the application of certain sorts of tools, both also give rise to potential errors. In both cases, there is the potential for the model or metaphor to be overapplied, reified, or otherwise taken “too seriously.” Such errors, it seems, are far less common in other, more “direct” forms of representation and analysis. While metaphors and models are similar in being forms of surrogative representations, they also differ in some significant ways. In particular, models are typically more tightly specified (often mathematically specified) and clearer both in terms of their content and in terms of how they relate to their targets in the world (here my discussion overlaps, in some significant ways, with Calcott et al. 2015). The key issue can be summarized as follows. In metaphor, the secondary subject itself is not typically described in detail; which of its properties are most relevant is often unclear; and what the relation between the primary and secondary subjects is, exactly, is usually left open. This makes it difficult to know whether two (or more) people interpret the metaphor similarly. A model, in contrast, is typically specified in relatively precise detail. Its content can be readily discerned and, most important, agreed upon by different 9 This talk of “things” need not be construed ontologically. I discuss the ontology of models in Levy 2015.

296  The Scientific Imagination researchers. This is central, as it affects the degree to which a surrogative representation can be assessed and deployed by a collective, interpersonal body such as a scientific community. Let me illustrate these ideas with two examples whose target object is DNA. First, consider the notion that genetic material resembles a text. Perhaps the best-​known example is the tendency—​now somewhat less common than it was, say, a decade ago—​to describe the genome as the “book of life.” What exactly follows from describing genetic material as text-​like, or as a book? Does it contain analogues of words, sentences, or chapters? Does it have a beginning and an end? Should we understand the metaphor to mean that knowledge of the “language” in which the book is written is sufficient (or nearly so) for understanding the ins and outs of inheritance and development? It seems that no substantial agreement exists (nor has one ever existed) on the answers. To be sure, we have some idea of how to interpret the book metaphor—​it directs our thinking in some ways (for example, toward primary linear structure and toward read/​write mechanisms)—​and so it has certainly served to inspire research and guide our thinking about the genome. But these roles do not require a resolution of the ambiguities involved. In contrast, consider a worm-​like chain model, a standard simple tool for studying the mechanics and spatial organization of polymers such as protein and DNA. Despite the colorful name, this is not a case of metaphor. Here a multi-​unit polymer is treated as if it were a long, uniformly flexible rod (a “worm”). Such a model is often used to assess, in quantitative terms, the extensibility of a DNA molecule, the amount of force it can withstand, and related properties (Nelson et  al. 2013). In analyzing the worm-​like chain model, it is clear what the model says, how it depicts its target, and what exact implications this description has. To be sure, there is a continuum here, with substantial differences in terms of clarity and precision. But we can say that while metaphors lie at the “opaque” end of this specification spectrum, models sit at the “transparent” end. Furthermore, a theoretical construct’s position along the specification spectrum is not a static matter. Ideas often originate as metaphors and later get transformed into models (sometimes vice versa). Another, closely related aspect in which models and metaphors differ (again, in a continuous fashion) is the degree of match between vehicle and world. Matching concerns the fidelity with which the model/​metaphor depicts its biological target. There are different views about how models represent worldly targets, and I do not wish to put forward a view here. But I think

Metaphor and Scientific Explanation  297 that however one views model-​based representation, a model’s commitments vis-​à-​vis the world are typically clearer and more easily evaluable. This is a direct consequence of the fact that models have more definite content as such (i.e., they are more precisely specified). But it is a distinct issue that can raise distinct questions and problems. There is more to be said about modeling and metaphor. Camp’s chapter in this volume covers some of the ground not covered here. But the key purpose of comparing and contrasting models and metaphors is to suggest that we can treat both in an analogous fashion when it comes to explanation. Just as metaphors can contribute to understanding inasmuch as they enhance our ability to track and reason about a target’s counterfactual behavior, so can models. Indeed, in some respects models can aid understanding even more than metaphors: because they are more precisely specified, it is often easier to rely on them for inferential purposes and, speaking generally, they can thereby make a greater contribution to understanding.
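To illustrate how tight specification supports definite inference, here is a minimal sketch of the worm-like chain's force-extension behavior. It uses the standard Marko-Siggia interpolation formula, which the chapter does not state explicitly, and the parameter values are typical textbook figures for double-stranded DNA, chosen only for illustration.

```python
# Worm-like chain force-extension via the Marko-Siggia interpolation formula.
# Formula and parameter values are standard textbook material, used here only
# as an illustration; they are not taken from the chapter.

def wlc_force(extension_nm, contour_length_nm, persistence_length_nm, kT_pN_nm=4.11):
    """Force (in pN) needed to hold the chain at a given end-to-end extension."""
    x = extension_nm / contour_length_nm  # fractional extension, must be < 1
    return (kT_pN_nm / persistence_length_nm) * (1.0 / (4.0 * (1.0 - x) ** 2) - 0.25 + x)

# A definite counterfactual question the model answers: what force would hold a
# 1000 nm stretch of DNA (persistence length ~50 nm) at 90% of its contour length?
print(round(wlc_force(900.0, 1000.0, 50.0), 2), "pN")  # about 2.1 pN
```

Because every quantity in the model is specified, different researchers who adopt it will agree on what it says and on what follows from it, which is the contrast with the book-of-life metaphor drawn above.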

12.7  Assuaging Some Concerns
I have argued that metaphors can play a role in explanation on principled grounds: given the way metaphor works, the idea of explanatory metaphors is a natural consequence of the account of understanding and explanation described in the earlier sections of this chapter. Those who do not accept my proposals concerning understanding and explanation might, for that reason, have difficulty accepting the idea of explanatory metaphors. However, the idea that metaphors can play a genuine explanatory role is also subject to concerns of other sorts, and in this section I would like to address some of them. This will also allow me to say a few words about how the ideas I have outlined relate to other close-by issues, such as model-based explanations and inference to the best explanation. First, some may object to the idea of metaphors playing a role in explanation on the simple grounds that metaphors are, at least very often, literally false.10 For instance, I have suggested that cells and molecules do not genuinely communicate, send and receive signals, and exchange information. It is only metaphorically so. Some will immediately wonder: How can such a description explain? Must not an explanation be true? This kind of reaction

10 The exception being “twice true” metaphors such as “Moscow is a cold city” (Camp 2009).

is, in one sense, understandable. The assumption that good explanations must be true is widespread in philosophy. But as stated, the objection seems to come down to a bare assertion that explanations must be true. If so, this would beg the question: I have provided an account of how metaphors can explain and illustrated it with the case of biological information. If that account is on the right track, then explanations needn't always be true. Moreover, I think we can draw a comparison with explanations that employ models. Many models involve idealization—they deliberately introduce false assumptions. A well-known example often discussed in this context is an explanation of Boyle's law (the inverse relationship between pressure and volume seen in many gases) on the basis of the kinetic model of gases. If an ordinary gas is enclosed in a container at a fixed temperature, then as volume increases, pressure decreases, and vice versa. Essentially, the explanation is that as volume decreases, the average rate at which gas particles collide with the container's walls increases. As particles are presumed identical in mass and velocity, an increase in the rate of collisions against the wall amounts to an increase in the force per unit area (higher pressure). Notice that even such an informal description omits reference to collisions among the molecules and assumes that collisions are perfectly elastic (that is, particles do not lose energy as they bump against the walls; otherwise pressure would decrease over time). Both of these assumptions are idealizations. They do not hold true of the target system: in any actual gas, countless intermolecular collisions occur every second, and individual collisions need not be perfectly elastic. Nevertheless, as any textbook in physical chemistry will attest, the kinetic model is regarded as an excellent explanation of Boyle's law.
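For readers who want the inferential structure of this explanation laid out, the kinetic model's core result can be stated in one line; this is a standard textbook derivation rather than anything argued for in the chapter.

```latex
% Kinetic model of an ideal gas: N identical particles of mass m with mean
% squared speed \langle v^2 \rangle, enclosed in a container of volume V.
P = \frac{1}{3}\,\frac{N m \langle v^2 \rangle}{V}
\quad\Longrightarrow\quad
P V = \frac{1}{3}\, N m \langle v^2 \rangle .
% At fixed temperature \langle v^2 \rangle is fixed, so the product PV is
% constant and pressure varies inversely with volume (Boyle's law).
```

On the account developed in this chapter, this is precisely the kind of representation that supports understanding: it licenses counterfactual inferences such as "halve the volume at fixed temperature and the pressure doubles," even though the assumptions behind it are idealized.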

Speaking generally, I think that model-based explanation can readily be embedded in the account of understanding and explanation I have sketched here. There are also several existing, well-developed accounts of how idealized models do explanatory work (e.g., Batterman 2002; Bokulich 2011; Potochnik 2017; Strevens 2008). The present point, however, is simply that the idea of explanations that deviate from truth is not as unusual as it may initially seem, and it can be made sense of. And at any rate, we have reasons, independent of the discussion of metaphor, to question the assumption that explanations must be true.
A second source of concern about metaphorical explanations relates to applicability. Consider again the case of information in biology: what determines whether we can apply this type of metaphor to a given biological phenomenon or not? What makes it the case that DNA metabolism, for instance, is appropriately described in informational terms whereas digestion or natural selection are not? Since the account I have given does not impose a requirement of truth (or some other tight match) between the explanandum and the explanans vehicle, what prevents us from treating any phenomenon we wish as involving information (or, for that matter, clothing it in any other metaphorical guise)? I think this point is important and raises a real issue. It is hard to say precisely what makes a given metaphor apt in one context and not so apt in another context (Hills 2001). There is a sense in which, with metaphor, anything goes. But it is important not to exaggerate the importance of this point. Perhaps anything goes, but some things go better than others. That is to say, perhaps we can metaphorically describe any process as an informational exchange, but in many cases that would not be very helpful, since the features highlighted by the metaphor are not the ones we want to highlight. Digestion is not a process where there is an interesting input-output mapping, or where a passive intermediary exists and can be usefully regarded as a signal. Natural selection is too distributed a process to allow for a clean designation of a sender and a receiver, and the contingency of its outcomes is such that describing it in terms of a code-like mapping is unlikely to be useful. In these kinds of cases the informational metaphor could be applied, but it wouldn't contribute much to understanding. Moreover, we may draw here a kind of internal/external distinction.11 From an external standpoint, there is considerable freedom in deciding whether to apply a given metaphor in a particular context. But once we have settled on a particular metaphorical description, there are definite standards of correctness within the frame it invokes. Importantly, these standards reflect real features of the process being described. If we describe a hormonal interaction in informational terms, then this settles which element should be described as the sender (the relevant gland) and which should be designated the receiver (the muscle tissue) and what the message would be ("commence sarcomere hypertrophy" rather than "inhibit glycolysis"). This illustrates that although metaphors need not be literally true descriptions, they serve in explanation (when they do) because they allow us to capture real, objective properties of the target. Informational language in biology, as suggested previously, serves as a way of pointing to the real (literally true) causal properties of certain phenomena—the directionality of the process, the mapping between sender and receiver, and so on. These features are independent of one's choice of whether to employ an informational metaphor, and they place

11 This paragraph overlaps with some of the points made in Levy 2011, 652.

constraints on claims made within it. Thus, while metaphors are put to use on pragmatic and cognitive grounds, because of their potential to enhance the accessibility of certain properties and inferences, it is also important to bear in mind that once they are invoked there is a right way and a wrong way to use them, as a consequence of how they latch onto real-world properties of the target. Let me also comment on another issue connected with the relationship between explanation and truth: inference to the best explanation (IBE). Roughly speaking, IBE is a non-deductive inference rule that instructs us to believe the explanans of our best explanations. IBE is seen by many as a vital inferential tool, both within science and in philosophical discussions (e.g., in debates over scientific realism). One might worry that the account that I have offered, in allowing for explanations that are not true (and may indeed be rather distant from the truth), undermines IBE, and that this is a price not worth paying. I cannot enter into an extended discussion of IBE here, but I do want to make two points in response to this type of concern. First, I doubt that the concern just stated really threatens IBE as such. For strictly speaking, IBE has an "if true" proviso: the IBE inference rule states that we ought to believe in the proposition that, if true, would provide the best explanation (see Douven 2017, §1). Such an "if true" proviso entails, at the very least, that the candidate explanations that enter into an IBE should be ones that we do not know to be false, or else they would not even be admissible as candidate explanations. For instance, in the case of the kinetic model of gases, we know full well that the assumption that gas molecules do not collide is false. Therefore, we will not take the model's success in explaining Boyle's law as an indication of the truth of that assumption. The situation is essentially the same in the case of metaphors. In using metaphors we are aware (indeed, often acutely) that our explanation isn't a true depiction of the world. Therefore, the use of metaphor in explanation does not invalidate IBE; such explanations ought not to serve as inputs to IBEs to begin with. Second and more generally, suppose an account along the lines I have offered does pose a threat to IBE. Still, I tend to think that if we can show that some part of science makes valuable use of metaphors, and if we can make sense of their contribution to explanation and understanding, then it is our views about IBE that ought to be adjusted and not vice versa. A final aspect of metaphors that tends to raise concern among philosophers is their potential to mislead. Metaphors can be overextended, reified, and misinterpreted in various ways. Their stoking of the imagination may lead

Metaphor and Scientific Explanation  301 to careless reasoning and obscure potentially important facts. I think this is a reasonable concern. The information metaphor, for instance, can lead one to assume that a process under description is linear and unidirectional—​going from sender to receiver in a straightforward way. This may result in simplistic conceptions of molecular biological processes, and the overlooking of feedback and other complex dynamics (Sarkar 1996). Informational language, it has been suggested, can lead us to neglect the importance of context and to assume relatively simple, deterministic, code-​like interpretation schemas—​ potentially oversimplifying biological reality (Griffiths 2001). These potential pitfalls of metaphor are real. But they ought not, I think, lead to a blanket rejection of metaphor, in explanation or elsewhere. For we can avoid the pitfalls if we are sufficiently mindful. Norbert Wiener is often credited with the aphorism “The price of metaphor is eternal vigilance.”12 I agree with Wiener; metaphors demand vigilance. But Wiener could well have said that the price of idealization is eternal vigilance, for idealizations can mislead us too. That, however, does not seem like a good reason to deny models an explanatory function. Moreover, vigilance is a price that we can afford to pay, at least in some cases—​those in which the cognitive payoff of the metaphor is substantial. And it is a price we know how to pay, in the sense that we can exercise caution and identify pitfalls associated with the metaphors (and models) we use. I have just alluded to some of the pitfalls of the information metaphor, as pointed out by Sarkar, Griffiths, and others. Thus, while I certainly accept that we should be mindful of the potential of metaphors to mislead, it seems that rejecting metaphors on such grounds is an overreaction.

12.8  Concluding Remark
I'm well aware that the views I have put forward in this chapter are unorthodox. Most significantly, there is a shifting of the locus of discussion to the concept of understanding, treating explanation as a derivative category. This involves a substantial measure of psychologism about explanation—that is, a view of explanation as closely linked to scientific agents and how they represent the world and think about it. It is this move to understanding that has
12 The aphorism is usually credited to Wiener without an exact reference, and in the few cases where a reference is given, it is incorrect. As far as I have been able to ascertain, it does not appear in any of Wiener's writings. He may have said it orally.

allowed me to treat metaphor as a legitimate brand of explanatory vehicle. Psychologism about explanation, as noted early on, has not been popular in philosophical discussions of explanation, especially since Hempel. It has been seen as a threat to the objectivity of judgments about explanation and theory choice. But I suggest that by anchoring understanding to an ability that agents have vis-à-vis the external world, specifically the ability to draw inferences about counterfactual behavior, we can overcome this concern. We can regard explanation as something agents do while maintaining agent-independent standards of evaluation.

References
Batterman, R. (2002). The Devil in the Details. New York: Oxford University Press.
Bokulich, A. (2011). "How Scientific Models Can Explain." Synthese 180, no. 1: 33–45.
Calcott, B., Levy, A., Siegal, M. L., Soyer, O. S., and Wagner, A. (2015). "Engineering and Biology: Counsel for a Continued Relationship." Biological Theory 10, no. 1: 50–59.
Camp, E. (2009). "Two Varieties of Literary Imagination: Metaphor, Fiction, and Thought Experiments." Midwest Studies in Philosophy 33: 107–130.
Cohen, J., and Callender, C. (2006). "There Is No Special Problem of Scientific Representation." Theoria: An International Journal for Theory, History and Foundations of Science 21, no. 1: 67–85.
Craver, C. F., and Bechtel, W. (2007). "Top-down Causation Without Top-down Causes." Biology and Philosophy 22, no. 4: 547–563.
Douven, I. (2017). "Abduction." In The Stanford Encyclopedia of Philosophy (Summer 2017 ed.), edited by Edward N. Zalta. https://plato.stanford.edu/archives/sum2017/entries/abduction/.
Friedman, M. (1974). "Explanation and Scientific Understanding." Journal of Philosophy 71: 5–19.
Frigg, R. (2010). "Models and Fiction." Synthese 172: 251–268.
Godfrey-Smith, P. (2006). "The Strategy of Model-Based Science." Biology and Philosophy 21: 725–740.
Godfrey-Smith, P., and Sterelny, K. (2016). "Biological Information." In The Stanford Encyclopedia of Philosophy (Summer 2016 ed.), edited by Edward N. Zalta. https://plato.stanford.edu/archives/sum2016/entries/information-biological.
Gopnik, A. (1998). "Explanation as Orgasm." Minds and Machines 8, no. 1: 101–118.
Griffiths, P. E. (2001). "Genetic Information: A Metaphor in Search of a Theory." Philosophy of Science 68: 394–412.
Hempel, C., and Oppenheim, P. (1948). "Studies in the Logic of Explanation." Philosophy of Science 15: 135–175.
Hills, D. (2016). "Metaphor." In The Stanford Encyclopedia of Philosophy (Fall 2017 ed.), edited by Edward N. Zalta. https://plato.stanford.edu/archives/fall2017/entries/metaphor.
Jansson, L., and Saatsi, J. (2017). "Explanatory Abstractions." British Journal for the Philosophy of Science 70, no. 3: 817–844. https://doi.org/10.1093/bjps/axx016

Kitcher, P. (1989). "Explanatory Unification and the Causal Structure of the World." In Scientific Explanation, edited by P. Kitcher and W. Salmon, 410–505. Minneapolis: University of Minnesota Press.
Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
Lakoff, G. (1993). "The Contemporary Theory of Metaphor." In Metaphor and Thought, 2nd ed., edited by Andrew Ortony. Cambridge: Cambridge University Press.
Lakoff, G., and Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
Lepore, E., and Stone, M. (2014). Imagination and Convention: Distinguishing Grammar and Inference in Language. Oxford: Oxford University Press.
Levy, A. (2011). "Information in Biology: A Fictionalist Account." Noûs 45, no. 4: 640–657.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172, no. 3: 781–798.
Lombrozo, T. (2011). "The Instrumental Value of Explanations." Philosophy Compass 6: 539–551.
Nelson, M. R., King, J. R., and Jensen, O. E. (2013). "Buckling of a Growing Tissue and the Emergence of Two-Dimensional Patterns." Mathematical Biosciences 246, no. 2: 229–241.
Potochnik, A. (2017). Idealization and the Aims of Science. Chicago: University of Chicago Press.
Richards, I. A. (1936). The Philosophy of Rhetoric. London: Oxford University Press.
Sarkar, S. (1996). "Decoding 'Coding'—Information and DNA." BioScience 46: 857–864.
Strevens, M. (2008). Depth: An Account of Scientific Explanation. Cambridge, MA: Harvard University Press.
Trout, J. D. (2002). "Scientific Explanation and the Sense of Understanding." Philosophy of Science 69, no. 2: 212–233.
Weisberg, M. (2013). Simulation and Similarity. New York: Oxford University Press.
Wilkenfeld, D. (2013). "Understanding as Representation Manipulability." Synthese 190, no. 6: 997–1016.
Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.

13 Imaginative Frames for Scientific Inquiry
Metaphors, Telling Facts, and Just-So Stories
Elisabeth Camp

In theories of scientific representation and investigation, metaphor has long been treated as a form of alchemy, with one of two divergent attitudes. The celebratory camp, led by the likes of Vico, Shelley, and Mary Hesse, takes metaphor to be distinctively equipped to achieve a mystical communion with nature—a mode of representation that unlocks the universe's secrets and even creates new worlds. Often, subscribers to this view take all language and thought to be ultimately metaphorical, or at least take metaphor to be the truest embodiment of the basic mechanisms by which reference, truth, and understanding are achieved. The dismissive camp, helmed by the likes of Hobbes, Locke, and Zenon Pylyshyn, rejects such representational and ontological profligacy, and instead treats metaphor as superstitiously positing occult, non-referring forces and entities. At best, metaphor is a decorative trope or a mechanism for inspiration; at worst, it spins bubbles of self-confirming pseudo-science. This opposition appears especially stark given a positivistic conception of science as the logical subsumption of observation sentences under general theoretical laws. Few endorse this conception today. Since at least Quine (1951) and Kuhn (1962), philosophers have noted that scientists bring a host of only partially articulated theoretical, practical, and empirical assumptions to bear in investigating the world, and that distinct patterns of attention and explanation can motivate distinct interpretations of any given bit of data. A more recent trend, exemplified by Ronald Giere, Peter Godfrey-Smith, Roman Frigg, and Michael Weisberg, points to the crucial role of intermediate constructions—"models"—that are known to differ from the actual world in significant ways. Both developments have had the salutary effects of dispelling a false picture of scientific theories as transparent descriptions embedded in purely

logical structures, and of connecting our theoretical understanding of scientific investigation, representation, and justification more closely to actual scientific practice. Less directly, they have also enriched our understanding of rationality, by demonstrating an essential role for imagination within a paradigm case of rational inquiry. However, theorists who advocate a less simplistic view of scientific theorizing often lump together multiple types of indirect representation under the general banner of "models." Further, some of these theorists, in their zeal to oppose a naively descriptivist realism, have sometimes concluded that all theories are mere fictions levied in the service of competing pragmatic interests. Thus we seem to return full circle to the claim that all representation is essentially figurative, but with fiction now occupying the preeminent role once accorded to metaphor. In this chapter, I distinguish among a range of representational tropes, which I call "frames," all of which guide our overall interpretation of a subject by providing a perspective, or an intuitive principle for noticing, explaining, and responding to that subject. Frames play a theoretical role closely akin to that commonly ascribed to models. But where much of the discussion of models focuses on their ontological status and representational relation to reality, I focus on the cognitive structures and abilities that are generated by frames, and on the imaginative activities that exploit them. Further, where many theorists of modeling have aimed to explain models by positing a single common representational relation, I focus on distinct ways that scientific representations can fruitfully depart from representing "the truth, the whole truth, and nothing but the truth." Specifically, where recent discussion of models draws inspiration from fiction, I focus on metaphor. My aim here is primarily descriptive: I want to identify the shared features of frames that make them powerful interpretive tools, distinguish among various ways they can work, and draw out similarities and differences between their application to everyday cognition and scientific inquiry. I believe the discussion of frames here also provides the resources for identifying central norms on frames' epistemic aptness, in both general and particular cases. Further, I think that once we assess frames for epistemic aptness, we can justify a significant epistemic role for frames within scientific inquiry, and even at the putative end of inquiry. However, establishing these normative consequences is a task for another occasion (Camp 2019). I start by using metaphor to introduce the broader family of perspectival frames, and distinguish metaphor from some of its close cousins, especially telling details, just-so stories, and analogies, as they function in the context

of ordinary discourse. I then illustrate these various species at work within scientific inquiry, and use them to identify key differences in the sorts of gaps that models can open up between representation and reality. I conclude by advocating a mild ecumenicalism about scientific models: although most models are deployed in support of importantly similar cognitive and epistemic functions, there is no single ontological status or representational relation common to all.

13.1  Frames, Perspectives, and Characterizations

Begin with perhaps the most influential metaphor about metaphor in recent analytic philosophy, from Max Black:

Suppose I look at the night sky through a piece of heavily smoked glass on which certain lines have been left clear. Then I shall see only the stars that can be made to lie on the lines previously prepared upon the screen, and the stars I do see will be seen as organised by the screen's structure. We can think of a metaphor as such a screen, and the system of "associated commonplaces" of the focal word as the network of lines upon the screen. We can say that the principal subject is "seen through" the metaphorical expression—or, if we prefer, that the principal subject is "projected upon" the field of the subsidiary subject. (Black 1954, 288)

I think this passage expresses an insightful and basically correct view of metaphor. But it is unsatisfying as it stands, in two ways. First, there is the problem of explicitness. Because it is itself a metaphor, Black’s image of smoked glass etched with clear lines does not directly articulate a claim about how metaphor works; further, the subsequent paraphrases or elucidations introduce additional metaphors, not all of which are clearly consistent. So at a minimum we need to spell out what talk of “screens” and “projections,” of “seeing through” and “organizing structure,” amounts to. Second, there is the problem of distinctiveness. In the paragraph preceding the quoted passage, Black articulates the core idea in less metaphorical language, saying that “the . . . metaphor suppresses some details, emphasizes others—​in short, organizes our view of [the topic].” While this is more explicit, it also characterizes a range of other rhetorical tropes that “frame” and “filter” thought, including fictions, slurs, and telling details. I think this is an

Imaginative Frames for Scientific Inquiry  307 important positive insight to be gleaned from Black’s remarks, rather than (just) a weakness. In this section, I spell out Black’s talk of metaphors as “organizing structures” in my own terms, as it applies to all these cases. In section 13.2, I tackle the question of how to differentiate among them. In everyday cognition, we frequently engage with the world using complex, intuitive ways of thinking about a subject, which I call characterizations (Camp 2003, 2015). The most familiar instances are stereotypes—​Black’s “systems of associated commonplaces.” But where stereotypes are culturally ubiquitous, characterizations can be more culturally restricted:  limited to a subdiscipline, a clique, even interlocutors in a particular conversation. In many cases, especially those relevant to science, characterizations are close to what philosophers call “conceptions”: a set of beliefs about an individual or a kind, which need not be extension-​determining, or constitutive of conceptual competence, or even reflectively endorsed by the agent, but which are easily evoked in thinking about the subject and provide the intuitive “mental setting” (Woodfield 1991, 551) or background against which specific beliefs and questions are formulated. Most characterizations are relatively inchoate and largely tacit: an intuitive patchwork of more or less unreflective and unarticulated assumptions. They also tend to be highly malleable, depending on the issues, interests, and contrasts that happen to be operative within the current context. In order to impose more coherence and stability on our own intuitive thinking, and in order to coordinate on common intuitive assumptions in communication, we frequently employ interpretive frames. As I will use the term, frames are representational vehicles—​a slogan, say, or a diagram, or a caricaturing cartoon—​under an intended interpretation that itself functions as an open-​ ended principle for understanding a target subject. Metaphors constitute a canonical class of framing device, but there are many other types of frames, even just among verbal representations. Notable cases include slurs, as in “He’ll always be an S” (Camp 2013); telling details, as in “Obama’s middle name is Hussein. I’m just saying” (Camp 2008); and just-​so stories, as in “It’s as if Jane had a puppy who died when she was little, and she’s still convinced it was her fault” (Camp 2009). These tropes differ in their rhetorical operations and effects in ways we’ll discuss later. But what they all have in common, in virtue of which they function as frames, is that they proffer a principle for organizing one’s overall intuitive thinking about the target—​what I call a perspective. Perspectives determine what information an agent notices and remembers about the subject; they guide how the

agent assimilates and explains that information within the context of her other assumptions; and they guide how the agent evaluates and responds to it (Camp 2019). Thus, the function of frames is to express perspectives, which function to generate and regulate characterizations, which are themselves intuitive structures of assumptions about particular subjects. Not all perspectives are expressed by frames; some are too multivalent to be crystallized into a single slogan, or no one has yet happened or needed to do so. When a frame does express a perspective, though, that perspective goes well beyond the representational content encoded by the framing vehicle itself. Perspectives are principles for interpretation rather than particular thoughts or contents in themselves. As such, they are open-ended, in two senses: they provide principles for updating characterizations over time, as new information comes along, and they generate characterizations of not just one but multiple, indefinitely many, different particular subjects. Frames are ubiquitous in ordinary life: in political discourse, intimate interpersonal arguments, informal commentaries on movies—anywhere that intuitive interpretation is at stake. Three features of frames, and the perspectives they express, are especially important for understanding their operations in general and within science. First, a frame presupposes a taxonomy: a basic level of analysis that partitions a domain of relevant entities into a space of contrasting possibilities (often also entailing superordinate and subordinate classifications relative to that basic level [Rosch 1978]). As we will see, this taxonomy in turn determines, at least roughly, what sorts of features are relevant for classifying individuals and kinds, and which features can and should be ignored. Second, at least in everyday cognition, frames frequently raise to attention or impute experientially vivid representations of highly specific features: for instance, that George has this sort of nose, or that people of group S have that kind of eyes. Ordinary characterizations also often represent features in ways that are affectively and evaluatively loaded: that noses like this are elegant, or that George is snobby. Different frames thus "color" the features they attribute to their subjects differently, by linking experiential, affective, and evaluative responses in intimate, intuitive ways (Camp 2015). Third and most important, frames structure our intuitive thinking about a subject. A metaphor, slogan, image, or diagram functions as a frame insofar as an agent uses it to organize and regulate her overall intuitive thinking about one or more subjects. In playing this role, a frame doesn't merely select certain features

Imaginative Frames for Scientific Inquiry  309 from the teeming mass of details as classificatorily relevant, nor does it merely evaluate or color a particular subset of features. Rather, it purports to determine, for any feature that might be ascribed to a subject, both whether and how it matters, by embedding that feature within the larger network constituted by the agent’s characterization of the subject. There are (at least) two distinct ways in which a feature can differ in the role it plays within a characterization (Camp 2003, 2013, 2015). First, some features ascribed to a subject are more prominent than others, in being more initially noticeable and quicker to recall. Following Tversky (1977), I analyze prominence (which he calls “salience”) as a function of two factors, each of which is contextually relative in a different way. On the one hand, a feature is diagnostic to the extent that it is useful for classifying objects in a given context, as the elliptical shape of a snake’s pupils might be useful for determining whether it is venomous. Because diagnosticity is taxonomy-​relative, frames that employ distinct taxonomies will draw intuitive attention to distinct features, and/​or assign distinct diagnostic implications to the same feature. On the other hand, a feature is intense to the extent that it has a high signal-​to-​noise ratio. What an agent counts as “noise”—​as the relevant background against which the current signal is measured—​varies widely, both in how locally restricted it is and in how cognitively mediated it is. So, for instance, the perceptual intensity of a light’s brightness relative to the ambient lighting in a room is fixed by a background that is both highly local and directly physical, while for a knowledgeable viewer the intensity of a pigment’s tonal saturation in a painting will be determined not just relative to the other colors in that particular picture but also against her assumptions about typical saturation levels in other paintings within that genre and from other historical periods. The total prominence of a given feature in an agent’s intuitive characterization of the subject is a function of both diagnosticity and intensity, where these interact both with each other and with the larger context in complex ways. Where prominence selects which features matter, the second dimension of significance, centrality, concerns how they matter. Characterizations connect features into rich explanatory networks, and centrality is a measure of a feature’s connectedness to other features. Some connections are conceptual, in the sense of being inferences that a competent thinker finds compelling (Peacocke 1992). However, conceptual status is neither necessary nor sufficient for a feature to play a central role in a characterization. On the one hand, many robustly conceptual inferences are too obvious and general to

be relevant for explaining why a particular target subject is as it is. And on the other hand, we often intuitively connect features in ways that are highly contingent. In ordinary cognition, these connections can be emotional, ethical, even aesthetic (Camp 2017). But especially in science, the explanatory connections we impute are causal. A good measure of centrality is mutability: how much the agent's overall thinking about the subject would alter if she no longer attributed a given feature f to the subject (Murphy and Medin 1985; Thagard 1989; Sloman et al. 1998).1

1 At least in a scientific context, a psychological criterion of mutability fits smoothly with an analysis of causal explanation that invokes "difference makers" (Strevens 2008; Woodward 2003). Roughly, an agent treats f as causally important to a subject A if the agent treats f as making a difference to A in ways that matter given the presupposed taxonomy, and f is central to the extent that the agent takes its potential alteration to affect many features that matter.

Prominence and centrality are structurally distinct ways in which a feature can matter intuitively. For instance, Barack Obama's ears or Donald Trump's hair may be highly prominent in our thinking without being represented as at all central to who that person is. Similarly, we might find it notable that a certain species of fox exhibits patches of white fur without according that feature any explanatory significance beyond random mutation within a limited gene pool. However, the two dimensions of cognitive mattering are not entirely disconnected. In particular, when a feature f's intensity departs markedly from a contextually determined baseline, this fact intuitively calls out to us for explanation. Sometimes we (justifiedly) dismiss such departures as mere anomalies, but often we seek to explain it in terms of the subject's other features. Thus, for some people, Obama's protruding ears are connected with his Spock-like nerdiness, or Trump's swooping hair with his grandiosity. More seriously, in the case of white fur, depigmentation has been correlated with hormonal and neurochemical changes associated with docility (Belyaev 1978; Trut 1999). In general, the desire to explain a prominent but apparently non-central feature may lead an agent to seek out explanations that make it more central. And conversely, a high degree of centrality tends to increase a feature f's diagnostic relevance and can lead us to raise our intuitive estimate of its actual intensity or statistical frequency and of the probability that the subject will possess other connected features (Diekman 2002; Judd and Park 2003; Ryan et al. 1996). These two dimensions of "mattering," prominence and centrality, generate a complex, intuitive organizational structure for all characterizations. However, most ordinary characterizations are only loosely

organized: different features have different weightings of prominence and are variously connected to other features, but those weightings and connections are inchoate, jumbled, and—as attested by the vast experimental literature on affective and cognitive priming—highly contextually malleable (Camp 2015). By contrast, a frame constitutes a unified interpretive principle that organizes the characterizations to which it applies into more coherent and stable wholes. So far, I have translated Black's metaphor for metaphor as a network of clear lines etched on smoked glass into a view of frames in general as overarching principles for selecting, classifying, and connecting a subject's features into a multidimensional, intuitive cognitive structure. But what does it mean to say that a frame imposes an intuitive structure on a characterization? The crucial insight that I take to be implicit in the quote from Black, and more generally in the ubiquitous talk of "perspectives," is that neither the perspective expressed by a frame nor the characterizations it generates represents an organizational structure. Rather, that structure must be implemented or instantiated within the agent's actual intuitive cognitive processes, so that the agent really is more likely to notice and quicker to recall features that are weighted as more prominent, and does intuitively connect central features with many others. As it is often put, frames offer cognitive Gestalts, much as the concepts "old lady" and "young lady" provide perceptual Gestalts for Figure 13.1. Thinking of frames as cognitive Gestalts, and explaining this in terms of implemented as opposed to merely represented structure, allows us to identify an important sense in which characterizations, perspectives, and frames are all non-propositional. In principle, with sufficient reflection and effort, an agent might be able to explicitly articulate the complete set of features she intuitively associates with a given subject. Likewise, with even more reflection

Figure 13.1  Ambiguous figure of old and young woman.

and effort, she might spell out the structure in which she intuitively arranges those features, perhaps by assigning numerical weights to reflect prominence and drawing directed graphs to illustrate explanatory connections. However, it is neither necessary nor sufficient for having a characterization that one explicitly entertain or endorse the propositions that specify that structure. Instead, having a characterization requires "getting" the Gestalt, so that the operative characterization actually structures one's intuitive cognition. Likewise, "getting" a frame involves being actually, if only temporarily, disposed to form the relevant characterizations. Further, "getting" a characterization or frame in this sense is partly but not entirely under voluntary control. Sometimes, as with slurs, insinuations, and stereotype threat, frames impose themselves on our thinking when we would rather resist (Camp 2013). Conversely, we may endorse a frame's cognitive utility but be unable to deploy it intuitively for ourselves. First encounters with scientific frames such as Feynman diagrams are frequently quite effortful, even when their primary advantage for those who are fluent with them is the way in which they foster an ability to navigate easily and flexibly about the topic. In cases where we want to but don't yet intuitively "get" a characterization, any finite bit of advice—for instance, being told that the young lady's necklace in Figure 13.1 is the old lady's mouth—may help it to "click," but no one such bit is guaranteed to succeed. In virtue of its intuitive Gestalt function, applying a frame is importantly a matter of imagination, but primarily in the synthetic sense (identified by Kant) of uniting a manifold of disparate elements into a coherent whole. It is distinct from the sort of imagination typically discussed by philosophers interested in make-believe or pretense (e.g., Currie and Ravenscroft 2003; Friend 2008; Walton 1990). In particular, where make-believe is a matter of experientially or abstractly conjuring contents that are taken not to be actually present, trying on a frame involves temporarily adopting a new perspective on a set of assumptions that are taken to be fixed (Camp 2009): as Wittgenstein says of Jastrow's duck-rabbit figure, "I see that it has not changed, and yet I see it differently" (1953, 193). Altering the intuitive prominence or centrality of a single feature can induce pervasive, complex alterations to the structural relations among other elements, "tipping" them into new clusters of explanatory and other dependence relations and new weightings of prominence. But the effects of applying a new frame can also extend beyond structural realignment, producing alterations in the significance of the basic features themselves.


13.2  Metaphors and Other Framing Devices in Ordinary Discourse In the previous section, I deployed Black’s central metaphor for metaphor as an etched smoked glass to explicate the idea of frames in general. Theorists who draw attention to the selective, interpretive, and imaginative aspects of scientific theorizing sometimes assimilate all frames into a single type. Thus, Mary Hesse appears to treat models, narratives, fictions, analogies, and metaphors as fundamentally equivalent when she writes that “scientific theories are models or narratives, initially freely imagined stories about the natural world, within a particular set of categories and presuppositions which depend on a relation of analogy with the real world as revealed by our perceptions” (1993, 51; emphasis in original). While I share Hesse’s emphasis on the role of imagination and presupposition in scientific theorizing, and while I agree that models, fictions, metaphors, and analogies all employ imagination and presupposition to frame their subjects, I reject the assumption that all scientific theorizing inherently involves modeling or framing in a substantive sense of the term. More important, I will argue that there are important differences among these various species of frame, and only some rely on analogy. In this section I identify some of these key differences, and argue that they matter to how different frames guide everyday cognition and communication. In section 13.3 I will apply these distinctions to a variety of scientific models.

13.2.1  Internal and External Frames While all frames provide overarching principles of interpretation for their target subjects, I take a crucial differentiating feature of metaphors to be that they frame their subjects in terms of something else (Camp 2003, 2006, 2008). Broadly, I advocate a story roughly along the lines of Black’s “interactionism.” A metaphor is a representation that triggers initial characterizations of both a subject, A, and a framing topic, F. Thus, in the canonical example, the sentence “Juliet is the sun” triggers characterizations of the subject, Juliet, and the frame, the sun. (Coextensive expressions—​e.g., “sweat” and “perspire”—​ may be associated with distinct characterizations, and the same lexical expression may trigger at least somewhat different characterizations in distinct conversational contexts.) The metaphor works by taking the most prominent

314  The Scientific Imagination and central features in the characterization of F and seeking matches to them within the characterization of A, for as long as interest warrants effort. Matched features are raised in prominence and centrality, producing a restructured characterization of A (and to a lesser extent of F). In certain circumstances, when it would be plausible for A to possess a feature f that could be matched to a prominent and central F-​feature, but where no f-​like feature is currently included in the A-​characterization, f may be introduced into the characterization of A. When a metaphor is employed assertorically, the speaker claims that A possesses those features that are most tightly matched to the most prominent and/​or central features of F. Not all frames work by matching features between distinct characterizations in this way. At the broadest level, we need to distinguish “external” frames, which include metaphor, analogy, similes, and paratactic juxtapositions, from “internal” frames, where the latter directly attribute a feature f to the subject A and raise that very feature to prominence and centrality within the A-​characterization. The simplest internal frame is the “telling detail” (Camp 2008), as vividly exemplified by classic cases of insinuation. So, for instance, the speaker who utters “Obama’s middle name is Hussein” overtly merely asserts a fact that is itself undeniable, but thereby implicates that Barack Obama instantiates a cloud of more sinister and more dubiously possessed features associated with a presupposed characterization of people named Hussein. Focusing on the name functions both to highlight some known but otherwise unnoticed features and also to suggest other, as yet unknown ones. While many insinuations are insidiously underhanded, invocations of telling details can be quite explicit. Thus, a primatologist might utter “Trump is a primate” and go on to detail just how Trump’s behavior can be explained and predicted by an analysis in terms of notable, relevant, and causally influential properties of primates, especially involving social dominance (Camp 2008). So although both metaphors and telling details provide interpretive frames, they do so in quite different ways. In particular, metaphors differ from telling details in operating “from the outside”: as we might put it, where telling details are interpretive keys inserted directly into the subject characterization, metaphors are colored telescopes. More specifically, for example, Romeo doesn’t ask us to focus on the proposition that Juliet is the sun or that she actually glows. Rather, as his subsequent paraphrase spells out, the sun’s luminosity is matched to the distinct feature of her (purported) beauty. Where the insinuating speaker of the telling detail attributes to Obama

the very features purportedly possessed by most people named Hussein—perhaps being foreign, dark-skinned, Muslim, and duplicitous—the features attributed by Romeo's metaphor are identified indirectly, by sharing relevant higher-order properties with features of the sun. Specifically, while both the sun's luminosity and Juliet's beauty are highly intense, the scale of intensity, the specific respect of intensity, and the operative comparison class are quite different in each case: the sun is brighter than the moon, Venus, or Saturn, while Juliet is more beautiful than Rosalind or any other Veronese girl. The sun's luminosity and Juliet's beauty also share other relevant features: both are natural, and a source of energy and life; both produce a feeling of warmth. Again, however, these common higher-level features are implemented in qualitatively different ways within the two domains, and it is this indirect structural match that leads us to notice and impute new features to Juliet—features that the sun itself does not possess, such as making the other girls of Verona jealous.

13.2.2  Metaphor and Fiction

I've argued that an internal frame structures its subject directly and "from inside," while an external frame like metaphor operates indirectly. So far, this might just seem like a new label for the old difference between being literally true or false: absent literal truth, at most indirect truth remains. Against this, I want to argue that some literally false frames are still internal, because they function in imagination as if they were true. In particular, I think just-so stories are fictions that function like telling details rather than metaphor (Camp 2009). So, for example, a speaker might say that Trump acts as if he was denied admission to Harvard and has been compensating ever since, while explicitly acknowledging that this is not true.2

2 Dan Evon, "Donald Trump's Harvard Rejection Letter," Snopes, August 18, 2016, www.snopes.com/donald-trumps-harvard-rejection-letter. Apocryphal facts are in effect just-so stories masquerading as telling details.

Intuitively, this speaker invites the hearer to pretend that Trump, in all his actual specificity—raised in Queens, having a real estate mogul father, and so on—really does possess the very feature of having been denied admission to Harvard, and to treat that possible-but-in-fact-unrealized feature as an imaginative key to unlocking what really matters about him. More generally, the hearer of a just-so story is asked to

316  The Scientific Imagination pretend that a fictional feature f is actually instantiated by and explanatorily central to A, and to restructure her overall characterization of A by introducing and elevating other features from the F-​characterization that A really would possess if it did actually instantiate f. Once this imaginative exercise is accomplished, the hearer drops the pretended ascription of f, leaving the characterization as close as possible to what it would be if A were in fact f. The contrast between fictional and metaphorical frames is clearest when a single sentence can be plausibly deployed in either way. Consider as an example “Jane is a nurse.” On the one hand, employing the sentence as a just-​ so story involves pretending that Jane really is a nurse. Here, what we might call the “direction of imaginative fit” is from the actual reality to an imagined possibility (Levin 1988): the interpreter starts with actual-​Jane and uses her as an imaginative prop to construct the fiction. This involves transforming Jane imaginatively in two ways: first, adding features that actual nurses do prominently possess (for instance, listening to multiple people’s symptoms, monitoring vital signs, administering medicine, perhaps being on call at inconvenient times, answering to imperious bosses, and juggling many patients), and second, downplaying features of actual-​Jane that conflict with these prominent and central nurse features (for instance, her actual incompetence with machines or the fact that she works regular business hours). Once this imaginative transformation is accomplished, the pretense that Jane really is a nurse is dropped, but the highlighted features remain prominent and central. Thus, a natural use for offering “Jane is a nurse” as a just-​so story might be to elucidate first-​order respects in which Jane’s job involves performing key functions of a nurse, even though she doesn’t have a BSN or RN. On the other hand, if the speaker employs the sentence as a metaphor, then interpretation begins with a characterization of nurses and seeks to identify respects in which Jane, as she already currently actually is, is nurse-​ like. Rather than directly attributing actual nurse features to an imaginatively transformed Jane, the interpreter of a metaphor reconstrues actual-​Jane in a nurse-​like way. As with Juliet, this focuses attention on actual current features of Jane’s that are not actually possessed by nurses but that share higher-​order structural similarities with prominent and central features in the stereotype of nurses. Plausible such features might then include consistently lending a sympathetic ear (but for friends rather than assigned patients), checking on those friends’ emotional and psychological well-​being (rather than their physical symptoms and statistics), or nudging them toward avenues of

Imaginative Frames for Scientific Inquiry  317 emotional and psychological improvement (rather than delivering pills and injections). In cases of escapist fiction, an imaginative “prop” like Jane is merely a springboard for make-​believe. Other fictions, such as just-​so stories, are “prop-​oriented” (Walton 1990): we engage in the pretense in order to learn something about the prop itself—​perhaps something about its counterfactual possibilities, or about what it’s actually like—​that makes it apt for serving as a prop in this pretense. In focusing imaginative attention on their props, just-​so stories are importantly like metaphors. Partly for this reason, Kendall Walton (1993) argues that metaphors are invitations to engage in prop-​ oriented make-​believe, by pretending that the subject possesses the feature explicitly mentioned in the metaphorical sentence (see also Hills 1997 and Yablo 2001). I agree that the two kinds of imagination overlap, and that many utterances invite a mixture of both modes of interpretation (Camp 2009). Both frames are indirect, in the sense that we imaginatively step away from our actual assumptions about A. And both are guided by our intuitive characterizations about A and Fs. However, as I’ve argued, there is an important difference between the two tropes. With a just-​so story, we temporarily transform the prop A into a counterfactual counterpart by imputing actual F-​features to A; only then do we consider what this reveals about A as it actually is. By contrast, with metaphor we hold our understanding of how A actually is as fixed as possible, and we match features of A and F that are merely similar. Because they differ in their direction and directness in this way, the two types of frames often end up highlighting and introducing different features within the ultimate characterizations of their subjects (Camp 2009).

13.2.3  Metaphor and Analogy In drawing the contrast between “external” and “internal” frames, I  have distinguished metaphors from telling details and just-​so stories, and emphasized that metaphors are indirect, relying on abstract structures of higher-​order similarities between distinct lower-​level features. This view is closely akin to Dedre Gentner’s “structure-​mapping” theory of analogy (e.g., Markman and Gentner 1993). In this section, I argue that metaphor differs from analogy in two important ways.

First, while both metaphors and analogies rely on abstract, higher-order similarities, metaphors also frequently employ qualitative matches between first-order features, often ones that are experientially rich and embodied (Lakoff and Johnson 1980). For instance, while the core match between the sun's luminosity and Juliet's beauty is a structural one, Romeo's metaphor also suggests that being near Juliet produces a physical feeling in him that is not just structurally but qualitatively similar to the glow produced by the sun on a warm spring day. Second, metaphors permit a looser preservation of structure in the mapping from framing to subject characterization. In analogy, potential matches that are not embedded within more complex structures tend to be ignored even if they are topically relevant (Gentner and Jeziorski 1993); by contrast, metaphors often happily permit isolated matches. Analogies also require consistency in mapping: the operative structure within the frame must be replicated in the subject for the analogy to be sound; and known, relevant failures of match compromise the analogy's plausibility. By contrast, metaphors can be quite unsystematic. For instance, Othello's description of Desdemona as "false as water" suggests myriad distinct respects in which Desdemona is deceptive: formless and unstable; running whichever way is easiest; reflecting whatever is around her; showing things within as different than they really are (as water does a bent stick); seemingly clear but potentially poisonous. These various matches don't align neatly with one another, but the lack of systematicity does not undermine the metaphor's effectiveness, since it suggests such a rich range of matches with robust affective and imagistic elements, which themselves constellate into a coherent overall characterization of Desdemona. Metaphors' greater permissiveness makes their interpretation more imaginatively intuitive and holistic. Rather than puzzling out a precise, consistent formal mapping between complex, abstract, articulate structures, we more often feel our way through tacit clusters of matches involving largely inchoate features at a variety of levels, drawing on images and attitudes, and coloring and connecting those features, along with other, unmatched features that intuitively "fit" with them. Individual matches that are especially relevant to current conversational or cognitive purposes leap to attention and motivate intuitively related matches, even if these are not connected to or even logically consistent with the initial match. And clusters of such matches reconfigure both subject and frame to motivate further matches, in a snowball

effect that can overwrite marked antecedent differences between the two characterizations that would stymie a logical analogy.

13.3  Metaphors and Other Frames in Scientific Inquiry In the previous sections, I have described framing devices in general and distinguished metaphor from three of its cousins—​telling details, just-​so stories, and analogies—​in terms of the direction, directness, level, and systematicity of imaginative fit between frame and subject. We can now examine how these differences play out in the scientific context and what their implications might be for models and modeling. As an initial point, although use of the term “model” is both varied and contentious, I think we can illuminate the utility and effects of many models by treating them as frames: representational vehicles that guide intuitive overall thinking about a target system by determining both what matters about that subject relative to a presupposed taxonomy and how those features that do matter are connected within an explanatory structure. Beyond this, our tour through various species of frame in the context of ordinary discourse puts us in a position to identify important sources of variation among scientific models, while illuminating their functional commonalities. In this section, I identify some important types of scientific frame, focusing on the different sorts of gap they assume between representation and reality and the different ways they bridge that gap.

13.3.1  Telling Details and Telling Instances Many scientific theories employ telling details:  they explain a complex phenomenon by treating a single feature, which is itself relatively uncontroversially true and also associated with a rich set of assumptions, as maximally explanatorily central. Differences in which details theorists take to be “telling” can produce pervasive, substantive differences of interpretation. So, for instance, Longino and Doell (1983) contrast androcentric and gynocentric theories of tool use in hunter-​gatherer societies within anthropology. Both theories agree that men hunted and women gathered, and both invoke tool use to explain the development of cognitive characteristics such as flexible intelligence and instrumental reasoning. But the two theories disagree

320  The Scientific Imagination structurally about which of these facts matter and which data exemplify more general, causally relevant patterns. While androcentric theories focus on hunting behavior and the relative efficacy of stone tools over sticks, gynocentric theories focus on the nutritional stresses of pregnancy and lactation and on the basic utility of sticks and reeds for digging, carrying, and food preparation. These different frames weigh additional data differently, generate different chronologies and causal histories, and implicitly (and sometimes explicitly) offer different predictions about, and affective and normative responses to, sex, tool use, and intelligence among contemporary humans. Insofar as the primary locus of disagreement is a higher-​order, interpretive one, it is difficult to adjudicate between the two theories directly at the level of demonstrable facts, because each theory has its own way of taxonomizing and explaining any given bit of information, and can dismiss distinct isolated chunks of (putative) data as mere anomalies or as true but marginal. Like the telling detail in everyday life, then, the “telling fact” in science takes a feature F that is uncontroversially assumed to be instantiated by a subject A and treats it as maximally prominent and central in theorizing about A, relying on an assumed background characterization of F. A closely related type of internal frame focuses directly on a single or limited class of instances—​a population of mice, say, or a patch of forest—​and treats that particular instance, a, as exemplary of a more general kind F. Catherine Elgin aptly calls such samples “telling instances,” and points out that they serve many of the functions I have identified for frames: the sample “exemplifies, highlights, displays or conveys the features or properties it is a sample of,” doing so in a richly context-​sensitive way, and thereby functions as “a symbol that refers to some of the properties it instantiates” (2006, 208). Both “telling facts” and “telling instances” focus on a feature that the target subject is presumed to actually possess, but they differ in their level and direction of interpretive attention. The telling fact operates at a theoretical level, by structuring the overall characterization of the target subject A (say, the evolution of tool use) in terms of a characterization of a fact f about it (say, that women used sticks to dig for roots). The core investigative work is interpretive, teasing out the theoretical consequences of taking this fact to be central for thinking about this subject. By contrast, the telling instance or sample is itself concrete, and investigation involves probing it directly, in concrete ways—​say, by feeding the mouse, or half of the mouse population, more saturated fat—​in order to discover more about what properties the instance itself actually possesses.

Second, the two types of telling frame differ in the direction of interpretive attention. In the case of taking early women's use of sticks to dig for roots as a telling fact, just as with the insinuation about Obama's middle name, the overall target subject A is framed by a characterization of a particular fact f, because f is emblematic of a larger constellation of (purported) facts, F. This involves making f itself prominent and central within the characterization of A, which in turn introduces or elevates further features f1, f2, f3 . . . that play a central and prominent role within F, and suggests causal connections between those F-features and further, non-F features within A. By contrast, with a telling instance, the focus of attention is directly on the particular sample, A itself, and investigation proceeds by observing and manipulating A. F does provide the frame for thinking about A, insofar as A matters only as an instance of the general kind F, so that assumptions and questions about F select only some of A's features as warranting attention in virtue of exemplifying F-features. Further, the ultimate goal is to "read back" relevant discovered features from A to other instances of F. However, the investigation proceeds by probing A itself, and using discoveries about A to understand F.

13.3.2  Abstraction and Idealization Both telling facts and telling instances are intuitively treated as true, in the basic sense that F does indeed apply to A. Some theorists, such as Hesse (1993) and Elgin (2006), reject this core intuition, because they take the selectivity inherent in all classification, and in modeling in particular, to render all theories and models literally false, or at least not true. All theories are fictions; some are merely more pragmatically efficacious than others. I agree that selection and abstraction play a pervasive role in science. Indeed, they are plausibly conditions on the very possibility of conceptual thought: applying a concept is a matter of classifying multiple entities together as alike in some respect, or the same entity as recurring on multiple occasions, both of which require abstracting away from differences between those distinct entities or occasions (Camp 2015). Further, we regularly criticize representers for inappropriate selectivity, either for ignoring features that are diagnostic relative to the representer’s own presupposed taxonomy, or because we take the taxonomy itself to falsely assume that certain kinds of features tend to cluster together or have certain causal effects. However, I  do not think that representational silence, in the form of selectivity or

abstraction, constitutes falsity. While speakers can mislead and be misinterpreted, a representation itself is only false if it positively represents a state of affairs as obtaining that does not.3

3 Speakers are especially likely to exploit, and insist on, the difference between active misrepresentation and mere non-representation in strategic conversational contexts (Camp 2018). Assessing falsity is more complex in the context of extended conversations, where representations are embedded within entailed structures of presupposition and relevance (Roberts 2012; Stokke 2016). To the extent that scientific theories (as opposed to inquiry) also exhibit discourse structure, the distinction between semantic falsity and pragmatic implication likewise becomes more complex.

Moreover, because assessment for truth can only take place against the background of a presupposed taxonomy, the very assumption of a taxonomy cannot itself be grounds for falsity, though it can constitute grounds for inappropriateness of some other variety. Insofar as abstraction does not introduce falsity, it differs from idealization. Both abstraction and idealization involve "imagining away" known facts that are assumed to be irrelevant (Godfrey-Smith 2009), either temporarily (say, in the service of practical tractability) or permanently (say, to isolate key causal factors) (Elliott-Graves and Weisberg 2014). But where abstraction engages in "mere omission" (Thomson-Jones 2005), by remaining silent about known features, idealization introduces distortion by imagining features that are known to have one value to have a different one, as when the amount of friction between an inclined plane and a rolling ball is imagined to be zero, or the number of possible mates in a population is imagined to be infinite. While some idealizations are straightforward, idealizing in one respect often affects the values of other, related features, in ways that are often not obvious to the interpreting agent. Thus, idealization both involves overt distortion and risks unrecognized distortion in ways that abstraction does not. The contrast between abstraction and idealization highlights the contrast between telling facts and telling instances. As Elgin emphasizes, treating a telling instance A as a sample of F employs abstraction in an inevitable and pervasive way: only a limited subset of A's features warrant investigation and are ultimately "read back" into the characterization of Fs; A's other features not only can but need to be ignored. The use of a telling instance as a model combines uneasily with idealization, however, because idealization involves imaginatively constructing an entity that differs from the actual target, and hence inherently shifts attention away from directly observing and probing the sample itself. By contrast, when telling facts are used as frames, this is fully compatible with both idealization and abstraction. So, for instance, both androcentric and gynocentric theories of the evolution of tool use might acknowledge that a strict segregation into male hunters and female gatherers is

an idealization from more fluid gender roles but still employ starkly differentiated "male" and "female" roles. And in implementing their contrasting frames, the two theories might each invoke highly idealized "agent-based models" that compute the long-term dynamic effects of repeated interactions between individuals who are defined by just a few gender-based traits. Thus, we see that even though telling facts and telling instances are internal, true frames, they differ substantively and systematically in how they connect to and depart from reality.

13.3.3  Fact and Fiction If idealization, unlike abstraction, introduces a form of known falsity, we might be tempted to infer that all idealizations are therefore fictions. Here again, I think we should resist assimilation to a single trope. The falsification introduced by idealization is still like abstraction in ignoring (purportedly) irrelevant complexities of the target subject, even if doing so involves known and unknown distortion. By contrast, fictions paradigmatically introduce features that are known not to apply. While the line between merely “smoothing out” irrelevant complexities and actively introducing alternative properties is not a sharp one, fictionalization involves both a more substantive qualitative departure from the subject’s assumed reality and a greater attention to the fictionalized subject in its own right. Maxwell’s demon provides an illustrative case of the difference. Prior to 1871, the second law of thermodynamics, that entropy in a closed system never decreases, had often been interpreted as an absolute law grounded in the nature of “caloric.” As a counterexample to such an interpretation and in support of the molecular theory of heat, Maxwell suggested that “we conceive a being” whose perceptual faculties are “so sharpened that he can follow every molecule in its course,” but “whose attributes are still as essentially finite as our own.” If this being were stationed at a door that divided a vessel into two chambers, he could produce a difference in the temperature of the chambers “without expenditure of work,” just by opening and closing the door to allow swift molecules to move into one chamber and slow molecules into the other. From the fact that this possibility is even coherent, Maxwell concluded that the second law holds only at a statistical level—​“as long as we can deal with bodies only in mass, and have no power of perceiving or handling the separate molecules of which they are made up” (Maxwell 1871,

338–339). Maxwell, then, asks his readers to imagine a scenario that is obviously false, but in (purportedly) merely contingent respects—the demon is just like us, shrunk to a molecular scale—in order to illustrate (contra the caloric theory) how a perpetual "heat engine" could be physically or metaphysically possible while still being extremely unlikely (Stuart 2016, 27). Unlike paradigmatic cases of idealization as ignoring or "imagining away," Maxwell's thought experiment directs investigative attention toward a situation that is overtly counterfactual. Much as with a just-so story, we are asked to imagine that this very situation is true just as described, in order to highlight other features that follow directly from the framing proposition but that are actually (purportedly) true. Assessing the fiction's aptness as a frame is thus a matter of determining two things: first, what is true within the fiction, given its operative "principles of generation" (Walton 1990); and second, whether the real world is indeed like the fiction in these unarticulated respects (Frigg 2010, 260). Subsequent discussion of Maxwell's demon has, for instance, challenged Maxwell's conclusion that the demon's operation of the door—or, more important, his measuring individual molecules' speed—does not itself constitute "expenditure of work," and hence whether his thought experiment does successfully demonstrate that actual thermodynamic systems are such that differences in entropy could arise as the result of a sequence of individual random molecular movements.

13.3.4  Metaphor and (or Versus) Analogy In effect, we have now seen that abstraction, idealization, and fictionalization involve successively greater departures from stating “the whole truth and nothing but the truth” about the target subject. But because telling instances, telling facts, and just-​so stories are all internal frames, all of these departures arise in the service of focusing attention on features that both the frame and the target (purportedly) actually instantiate. “External” frames such as metaphor and analogy take the further step of “telling the truth but telling it slant,” as Emily Dickinson puts it. In these cases, as I argued above, we do not pretend, even temporarily, that the world really is as the representation literally describes. Instead, we seek to identify relevant respects in which the target is like the frame, where the operative similarities may be not just highly selective but also indirect.

The history of competing models of atomic structure provides an illuminating case of the selective, indirect mapping employed by external frames, and their difference from fiction. A key problem for early atomic theory was how to reconcile the stability of atoms, which are neutrally charged, with the fact that their constituent electrons are negatively charged. Thomson's (1904) "plum pudding" model of the (hydrogen) atom achieved this reconciliation by embedding those electrons within a uniform sphere of positive charge, much as the batter for a Christmas pudding contains raisins. In understanding Thomson's model, we are not asked to pretend that atoms are bowls of raisin-studded pudding, in the way Maxwell asks us to pretend that two chambers contain a microscopic demon operating a tiny door. Rather, we are asked to posit, and treat as central, a sphere of positive electric charge that is like a bowl of pudding in the respect of functioning as a diffuse stabilizing medium. Rutherford's (1911) discovery of the existence of a small nucleus of intense positive charge falsified Thomson's "diffuse" model of positive charge and provided an empirical basis for the alternative model of an atomic core. It thereby provided support for Nagaoka's (1904) "Saturnian" model of electrons as akin to the rings around Saturn, which Nagaoka had proposed on distinct theoretical grounds based on the impenetrability of opposite charges. Bohr's (1913) "solar" model then extended and refined Nagaoka's Saturnian model by suggesting that the negative electrons orbit the massive positive core, just as the planets in the solar system revolve around the sun, and that electrons are attracted to the nucleus by electrostatic forces, akin to the sun's gravitational force. Bohr's model is a theoretical improvement in part because it subsumes the disparate empirical results that supported the earlier models into a single coherent model, and in part because it suggests a causal mechanism by which those effects are produced. In particular, shifting to the solar model introduces and explains the notion of an orbit as a discrete, stable path, where previous models were unable to explain either atomic stability or discreteness of energy levels. Thus, Bohr's model explains more prominent features of the target using fewer and more robustly explanatory central features. For all of these models of the atom, though, the mappings from frame to target are highly selective, abstract, and structural, in the manner characteristic of analogy (Gentner and Jeziorski 1993, 449). Bohr's model in particular identifies an identical higher-level relational feature, an attractive force causing rotation, which is instantiated by quite different lower-level features

As we saw in our discussion of metaphor and analogy in ordinary discourse, such selective focus on "common relational abstractions" (Gentner and Jeziorski 1993, 448), as opposed to lower-order shared features, differentiates both metaphor and analogy from fiction. A scientific fiction, as Elgin (2006, 16) says, "sheds light on the way the world actually is" by "exemplifying features that diverge (at most) negligibly from the phenomena it concerns." In this respect, Elgin argues, fictions are like samples—indeed, because she assimilates abstraction and idealization to fictionalization, she argues that samples, such as paint chips, are fictions. While I reject Elgin's assimilation, I agree that scientific fictions function like telling instances in drawing attention to features that really are exemplified in both the fiction and the actual world, or that diverge negligibly. By contrast, metaphors and analogies shed light on the world by exemplifying common structures that diverge substantively and relevantly in how they are implemented within frame and target.

The difference between fiction and metaphor or analogy is especially stark if we contrast Maxwell's original thought experiment with a subsequent metaphorical deployment of it. Pierre Bourdieu argues that the (French) educational system functions as an entropy-reversing mechanism that maintains social structures of "difference and order, which would otherwise tend to be annihilated," by sorting students at an individual level in terms of their possession of cultural capital (1998, 20). Bourdieu ignores Maxwell's ultimate point entirely: that the second law of thermodynamics does in fact hold at a global, statistical level because there actually is no demon. But his metaphor does identify a common structure that is (purportedly) shared by Maxwell's fictional situation and actual schools: that of an entropy-reversing and therefore "unnatural" mechanism that produces global effects by sorting individuals. However, as with Juliet and the sun, or the solar system and the atom, this common structure is implemented in very different ways in each case. And where Maxwell's fiction directs our attention toward the target phenomenon itself—the trajectory of the distribution of heat in a closed volume—and asks us to imagine something literal but counterfactual about it, Bourdieu applies that structure to a very different domain.

A proponent of assimilating metaphor, analogy, and fiction to a single interpretive trope might point out that analogy, and to a lesser degree metaphor, do present the frame and target as possessing identical higher-level features: in Bohr's model, an attractive rotation-causing force; in Bourdieu's metaphor, an entropy-reversing mechanism for sorting individuals. Given this, at a suitably high level of abstraction analogical and metaphorical frames do impute to the target features that are actually possessed by the framing subject—in just the same way as a just-so story imputes features possessed by the fictionalized subject to the target as it actually is. The proponent of a unified fictionalist account of scientific models might thus propose that any difference between metaphor and fiction is simply one of the level at which common features are imputed, rather than a difference between pretending that a nonfactual feature f really does apply in order to impute further features that would follow from f, on the one hand, and identifying matches between merely similar features, on the other.

Unsurprisingly, I want to reject this analysis: I think it distorts the real representational import of analogy and metaphor, in both everyday discourse and science. The claim made by a metaphor or analogy is not merely that the target is somehow like the frame in a common, highly abstract respect, but rather that the target possesses a substantive lower-level feature, one that is identified by way of its instantiating this higher-order feature. For instance, Romeo claims not just that Juliet is comparatively maximally intense relative to the other Veronese girls in some respect or other, but that she is more beautiful than them. In the context of science, we might put the point by saying that metaphors and analogies do not typically function as purely abstract models, akin to the Lotka-Volterra equations describing the effects of predator-prey dynamics on population distribution. Such abstract models prescind from messy detail in order to focus attention on general, structural features. By contrast, in metaphor and analogy, the shared high-level features warrant attention only instrumentally, as a means for identifying a more specific lower-level feature within the target. In a pedagogical context—for instance, when explaining electrical current by analogy to the flow of water through a pipe—the speaker will explicitly identify, or ask listeners to identify for themselves, those lower-level instantiating features. In a context of discovery, investigators employ the possibility of a structural match as a principle for investigating what lower-level features the target might possess. In both cases, the structural match focuses attention on basic-level features.
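For concreteness, here are the Lotka-Volterra equations mentioned above, in their standard textbook form; the notation is the conventional one (x for prey density, y for predator density, and positive rate constants α, β, δ, γ), not anything specific to this chapter:

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y.
\]

Nothing about any particular realizing system survives in the equations; only the abstract coupling between the two populations remains. That is the sense in which such a model is purely structural, whereas in metaphor and analogy the shared structure serves as a means of locating specific lower-level features in the target.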

So far, I've been emphasizing ways in which both metaphor and analogy differ from fiction, arguing that they involve a qualitatively greater gap between representation and reality than fiction, idealization, or abstraction, because they rely on indirect matches between what are conceived of as two distinct domains. But as we saw earlier in application to ordinary discourse, metaphor and analogy also differ from each other. In the context of science, Gentner and Jeziorski (1993) argue that contemporary scientific practice valorizes analogies, such as Bohr's solar model, over metaphors because analogies employ precise, consistent, systematic matches between complex, causally connected systems of features. Further, they claim that this valorization is distinctive to modern Western science. In particular, they argue that alchemists up through the sixteenth century were much more promiscuous in their invocation of similarity, happily citing base-level qualitative similarities, such as the yellowness of both the sun and gold or the whiteness of the moon and silver, and invoking multiple disconnected or even incompatible matches. The birth of modern science, they claim, arises in significant part because of this shift from promiscuous similarity to higher-order structural matching. The upshot is that metaphor in contemporary science is a poor cousin to analogy, as encapsulated by George Pólya's (1954) dictum: "And remember, do not neglect vague analogies. But if you wish them respectable, try to clarify them."

I have largely followed Gentner in emphasizing the ways that metaphor both approximates to and departs from analogy. Further, Gentner and Jeziorski's priority claim about modern scientific practice is right in several important respects. First, metaphors in science, in contrast to literature, are typically more analogy-like, emphasizing fewer, more consistent matches over richer, inconsistent ones—especially in the contexts of pedagogy and theoretical advocacy, which are the cases that Gentner and Jeziorski discuss almost exclusively. Further, it is widely agreed that at least one central aim of science is to develop a precise, articulate understanding of objects, properties, and their relations, and that to accomplish this, we need symbols whose interpretation is "univocal, determinate, and readily ascertained" (Elgin 2006, 212). Insofar as metaphors differ from analogies in relying on tacit, vague, and otherwise inarticulable intuitions of similarity, they are not representationally adequate as they stand. More substantively, some of the most influential modern scientific metaphors have aimed at identifying abstract, high-level properties, just as Gentner and Jeziorski predict.

To take a pair of apt examples, the computational model of mind and the code model of genetic potential both hypothesize key causal operations that are functionally analogous to the algorithmic execution of a computer program. One reason that both metaphors have been so theoretically and empirically productive is that they encourage a focus on structural relations while remaining fairly neutral about implementational mechanisms, leaving the connection between abstract functional role and underlying physical substrate to be forged only after each level is understood better in its own terms—a strategy that Pylyshyn (1993, 551) calls the "principle of least commitment" or "principle of procrastination."

Thus, Pólya's dictum about making vague analogies respectable by articulating precise structural relations is largely apt. However, this doesn't make metaphors into second-class versions of analogy, as Gentner and Jeziorski suggest. Rather, metaphors often play a theoretically and empirically fruitful role in scientific inquiry precisely because they stand in need of clarification: because they are inchoate, intuitive, and only partly consistent. As I argued earlier, metaphors' greater permissiveness engages imagination in a richer, more intuitive, and more flexible way. This means they can guide attention and suggest hypotheses in epistemic circumstances where a more precise structural analogy would be stymied. Early advocates of both a computational theory of mind and a code model of genetic potential lacked clear, coherent characterizations of both their target systems and framing subjects, since the notions of computation and coding were themselves still nascent. Indeed, as Fox Keller (1995) argues, conceptual and empirical developments within computation and genetics were mutually supporting, with each serving as a frame for the other domain. Thus, at the same time as the metaphor of genes as self-replicating machines drove theoretical, empirical, and technological developments in molecular biology, so did the metaphor of complex machines as organisms orient research within systems analysis and cybernetics, in turn reciprocally influencing theories of biological development and cellular coordination. In effect, each metaphor provided what Richard Boyd ([1979] 1993, 488) calls an "inductive open-endedness": it guided research by gesturing toward a range of possible matches that had not yet been fully articulated, let alone investigated. Metaphors such as mind as computer, genes as machines, and machine systems as organisms can play this sort of "programmatic research-orienting" role (Boyd [1979] 1993, 489) only because they lack the "univocal, determinate, and readily ascertained" interpretations of paradigmatic scientific symbols: they guide research by pointing to an indeterminate but bounded range of possible matches.

Gentner and Jeziorski's emphasis on "respectable" analogy in the explication and justification of contemporary scientific theories neglects the full, unruly, but ineliminable role of imagination in scientific inquiry.

Our earlier explication of framing devices puts us in a position to make this point about the utility of interpretive indeterminacy in a more precise way. Both the constituent elements and the organizational structure of characterizations are typically largely implicit and only partially subject to voluntary control. They are also highly dependent on context, with diagnosticity and centrality in particular depending on an agent's interests and goals. As a result, different scientists will often bring markedly different characterizations of, and perspectives on, their subjects to the interpretive table, especially at the beginning of inquiry. Further, even given a fixed pair of characterizations of both target and frame, there will nearly always be multiple plausible overall mappings available between them, which trade off preferences for systematicity against directness in matching, and preferences for identifying new features and connections against preserving already known ones, in different but equally legitimate ways. Beyond this, as we have also seen, frames do more than just interpret a fixed set of assumptions about their targets: they provide open-ended tools for assimilating new information and for generating hypotheses about undiscovered features and causal structures. Finally, in addition to all of these frame-internal factors contributing to interpretive indeterminacy, the actual application of any frame depends in deep, important ways on external factors, including on what alternative theories it is being compared to, and so what expressive and epistemic needs it distinctively addresses (Okruhlik 1994), as well as on its interaction with current technological opportunities and limitations (Fox Keller 1995).

Perhaps the best way to view the relationship between metaphor and analogy in much of contemporary scientific practice is to see metaphor as tracing a trajectory or "career" of precisification (Bowdle and Gentner 2005). This trajectory begins with an intuitive, holistic, and open-ended—and therefore diffuse and relatively unarticulated—mode of construing one subject in terms of something else, where one or both domains may be only minimally understood. It moves through a process of articulating, probing, and refining the characterizations of one or both domains and plausible matches between them. Ultimately, it settles into a more regimented, systematic, and selective analogical mapping. At that point, the analogy may remain as a useful pedagogical tool.

Alternatively, the interpretation of the framing term may have morphed so as to become literally applicable in a few restricted respects, as has arguably happened with both "computation" and gravitational "waves." Or the metaphor may be discarded. Perhaps, like the metaphor of evolution as climbing a ladder of sophistication, it turns out to be misleading, because it directs attention toward features that are not as central as once thought, or imputes features that are not possessed. Or perhaps it does identify features that are both prominent and central, but has become too dominant and literalistic in its application, leading to neglect of other important features. Perhaps the metaphor of natural language as a logical calculus fits this description (Camp 2015).

13.4  Models and Frames

Much current philosophical discussion about scientific models has focused on their ontological status—in particular, on whether models are abstract structures or hypothetical, typically uninstantiated concrete entities—and in turn on whether the representational relation between model and target is one of direct instantiation or a more indirect one of similarity in relevant respects (Frigg 2010; Giere 1988; Godfrey-Smith 2006, 2009; Weisberg 2012). I have focused on the apparently distinct topic of frames. Although I can't pretend to have surveyed, let alone explained, all the phenomena and functions of models and modeling in science, it does seem that models and frames share remarkably many features and are used for many common epistemic purposes. One benefit of an investigation of frames is that it helps to integrate the use of models in science more smoothly into a broader theory of interpretation, and thereby into a theory of cognition and communication, from which we can discern commonalities and differences between the use of models and other interpretive strategies within science, and between the practice and evaluation of those strategies in science and in everyday cognition and communication.

Specifically, I have argued that frames are representational vehicles that provide an overarching interpretive principle or perspective. All frames presuppose a taxonomy, which is necessarily selective and contrastive; all frames determine what matters about their subject, and how it matters, along at least the two dimensions of prominence and centrality; and all frames are intuitive and non-propositional, in the sense of actually implementing rather than merely representing those interpretive structures.

However, within this broad genus, different species of frames function quite differently. Frames themselves can be more or less articulated, abstract, idealized, detailed, and affectively and experientially loaded. Some, such as the Lotka-Volterra equations, express highly abstract structures that literally describe a few highly idealized features of the target domain; others, such as vials of water, constitute concrete exemplifications of their target subjects. Frames can also be more or less conventionally tied to their vehicles: some vehicles, such as the Lotka-Volterra equations, constitute explicit semantic specifications of the relevant structures, but in many cases, such as computational metaphors of mind, the connection is one of implicit, pragmatic association. Whatever the connection between the representational vehicle and framing principle, the interpreted representational vehicle generates a cognitive structure, which is then deployed as a principle for structuring one's overall understanding of the target.

The ensuing connection between frame and target can be more or less direct, more or less instrumental, and more or less systematic. Some frames, such as sex-based theories of the evolution of tool use, assimilate the frame's defining feature, and all or most of its subsidiary features, directly into the target subject. Others, such as Maxwell's demon, assimilate that feature directly but only temporarily, in order to highlight or introduce subsidiary features that the target really would have if the framing feature were actually possessed. Some frames, such as Bohr's solar model, export a selective, coherent structure from one domain to another; others, such as the computer model of gene reproduction, highlight, explain, and restructure features of the target by an indirect mapping that is at least initially inchoate and potentially inconsistent.

All of these forms of framing can naturally be described as employing models. But we miss important commonalities and differences if we focus narrowly on the representational entities that underwrite them. Attending to the practices and processes of modeling and framing affords a more perspicuous analysis (Godfrey-Smith 2006; Levy 2015). And a full understanding of those practices requires attending to the cognitive structures and operations that make them natural and effective for agents with minds like ours. I have argued that although the various species of framing direct imaginative attention at different levels and bridge the gap between representation and reality in different ways, they all employ a synthetic, restructuring imagination to achieve a unified, open-ended, intuitive construal of their targets.


Acknowledgments

Thanks for useful and enjoyable discussion to audiences at the philosophy departments at Indiana, Harvard, St. Andrews, and LOGOS Barcelona, at the Rutgers Philosophy of Science Reading Group, and at the Metaphors in Use Conference (Lehigh) and Varieties of Understanding Conference (Fordham). Individual thanks to Jordi Cat, Catherine Elgin, Peter Godfrey-Smith, Arnon Levy, Deborah Marber, Matthew Slater, Mike Stuart, Shuguo Tang, and Isaac Wilhelm for helpful discussion. Special thanks to Michael Weisberg for many illuminating conversations about models, metaphors, and science over multiple years. Finally, thanks to Stephen Laurence for drawing the especially elegant, easily reproducible version of Figure 13.1.

References

Belyaev, D. K. (1978). "Destabilization as a Factor in Domestication." Journal of Heredity 70: 301–308.
Black, M. (1954). "Metaphor." Proceedings of the Aristotelian Society 55: 273–294.
Bohr, N. (1913). "On the Constitution of Atoms and Molecules, Part I." Philosophical Magazine 26, no. 151: 1–24.
Bourdieu, P. (1998). Practical Reason: On the Theory of Action. Cambridge: Polity Press.
Bowdle, B., and Gentner, D. (2005). "The Career of Metaphor." Psychological Review 112, no. 1: 193–216.
Boyd, R. ([1979] 1993). "Metaphor and Theory Change: What Is 'Metaphor' a Metaphor For?" In Metaphor and Thought, 2nd ed., edited by A. Ortony, 481–532. Cambridge: Cambridge University Press.
Camp, E. (2003). "Saying and Seeing-As: The Linguistic Uses and Cognitive Effects of Metaphor." Ph.D. dissertation, University of California, Berkeley.
Camp, E. (2006). "Metaphor and That Certain 'Je Ne Sais Quoi.'" Philosophical Studies 129, no. 1: 1–25.
Camp, E. (2008). "Showing, Telling, and Seeing: Metaphor and 'Poetic' Language." Baltic International Yearbook of Cognition, Logic, and Communication 3: 1–24.
Camp, E. (2009). "Two Varieties of Literary Imagination: Metaphor, Fiction, and Thought Experiments." Midwest Studies in Philosophy 33: 107–130.
Camp, E. (2013). "Slurring Perspectives." Analytic Philosophy 54, no. 3: 330–349.
Camp, E. (2015). "Logical Concepts and Associative Characterizations." In The Conceptual Mind: New Directions in the Study of Concepts, edited by E. Margolis and S. Laurence, 591–621. Cambridge, MA: MIT Press.
Camp, E. (2017). "Perspectives in Imaginative Engagement with Fiction." Philosophical Perspectives: Philosophy of Mind 31, no. 1: 73–102.
Camp, E. (2018). "Insinuation, Common Ground, and the Conversational Record." In New Work in Speech Acts, edited by D. Harris, D. Fogal, and M. Moss, 40–66. Oxford: Oxford University Press.
Camp, E. (2019). "Perspectives and Frames in Pursuit of Ultimate Understanding." In Varieties of Understanding: New Perspectives from Philosophy, Psychology, and Theology, edited by S. Grimm, 17–46. Oxford: Oxford University Press.
Currie, G., and Ravenscroft, I. (2003). Recreative Minds: Imagination in Philosophy and Psychology. Oxford: Oxford University Press.
Diekman, A., Eagly, A., and Kulesa, P. (2002). "Accuracy and Bias in Stereotypes about the Social and Political Attitudes of Women and Men." Journal of Experimental Social Psychology 38: 268–282.
Elgin, C. (2006). "From Knowledge to Understanding." In Epistemology Futures, edited by S. Hetherington, 199–215. Oxford: Clarendon Press.
Elliott-Graves, A., and Weisberg, M. (2014). "Idealization." Philosophy Compass 9: 176–185.
Fox Keller, E. (1995). Refiguring Life: Metaphors of Twentieth Century Biology. New York: Columbia University Press.
Friend, S. (2008). "Imagining Fact and Fiction." In New Waves in Aesthetics, edited by K. Stock and K. Thomson-Jones, 150–169. London: Palgrave Macmillan.
Frigg, R. (2010). "Models and Fiction." Synthese 172: 251–268.
Gentner, D., and Jeziorski, M. (1993). "The Shift from Metaphor to Analogy in Western Science." In Metaphor and Thought, 2nd ed., edited by A. Ortony, 447–480. Cambridge: Cambridge University Press.
Giere, R. (1988). Explaining Science: A Cognitive Approach. Chicago: University of Chicago Press.
Godfrey-Smith, P. (2006). "The Strategy of Model-Based Science." Biology and Philosophy 21: 725–740.
Godfrey-Smith, P. (2009a). "Models and Fictions in Science." Philosophical Studies 143: 101–116.
Godfrey-Smith, P. (2009b). "Abstractions, Idealizations, and Evolutionary Biology." In Mapping the Future of Biology: Evolving Concepts and Theories, edited by A. Barberousse, M. Morange, and T. Pradeu, 47–56. Boston Studies in the Philosophy of Science. Dordrecht: Springer.
Hesse, M. (1993). "Models, Metaphors and Truth." In Knowledge and Language, vol. 3, Metaphor and Knowledge, edited by F. R. Ankersmit and J. J. A. Mooij, 49–66. Dordrecht: Springer.
Hills, D. (1997). "Aptness and Truth in Verbal Metaphor." Philosophical Topics 25, no. 1: 117–153.
Judd, C., and Park, B. (1993). "Definition and Assessment of Accuracy in Social Stereotypes." Psychological Review 100, no. 1: 109–128.
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Lakoff, G., and Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press.
Levin, S. (1988). Metaphoric Worlds: Conceptions of a Romantic Nature. New Haven, CT: Yale University Press.
Levy, A. (2015). "Modeling Without Models." Philosophical Studies 172: 781–798.
Longino, H., and Doell, D. (1983). "Body, Bias, and Behavior: A Comparative Analysis of Reasoning in Two Areas of Biological Science." Signs 9: 206–227.
Markman, A., and Gentner, D. (1993). "All Differences Are Not Created Equal: A Structural Alignment View of Similarity." In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society, 682–686. Boulder, CO: Cognitive Science Society.
Maxwell, J. C. (1871). Theory of Heat. London: Longmans, Green.
Murphy, G., and Medin, D. (1985). "The Role of Theories in Conceptual Coherence." Psychological Review 92: 289–316.
Nagaoka, H. (1904). "Kinetics of a System of Particles Illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity." Philosophical Magazine 6, no. 7: 445–455.
Okruhlik, K. (1994). "Gender and the Biological Sciences." Canadian Journal of Philosophy 20 (supp.): 21–42.
Peacocke, C. (1992). A Study of Concepts. Cambridge, MA: MIT Press.
Pólya, G. (1954). Mathematics and Plausible Reasoning, vol. 1, Induction and Analogy in Mathematics. Princeton: Princeton University Press.
Pylyshyn, Z. (1993). "Metaphorical Imprecision." In Metaphor and Thought, 2nd ed., edited by A. Ortony, 481–532. Cambridge: Cambridge University Press.
Quine, W. V. O. (1951). "Two Dogmas of Empiricism." Philosophical Review 60, no. 1: 20–43.
Roberts, C. (2012). "Information Structure in Discourse: Towards an Integrated Formal Theory of Pragmatics." Semantics and Pragmatics 5, no. 6: 1–69.
Rosch, E. (1978). "Principles of Categorization." In Cognition and Categorization, edited by E. Rosch and B. Lloyd, 27–48. Hillsdale, NJ: Lawrence Erlbaum.
Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom." Philosophical Magazine, 6th ser., 21: 669–688.
Ryan, C., Judd, B., and Park, B. (1996). "Effects of Racial Stereotypes on Judgments of Individuals: The Moderating Role of Perceived Group Variability." Journal of Experimental Social Psychology 32, no. 1: 91–103.
Sloman, S., Love, B., and Ahn, W.-K. (1998). "Feature Centrality and Conceptual Coherence." Cognitive Science 22, no. 2: 189–228.
Stokke, A. (2016). "Lying and Misleading in Discourse." Philosophical Review 125, no. 1: 83–134.
Strevens, M. (2008). Depth: An Account of Scientific Explanation. Cambridge, MA: Harvard University Press.
Stuart, M. (2016). "Taming Theory with Thought Experiments: Understanding and Scientific Progress." Studies in History and Philosophy of Science 58: 24–33.
Thagard, P. (1989). "Explanatory Coherence." Behavioral and Brain Sciences 12: 435–502.
Thomson, J. J. (1904). "On the Structure of the Atom: An Investigation of the Stability and Periods of Oscillation of a Number of Corpuscles Arranged at Equal Intervals Around the Circumference of a Circle; with Application of the Results to the Theory of Atomic Structure." Philosophical Magazine 7, no. 39: 237–265.
Thomson-Jones, M. (2005). "Idealization and Abstraction: A Framework." In Idealization XII: Correcting the Model, edited by M. Thomson-Jones and N. Cartwright, 173–217. Amsterdam: Rodopi.
Trut, L. (1999). "Early Canid Domestication: The Farm-Fox Experiment." American Scientist 87: 160–169.
Tversky, A. (1977). "Features of Similarity." Psychological Review 84: 327–352.
Walton, K. (1990). Mimesis as Make-Believe: On the Foundations of the Representational Arts. Oxford: Oxford University Press.
Walton, K. (1993). "Metaphor and Prop-Oriented Make-Believe." European Journal of Philosophy 1, no. 1: 39–57.
Weisberg, M. (2012). Simulation and Similarity: Using Models to Understand the World. Oxford: Oxford University Press.
Wittgenstein, L. (1953). Philosophical Investigations. Translated by G. E. M. Anscombe. Oxford: Blackwell.
Woodfield, A. (1991). "Conceptions." Mind 100, no. 399: 547–572.
Woodward, J. (2003). Making Things Happen: A Theory of Causal Explanation. Oxford: Oxford University Press.
Yablo, S. (2001). "Go Figure: A Path Through Fictionalism." Midwest Studies in Philosophy 25: 72–102.

Index

For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages. Figures are indicated by f following the page number.

1984 (Orwell), 109 analogies frames and, 313, 317 metaphors and, 317, 319, 324 Aristotle, 28–29, 133, 181–82, 266–67 Arló-Costa, H., 34 Barsalou, L.W., 29, 38 Bascandziev, I., 273–76 Bayes's theorem, 332 Bentham, J., 4 Black, M., 306–7, 311, 313–14 Black, T., 238 Boden, M.A., 182 Bohr, N. approximation in the development of an atomic model by, 132–33, 172 internal versus external perspectives on atomic model of, 105–6, 119–20 on limitations of visual conceptions, 18 Rutherford atomic model as predecessor of, 63 "solar model" of the atom devised by, 325–29 theoretical nature of the atomic model of, 111, 116, 132–33 Bokulich, A., 123, 124–25, 172, 281–82 Bourdieu, P., 326 Boyd, R., 329–30 Boyle's law, 298, 300 Burgess, J., 174 Byrne, R., 34, 182, 257–58 Callander, C., 283–84 Cartwright, N., 3–4, 51–52, 102

Chi, M.T., 242 Clatterbuck, H., 239n9, 240–​42 Clement, J., 231, 232, 237 Cohen, J., 283–​84 computational models abstraction and, 214–​20, 225–​26, 228–​29 algorithms and, 214 causal structures and, 210–​11, 212, 213,  228–​29 computational structures and, 211 computer-​based modeling and, 211–​12 construals and, 219, 220 encapsulation and, 217 genetic drift example and, 223 inclusive thesis regarding, 217–​18 lexical scoping and, 221, 226–​27 mathematical models compared to, 220, 221,  228–​29 narrative description of models compared to, 210 parallelization and, 222 probabilistic simulations and, 222,  224–​25 problems of variation and, 227–​28 reductive thesis regarding, 218–​19 scheduling within, 221 Schelling’s segregation study as example of, 212–​13, 213f, 220, 222, 226 trajectory spaces and, 210–​11 computer-​based modeling abstract particulars and, 132–​33 computational models and, 211–​12 conditionals and, 166 mathematics and, 211 mechanisms simulated through, 180, 201, 202–​3, 207

computer-based modeling (cont.) model systems and target systems compared in, 107 problems of interpretation regarding, 158 conditionals approximation and, 168–72, 173–74 computer-based modeling and, 166 counterfactual conditionals and, 167, 168, 169, 170–71, 172 indicative conditionals and, 167 material conditionals and, 167, 170 mathematics and, 171, 174 models and, 166–72, 173–74, 175 psychological approach to, 166 scientific discovery and, 154–55 subjunctive conditionals and, 167, 168, 169, 173–74, 175 Contessa, G., 140 Craver, C.F., 178–79 Critique of Pure Reason (Kant) and, 181–82 Currie, G., 36 Darden, L., 178–79 De Anima (Aristotle), 181–82 diagrams glyphs and, 182–83, 186–88, 190–92, 194–96 mechanisms and, 179–80, 183–87, 185–202f, 192–93, 194–96, 205–6 mental animation of, 198–99, 204–5 spatial nature of, 182–83, 194–97, 204–5, 204f Dickinson, E., 324 Doell, D., 319 Edgeworth, F.Y., 17 Egré, P., 34 Einstein, A., 17, 230, 246 Elgin, C., 320, 321, 322–23, 326 Everett, A., 117–18 fiction anti-realist approaches to, 52, 53, 56, 58, 103, 108, 118–19 artifactualist approaches to, 53, 66–67, 69–70, 72–73, 119, 162

external realism and, 119–​21 fictionalizing discourse and, 53–​54, 60, 62, 70 “fictional truth” (Walton) and, 35–​36,  140n14 frames and, 323 make-​believe and, 35, 56, 58–​60, 108–​9,  162 Meinongian (realist) approaches to, 52, 53, 54–​55, 56–​57, 58, 103–​4, 108, 115 metaphors and, 315, 326 missing system modeling and, 77, 80n11,  81–​95 models and, 51–​53, 57, 63, 64–​66, 72–​73, 102–​5, 112–​13, 121, 155–​56, 160, 210, 227, 244 ontology and, 116, 160 principles of export and, 123–​24 principles of generation and, 112–​13,  117 pure pretense theories of, 52–​53, 56–​57, 62,  69–​70 realist (Meinongian) approaches to, 52, 53, 54–​55, 56–​57, 58, 103–​4, 108, 115 scientific discovery and, 154, 157 semantics and, 85–​86 type realism and, 118–​19 Fiction and Metaphysics (Thomasson), 85,  89–​91 Field, H.H., 129, 135–​36 Fine, A., 155 Fisher, R., 143–​44 Fodor, G.A., 28–​29 Fox Keller, E., 328–​29 frames abstraction and, 321, 323, 332 analogy and, 313, 317 atomic structure example and, 325–​27 definitions of, 293, 307 emphasis of highly specific and prominent features through, 308–​10, 311, 331 external versus internal frames and, 313, 315, 317, 324 falsity and, 321–​23 fiction and, 323, 326 Gestalts and, 311, 312 idealization and, 321, 323, 332

just-so stories and, 307–8, 315–17, 319, 326–27 metaphors as, 293, 294, 313–15, 317, 318, 319 models as, 305, 313, 319, 331 perspectives and, 305, 307–8, 331 scientific inquiry and, 319 sex-based theories of tool use as example of, 319, 322–23, 332 slurs and, 307–8, 312 structuring of intuitive thinking through, 308–9, 331 taxonomy and, 308–9, 321–22, 331 telling details and, 307–8, 314–15, 317, 319 Frege, G., 28–29, 128 Frigg, R. anti-realism of, 61–62, 63–64, 81–82n17, 81–82 on the Fibonacci model, 106–7 on the fiction view of model systems, 102 on make-believe approach to models, 130, 161 on models and targets, 114, 139, 158, 163–64 model systems/model descriptions distinction and, 65 ontological commitments and, 70–71 pure pretense theories and, 69–70 on the stability of Newton's model of the solar system, 105, 112–13 on translations between model systems and targets, 123 on varieties of models, 129–30 Walton's theory of fiction and, 52, 57 Galileo, 20–23, 24, 37–38, 39–40, 44, 138, 156, 237, 246 Gaut, B., 27 Gendler, T.S. learning by thinking and, 232–33, 237, 239–40 on thought experiments, 19, 24, 25 Gentner, D., 313–14, 317–18, 327–30 Gestalts, 311, 312 Giere, R. missing system modeling and, 83–84

on models and description-​fitting objects, 55, 107, 118–​19 models’ difference from fiction emphasized by, 3–​4, 160 theoretical hypothesis and, 158 Godfrey-​Smith,  P. Dewey inductions and, 240–​41 on the epistemology of models, 115 fictionalist approaches to models and, 51–​52,  130 identification of models with mathematical objects rejected by,  104–​5 on the indirect nature of modeling, 107 missing system modeling and, 83 on models and target systems, 114 on models and uninstantiated properties,  69–​70 on models of memory in cognitive science, 106 on resemblance relations in fiction, 107 on science and imagination, 67–​68 on science’s study of fictional objects, 62 Gopnik, A., 166   Hájek, A., 170–​71 Harris, P.L., 273–​75 Hartmann, S., 63–​64 Hegarty, M., 198f, 198–99 Hempel, C., 280, 286, 301–​2 Hesse, M., 304, 313, 321 Hobbes, T., 304 Hood, B., 266 Hughes, R.I.G., 134 Husserl, T., 181–​82   Ichikawa, J., 34–​35 imagination amodal symbols and, 29, 38, 39f analogical theory and, 28, 38 attitudinal elements of, 25–​26, 29–​30 children and, 269 cognitive science perspectives on, 6 constraints on, 181, 206, 253–​58,  259–​60 counterfactuals and, 33–​34, 40–​44 creativity and, 19–​20, 181–​82, 186, 206 definitions of, 5, 251

imagination (cont.) etymology of, 18 external representations and, 179–80, 204, 205 fictivity and, 181–82, 186, 189, 192, 193–94, 205–6 freedom within, 181–82, 187, 190, 192, 193–94, 196–97, 203–4, 205–6 imagination de se and, 36 mechanisms and, 178–79, 180, 183–98, 205–8 metaphors and, 292–93, 295, 313, 318–19 modal symbols and, 29, 38, 39f models and, 17–18, 111, 155–56, 157, 175, 292–93, 295, 313 movement between reality and, 258 objectual imagination and, 25–26, 27, 38–40 propositional imagination and, 24, 25–26, 28–29, 30–35, 38, 46, 292–93 science and, 250–59 thought experiments and, 156, 251 visualization and, 18–19, 27–30, 46, 181–83, 185–86, 197, 205–6, 292–93 inference to the best explanation, 235, 236, 300 Ishiura, M., 183–84, 185f Iwasaki, H., 187–88, 188f Jeziorski, M., 327–30 Joh, A.S., 264–65, 272–73, 274–75 Jolley, C.C., 201–4, 202–4f "just-so stories," 307–8, 315–17, 319, 326–27 Kaiser, M.K., 265–66 Kant, I., 157, 312 Kekulé, A., 27, 34 Kelly, K., 241–42 Kepler, J., 138, 142 Kment, B., 43–44 Kondo, T., 187–89, 188f Kosslyn, S.M., 28–29 Kratzer, A., 150 Kripke, S., 53 Kuhn, T., 2, 157, 231, 268, 304

Lakoff, G., 293–​94 Lane, J., 271–​72 learning by thinking argumentation and, 230–​31, 232, 236, 237, 239–​40,  245–​46 constraints and, 233, 234–​36, 238–​39 Dewey inductions and, 240–​41 epistemic role of, 230–​31, 239, 246 explaining for the best inference and, 244 false conclusions and, 240–​42, 245 inference to the best explanation and, 235, 236 intuitions apprehended via imagistic simulations and, 237 learning by explanation and, 233–​36, 238–​39, 241–​44,  246 learning by observation compared to, 230 learning from models and, 245 learning from testimony compared to, 230 motor and perceptual simulations and, 238 thought experiments and, 230–​31, 232, 237,  240–​41 Levy, A. on abstract artifacts, 164 anti-​realist accounts of models and, 61,  81–​82 direct representation in modeling and,  110–​11 on indirect realism, 244 on “modeling as metaphor,” 244–​45 on models and approximate truths, 165 on models and make-​believe, 164,  165–​66 on models and partial truths, 115, 165n9, 165, 171–​72 on models and targets, 111 on models as rules for the imagination, 131 on models’ ontology, 164–​65 Lewis, D.K. models as concrete entities and, 159 possible worlds and, 154, 159 realist approach to models and, 116 semantic analysis of counterfactuals and, 33, 34, 43 on subject matter, 145

Locke, J., 304 Lombrozo, T., 234, 238–39, 243–44 Lotka-Volterra model abstract nature of, 105, 110, 114, 115–16, 120–21, 327 explicit semantic specifications in, 332 predator-prey system illustrated by, 114, 120, 225–26 as set of equations, 113–14, 122, 132–33, 219 Mach, E., 232 Mackie, J., 170–71 make-believe fiction and, 35, 56, 58–60, 108–9, 162 models and, 110, 161, 164, 165–66, 171–72 principles of generation and, 161 propositional imagination and, 35, 40–41, 44–45 mathematics computer-based modeling and, 211 conditionals and, 171, 174 epistemic contexts and, 137 expressive power and, 128, 151 fictionalism and, 130, 227 figuralism and, 130 instrumentalism and, 129 nominalism and, 135, 151 nominal structuralism and, 173–74, 175 nomological contexts and, 136 objects as a source of constraint in, 162–63 physics and, 128 Platonism and, 172, 173 reification and, 174 representational devices and, 137 selection effect and, 129 structuralism and, 130, 173–74, 175 "unreasonable effectiveness" and, 155–56 Matthewson, J., 210 Maxwell, J.C. demon thought experiment of, 323–25, 326, 332 imagination as tool in scientific discovery for, 4, 17, 156

Maynard Smith, J., 17, 106, 110 McCloskey, M., 265–​66 McMullin, E., 172 mechanisms computer-​based modeling and, 180, 201, 202–​3, 207 diagrams and, 179–​80, 183–​87, 185–​202f, 193, 194–​96, 205–6 how-​possibly versus how-​actually explanations of, 178–​79, 205–​6 imagination and, 178–​79, 180, 183–​98,  205–​8 inference strategies and, 178 schemas and, 178 metaphors analogies and, 317, 319, 324 biological information conveyed in, 280, 281, 289, 296, 298–​99, 300 Black’s description of, 306–​7, 311,  313–​14 constraints on, 318–​19 explanatory power of, 280–​81, 289, 291,  297–​98 familiar mixed with unfamiliar in, 292, 293,  295–​96 fiction and, 315, 326 as frames, 293, 294, 313–​15, 317, 318, 319 imagination and, 292–​93, 295, 313,  318–​19 inference to the best explanation and, 300 internal/​external distinction regarding, 299–​300 Juliet example and, 292, 313–​15, 318, 327 lack of systematicity of, 318–​19 models and, 244–​45, 281, 294 networks formed by, 293–​94 potential to mislead with, 300–​1 prominent and distinct features compared in, 313–​15, 318 scientific inquiry and, 319, 328–​29 surrogative representation and, 294–​96 “telling details” compared to, 307 Meynell, L., 45 missing systems abstract artifacts and, 81–​95

missing systems (cont.) anti-realist accounts of, 81 artifactual approach to, 67–68 concrete systems modeled as, 76 de re/de dicto distinction and, 95 epistemology and, 80 fictional approach to modeling, 77, 80n11, 81–95 imagination and, 94–95 indirect nature of modeling and, 83–85, 90–91, 96–97 knowledge problems and, 77, 78n9, 78, 88–89 metaphysics and, 94 ontology and, 79 realist accounts of, 81–85, 90–91 semantics and, 76–78, 80, 86–88 target systems modeled as, 76, 84–85, 95–96 untargeted missing systems and, 76 use problem regarding, 77, 78, 89 models. See also specific types of models anti-realist views of, 103, 111 approximate truths and, 165, 168–72, 173–74 artifactualist approaches to, 53, 67–73, 159, 162–64 concrete particulars and, 132–33 conditionals and, 166–72, 173–74, 175 counterfactuals and, 40–44 declarative truth and, 133–34 description-fitting objects and, 55–56 directed truths and, 149 direct representation and, 110–11, 123 epistemology and, 103, 104, 106, 107–8, 112, 115–17, 120, 121, 124–25 external realism and, 120–21 face-value practice of, 105, 106, 175 fictionalism and, 130 fictional realist approaches to, 115 fiction and, 51–53, 57, 63, 64–66, 72–73, 102–5, 112–13, 121, 155–56, 160, 210, 227, 244 frames and, 305, 313, 319, 331 gedanken experiments and, 22–23 heuristics and, 156 imagination and, 17–18, 111, 155–56, 157, 175, 292–93, 295, 313

indirect representation and, 83–​85, 90–​91, 96–​97, 107,  165–​66 instrumentalism and, 129–​30 internal versus external perspectives on,  105–​6 interpretation and, 134, 158–​59 make-​believe and, 110, 161, 164, 165–​66,  171–​72 “Malileo” example regarding law of equal heights and, 21–​22, 39–​40, 44 material model/​theoretical model distinction and, 105, 110–​11 mental modeling and, 24–​25, 38–​39 metaphors and, 244–​45, 281, 294 metaphysical commitments and, 61–​62 model systems/​model descriptions distinction and, 51–​52, 65 object-​talk and, 162–​63, 174 ontology and, 61–​62, 65–​66, 70, 103–​5, 106, 108, 121, 132–​33, 139, 158–​59, 160, 162, 164–​65, 305 partial truths and, 165, 171–​72 partitions and, 146 Platonist approaches to, 173–​74 principles of export and, 124 principles of generation and, 112–​13, 117, 120, 121–​23, 124–​25 propositional imagination and, 19, 40–​46 pure pretense theories of, 52–​53, 57, 61, 68, 70, 161, 163 realist views of, 103, 112 representational models and, 131–​33 scale models and, 132–​33 selection effect and, 129 semantics and, 108, 114, 124, 134 semantic truths and, 133–​34 subject matter and, 145 supposition and, 31, 40–​41 surrogative representation and, 294–​96 targets and, 63–​64, 67–​68, 107–​8, 112, 114, 120, 124–​25, 131, 132–​33, 134–​39, 149, 158–​59, 163–​64, 165–​66, 171–​72, 210–​11,  212 thought experiments and, 19, 22–​23, 24–​25,  38–​39 translation and, 139–​40, 141, 151 verbal narratives describing, 210 visualization and, 18–​19

"Models and Fictions" (Frigg), 163 Morgan, M.S., 122, 131–32 Mori, T., 190–92, 191f Morrison, M., 131–32 Morton, A., 140 Nagaoka, H., 325 Nagel, E., 80–81 Nakajima, M., 188–89 Nersessian, N.J., 19, 24–25, 38–40 Newton, I. on real numbers, 174 simple explanations favored by, 241 solar system model of, 105, 110–11, 112–13 thought experiments of, 17–18 Norton, J., 19, 23–24, 37, 231–32, 239–40 Paddock, M.L., 193–94, 195f Parsons, T., 54–55, 116 Peacocke, C., 32 Piaget, J., 262 Platonism, 162–63, 172, 173–74, 269 Poincaré, H., 76 Pólya, G., 327–29 Popper, K., 2 propositional imagination conditional excluded middle and, 42–43 counterfactual reasoning and, 32, 40–44 dreams and, 34, 40–41 epistemic purpose and, 32 freedom and, 30, 38 make-believe and, 35, 40–41, 44–45 minimal core of propositional imagination and, 30–32, 34, 38, 41 mirroring and, 30–32, 38, 41 quarantining and, 31–32, 38 rational thinking and, 32 supposition and, 31, 40–41 Putnam, H., 128 Pylyshyn, Z., 304, 328–29 Quine, W.V.O., 71–72, 154, 304 Ramsey, F., 154, 157 Roca-Royes, S., 43–44 Rust, M.J., 192–93, 199, 200f, 201, 204–5

Rutherford, E., 325 Ryle, G., 28–​29   Salmon, N., 53, 95 Sartre, J.P., 181–​82 Schelling model as computational model, 212–​13 computer simulation of, 213f housing preferences grid model of, 143 lexical scoping and, 226–​27 parallel process in, 221, 222 Schiffer, S., 53, 66–​67 Schwartz, D.L., 238 science definition of, 250 explanation in, 287 hypothesis formation and, 252, 253, 254, 259 imagination and, CROSS objectual understanding and, 281 predictions and, 252–​53 scientific modeling and (see models) Science Without Numbers (Field), 135 Searle, J., 53 Simulation and Similarity (Weisberg), 210, 219 slurs, 307–​8, 312 “Speaking of Fictional Characters” (Thomasson), 89–​91,  92–​93 Stalnaker, R., 33, 34, 42–​43, 170–​71 Strawson, P.F., 181–​82 The Structure of Scientific Revolutions (Kuhn), 2 Suárez, M., 172   Teller, P., 118–​19 “telling details,” 307–​8, 314–​15, 317, 319 Thomasson, A. on abstract artifacts, 119, 159, 162n8, 162,  163–​64 on make-​believe and modeling, 161 missing system models and, 82, 85, 89–​91, 92–​93,  97–​98 on object-​talk, 162, 174 on ontology of models, 163–​65 Thomson, J.J., 325 Thomson-​Jones, M., 3, 55, 67–​68, 104–​5

thought experiments argumentation and, 231–33 children's engagement in, 262–77, 264–74f counterfactuals and, 37, 40–44, 251, 252 definitions of, 17–18, 267–68 Dewey inductions and, 240–41 Einstein and, 230 elimination thesis regarding, 23, 37 epistemology and, 232–33 executive function and, 275–76 false conclusions and, 240–41 Galileo's law of equal heights and, 20–23, 24, 37–38, 39–40, 44, 156, 237 gravity error example and, 272, 274f imagination and, 156, 251 justificatory powers of, 269 Kuhn and, 268–69 learning by imaginative thinking and, 230–31, 232, 237, 240–41 learning by observation and, 232 mental modeling and, 24–25, 38–39 objectual imagination and, 38–40 propositional imagination and, 19, 37–38, 40–46 reconstruction thesis regarding, 23, 37 scientific models and, 19, 22–23 supposition and, 31, 40–41 visualization and, 18–19, 24–25, 38–41 Tomita, J., 188–89 Toon, A. anti-realist account of models and, 61, 63–64, 81–82 direct representation in modeling and, 110–11 model systems/model descriptions distinction and, 65 on the ontology of models, 164–65 on theoretical models, 110–11

Walton’s theory of fiction and, 52, 57,  64–​65 Turing, A., 17 Tversky, B., 309   Vaihinger, H., 4, 80–​81, 102 van Fraassen, B.C., 80–​81, 145 van Inwagen, P., 85 Vico, G., 304 Volterra, V., 156. See also Lotka-​Volterra  model von Neumann, J., 224   Walker, C.M., 164, 235 Walton, K.L. anti-​realism of, 61, 81–​82 on “fictional truth,” 35–​36, 140n14 “games of pretense” in fiction and, 36, 52, 57, 58–​59, 60, 64–​65, 108–​9, 130, 161 on metaphors and prop-​oriented make-​believe,  317 ontological commitments and, 70–​71 on prescription in fiction, 113 on principles of generation, 112 on spontaneity of dreams, 34–​35 War and Peace (Tolstoy), 109 Ward, T., 256–​57 Weisberg, M., 83, 156 “Why Scientific Models Should Not Be Regarded as Works of Fiction” (Giere), 160 Wiener, N., 301 Wigner, E., 128, 155–​56 Wilkenfeld, D., 243–​44, 281–​82 Williams, J.J., 234 Williamson, T., 32–​33 Wittgenstein, L., 28–​29, 312   Yablo, S., 27, 115   Zalta, E., 54–​55