Higher-Order Evidence and Moral Epistemology

Table of contents:
Cover
Half Title
Series Page
Title
Copyright
Contents
Acknowledgements
Change in Moral View: Higher-Order Evidence and Moral Epistemology
PART I Higher-Order Evidence Against Morality
1 Evolutionary Debunking, Self-Defeat and All the Evidence
2 Moral Intuitions Between Higher-Order Evidence and Wishful Thinking
3 Debunking Objective Consequentialism: The Challenge of Knowledge-Centric Anti-Luck Epistemology
4 Disagreement, Indirect Defeat, and Higher-Order Evidence
PART II Rebutting Higher-Order Evidence Against Morality
5 Higher-Order Defeat in Realist Moral Epistemology
6 Moral Peer Disagreement and the Limits of Higher-Order Evidence
7 Debunking Skepticism
PART III Broader Implications of Higher-Order Evidence in Moral Epistemology
8 Moral Testimony as Higher-Order Evidence
9 Higher-Order Defeat in Collective Moral Epistemology
10 The Fragile Epistemology of Fanaticism
PART IV Permissible Epistemic Attitudes in Response to Higher-Order Evidence in Moral Epistemology
11 How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement
12 Epistemic Non-factualism and Methodology
List of Contributors
Index

Higher-Order Evidence and Moral Epistemology

This book offers a systematic look at current challenges in moral epistemology through the lens of research on higher-order evidence. Fueled by recent advances in empirical research, higher-order evidence has generated a wealth of insights about the genealogy of moral beliefs. Higher-Order Evidence and Moral Epistemology explores how these insights have an impact on the epistemic status of moral beliefs. The essays are divided into four thematic sections. Part I addresses the normative significance of higher-order evidence for moral epistemology. Part II covers the sources of higher-order evidence in moral epistemology, such as disagreement and moral testimony, for both individuals and groups. The essays in Part III discuss permissible epistemic attitudes regarding a body of moral evidence, including the question of how to determine the permissibility of such attitudes. Finally, Part IV examines the relevance of higher-order evidence for phenomena of practical concern, such as fundamentalist views about moral matters.

This volume is the first to explicitly address the implications of higher-order evidence in moral epistemology. It will be of interest to researchers and advanced graduate students working in epistemology and metaethics.

Michael Klenk works at the intersection of metaethics, epistemology, and moral psychology. He has published papers on these topics in Synthese, Ratio, the Journal of Ethics and Social Philosophy, and the Pacific Philosophical Quarterly, among others. He works at Delft University of Technology and has held visiting positions at St. Gallen and Stanford University.

Routledge Studies in Epistemology
Edited by Kevin McCain, University of Alabama at Birmingham, USA, and Scott Stapleford, St. Thomas University, Canada

Pragmatic Encroachment in Epistemology
Edited by Brian Kim and Matthew McGrath

New Issues in Epistemological Disjunctivism
Edited by Casey Doyle, Joseph Milburn, and Duncan Pritchard

Knowing and Checking: An Epistemological Investigation
Guido Melchior

Well-Founded Belief: New Essays on the Epistemic Basing Relation
Edited by J. Adam Carter and Patrick Bondy

Higher-Order Evidence and Moral Epistemology
Edited by Michael Klenk

For more information about this series, please visit: www.routledge.com/Routledge-Studies-in-Epistemology/book-series/RSIE

Higher-Order Evidence and Moral Epistemology

Edited by Michael Klenk

First published 2019 by Routledge, 52 Vanderbilt Avenue, New York, NY 10017, and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2020 Taylor & Francis

The right of the editor to be identified as the author of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested

ISBN: 978-0-367-34320-0 (hbk)
ISBN: 978-0-429-32532-8 (ebk)

Typeset in Sabon by Apex CoVantage, LLC

Contents

Acknowledgements  vii

Change in Moral View: Higher-Order Evidence and Moral Epistemology  1
Michael Klenk

PART I: Higher-Order Evidence Against Morality  29

1 Evolutionary Debunking, Self-Defeat and All the Evidence  31
Silvan Wittwer

2 Moral Intuitions Between Higher-Order Evidence and Wishful Thinking  54
Norbert Paulo

3 Debunking Objective Consequentialism: The Challenge of Knowledge-Centric Anti-Luck Epistemology  78
Paul Silva

4 Disagreement, Indirect Defeat, and Higher-Order Evidence  97
Olle Risberg and Folke Tersman

PART II: Rebutting Higher-Order Evidence Against Morality  115

5 Higher-Order Defeat in Realist Moral Epistemology  117
Brian C. Barnett

6 Moral Peer Disagreement and the Limits of Higher-Order Evidence  136
Marco Tiozzo

7 Debunking Skepticism  155
Michael Huemer

PART III: Broader Implications of Higher-Order Evidence in Moral Epistemology  177

8 Moral Testimony as Higher-Order Evidence  179
Marcus Lee, Neil Sinclair, and Jon Robson

9 Higher-Order Defeat in Collective Moral Epistemology  198
J. Adam Carter and Dario Mortini

10 The Fragile Epistemology of Fanaticism  217
Joshua DiPaolo

PART IV: Permissible Epistemic Attitudes in Response to Higher-Order Evidence in Moral Epistemology  237

11 How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement  239
Margaret Greta Turnbull and Eric Sampson

12 Epistemic Non-factualism and Methodology  256
Justin Clarke-Doane

List of Contributors  265
Index  268

Acknowledgements
Michael Klenk

I’d like to express my gratitude to my contributors. I know that the schedule was tight, and I am delighted that you persevered. Each author also agreed to read at least one other contribution and provided valuable feedback. Thanks to Andrew Weckenmann, editor at Routledge, for encouraging me to pursue this project and for helpful suggestions during the proposal process. Thanks, too, go to Alexandra Simmons, editorial assistant at Routledge, for her competent management of the production process. I am also grateful to the Dutch Organisation for Scientific Research (NWO) and Herman Philipse, who supported me in beginning this project after defending my PhD at Utrecht University, when I had some time remaining on my PhD contract. Thanks also go to my colleagues who created a supportive and inspiring environment at Delft University of Technology, where I completed the book manuscript, first as a lecturer and then as a postdoc in Ibo van de Poel’s project on value change, which received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 788321). I finalised the book when I was a visiting postdoctoral researcher at Stanford University. My work was generously supported by a Niels Stensen Fellowship.

Introduction
Change in Moral View: Higher-Order Evidence and Moral Epistemology
Michael Klenk

1 Introduction

In this introductory chapter, I first explain the issues addressed in this book based on an example of everyday moral thought: how should we change our moral views, such as the conviction that eating meat is impermissible, in response to higher-order evidence – that is, evidence about our evidence? My aim is to make vivid how moral life is permeated with the issues discussed in this book. I then introduce the two philosophical debates that inform this question in Sections 3 and 4 and point out their connection in Section 5. My aim in these sections is to introduce and connect the central positions in the philosophical debate and to show how approaching moral epistemology from the higher-order-evidence perspective is expedient and fruitful. I then present the purpose and content of the volume in Section 6.

2 Change in Moral View

A good friend of mine is a vegetarian. He believes that it is morally wrong to eat meat because it is wrong to harm sentient beings such as fish, chicken, pigs, and cows. He holds this moral conviction based on sound evidence: he knows that animals can feel pain, he has the intuition that harming sentient beings is wrong, and the arguments in Peter Singer’s Animal Liberation (1975) seem sound to him.

Recently, my friend read in a newspaper that animal agriculture has a significant impact on climate change. He also thinks that we have a moral obligation to protect our planet, so the newspaper article is more evidence for him that eating meat is morally wrong: according to the article, eating meat perpetuates anthropogenic climate change, thereby harming our planet, and since we have an obligation to protect our planet, we should not eat meat. Intuitively, my friend should therefore be more confident in his moral view about eating meat.

Many epistemologists embrace the view that we are rationally required to believe what our evidence supports. Generally speaking, any factor that
makes it more probable that certain states of affairs obtain (or do not obtain), such as the impermissibility of eating meat, is evidence (that those states of affairs obtain). Insofar as there is a sense in which moral beliefs (or moral views – I use these terms interchangeably) can be true, we can thus speak of evidence in support of a moral view. What counts as evidence in support of a moral view, of course, depends on what makes the particular moral view likely to be true. Descriptive information (e.g. about the fact that animals feel pain) as well as (considered) moral intuitions and moral emotions are often taken as evidence in this context. Through the newspaper article, my friend encountered more evidence in favour of the wrongness of eating meat, and consequently, it would be rational for him to be more convinced of his moral view against eating meat. Evidence can thus provide more support and therefore make it rationally required to endorse one’s moral views more strongly. Evidence may also take away support for one’s moral views, in which case one is rationally required to lower one’s confidence in one’s moral view.

Suppose you do not agree with my friend and instead believe that eating meat is morally permissible. My friend might explain to you in gruesome detail how some animals die in factory farms, describing their shrieks and their anguish, and you might also then have the moral intuition that harming animals is wrong. Moral intuitions are an example of first-order evidence for our moral views. First-order evidence bears directly on the truth of a proposition at the object level; that is, in our example, first-order evidence bears on how probable it is that eating meat is morally permissible. You would now have evidence against your moral view. You should weigh it against the relevant evidence you already possess, and then you should endorse what your evidence most strongly supports. Plausibly, your overall confidence that eating meat is permissible now has less evidential support, and consequently, you should now be less confident that eating meat is permissible.

This process is simple. However, the relationship between evidence and rational moral belief becomes more complicated when you consider new information about your evidence. For example, suppose my friend tells you that you just believe that eating animals is permissible because you grew up in a culture where almost everyone thinks this, so you were never confronted with good evidence to the contrary, or that you just believe it because your naturally evolved moral intuitions have resulted in an inhibition against harming human animals, but not other animals. Alternatively, my friend might point out that you enjoy eating steaks and that therefore your subconscious manufactures an appropriate rationalisation so that you can keep doing what you enjoy. Or perhaps you are simply too callous and ignorant to grasp the impermissibility of eating meat. Or he might also emphasise that he, someone who has carefully considered the question whether eating animals is morally permissible, has arrived at a different conclusion. Should you change your moral view?

These are claims about what we might call genealogies of your moral beliefs: they concern the circumstances in which you formed your moral beliefs (e.g. the culture in which you grew up), the grounds on which you formed them (e.g. your desires or your ignorance), your way of interpreting the grounds (e.g. differently from someone else, such as my friend), or their causes (e.g. your naturally evolved intuitions). Genealogical claims may provide higher-order evidence – that is, evidence about the character of the evidence itself – or about your capacities and dispositions for responding rationally to your evidence (Feldman 2005; Christensen 2010; Lasonen-Aarnio 2014; Kelly 2005, 2010, 2016).

Moral philosophers have been concerned with such genealogical claims. In particular, many have sought to debunk moral beliefs – that is, to show that they are less justified, or not justified at all, in light of the genealogical information, as I will show in more detail in Section 4. Although it is not immediately apparent how genealogical claims have a bearing on whether it is permissible to eat meat, claims of that kind strike many philosophers as relevant to your moral outlook (e.g. Street 2006; Joyce 2016b; see Wielenberg 2016 for an overview), and therefore, they are relevant to whether you should maintain your conviction about eating meat. But how precisely should you change your moral view in response to such claims? How does higher-order evidence matter in moral epistemology?

The issue quickly gets complicated here. First, we have what I will term the classification problem. The classification problem is the problem of finding out whether a given piece of information counts as evidence or not (and thus makes some states of affairs more or less probable). Some genealogical claims are relevant to what one is rationally required to believe about moral matters, and thus, they are higher-order evidence for moral truths. Such claims seem to affect the rationality or justification of one’s beliefs at least indirectly. After all, it is evidence about how good your evidence is or how well you interpreted it, which is relevant to whether or not your belief is true, even though it does not directly concern whether some moral claim is true. But clearly not all genealogical claims are relevant to what one is required to believe. Unsupported confabulations do not matter. One must at least have sufficient reason to think that genealogical claims about a (set of) belief(s) could be true for them to affect what one is rationally required to change about the affected (set of) belief(s). For example, the claim that you never looked at good evidence about the permissibility of eating meat in the first place strikes me as relevant for what you are rationally required to believe about eating meat, but only insofar as the claim is (likely to be) accurate. If you are reasonably sure that the claim is false, then it should not play a role in how you change your moral view about eating meat. Moreover, even among genealogical claims that are likely to be true, only some of them should affect what we are rationally required to believe. For example, consider the claim that you formed
your beliefs about the permissibility of eating meat on a Tuesday. That is a genealogical claim, and it may well be true, but I imagine that you would not change your view on eating meat because of it – and you should not, because weekdays do not affect the truth of moral beliefs about the permissibility of eating meat. So when can we count genealogical information as higher-order evidence instead of as mere higher-order information? Intuitively, there needs to be some “connection” between the genealogical claim and the truth of the belief that it is about for the genealogical claim to count as evidence. The question, of course, is what that “connection” should be. These observations verge on the trivial. However, as I will show in Section 4, attempts to explicate the nature of that connection in the moral case are controversial, and yet they can be understood as (inexplicit) attempts to solve the classification problem.

Second, the classification problem is closely related to what I will term the accounting problem, which concerns a more fundamental puzzle about the proper relation between higher- and lower-order evidence. It helps to illustrate the accounting problem with a non-moral case. Let us consider Watson and Holmes. Suppose Watson correctly reasoned from his available first-order evidence (e.g. a bloody knife) that the butler committed the murder. But Holmes, the best criminal investigator of all time, tells Watson that he reasoned irrationally. Watson now has both first-order and higher-order evidence, and he must take account of what his total evidence supports. So even though he may have solved the classification problem, the accounting problem still looms.

Likewise in the moral case. Higher- and lower-order evidence add up to one’s total evidence (e.g. your original evidence in support of the permissibility of eating meat plus the genealogical information provided by my friend). What view does one’s total evidence support? That is, in forming a moral view, how should one account for one’s total evidence? One option is to ignore the higher-order evidence. For example, you could maintain that eating meat is permissible and continue to believe that your view is based on good evidence. But in that situation, one would not only irrationally ignore a part of one’s evidence (i.e. one’s higher-order evidence), but this stance would also seem uncomfortably dogmatic, because one would not seem open to changing one’s moral view. Alternatively, one could let one’s higher-order evidence override one’s first-order evidence. For example, you could be less confident that eating meat is permissible, and you could also be less confident that your view is based on good evidence. But then you would just have irrationally ignored a different part of your evidence, and if your higher-order evidence is misleading, then your altered moral view would make you epistemically and morally worse off (because you gave up a rational belief in response to misleading higher-order evidence). Misleading higher-order evidence suggests that one’s (moral) view fails to be rational, even though one’s (moral) view is, in fact, rational. Finally, you could somehow try to weigh up your first-order and your higher-order evidence. If you do that,
however, it would seem that you failed to fully “respect” both types of evidence (cf. Feldman 2005). So it seems as if there is no fully satisfactory way to change one’s moral view in response to higher-order evidence and thus, in other words, no fully satisfactory way to solve the accounting problem.

This book looks at questions of moral epistemology through the lens of recent research on higher-order evidence. It thereby aims to break new ground in debates concerning debunking arguments in ethics (cf. Sauer 2018) and the epistemic significance of moral disagreement (cf. McGrath 2008) and to show how research on higher-order evidence has a bearing on debates about moral testimony, collective moral knowledge, and the truth of substantive moral theories such as consequentialism. In the rest of the Introduction, I will first explain the accounting problem in more detail and will then reveal the prominent, though hitherto mostly unacknowledged, role of questions about the epistemic significance of higher-order evidence for debates in moral epistemology, before I summarise the insights provided by the 12 contributions to this volume.

3 The Accounting Problem: Puzzles About Higher-Order Evidence

The philosophical debate about higher-order evidence focuses predominantly on two questions, which will be relevant in the volume. First, there is the fundamental question of determining what one’s total evidence supports when one receives higher-order evidence. Second, there is a closely related question about the level of impact that higher-order evidence has. In this section, I will briefly introduce the debate concerning both questions so that we have a good grasp of the current debate about higher-order evidence. It will then become clear in the next section how these debates bear on metaethics.

Most epistemologists accept that higher-order evidence is normatively significant – that is, it has a bearing on what a rational thinker should believe. All chapters in this volume presuppose this claim, and for good reasons. First, it would appear dogmatic to ignore higher-order evidence. Suppose that you get higher-order evidence that your reasoning from evidence E (e.g. your moral intuition that eating meat is morally wrong) to judgement B is erroneous. You would have to think as follows to be dogmatic. Given E, B. But given the higher-order evidence H, your reasoning from E to B might be erroneous. If you simply assert that E really supports B and thus conclude that H cannot be true and that therefore B, you would be dogmatic. In effect, you have reasoned your way to B by relying on B.

Second, we are fallible creatures, and it is plausible that even beliefs about necessary truths are prone to error, and thus, there must be situations in which we have reason to revise such judgements, even when we
are correct. For example, even after a competent deduction, there will be cases where we have reason to change our views. Regular undercutting defeat is often characterised as the phenomenon of showing that an evidential relation that holds between E and H in circumstances C does not hold now insofar as C does not obtain (cf. Pollock and Cruz 1999). For example, suppose you are on a factory visit, looking at seemingly red wedges on a conveyor belt. In normal circumstances, you can trust your sight, so the fact that the wedges seem red to you is evidence that they are. But the factory foreperson’s claim that the wedges are illuminated by red light undercuts your justification for thinking so. So in typical cases of undercutting defeat, an evidential relation that holds in normal circumstances is revealed to be missing in the actual circumstances. But if one has competently worked out a proof that the evidential relation between E and H necessarily holds, undercutting defeat may not be able to explain a change in a moral view about necessary truths. Higher-order defeat, in contrast to regular undercutting defeat, suggests that one has taken an evidential relation to obtain that never obtained; it is “revisionary” in this sense (Lasonen-Aarnio 2014). So the normative significance of higher-order evidence may help to explain why we ought to change our views even about necessary truths.

Thus, higher-order evidence about morality should, somehow, affect our moral views. But how, exactly? The two views that have most commonly been defended in the literature are the conciliatory view and the steadfast view. Both will play a role in what follows.

According to the conciliatory view, one should respond to (negative) higher-order evidence by becoming less confident in one’s original view and less confident that one’s first-order evidence supports one’s view. The position developed from considerations of the significance of peer disagreement (Christensen 2007, 2009; Matheson 2009). However, its scope can be broadened to encompass appropriate reactions to higher-order evidence in general. Conciliatory views are partly motivated by the arguments supporting the normative significance of higher-order evidence introduced earlier. They also accord with the intuition that ignoring higher-order evidence is dogmatic and therefore inappropriate. This intuition seems particularly strong when one considers dogmatic reactions to peer disagreement.

According to the steadfast view, one should respond to (negative) higher-order evidence by maintaining one’s confidence in one’s original view, and one should also remain confident that one’s first-order evidence supports one’s view. Several views have been proposed to deal with this problem, and as several contributions to this volume show, adopting or strengthening one of these views can have implications for the debunking arguments mentioned earlier.

Kelly (2010) suggests that both the steadfast view and the conciliatory view are onto something. The conciliatory view is correct in suggesting
that what is rational to believe about our evidence is not wholly independent of what is rational to believe about the world. Learning that someone like Peter Singer believes that eating meat is wrong may be a reason to think that you assessed the evidence incorrectly. The steadfast view correctly shows, argues Kelly, that if something is genuinely good evidence for a given conviction, then that fact itself will contribute to the epistemic justification for believing that conviction (Kelly 2010, 159). Kelly goes on to defend what he terms the total evidence view. According to this view, the evaluation of one’s total evidence (e.g. your moral intuition about harming animals and the information about, say, the meat-eating culture in which you grew up) will depend on the strength of the higher-order and the lower-order evidence.

As Tiozzo (2019, ch. 2) points out, Kelly’s view appears to have the advantages of both conciliatory and steadfast views, but it also has their downsides. By aggregating the evidence to find a suitable compromise, it seems that one will give neither the higher-order evidence nor the first-order evidence its full due.

Another way to look at the significance of higher-order evidence is in terms of the interaction of levels of belief. The conciliatory, steadfast, and total evidence views all agree that levels of belief are connected; that is, they agree that there is a close connection between what is rational to believe about some proposition p and what is rational to believe about one’s evidence for p. In contrast, the proponents of level-splitting views maintain that one’s total evidence sometimes rationally requires divergent beliefs at different levels (Coates 2012; Hazlett 2012; Lasonen-Aarnio 2014; Weatherson n.d.).

Level-splitting views offer a way of dealing with misleading higher-order evidence. Misleading higher-order evidence misleads you about your first-order beliefs. For example, suppose you correctly believe that eating meat is permissible because of my friend’s testimony. When a seemingly reliable source then tells you that my friend is a liar, you would be rationally required to be less confident about the permissibility of eating meat. But, clearly, by connecting the impact of your higher-order beliefs about what your evidence supports to your first-order beliefs, you have put yourself in a worse position epistemically and morally. Level splitting can avoid this problem. You could correctly maintain that eating meat is permissible while also maintaining that you don’t have good evidence in support of this view. As we will see later, the level-splitting view is of interest in the metaethical debate because it offers a way for new information about the bad grounds for our moral beliefs to leave intact our justification for our first-order moral beliefs.

The main problem for level-splitting views, which, at the same time, is the main motivation for level-connecting views, is a concern with akratic beliefs. On a level-splitting view, one ends up believing things like “p but my evidence does not support p” or “p but I am not sure whether it is
rational to believe that p.” Such akratic beliefs, or Moorean propositions, are widely considered to be irrational (cf. Horowitz 2014; Adler 2002). Moreover, the level-splitting view also entails that bootstrapping is permissible (Sliwa and Horowitz 2015, 2848).

Thus, there is a fundamental question about what attitudes or beliefs one’s total evidence supports when higher-order evidence is in the picture. The accounting problem is perfectly general; it arises in all areas in which one strives for rational belief. But there is special reason to attend to the accounting problem in the case of moral epistemology, as the next section shows.

4 The Classification Problem: Genealogy and Moral Philosophy

Several prominent metaethical debates turn on the question of how to react to genealogical information. In this section, I aim at showing that these debates can usefully be thought of as being (implicitly) concerned with the classification problem.

Let us first consider some of the sources of genealogical claims of relevance for moral epistemology. According to a long tradition in the humanities, which rose to prominence with Freudian psychoanalysis, subconscious processes influence and control our thoughts and behaviour (cf. Leiter 2004). Of course, we would need to be much more precise about this claim to properly assess its truth. The basic thought behind it, however, is clear and sufficient for the purposes of this chapter: the introspective seeming that we are “in control” of our thoughts and behaviour in rational ways is a chimera, wrong in many ways. An anecdote by the philosopher G.A. Cohen (2001) illustrates this thought nicely. Cohen relates how he chose Oxford over Harvard for graduate school and later realised that his fellow Oxonians generally accepted the analytic/synthetic distinction, while Harvard students generally rejected it. But both groups were confronted with the same arguments. Apparently, Cohen surmised, the respective environment explained the philosophical views that his peers and he adopted. Cohen’s anecdote illustrates how even our considered philosophical views might be significantly influenced by seemingly irrelevant factors beyond rational control (see Vavova 2018; Sher 2001; White 2010).

From a different perspective, evolutionary biologists and evolutionary psychologists argue that the capacity for moral thought and behaviour (e.g. Tomasello 2016; Baumard 2016) as well as the content of some of our moral beliefs have evolutionary origins (cf. Barkow, Cosmides, and Tooby 1995). For example, the philosopher Sharon Street (2006) took up the latter position and argued that some of our fundamental moral intuitions, such as the intuition that if doing something would promote the interests of a family member then that is a reason to do it, are best explained by the content of
moral intuitions being ultimately determined by natural selection (2006, 115). Organisms that promoted the interests of their kin had comparative advantages over organisms that evaluated the world differently, as explained by the theory of kin selection, so evolution explains well the content of some of our moral beliefs (Street 2006; see also Joyce 2006; see Buchanan and Powell 2015 for criticism).

With a view to the proximate causes of moral beliefs, neuroscientists (cf. Liao 2016), experimental moral psychologists (cf. Doris 2010), and social psychologists (cf. Forgas, Jussim, and van Lange 2016), among others, have revealed a host of seemingly morally irrelevant situational influences on our moral judgements. For example, the philosopher and psychologist Joshua Greene argued that fMRI evidence suggests that moral judgements of a particular type, namely characteristically deontological moral judgements, have their origin in brain regions typically associated with emotional processing that get triggered by situations that are “up close and personal” (Greene et al. 2001, 2106; Greene 2008; see Kahane et al. 2015 for criticism).

The list of descriptive, genealogical accounts of moral judgement could be continued. The pertinent metaethical question, however, is this: what can we learn philosophically from such genealogical findings? The answers to that question given by moral philosophers can be grouped into three categories: some suggested that genealogical accounts of our moral beliefs have implications for what we ought to do (that is, they have first-order, normative implications); others claimed that they have implications for what there is (that is, they have metaphysical implications about the nature of morality); and others have argued that genealogical claims have implications for what we ought to believe (that is, they have epistemological implications).

However, the former two claims can be subsumed under the latter, at least if interpreted in a plausible way. If genealogical claims have implications for what we ought to do, then they must do so by “trickling down” to our first-order moral beliefs from our higher-order beliefs about our moral beliefs. For example, they might imply that we are rationally required to believe that we do not know that eating meat is permissible, which might require us to abstain from eating meat. If genealogical claims have metaphysical implications, then their metaphysical implications are plausibly only indirect, in the sense that they determine what kinds of existential claims we are justified in making. For example, genealogical claims might imply that some notable fact, such as that some moral beliefs are extremely widespread or that others are extremely disputed, is best explained by a theory that does not appeal to moral facts, so that we might lose our justification for postulating the existence of moral facts. The important point is that, in each case, genealogical claims about our moral beliefs are taken to have implications for what we ought
to believe. Thus, such arguments can be subsumed under the following genealogical schema:

Genealogical Schema
Empirical premise: Beliefs of a set, M, have genealogy G (they are caused by process x, formed in environment y, or influenced by factor z).
Epistemological premise: If M-beliefs have genealogy G, then we are rationally required to change our M-beliefs in a way w.
Conclusion: So we are rationally required to change our M-beliefs in a way w.

The genealogical schema provides a generic answer to our question about the philosophical relevance of genealogical claims about moral beliefs. It says that we are rationally required to somehow change (a subset of) our moral beliefs because of their genealogy. How precisely should we change them? The most common interpretation of “change in a way w” has been to suggest that some genealogy “should reduce or nullify” our confidence in the respective moral beliefs, though an increase in confidence is equally possible.

Arguments that fall within the genealogical schema have been tremendously influential in moral philosophy, as the following sketches illustrate. Montaigne (1877) and, more recently and with greater sophistication, Wong (2006) argue that most of our moral beliefs are heavily influenced by culture and that we therefore cannot rationally maintain that these beliefs are true independently of culture. Ruse and Wilson (1986) argued that all our moral beliefs are ultimately caused by evolutionary pressures and that we therefore cannot rationally maintain that these beliefs are true independently of our human nature. In recent metaethics, Greene (2008) argues that characteristically deontological moral beliefs are caused by processes that do not work reliably in our current environments and that therefore deontological beliefs are unreliable. Street (2006) and Joyce (2006) argue that the evolutionary origin of our moral beliefs implies that we cannot have moral knowledge, at least if robust versions of moral realism are true. Tersman (2006), among others, has wondered whether widespread moral disagreement implies that at least some of us formed our moral beliefs in a problematic way such that we might lose our justification for maintaining them.

If it is correct that these different debates fit the genealogical schema, then we can make an interesting observation. The genealogical schema raises the classification problem when we consider the soundness of the epistemological premise: why should a given genealogy, G, have a bearing on whether we are rationally required to change our moral beliefs in some
particular way? Thus, several central debates in recent moral epistemology rely on an answer to the question of when new information provides evidence concerning our higher-order moral beliefs (e.g. the belief that our moral beliefs are based on good evidence). Interpreting the debate about genealogical explanations in moral philosophy as asking for a solution to the classification problem will help us to see how these different scientific approaches to morality are ultimately connected on the epistemological level.

Thus, why do any of these claims bear on the rationality of our higher-order moral beliefs (if they do)? First, as we have seen in Section 2, we should be reasonably confident that such genealogical claims are true, so the ongoing debates about their truth are crucial to moral epistemologists interested in the actual truth and justification of our moral beliefs. Nonetheless, even a true genealogical claim does not per se bear on the rationality of our moral beliefs: true genealogical claims are not sufficient for a warranted change in moral view. There needs to be some “connection” between the higher-order information and moral beliefs. We need to ascertain whether these claims, if true, bear on whether our higher-order views (e.g. “my moral beliefs are based on good evidence”) are warranted. Therefore, to assess the soundness of any argument that is an instance of the genealogical schema, we need a solution to the classification problem. Once we have solved the classification problem, we should be able to tell which genealogical accounts classify as higher-order evidence, as having a bearing on the rationality of our beliefs about our moral evidence. Once we have solved the accounting problem, we should be able to tell how that bears on the rationality of our moral beliefs themselves. This is what this book aims to contribute to.

5 Connecting the Classification Problem With the Accounting Problem

The classification problem and the accounting problem are connected in moral epistemology. I will focus on five issues in recent moral epistemology that connect genealogical claims about morality with the classification problem and the accounting problem. The metaethical debate on genealogical information, in the debates about evolutionary debunking arguments and moral disagreement in particular, has increasingly gone epistemological in the sense that its participants have more explicitly begun to interpret these debates as concerning questions about what we should take as higher-order evidence concerning our moral beliefs (and why).

First, can we classify genealogical information about our moral beliefs as higher-order evidence? That is, do genealogical claims have a bearing on whether our moral beliefs are rationally formed? That is the classification problem. According to at least one interpretation, debunking
arguments in moral philosophy provide us with evidence of error. In the characteristic language used by some proponents of debunking arguments, some influences seem to take our moral beliefs “off track” with regard to moral truth (cf. Street 2006). If genealogical claims are evidence that our moral beliefs are unlikely to be true, then they classify as bona fide higher-order evidence. However, several scholars have noted that targeting all moral beliefs by way of such an argument runs the risk of self-defeat (e.g. Klenk 2017; Vavova 2018; Rini 2016). That is because, presumably, we can know that all our moral beliefs are unlikely to be true only by presupposing some moral truths, which is illegitimate in an attempt to defeat all moral beliefs. However, that is the case only if one accepts a conciliatory view on the accounting problem (as Wittwer discusses in this volume). Apart from aiming to defeat all moral beliefs, there is an open question regarding whether genealogical claims might be evidence of error for a subset of our moral beliefs, which leaves open the possibilities for answering the classification problem at this level (Sauer 2018; May 2018).

A second, closely related debate depends on solving the accounting problem. Even if some genealogical claim is legitimate evidence of error concerning (a subset of) our moral beliefs, it is possible to provide additional evidence that our moral beliefs are likely to be true to counter the evidence of error. Providing such evidence requires assumptions about the moral truth. However, for such a strategy to work, one must violate a variant of what is known as the independence principle in the higher-order evidence debate (cf. Christensen 2010, forthcoming). The thought behind the independence principle is as follows. Suppose that you receive evidence that your reasoning in support of your belief that p was faulty. According to the independence principle, you ought to reason about whether p without taking into account the evidence that led you to believe that p in the first place. If violating the independence principle is permissible, then debunking arguments can probably be disproved (cf. Klenk 2018c). At the same time, if the independence principle is false, then conciliatory solutions to the accounting problem lose some of their support. Thus, these questions about the classification problem in metaethics bear on solutions to the accounting problem.

Third, genealogical claims may be evidence that moral beliefs fail some other significant epistemic requirement. They need not be evidence of error but may be, for example, evidence about the reliability, safety, or sensitivity of one’s moral beliefs. In evaluating whether that is the case, we face another instance of the classification problem. Are available genealogies evidence in support of any of these claims? And, if so, in what ways should we change our moral beliefs in response to them? Here, a sometimes neglected aspect of the classification problem (what is higher-order information evidence of?) and the accounting problem (what are
we rationally required to believe in response, given all our evidence?) combine. Metaethicists have begun to question the principles in virtue of which we ought to change our moral views (Klenk 2019; Clarke-Doane and Baras 2019). Scholars have defended several epistemic principles, some arguing that none of them are violated by any currently available genealogical information.

Fourth, once we have classified some genealogical claim as higher-order evidence, we face the accounting problem of deciding what our total evidence supports. That issue has been most pronounced in debates about moral peer disagreement. Moral disagreement, at least among peers who are roughly equally likely to get moral matters right, is widely thought to raise a pernicious challenge for the view that moral judgements are justified. Disagreement challenges presuppose an answer to the classification problem; the heart of the debate concerns the accounting problem. Given that one possesses good evidence for one’s moral views, how should one account for the higher-order evidence provided by the information about the disagreement? Interestingly, the accounting problem should arise in equal measure in the debunking debate. Here as well, we should suppose that one possesses good evidence for one’s moral views and thus ask how any higher-order evidence can be accounted for. Some connections between both debates have already been established (Klenk 2018a; Bogardus 2016; Mogensen 2016). In principle, the same questions that arise in the debate about peer disagreement regarding the accounting problem should be applicable in other debates concerning genealogical information.

Finally, the genealogical schema, as noted earlier, makes room also for increasing confidence in one’s moral views in response to genealogical information. On this point, there have been two debates in recent moral epistemology, concerning the epistemic significance of moral testimony (Hills 2009; McGrath 2009) and the requirements for collective moral knowledge (e.g. Anderson 2016). The classification problem looms for the first debate: can testimony provide us with evidence in moral epistemology? The questions we discussed earlier, in turn, should equally apply on the collective level: how should higher-order evidence be accounted for on the collective level?

In reaching the current stage of the debate, metaethicists who have considered genealogical claims have increasingly “gone epistemological” by recognising the importance of some epistemological claims when thinking about the metaethical implications of genealogical information. This volume aims to make further progress in this area by explicitly addressing the assumptions about the normative significance of higher-order evidence that underlie much of the recent metaethics debate. By exploring the implications of different views about the epistemic significance of higher-order evidence in moral epistemology, the book should also
provide insights helpful in theory choice for scholars working on higher-order evidence.

6 The Structure of the Volume

This volume is the first to explicitly address the implications of higher-order evidence for topics in moral epistemology. It contains 12 previously unpublished essays, and it brings together leading international philosophers working in moral epistemology and the debate about higher-order evidence, as well as several promising scholars who are at an earlier stage in their careers.

The volume is divided into four parts: the first is about the higher-order defeat of morality; the second argues against the higher-order defeat of morality; the third is about the wider implications of higher-order evidence in moral epistemology; and the fourth part is about permissible epistemic attitudes in response to higher-order evidence in moral epistemology. The first two parts build on the debates discussed earlier, namely the debate concerning the implications of genealogical explanations of moral beliefs and the epistemic significance of moral disagreement. The third part stays with the method of looking at moral epistemological topics through the higher-order evidence lens but widens the perspective to topics beyond the recent debunking debate. The chapters in this part address the role of moral testimony as higher-order evidence, higher-order defeat in collective moral epistemology, and the rationality of fanatic beliefs in the light of research on higher-order evidence. Finally, the two essays in the fourth part take on a more fundamental perspective, asking how we can rationally respond to higher-order evidence concerning our moral views. In each part of the book, I have sought to include chapters that display a broad range of views, including established positions and new insights.

In Part I of the book, which is about the higher-order defeat of morality, Silvan Wittwer argues (in his contribution titled “Evolutionary Debunking, Self-Defeat and All the Evidence”) that evidence about moral peer disagreement could defeat all moral beliefs about which there is peer disagreement if the total evidence view is true (as opposed to a variant of conciliationism). Wittwer discusses how the total evidence view implies the falsity of the independence principle, according to which evidence of error regarding beliefs of some set, B, should be assessed independently of the contents of B. If the independence principle is false, then debunkers can show that widespread moral disagreement provides us with evidence of error. Wittwer explicitly discusses the self-defeat challenge raised by Vavova (2015, 2018) and shows that adopting the total evidence view is a way for debunkers to circumvent the challenge. However, to defeat all moral beliefs, debunkers must show that our total evidence implies that our moral beliefs are likely to be false. He suggests that an argument to this effect is unlikely to succeed in terms of showing the unreliability of
moral beliefs but that it might succeed in terms of possible moral peer disagreement.

Looking beyond the epistemic relevance of disagreement, Norbert Paulo, in his contribution, “Moral Intuitions Between Higher-Order Evidence and Wishful Thinking,” argues that recent moral psychology provides higher-order evidence against the reliability of moral intuitions as evidence for moral beliefs. In contrast to Wittwer, Paulo accepts the independence principle for the sake of argument. He claims, however, that the available higher-order evidence supports negative implications for the justification of our moral views. Paulo attests to a lack of evidence for the reliability of our moral intuitions and examines the evidence against the reliability of some moral intuitions that is discussed in recent situationist moral psychology. Paulo argues that continuing to rely on (considered) moral intuitions in the face of this evidence amounts to wishful thinking. Thus, according to Paulo, those who are sceptical about moral intuitions need not reject the independence principle to erect debunking arguments, as long as they can show that there are systematic influences on our moral beliefs that make them unreliable.

In the chapter by Paul Silva Jr. titled “Debunking Objective Consequentialism: The Challenge of Knowledge-Centric Anti-Luck Epistemology,” we see how considerations about higher-order evidence can affect which normative theories we adopt. Silva argues that we lack support for accepting consequentialism as the true normative theory because of higher-order evidence against the view that our moral intuitions are best explained by consequentialism. Silva operates on the assumption of a knowledge-centric anti-luck epistemology. Accordingly, for a belief to be knowledge, its truth must not be lucky, and for a belief to be justified, it must qualify as knowledge. Hence, the view implies that propositions that cannot be known cannot be the content of justified beliefs. For example, before a winner is drawn, the belief that one’s lottery ticket is a losing ticket cannot amount to knowledge, according to most epistemologists, because even in lotteries with minuscule odds of winning, it could be the case that one’s ticket is a winner. Consequently, one cannot know that one’s ticket is a losing ticket. Silva goes on to show, and this is the main innovation of the chapter, that consequentialist beliefs are relevantly similar to beliefs about lotteries. The net value of an action, it seems, could easily be different. That is so, argues Silva, even for seemingly uncontroversial, deeply held moral beliefs. Since support for consequentialism is often supposed to come from its ability to explain such uncontroversial moral beliefs, Silva’s appeal to the higher-order evidence about the knowability of such propositions takes away that support.

A fourth way to rescue debunking arguments is discussed by Olle Risberg and Folke Tersman in their chapter “Disagreement, Indirect Defeat, and Higher-Order Evidence.” They argue that some kinds of moral disagreement defeat our moral views because these disagreements provide us with
undercutting defeaters. They assume for the sake of argument that level-splitting views about higher-order evidence are correct. On the face of it, that assumption makes it difficult to argue that evidence of moral peer disagreement defeats the justification of some or all moral beliefs. After all, level-splitting views imply that higher-order evidence affects only our higher-order beliefs (e.g. that you have good evidence for your belief that eating meat is wrong) and not our substantive moral beliefs (e.g. that eating meat is wrong). But Risberg and Tersman show that peer disagreement provides us with undercutting defeaters – which sever the link between the truth of a belief and the grounds on which a given agent holds the belief – rather than higher-order evidence, which is often supposed to leave that link intact and thus to have a different epistemic force than undercutting defeaters. With that argument at hand, they show that we are rationally required to reduce confidence in our moral beliefs in the face of some kinds of moral disagreement. Moreover, Risberg and Tersman ultimately argue that higher-order evidence works just like ordinary undercutting defeaters. By weighing in on the nature of higher-order defeat, their chapter thus also contributes to the epistemological debate about the nature of higher-order defeat.

In the second part of the volume, which contains arguments against the higher-order defeat of morality, three chapters assess the prospects of higher-order defeat with a contrary conclusion. Brian C. Barnett argues in “Higher-Order Defeat in Realist Moral Epistemology” that, given the correct view on higher-order defeat, moral realists can cope with ubiquitous debunking challenges (some of which I introduced earlier, such as the evolutionary debunking challenge and the moral disagreement argument), concluding that a significant number of ordinary first-order moral beliefs are safe from full defeat. Barnett’s argumentative strategy involves two steps. First, he sketches a theory of higher-order defeat in which he pays close attention to the relation between higher-order evidence and lower-order evidence to deduce which types of higher-order evidence defeat lower-order evidence. Second, similar to Risberg and Tersman, he looks at the evidence we gain from moral peer disagreement and evolutionary explanations of morality to see whether the evidence fits any of the defeating types of higher-order evidence that he deduced from his theory of higher-order defeat. The upshot of Barnett’s work is that neither moral peer disagreement nor evolutionary explanations of morality fully defeat our moral beliefs. The reason the debunking of all our moral beliefs fails, argues Barnett, is that although such information is evidence about our moral beliefs, the (negative) support it confers is inscrutable and thus unable to fully defeat all our moral beliefs.

Marco Tiozzo takes a different route to put pressure on sceptical moral disagreement arguments. In his chapter “Moral Peer Disagreement and the Limits of Higher-Order Evidence,” he argues that higher-order evidence gained from moral (peer) disagreement fails to imply widespread
moral scepticism. Tiozzo pays close attention to what it takes for a belief to lose its justification, given a body of higher-order evidence, and thereby focuses on the link between higher-order evidence and higher-order defeat. In doing so, he distinguishes the question of what one’s higher-order evidence supports (which Wittwer, Paulo, and Barnett focus on) from what one can rationally believe given that evidence. He argues that higher-order defeat requires that one believe that one’s belief fails to be rational. If one remains ignorant of this fact, then there is no defeat. Tiozzo then shows that on this interpretation of higher-order defeat, the success of the argument from moral peer disagreement is in jeopardy. He distinguishes between objective and subjective defeaters. The former are facts that affect evidential relations. The latter are beliefs about evidential relations. He argues that subjective defeaters are better suited to explaining higher-order defeat and that therefore higher-order defeat is contingent on what people believe. Since people generally do not take moral peer disagreement to be evidence against the rationality of their moral beliefs, Tiozzo concludes that sceptical implications from moral disagreement are unlikely.

In the final essay of Part II of the book, Michael Huemer, in his contribution “Debunking Skepticism,” turns the tables on aspiring debunkers of our moral beliefs. Huemer exploits the fact that higher-order defeat can appear at many levels. If we take genealogical information concerning our moral beliefs to have sceptical implications – that is, if we consider that information as constituting higher-order defeaters – then, argues Huemer, we have reason to believe that beliefs in moral scepticism are based on inadequate evidence too. Find a defeater for them and they lose their force. Mirroring Paulo’s strategy of attending carefully to psychological explanations of moral beliefs, though for an opposing conclusion, Huemer argues that “moral scepticism is the product of an unreliable belief-forming process.” In doing so, Huemer accepts for the sake of argument that we have a good grasp of the defeating power of higher-order defeat, but he applies the same strategy “one level up.” He thereby adds a challenge for debunkers. When debunkers can solve the worry that their arguments are self-defeating, the same solution should allow us to raise a higher-order debunking argument against their view. That is the case, at least, as long as one accepts Huemer’s substantial empirical claims about the causes of sceptical belief, which may also provide an impetus for further moral psychological research.

In Part III of the book, which is about the broader implications of higher-order evidence in moral epistemology, we look beyond particular metaethical positions about the nature of morality (such as moral realism) to a broader range of issues concerning the beliefs of fanatics, collective moral epistemology, and the epistemic significance of moral testimony.

18

Michael Klenk

Marcus Lee, Neil Sinclair, and Jon Robson assess the implications that accepting the normative significance of higher-order evidence has for views about moral testimony (e.g. when a friend tells you that harming animals is wrong and you believe it on that basis) in their contribution "Moral Testimony as Higher-Order Evidence." Moral testimony is often viewed with suspicion regarding its ability to confer moral knowledge or justification (Hills 2009, 2013). Does such pessimism about moral testimony imply that moral testimony cannot be a source of higher-order defeat? Lee, Sinclair, and Robson show that pessimism about moral testimony does not imply scepticism about the negative, or debunking, effect of moral testimony on justification. Thus, although moral testimony may not help us to justify moral beliefs, it may defeat our beliefs. When Risberg and Tersman note that peer disagreement can be taken as first-order evidence, they are implicitly relying on the view that moral testimony can work as higher-order evidence, a view that is corroborated by the chapter of Lee, Sinclair, and Robson. Lee, Sinclair, and Robson are thus able to show how peer disagreement may be significant, even if moral testimony from our peers would not help us to justify moral beliefs or gain moral knowledge.

The epistemology of group beliefs, collective epistemology, has been another topic of recent interest in moral epistemology. J. Adam Carter and Dario Mortini examine it through the lens of research on higher-order evidence in their chapter "Higher-Order Defeat in Collective Moral Epistemology." They assume for the sake of argument that moral knowledge involves cognitive achievement, a thesis that has recently attracted much attention (e.g. Pritchard 2012). According to this view, it is not sufficient for a belief to be reliably true to qualify as knowledge; rather, its truth has to be, in some sense, achieved by the believer in question. Carter and Mortini transport the achievement requirement to the collective level, briefly outlining what the conditions would be for groups to attain moral knowledge. They argue that the combination of an achievement view concerning moral knowledge and current proposals regarding collective epistemology suggests that collective moral knowledge is "extremely fragile." That is, groups have a hard time satisfying the requirements of the achievement thesis, and thus any claim to a particular piece of collective moral knowledge would easily be defeated. It remains to be discussed whether this result is a strike against the achievement view or whether an improved account of collective moral knowledge can deliver a more stable moral epistemology.
In the final chapter of Part III, we turn to a practical problem of particular recent concern: the beliefs of fanatics. Fanatics, who often but not necessarily have a religious background, hold what seem like problematic beliefs, and those beliefs seem resistant to change. Nonetheless, some argue that fanatics can be rational. On one account, they are sufficiently ignorant of conflicting information; they have a "crippled epistemology" (Sunstein 2009). So sociological facts might explain how the beliefs of fanatics can be rational, even though, from the outside, those beliefs seem far from rational. Joshua DiPaolo shows in his contribution, "The Fragile Epistemology of Fanaticism," however, that fanatics ignore, rather than lack, higher-order evidence and that existing explanations of the rationality of fanatical beliefs fail to take this into account. By approaching the epistemology of fanatical beliefs from the higher-order evidence perspective, DiPaolo refines the explananda of those theories that try to explain the rationality of fanatics. He describes how fanatics, in fact, react to higher-order evidence, claiming that fanatics possess relevant higher-order evidence against their fanatical beliefs while also ignoring this evidence. He finds that fanatics do not treat higher-order evidence as evidence (as they, plausibly, should) but instead as threats to their identity. Hence, if one wants to maintain that fanatics are rational, one has to explain how taking evidence as a threat to one's identity can be rational. The chapter thus fits the volume well by showing how a particular moral epistemological debate has so far neglected the normative significance of higher-order evidence.

In Part IV of the book, which is about permissible epistemic attitudes in response to higher-order evidence in moral epistemology, Margaret Greta Turnbull and Eric Sampson argue in their chapter "How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement" that one can undogmatically split higher-order evidence so that it has a bearing only on one's higher-order beliefs. They thereby show that one need not adopt a conciliatory view to avoid the charge of dogmatism and irrationality, and that it is nonetheless possible to avoid higher-order defeat of morality (which may come, as several other contributors discuss, through moral disagreement and evolutionary explanations of morality). In doing so, their chapter contributes to the mounting defence against the higher-order defeat of morality, which is also discussed in Part II of this volume. The main innovation of their chapter, however, is to show that level splitting can be a rationally permissible, and undogmatic, reaction to higher-order evidence. The key element of their argument is their distinction between higher-order evidence about one's assessment of a given body of evidence (which is what most epistemologists have focused on) and higher-order evidence about whether one reasoned from an illusory or incomplete body of evidence. Noting this distinction, they argue that it can be rational to maintain the belief that one's first-order evidence supports while also maintaining that a different body of evidence would support a different belief. They defend their argument by showing that two objections to level splitting raised by Horowitz (2014) do not apply to their version of level splitting. The authors thereby carve out space for rational level splitting and, furthermore, argue that adopting such beliefs allows level splitters "to occupy the narrow territory of humility, between servility and arrogance."
Almost all of the contributions to this volume implicitly assume that settling how to change our moral beliefs – that is, what epistemically permissible attitudes to adopt – involves analysing concepts such as justified belief. Justin Clarke-Doane, in his chapter "Epistemic Non-factualism and Methodology," goes against this assumption. He asks us to consider normative questions, in epistemology and metaethics alike, as questions that concern what to believe and what to do. The different chapters in this volume propose different answers to this question, which can be understood as suggesting different theories about what it takes for beliefs to be justified (e.g. Silva's adoption, for the sake of argument, of knowledge-centric anti-luck epistemology or Carter and Mortini's adoption of virtue epistemology). Clarke-Doane argues, however, that an open question remains about whether we must adopt any particular understanding of justification or whether there might be equally good grounds to adopt another one. In doing so, Clarke-Doane transposes the well-known open-question argument from descriptive to evaluative properties. If Clarke-Doane is right, then we face an under-studied epistemological question concerning how we should pick our epistemic concepts, which clearly has a bearing on how we could settle on solutions to the classification problem and the accounting problem.

The sets of issues included in this volume have been chosen both for their intrinsic interest and for their importance to central topics in the current metaethical debate. I hope that they give the reader a sense of how progress is possible in the debunking debate by using the finer epistemological tools at hand, and a sense of the excitement that the higher-order evidence perspective offers on other topics in moral epistemology. The contributions in this volume provide valuable starting points for future examinations.

Notes

1. Thanks to Jaakko Hirvelä, Herman Philipse, Ibo van de Poel, Martin Sand, Steffen Steinert, and Marco Tiozzo for helpful comments on a previous version of this chapter and to Liam Deane for an instructive discussion.
2. Ordinarily speaking, objects (e.g. a gun with fingerprints on it at a crime scene) are taken to be evidence, though prevalent epistemological approaches maintain that one's knowledge or (subsets of) one's mental states constitute one's evidence; see Williamson (2000) and Conee and Feldman (2004), respectively. Intuitions about moral principles (e.g. harming sentient beings is wrong), knowledge of facts related to those principles (e.g. "X is a sentient being"), and sound arguments that suggest that a certain state of affairs obtains are thus evidence in this general sense.
3. E.g. Cameron, J. and Cameron, S. Amis (December 4, 2017).
4. Early proponents of non-cognitivism (e.g. Ayer 1971 [1936]; Hare 1963) may deny this, but most theories across the metaethical spectrum make room for speaking about moral truth; this includes modern versions of non-cognitivism (e.g. Gibbard 1990; Blackburn 1998), non-naturalist moral realism (e.g. Shafer-Landau 2003; Enoch 2011; Huemer 2005), error theory (e.g. Mackie 1977), naturalist realism (e.g. Brink 1989; Railton 1986), and views of more subjectivist leanings (e.g. Prinz 2007).
5. To consider evidence to be any factor that makes a given state of affairs more probable, they might need to adopt a correspondence theory of truth, which would also exclude modern versions of non-cognitivism.
6. Such factors might include intuitions, beliefs, and other mental states, as well as facts, propositions, events, and worldly objects; see Kelly (2016).
7. It is commonly understood that the truth of most moral views depends on both descriptive and normative factors. For example, the truth of the view that eating meat is impermissible plausibly depends, in part, on the descriptive fact that animals feel pain. Hence, there can be descriptive evidence in support of this view (e.g. scientific studies about the extent to which animals feel pain). But that view also depends on normative principles (e.g. a prohibition against causing harm). What counts as evidence for such principles will be determined by one's account of the nature of moral truth. Depending on how we fill in the details, intuitions and emotions, which can be socially informed, can be interpreted as evidence according to metaethical views from expressivism (Gibbard 1990) to non-naturalist realism (Enoch 2011); see also Roeser (2011).
8. See Climenhaga (2018) and Bedke (2008).
9. Compare Durkheim (1995), on whose work one might base a defence of such a view.
10. See, for example, Greene (2013).
11. See Leiter (2004) for a discussion of a related genealogical suspicion that is especially highlighted by Friedrich Nietzsche, Sigmund Freud, and Karl Marx.
12. Friedrich Nietzsche (1887 [2013]) coined the term genealogy as applied to concepts, beliefs, and values in his endeavour to discredit what he took to be Christian values based on an analysis of their origins (see Klenk 2018b for more discussion). I use the term genealogy in a wider sense.
13. They can take many forms; although they typically concern the history and development of a concept, belief, or value, I interpret the claim widely, so that they encompass information about how individuals attained a concept, formed a belief, or endorsed a value; cf. Queloz (forthcoming).
14. We get higher-order evidence whenever we learn about the circumstances in which we formed a belief, about the causes of our beliefs, or about the way we interpreted the evidence. The term higher-order evidence is used in several non-equivalent ways in the literature. Some take it to mean evidence that has a bearing on evidential relations. Others focus on evidence about the rationality of the person's thinking or on evidence about the reliability or accuracy of the person's thinking (Christensen forthcoming).
15. In contrast to first-order evidence, which affects it directly.
16. Compare the following: whether or not you interpreted Peter Singer's (1975) arguments correctly does not determine whether eating animals is permissible, but it does affect how confident you should be about that question.
17. I am indebted to Tiozzo (2019, ch. 2) for the setup of this section. However, see Whiting (2017) for an argument rejecting the significance of higher-order evidence.
18. See Elga (2007), Christensen (2010, 2011), and Sliwa and Horowitz (2015) for similar bootstrapping cases. Insofar as some moral truths are necessary, there would not be a rational change in a moral view about these either.
19. Horowitz (2014); see also Tiozzo (2019, ch. 2.2). See Wittwer's contribution to this volume for further discussion of the total evidence view.
20. See the contributions by Risberg and Tersman and by Turnbull and Sampson for further discussion of level-splitting views.
21. Though see Silva (2017).
22. It is not my aim to provide a thorough overview of moral epistemology. See Campbell (2015) and Zimmerman, Jones, and Timmons (2018) for useful introductions and overviews.
23. As Richard Joyce (2016a, 2) notes, genealogical debunking arguments are commonplace and were made much earlier in recorded scholarship. That they are part and parcel of our everyday lives should be clear from the examples given in the introduction.
24. See Norbert Paulo's contribution to this volume, as well as Klenk and Sauer (2019), for recent evaluations of the situationist evidence.
25. I am treating genealogical findings as a class here to find common themes in how philosophers have reacted to them. Of course, individual reactions will look quite different, depending on whether they concern, for example, Darwin's early formulation of the evolution of morality or later improvements of the theory.
26. This is because of well-known problems and worries about attempts to derive first-order moral claims from purely descriptive premises. See Farber (1994) for a thorough discussion of the historically best-known case of evolutionary ethics.
27. Though see Coates (2012). See Paul Silva's contribution to this volume for a relevant argument that suggests a way, though Silva does not explicitly endorse it, for us to go from higher-order evidence to first-order conclusions.
28. Most arguments along these lines appeal to a variant of the parsimony principle; see, for example, Harman (1977) and the argument about moral disagreement in Mackie (1977). See Lillehammer (2016) and Clarke-Doane (2016) for discussions of the view that several recent attempts to offer "debunking explanations" of morality, such as those offered by Joyce (2006), belong to this category.
29. Initially, scholars were most impressed with the alleged normative implications of genealogical findings (cf. Farber 1994). Later developments brought with them not only novel descriptive perspectives on morality but also new perspectives on how to interpret these findings metaethically; see Klenk (forthcoming) for an introduction.
30. It is another question, of course, how important the descriptive premise is to the soundness of arguments based on the epistemological debunking scheme; see Klenk (2017).
31. See Street (2006) and Joyce (2006) for the starting points of the modern discussion on evolutionary debunking arguments. See Kahane (2011) and Wielenberg (2016) for recent overviews.
32. See Rowland (2017) for a helpful overview.
33. The argument from moral disagreement rose to popularity with the work of Mackie (1977). As Tiozzo shows in his contribution to this volume, however, that argument aims for a metaphysical conclusion, whereas the moral disagreement argument as understood here proceeds on epistemological terms.

References

Adler, Jonathan Eric. 2002. "Akratic Believing?" Philosophical Studies 110 (1): 1–27.
Anderson, Elizabeth. 2016. "The Social Epistemology of Morality." In The Epistemic Life of Groups: Essays in the Epistemology of Collectives, edited by Michael Brady and Miranda Fricker, 75–94. Oxford: Oxford University Press.
Ayer, A.J. 1971 [1936]. Language, Truth, and Logic. Pelican Books. Harmondsworth: Penguin Books.
Barkow, Jerome H., Leda Cosmides, and John Tooby. 1995. The Adapted Mind: Evolutionary Psychology and the Generation of Culture. Oxford: Oxford University Press.
Baumard, Nicholas. 2016. The Origins of Fairness: How Evolution Explains Our Moral Nature. Oxford: Oxford University Press.
Bedke, Matthew S. 2008. "Ethical Intuitions: What They Are, What They Are Not, and How They Justify." American Philosophical Quarterly 45 (3): 253–69.
Blackburn, Simon. 1998. Ruling Passions: A Theory of Practical Reasoning. Oxford: Oxford University Press.
Bogardus, Tomas. 2016. "Only All Naturalists Should Worry about Only One Evolutionary Debunking Argument." Ethics 126 (3): 636–61. https://doi.org/10.1086/684711.
Brink, David O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Buchanan, Allen, and Russell Powell. 2015. "The Limits of Evolutionary Explanations of Morality and Their Implications for Moral Progress." Ethics 126 (1): 37–67. https://doi.org/10.1086/682188.
Cameron, James, and Suzy Amis Cameron. 2017. "Animal Agriculture Is Choking the Earth and Making Us Sick: We Must Act Now." The Guardian, December 4. Accessed September 24, 2019. www.theguardian.com/commentisfree/2017/dec/04/animal-agriculture-choking-earth-making-sick-climate-food-environmental-impact-james-cameron-suzy-amis-cameron.
Campbell, Richmond. 2015. "Moral Epistemology." In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Winter 2015 edition. Accessed July 30, 2018. https://plato.stanford.edu/archives/win2015/entries/moral-epistemology.
Christensen, David. forthcoming. "Formulating Independence." In Higher-Order Evidence: New Essays, edited by Mattias S. Rasmussen and Asbjørn Steglich-Petersen. Oxford: Oxford University Press.
Christensen, David. 2007. "Epistemology of Disagreement: The Good News." The Philosophical Review 116 (2): 187–217.
Christensen, David. 2009. "Disagreement as Evidence: The Epistemology of Controversy." Philosophy Compass 4 (5): 756–67. https://doi.org/10.1111/j.1747-9991.2009.00237.x.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Christensen, David. 2011. "Disagreement, Question-Begging, and Epistemic Self-Criticism." Philosopher's Imprint 11 (6): 1–22.
Clarke-Doane, Justin. 2016. "Debunking and Dispensability." In Leibowitz and Sinclair 2016, 23–36.
Clarke-Doane, Justin, and Dan Baras. 2019. "Modal Security." Philosophy and Phenomenological Research 65 (1): 87. https://doi.org/10.1111/phpr.12643.
Climenhaga, Nevin. 2018. "Intuitions Are Used as Evidence in Philosophy." Mind 127 (505): 69–104. https://doi.org/10.1093/mind/fzw032.
Coates, Allen. 2012. "Rational Epistemic Akrasia." American Philosophical Quarterly 49 (2): 113–24.
Cohen, G.A. 2001. If You're an Egalitarian, How Come You're So Rich? Cambridge, MA: Harvard University Press.
Conee, Earl, and Richard Feldman. 2004. Evidentialism: Essays in Epistemology. Oxford: Oxford University Press.
Doris, John M., ed. 2010. The Moral Psychology Handbook. Oxford: Oxford University Press.
Durkheim, Émile. 1995. Erziehung, Moral und Gesellschaft: Vorlesung an der Sorbonne 1902/1903. Frankfurt am Main: Suhrkamp.
Elga, Adam. 2007. "Reflection and Disagreement." Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Enoch, David. 2011. Taking Morality Seriously: A Defense of Robust Realism. Oxford: Oxford University Press.
Farber, Paul Lawrence. 1994. The Temptations of Evolutionary Ethics. Berkeley, CA: University of California Press.
Feldman, Richard. 2005. "Respecting the Evidence." Philosophical Perspectives 19 (1): 95–119. https://doi.org/10.1111/j.1520-8583.2005.00055.x.
Forgas, Joseph P., Lee J. Jussim, and Paul A.M. van Lange, eds. 2016. The Social Psychology of Morality. New York, NY: Psychology Press.
Gibbard, Allan. 1990. Wise Choices, Apt Feelings: A Theory of Normative Judgment. Cambridge, MA: Harvard University Press.
Greene, Joshua D. 2008. "The Secret Joke of Kant's Soul." In Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, edited by Walter Sinnott-Armstrong, 35–79. A Bradford Book, Vol. 3. Cambridge, MA: MIT Press.
Greene, Joshua D. 2013. Moral Tribes: Emotion, Reason, and the Gap between Us and Them. New York, NY: Penguin Books.
Greene, Joshua D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. "An fMRI Investigation of Emotional Engagement in Moral Judgment." Science 293 (5537): 2105–8. https://doi.org/10.1126/science.1062872.
Hare, R.M. 1963. The Language of Morals. Oxford: Oxford University Press.
Harman, Gilbert. 1977. The Nature of Morality: An Introduction to Ethics. Oxford: Oxford University Press.
Hazlett, Allan. 2012. "Higher-Order Epistemic Attitudes and Intellectual Humility." Episteme 9 (3): 205–23. https://doi.org/10.1017/epi.2012.11.
Hills, Alison. 2009. "Moral Testimony and Moral Epistemology." Ethics 120 (1): 94–127. https://doi.org/10.1086/648610.
Hills, Alison. 2013. "Moral Testimony." Philosophy Compass 8 (6): 552–9. https://doi.org/10.1111/phc3.12040.
Horowitz, Sophie. 2014. "Epistemic Akrasia." Noûs 48 (4): 718–44. https://doi.org/10.1111/nous.12026.
Huemer, Michael. 2005. Ethical Intuitionism. Basingstoke: Palgrave Macmillan.
Joyce, Richard. 2006. The Evolution of Morality. Life and Mind. Cambridge, MA: MIT Press.
Joyce, Richard. 2016a. Essays in Moral Skepticism. Oxford: Oxford University Press.
Joyce, Richard. 2016b. "Evolution, Truth-Tracking, and Moral Scepticism." In Essays in Moral Skepticism, 142–58. Oxford: Oxford University Press.
Kahane, Guy. 2011. "Evolutionary Debunking Arguments." Noûs 45 (1): 103–25.
Kahane, Guy, Jim A.C. Everett, Brian D. Earp, Miguel Farias, and Julian Savulescu. 2015. "'Utilitarian' Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good." Cognition 134: 193–209. https://doi.org/10.1016/j.cognition.2014.10.005.
Kelly, Thomas. 2005. "The Epistemic Significance of Disagreement." In Oxford Studies in Epistemology, Vol. 1, edited by Tamar S. Gendler and John P. Hawthorne, 167–96. Oxford: Oxford University Press.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Disagreement, edited by Richard Feldman and Ted A. Warfield, 111–74. Oxford: Oxford University Press.
Kelly, Thomas. 2016. "Evidence." In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Winter 2016 edition. https://plato.stanford.edu/archives/win2016/entries/evidence/.
Klenk, Michael. forthcoming. "Evolutionary Ethics." In Introduction to Philosophy: Ethics, edited by George Matthews, 76–90. The Rebus Community.
Klenk, Michael. 2017. "Old Wine in New Bottles: Evolutionary Debunking Arguments and the Benacerraf–Field Challenge." Ethical Theory and Moral Practice 20 (4): 781–95. https://doi.org/10.1007/s10677-017-9797-y.
Klenk, Michael. 2018a. "Evolution and Moral Disagreement." Journal of Ethics and Social Philosophy 14 (2): 112–42. https://doi.org/10.26556/jesp.v14i2.476.
Klenk, Michael. 2018b. "Survival of Defeat: Evolution, Moral Objectivity, and Undercutting." PhD thesis, Utrecht University. https://dspace.library.uu.nl/handle/1874/364788.
Klenk, Michael. 2018c. "Third Factor Explanations and Disagreement in Metaethics." Synthese. https://doi.org/10.1007/s11229-018-1875-8.
Klenk, Michael. 2019. "Objectivist Conditions for Defeat and Evolutionary Debunking Arguments." Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Klenk, Michael, and Hanno Sauer. 2019. Situationism and the Control Requirement for Moral Progress. Manuscript.
Lasonen-Aarnio, Maria. 2014. "Higher-Order Evidence and the Limits of Defeat." Philosophy and Phenomenological Research 88 (2): 314–45.
Leiter, Brian. 2004. "The Hermeneutics of Suspicion: Recovering Marx, Nietzsche, and Freud." In The Future of Philosophy, edited by Brian Leiter, 74–105. Oxford: Oxford University Press.
Liao, S. Matthew, ed. 2016. Moral Brains: The Neuroscience of Morality. Oxford: Oxford University Press.
Lillehammer, Hallvard. 2016. "'An Assumption of Extreme Significance': Moore, Ross, and Spencer on Ethics and Evolution." In Leibowitz and Sinclair 2016, 103–23.
Mackie, John Leslie. 1977. Ethics: Inventing Right and Wrong. London: Penguin Books.
Matheson, Jonathan. 2009. "Conciliatory Views of Disagreement and Higher-Order Evidence." Episteme 6 (3): 269–79. https://doi.org/10.3366/E1742360009000707.
May, Joshua. 2018. Regard for Reason in the Moral Mind. Oxford: Oxford University Press.
McGrath, Sarah. 2008. "Moral Disagreement and Moral Expertise." In Oxford Studies in Metaethics, Vol. 3, edited by Russ Shafer-Landau, 87–108. Oxford: Oxford University Press.
McGrath, Sarah. 2009. "The Puzzle of Pure Moral Deference." Philosophical Perspectives 23 (1): 321–44.
Mogensen, Andreas L. 2016. "Contingency Anxiety and the Epistemology of Disagreement." Pacific Philosophical Quarterly 97 (4): 590–611. https://doi.org/10.1111/papq.12099.
Montaigne, Michel de. 1877. "Of Cannibals." In Essays of Michel de Montaigne, edited by William C. Hazlitt, 1–10.
Nietzsche, Friedrich Wilhelm. 1887 [2013]. On the Genealogy of Morals: A Polemic. Translated by Michael A. Scarpitti, with an introduction and notes by Robert C. Holub. London: Penguin Books.
Pollock, John L., and Joseph Cruz. 1999. Contemporary Theories of Knowledge. Lanham, MD: Rowman & Littlefield.
Prinz, Jesse J. 2007. The Emotional Construction of Morals. Oxford: Oxford University Press.
Pritchard, Duncan. 2012. "Anti-Luck Virtue Epistemology." The Journal of Philosophy 109 (3): 247–79. https://doi.org/10.5840/jphil201210939.
Queloz, Matthieu. forthcoming. "Nietzsche's English Genealogy of Truthfulness." Archiv für Geschichte der Philosophie.
Railton, Peter. 1986. "Moral Realism." The Philosophical Review 95 (2): 163–207. https://doi.org/10.2307/2185589.
Rini, Regina A. 2016. "Debunking Debunking: A Regress Challenge for Psychological Threats to Moral Judgment." Philosophical Studies 173 (3): 675–97. https://doi.org/10.1007/s11098-015-0513-2.
Roeser, Sabine. 2011. Moral Emotions and Intuitions. Basingstoke: Palgrave Macmillan.
Rowland, Richard. 2017. "The Epistemology of Moral Disagreement." Philosophy Compass 12 (2): e12398. https://doi.org/10.1111/phc3.12398.
Ruse, Michael, and Edward Osborne Wilson. 1986. "Moral Philosophy as Applied Science." Philosophy 61 (236): 173–92.
Sauer, Hanno. 2018. Debunking Arguments in Ethics. Cambridge: Cambridge University Press.
Shafer-Landau, Russ. 2003. Moral Realism: A Defence. Oxford: Oxford University Press.
Sher, George. 2001. "But I Could Be Wrong." Social Philosophy and Policy 18 (2): 64. https://doi.org/10.1017/S0265052500002909.
Silva, Paul. 2017. "How Doxastic Justification Helps Us Solve the Puzzle of Misleading Higher-Order Evidence." Pacific Philosophical Quarterly 98: 308–28. https://doi.org/10.1111/papq.12173.
Singer, Peter. 1975. Animal Liberation: A New Ethics for Our Treatment of Animals. New York, NY: Random House.
Sliwa, Paulina, and Sophie Horowitz. 2015. "Respecting All the Evidence." Philosophical Studies 172 (11): 2835–58. https://doi.org/10.1007/s11098-015-0446-9.
Street, Sharon. 2006. "A Darwinian Dilemma for Realist Theories of Value." Philosophical Studies 127 (1): 109–66. https://doi.org/10.1007/s11098-005-1726-6.
Sunstein, Cass R. 2009. Going to Extremes: How Like Minds Unite and Divide. Oxford: Oxford University Press.
Tersman, Folke. 2006. Moral Disagreement. Cambridge: Cambridge University Press.
Tiozzo, Marco. 2019. "Moral Disagreement and the Significance of Higher-Order Evidence." PhD thesis, Gothenburg University. https://gupea.ub.gu.se/handle/2077/57974.
Tomasello, Michael. 2016. A Natural History of Human Morality. Cambridge, MA: Harvard University Press.
Vavova, Katia. 2015. "Evolutionary Debunking of Moral Realism." Philosophy Compass 10 (2): 104–16. https://doi.org/10.1111/phc3.12194.
Vavova, Katia. 2018. "Irrelevant Influences." Philosophy and Phenomenological Research 96 (1): 134–52. https://doi.org/10.1111/phpr.12297.
Weatherson, Brian. n.d. Do Judgements Screen Evidence? http://brian.weatherson.org/JSE.pdf.
White, Roger. 2010. "You Just Believe That Because . . ." Philosophical Perspectives 24: 573–615.
Whiting, Daniel. 2017. "Against Second-Order Reasons." Noûs 51 (2): 398–420. https://doi.org/10.1111/nous.12138.
Wielenberg, Erik J. 2016. "Ethics and Evolutionary Theory." Analysis 76 (4): 502–15. https://doi.org/10.1093/analys/anw061.
Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
Wong, David B. 2006. Natural Moralities: A Defense of Pluralistic Relativism. Oxford: Oxford University Press.
Zimmerman, Aaron, Karen Jones, and Mark Timmons, eds. 2018. Routledge Handbook of Moral Epistemology. New York, NY: Routledge.

Part I

Higher-Order Evidence Against Morality

1 Evolutionary Debunking, Self-Defeat and All the Evidence

Silvan Wittwer

1 Introduction1

Robust moral realism is the metaethical view that there are mind-independent moral facts that are irreducible to natural facts and in principle knowable.2 Recently, this view has been criticized heavily on evolutionary grounds. So-called evolutionary debunkers argue that becoming aware of the evolutionary origins of our moral beliefs, realistically construed, undermines their justification.3 While there are various ways of spelling out the underlying epistemological details, an appeal to higher-order evidence of error has recently garnered increased philosophical attention.4 More precisely, Tomas Bogardus (2016), Andreas Mogensen (2014, 2017) and – at least on one plausible reconstruction5 – Sharon Street (2006) argue that evolutionary considerations provide such evidence, add the view – often called conciliationism6 – that higher-order evidence of error defeats justification, and thus conclude that evolutionary considerations defeat the justification of our moral beliefs.

In response, moral realists such as Katia Vavova (2014) have objected that this evolutionary debunking argument is self-defeating.7 To see how that threat materializes, note that conciliationism is typically motivated by independence, the principle that we should assess higher-order evidence of error with respect to p independently of our original first-order evidence, beliefs or reasoning in support of p.8 To respect this principle when assessing evolutionary higher-order evidence of error, the moral realist would need to set aside all of their moral evidence, beliefs and reasoning. But doing so risks setting aside too much to know whether they are genuinely mistaken about morality or not. So the evolutionary debunking argument from earlier seems to defeat itself.

The literature lacks any discussion of whether evolutionary debunkers can handle this self-defeat objection.9 My overall aim in this chapter is to argue that they cannot, thus filling that lacuna – and vindicating Vavova's worry.
To achieve my aim, I proceed in two steps. First, I propose a novel, prima facie promising strategy for avoiding self-defeat. Then, I show that evolutionary debunkers face insuperable difficulties trying to successfully implement that strategy. As a result, the evolutionary debunking argument from higher-order evidence of error fails.

What does this strategy, which constitutes the first step of my argument, look like? It consists in realizing that conciliationism isn't the only prima facie plausible view on the epistemic significance of higher-order evidence of error. In light of that, it would be premature to conclude that the evolutionary debunking arguments from higher-order evidence defeat themselves. Rather, independently compelling alternatives are available – and it remains an issue of active debate which one we should endorse. Further, some of these alternatives appear to have distinctive features that would help avoid self-defeat. In particular, some views (A) reject independence and therefore allow first-order moral evidence into the picture yet (B) still promise to yield the verdict that evolutionary higher-order evidence can defeat the justification of first-order moral beliefs. For example, Thomas Kelly's (2010) total evidence view fits that bill. According to it, your total evidence determines whether your belief that p is justified or not. The total evidence includes both your first-order evidence (in support of p) and the second-order evidence of peer disagreement (regarding p). Therefore, Kelly's (2010) view clearly satisfies (A): it rejects independence, affording our first-order moral evidence, beliefs and reasoning a role in determining whether a given belief is justified or not. That amounts to a promising start, giving evolutionary debunkers enough reason to explore whether their argument avoids self-defeat if they accept the total evidence view instead of conciliationism.

However, whether Kelly's (2010) view satisfies (B) is more complicated, as I show in the second step of my argument. To establish (B), evolutionary debunkers must argue that the total evidence available to the robust moral realist, consisting of their first-order moral evidence and the evolutionary higher-order evidence of error, defeats the justification of their moral beliefs. But, upon reflection, evolutionary debunkers cannot discharge this argumentative burden. The exact reason for their failure depends on the kind of higher-order evidence of error that evolutionary considerations allegedly provide. Debunkers such as Street (2006), for whom evolutionary considerations supply evidence of moral unreliability, struggle with evidential weight. In contrast, debunkers such as Bogardus (2016) and Mogensen (2014, 2017), who construe evolutionary considerations as evidence of moral peer disagreement, are committed to a pair of inconsistent assumptions about evolutionary counterparts. Either way, evolutionary debunkers who appeal to the notion of higher-order evidence of error struggle to implement (B) of my proposed strategy. By implication, their arguments cannot avoid self-defeat.
Here is my plan. Section 2 presents the evolutionary debunking argument from higher-order evidence of error in more detail, while Section 3 turns towards characterizing the self-defeat objection. Then, Section 4 unveils my two-part strategy for dealing with self-defeat. The remainder of the chapter explores whether evolutionary debunkers can make good on that strategy. Section 5 argues that the first part is easy to pull off, but Section 6 raises concerns about the implementation of the second part, having to do with evidential weight and whether evolutionary counterparts qualify as epistemic peers. Section 7 concludes.

2 Evolutionary Debunking and Higher-Order Evidence

Recently, some philosophers have argued that evolutionary theory debunks beliefs in robust mind-independent moral facts. To make their case, some of these evolutionary debunkers rely on the notion of higher-order evidence of error. Despite differences in detail, the arguments put forth by Tomas Bogardus (2016), Andreas Mogensen (2014, 2017) and – at least on Vavova's (2014) compelling reconstruction – Sharon Street (2006) share the following structure:

(1) Higher-order evidence of error about your beliefs that p, q, etc. defeats their justification.10
(2) Evolutionary considerations provide moral realists with higher-order evidence of error about their moral beliefs.
(3) Therefore, evolutionary considerations defeat the justification of the moral realists' moral beliefs.

This argument, if sound, forces moral realists into moral scepticism: even if robust moral facts exist, our beliefs about them aren't justified.

To begin with, the first premise introduces the notion of higher-order evidence of error. Such evidence of error indicates that one suffers from some epistemic malfunction.11 Here are two familiar examples:

Offside Call: In my spare time, I enjoy attending football games with my best friend Julian. We are both equally good at spotting whether a forward is offside or not. Last Sunday, though, we disagreed: while Julian judged that our forward started from an offside position and the resulting goal was thus irregular, it seemed to me that our forward timed his run well and was onside.12

Hypoxia: While I am climbing the Dufour peak in the Swiss Alps, the weather suddenly turns near the summit. I stop briefly to calculate whether there is enough time to reach the peak and start the climb down before the snowstorm hits. After going over my calculations several times, I am rather confident that I should be able to make it. However, I suddenly remember that, given the high altitude, I am very likely to suffer from mild hypoxia (or lack of oxygen), which undetectably impairs one's reasoning, leading to stupid yet fatal mistakes.13
In the first case, I receive evidence of peer disagreement: Julian and I are equally good at making offside calls but disagree about whether our forward was offside this time. Since we cannot both be right (but are equally good at making offside calls), one of us must be in error, and it might very well be me. In contrast, the second case features evidence of unreliability: hypoxia makes it likely that my reasoning (about time, in this case) is mistaken.14

The first premise doesn't just introduce higher-order evidence of error but also articulates a view about its epistemic significance. Conciliationism says that higher-order evidence of error defeats the justification of relevant first-order beliefs.15 Originally, this view was defended in the context of peer disagreement, recommending that one conciliate (hence its name) upon receiving evidence of peer disagreement, as in Offside Call.16 However, it can easily be generalized, resulting in a view that says that any kind of higher-order evidence of error defeats justification.17 For instance, it would say that, in Hypoxia, learning of the likely distortion of my reasoning defeats the justification of my belief that I have enough time to reach the Dufour peak and return safely. So conciliationism is a moderate form of scepticism: according to it, higher-order evidence of error defeats justification, but only the justification of those first-order beliefs that we have such evidence about.18

Conciliationism is prima facie attractive.19 First, it accommodates our intuitions: in both Offside Call and Hypoxia, it seems intuitively appropriate to revise our beliefs in light of the higher-order evidence of error – which conciliationism respects. Second, the view also plausibly explains our intuitions. For instance, given that Julian and I are equally likely to get offside calls right and that there must be a mistake on either his or my part in Offside Call, we both have reason to think that we have made a mistake, which may well be enough to defeat the justification of our relevant beliefs. Similarly, given evidence of my medical condition in Hypoxia, it is significantly more likely that I have made a justification-defeating mistake in my time management. Third, conciliationism follows from a seemingly plausible principle for correctly evaluating evidence. According to independence, we should assess higher-order evidence of error with respect to p independently of our original first-order evidence, beliefs or reasoning in support of p.20 To see why that seems plausible, reconsider Offside Call: once I learn that Julian, an epistemic peer, disagrees with me, it would be intuitively wrong or irrational to dismiss his judgment by depending on my initial perceptual seeming that the forward wasn't offside. (Similarly for Hypoxia: sticking with my original reasoning would be epistemically problematic in the face of a significant risk of an altitude-induced distortion of reasoning.) But once we accept independence, conciliationism straightforwardly follows: if it is rational to bracket one's first-order evidence, beliefs and reasoning, so that only the higher-order evidence of error matters, that evidence will defeat the justification of our relevant first-order beliefs.21
The second premise states that evolutionary considerations provide moral realists with higher-order evidence of error about their moral beliefs.22 But what kind of evidence is this exactly? There are at least two answers in the literature, both of which work for the argument.

On the one hand, according to Sharon Street (2006), moral realists face a scenario similar to Hypoxia. For her, evolutionary considerations provide moral realists with evidence of moral unreliability.23 After all, when moral realists reflect on the evolutionary origin of our moral beliefs, they must realize that evolution selects for adaptive, not true, moral beliefs.24 For instance, suppose you believe that you have special moral obligations to your family, based on a corresponding moral intuition. But then you realize that we evolved to survive, not to track mind-independent moral truths, and that it is therefore likely that your belief is false. So moral realists have good reason to think that their moral beliefs have been unreliably formed: they are the upshot of a process that was not designed to get at moral truth. In that way, evolutionary considerations provide moral realists with evidence of moral unreliability.

On the other hand, for Bogardus (2016) and Mogensen (2014, 2017), moral realists find themselves in a situation similar to Offside Call.25 According to them, evolutionary considerations amount to evidence of possible peer disagreement: evidence that moral realists could disagree with their evolutionary counterparts about fundamental moral matters (such as the wrongness of incest or slavery, our obligations towards our children, or the claim that ethnicity doesn't matter to moral standing). After all, moral realists must realize that, had humans evolved differently, they would hold different moral beliefs now. Suppose moral realists believe that incest is morally wrong, on the basis of a corresponding moral intuition. Their evolutionary counterparts might disagree: since incest did not hamper their reproductive fitness, they don't believe that it is morally impermissible. Rather, they believe that it is perfectly morally alright, on the basis of their corresponding moral intuition. In that manner, evolutionary considerations amount to evidence of possible peer disagreement.

With both premises in place, we are now able to secure the sceptical conclusion. After all, combining the claim that evolutionary considerations provide moral realists with higher-order evidence of error about their moral beliefs with the view that such evidence defeats justification yields the conclusion that evolutionary considerations defeat the justification of our moral beliefs, realistically construed. Therefore, evolutionary considerations seem to saddle moral realists with an uncomfortable commitment to moral scepticism. Even if robust moral facts exist, we do not form justified beliefs about them, given the higher-order evidence of error that evolutionary considerations provide.
3 The Threat of Self-Defeat

One of the most powerful objections to the evolutionary debunking argument from earlier is that it threatens to be self-defeating. To see how that threat materializes, recall that conciliationism is typically motivated by independence, the principle that we should assess higher-order evidence of error with respect to p independently of our original first-order evidence, beliefs or reasoning in support of p. To respect this principle when assessing evolutionary higher-order evidence of error, the moral realist would need to set aside all of their moral evidence, beliefs and reasoning. Why? Because the debunker's idea is that evolutionary considerations call into question all of their moral beliefs. But doing so risks setting aside too much to know whether they are mistaken about morality. As Katia Vavova (2014, 89–93), who develops this objection most clearly and forcefully, writes, "we cannot determine if we are likely to be mistaken about morality if we can make no assumptions at all about what morality is like" (Vavova 2014, 92). After all, to see whether true moral beliefs and adaptive moral beliefs indeed come apart, as the evolutionary debunker has it, we need to know something about the contents of both of those sets. If we don't know what morality is, how can we know whether we fall short of it? Or, if morality could be about anything, we have no reason to think that mind-independent moral truths and adaptive moral beliefs don't coincide or overlap. So the evolutionary debunking argument seems to defeat itself.

For illustration, consider an analogy with perception: to evaluate whether my perceptual beliefs about mid-size objects in my immediate environment (e.g. tables, chairs, desk lamps, water bottles, coffee mugs) are indeed unreliably formed, I need to make some assumptions about the contents of my perception. For instance, I need to know roughly what a chair is in order to determine whether my perceptual belief that there is a chair right in front of me is false. Similarly, Vavova points out, "I cannot show that I am not hopeless at understanding right and wrong without being allowed to make some assumptions about what is right and wrong" (ibid.). For instance, consider my moral belief that racism is wrong. At the same time, I am aware of evolutionary explanations of racism: it is adaptive to be suspicious of those who look different from me. Here the adaptive and true moral beliefs come apart. However, importantly, to draw that distinction, I must already assume some moral truths, including that racism is morally wrong.27
It is important to appreciate how general this objection is. More precisely, the worry does not depend on a specific interpretation of premise (2): the claim that evolutionary considerations provide higher-order evidence of error. Vavova (2014) criticizes a version of the evolutionary debunking argument according to which thinking about the evolutionary origins of their moral beliefs provides moral realists with evidence of unreliability. But that, in my view, artificially restricts the scope of the objection, since we can easily generalize it to any evolutionary debunking argument from higher-order evidence of error, regardless of the kind of evolutionary higher-order evidence. After all, any such argument subscribes to the problematic commitments that the objection capitalizes on: evolution as providing higher-order evidence of error, conciliationism and, especially, independence. For instance, suppose that evolutionary considerations amount to evidence of peer disagreement. Once the moral realist receives such evidence, they must once again set aside all of their moral evidence, beliefs and reasoning to respect independence. But doing so would make it impossible for them to assess whether they, as opposed to their evolutionary counterpart, are more likely mistaken about morality. Again, the evolutionary debunker ends up with self-defeat. So the threat is perfectly general and therefore relevant to any evolutionary debunker relying on the notion of higher-order evidence of error.28

Of course, not all evolutionary debunking arguments appeal to (higher-order) evidence of error. Rather, some recent influential debunkers have argued that evolutionary considerations defeat the justification of our moral beliefs because they show that those beliefs are insensitive or unsafe.29 For evolutionary debunking arguments developed along such lines, Vavova's self-defeat worry might not arise. However, such arguments face their own issues. For example, Justin Clarke-Doane (2015, 2016, 2017; Clarke-Doane and Baras 2019) has convincingly responded that our moral beliefs are modally robust, assuming their truth and defeasible justification.30 After all, our beliefs are sensitive to the robust moral facts by default: since such facts hold necessarily, they could not possibly change. So any true moral belief must be sensitive as well. Similarly, our moral beliefs are safe from error: they are true (as we assume) and could not easily have evolved differently. So our moral beliefs could not easily be false.31 But if the prospects of such modal evolutionary debunking arguments look dim, defending the evolutionary debunking argument from higher-order evidence against the threat of self-defeat becomes all the more important.32

4 Avoiding Self-Defeat: A Strategy

As mentioned in the introduction, the literature doesn't feature any discussion of how evolutionary debunkers could or should respond to this self-defeat objection. My aim in what follows is to address that shortcoming. I shall propose and subsequently evaluate a two-pronged strategy.
Its key insight is the following: conciliationism isn't the only prima facie plausible view on the epistemic significance of higher-order evidence of error. Rather, independently compelling alternatives are subject to active debate in the literature. And, importantly, some of these alternatives share distinctive features that promise to help avoid self-defeat. So evolutionary debunkers should reject conciliationism – or premise (1) of their argument. Instead, they should explore the prospects of a view that (A) rejects independence and therefore allows first-order moral evidence into the picture, yet (B) still yields the verdict that evolutionary higher-order evidence defeats the justification of first-order moral beliefs.33 In this section, I shall briefly motivate both prongs; the next section then introduces Thomas Kelly's (2010) total evidence view as a candidate framework that clearly satisfies (A). The final section will then focus on whether evolutionary debunkers could make good on (B) within Kelly's framework, discussing evolutionary evidence of moral unreliability (Section 6.1) before delving into evolutionary evidence of moral peer disagreement (Section 6.2).34

To recap, three claims constitute the evolutionary debunking argument: conciliationism about higher-order evidence of error (including independence), the claim that evolutionary considerations provide the moral realist with such evidence, and the sceptical conclusion. To avoid self-defeat, evolutionary debunkers must give up one of them. They cannot give up the last two. Without the distinctive claim about the evidential import of evolution (or, more generally, aetiology), their argument would cease to be an evolutionary (or, more generally, aetiological) one. And without the sceptical conclusion, their argument wouldn't be a debunking one. But conciliationism and especially independence appear to be the culprits: only if the moral realist is forced to assess the evolutionary evidence independently of all of their first-order moral evidence does the argument threaten to defeat itself. Therefore, evolutionary debunkers should reject conciliationism due to its commitment to independence, while holding onto the claim that evolution provides higher-order evidence of error as well as the sceptical conclusion.

But giving up on conciliationism – or premise (1) – won't be enough, of course. Rather, evolutionary debunkers also need a replacement: a view that licenses the inference from the claim that evolutionary considerations amount to higher-order evidence of error to the sceptical conclusion. More precisely, evolutionary debunkers must defend (or, at least, sketch) a view with two distinctive features: it must (A) reject independence and thus allow first-order (moral) evidence into the picture, yet it must (B) still yield the verdict that (evolutionary) higher-order evidence defeats the justification of first-order (moral) beliefs. To successfully deal with the self-defeat objection, the view appealed to in premise (1) must satisfy both requirements. Are such views available?

5 The Total Evidence View

Fortunately for the evolutionary debunker, there are independently plausible views on peer disagreement that promise to fit the bill.
For instance, take the total evidence view, as developed by Thomas Kelly (2010, 135–150).35 This view states that your total evidence determines whether your belief that p is justified. The total evidence includes both your first-order evidence (in support of p) and the second-order evidence of peer disagreement (regarding p). Sometimes, the total evidence justifies my original belief and thus makes it reasonable for me to stick to my guns. Suppose that I disagree once again with Julian about a football matter. While I insist that there was no delay of game, Julian claims that there was and that, rather absurdly, the goalkeeper, who isn't on a yellow card, should be immediately sent off for it.36 Here the total evidence, consisting of my perceptual experience and knowledge of the football rulebook, justifies my belief that there was no delay of game. But on other occasions, the total evidence may defeat the justification of my original belief and therefore make it reasonable for me to change my outlook. Suppose that I am a sceptic about other minds, on the basis of the relevant class of arguments. But then I find out that an overwhelming majority of professional philosophers disagree with me, having independently arrived at their view.37 Here the total evidence, consisting of the arguments, my considered judgment and all the considered judgments of my peers, seems to justify the common-sense view that there are indeed other minds. So it would be reasonable for me to conciliate or even to accept the common-sense view.38

The total evidence view clearly satisfies the first requirement, (A), from earlier. Unlike conciliationism, this view rejects independence: it affords our first-order evidence, beliefs and reasoning a role in determining whether a given belief is justified. In our context, that means that the robust moral realist no longer needs to set aside all of their first-order moral evidence, beliefs and reasoning when assessing evolutionary higher-order evidence of error. And since they are not forced to do that, they don't risk setting aside too much to know whether they are mistaken. Therefore, the total evidence view looks like a suitable candidate for premise (1) of the debunkers' argument.

But what about the second requirement, (B)? Does the view yield the verdict that evolutionary higher-order evidence of error defeats the justification of first-order moral beliefs? Here, matters get more complicated. To establish that, evolutionary debunkers would need to argue that the total evidence available to the robust moral realist, consisting of the moral realist's first-order moral evidence and the evolutionary higher-order evidence of error, indeed defeats the justification of the moral realist's moral beliefs. Can they do so?
In what follows, I shall answer that question negatively. Upon reflection, the total evidence available to the robust moral realist does not defeat the justification of their moral beliefs. By implication, evolutionary debunkers cannot satisfy the second requirement, (B), of my strategy from earlier. And since they cannot do that, their argument defeats itself. However, as I shall argue, the exact reason why evolutionary debunkers fail to establish defeat depends on the kind of higher-order evidence that evolutionary considerations allegedly provide. As I shall discuss in Section 6.1, debunkers such as Street (2006), for whom evolutionary considerations supply evidence of moral unreliability, struggle with evidential weight. In contrast, I make the case in Section 6.2 that the evolutionary disagreement argument developed by Bogardus (2016) and Mogensen (2014, 2017) rests on a pair of inconsistent assumptions.

6 What Does the Total Evidence Defeat?

6.1 Evolution and Moral Unreliability

Suppose Street (2006) is right: evolutionary considerations provide evidence of moral unreliability. Then, the total evidence available to the moral realist consists of both that evidence and our first-order moral evidence. Does that evidence defeat the justification of our moral beliefs? The answer seems to depend on the weight of the respective bodies of evidence. If the evolutionary evidence of moral unreliability outweighed our first-order moral evidence, our moral beliefs would no longer be justified. If it didn't, our moral beliefs would remain undefeated. But how should we decide which of those two conditionals holds?

To start with, we might appeal to brute intuition. After all, we seem to have widely shared intuitions about whether the first-order evidence or the higher-order evidence has greater weight in some extreme cases. For instance, it is intuitive to think that the higher-order evidence of error outweighs our first-order evidence in Hypoxia. Conversely, it is natural to think that our first-order perceptual evidence outweighs the higher-order evidence of error gained by finding out that a popular headache remedy caused hallucinations in one in a million test subjects. And there is some support for this line of thinking in the literature on peer disagreement. For instance, Kelly (2010, 202) seems to advocate a form of epistemic particularism about the matter. When considering the question of whether the first-order evidence or the higher-order evidence plays a greater role in fixing the reasonability of what to believe, he writes that "the question of which counts for more – peer opinion, or the evidence on which the peers base their opinion? – is not, I think, a good question when it is posed at such a high level of abstraction." Rather, we have to examine cases and our intuitive verdicts about them individually. Similarly, Errol Lord (2014) sketches a test for evidential weight in the context of peer disagreement that seems driven by brute intuition. To determine whether one's original reasons are strong or weighty enough to ground a permission to dismiss peer disagreement, we should ask, "do [those original reasons] put you in a position to think your peer is crazy or otherwise epistemically suspect?" (Lord 2014, 376, fn 15, emphasis added). And perhaps his proposal can be generalized to evidence of unreliability: if your first-order evidence makes the source of the evidence of unreliability seem epistemically suspect, the former outweighs the latter.

But I don't think that an appeal to brute intuition will help the debunker. Unlike our widely shared intuitions about evidential weight in extreme cases, our intuitions about the weight of evolutionary evidence of moral unreliability vis-à-vis our first-order moral evidence strike me as much more moot. To see that, it suffices to point to the persistent disagreement in the literature about the epistemic import of evolutionary biology for robustly moral belief – which seems at least partially fuelled by conflicting intuitions about evidential weight. While evolutionary debunkers share the intuition that evolutionary evidence is weightier than moral evidence and therefore undermines robustly moral beliefs, robustly moral realists tend to lack the intuition, or they explain it away as irrelevant. If that is correct, we have reached a dialectical stalemate, without making any progress in the matter at hand. So an appeal to brute intuition won't help evolutionary debunkers such as Street (2006) to establish the claim that evolutionary evidence of moral unreliability outweighs our first-order moral evidence.

Instead, the evolutionary debunker might look towards a more theoretical (or formal) notion of evidential weight. Unfortunately, there is remarkably little literature on how to measure and compare the weight of evidence and no literature at all on how to apply such ideas to evolutionary debunking. Still, there are some suggestions worth exploring. For instance, take James Joyce (2005, 162–165): for him, the balance of total evidence favours whatever ordered sequence of propositions contains a higher estimated number of truths.39 Following his proposal, we estimate whether the first-order moral evidence or the evolutionary evidence of moral unreliability contains a higher number of truths. If we can reasonably expect the first-order moral evidence to feature more truths, the balance of total evidence favours the moral propositions making up that evidence. That would be bad news for evolutionary debunkers. Conversely, if we can reasonably expect the higher-order evidence of error to feature more truths, the balance of total evidence favours the propositions making up the higher-order evidence of error. That would be good news for debunkers such as Street.
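To make the proposal concrete, here is a minimal formal sketch – my illustrative gloss, not a formula Joyce himself states – which assumes that "estimated number of truths" can be read as the expected number of truths under a credence function Pr. For a body of evidence given by the ordered sequence of propositions E = ⟨p_1, . . . , p_n⟩, linearity of expectation yields

\[ \mathbb{E}[\text{number of truths in } E] \;=\; \sum_{i=1}^{n} \Pr(p_i). \]

On this gloss, the balance of total evidence favours the first-order moral evidence M = ⟨m_1, . . . , m_j⟩ over the higher-order evidence of error H = ⟨h_1, . . . , h_k⟩ just in case the sum of the Pr(m_i) exceeds the sum of the Pr(h_l). Putting the proposal this way also makes the worries raised next easy to state: the credences Pr(m_i) are not fixed independently of the h_l, so the two sums cannot simply be computed side by side.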

However, I doubt that evolutionary debunkers like Street (2006) can avail themselves readily of the resources that Joyce's (2005) framework offers. First, we might worry generally that this probabilistic notion of evidential weight cannot successfully model interactions between bodies of evidence at different levels. After all, when weighing evidence from different levels, the estimates won't be independent. Rather, the estimate of how many truths the first-order evidence contains will depend to some extent on our estimate of how many truths the higher-order evidence (of error) contains. That might complicate the formation of reasonable expectations. Further, that dependency might be especially pertinent in the context of evolutionary debunking. As Kevin Brosnan (2011, 55) points out, it is impossible to estimate how likely it is that moral truths obtain before evolutionary influence. After all, (almost) everybody accepts that our moral beliefs evolved and that we thus cannot assess their truth before or independently of evolution. But if that is correct, how can we estimate how many truths the first-order moral evidence contains?

Second, the specific application of Joyce's account to evolutionary debunking might also prove problematic in another way. In particular, we might find it hard to estimate the number of basic moral truths. After all, there is plenty of disagreement about what they are and which of them are properly basic, even among robust moral realists. So how are we supposed to count the number of basic robustly moral truths? Given these significant issues, evolutionary debunkers most likely cannot borrow Joyce's way of measuring the balance of total evidence. But if they cannot do that, they once again won't be able to make good on the claim that the evolutionary evidence of moral unreliability outweighs our first-order moral evidence.

In short, debunkers who construe evolutionary considerations as evidence of moral unreliability struggle at the first hurdle. To meet the second requirement or (B) of the strategy outlined in Section 4 and therefore avoid self-defeat, they must establish that the evolutionary evidence of moral unreliability outweighs our first-order moral evidence. But doing that, in turn, requires a plausible theoretical notion of evidential weight. Even though it might not be impossible to find such a notion, our discussion (and the dearth of literature on the subject) suggests that these evolutionary debunkers have their work cut out for them.40 It seems fair to conclude that any evolutionary debunking argument based on evidence of moral unreliability must defeat itself.

6.2 Evolution and Moral Disagreement

Suppose Bogardus (2016) and Mogensen (2014, 2017) are right: evolutionary considerations amount to evidence of possible moral peer disagreement. Does the total evidence available to the moral realist in that case defeat the justification of our moral beliefs? To answer that question affirmatively, evolutionary debunkers may proceed in two steps. First, they remind us of Kelly's (2010) diagnosis of Offside Call. There, Kelly argues, the total evidence available to me defeats the justification of my belief that the scorer was onside.41 Why is that? Initially, Julian and I have different first-order perceptual evidence: while it appears to me that the forward was onside when he started his run, the opposite seems to be the case to Julian. This evidence justifies our initial perceptual beliefs, respectively. But once we become aware of our perceptual disagreement, we pool our first-order evidence and add the higher-order evidence of peer disagreement. The resulting total evidence supports neither my belief that the goal was scored onside nor Julian's belief to the contrary. Instead, we have a situation of evidential symmetry between me and Julian. Therefore, neither his nor my initial belief is justified anymore. That is how the total evidence view diagnoses cases with the structure of Offside Call.

In a second step, evolutionary debunkers can argue that possible moral peer disagreements with your evolutionary counterparts share the structure of Offside Call. Initially, we also have two different bodies of first-order evidence: while you have the moral intuition that you have special obligations to your family members, your evolutionary counterpart has the contrary moral intuition. Again, this evidence justifies your respective initial moral beliefs. But once you discover the moral peer disagreement and pool your first-order moral evidence, the total evidence – just as earlier – supports neither your belief that you have special moral obligations to your family members nor your evolutionary counterpart's belief to the contrary. Therefore, the justification of both your and your counterpart's initial moral beliefs has been defeated. And of course, the same reasoning could be employed for any other basic moral intuition that seems prima facie compelling and could be subject to disagreement based on divergent evolutionary histories.42

By following these two steps, evolutionary debunkers who construe evolutionary considerations as evidence of disagreement could argue that the total evidence available to the robust moral realist defeats the justification of their moral beliefs. Achieving that – without even requiring a theoretical notion of evidential weight – would be no mean feat. Rather, it would show that they make good on the second requirement or (B) of my strategy outlined in Section 4: the total evidence view yields the verdict that evolutionary higher-order evidence defeats the justification of first-order moral beliefs. As a result, these evolutionary debunkers would avoid self-defeat.

To make this move work, however, evolutionary considerations would have to amount to evidence of possible moral peer disagreement.43 (We cannot merely suppose that they do, as we have done up to this point.) In more detail, our evolutionary counterparts must not just disagree with us about fundamental moral matters but also count as our epistemic and – in a sense to be explained presently – metaphysical peers. However, I doubt that entities can simultaneously satisfy both of those criteria. But if they cannot, evolutionary considerations cannot plausibly amount to evidence of moral peer disagreement. That, in turn, means that the evolutionary debunking argument from moral peer disagreement doesn't get off the ground. I develop a worry along these lines in Wittwer (2018, ch. 3, 55–59), and Michael Klenk (2018) offers a paper-length defence of a similar criticism.44


To see how this worry arises, note that any evolutionary debunking argument based on moral peer disagreement rests on two crucial assumptions. First, it assumes that our counterparts have an alternative evolutionary history and that this explains why they hold radically different moral beliefs or disagree with us about fundamental moral matters. To illustrate, consider an example introduced in Section 2. Suppose robust moral realists believe that incest is morally wrong and that their evolutionary counterparts morally disagree. What explains their disagreement is the difference in evolutionary past: while incest hampered our ancestors' reproductive fitness, it did not – by stipulation – do so for our evolutionary counterparts.45 As a result, they don't believe that incest is morally impermissible. Importantly, the assumption that differences in evolutionary trajectory explain differences in fundamental moral outlook is indispensable to the argument from earlier: without it, the argument would cease to be distinctively evolutionary. Instead, it would be only a generic argument from moral peer disagreement against robust moral realism.

Second, any evolutionary debunking argument based on moral peer disagreement presupposes that our evolutionary counterparts count as our peers. More precisely, those counterparts must be comparable to us both epistemically and metaphysically. Epistemically, they must be as well placed as we are: their evidence must be similarly strong and their intellectual abilities comparable.46 If their evidence were lacking or significantly impoverished and/or their reasoning capacities were impaired, we couldn't consider them our epistemic equals. That they are our epistemic peers is important: if our evolutionary counterparts weren't epistemic peers, their disagreement with us about fundamental moral matters wouldn't defeat the justification of our moral beliefs. Metaphysically, those counterparts need to partake in our robust moral reality: the moral facts that hold in their world must significantly overlap with those holding in the actual world. Or, more poetically, we both must be bound by most of the same moral laws (and seeking to uncover them). Why is that? Because we couldn't have meaningful disagreement otherwise, let alone disagreement between epistemic peers. Rather, we would be talking past each other – just like two people "disagreeing" over which ice cream flavour tastes best.47 So the argument must assume that our evolutionary counterparts are peers, both epistemically and metaphysically.

But, upon reflection, those two assumptions are inconsistent or, at least, stand in serious tension. Suppose that the first assumption is true: our counterparts radically disagree with us about moral matters because of their radically different evolutionary history. That seems to cast doubt on their putative status as both epistemic and metaphysical peers. To begin with, if their evolved moral beliefs are radically different from ours, yet still rationally formed, they probably rest on different bodies of moral evidence, including moral intuitions and morally relevant non-moral facts.48 But if their evidence so significantly differs from ours that it becomes unintelligible to us and their moral outlook strikes us as completely alien, it seems difficult to count them as our epistemic peers.49 So our evolutionary counterparts don't seem to be as well placed as we are, epistemically speaking.

Further, the first assumption also undermines their putative status as metaphysical peers. Suppose that our counterparts hold radically different moral beliefs, due to alternative evolutionary pressures. If their moral beliefs are radically different, however, it seems plausible that other, morally relevant, non-doxastic features of their psychology would differ from ours as well. For instance, they might not be able to experience pain (or only certain kinds of pain), or they might lack the emotion of romantic love. But if our evolutionary counterparts experience and navigate the (moral) world so differently, why think that the same robust moral facts hold for them as for us? After all, many moral facts depend on the morally relevant, non-doxastic features of our psychologies (as even robust moral realists would admit).50 For instance, take the fact that needlessly inflicting pain on others is morally wrong. That fact holds only if there are subjects capable of experiencing pain. If our evolutionary counterparts couldn't experience pain, they would not be bound by that moral fact. Similarly, if they were incapable of experiencing romantic love, many moral laws that specifically govern romantic interpersonal relationships wouldn't apply to them. For instance, it might not be wrong for our evolutionary counterparts to "cheat" on each other. So it seems plausible that if our evolutionary counterparts had different moral beliefs and thus different non-doxastic moral psychologies, the robustly moral facts that hold for them would differ as well. That, though, means that they cannot be our peers, metaphysically speaking. After all, those counterparts do not partake in our robust moral reality (or they aren't bound by most of the same moral laws).

In sum, it appears that the truth of the first assumption, namely that our counterparts have evolved to radically morally disagree with us, undermines the second assumption, namely that our counterparts are our peers, both epistemically and metaphysically. As a result, those two assumptions are inconsistent or, at least, stand in serious tension. But, importantly, both assumptions are indispensable to any evolutionary debunking argument based on moral peer disagreement. We cannot just relax them.51 So evolutionary debunkers such as Bogardus (2016) and Mogensen (2014, 2017) face a serious worry: it does not seem plausible that evolutionary considerations amount to evidence of moral peer disagreement. That, in turn, means that their argument doesn't get off the ground.

Where does that leave us? In this subsection, I sketched how evolutionary debunkers such as Bogardus (2016) and Mogensen (2014, 2017) can avail themselves of the resources of the total evidence view to get around the self-defeat objection. Their key move consists in establishing that moral disagreements with evolutionary counterparts are structurally similar to Offside Call and then adopting Kelly's (2010) diagnosis of such standard cases. However, that works only if we can plausibly suppose that evolutionary considerations supply evidence of moral peer disagreement. That supposition, though, strikes me as implausible. Upon reflection, we cannot simultaneously assume that our counterparts have evolved to radically morally disagree with us and count them as our peers, both epistemically and metaphysically. So their key move falters, and by implication, they cannot make good on the second part or (B) of my strategy. But if they cannot do that, the evolutionary debunking argument from moral peer disagreement cannot avoid self-defeat.

7 Conclusion

In this chapter, I closely examined an evolutionary debunking argument on the basis of the epistemic principle that higher-order evidence of error defeats justification. More precisely, I argued that any such argument ultimately cannot avert self-defeat. After presenting both the argument and the worry in more depth, my argument proceeded in two steps. First, I sketched out an initially promising strategy against self-defeat: evolutionary debunkers should reject conciliationism and instead explore Kelly's (2010) total evidence view as their background view on the epistemic significance of higher-order evidence of error. However, as I argued in the second step, both versions of the evolutionary debunking argument from higher-order evidence of error fail to take advantage of that switch. More specifically, both versions fall short of establishing that the total evidence available to the robust moral realist defeats the justification of their moral beliefs. Debunkers such as Street (2006), who construe evolutionary considerations as evidence of moral unreliability, lack a plausible theoretical notion of evidential weight. In contrast, the evolutionary disagreement argument developed by Bogardus (2016) and Mogensen (2014, 2017) rests on two inconsistent assumptions about the epistemic (and metaphysical) credentials of our evolutionary counterparts.

My overall argument has two interesting implications. First, it implies that the prospects of developing an evolutionary debunking argument by appeal to higher-order evidence of error look dim. After all, such an argument runs into trouble, irrespective of which more general (background) view about the epistemic significance of higher-order evidence we endorse. If we accept conciliationism (and thus independence), the evolutionary debunking argument from higher-order evidence of error defeats itself, as Vavova (2014) persuasively claims. In contrast, if we accept the total evidence view as the correct account of how to rationally respond to higher-order evidence, the argument doesn't really get off the ground and thus remains toothless, as my line of thinking in Sections 4–6 suggests. So evolutionary debunkers should seriously examine alternative ways of spelling out the epistemology of evolutionary debunking arguments. For instance, they might revisit Richard Joyce's (2006) thought that evolution, not mind-independent moral facts, best explains our moral beliefs.

Second, my discussion holds lessons for proponents of both conciliationism and the total evidence view. After all, both views face issues when applied to evolutionary debunking. To start with and generally speaking, conciliationism (and especially independence) seems to lead to self-defeat whenever the higher-order evidence of error in question concerns an entire domain of inquiry, as is the case with the evolutionary debunking argument from higher-order evidence outlined in Section 2. This suggests that proponents of the view must look into ways to non-arbitrarily restrict the scope of independence. If they cannot do that, their view won't be able to accommodate theoretically interesting instances of global defeat by higher-order evidence of error. In contrast, the total evidence view seems to hit an obstacle to generalization: without a plausible theoretical notion of evidential weight, the view won't be able to successfully theorize about the epistemic significance of higher-order evidence of error other than peer disagreement. So my discussion points towards areas in which both views require refinement.

Notes

1. I am grateful to Dan Baras, Marius Baumann, Joshua Blanchard, James Brown, Matthew Chrisman, David Enoch, Giada Fratantonio, Camil Golub, David Plunkett, Duncan Pritchard, Geoff Sayre-McCord, Neil Sinclair and especially Michael Klenk and Paul Silva for helpful feedback on previous drafts. I also thank audiences at the Czech Academy of Sciences in Prague, the University of Edinburgh and the University of Nottingham for their input. This paper was written while holding a Swiss National Science Foundation Doc.Mobility Fellowship and a Janggen-Poehn Foundation Scholarship.
2. See Enoch (2011), Huemer (2005), Shafer-Landau (2003) and Wedgwood (2007). Henceforth, I am concerned with this view only as a target of evolutionary debunking. All references to moral realism should be understood accordingly.
3. See Kahane (2011). My discussion concerns only epistemological, not metaphysical, evolutionary debunking arguments. (For the latter, see Das 2016.) Further, I am interested in only global, not local, evolutionary debunking arguments: arguments that call into question all moral beliefs, robustly construed. Finally, for brevity, I shall omit the qualifier "realistically construed" henceforth, trusting that any instance of moral belief is understood accordingly.
4. Vavova (2014, 2018) has contributed most to clarifying, reframing and criticizing evolutionary debunking arguments in terms of higher-order evidence of error. My discussion is heavily indebted to her framework and exclusively tackles this strand of evolutionary debunking. Other strands rely on the idea that our moral reliability requires explanation (e.g. Clarke-Doane 2012; Korman and Locke forthcoming) or the thought that evolution, not mind-independent moral facts, best explains our moral beliefs (e.g. Woods 2018). I will say a bit more in Section 3 about the problems that the former faces.


5. See Vavova's (2014) compelling reconstruction. Whether there are other plausible interpretations is an exegetical matter beyond the scope of this paper.
6. See Christensen (2007, 2009, 2011) or Elga (2007).
7. See also May (2018).
8. See Christensen (2011) and Lord (2014).
9. Cruz et al. (2011), Kyriacou (2016) and Sterpetti (2015) discuss a self-defeat worry for evolutionary debunking arguments. However, their discussion isn't set within a higher-order evidence framework and fails to engage at all with Vavova's (2014) specific objection.
10. Plausibly, the higher-order evidence of error would also need to be good, strong or weighty enough. Most common cases, including the ones presented later on, intuitively meet that threshold. For more on the issue of evidential weight in more controversial cases such as evolutionary debunking, see Section 6.1.
11. See Christensen (2010) and Lasonen-Aarnio (2014). Some higher-order evidence is evidence of epistemic success, not error (e.g. visiting your optometrist might confirm your visual reliability).
12. See Elga (2007).
13. See Lasonen-Aarnio (2014).
14. Other kinds include following incorrect epistemic rules, following correct epistemic rules incorrectly, etc. For more, see Lasonen-Aarnio (2014, 315). Evolutionary debunkers focus on the two features in the main text, though.
15. Nothing later on hinges on characterizing conciliationism in terms of justification defeat. Instead, it could be defined in other epistemic terms (e.g. rationality, reasonability, confidence).
16. See Christensen (2007), Elga (2007), Feldman (2006), Bogardus (2009) and Matheson (2015).
17. See, for example, Lasonen-Aarnio (2014) and Vavova (2018).
18. For that very reason, the argument does not collapse into an argument for radical external world skepticism. To put it in Vavova's (2014, Section 3–4) terms, the argument is committed to Good, not No Good.
19. See Matheson (2015).
20. See Christensen (2011) in the context of peer disagreement. For criticism, see Lord (2014). Both evolutionary debunkers and their critics endorse independence. See Bogardus (2016, 656) and Vavova (2014, 81).
21. For more on that implication, see Christensen (2009, 758f).
22. This is plausibly an instance of the problem of irrelevant influences: factors (e.g. upbringing, socioeconomic background, gender or evolution) that partially explain yet don't give reasons for our beliefs, thus providing higher-order evidence of error. For more, see, for example, Ballantyne (2013), Cohen (2001), Mogensen (2017), Sher (2001), Schoenfield (forthcoming, 2014), Vavova (2018) and White (2010).
23. See also Vavova (2014), who ultimately thinks that moral realists cannot recognize evolutionary evidence of moral unreliability as such. For more, see Section 3.
24. See Fraser (2014) for empirical details.
25. See Ballantyne (2013).
26. In the text, Vavova frames her objection slightly differently: moral realists cannot recognize evolutionary evidence as good evidence of error, and there are thus limits on our ability to get evidence of our own error, arising from the way such evidence works. However, her key move is denying that moral realists have a "good independent reason" (2014, 92–6) to doubt their moral beliefs. So it can be framed as an attack on independence in the context of evolutionary debunking, as I do here. I am grateful to Neil Sinclair for a helpful discussion on this point.


27. Vavova also addresses some preliminary responses to her objection. For instance, what if the evolutionary debunker extends the scope of their attack to include evaluative in addition to moral beliefs? In that case, Vavova (2014, 87–9) argues, the evolutionary debunking argument defeats itself as well: now, to respect independence, the moral realist cannot even rely on their beliefs about epistemic principles, including principles about how to evaluate evidence such as independence or conciliationism. What if the evolutionary debunker restricts the scope of their attack to deontological moral beliefs? In that case, Vavova (2014, 93–5) argues, their argument either collapses into a more ambitious form (such as the one against moral realism) – or the evolutionary story turns out to be idle, given other worries about deontology. Finally, what if the evolutionary debunker insists that their conclusion is merely dialectical (such that it establishes moral constructivism, say), not skeptical? According to Vavova (2014, 89), that wouldn't get them off the hook: even if the conclusion is dialectical, the inference of the argument must still go through. But it doesn't, if her objection is correct.
28. The self-defeat objection might be even more general, arising for analogue arguments against realist construals of other putatively a priori beliefs about mathematics, logic, modality, epistemology or religion. A proper examination of these analogue arguments, however, would go well beyond the scope of this paper.
29. For a critical overview, see Bogardus (2016).
30. Importantly, that assumption is innocuous in that context. For a different line of criticism, see also Klenk (2019), who argues that modal evolutionary debunking arguments depend on a problematic account of defeat.
31. For further discussion, see Woods (2018) and – in response – Clarke-Doane and Baras (2019).
32. I am grateful to Michael Klenk for pressing me on this point.
33. Why not reformulate independence instead of rejecting it? Because the most plausible ways of doing so strike me as highly problematic. For instance, evolutionary debunkers might first distinguish between substantive and formal assumptions about moral truth (e.g. Gert 1998; Gert and Gert 2017; Sinclair 2018) and then argue that independence tells us to set aside only substantive, yet not formal, moral assumptions when assessing higher-order evidence of error. However, it is often difficult to draw the formal/substantive distinction. Formal moral assumptions (such as motivational internalism) are often highly controversial even among moral realists, and it is unclear whether formal assumptions provide sufficient detail to assess whether moral realists are mistaken about morality.
34. In what follows, I shall assume that epistemic akrasia is rationally impossible (and level splitting is therefore not plausible). In other words, it can never be rational to have high confidence in something like "P, but my evidence doesn't support P." Even though non-trivial, that assumption strikes me as very plausible. For a defense, see Horowitz (2014). For further critical discussion, see Lasonen-Aarnio (2014) and Silva (2017). Thanks to Paul Silva for alerting me to this.
35. Another example would be Jennifer Lackey's (2010) justificationism. However, those two views ultimately converge, according to Matheson (2015).
36. In football, a delay of game is, at best, worthy of a yellow card. Players get awarded yellow cards for bad fouls that don't warrant immediate ejection. Two yellow cards, though, equal a red card, which signifies immediate ejection. This case is modelled after so-called extreme restaurant cases (see Christensen 2007, 199–203; Elga 2007, 490f; Kelly 2010, 150f).
37. See Kelly (2010, 146). To what extent the assumption of independent convergence holds is, of course, a psychological and sociological, not philosophical, matter. But for more on its epistemic importance, see Kelly (2010, 146–9) and Goldman (2001, 99–104).
38. Following Kelly (2010), my presentation of the view focuses on peer disagreement here. But at least initially, the view seems to plausibly generalize to other kinds of higher-order evidence of error. For more on whether that impression withstands scrutiny, see Section 6.1. For some common objections, see Kelly (2010, 150–67).
39. Joyce (2005) distinguishes between the balance and the weight of evidence. While the balance of evidence concerns whether a given body of evidence "points" towards one set of propositions over another, its weight corresponds to the size of a body of evidence. Given his usage, an evolutionary debunker requires a theoretical notion of evidential balance, not weight. However, since the distinction isn't relevant in our discussion, I shall continue using them interchangeably.
40. It might be unsurprising that Kelly's (2010) view requires supplementation when generalizing it to higher-order evidence of error other than peer disagreement (for which it was conceived and developed). In fact, it might even be unsurprising that any decent epistemic theory requires a notion of weight. (For an argument that any decent moral theory requires weighted notions such as normative reasons (and thus an account of their weights), see Lord and Maguire 2016, 1–8.) What is surprising, however, is that supplementing Kelly's view proves that hard.
41. Of course, the total evidence also defeats the justification of Julian's belief that the scorer was offside, as we shall see shortly.
42. For a similar line of thought, see Setiya (2012, ch. 1).
43. Some might worry that hypothetical disagreement can never be epistemically significant. But that strikes me as a red herring. For more on that, see Mogensen (2017).
44. As you will see, the worry developed in this section might actually be so powerful that it would undermine any evolutionary debunking argument from moral peer disagreement, irrespective of self-defeat. After all, it targets the assumption at its core: evolutionary considerations amount to such evidence in the first place. For the purposes of this chapter, I want to remain neutral about this implication, but Klenk (2018) draws a conclusion to that effect.
45. We can imagine and spell out the empirical details in various ways: perhaps those evolutionary counterparts reproduce differently or incest strengthens group cohesion, which, in turn, increases survival prospects and has other benefits. For the purpose of our worry, those details don't matter.
46. See Kelly (2005). For criticism, see King (2012).
47. Of course, this assumes – plausibly in my mind – a roughly subjectivist or response-dependent account of taste judgments. Further, note that most paradigmatic cases of peer disagreement discussed in the literature trivially meet the criterion of metaphysical peerhood and therefore don't spell it out. Take an offside call: it is uncontroversial that there is a mind-independent (yet perhaps not practice/institution-independent) fact of the matter about whether the striker was offside or not.
48. See Wedgwood (2007, 7).
49. This point is exacerbated within the framework of the total evidence view: we are allowed to rely on our first-order moral evidence when assessing evidence of peer disagreement. But once we do, it becomes hard to regard our radically disagreeing evolutionary counterparts as epistemic peers – whether or not they share our evidence. Thanks to Camil Golub for this point.
50. Not all, of course. For instance, it might be a fact that ecosystems have moral value. But their value wouldn't depend directly on any feature of our non-doxastic moral psychologies.


51. As rehearsed earlier, if we relax the first one, the argument ceases to be evolutionary, and if we relax the second one, it ceases to be about disagreement between peers.

References

Ballantyne, Nathan. 2013. "The Problem of Historical Variability." In Disagreement and Skepticism, edited by Diego E. Machuca, 239–58. New York, NY: Routledge.
Bogardus, Tomas. 2009. "A Vindication of the Equal-Weight View." Episteme 6 (3): 324–35. https://doi.org/10.3366/E1742360009000744.
Bogardus, Tomas. 2016. "Only All Naturalists Should Worry about Only One Evolutionary Debunking Argument." Ethics 126 (3): 636–61. https://doi.org/10.1086/684711.
Brosnan, Kevin. 2011. "Do the Evolutionary Origins of Our Moral Beliefs Undermine Moral Knowledge?" Biology and Philosophy 26 (1): 51–64. https://doi.org/10.1007/s10539-010-9235-1.
Christensen, David. 2007. "Epistemology of Disagreement: The Good News." The Philosophical Review 116 (2): 187–217.
Christensen, David. 2009. "Disagreement as Evidence: The Epistemology of Controversy." Philosophy Compass 4 (5): 756–67. https://doi.org/10.1111/j.1747-9991.2009.00237.x.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Christensen, David. 2011. "Disagreement, Question-Begging, and Epistemic Self-Criticism." Philosopher's Imprint 11 (6): 1–22.
Clarke-Doane, Justin. 2012. "Morality and Mathematics: The Evolutionary Challenge." Ethics 122 (2): 313–40. https://doi.org/10.1086/663231.
Clarke-Doane, Justin. 2015. "Justification and Explanation in Mathematics and Morality." In Oxford Studies in Metaethics. Vol. 10, edited by Russ Shafer-Landau, 80–103. Oxford: Oxford University Press.
Clarke-Doane, Justin. 2016. "Debunking and Dispensability." In Explanation in Ethics and Mathematics: Debunking and Dispensability, edited by Uri D. Leibowitz and Neil Sinclair, 23–36. Oxford: Oxford University Press.
Clarke-Doane, Justin. 2017. "What Is the Benacerraf Problem?" In Truth, Objects, Infinity: New Perspectives on the Philosophy of Paul Benacerraf, edited by Fabrice Pataut, 17–44. Dordrecht: Springer.
Clarke-Doane, Justin, and Dan Baras. 2019. "Modal Security." Philosophy and Phenomenological Research 65 (1): 87. https://doi.org/10.1111/phpr.12643.
Cohen, G.A. 2001. If You're an Egalitarian, How Come You're so Rich? Cambridge, MA: Harvard University Press.
Cruz, Helen de, Maarten Boudry, Johan de Smedt, and Stefaan Blancke. 2011. "Evolutionary Approaches to Epistemic Justification." Dialectica 65 (4): 517–35. https://doi.org/10.1111/j.1746-8361.2011.01283.x.
Das, Ramon. 2016. "Evolutionary Debunking of Morality: Epistemological or Metaphysical?" Philosophical Studies 173 (2): 417–35. https://doi.org/10.1007/s11098-015-0499-9.
Elga, Adam. 2007. "Reflection and Disagreement." Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Enoch, David. 2011. Taking Morality Seriously: A Defense of Robust Realism. Oxford: Oxford University Press.
Feldman, Richard. 2006. "Epistemological Puzzles about Disagreement." In Epistemology Futures, edited by Stephen C. Hetherington, 216–36. Oxford: Oxford University Press.
Fraser, Ben. 2014. "Evolutionary Debunking Arguments and the Reliability of Moral Cognition." Philosophical Studies 168 (2): 457–73.
Gert, Bernard. 1998. Morality: Its Nature and Justification. Oxford: Oxford University Press.
Gert, Bernard, and Joshua Gert. 2017. "The Definition of Morality." In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/archives/fall2017/entries/morality-definition/.
Goldman, Alvin I. 2001. "Experts: Which Ones Should You Trust?" Philosophy and Phenomenological Research 63 (1): 85–110. https://doi.org/10.2307/3071090.
Horowitz, Sophie. 2014. "Epistemic Akrasia." Noûs 48 (4): 718–44. https://doi.org/10.1111/nous.12026.
Huemer, Michael. 2005. Ethical Intuitionism. Basingstoke: Palgrave Macmillan.
Joyce, James M. 2005. "How Probabilities Reflect Evidence." Philosophical Perspectives 19 (1): 153–78. https://doi.org/10.1111/j.1520-8583.2005.00058.x.
Joyce, Richard. 2006. The Evolution of Morality. Life and Mind. Cambridge, MA: MIT Press.
Kahane, Guy. 2011. "Evolutionary Debunking Arguments." Noûs 45 (1): 103–25.
Kelly, Thomas. 2005. "The Epistemic Significance of Disagreement." In Oxford Studies in Epistemology. Vol. 1, edited by Tamar S. Gendler and John P. Hawthorne, 167–96. Oxford: Oxford University Press.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Disagreement, edited by Richard Feldman and Ted A. Warfield, 111–74. Oxford: Oxford University Press.
King, Nathan L. 2012. "Disagreement: What's the Problem? Or a Good Peer Is Hard to Find." Philosophy and Phenomenological Research 85 (2): 249–72. https://doi.org/10.1111/j.1933-1592.2010.00441.x.
Klenk, Michael. 2018. "Evolution and Moral Disagreement." Journal of Ethics and Social Philosophy 14 (2): 112–42. https://doi.org/10.26556/jesp.v14i2.476.
Klenk, Michael. 2019. "Objectivist Conditions for Defeat and Evolutionary Debunking Arguments." Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Korman, Daniel Z., and Dustin Locke. forthcoming. "Against Minimalist Responses to Moral Debunking Arguments." In Oxford Studies in Metaethics. Vol. 15, edited by Russ Shafer-Landau.
Kyriacou, Christos. 2016. "Are Evolutionary Debunking Arguments Self-Debunking?" Philosophia 44 (4): 1351–66.
Lackey, Jennifer. 2010. "A Justificationist View of Disagreement's Epistemic Significance." In Social Epistemology, edited by Adrian Haddock, Alan Millar, and Duncan Pritchard, 298–325. Oxford: Oxford University Press.
Lasonen-Aarnio, Maria. 2014. "Higher-Order Evidence and the Limits of Defeat." Philosophy and Phenomenological Research 88 (2): 314–45.
Lord, Errol. 2014. "From Independence to Conciliationism: An Obituary." Australasian Journal of Philosophy 92 (2): 365–77. https://doi.org/10.1080/00048402.2013.829506.
Lord, Errol, and Barry Maguire, eds. 2016. Weighing Reasons. Oxford: Oxford University Press.
Matheson, Jonathan. 2015. "Disagreement and Epistemic Peers." Oxford Handbooks Online. https://doi.org/10.1093/oxfordhb/9780199935314.013.13.
May, Joshua. 2018. Regard for Reason in the Moral Mind. Oxford: Oxford University Press.
Mogensen, Andreas L. 2014. "Evolutionary Debunking Arguments in Ethics." PhD thesis, All Souls College, Oxford University. http://andreasmogensen.com/wp-content/uploads/2014/10/COMPLETE-Evolutionary-debunking-arguments-in-ethics.pdf
Mogensen, Andreas L. 2017. "Disagreements in Moral Intuition as Defeaters." The Philosophical Quarterly 67 (267): 282–302.
Schoenfield, Miriam. forthcoming. "Meditations on Beliefs Formed Arbitrarily." In Oxford Studies in Epistemology, edited by Tamar S. Gendler and John Hawthorne.
Schoenfield, Miriam. 2014. "Permission to Believe: Why Permissivism Is True and What It Tells Us about Irrelevant Influences on Belief." Noûs 48 (2): 193–218. https://doi.org/10.1111/nous.12006.
Setiya, Kieran. 2012. Knowing Right from Wrong. Oxford: Oxford University Press.
Shafer-Landau, Russ. 2003. Moral Realism: A Defence. Oxford: Oxford University Press. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10266702.
Sher, George. 2001. "But I Could Be Wrong." Social Philosophy and Policy 18 (2): 64. https://doi.org/10.1017/S0265052500002909.
Silva, Paul. 2017. "How Doxastic Justification Helps Us Solve the Puzzle of Misleading Higher-Order Evidence." Pacific Philosophical Quarterly 98: 308–28. https://doi.org/10.1111/papq.12173.
Sinclair, Neil. 2018. "Belief-Pills and the Possibility of Moral Epistemology." In Oxford Studies in Metaethics. Vol. 13, edited by Russ Shafer-Landau, 98–122. Oxford: Oxford University Press.
Sterpetti, Fabio. 2015. "Are Evolutionary Debunking Arguments Really Self-Defeating?" Philosophia 43 (3): 877–89.
Street, Sharon. 2006. "A Darwinian Dilemma for Realist Theories of Value." Philosophical Studies 127 (1): 109–66. https://doi.org/10.1007/s11098-005-1726-6.
Vavova, Katia. 2014. "Debunking Evolutionary Debunking." In Oxford Studies in Metaethics. Vol. 9, edited by Russ Shafer-Landau, 76–101. Oxford: Oxford University Press.
Vavova, Katia. 2018. "Irrelevant Influences." Philosophy and Phenomenological Research 96 (1): 134–52. https://doi.org/10.1111/phpr.12297.
Wedgwood, Ralph. 2007. The Nature of Normativity. Oxford: Oxford University Press.
White, Roger. 2010. "You Just Believe That Because . . ." Philosophical Perspectives 24: 573–615.
Wittwer, Silvan. 2018. "Evolution and the Possibility of Moral Knowledge." PhD thesis, University of Edinburgh. https://era.ed.ac.uk/handle/1842/33281.
Woods, Jack. 2018. "Mathematics, Morality, and Self-Effacement." Noûs 52 (1): 47–68. https://doi.org/10.1111/nous.12157.

2 Moral Intuitions Between Higher-Order Evidence and Wishful Thinking

Norbert Paulo

1 Introduction

In contemporary philosophy, moral reasoning proceeds largely through the systematization of intuitions1 about moral problems or cases on all levels of generality. The idea is that moral problems or cases elicit moral intuitions and that these intuitions provide defeasible reasons to believe in their content because they are produced by a process – intuiting – that is generally truth-tracking (for discussion, see Bedke 2008). Just consider the well-known trolley scenario: in the Bystander Case, a trolley is heading toward a group of five people working on the tracks. The workers will inevitably die if the trolley proceeds on its course. The only possible alternative is to hit the switch next to you that will move the trolley onto a side track, where it will cause the death of the one person working there. Another version is the Footbridge Case, where you are standing on a footbridge spanning the tracks, in between the trolley and the five workers. The only option to save the five workers is to push a large stranger off the bridge. Their body would stop the train but they would be killed. Having the intuition that it is morally wrong to push the large person off the bridge in order to save the five workers on the tracks is a defeasible reason to believe in the truth of the judgment that it is morally wrong to push them. When we do moral philosophy along these lines, we use moral intuitions as direct, first-order evidence. We hold beliefs that are supported by evidence.

Some ethicists defend methods that work more or less exclusively with intuitions elicited by moral cases or problems. Examples include versions of moral consistency reasoning (Campbell and Kumar 2012; Paulo 2019a), of moral particularism (Dancy 2004) and of casuistry (Jonsen and Toulmin 1992). Most ethicists, however, seek to use such intuitions to justify moral principles and comprehensive normative systems. They search for principles that can explain and ground the content of particular intuitions (Kagan 2001; Bedke 2010; Kahane 2013; McMahan 2013; Climenhaga 2018). Probably the best-known method that works in this way is Rawlsian reflective equilibrium (best explained in Rawls 1974, 8; for recent reviews of the debate, see Cath 2016; Tersman 2018). To engage in the method of reflective equilibrium is to start from intuitions about certain moral problems or cases and to systematize those with moral principles. In that process of systematization, intuitions and principles are reconsidered and revised until the set of intuitions and principles is coherent. If successful, this method leads to a state of reflective equilibrium, in which the coherent set of beliefs is considered to be justified. In contrast to this "narrow" reflective equilibrium, where only intuitions about particular moral problems and cases and more general principles have to be brought into coherence, "wide" reflective equilibrium also requires coherence with a range of relevant background theories for a set of moral beliefs to be justified (Daniels 1979).

However, reflective equilibrium has been criticized for various reasons ever since the publication of Rawls's A Theory of Justice, mainly for its apparent emphasis on pretheoretical intuitions. Richard Brandt, for instance, claims that moral intuitions "are strongly affected by the particular cultural tradition which nurtured us, and would be different if we had been in a learning situation with different parents, teachers, or peers," and cautions that the method of reflective equilibrium "may be no more than a reshuffling of moral prejudices" (Rawls 1974, 21–2). Similarly, Richard Hare holds that intuitions "are the product of our upbringings" and that the "'equilibrium' [to be reached in reflective equilibrium] is one between forces which might have been generated by prejudice" (Hare 1982, 12, 40). And Peter Singer, focusing on the evolution of morality, asked:

why do we not rather make the opposite assumption, that all the particular moral judgments we intuitively make are likely to derive from discarded religious systems, from warped views of sex and bodily functions, or from customs necessary for the survival of the group in social and economic circumstances that now lie in the distant past? (Singer 1974, 516)

Recent empirical research into moral decision-making questions the reliability of intuiting in the moral realm (for an overview, see Zamzow and Nichols 2009). It has been argued that this research discredits moral epistemic reliance on moral intuitions in general (Sinnott-Armstrong 2006, 2008) and reflective equilibrium as an intuition-driven moral epistemic theory in particular (McPherson 2015; Paulo 2020). Reflective equilibrium has been a target of the first paper in experimental philosophy, in which it is described as the paradigmatic instance of "intuition-driven romanticism" in philosophy (Weinberg, Nichols, and Stich 2001), and Singer has used findings from experimental philosophy and cognitive sciences to reinforce his earlier criticism of reflective equilibrium (Singer 2005). These findings suggest, for instance, that moral intuitions vary systematically with cultural background (Tännsjö and Ahlenius 2012), gender (Buckwalter and Stich 2013) and framing (Nadelhoffer and Feltz 2008). That is, test subjects' intuitions are best explained by reference to factors such as culture, gender and framing of the moral question.2 Let's term this the unreliability objection against intuition-based approaches to moral epistemology (Paulo 2019b).

In all of the aforementioned approaches to moral epistemology, moral beliefs can be challenged in various ways. One can provide contrary first-order evidence regarding the respective belief. Just consider the trolley scenario again: in the Footbridge Case, most people have the intuition that it would be wrong to push the large stranger, although they also hold that it would be morally permissible to hit the switch in the Bystander Case. Were one to believe in the correctness of a moral principle that allows intentional killing when doing so maximizes the overall number of lives saved, this belief would be direct evidence against the belief – based on the common intuition in the Footbridge Case – that it is morally wrong to push the large person. Another way to challenge the belief works indirectly, namely by providing information about the particular intuition or about the use of intuitions in general. For instance, one might be informed that the particular moral intuition is likely affected by implicit biases, which makes it less likely that the process of intuiting tracks the truth (see Brownstein 2018). Such information should decrease confidence in the belief based on the potentially biased intuition. Similarly, learning that epistemic peers have a different intuition about the same case might warrant suspending the belief.3 The kind of information used for such indirect challenges is often called higher-order evidence. It is this kind of challenge to intuition-based moral epistemology that I am concerned with in this chapter.

In recent years, there has been a vast debate on higher-order evidence (see, for instance, the contributions to Rasmussen and Steglich-Petersen forthcoming). One interesting question concerns the difference between undercutting defeat and defeat by higher-order evidence. Both can be used to indirectly challenge beliefs, but some authors hold them to be relevantly different (for an overview, see DiPaolo 2018, sec. 2; for discussion, see also Risberg and Tersman in this volume). The following is a standard example of undercutting defeat: imagine that you believe an object to be red because it looks red. If you are now told that the object is in fact being illuminated by red trick lighting, this information undercuts your belief that the object is red. However, it does not cast any doubt on the epistemic rationality of your originally believing the object to be red. The belief-forming processes that you used were completely fine. It was only due to the particular circumstances that your holding the belief turned out to be unjustified.

Higher-order evidence, by contrast, works differently. The evidence it provides has a retrospective effect: it is evidence that you were never justified in holding the belief (Lasonen-Aarnio 2014, 317). Here is the example that David Christensen uses to illustrate the functioning of defeat by higher-order evidence (Christensen 2010, 187):

I'm asked to be a subject in an experiment. Subjects are given a drug, and then asked to draw conclusions about simple logical puzzles. The drug has been shown to degrade people's performance in just this type of task quite sharply. In fact, the 80% of people who are susceptible to the drug can understand the parameters of the puzzles clearly, but their logic-puzzle reasoning is so impaired that they almost invariably come up with the wrong answers. Interestingly, the drug leaves people feeling quite normal, and they don't notice any impairment. In fact, I'm shown videos of subjects expressing extreme confidence in the patently absurd claims they're making about puzzle questions. This sounds like fun, so I accept the offer, and, after sipping a coffee while reading the consent form, I tell them I'm ready to begin. Before giving me any pills, they give me a practice question: Suppose that all bulls are fierce and Ferdinand is not a fierce bull. Which of the following must be true? (a) Ferdinand is fierce; (b) Ferdinand is not fierce; (c) Ferdinand is a bull; (d) Ferdinand is not a bull. I become extremely confident that the answer is that only (d) must be true. But then I'm told that the coffee they gave me actually was laced with the drug. My confidence that the answer is "only (d)" drops dramatically.

One important difference between standard cases of undercutting defeat and defeat by higher-order evidence seems to be that only the latter concerns evidence about doxastic states as being the output of cognitive processes that are somehow malfunctioning and therefore flawed. Or as Christensen (whom I follow here) puts it, the differences between higher-order evidence and ordinary undercutters "flow from [higher-order evidence's] bearing on the relevant propositions about the world only via bearing on propositions about the agent's own cognitive processes" (Christensen 2010, 202). Unlike undercutting defeat, defeat by higher-order evidence works by blocking people from the means needed to reliably compute the relevant information. The beliefs formed under such circumstances can be right, of course. But that would be mere coincidence. The person never had the means necessary to rationally form the respective belief.
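To see just how dramatic the drop in confidence should be, consider a back-of-the-envelope calculation – the 80% susceptibility rate comes from Christensen's vignette, while the other numbers are merely illustrative assumptions of mine. Writing U for the event that my reasoning was unimpaired, learning that I took the drug sets Pr(U) = 0.2, and the law of total probability gives

\[ \Pr(\text{correct}) = \Pr(\text{correct} \mid U)\Pr(U) + \Pr(\text{correct} \mid \neg U)\Pr(\neg U) \approx 0.99 \times 0.2 + 0.05 \times 0.8 \approx 0.24. \]

On these assumed numbers, confidence in "only (d)" falls from near certainty to below one quarter, even though the first-order reasoning feels exactly as compelling as before – which is precisely the signature of defeat by higher-order evidence.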


Normal undercutting defeat – as in the red trick lighting case – is much more limited. It concerns a relevant change of information about the world, but it does not concern cognitive functions. That is, were it not for the red trick lighting, one's perception would normally be sufficient to rationally form a belief about the color of the object; other things being equal, one would have been in a position to rationally form beliefs. In other words, higher-order evidence can be understood as an untypical subcategory of undercutters. What is special about higher-order evidence is that the new information about the world concerns one's cognitive functions insofar as they are relevant to rational belief formation. All standard cases of higher-order evidence are about evidence to the effect that one was not in a position to rationally form beliefs.

Take the standard example of a medical resident who, after diagnosing a particular patient's condition, is informed that she has not slept in 36 hours. The lack of sleep is taken to make cognitive errors more likely. This evidence should decrease the confidence in the correctness of the diagnosis (Christensen 2010, 186).4 Or take another, structurally similar, standard scenario: having achieved a difficult ascent in the Himalayas, one has to abseil down a long pitch, which requires going through several steps of reasoning to calculate the length of the pitch. One then learns that one is in danger of being affected by hypoxia caused by high altitude. Hypoxia impairs one's reasoning abilities while they seem to work perfectly fine for the person affected (Lasonen-Aarnio 2014, 315).5 In all of these standard cases of higher-order evidence, someone is imagined to learn about a cognitive malfunction that they were not aware of and cannot control.

This also seems to be the kind of evidence referred to in traditional and recent criticisms of the reliance on intuitions in moral epistemology. As I said earlier, this decades-old line of criticism is aimed mainly at Rawlsian reflective equilibrium. It basically holds that intuitions merely reflect moral prejudices, and it seems to be getting support from recent empirical research into moral decision-making.

Against this background, in the following main part of this chapter, I investigate the role of higher-order evidence about the (un)reliability of the process of intuiting in the debate concerning the reliance on intuitions in moral epistemology. I distinguish between isolated higher-order evidence about an intuition in a particular case and higher-order evidence about relying on intuitions as a moral epistemic practice. I argue that the earlier-sketched unreliability objection is better understood in this latter sense and that mere "bracketing" is not an appropriate response to cases of higher-order evidence about a practice. Rather, the practice of relying on intuitions would be an instance of wishful thinking if there were no or little evidence that intuiting is generally reliable. After introducing my understanding of reliability of the process of intuiting in terms of its robustness to distorting factors, I look at some of the available evidence about the robustness of moral intuitions to irrelevant external factors and conclude, provisionally, that we have little reason to trust our moral intuitions, because they seem to go wrong in surprising ways. Higher-order evidence about relying on intuitions as a moral epistemic practice thus indicates that this practice is probably an instance of wishful thinking.

2 Information About (Un)reliability as Higher-Order Evidence in Particular Cases

The general picture of criticisms of the reliance on intuitions in moral epistemology seems to be strikingly similar to standard cases of higher-order evidence: people's moral cognition is said to be malfunctioning without them being able to recognize or control the malfunctioning. For example, when we learn that certain intuitions in the trolley cases seem to be subject to order effects (Liao et al. 2012), this information does not imply that every single moral intuition is influenced by such effects. Just as in the standard higher-order evidence scenarios, people don't know whether or not they are thus influenced in a particular situation. They learn only that there is a significant chance that they are influenced when intuiting about particular cases.

So what are rational reactions to higher-order evidence? Christensen holds that in "accounting for the higher-order evidence about the drug, I must in some sense, and to at least some extent, put aside or bracket my original reasons for my answer. In a sense, I am barred from giving a certain part of my evidence its due" (Christensen 2010, 195).6 This also seems to be a natural way to react to higher-order evidence about the unreliability of intuitions. When one gets evidence about unreliable intuitions, the intuitions that have been taken as evidence and as reasons for the respective belief should be bracketed. Unsurprisingly, this is precisely how proponents of the use of intuitions in moral epistemology do react to the unreliability challenge. They hold that one can generally rely on intuitions, but only on those that have not yet been shown to be unreliable. That is, one should put aside those (and only those) intuitions that have been shown to be unreliable by empirical research. So according to this view, intuitions can be used in moral epistemology as long as the available evidence does not provide reasons against their reliability. One way to do this is to collect and list the findings that speak against the reliability of certain intuitions so that we know which intuitions to disregard in which cases (for such an approach, see Huemer 2008, 381f; for discussion, see Paulo 2019b).

It is generally epistemically advantageous to rely on those intuitions that seem to be robust against distorting factors. One would rely less on intuitions that have already been shown to be unreliable (compared to not taking such empirical information into account). So if proponents of the use of moral intuitions were to engage with the relevant empirical literature, this would be a step forward. They would be more likely to form moral beliefs in a reliable way.7

3 The Practice of Relying on Intuitions Between Higher-Order Evidence and Wishful Thinking

However, this idea to bracket the intuitions that have been shown to be unreliable in particular cases would arguably not be enough, because the whole practice of relying on intuitions might be flawed. Imagine you have ten moral intuitions. Now a respected colleague of yours who recently became interested in experimental philosophy points you to empirical evidence suggesting that two of these intuitions are not robust to distorting factors. Imagine further that you do not have any empirical evidence about the (un)reliability of the other eight intuitions: what is it that justifies your continued reliance on these eight intuitions?

There seems to be a relevant difference between standard cases of higher-order evidence and information about the unreliability of moral intuitions when understood as a practice. In standard cases of higher-order evidence, we can rely on our normal abilities as long as we do not receive higher-order evidence about some cognitive malfunctioning: normally we are able to solve logic puzzles, doctors can diagnose patients and mountaineers can calculate the length of ropes; it is only in rare circumstances that we or they cannot do these things. It is in this sense that the kind of evidence we call higher-order evidence prevents "a belief from being epistemically rational or justified in the first place: if I make a simple calculation error or reason badly due to hypoxia [or a distorting drug, or lack of sleep], my belief may be excusable, but it is not justified" (Lasonen-Aarnio 2014, 316). Beliefs formed under the (unknown) influence of distorting factors are excusable precisely because we normally have good reasons to form beliefs in the way we did; we just happened to be in unusual circumstances. That is, the general practice of relying on the respective belief-forming mechanisms is justified because it normally works.

The practice of relying on intuitions in moral epistemology, however, might be flawed, and beliefs formed according to this practice might thus not be excusable. In other words, this practice might be a form of wishful thinking: relying on moral intuitions because one wants them to be reliable (for a discussion of wishful thinking and related phenomena, see Siegel 2017). This kind of wishful thinking is distinct from higher-order evidence in that a wishful thinker does not bracket their belief in light of higher-order evidence about the lack of robustness of the underlying intuition. If they react to higher-order evidence in this way, they no longer engage in wishful thinking when intuiting about particular cases.8 What might be an inexcusable instance of wishful thinking, though, is the practice of relying on intuitions if one lacks evidence that intuiting is generally reliable.

The aforementioned unreliability objection against intuition-based approaches to moral epistemology is perhaps best understood as referring not to the reliance on intuitions in particular cases but rather to the general moral epistemic practice of relying on intuitions. Thus understood, the unreliability objection says not only that there are rare circumstances in which people show cognitive malfunctions but that these malfunctions seem to be so common and widespread that, absent special circumstances, reliance on moral intuitions is not rational. That is, if higher-order evidence about unreliable intuitions is better understood as higher-order evidence about unreliable intuitions in the plural, then the situation of intuition-based moral epistemology is closer to cases of higher-order evidence showing that one is always under some reason-distorting drug, sleep-deprived or at high altitude than to standard cases of higher-order evidence about exceptional circumstances in which normally reliable capacities might fail.

Imagine one were to learn that one is permanently under the influence of a reason-distorting drug. To continue to generally rely on one's potentially distorted reason would be an instance of wishful thinking; one believes in one's reasoning capacities because one wishes them to work reliably. Beliefs formed under the influence of this drug might be right or wrong; but they would be inexcusable. And the same would hold for the moral epistemic practice of relying on intuitions in light of higher-order evidence about the unreliability of the process of intuiting. Continuing to generally rely on moral intuitions would be an instance of wishful thinking, even if one is willing to bracket and put aside intuitions that have been found to be unreliable in particular cases.

So when one receives higher-order evidence about the lack of robustness of a particular intuition, the question is whether having formed the corresponding moral belief as part of a moral epistemic practice that relies on moral intuitions is rational. It is rational if we have good reasons to form beliefs in that way, as a general practice, and just happened to form this particular belief under exceptional circumstances. It is not rational if we do not have good reasons to form beliefs in that way as a general practice. Since the relevant reasons are information about the reliability of the process of intuiting, the answer to this question depends on the understanding of reliability, and on the available empirical evidence. I start with my minimal understanding of the reliability of intuiting before I discuss the evidence.

4 Reliability of Moral Intuitions – A Minimal Understanding

At this point, it is important to understand what I mean by reliability, or at least what I take to be necessary for reliability. So far, I have used the term reliable both for the process of intuiting that yields particular intuitions and for these particular intuitions. From now on, I use reliability only for the process of intuiting and robustness for the particular intuitions. The idea is basically to understand the reliability of the process of intuiting in terms of the robustness of intuitions. In a nutshell, for the process of intuiting to be reliable, it must at least not be sensitive to irrelevant factors in surprising ways. That is, it must be robust to a range of, but not necessarily to all, potentially disturbing factors in predictable ways.

The problem is that it is hard, if not impossible, to investigate the process of intuiting directly. What can be investigated, though, are the particular intuitions the process leads to. So let's say that if my particular intuition I that p is sensitive to the irrelevant factor x, then this particular intuition I is not robust to x. This fact alone would not yet imply the unreliability of the process of intuiting that led to I, because x could turn out to be the only factor, or one of a few factors, that intuitions such as I are not robust to. When that is the case, the process of intuiting would be generally reliable, just not in cases of x. However, if I were robust to x, this alone would not imply that the process of intuiting that produced I is reliable, because reliability requires large-scale robustness over a range of potentially disturbing factors.

I am not interested here in where the threshold lies – that is, how many disturbing factors intuitions can fail to be robust to before the process of intuiting becomes unreliable. This would, of course, be important for a comprehensive notion of reliability of intuiting.9 But my aim here is more modest. For the purposes of this chapter, I focus on a minimal understanding of reliability. That is, I focus on elements that are necessary for the process of intuiting to be generally reliable without them being jointly sufficient for a comprehensive notion of reliability. The process of intuiting would be unreliable if it did not meet the conditions necessary for reliability; this, of course, does not exclude the possibility that the process is unreliable because it fails to meet conditions that are not captured by my minimal understanding of reliability.

So when I said that the basic idea for reliability of the process of intuiting is that it must at least not be sensitive to irrelevant factors, what I meant was robustness of particular intuitions to a range of potentially disturbing factors. I also said that the process must not be sensitive to irrelevant factors in surprising ways. By this, I mean that minimal reliability requires predictability: we should generally be aware of the factors that the process of intuiting is inappropriately sensitive to. That is, we need not always be aware of the presence of such a factor in a particular situation, but we should have a general understanding of the disturbing nature of these factors.
When it comes to the reliability of visual perception, we have the general knowledge that the stick under water just appears to be bent. Our observation is thus not robust in such situations. Nonetheless, observation is generally reliable because there are only a few disturbing factors that it is inappropriately sensitive to, and we have a good general understanding of these factors: for example, observation is not robust when it is too dark, when one is intoxicated or happens to be colorblind.10 Similarly, pocket compasses are generally reliable in pointing north, but they will predictably fail to do so near magnets. We generally know that, even though we might not be aware of a nearby magnet as a disturbing factor.

In the moral realm, one can distinguish between factors that are internal to a moral case and factors that are external (Königs 2019). Internal factors are those that actually make up the relevant scenario. In the trolley cases, for example, internal factors include the number of expected casualties, the factual (though not necessarily moral) difference between intentionally killing someone and foreseeing their death as an unintended side effect, between pushing a person off a bridge and hitting a switch, and so on. When we say that the process of intuiting is reliable in the moral realm, we at least expect it to respond to such internal factors. However, sometimes the moral intuitions that are the output of that process seem to vary with external factors. External factors can come in the form of cognitive biases and in the form of demographic factors. External factors as cognitive biases involve intuitions as being "unduly influenced by information present/salient at the moment our judgments are formed" (Wright 2016); they are due to the way the scenario is presented. Such cognitive biases include the framing of the moral question, the order of presentation, the cleanliness of the choice environment and so on. Demographic factors include gender, age, culture, socioeconomic status and so on. These demographic factors are deeply internalized and affect one's worldview, understanding of concepts and, perhaps, moral intuitions. It is largely such demographic factors that the early critics of reflective equilibrium had in mind when talking about prejudices.

So when I take the fact that I have the intuition in the Footbridge Case that it is right to push the large person off the bridge as reliable evidence for the truth of the content of that intuition, then the intuition should at least respond to internal factors of the trolley scenarios and not be sensitive to external factors in surprising ways.11 That is, my intuiting can be regarded as generally reliable when it responds to the factors that make up the case and does not merely reflect the fact that I am male, that I am sitting at a clean desk or that I answered the switch scenario before I answered the Footbridge Case.

To sum up, the minimal understanding of the reliability of the process of intuiting requires robustness to a range of potentially disturbing factors, our general awareness of the factors the process of intuiting is inappropriately sensitive to and the process responding to internal factors.
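
This minimal understanding can be displayed schematically. The notation below is merely an illustrative shorthand of my own, not an established formalism: R(I, x) abbreviates "intuitions of type I are robust to factor x," E is the set of external (morally irrelevant) factors, D the subset of disturbing factors and N the set of factors internal to a case. On this gloss, the process of intuiting is minimally reliable only if:

\[
\begin{aligned}
\text{(i)}\quad & R(I,x) \text{ holds for a wide range of } x \in E && \text{(robustness)}\\
\text{(ii)}\quad & D = \{\, x \in E : \neg R(I,x) \,\} \text{ is generally known} && \text{(predictability)}\\
\text{(iii)}\quad & \text{intuitions of type } I \text{ covary with the factors in } N && \text{(responsiveness)}
\end{aligned}
\]

Nothing in this gloss implies that (i)–(iii) are jointly sufficient; as stated earlier, they are merely necessary conditions on a minimal understanding of reliability.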

5 Epistemic Profile and Reasons for the Reliability of Intuiting

Jonathan Weinberg offered a particularly helpful formulation of what it is that proponents of intuition-based methods in moral epistemology would have to provide to warrant relying on intuitions in moral epistemology as a practice. Weinberg complains that although there is a considerable literature about the nature of intuitions, there is surprisingly little literature about "intuiting" as a philosophical method. He believes that a proper philosophical method of intuiting would include what he calls an epistemic profile: "an account not just of its baseline reliability, but also of the particular errors it may be prone to" (Weinberg 2016, 295). This is pretty much what I tried to get at with my understanding of the reliability of intuiting in terms of robustness. I said that when a moral intuition is to be reliable evidence for the truth of its content, the process that led to the intuition should at least respond to internal factors of the case and not be sensitive to external factors in surprising ways. That is, the baseline reliability of intuiting lies in its general responsiveness to the right sort of factors (i.e. the factors internal to the case) and in our general understanding of when it is not robust to external factors.

Consider again the analogy between non-moral observation reports and intuitions in the moral realm. Relying on observation is justified because we have a good understanding of its epistemic profile. Its baseline reliability is evidenced by the virtual absence of disagreement about observations, together with a general scientific understanding of the working of our visual perception. At the same time, we are aware of a host of "particular errors" our visual perception is prone to. I already mentioned contingent conditions such as darkness and color-blindness, as well as the appearance of the stick under water as being bent. Here are two more examples: what do you "see" when sitting at the back of an airplane climbing toward the sky? When you look straight ahead, your visual perception will be that the top of the plane is higher than your eye level; this is of course correct, but it is something you do not actually see, because you just look straight ahead. You observe something you don't actually see, because the brain "corrects" the actual input. That such corrections can also go wrong will be obvious to anyone who has ever sat on a train and perceived that the train started to move when it was actually the train on the other track moving in the opposite direction.

Similar to the case of non-moral observation, what proponents of intuition-based methods in moral epistemology would need to do is point to relevant information about the general reliability of intuiting in the moral realm and about the conditions under which intuitions are not robust to distorting factors. In other words, intuitions should be used only when relevant evidence provides reasons for the reliability of the process of intuiting.12

What we need, then, is information about the general reliability of the process of intuiting. However, given how I introduced the notions earlier, reliability, unlike robustness, cannot be investigated directly. When an intuition is found to be robust to one manipulation, this information alone does not imply intuiting's reliability. After all, it can fail to be robust to other manipulations.
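
Put schematically, in the illustrative notation introduced above, neither direction of inference goes through:

\[
\begin{aligned}
R(I,x_1) \;&\not\Rightarrow\; \mathrm{Rel}(\text{intuiting}) && \text{(robustness to one factor does not establish reliability)}\\
\neg R(I,x_1) \;&\not\Rightarrow\; \neg\mathrm{Rel}(\text{intuiting}) && \text{($x_1$ may be one of only a few disturbing factors)}
\end{aligned}
\]

Reliability requires large-scale robustness across a range of factors, so the relevant evidence has to be gathered piecemeal, manipulation by manipulation.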

6 Available Evidence

So far, my inquiry has been entirely hypothetical. I was concerned with the question of what would need to be the case for the practice of relying on intuitions in moral epistemology to be justified. I argued that it would not be enough to collect and then bracket evidence that speaks against the robustness of particular intuitions. Rather, we would need positive evidence for the general reliability of the process of intuiting. Such positive evidence, I suppose, would require an active research agenda for an extensive and systematic empirical inquiry into moral decision-making. To determine intuiting's reliability, one will at least have to investigate its robustness in the context of the most likely errors.

As far as I know, the extensive and systematic research necessary to develop an epistemic profile for the reliance on intuitions as a practice has not yet been carried out. A meaningful epistemic profile of intuition-based methods in moral epistemology would require extensive empirical information not yet available. But let us at least have a look at some of the available evidence concerning the question whether, and how, external factors affect moral intuitions to find out where the research stands.13 Following the distinctions drawn in the section on my notion of reliability, I begin with demographic factors and discuss cognitive biases after that.

6.1 Demographic Factors

Gender: A much-discussed study by Buckwalter and Stich (2013), which found gender differences in response to plank-of-Carneades, violinist and trolley cases, among many others, failed to replicate (Adleberg, Thompson, and Nahmias 2015; Seyedsayamdost 2015). However, Petrinovich, O'Neill and Jorgensen (1993) and Zamzow and Nichols (2009) report gender differences in response to trolley cases. Fumagalli et al. (2010) and Bartels and Pizarro (2011) found that men tend more toward "characteristically" utilitarian intuitions – that is, intuitions that are more easily explained by reference to utilitarianism than by reference to alternative moral theories (paradigmatically, when one finds it permissible to cause harm to prevent greater harm). This finding gets support from a large meta-analysis (Friesdorf, Conway and Gawronski 2015). Gender effects seem to be rare in philosophy in general, but as demonstrated by the mentioned studies, they are found in ethics (Machery 2017, 63). But even when they are found, the effect sizes are rather moderate.

Age: There is little research on possible age effects on moral intuitions. Hauser et al. (2007) and Gold, Colman and Pulford (2014) report, with small to moderate effect sizes, age differences in response to versions of the trolley cases, especially concerning the loop case. Apart from that rather specific finding, age does not seem to play a big role.

Development: Somewhat relatedly, some effects found in adults are not found in children, and vice versa. But, as Knobe (2019) notes, "the most salient result of this research has been the degree to which children do show many of the effects obtained in research on adults." For instance, children have been shown to exhibit some of the surprising patterns of intuition that adults show about trolley cases (Pellizzoni, Siegal and Surian 2010) and the side-effect effect (Leslie, Knobe and Cohen 2006).

Personality: Bartels and Pizarro (2011) found that where people score on the psychopathy scale partly predicts how they judge in a range of moral cases. Gleichgerrcht and Young (2013) found that people who respond to trolley cases in a utilitarian manner have less empathy than those who don't; see also Kahane et al. (2015). Personality effects seem to occur mainly in ethics and in action theory, with moderate to large effect sizes.

Religiosity and political orientation: Banerjee, Huebner and Hauser (2010) show that religiosity and political orientation have small to moderate effects on moral intuitions in a variety of dilemmas; see also Antonenko Young, Willer and Keltner (2013). However, the effects of religion seem to be limited to sacrificial dilemmas (Machery 2017, 67).

Education and moral training: Hauser et al. (2007) found that moral intuitions do not vary with the general level of education; see also Gleichgerrcht and Young (2013). Nor do they vary with the number of philosophy courses taken or with the number of ethics books the test subjects have read (Banerjee, Huebner and Hauser 2010).

Culture: Some studies did not find variations with respect to cultural differences of the test subjects (Hauser et al. 2007; Abarbanell and Hauser 2010; Knobe 2019). But some experiments did find such culture effects. Tännsjö and Ahlenius (2012), for instance, report that American, Russian and Chinese test subjects respond systematically differently to variants of the trolley cases. Gold, Colman and Pulford (2014) report similar effects for British as compared to Chinese test subjects. Machery also reports significant differences between South Korean and American test subjects in response to experience-machine cases (Machery 2017, 58). In contrast to some of the demographic factors mentioned before, cultural background is found across most areas of philosophy, and the effect sizes are quite large (Machery 2017, 59).

6.2 Cognitive Biases

Mood: The mood of test subjects seems to affect their intuitions (Valdesolo and DeSteno 2006). They tend to react differently when they are induced with a positive mood – through watching a comedy clip, for instance – compared to when they are in a neutral mood; see also Strohminger, Lewis and Meyer (2011).

Cleanliness/Disgust: Relatedly, the feeling of disgust, induced through a filthy environment or a bad odor, is also reported to have an effect on moral intuitions (Wheatley and Haidt 2005; Schnall et al. 2008; Kelly 2011; Strohminger 2014). However, this line of research is seriously contested; at least some of the findings do not replicate (Johnson, Cheung and Donnellan 2014; Schnall 2017); and the philosophical implications of the findings are unclear (May 2014).

Order effects: We speak of an order effect when an intuition varies with the order in which two or more cases are presented. That is, there is an order effect when test subjects have different intuitions about two cases, C1 and C2, when they see C1 before C2 compared to when they see C2 before C1. Lanteri, Chelini and Rizzello (2008), Wiegmann, Okan and Nagel (2012) and Liao et al. (2012) found a number of order effects, with moderate to large effect sizes, primarily in trolley scenarios.

Framing effects: Petrinovich and O'Neill (1996) found, with large effect sizes, that test subjects react differently to trolley cases when they are formulated in terms of saving one person compared to killing five. That is, a significant proportion of test subjects' intuitions vary with the mere wording of the scenarios. This effect is larger for female test subjects (Petrinovich and O'Neill 1996). A significant framing effect has also been reported with regard to formulations that use a fictional character ("John") compared to formulations using the more personal "you" – the so-called actor-observer bias (Nadelhoffer and Feltz 2008) – as well as with regard to formulations that use prototypical "black" versus prototypical "white" names (Uhlmann et al. 2009). Also, people react differently to trolley cases when they are presented in their native language compared to a foreign language (one they understand, obviously) (Costa et al. 2014). Framing often seems to have large effects, and these effects do not seem to be limited to trolley cases.14

The reported findings are mixed. But even though I limited this cursory inquiry to robustness to external factors – that is, to a minimum condition for reliability – some points are worth noting.

First, the findings do not unambiguously support the traditional criticism of the use of intuitions in methods such as reflective equilibrium mentioned earlier. Remember that the critics believed that intuitions merely reflect moral prejudices, vary inappropriately with cultural or religious background and so on. If this were the case, we should expect significant effects of demographic factors such as gender, cultural background and religiosity. Although some effects have been found, the findings are not as significant and widespread as the early critics seem to have suspected. But neither do the findings clearly debunk the early prejudice criticism. For one thing, some effects of demographic factors have been found. Also, prejudices seem to play a disturbingly large role in framing effects.

Second, the findings do not seem to support self-confident reliance on moral intuitions as a practice. All the morally irrelevant external factors that I mentioned have been shown to have effects on moral intuitions. The effects are sometimes large and sometimes moderate; some are limited to certain cases and some apply more widely; some seem to play a role only in the moral realm and some also in other areas of philosophy. But if, as I suggested, intuiting is reliable only if it is not sensitive to irrelevant factors in surprising ways, then the findings so far suggest that intuiting is not reliable. Moral intuitions seem to be robust to some potentially disturbing factors, but not to nearly all of them, as would be necessary for the process of intuiting to be reliable.

Third, from the findings discussed, no general picture emerges of the factors that intuiting is inappropriately sensitive to. We lack a general understanding of the disturbing nature of these factors. Which of the factors tested for turn out to be disturbing and which don't seems to be far from commonsensical. In other words, moral intuitions seem to vary in surprising ways. Unless future research provides a better understanding of the nature of disturbing factors, intuiting can hardly be regarded as reliable.

7 Conclusion

So given this provisional and limited higher-order evidence, is moral epistemic reliance on intuitions mere wishful thinking? If we understand wishful thinking as the practice of relying on intuitions simply because one wants intuiting to be reliable, the answer is a hesitant "probably yes." Moral intuitions seem to be more robust to potentially distorting factors than some critics thought. Yet, as it stands, we also have little reason to trust in our moral intuitions, because they seem to go wrong in surprising ways. That is, the problem is not so much that they are not robust to every potentially disturbing factor – almost no epistemic means are that robust. The problem is that, unlike with most other epistemic means, we do not yet understand where and why intuitions are inappropriately sensitive to irrelevant factors. This prevents us from developing effective strategies to avoid mistakes.

The available evidence puts us in a situation similar to the one described earlier, in which we learn that we are permanently under the influence of a reason-distorting drug. To continue to generally rely on one's potentially distorted reason would be an instance of wishful thinking; one believes in one's reasoning capacities because one wishes them to work reliably. Beliefs formed under the influence of this drug might be right or wrong; but they would be inexcusable. Similarly, beliefs formed on the basis of a moral intuition might be right or wrong; but they would be inexcusable. We simply lack a proper understanding of the epistemic profile of intuiting. The mere bracketing of single intuitions that have been found not to be robust in particular cases would not do the trick. Bracketing would be a rational way to react to higher-order evidence about the robustness of moral intuitions only if we had positive reasons for reliance on moral intuitions. It is, of course, not too late to search for evidence supporting the reliability of intuiting. I would certainly welcome the philosophically significant, large-scale and systematic empirical research on the reliability of intuiting that would be necessary for a less provisional and limited answer to the question whether moral epistemic reliance on intuitions is mere wishful thinking.

I'd like to end on a more positive note: even if I am right, none of this supports a skeptical position concerning moral knowledge. In this chapter, I am concerned with intuition-based approaches to moral epistemology. A skeptical conclusion would be warranted only if there were no alternative moral epistemic theories that do not put epistemic value on moral intuitions or if, as Sinnott-Armstrong (2008) argues, moral reasoning could not get off the ground without reliance on intuitions. Whether there really are alternative moral epistemic theories is, surprisingly enough, not a trivial question. Some proponents of reflective equilibrium, for instance, regard it as "the only defensible method: apparent alternatives to it are illusory" (Scanlon 2002, 149). Others have argued that philosophers should continue using reflective equilibrium – no matter how much evidence against the reliability of intuiting is found – as long as there is no better alternative. Here is Jonathan Floyd's recent argument for such a position: "The ultimate defence of Rawls's method," Floyd argues, "is that unless we can construct an alternative, together with a convincing argument regarding its superiority, we should just 'keep calm and carry on'" (Floyd 2017, 378). As Floyd, Scanlon and Sinnott-Armstrong of course know, there is no shortage of alternatives to reflective equilibrium that do not rely on moral intuitions.

Habermas's idea of an ideal discourse (Habermas 1983), Dworkin's Herculean task to find the one right answer (Dworkin 1986, 2011, pts. 1 and 2) and reliance on transcendental arguments (Maagt 2017) come to mind. As I have argued elsewhere (Paulo 2019b), such alternatives cannot be reduced to reflective equilibrium, and neither is there a presumption for reflective equilibrium as the default theory of moral epistemology. The same holds, I suggest, for other intuition-based approaches to moral epistemology. So even if my arguments are sound and further empirical evidence supports doubts about the reliability of intuiting, this alone would not warrant a skeptical position concerning moral knowledge.15

Notes

1. There is no agreed-on understanding of intuitions. But most philosophers take them to be seemings or a kind of belief about particular cases or types of cases that arise more or less spontaneously or at least without conscious inference. These intuitions might directly translate into judgments or might be overridden by other considerations (see Sinnott-Armstrong 2008; Bengson 2014; Osbeck and Held 2014).
2. I say more about recent findings later in the chapter.
3. Note that information about peer disagreement can be understood as first-order evidence and as second-order evidence (for a discussion, see Kelly 2010 and Risberg & Tersman in this volume).
4. A variant of this case has the doctor being informed that she might have been tricked into eating reason-distorting mushrooms (Sliwa and Horowitz 2015).
5. DiPaolo (2018, 250–1) and Schechter (2013, 443) have variants of the hypoxia case.
6. Here again, I follow Christensen. For an alternative view, see Wittwer's discussion (in this volume) of Kelly's total evidence view.
7. This approach of disregarding those intuitions that have been shown to be unreliable might look obvious and easy. But it is far from what philosophers actually do, and it is pretty demanding. It requires gathering information about empirical research on moral intuitions. This is normally not regarded as philosophical work. It is not what philosophers normally do; it is also not what they are trained to do. So they would have to learn new skills and to engage with literature that they usually do not engage with.
8. It would thus be an instance of wishful thinking to rely on intuitions when evidence about their unreliability is available. I take this to be uncontroversial. And most philosophers who defend the view outlined earlier do not engage in this kind of wishful thinking: when they advocate collecting and listing the findings that speak against the reliability of certain case-specific intuitions, this amounts to bracketing in light of higher-order evidence; it is thus no instance of wishful thinking.
9. The comprehensive notion might also require something more positive, something that motivates the truth-trackingness of that process.
10. In personal conversation, both Folke Tersman and Shelly Kagan raised the problem that even in the case of visual perception, our understanding of the processes that lead to certain observations being robust and others – such as illusions or color-blindness – not being robust to disturbing factors is of rather recent vintage. The worry is that requiring a general awareness of the factors that processes such as intuiting or observing are inappropriately sensitive to might commit me to holding that observation was to be regarded as unreliable until recently. I agree that this would be an implausible implication. I think that people had good reasons to rely on observation even though they lacked understanding of its functioning. However, I think that visual perception and intuiting are relevantly different and that these differences explain why people were justified in relying on observation even though they might not be justified in relying on intuiting before they have a general understanding of these processes. The most important difference is that people usually observe the same things. That is, they rarely disagree about what they see. Neither do they disagree in everyday situations nor in cases in which visual perception is not robust, such as in cases of illusions. We generally fall for the same illusions and agree on what we perceive. This holds even across time and cultures. Things are different in the case of intuiting (see also Brink 1989, ch. 5). Let's stick to intuiting in the moral realm. Everyday moral intuitions might be widely shared within one society, but there is way more disagreement about everyday moral intuitions than there is about everyday observations. Moreover, moral intuitions are likely to vary, sometimes dramatically, between cultures (see the empirical findings later on) and with time. The variance is likely to be higher in unusual cases, such as the moral dilemmas that are mostly used in intuition-based moral epistemology. (Similar situations were rare in the context of observations. Color-blindness or unknown effects of drugs on visual perception are two examples that come to mind. However, they are significantly different from intuiting in that they were rare exceptions; the vast majority of people were not color-blind and were not under the influence of such drugs.) With these differences in mind, we can say that people were justified in relying on observation even when they lacked a general understanding of its functioning, because they had little reason to doubt the robustness of their observations. This might be different in the case of intuiting, because people disagree (sometimes quite fundamentally) about their intuitions.
11. To be more precise, it should respond to morally relevant internal factors. I limit my discussion in this chapter to external factors, which are always, I suppose, morally irrelevant. For a particularly influential debunking argument using morally irrelevant internal factors, see Greene (2008, 2014); for a discussion of this argument, see Berker (2009), Paulo (2019a) and Königs (2019).
12. One might suspect that the distinction that I draw between reasons for and reasons against is misleading because the reasons that speak against the robustness of a certain intuition can be reformulated as reasons for its robustness in other circumstances or for opposite intuitions, and vice versa. Yet this is not the case. For one thing, the mere accumulation of evidence about intuitions varying with certain factors doesn't say much: from the fact that an intuition is not robust to a certain disturbing factor, one cannot conclude that the intuition is reliable when this factor is absent. Neither does it imply that the opposite intuition is correct. Consider the trolley cases again. If we learn that judgments about the Footbridge Case vary with irrelevant factors (i.e. providing a reason against their robustness) such as cleanliness, this fact does not speak for the reliability of judgments made in other circumstances: having the information that intuitions formed in unclean environments are not robust does not say anything whatsoever about intuitions in clean environments. Similarly, learning that the stick under water just appears to be bent (i.e. learning that our observation is unreliable in such situations) does not provide any reason to believe that our observation is reliable in a desert, say. Neither can one justifiably believe in the opposite of an intuition that is not robust: if one learns that in situations of uncleanliness – in which intuitions are not robust – people tend to intuit that it is wrong to push the large person off the bridge, this does not warrant the contrary belief that it is right to push them. The information about robustness just is what it is, namely information about the robustness of a certain intuition. It does provide a reason to disregard the intuition, but it does not provide any support for the robustness (or any other epistemic credence) of another intuition. I thank Folke Tersman for pressing me on this point. Note further that the distinction that I draw between reasons for and reasons against the reliability of intuiting is not the same as the one between global and local debunking (see Kumar and May 2018). Although I am discussing the process of intuiting, I am not concerned with "process debunking" (Nichols 2014) of moral beliefs. Rather, my inquiry concerns the pros and cons of certain approaches to moral epistemology, not the beliefs formed through these epistemologies. I am confident that most philosophers do not ground their moral beliefs solely or primarily on moral intuitions. So for any moral belief, there will normally be many grounds, only one of which is intuition. This is also the reason why I don't think that my arguments imply a skeptical conclusion. I say more about this in the conclusion.
13. I largely follow the helpful overview in Machery (2017, ch. 2). See also Knobe (2019) and Wright (2016).
14. For a critical discussion of framing effects in ethics, see Kumar and May (2018) and Demaree-Cotton (2016).
15. For helpful comments and discussion, I am thankful to Shelly Kagan, Michael Klenk, Joshua Knobe, Peter Königs, Pranay Sanklecha and Folke Tersman.

References

Abarbanell, Linda, and Marc D. Hauser. 2010. "Mayan Morality: An Exploration of Permissible Harms." Cognition 115 (2): 207–24. https://doi.org/10.1016/j.cognition.2009.12.007.
Adleberg, Toni, Morgan Thompson, and Eddy Nahmias. 2015. "Do Men and Women Have Different Philosophical Intuitions? Further Data." Philosophical Psychology 28 (5): 615–41. https://doi.org/10.1080/09515089.2013.878834.
Antonenko Young, Olga, Robb Willer, and Dacher Keltner. 2013. "'Thou Shalt Not Kill': Religious Fundamentalism, Conservatism, and Rule-Based Moral Processing." Psychology of Religion and Spirituality 5 (2): 110–15. https://doi.org/10.1037/a0032262.
Banerjee, Konika, Bryce Huebner, and Marc Hauser. 2010. "Intuitive Moral Judgments Are Robust Across Variation in Gender, Education, Politics and Religion: A Large-Scale Web-Based Study." Journal of Cognition and Culture 10 (3–4): 253–81. https://doi.org/10.1163/156853710X531186.
Bartels, Daniel M., and David A. Pizarro. 2011. "The Mismeasure of Morals: Antisocial Personality Traits Predict Utilitarian Responses to Moral Dilemmas." Cognition 121 (1): 154–61. https://doi.org/10.1016/j.cognition.2011.05.010.
Bedke, Matthew S. 2008. "Ethical Intuitions: What They Are, What They Are Not, and How They Justify." American Philosophical Quarterly 45 (3): 253–69.
Bedke, Matthew S. 2010. "Intuitional Epistemology in Ethics." Philosophy Compass 5 (12): 1069–83. https://doi.org/10.1111/j.1747-9991.2010.00359.x.
Bengson, John. 2014. "How Philosophers Use Intuition and 'Intuition'." Philosophical Studies 171 (3): 555–76. https://doi.org/10.1007/s11098-014-0287-y.
Berker, Selim. 2009. "The Normative Insignificance of Neuroscience." Philosophy & Public Affairs 37 (4): 293–329. https://doi.org/10.1111/j.1088-4963.2009.01164.x.

Brink, David O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Brownstein, Michael. 2018. The Implicit Mind: Cognitive Architecture, the Self, and Ethics. New York: Oxford University Press.
Buckwalter, Wesley, and Stephen Stich. 2013. "Gender and Philosophical Intuition." In Experimental Philosophy. Vol. 2, edited by Joshua Knobe and Shaun Nichols, 307–47. Oxford: Oxford University Press.
Campbell, Richmond, and Victor Kumar. 2012. "Moral Reasoning on the Ground." Ethics 122 (2): 273–312. https://doi.org/10.1086/663980.
Cappelen, Herman, Tamar Szabo Gendler, and John Hawthorne, eds. 2016. The Oxford Handbook of Philosophical Methodology. Oxford and New York: Oxford University Press.
Cath, Yuri. 2016. "Reflective Equilibrium." In Cappelen, Szabo Gendler, and Hawthorne 2016, 213–30.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Climenhaga, Nevin. 2018. "Intuitions Are Used as Evidence in Philosophy." Mind 127 (505): 69–104. https://doi.org/10.1093/mind/fzw032.
Costa, Albert, Alice Foucart, Sayuri Hayakawa, Melina Aparici, Jose Apesteguia, Joy Heafner, and Boaz Keysar. 2014. "Your Morals Depend on Language." PLoS One 9 (4): e94842. https://doi.org/10.1371/journal.pone.0094842.
Dancy, Jonathan. 2004. Ethics without Principles. Oxford: Oxford University Press.
Daniels, Norman. 1979. "Wide Reflective Equilibrium and Theory Acceptance in Ethics." Journal of Philosophy 76 (5): 256–82.
Demaree-Cotton, Joanna. 2016. "Do Framing Effects Make Moral Intuitions Unreliable?" Philosophical Psychology 29 (1): 1–22. https://doi.org/10.1080/09515089.2014.989967.
DiPaolo, Joshua. 2018. "Higher-Order Defeat Is Object-Independent." Pacific Philosophical Quarterly 99 (2): 248–69. https://doi.org/10.1111/papq.12155.
Dworkin, Ronald. 1986. Law's Empire. Cambridge, MA: Harvard University Press.
Dworkin, Ronald. 2011. Justice for Hedgehogs. Cambridge, MA: Harvard University Press.
Floyd, Jonathan. 2017. "Rawls' Methodological Blueprint." European Journal of Political Theory 16 (3): 367–81. https://doi.org/10.1177/1474885115605260.
Friesdorf, Rebecca, Paul Conway, and Bertram Gawronski. 2015. "Gender Differences in Responses to Moral Dilemmas." Personality & Social Psychology Bulletin 41 (5): 696–713. https://doi.org/10.1177/0146167215575731.
Fumagalli, Manuela, Maurizio Vergari, Patrizio Pasqualetti, Sara Marceglia, Francesca Mameli, Roberta Ferrucci, Simona Mrakic-Sposta, et al. 2010. "Brain Switches Utilitarian Behavior: Does Gender Make the Difference?" PLoS One 5 (1): e8865. https://doi.org/10.1371/journal.pone.0008865.
Gleichgerrcht, Ezequiel, and Liane Young. 2013. "Low Levels of Empathic Concern Predict Utilitarian Moral Judgment." PLoS One 8 (4): e60418. https://doi.org/10.1371/journal.pone.0060418.
Gold, Natalie, Andrew Colman, and Briony Pulford. 2014. "Cultural Differences in Responses to Real-Life and Hypothetical Trolley Problems." Judgment and Decision Making 9 (1): 65–76.

Greene, Joshua D. 2008. "The Secret Joke of Kant's Soul." In Moral Psychology: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Vol. 3, edited by Walter Sinnott-Armstrong, 35–79. Cambridge, MA: MIT Press.
Greene, Joshua D. 2014. "Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics." Ethics 124 (4): 695–726.
Habermas, Jürgen. 1983. "Diskursethik – Notizen zu einem Begründungsprogramm." In Moralbewußtsein und kommunikatives Handeln, 53–125. Frankfurt am Main: Suhrkamp.
Hare, Richard M. 1982. Moral Thinking: Its Levels, Method and Point. Oxford: Oxford University Press.
Hauser, Marc, Fiery Cushman, Liane Young, R. Kang-Xing Jin, and John Mikhail. 2007. "A Dissociation between Moral Judgments and Justifications." Mind & Language 22 (1): 1–21. https://doi.org/10.1111/j.1468-0017.2006.00297.x.
Huemer, Michael. 2008. "Revisionary Intuitionism." Social Philosophy and Policy 25 (1): 368–92. https://doi.org/10.1017/S026505250808014X.
Johnson, David J., Felix Cheung, and M. Brent Donnellan. 2014. "Does Cleanliness Influence Moral Judgments?" Social Psychology 45 (3): 209–15. https://doi.org/10.1027/1864-9335/a000186.
Jonsen, Albert R., and Stephen Toulmin. 1992. The Abuse of Casuistry: A History of Moral Reasoning. Berkeley, CA: University of California Press.
Kagan, Shelly. 2001. "Thinking about Cases." In Moral Knowledge, edited by Ellen F. Paul, Fred D. Miller, and Jeffrey Paul, 44–63. New York, NY: Cambridge University Press.
Kahane, Guy. 2013. "The Armchair and the Trolley: An Argument for Experimental Ethics." Philosophical Studies 162 (2): 421–45. https://doi.org/10.1007/s11098-011-9775-5.
Kahane, Guy, Jim A.C. Everett, Brian D. Earp, Miguel Farias, and Julian Savulescu. 2015. "'Utilitarian' Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good." Cognition 134: 193–209. https://doi.org/10.1016/j.cognition.2014.10.005.
Kelly, Daniel. 2011. Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA: MIT Press.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Disagreement, edited by Richard Feldman and Ted A. Warfield, 111–74. Oxford: Oxford University Press.
Knobe, Joshua. 2019. "Philosophical Intuitions Are Surprisingly Robust across Demographic Differences." Epistemology & Philosophy of Science 56 (2): 29–36. https://doi.org/10.5840/eps201956225.
Königs, Peter. 2019. "Experimental Ethics, Intuitions, and Morally Irrelevant Factors." Philosophical Studies. https://doi.org/10.1007/s11098-019-01330-z.
Kumar, Victor, and Joshua May. 2018. "How to Debunk Moral Beliefs." In Methodology and Moral Philosophy, edited by Jussi Suikkanen and Antti Kauppinen, 25–48. New York, NY: Routledge.
Lanteri, Alessandro, Chiara Chelini, and Salvatore Rizzello. 2008. "An Experimental Investigation of Emotions and Reasoning in the Trolley Problem." Journal of Business Ethics 83 (4): 789–804. https://doi.org/10.1007/s10551-008-9665-8.

Lasonen-Aarnio, Maria. 2014. "Higher-Order Evidence and the Limits of Defeat." Philosophy and Phenomenological Research 88 (2): 314–45.
Leslie, Alan M., Joshua Knobe, and Adam Cohen. 2006. "Acting Intentionally and the Side-Effect Effect." Psychological Science 17 (5): 421–7. https://doi.org/10.1111/j.1467-9280.2006.01722.x.
Liao, S. Matthew, Alex Wiegmann, Joshua Alexander, and Gerard Vong. 2012. "Putting the Trolley in Order: Experimental Philosophy and the Loop Case." Philosophical Psychology 25 (5): 661–71. https://doi.org/10.1080/09515089.2011.627536.
Maagt, Sem de. 2017. "Reflective Equilibrium and Moral Objectivity." Inquiry 60 (5): 443–65. https://doi.org/10.1080/0020174X.2016.1175377.
Machery, Edouard. 2017. Philosophy within Its Proper Bounds. Oxford: Oxford University Press.
May, Joshua. 2014. "Does Disgust Influence Moral Judgment?" Australasian Journal of Philosophy 92 (1): 125–41. https://doi.org/10.1080/00048402.2013.797476.
McMahan, Jeff. 2013. "Moral Intuition." In The Blackwell Guide to Ethical Theory. Vol. 4, edited by Hugh LaFollette and Ingmar Persson, 103–20. Oxford: Blackwell.
McPherson, Tristram. 2015. "The Methodological Irrelevance of Reflective Equilibrium." In The Palgrave Handbook of Philosophical Methods, edited by Christopher Daly, 652–74. London: Palgrave Macmillan.
Nadelhoffer, Thomas, and Adam Feltz. 2008. "The Actor-Observer Bias and Moral Intuitions: Adding Fuel to Sinnott-Armstrong's Fire." Neuroethics 1 (2): 133–44. https://doi.org/10.1007/s12152-008-9015-7.
Nichols, Shaun. 2014. "Process Debunking and Ethics." Ethics 124 (4): 727–49. https://doi.org/10.1086/675877.
Osbeck, Lisa M., and Barbara S. Held, eds. 2014. Rational Intuition: Philosophical Roots, Scientific Investigations. Cambridge: Cambridge University Press.
Paulo, Norbert. 2019a. "In Search of Greene's Argument." Utilitas 31 (1): 38–58. https://doi.org/10.1017/S0953820818000171.
Paulo, Norbert. 2019b. The Unreliable Intuitions Objection against Reflective Equilibrium. Manuscript.
Paulo, Norbert. 2020. "Romantisierte Intuitionen? Die Kritik der experimentellen Philosophie am Überlegungsgleichgewicht." In Empirische Ethik: Grundlagentexte aus Psychologie und Philosophie, edited by Norbert Paulo and Jan C. Bublitz. Berlin: Suhrkamp.
Pellizzoni, Sandra, Michael Siegal, and Luca Surian. 2010. "The Contact Principle and Utilitarian Moral Judgments in Young Children." Developmental Science 13 (2): 265–70. https://doi.org/10.1111/j.1467-7687.2009.00851.x.
Petrinovich, Lewis, and Patricia O'Neill. 1996. "Influence of Wording and Framing Effects on Moral Intuitions." Ethology and Sociobiology 17 (3): 145–71. https://doi.org/10.1016/0162-3095(96)00041-6.
Petrinovich, Lewis, Patricia O'Neill, and Matthew Jorgensen. 1993. "An Empirical Study of Moral Intuitions: Toward an Evolutionary Ethics." Journal of Personality and Social Psychology 64 (3): 467–78. https://doi.org/10.1037/0022-3514.64.3.467.
Rasmussen, Mattias Skipper, and Asbjørn Steglich-Petersen, eds. forthcoming. Higher-Order Evidence: New Essays. Oxford: Oxford University Press.

Rawls, John. 1974. "The Independence of Moral Theory." Proceedings and Addresses of the American Philosophical Association 48: 5–22. https://doi.org/10.2307/3129858.
Risberg, Olle, and Folke Tersman. 2020. "Disagreement, Indirect Defeat, and Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Scanlon, Thomas M. 2002. "Rawls on Justification." In The Cambridge Companion to Rawls, edited by Samuel Freeman, 139–67. Cambridge: Cambridge University Press.
Schechter, Joshua. 2013. "Rational Self-Doubt and the Failure of Closure." Philosophical Studies 163 (2): 429–52. https://doi.org/10.1007/s11098-011-9823-1.
Schnall, Simone. 2017. "Disgust as Embodied Loss Aversion." European Review of Social Psychology 28 (1): 50–94. https://doi.org/10.1080/10463283.2016.1259844.
Schnall, Simone, Jonathan Haidt, Gerald L. Clore, and Alexander H. Jordan. 2008. "Disgust as Embodied Moral Judgment." Personality & Social Psychology Bulletin 34 (8): 1096–109. https://doi.org/10.1177/0146167208317771.
Seyedsayamdost, Hamid. 2015. "On Gender and Philosophical Intuition: Failure of Replication and Other Negative Results." Philosophical Psychology 28 (5): 642–73. https://doi.org/10.1080/09515089.2014.893288.
Siegel, Susanna. 2017. "How Is Wishful Seeing Like Wishful Thinking?" Philosophy and Phenomenological Research 95 (2): 408–35. https://doi.org/10.1111/phpr.12273.
Singer, Peter. 1974. "Sidgwick and Reflective Equilibrium." The Monist 58 (3): 490–517.
Singer, Peter. 2005. "Ethics and Intuitions." The Journal of Ethics 9 (3–4): 331–52.
Sinnott-Armstrong, Walter. 2006. "Moral Intuitionism Meets Empirical Psychology." In Metaethics after Moore, edited by Terry Horgan and Mark Timmons, 339–66. Oxford: Oxford University Press.
Sinnott-Armstrong, Walter. 2008. "Framing Moral Intuitions." In Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity. Vol. 2, edited by Walter Sinnott-Armstrong, 47–76. Cambridge, MA: MIT Press.
Sliwa, Paulina, and Sophie Horowitz. 2015. "Respecting All the Evidence." Philosophical Studies 172 (11): 2835–58. https://doi.org/10.1007/s11098-015-0446-9.
Strohminger, Nina. 2014. "Disgust Talked About." Philosophy Compass 9 (7): 478–93. https://doi.org/10.1111/phc3.12137.
Strohminger, Nina, Richard L. Lewis, and David E. Meyer. 2011. "Divergent Effects of Different Positive Emotions on Moral Judgment." Cognition 119 (2): 295–300. https://doi.org/10.1016/j.cognition.2010.12.012.
Tännsjö, Torbjörn, and Henrik Ahlenius. 2012. "Chinese and Westerners Respond Differently to the Trolley Dilemmas." Journal of Cognition and Culture 12 (3–4): 195–201. https://doi.org/10.1163/15685373-12342073.
Tersman, Folke. 2018. "Recent Work on Reflective Equilibrium and Method in Ethics." Philosophy Compass 13 (6): e12493. https://doi.org/10.1111/phc3.12493.
Uhlmann, Eric Luis, David A. Pizarro, David Tannenbaum, and Peter H. Ditto. 2009. "The Motivated Use of Moral Principles." Judgment and Decision Making 4 (6): 479–91.

Valdesolo, Piercarlo, and David DeSteno. 2006. "Manipulations of Emotional Context Shape Moral Judgment." Psychological Science 17 (6): 476–7. https://doi.org/10.1111/j.1467-9280.2006.01731.x.
Weinberg, Jonathan M. 2016. "Intuitions." In Cappelen, Szabo Gendler, and Hawthorne 2016, 287–308.
Weinberg, Jonathan M., Shaun Nichols, and Stephen Stich. 2001. "Normativity and Epistemic Intuitions." Philosophical Topics 29 (1): 429–60. https://doi.org/10.5840/philtopics2001291/217.
Wheatley, Thalia, and Jonathan Haidt. 2005. "Hypnotic Disgust Makes Moral Judgments More Severe." Psychological Science 16 (10): 780–4. https://doi.org/10.1111/j.1467-9280.2005.01614.x.
Wiegmann, Alex, Yasmina Okan, and Jonas Nagel. 2012. "Order Effects in Moral Judgment." Philosophical Psychology 25 (6): 813–36. https://doi.org/10.1080/09515089.2011.631995.
Wittwer, Silvan. 2020. "Evolutionary Debunking, Self-Defeat and All the Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Wright, Jennifer Cole. 2016. "Intuitional Stability." In A Companion to Experimental Philosophy. Vol. 95, edited by Justin Sytsma and Wesley Buckwalter, 568–77. Chichester: Wiley-Blackwell.
Zamzow, Jennifer L., and Shaun Nichols. 2009. "Variations in Ethical Intuitions." Philosophical Issues 19 (1): 368–88. https://doi.org/10.1111/j.1533-6077.2009.00164.x.

3

Debunking Objective Consequentialism: The Challenge of Knowledge-Centric Anti-Luck Epistemology

Paul Silva

1 Introduction1

It is a part of our common-sense perspective on the world that we know and have some justified beliefs about the moral status of our prospective actions. With these, we also attempt to make our way towards a theoretical understanding of what makes these moral beliefs true: we attempt to home in on the correct normative theory of ethics. It is generally assumed that the search for the correct normative theory of ethics is not absolutely futile – that is, at the very least, we can find reasons to justifiedly increase our confidence in some normative theories over others.

Certain versions of objective act consequentialism are widely thought to be among the normative theories of ethics that we should take most seriously and thus be comparatively more certain about. In large part, this is due to their ability to explain important parts of our common-sense moral perspective. In what follows, I will explain why this is mistaken from the perspective of knowledge-centric anti-luck epistemology. According to this view, there are modal anti-luck demands on both knowledge and justification, and it turns out that our beliefs about the moral status of our prospective actions are almost never able to satisfy these demands if objective act consequentialism is true. Accordingly, if objective act consequentialism is true, we neither know nor have justified beliefs about the moral status of our prospective actions. So a problematic kind of applied moral skepticism obtains.

As I will explain, this kind of applied moral skepticism introduces problematic limits on our ability to use objective act consequentialism's explanatory power to justifiedly increase our confidence in its truth. This is, in part, a product of higher-order defeat, as I explain in the final section. There is, however, a silver lining for objective act consequentialists, because there is at least one type of objective act consequentialism – prior existence consequentialism – that is poised to avoid at least some of the epistemic problems discussed in this chapter.


2 Knowledge-Centric Anti-Luck Epistemology and Justificatory Defeat

Take a standard lottery case:

LOTTERY: I have a ticket for a fair lottery with long odds. The lottery has been drawn, although I have not heard the results yet. Reflecting on the odds involved, I conclude that (L) my ticket is a loser. Besides my (accurate) assessment of the odds, I have no other reason to think my ticket is a loser. As it turns out, my belief that I own a losing ticket is true.

It is widely thought that L cannot be known in these circumstances. The explanation for the unknowability of L in these circumstances is not that I lack strong evidence in favor of L. L is extremely probable on my evidence. Rather, the unknowability of L is thought to be owed to the fact that my belief, even if true, would be in some sense true by "epistemic luck" – a kind of luck that is incompatible with knowing. Generally, the nature of this epistemically problematic form of luck is thought to involve a modal defect in my believing L in these circumstances. The two leading accounts of this modal defect locate the problem with my belief not at the actual world, where my belief is true, but rather in a relation I bear toward myself in nearby possible worlds where I falsely believe L. The sensitivity account and the safety account are the leading accounts of that relation. Here is a common formulation of these proposed requirements on knowledge:

SENSITIVITY: Had it been that not-P, S would not still have believed that P (had S used the method that S actually used to determine whether P).

SAFETY: S could not have easily had a false belief as to whether or not P (using the method that S actually used to determine whether or not P).2

If either principle is correct, then L cannot be known in Lottery, because neither principle is satisfied in that case. It is easy to see why Sensitivity fails in Lottery. My belief in L – that my ticket is a loser – is based solely on my statistical evidence. But I could have that same evidence even if my belief were false – that is, even if there is some nearby world where my ticket is a winner. Because of this, I would still believe L on the basis


of that same statistical evidence even were L false. So Sensitivity offers a straightforward explanation of the idea that L is unknowable in Lottery.

It is a bit trickier to see how Safety explains the same fact. The idea that a belief could not have easily been false is the idea that the world would have to be significantly different in order for the target of belief to be false. For example, I truly believe that I will not become a U.S. Navy SEAL this year. And this belief could not have easily been false in the sense that in order for it to be false, the world would have to be quite different in a variety of respects: the relevant governing bodies of the U.S. Navy would have to decide that it's advisable to lower their fitness standards (dramatically) in order for me to be considered admissible as a SEAL, or there would have to be some kind of conspiracy where all relevant individuals involved in assessing SEAL candidates willingly lie about my admissibility, or something else equally unusual would have to happen. Given my age, my limited physical fitness, my limited ability to persevere through physical pain, my lack of political power to orchestrate a conspiracy, my intention not to join the SEALs, and so on, it could not have easily been false that I will not become a SEAL this year. So this belief satisfies Safety and so is a candidate for knowledge. In contrast, as Pritchard (2008, Section 4) explains, my belief in L in Lottery could easily have been false since "all that need be different in order for one's ticket to be a winning ticket is that a few numbered balls fall in a slightly different configuration." Intuitively, such worlds are not significantly different from the actual world, so my belief could easily have been false.3

While the unknowability of L in Lottery is widely accepted, some hold out hope that L might nevertheless be justifiedly believable in Lottery (e.g. standard Lockean evidentialists about justification). But knowledge-centric theorists have generally argued for theses about the relation between knowledge and justification that are inconsistent with this view. While I cannot go into the details here, the general motivation connecting knowledge and justification has to do with the normative role of knowledge in our assessment of belief. First, not only does one intuitively fail to know in Lottery cases, but one also intuitively shouldn't hold and so can't justifiedly hold a lottery belief. If justification requires knowledge, this fact about lottery beliefs can easily be explained. Second, the idea that justification requires knowledge can explain why certain Moorean beliefs shouldn't be held – for example, "I believe P, but I do not know P." Third, a knowledge requirement on justification for belief can explain our critical practices of assessing others' beliefs. For instance, if someone believes P, it's perfectly normal to object to their believing P on the basis of the fact that they don't know it. A knowledge requirement on justification would also explain this.

It turns out there are different ways of putting knowledge at the center of the justification of belief. For example, some knowledge-centric theorists think that knowledge is necessary and sufficient (and to be


token-identified with) justified belief (Williamson 2013; Sutton 2007; Littlejohn 2012):

J↔K: S justifiedly believes that P iff S knows that P.

If correct, then not only can L not be known in Lottery, but it cannot be justifiedly believed in Lottery either. Alternatively, some knowledge-centric theorists have argued that justification requires something just shy of knowing: being in a position to know (Bird 2007; Ichikawa 2014; Rosenkranz 2018):4

J→PK: S justifiedly believes P only if S is in a position to know P.

If correct, the impossibility of knowing L entails the impossibility of justifiedly believing L in Lottery also.5

Now, if it's possible to have justified false beliefs, then perhaps it's possible to fail to know L while nevertheless having justification for certain higher-order beliefs: the belief that one knows L (though one doesn't) or the belief that one is justified in believing L (though one isn't). But notice that both J↔K and J→PK entail the defeat of these higher-order beliefs, because if either J↔K or J→PK is true, it's impossible to have justified false beliefs, since false beliefs cannot ever constitute knowledge. Accordingly, it's not just first-order lottery beliefs that are unjustified; these higher-order beliefs are also unjustified. So justification for both first-order beliefs and higher-order beliefs is lost.

Unsurprisingly, the idea that knowledge must be a bare possibility for one to have justification has seemed to some a difficult bullet to bite. Accordingly, some have sought to unify the intuitions driving knowledge-centric views of justification without committing themselves to the idea that knowledge or possible knowledge is required for justified belief. For example, Smithies (2012) has argued that justification for the belief that P requires that one enjoy justification to believe a higher-order claim about the knowability of P – that is, one must have justification to believe that one is in a position to know P:

J→JPK: S justifiedly believes P only if S has justification to believe that they're in a position to know P.

If this is correct, then there is room for both first-order and higher-order justified beliefs in L. But of course, such justified beliefs are limited to those who are sufficiently ignorant of the fact that one cannot know L in Lottery: those who lack access to reasons sufficient to defeat the claim that one can know L in Lottery. Accordingly, J→JPK leaves many of us who are reflective about lottery cases in much the same position as J↔K and J→PK: we cannot know L, we cannot justifiedly believe L,


and we cannot even falsely justifiedly believe that we have justification to believe L.

As formulated, each of the knowledge-centric principles of justification just mentioned concerns only doxastic justification. It is typically important to keep in mind the difference between propositional justification (= having justification to ϕ) and doxastic justification (= justifiedly ϕ-ing). I will not make much of this in what follows and will switch freely between the two locutions. This will make no difference under the assumption that having propositional justification requires at least the bare possibility of doxastic justification. Since knowledge of lottery beliefs is impossible, this will make both doxastic and propositional justification impossible to come by on the knowledge-centric views just mentioned.

In what follows, I'll explain the surprising problem that anti-luck epistemology and knowledge-centric epistemology generate for ethical consequentialists.
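For readers who want the section's machinery at a glance, the two modal conditions and the three bridge principles can be compactly restated. The regimentation below is a standard one from the anti-luck literature rather than Silva's own notation: B_M(S,P) abbreviates "S believes P via method M", the boxed arrow is the counterfactual conditional, and Near(w_@) is the set of worlds close to the actual world.

\[
\textsc{Sensitivity:}\quad \neg P \;\Box\!\rightarrow\; \neg B_M(S,P)
\]
\[
\textsc{Safety:}\quad \text{for all } w \in \mathrm{Near}(w_{@}):\ \text{if } B_M(S,P) \text{ holds at } w, \text{ then } P \text{ holds at } w
\]
\[
\mathrm{J}{\leftrightarrow}\mathrm{K}:\ J(S,P) \leftrightarrow K(S,P) \qquad
\mathrm{J}{\rightarrow}\mathrm{PK}:\ J(S,P) \rightarrow PK(S,P) \qquad
\mathrm{J}{\rightarrow}\mathrm{JPK}:\ J(S,P) \rightarrow J(S,\, PK(S,P))
\]

On this rendering, Lottery fails both conditions: the nearest worlds where the ticket wins lie within Near(w_@) and S still believes L there, so Safety fails; and in the nearest worlds where L is false, S's statistical evidence is unchanged and S still believes L, so Sensitivity fails as well.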

3 Objective Act Consequentialism

For present purposes, let a consequentialist about moral requirements be anyone who thinks that the truth of claims about what actions are morally required (forbidden, optional) depends solely on the long-term consequences of those actions in the following way:

CONSEQUENTIALISM: S is required to perform action A if the long-term net value of A-ing is greater than the long-term net value of performing any alternative action. If the long-term net value of A-ing is less than the long-term net value of performing some alternative action, then A-ing is wrong. If the long-term net value of A-ing is the same as the long-term net value of some alternative and there is no other alternative action with a higher net value, then A-ing is optional.

NET VALUE: The net value of an action is the amount of value that results from that action in a specified period of time minus the disvalue that results from that action in that same period of time.

This leaves open a number of dimensions along which to specify one's preferred version of (objective, act) consequentialism. One could take an egoist view on which all that matters is how one's present actions maximize the net amount of pleasure of one's experiences in their life – in which case "the long term" is just the duration of one's life. Alternatively, one could take a classical utilitarian view on which right action


is determined by actions maximizing the net pleasure for all sentient beings who stand to be impacted by one's actions – in which case "the long term" is just the duration of the effects of one's potential actions. This could be ten minutes, ten years, or ten millennia from the time of action. Alternatively, one could take a form of prior existence utilitarianism, where all that matters for fixing right action is how one's prospective actions would impact people who already exist (or would come to exist irrespective of which action is performed) – in which case "the long term" is limited to the lifespan of those individuals.6

While the forms of consequentialism that I'm explicitly discussing are the maximizing varieties, satisficing varieties will have an equally difficult time in avoiding the epistemic problem that I develop for maximizing varieties.7 Also, while consequentialist views suffer perhaps the worst from knowledge-centric anti-luck epistemology, any moral theory that creates space for the moral status of at least some actions to be determined by the net value of their consequences will face a version of this problem.8

Let's start by considering high-stakes moral beliefs about the actual world: the belief that it's actually wrong to murder the entire Rohingya population of Myanmar, the belief that it's actually wrong to steal large sums of money from effective charities that would use it to alleviate the suffering of many, the belief that it's actually wrong to euthanize every other baby in the world, the belief that it's actually wrong to drop an atomic bomb near a large population to observe its negative effects across that population, and so on. These are all claims about prospective actions in the actual world as opposed to merely possible worlds. When it comes to merely possible worlds, we can often a priori specify the net value of the consequences of our potential actions more or less arbitrarily. Not so with the actual world. What I'm concerned with is the epistemic standing of our moral beliefs about actions that we and others can perform in the actual world.

One thing these moral beliefs have in common is that they concern foreseeably high-stakes actions: these are prospective actions that are specified in such a way that it is assumed that the individual for whom they are prospective actions is in a position to know that in the near term an exceptionally bad moral state of affairs would result from performing them. Now, there can be prospective high-stakes actions in circumstances where every alternative action is also a high-stakes action (think of trolley cases where the numbers on the tracks are roughly equal). These are not the kind of circumstances I have in mind in what follows. Our moral beliefs about these kinds of high-stakes actions will be controversial, and skepticism about them will be far less troubling. Rather, I'm limiting reflection to cases where a prospective high-stakes action has at least one prospective low-stakes alternative that one could easily perform. For example, a military leader's ability (a) to murder or displace all, or nearly all, of Myanmar's Rohingya population as well as their ability (b) to not


murder or displace any of the population. Ordinary moral judgements would affirm that performing (a) is morally wrong when – though perhaps not only when – (b) is an available prospective course of action.

High-stakes moral beliefs are common and generally uncontroversial and often function as starting points (and sometimes as fixed points) in non-skeptical moral theorizing. Their evidential usefulness in moral theorizing is owed to the fact that they seem to be justified and knowledgeable moral beliefs. But if our high-stakes moral beliefs are to be justified and knowledgeable from the consequentialist point of view, their justification and knowledgeability depend on induction in some way or other. Accordingly, we must inductively project in some way from past experience with a given action type to the conclusion that the target instance of that type will in one's present circumstances maximize net value in the long run.

But a worry immediately arises: it's surely possible that a prospective action that would have horrendous near-term consequences nevertheless maximizes net value in the long run. This is an old objection, and consequentialists have had something to say about it. For example, G.E. Moore (1903 [1988], 93) tentatively suggests the following:

As we proceed further and further from the time at which alternative actions are open to us, the events of which either action would be part cause become increasingly dependent on those other circumstances, which are the same, whichever action we adopt. The effects of any individual action seem, after a sufficient space of time, to be found only in trifling modifications spread over a very wide area, whereas its immediate effects consist in some prominent modification of a comparatively narrow area. Since, however, most of the things which have any great importance for good or evil are things of this prominent kind, there may be a probability that after a certain time all the effects of any particular action become so nearly indifferent, that any difference between their value and that of the effects of another action, is very unlikely to outweigh an obvious difference in the value of the immediate effects.

J.J.C. Smart (1973, 33) concisely reiterates the idea:

we do not normally in practice need to consider very remote consequences, as these in the end rapidly approximate to zero like the furthermost ripples on a pond after a stone has been dropped into it.

Shelly Kagan (1998, 64) says that

Of course, it remains true that there will always be a very small chance of some totally unforeseen disaster resulting from your act.


But it seems equally true that there will be a corresponding very small chance of your act resulting in something fantastically wonderful, although totally unforeseen. If there is indeed no reason to expect either, then the two possibilities will cancel each other out as we try to decide how to act.

If Moore, Smart, and Kagan are right, then past experience allows us to reliably project the net value of a given action at least in a reasonable range of normal circumstances. According to their suggestion, the long-term effects of any prospective action will (or are objectively likely to) wash out in a way that tends to make its foreseeable near-term net value representative of its long-term net value. Accordingly, if any kind of action allows for this sort of projection, it's exactly the sort of high-stakes actions described earlier where the foreseeable near-term consequences are extremely high.9

Let's first clarify just how we can move from the informal assertions of projectability made earlier to outright claims about a given action being right or wrong. Here is an apparently cogent way of specifying the needed details:

CONSEQUENTIALIST MORAL REASONING (CMR):

(1) If, up to the present, A-ing in circumstances like the ones I'm in has (or would have) frequently enough maximized net value up till now, then A-ing in my present circumstances will likely maximize net value in the long run (suggested by Moore, Smart, and Kagan).
(2) Up to the present, A-ing in circumstances like the ones I'm in has (or would have) frequently enough maximized net value up to now.
(3) Therefore, A-ing in my circumstances will likely maximize net value in the long run (from 1 and 2).
(4) Therefore, given that I have no significant reason to think that A-ing will fail to maximize net value in the long run, A-ing in my circumstances will maximize net value in the long run (from 3 and contraction [see later]).
(5) An action is wrong iff it fails to maximize net value in the long run (consequentialism).
(6) Therefore, refraining from A-ing in my circumstances is wrong (from 4 and 5).

Let me say a few things about this pattern of reasoning before getting to the lottery problem for consequentialists.

(1) offers us a way of specifying the underlying idea that Moore, Smart, and Kagan suggested on behalf of consequentialism. The parenthetical "would have" in (1) is to indicate that sometimes we can judge a possible past action's prospective net value in the near term without the need of


anyone having performed that specific action in the past. At no point in the past has 99.9% of Earth's population suffered horribly unto death from the release of a virus. Yet we know (or can at any rate be reasonably certain) that if someone were to have done that five years ago, that action would have failed to maximize net value up to now: it's an action whose net value calculated up to the present moment is lower than some alternative prospective action's net value when calculated up to the present moment. Importantly, the justification for (1) is inductive: provided induction from past experience is sufficiently reliable in the case of high-stakes beliefs, we have defeasible inductive justification for endorsing the conditional specified by (1). Of course, as noted earlier, the consequentialist application of (1) assumes that maximization of net value up to the present is a sufficiently reliable indicator of maximization of net value in the long run. Again, some versions of consequentialism will have a relatively easy time justifying this (prior existence utilitarianism), while other versions will have a comparatively difficult time doing so (classical utilitarianism). This is something that warrants further discussion, but I will pass it by to discuss other issues. For the most part, the justification of (1) becomes progressively easier the more high-stakes our potential actions are, irrespective of the version of consequentialism one endorses.

(2) stands to be justified by historical knowledge of the effects of such actions in suitably similar circumstances. (3) is a deductive conclusion from (1) and (2).

(4) relies on what I'm terming a contraction principle: a principle that licenses transitioning from probabilistically qualified claims to probabilistically unqualified claims in the absence of defeating information. For example, when a radiologist examines an X-ray of your leg and concludes that you have a hairline fracture, they are (or can be) implicitly reasoning from a contraction principle. This is because while X-rays offer us highly reliable representations when interpreted by a trained professional, there is still some small margin of error; there is still some small chance that either the X-ray production involved some error in representation or the reader mistook something for a hairline fracture that was not a hairline fracture. Even so, a skilled radiologist can justifiedly judge that you in fact have a hairline fracture despite the small error possibilities, so long as they have no reason to think the small error possibility is actual. Similarly, most of the time, a jury judges (and sincerely believes) someone guilty of a crime on the basis of a body of evidence, and in doing so they engage in a form of contractive reasoning from how things likely are to how things actually are. There is a lot to say about contractive reasoning, and I will return to this later on. For now, it's enough to note that we regularly engage in (or could engage in) such reasoning and that often enough such reasoning is justified.
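The contraction principle at work in (4) can be given a schematic statement. The threshold formulation below is an illustrative editorial gloss rather than Silva's official principle; t stands for a contextually fixed high probability threshold, and "defeater" is a placeholder for whatever defeating information would block the step:

\[
\textsc{Contraction (schematic):}\quad
\text{If } \Pr(P \mid E) \ge t,\ S\text{'s evidence includes } E, \text{ and } S \text{ has no defeater for } P,
\text{ then } S \text{ may believe } P \text{ outright.}
\]

As Section 4 will argue, Safety constrains when this step yields knowledge: even with Pr(P | E) arbitrarily close to 1, the resulting belief is knowledgeable only if it could not easily have been false.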


(5) is just a coarse-grained representation of the objective consequentialist thesis. If any of our high-stakes moral beliefs are to be justified on the assumption that some form of consequentialism is correct, then it has to be the case that something akin to a CMR argument underlies the justification of our high-stakes moral beliefs.

4 Applied Moral Skepticism

Why should we think that our high-stakes moral beliefs are like lottery beliefs if consequentialism is true? The answer flows out of the different constraints that Sensitivity and Safety impose on knowledgeable belief.

Take Sensitivity first. Suppose you believe A-ing is wrong on the basis of a CMR argument. Would you still hold this belief on the basis of a CMR argument even if your belief were false – that is, even if A-ing were not wrong? You would. This is because CMR arguments are not "sensitive" to the falsity of the beliefs that they support. The reason for this lies with premise (1), which is a conditional that relies on induction from past experience for its justification. But any reasoning that relies on the past as an indicator of the future will fail to satisfy Sensitivity. For example, my believing that the sun will rise tomorrow because it has always done so in the past will not count as knowledge, since I would still have believed this even if, for whatever reason, the sun were destroyed or the rotation of the Earth halted before it had a chance to rise tomorrow (Vogel 1987; Comesaña 2007). Similarly, it could improbably turn out to be the case that murdering all the local schoolchildren maximizes net value because that would lead to a distant future where many, many more children are saved from lethal harm that they otherwise would have suffered. But even were that the unlikely truth, if my belief that it's wrong were based on a CMR argument, I would still have believed that it's wrong to kill all the local schoolchildren just on the basis of the fact that such an action would foreseeably fail to maximize net value. Accordingly, if Sensitivity is true, then premise (1) is unknown given its inductive justification. And if premise (1) is unknown, then presumably we cannot know the moral status of an action on the basis of a CMR argument – at least not those that rely on induction for their justification of premise (1).

Now, it's easy to see how little knowledge remains to us if Sensitivity is true, since inductive knowledge becomes exceedingly difficult to come by in general (Vogel 1987; Comesaña 2007). For this reason, many philosophers have been reluctant to endorse Sensitivity and have turned to its contrapositive cousin, Safety, for help. Safety lacks the skeptical implications of Sensitivity for inductive beliefs. So if Safety is true, then premise (1) of CMR is not obviously in jeopardy. And if one can know premise (1) along with the rest of the premises, then it would seem that one could also come by moral knowledge and justified moral beliefs on the basis of CMR arguments also.
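Put in the counterfactual notation introduced earlier (again an editorial regimentation, not the chapter's own), the Sensitivity failure has a simple shape. Let E be the inductive track-record evidence and W the proposition that A-ing is wrong. In the nearest worlds where W is false, E is unchanged, so one still believes W via the CMR method:

\[
\neg W \;\Box\!\rightarrow\; B_{\mathrm{CMR}}(S, W),
\quad\text{and hence}\quad
\neg\big(\neg W \;\Box\!\rightarrow\; \neg B_{\mathrm{CMR}}(S, W)\big),
\]

which is just the denial of Sensitivity for W as believed on inductive grounds.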


The thing to note is that while Safety doesn't threaten premise (1), it does obstruct the derivation of its conclusion by imposing limits on the ability of contractive reasoning to afford us knowledge. After all, recall that to reach (4) in the CMR, we needed to rely on the idea that we can transition from claims about how things very likely are(/will be) to how things actually are(/will be). But if Safety is true, then we can gain knowledge of actuality from knowledge of likelihoods only when beliefs so based could not have easily been false. But if consequentialism is true, almost all of our high-stakes moral beliefs are just that: they could easily have been false.

To see this, consider a variation on the Lottery case from earlier:

AGENT-CAUSAL LOTTERY: You're holding a lottery ticket whose number reads 1524353214. Unlike a standard lottery, the winning ticket number is not determined through a near-random mechanical process. Rather, the winning ticket number is determined by the following process. Each day, starting from tomorrow, a new participant is selected to decide which number comes next in the series. It can be any number between 1 and 5. Since there are ten numbers on the ticket, this will take ten days and will require ten participants to select these numbers. Each participant is freely selected by the previous participant, while the first participant is chosen at random. And, excepting the first participant, each participant is informed as to the number selected by the previous participant and encouraged, but not forced, to choose a different number than the previous participant. Accordingly, the participants are influenced by that knowledge when selecting a number. Given this selection process, it's clear that the winning ticket number will not be chosen at random. So you know that, unlike a fair lottery, the exact chances that your ticket's number will be selected cannot be calculated in any precise way. But you do know that your chance of winning is obviously very small. So it's extremely likely that your ticket is a loser. Reflecting on the long odds involved, you conclude (L*) that your ticket is a loser. Besides your rough assessment of the odds, you have no other reason to think your ticket is a loser. As it turns out, your number will not be selected in the following ten days, so your belief that you own a losing ticket is true.

This is an "agent-causal lottery" in the sense that there is a clear non-random agent-causal path reaching from the first participant to the winning ticket number. Each participant (except for the first) is selected by the previous participant, and each participant's number selection is causally influenced by their knowledge of the previous participant's number selection (except for the first). Accordingly, the outcome of this lottery is


causally produced by the actions performed by previous agents and the responses of the agents who are impacted by those past actions.

Do you know (L*) that you have a losing ticket in Agent-Causal Lottery? I venture to claim that anyone convinced that you cannot know L in Lottery will likewise be drawn to the conclusion that you cannot know L* in Agent-Causal Lottery. The parallels are too deep, and the differences are too superficial. But more to the point, if Safety is what we rely on in diagnosing what goes wrong with my belief in Lottery, then we must also rely on it in diagnosing this case. And given the parallels between this case and the original, it's easy to see where this is heading. In the original case, my belief in L fails to satisfy Safety, because it could easily have been false since "all that needs to be different in order for one's ticket to be a winning ticket is that a few numbered balls fall in a slightly different configuration" (Pritchard 2008, Section 4). And similarly, your belief in L* in Agent-Causal Lottery could easily have been false since all that needs to be different in order for your ticket to be a winning ticket is for a few people to have made slightly different decisions about which number and subsequent participant to choose. So if Safety explains my failure to know in Lottery, it explains your failure to know in the Agent-Causal Lottery.

The Agent-Causal Lottery is a trying case for non-skeptical consequentialists, because the outcomes of our high-stakes actions in the long run are a lot like the outcome in an Agent-Causal Lottery. In both kinds of cases, there is an agent-causal path that produces the relevant outcome, and at each node (or at very many nodes) in that path, things could easily have been otherwise. This is because there are almost always many alternative available prospective actions that an agent might easily have performed, thereby leading to different outcomes with a plausibly different net value.

Take, for example, the 2019 mass shooting in El Paso, Texas, where 22 were killed and 24 were injured (if this is insufficiently high-stakes, just imagine that many more were killed and injured). By the lights of ordinary moral intuitions, this was a wrong action and we know that this was a wrong action. But the knowledgeability of this as a wrong action doesn't depend on just (i) the known failure of this action to maximize net value in the near term and (ii) the idea that (i) ensures that the shooting is also very likely to fail to maximize net value in the long run. The lesson of the lottery cases is that beliefs that are highly likely to be true don't necessarily satisfy Safety. According to Safety, for the belief that the shooting was wrong to constitute knowledge, it has to be the case that the shooting could not have easily maximized net value in the long run.

But just as in the Agent-Causal Lottery, where it's easy to see how one could have easily beaten the odds, since each person could have easily chosen a slightly different number, in the mass shooting case too, one could "easily have beaten the odds" in at least some of the following ways. Consider that the El


Paso shooting was among the most lethal shootings in recent US history. And it came just after a string of other disturbingly violent shootings. It is also a shooting that took place in a highly populated city in Texas – a state that ordinarily shows strong resistance to firearm restrictions. Now we know that, historically, social movements that bring about social change often erupt from tragic events. Could it not easily be the case that this shooting helps tip the balance in support of gun-law reform in the United States in such a way that future shootings are significantly reduced, thereby maximizing net value in the long run? It's hard to see what grounds there could be for resisting this judgment in our present circumstances. But even if that fails to be the case, it could easily be the case that this shooting makes the general population much more vigilant and willing to report on people they know who might be at risk of committing a mass shooting, thus preventing many more shootings and thereby maximizing net value in the long run. Further, it could easily be the case that this shooting has "identity-affecting" consequences that maximize net value: it impacts which people come into existence in the long run and thereby impacts the net value of one's action. Perhaps, for example, by killing these people in El Paso, the shooter has impacted history in a way that would prevent the birth of what would otherwise have been the next massively genocidal dictator and thereby maximizes net value in the long run (cf. Lenman 2000). According to the thesis floated by Moore, Smart, and Kagan, these are very unlikely possibilities since they are possibilities where the near-term consequences fail to accurately represent the long-term consequences, but so too is my winning the lottery. And what the lottery teaches us is that the unlikelihood of an outcome doesn't ensure that the unlikely outcome could not have easily obtained.

Recall that Sensitivity undermined the knowledgeability of premise (1). This was due to the fact that Sensitivity undermines inductive knowledge generally. But Safety is compatible with inductive knowledge, so it poses no immediate threat to (1). Rather, Safety obstructs the justification of (4) by imposing a limit on knowledge-generating contractive reasoning. Recall that to reach (4) in the CMR, we needed to rely on the idea that we can transition from claims about how things very likely are(/will be) to how things actually are(/will be). But if Safety is true, then we can gain knowledge of actuality from knowledge of likelihoods only when beliefs so based could not have easily been false. But given the nature of the relation between our present actions and their outcomes (especially in the social world), consequentialism seems to imply that the vast majority of our high-stakes moral beliefs could easily have been false. This puts knowledge of our high-stakes moral beliefs out of reach.

What impact might this have on the justificatory status of high-stakes moral beliefs if consequentialism is true? Well, if J↔K or J→PK are true, then our high-stakes moral beliefs are unjustified since both principles


limit justified beliefs to those that are potential knowledge. If J→JPK is true, the justificatory status of our high-stakes moral beliefs fares a bit better. This is because so long as one is ignorant of the unknowability of our high-stakes moral beliefs, it will be easier to have justification to believe that one is in a position to know them. This is in some sense “good news” for consequentialists since it carves out space for there to be some unknown yet justified high-stakes moral beliefs. But the consequentialists for whom this is good news are only those who are ignorant of the fact that knowing requires either Sensitivity or Safety.

5 Consequentialism and Self-Defeat

According to many, the justification for believing (or assigning high credence to) consequentialism or any other general normative theory of ethics depends substantially on its ability to explain our "considered moral judgments," which include our concrete case moral beliefs (or intuitions) that we reflectively endorse. Our high-stakes moral beliefs form an important subset of our considered moral judgments about the actual world in that they tend to be widely believed and are intuitively striking in the sense that their denials seem clearly to be false. This is doubtless due to the fact that, as defined, our high-stakes moral beliefs involve wantonly harming others in circumstances that have extremely bad effects in the near term.

Consequentialists have regularly argued that consequentialism can explain the truth of at least an important range of our (correct) considered moral judgments while offering error theories for those it cannot explain. Let HSMBs refer to our high-stakes moral beliefs about the moral status of prospective actions that consequentialism can explain. This will include only those HSMBs that are true by the lights of consequentialism. For example, if the HSMB is that it's wrong to murder Sam, it is an HSMB that consequentialism can explain only if killing Sam fails to maximize net value in the long run. Otherwise, it's just not true, so not the sort of HSMB that a consequentialist should expect their theory to explain. Explaining why people have false HSMBs is the job of an error theory.

As is common, I interpret the evidential relevance of explanatory considerations in probabilistic terms. That is, we are to understand data that is explained by a hypothesis as increasing the likelihood of that hypothesis. Thus,

Pr(Consequentialism | HSMBs) > Pr(Consequentialism)

But this evidential inequality is only one part of the story of how we might be able to come to justifiedly increase our confidence in consequentialism in light of our HSMBs. To justifiedly increase our confidence


in consequentialism, we must be able to update (e.g. conditionalize, Jeffrey conditionalize, pseudo-conditionalize) on our HSMBs. But on all accounts, to update on some evidence, E, we have to stand in some epistemically significant relation to E. For example, standard update rules require that we have learned E. Now, it's an interesting question what it takes to "learn" that E. But at a minimum, it should require that one have justification for believing E. The idea that one can justifiedly update on information that one doesn't even have justification to believe is hard to make sense of.

Now, the arguments of the previous section show that knowledge-centric anti-luck epistemology is inconsistent with having justification for believing our HSMBs, so knowledge-centric anti-luck epistemology is inconsistent with the justification of updating on our HSMBs if consequentialism is true. This threatens to yield a form of standard first-order epistemic defeat for consequentialism since a crucial part of the evidence that is supposed to justify it, our HSMBs, is inaccessible to us.

Yet consequentialism might be false. If consequentialism is false, then our HSMBs can be justified provided the correct normative theory of ethics doesn't give the long-term consequences of our actions a role in determining right from wrong (and thereby run afoul of knowledge-centric anti-luck epistemology). Generally, non-consequentialist theories of ethics and rule consequentialist theories don't do this, and they thereby create a more hospitable environment for the justification of our HSMBs. Now, if our HSMBs can be justified, then presumably they can also be justifiedly updated on, and therefore, they can function as evidence for consequentialism. This is a surprising little fact, one that gives us a bit of higher-order information about our evidence for consequentialism:

Higher-Order Evidence: Our evidence (constituted by our HSMBs) supports having an increased degree of confidence in consequentialism only if consequentialism is false.

Now, on the probabilistic outlook I began with, whether we can justifiedly have a high credence in consequentialism depends on our prior confidence in consequentialism conditional on our HSMBs. Having observed how Higher-Order Evidence follows from the earlier arguments, we can note that our evidence now includes both our HSMBs and Higher-Order Evidence. Intuitively, Higher-Order Evidence should have some evidential impact on our confidence in consequentialism. But what kind of impact should it have, exactly? Surely it should not increase our confidence in consequentialism. Put in general terms, it's beyond credulity to think one can know/justifiedly believe that (i) E and that (ii) E supports P only if ¬P and think that one can justifiedly believe (or increase confidence in) P on


the basis of (i) and (ii).10 That leaves two options with regard to the evidential impact of Higher-Order Evidence: either Higher-Order Evidence lowers our posterior confidence in consequentialism, or it screens off the relevance of our HSMBs. I don't know of an uncontroversial reason to prefer either disjunct. So let us remain neutral on this issue. Accordingly, we have the following inequality:

Pr(Consequentialism | HSMBs & Higher-Order Evidence) ≤ Pr(Consequentialism)

That is to say, learning Higher-Order Evidence at the very best screens off whatever justification our HSMBs afforded us for thinking that consequentialism is true; at worst, it should lower our credence in consequentialism. So unless one has sufficient reason to believe (or at least increase confidence in) consequentialism that is wholly independent of our HSMBs, consequentialism is not a moral theory that we can justifiedly believe or have high confidence in. Put differently, if justifiedly believing consequentialism depends on its ability to explain our HSMBs, then consequentialism doesn't seem like the sort of ethical theory that can be justifiedly believed. It's a kind of blind spot, a truth about the structure of moral normativity that we would seem incapable of rationally recognizing as such.

This raises a number of questions about the method of justifying normative theories. I think the most salient one concerns the ability of concrete case judgments or intuitions about merely possible cases to justify normative theories. When it comes to merely possible cases, we can generally specify in the very construction of the cases whether the prospective action under consideration maximizes net value in the long run. If ethical theorizing can function in an epistemically robust way with only such cases to work with, then the applied moral skepticism of consequentialism would not impose a limit on our ability to justifiedly increase our confidence in consequentialism. But, as others have worried, there seems to be something epistemically circular about this.11 Whether it is an epistemically problematic form of circularity is a discussion for another time.

There is one form of consequentialism that may evade these worries: prior existence consequentialism. On such a view, whether an action is right or wrong depends only on its impact on individuals who already exist (or would exist no matter which action were performed). Now, unlike our prospective actions, we have historical knowledge of the actual outcomes of people's past actions and how those actions impacted the people who existed (or doubtless would exist) at that time. Arguably, this historical knowledge can afford us knowledge of the moral status of past high-stakes actions if a version of prior existence consequentialism


is correct. For example, the dropping of a second atomic bomb on a populated area like Nagasaki was gratuitous for quickly ending the war with Japan. The war could have been brought to just as quick an end if the bomb were not dropped or if it were dropped on an uninhabited area of Japan. We know this. So we know that this past action was wrong, even if at the time those who were making the decision were not – by the lights of knowledge-centric anti-luck epistemology – able to know or justifiedly believe it.

The upshot is that prior existence consequentialist views may imply only a limited form of skepticism: we may not be able to know or justifiedly believe whether or not a prospective action of ours is wrong (due to the arguments of Section 4), but at least we can justifiedly increase our confidence in prior existence consequentialist views in virtue of their explanatory power with respect to HSMBs about the past.12
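To make the probabilistic scaffolding of Section 5 explicit, here is one way to render both displayed inequalities using standard Bayesian machinery; this reconstruction is an editorial gloss, not notation from the chapter. Write C for consequentialism, H for our HSMBs, and HOE for Higher-Order Evidence. The explanatory-support claim is a likelihood claim:

\[
\Pr(C \mid H) \;=\; \frac{\Pr(H \mid C)\,\Pr(C)}{\Pr(H)} \;>\; \Pr(C)
\quad\text{iff}\quad \Pr(H \mid C) > \Pr(H),
\]

so H's being well explained by C raises C's probability just in case C makes H more expected than it otherwise is. The second inequality,

\[
\Pr(C \mid H \wedge HOE) \;\le\; \Pr(C),
\]

then covers the two options left open above: HOE lowers the posterior outright, or it "screens off" H, in the sense that conditionalizing on H adds nothing once HOE is in hand, i.e., \(\Pr(C \mid H \wedge HOE) = \Pr(C \mid HOE)\).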

Notes
1. I'm grateful to the Alexander von Humboldt Foundation, which funded this research, as well as to Michael Blome-Tillmann, Michael Klenk, and Rik Peels, who offered helpful feedback on this project.
2. Pritchard (2005, 2008, 2012).
3. See Rabinowitz (2019) for further discussion of safety conditions.
4. Though the target property I refer to as 'being in a position to know' is interpreted differently by each cited author.
5. Alternatively, some knowledge-centric theorists have argued that justification should be understood virtue theoretically in terms of exercises of knowledge-yielding abilities, competences, or dispositions (Miracchi 2015; Silva 2017; Kelp 2018; Millar 2019):
J→EKD: S justifiedly believes P only if S's belief is produced from an exercise of a knowledge-yielding ability (process, competence, disposition).
Provided such competences are understood in an anti-luck fashion as requiring Safety or Sensitivity, then we will get the same result here as with J→PK. However, Silva, Miracchi, and Kelp all seem to allow for justified beliefs in standard Gettier cases (where Safety and Sensitivity are not satisfied) and thus do not require for the exercise of those abilities that one believe safely or sensitively. Millar (2019) may be an exception to this.
6. For critical discussion of prior existence utilitarianism, see Singer (2011).
7. If Sensitivity is right, this will be because our view as to whether or not a prospective action satisfices net value will still have to rely on something like premise (1) of the CMR (see later). While if Safety is right, this will be because just as prospective actions could have easily failed to maximize net value, they could also have easily failed to satisfice net value (see later).
8. For example, one needn't be a consequentialist to endorse the following. Suppose it would otherwise be permissible either to help group A or to help group B by providing one with money. However, you know that group B would use a portion of that money to set in place a series of events whose outcomes would likely severely and unnecessarily harm group A in the long run. Provided helping group A would not have comparably bad likely outcomes, helping group B would (ceteris paribus) intuitively be wrong, and it would be wrong because of the likely consequences of doing so. If that intuitive moral judgement is right, it is not one that could be known or justifiedly believed if the later argument against consequentialism is sound.
9. Yet some have worried about this Moore-Smart-Kagan style of response, because it's not hard to imagine a series of events where, say, murdering lots of children has the highest net value in the long run, and it's not quite clear why we should think that the long-term consequences will (or are objectively likely to) balance out (Greaves 2016). Moreover, Elgin (2015) points out that under certain conditions, the mere fact that there are long-term distant consequences statistically prohibits us from being able to reliably assess the net value of our prospective actions. But Elgin's criticism does not apply with equal force to all forms of consequentialism. For example, some forms of consequentialism give special place to the interests of beings that already exist (e.g. prior existence utilitarianism, ethical egoism). Once that is done, distant-future consequences become irrelevant when it comes to undermining the Moore-Smart-Kagan thesis. An upshot of the epistemic objection that I present later on is that one cannot retreat to something like prior existence utilitarianism to avoid it.
10. If the evidential support relation involved some kind of standard conditional (material, indicative, counterfactual, strict), then (i) and (ii) would yield a contradiction. After all, whichever way the conditional is interpreted, it cannot be self-consistent and non-trivially true that (E & (E → P)) → ¬P. But I'm here assuming that evidential support is not essentially bound to such conditionals and can often be understood in broadly probabilistic terms.
11. See Pust's (2013, ch. 1) discussion of Rawls and the method of reflective equilibrium.
12. For obvious reasons, this historical knowledge is not helpful for versions of consequentialism, like classical utilitarianism, which make the moral status of past actions depend on distant-future outcomes. I'm grateful to Rik Peels for prompting me to reflect on whether our historical knowledge of outcomes could be useful in the justification of consequentialist views.

References
Bird, Alexander. 2007. "Justified Judging." Philosophy and Phenomenological Research 74 (1): 81–110. https://doi.org/10.1111/j.1933-1592.2007.00004.x.
Comesaña, Juan. 2007. "Knowledge and Subjunctive Conditionals." Philosophy Compass 2 (6): 781–91. https://doi.org/10.1111/j.1747-9991.2007.00076.x.
Elgin, Samuel. 2015. "The Unreliability of Foreseeable Consequences: A Return to the Epistemic Objection." Ethical Theory and Moral Practice 18 (4): 759–66. https://doi.org/10.1007/s10677-015-9602-8.
Greaves, Hilary. 2016. "Cluelessness." Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.
Ichikawa, Jonathan Jenkins. 2014. "Justification Is Potential Knowledge." Canadian Journal of Philosophy 44 (2): 184–206. https://doi.org/10.1080/00455091.2014.923240.
Kagan, Shelly. 1998. Normative Ethics. Boulder, CO: Westview Press.
Kelp, Christoph. 2018. Good Thinking: A Knowledge First Virtue Epistemology. New York, NY: Routledge.
Lenman, James. 2000. "Consequentialism and Cluelessness." Philosophy & Public Affairs 29 (4): 342–70.
Littlejohn, Clayton. 2012. Justification and the Truth-Connection. Cambridge: Cambridge University Press.
Millar, Alan. 2019. Knowing by Perceiving. Oxford: Oxford University Press.
Miracchi, Lisa. 2015. "Competence to Know." Philosophical Studies 172 (1): 29–56.
Moore, George Edward. 1903 [1988]. Principia Ethica. Amherst, NY: Prometheus Books.
Pritchard, Duncan. 2005. Epistemic Luck. Oxford: Oxford University Press.
Pritchard, Duncan. 2008. "Knowledge, Luck, and Lotteries." In New Waves in Epistemology, edited by Vincent F. Hendricks and Duncan Pritchard, 28–51. Basingstoke: Palgrave Macmillan.
Pritchard, Duncan. 2012. "Anti-Luck Virtue Epistemology." The Journal of Philosophy 109 (3): 247–79. https://doi.org/10.5840/jphil201210939.
Pust, Joel. 2013. Intuitions as Evidence. New York, NY: Routledge.
Rabinowitz, Dani. 2019. "The Safety Condition on Knowledge." www.iep.utm.edu/safety-c/.
Rosenkranz, Sven. 2018. "The Structure of Justification." Mind 127 (506): 309–38. https://doi.org/10.1093/mind/fzx039.
Silva, Paul. 2017. "Knowing How to Put Knowledge First in the Theory of Justification." Episteme 14 (4): 393–412. https://doi.org/10.1017/epi.2016.10.
Singer, Peter. 2011. Practical Ethics. Cambridge: Cambridge University Press.
Smart, John Jamieson Carswell, and Bernard Williams. 1973. Utilitarianism: For and Against. Cambridge: Cambridge University Press.
Smithies, Declan. 2012. "The Normative Role of Knowledge." Noûs 46 (2): 265–88. https://doi.org/10.1111/j.1468-0068.2010.00787.x.
Sutton, Jonathan. 2007. Without Justification. Bradford Books. Cambridge, MA: MIT Press.
Vogel, J. 1987. "Tracking, Closure, and Inductive Knowledge." In The Possibility of Knowledge: Nozick and His Critics, edited by Steven Luper, 197–215. Lanham, MD: Rowman & Littlefield.
Williamson, Timothy. 2013. "Knowledge First." In Contemporary Debates in Epistemology, edited by Matthias Steup, John Turri, and Ernest Sosa, 1–9. Hoboken: Wiley-Blackwell.

4

Disagreement, Indirect Defeat, and Higher-Order Evidence

Olle Risberg and Folke Tersman

1 Introduction

A belief can be challenged in both direct and indirect ways. While a direct challenge gives evidence that the belief is false, an indirect challenge targets the justification of the belief rather than its truth. For example, the evidence invoked by an indirect challenge may indicate that none of the considerations that the subject took to support the belief actually does so. Alternatively, it may suggest that the belief was formed under the influence of factors (such as sleep deprivation or a mind-distorting drug) that undermine the subject's ability to make reliable assessments in the relevant area. Such challenges do not show the target beliefs to be false, of course. After all, the fact that we have come to believe that Ljubljana is the capital of Slovenia under the influence of LSD is hardly a strong indication that Ljubljana is not the capital of Slovenia. The idea is that they may nevertheless command abandoning the target beliefs, by undermining their justification.

Evidence of the first kind (evidence that pertains directly to the truth of the target beliefs) is usually termed first-order evidence, while evidence of the latter kind is sometimes termed higher-order evidence. Common examples of challenges which involve higher-order evidence are, as illustrated earlier, those that point to factors suggesting that the subject's cognitive abilities are somehow impaired or that the conditions for their proper functioning are not satisfied, such as bad lighting and the like.1 Other examples include so-called debunking arguments, although such arguments usually target whole sets of beliefs (such as all of our moral beliefs) rather than particular ones, by relying on more general theories about their causal background (such as theories about their evolutionary history). Yet further examples of indirect challenges that invoke higher-order evidence are those that appeal to the fact that one's beliefs are disputed by others who, by all appearances, are just as competent and well informed as oneself. If one's views are rejected by people who are in this sense one's "peers" – the argument goes – then this is a reason to


abandon the beliefs even granted that it is not a reason to conclude that they are false.

While appeals to higher-order evidence occur frequently in argumentative contexts, the status of such appeals is contested. Some question the undermining force of higher-order evidence on the grounds that it can be misleading. Higher-order evidence may generate reasons to doubt that a belief obtains support from one's first-order evidence. Those who question its skeptical significance stress that even if it does provide such a reason, the belief might nevertheless be supported by that evidence. What they furthermore stress is that what matters for the justification of a belief is whether it is warranted by the (first-order) evidence and not whether we have reasons to think so. Therefore, if the higher-order evidence is misleading in the way just illustrated, it cannot undermine a belief's justification.

That a piece of higher-order evidence may fail to make a belief unjustified even though it provides a reason to think that the belief is unjustified is the upshot of so-called level-splitting views. What do such views imply about cases where the higher-order evidence is not misleading (because what it indicates – i.e. that the target belief is not warranted by the relevant evidence – is in fact the case)? The answer seems to be that the higher-order evidence still fails to make the belief unjustified, since it was never justified in the first place. Higher-order evidence is supposed to command dropping a belief by defeating the justification that we have for it or by severing the evidential link between our first-order evidence and the belief. However, if the belief never in fact obtained any support from the subject's first-order evidence, then there was no evidential link to sever in the first place and so nothing to defeat. Hence, either way, although higher-order evidence may command revisions among a person's views about whether their (first-order) beliefs are justified, it does not command any revisions among the (first-order) beliefs themselves.2

This account of the level splitters' worries about the undermining force of higher-order evidence raises questions, but our main aim in this chapter is not to evaluate those worries but rather to discuss their implications. We shall focus primarily on their implications for the prospects of the success of skeptical arguments from disagreement. As we noted, disagreement belongs to the set of phenomena that are often classified as higher-order evidence, and the worries about the undermining force of higher-order evidence can therefore be seen to cast doubts on such arguments. After all, what those arguments are aimed to show is precisely that the disagreement that occurs in an area does undermine the justification of the contested beliefs (and not merely that it generates a reason to think that they are unjustified). So if it turns out that higher-order evidence lacks the capacity to yield such implications, it may seem

Indirect Defeat and Higher-Order Evidence

99

that skeptical arguments from disagreement are bound to fail.3 The purpose of this chapter is to examine that idea.

2 Thesis and Plan

What we shall do, more specifically, is challenge the line of reasoning just indicated and argue that the worries about the undermining force of higher-order evidence leave the prospects of skeptical arguments from disagreement untouched. Part of our strategy will be to illustrate that facts about disagreement can defeat by playing other dialectical roles than that which is associated with higher-order evidence. For example, a straightforward way to argue that we should reduce our confidence in a belief that is opposed by someone else is to claim that their dissent counts as first-order evidence against the belief. If the opponent is sufficiently clever and reliable, then their dissent is a negative bit of evidence that directly indicates that the belief is false, regardless of whether it can also serve as higher-order evidence.

Although the suggestion that dissent can defeat by being first-order evidence illustrates that the status of higher-order evidence does not settle the question of the undermining force of disagreement, we shall not rely on it in what follows. The reason is that it is not enough to secure the quite general and grave skeptical conclusions that those arguments are commonly taken to establish. What we have in mind are arguments to the effect that the disagreement that occurs in areas such as morality or religion is so deep and widespread that it undermines the justification of all our substantive beliefs in that area and that it furthermore does so in a way that cannot be compensated for merely by gathering more first-order evidence for those beliefs.4 The problem with the suggestion that a peer's dissent may count as first-order evidence is that the negative bit of first-order evidence that the dissent amounts to must be weighed against the positive bits that may also be available. Since those pieces may outweigh the evidence provided by the dissenting verdict, the suggestion that dissent can undermine in virtue of being first-order evidence arguably cannot account for the wide-ranging and profound significance ascribed to disagreement by the arguments that we focus on.5

We shall therefore not pursue that suggestion but rather focus on a different idea. Our point of departure is the distinction that David Christensen (2010), Maria Lasonen-Aarnio (2014), and others have made between higher-order evidence and what is sometimes called undercutting defeaters or ordinary undercutters.6 Although both higher-order evidence and undercutters are taken to generate challenges that are indirect, the challenges are commonly held to be different, in that the undermining force of undercutters is thought to work differently from that of
higher-order evidence. What we shall argue is that, unlike the idea that disagreement defeats by being first-order evidence, the suggestion that it provides undercutting defeat is capable of accounting for the significance that skeptics have attributed to, for example, moral and religious disagreement.

The plan is as follows: in the next section (Section 3), we elaborate on how the distinction between higher-order evidence and undercutters is commonly understood and explain in more detail how undercutters can defeat. We then go on (in Section 4) to illustrate how disagreement can serve as an undercutter along the lines indicated in Section 3. In Section 5, we address an objection according to which the worries about the undermining force of higher-order evidence apply just as well to undercutters. We respond to this objection by illustrating how the worries about higher-order evidence presuppose certain quite radical externalist or objectivist views about the relation between evidence and justification that, when applied to ordinary undercutters, yield highly implausible consequences. The upshot is that if one insists on a strong parity between higher-order evidence and ordinary undercutters, then the proper conclusion is not that the worries about the undermining force of higher-order evidence extend also to undercutters but rather that they apply neither to higher-order evidence nor to undercutters. This conclusion further supports our thesis that worries about higher-order evidence fail to undermine the skeptical arguments from disagreement. We end by making some concluding remarks (Section 6).

3 Higher-Order Evidence vs Ordinary Undercutters

One of the philosophers who distinguish between defeat provided by higher-order evidence and undercutting defeat is David Christensen. His account of the distinction provides the basis for our subsequent discussion. Christensen's explanation of undercutting defeat is based on the view that our justification for a belief sometimes proceeds via a background belief that links our evidence for the belief to its truth. Roughly, the idea is that while undercutters defeat by disconfirming such linking claims, higher-order evidence does not. The point about linking claims is illustrated thus (Christensen 2010, 188):

I may find out that a yellow Hummer was seen driving down a murder-victim's street shortly after the murder. This may give me evidence that Jocko committed the murder – but only because of my background belief that Jocko drives a yellow Hummer. This background belief is needed to make the connection between the Hummer-sighting and the murderer's identity.

What does Christensen mean in saying that the background belief is "needed"? A hint is given by the fact that he stresses that "information about what Jocko drives is essential to the bearing of the Hummer-sighting on the question of who committed the murder" (Christensen 2010, 188). The idea seems to be that the truth of the claim that Jocko drives a Hummer is required for the Hummer-sighting to stand in a relation of evidential support to the conclusion that Jocko committed the murder. Strictly speaking, that is not quite right, because there are many other claims whose truth would also connect the evidence to the conclusion, such as that Jocko is constantly followed by someone else who drives a yellow Hummer. But we may perhaps assume that some such claim must be true for the Hummer-sighting to evidentially support the pertinent conclusion.

Importantly, however, the mere truth of such a claim does not in turn suffice, on Christensen's account, for the Hummer-sighting to justify (or "give" him evidence for) his belief about Jocko. What has to be the case is rather that he reasonably accepts such a linking claim, at least implicitly. In other words, he must have a reasonable "background belief" whose content is the linking claim. And the point is that if the linking claim that he thus accepts is that Jocko drives a Hummer, then we can sap his justification for the belief by giving evidence that refutes that claim.7 This is because if the linking claim is thus refuted (and is not replaced by some other plausible linking belief), then his evidence fails to justify his belief. That is how an ordinary undercutter is supposed to defeat.8

Why should we suppose that on Christensen's account, the mere truth of some linking claim (that connects the Hummer-sighting to Jocko's guilt) does not suffice for the sighting to justify his belief? The answer is that without that supposition, it is hard to make sense of the following element of the view (Christensen 2010, 194):

Consider a case where justification proceeds via an empirically supported background belief, as in the case where the sighting of a yellow Hummer on the murder-victim's street supports the hypothesis that Jocko is the murderer. My justification can be undercut by my finding out that Jocko's Hummer was repossessed a month before the crime, and he's now driving a used beige Hyundai.

What this passage suggests is that the finding that Jocko drives a Hyundai is enough to undercut his justification for the belief that Jocko did it even if it fails to rule out all claims that potentially connect the evidence to the belief, including (again) the claim that Jocko is followed by another person in a Hummer. Thus, if learning that Jocko does not drive a Hummer is enough to defeat Christensen's belief in Jocko's guilt, as he claims that it is, then it follows that the mere truth of a claim that
links the Hummer-sighting to the truth of the belief is not enough for the evidence to justify it.

In cases that potentially involve undercutting defeat, then, one has to accept a linking claim that connects one's evidence to one's belief in order for the evidence to justify the belief. While that view is a crucial component of Christensen's account, the account does not entail that one must consciously have used that background belief when forming the belief. This is clear from his treatment of another typical example of undercutting defeat, in which "the justification for [someone's] belief that an object is red, on the basis of its looking red, is undercut by the information that it's illuminated by red lights" (Christensen 2010, 194). The linking claim that is defeated by the information that the object is illuminated by red lights is, presumably, some assumption of the following type: in this situation, things have the color that they appear to have. Although the subject may in some sense be relying on that claim, it is far-fetched to think that they have formed their belief via consciously consulting it. (In this regard the case differs from the Jocko case, since it is more plausible to think that the belief that Jocko owns a Hummer actually figured in the agent's reasoning.) We shall therefore take Christensen's account to allow that a consideration can provide undercutting defeat even if the targeted linking claim was not consciously employed by the subject in some piece of reasoning that led to the belief.

As for higher-order defeat, Christensen uses other illustrations. One is a case of a physician who reaches a conclusion about how best to treat a patient on the basis of the patient's symptoms but reduces their confidence in it upon learning that they have not slept in 36 hours. The idea is that if the latter information (which counts as higher-order evidence) defeats the belief, then it does not do so by disconfirming any linking claim. For (Christensen 2010, 188),

[w]hile the information about what Jocko drives is essential to the bearing of the Hummer-sighting on the question of who committed the murder, no fact about my being well-rested seems to be needed in order for the basic symptoms my patient exhibits to bear on the question of what the best treatment is for her.

Thus, in the Jocko case, Christensen assumes that whether the Hummer-sighting supports the conclusion that Jocko did it depends on whether Jocko drives a Hummer. Whether the patient's symptoms support the physician's verdict about how to treat the patient, by contrast, seems not to depend in any way on whether the physician is well rested. After all, no assumption about whether the doctor is rested – and indeed no assumption about the doctor at all – is required to derive the verdict about the
patient’s treatment from facts about their symptoms. In this sense, the fact that the physician has not slept for many hours appears to leave the connection between their evidence and the belief intact. The higherorder evidence thus cannot be seen to disconfirm any linking claim that the physician needs to accept for their evidence to support their belief. Hence, the idea is that if the higher-order evidence nevertheless defeats the belief, it must do so in some other way. The supposed fact that the higher-order evidence in this case leaves the relation between the evidence and the challenged belief intact makes Christensen puzzled about its defeating force (cf. Christensen 2010, 195). This puzzlement is congenial with the attitude that motivates levelsplitting views. If higher-order evidence works along the just-indicated lines, then there is room for the possibility that while the higher-order evidence motivates doubts about whether our first-order evidence supports our belief, the first-order evidence does provide such support. And if it does provide such support, why should we think that the higherorder evidence nevertheless provides a reason to drop the belief? Why, in other words, should we abandon a belief that is supported by our evidence? As announced earlier, we shall ignore these worries about the defeating force of higher-order evidence, although we shall return, in Section 5, to the general question about the relation between higher-order evidence and undercutters. What we want to stress at this juncture is just that, on this picture, whether a given bit of information serves as an undercutting defeater is highly context-dependent. For example, one commonly cited example of higher-order evidence is information that one has been fed a mind-distorting drug that randomly makes things seem to one to be in a certain way. However, there may also be contexts in which the same information instead serves as an undercutter. To see this, suppose that our evidence for believing that a childhood friend has climbed Mount Everest is that we seem to remember seeing this on the news. Whether this evidence supports our belief plausibly depends on whether our memory faculty works properly. And this in turn means that when we learn about the mind-distorting drug, that information severs the link between our evidence and the belief in the way that, on Christensen’s account, is characteristic of undercutters. Hence, depending on the context, one and the same consideration can serve either as an undercutter or as higherorder evidence.

4 Disagreement and Undercutting

The next step in our argumentation is to show that, given the account of undercutting defeat just indicated, the fact that one's views are opposed by one's peers can sometimes serve as an undercutter. This is not
difficult, since all that is needed is a situation in which the disagreement indicates just what the information about the red lamp and the influence of the drug indicates in the cases described earlier; namely, that relying on how things seem to be is not (in the relevant context) a reliable way to figure out what actually is the case. For example, consider a peer disagreement that is due to differences in what the parties, X and Y, take themselves to remember. Suppose that X and Y know from reliable psychological tests that their memory faculties in general function equally well. Suppose also that X and Y have both been present at a yearly parade for many years but have different views about what happened at the parade in 1975. X believes that it occurred on a sunny day and was well attended, with the first band playing a Stevie Wonder song. Y thinks that it took place on a rainy day and was poorly attended, with the first band playing an Aretha Franklin song. They also lack access to other resources (such as newspaper reports and the like) that could help them settle the dispute. Their apparent memories are all they have in the form of evidence that may justify their beliefs.

Now, if we suppose that both X and Y should respond to the finding that they disagree by reducing their confidence in their conflicting beliefs, then this can straightforwardly be explained on the assumption that their disagreement works as an undercutter. Each person can be seen to rely on a linking claim that connects the evidence (the apparent memories) to the truth of the beliefs, whether or not they have consciously invoked it in any piece of reasoning, namely the claim that their memory faculties work reliably. Since that claim is challenged by the fact that the opponent, whose memory faculties are equally good, has reached different beliefs, the connection between their evidence and their beliefs is undermined, so the justification that they have for their respective beliefs is undercut.

The point can be further reinforced by considering peer disagreements that involve more than two people. Suppose that X and Y have three friends – A, B, and C – who were also present at the 1975 parade. While it is known that A, B, and C have memory faculties that are not inferior to those of X and Y, each of the five friends has (on the basis of what they seem to remember) reached different views about what the weather was like that day, what music was played, and so on. When the case is expanded in this way, it becomes increasingly plausible that the connection between what these people seem to remember and the truth of their beliefs about 1975 is too fragile to entitle them to hold their beliefs on the basis of their apparent memories. And what does the undercutting is precisely the fact that people who are no less competent than they are have reached different views about the disputed matters.

The points just made illustrate how disagreement can impact the justification of the contested beliefs in virtue of being undercutting. However, our aim in this section is more ambitious. What we want to
show is not just that disagreement could provide undercutting defeat but also that this view helps motivate the idea that disagreement can generate the grave and wide-reaching conclusions that skeptical arguments from disagreement are intended to establish (e.g.) in the moral domain. We noted earlier that although disagreement can challenge a belief in virtue of being first-order evidence, this idea fails to support conclusions of the kind that skeptics have sought. Why should we think that the idea that information about disagreement can be undercutting fares any better in this regard?

What we mean by saying that a skeptical conclusion is "grave" is that it not only entails that the target beliefs are unjustified. It also entails that their justification cannot be restored by some easy fix, such as gathering more first-order evidence or eliminating inconsistencies among them. As we noted in Section 2, it is doubtful that the fact that dissent can serve as first-order evidence is enough to secure a conclusion of that kind, as the (negative) bit of first-order evidence that consists in a peer's dissent may well be outweighed by other (positive) bits. By contrast, a successful undercutter cannot be outweighed by new first-order evidence in the same way. Rather, if the undercutter refutes the linking claim on which the justificatory force of the subject's first-order evidence depends, then it "silences" the evidence and undermines the justification for their belief in a way that cannot be undone by simply gathering more evidence of the same type. For example, if we believe that Jocko is guilty on the sole ground that one reliable witness reports that a Hummer was seen at the crime site and later find that there is no connection between Jocko and the car, then we should drop our belief in his guilt. Once the linking claim has been refuted, learning that there are in fact several reliable witnesses who independently testified that they saw a Hummer (and not just one) provides no additional, or indeed any, support for thinking that Jocko did it.

The fact that undercutters can thus silence some evidence is one reason for thinking that facts about disagreement, when viewed as undercutting, can generate a form of skepticism about the target beliefs that is grave in the sense described earlier. It is not a decisive reason, however, as there are ways that the significance of an undercutter can be limited in spite of its silencing capacity. For example, the subject might have evidence for the target belief that is not dependent on the linking claim that the undercutter threatens. If our evidence for believing that Jocko did it includes not only the Hummer-sighting but also the fact that his fingerprints were found at the crime site, for instance, then the finding that he does not drive a Hummer of course does not make the belief unjustified by itself. In addition, the significance of an undercutter could be limited by the fact that the target linking claim could be replaced by a plausible alternative linking claim that restores the connection between the evidence and the belief. The assumption that Jocko is followed by someone in a
Hummer is a case in point, because it vindicates the connection between the evidence (the Hummer sightings) and the belief even given that Jocko himself does not drive a Hummer. However, neither of these possible grounds for doubting an undercutter's impact applies in crucial cases of moral disagreement, which are in this regard more similar to the memory case than to the Jocko case. In the parade case, for example, the linking claim which the undercutter threatens is plausibly something like the following: what X seems to remember about 1975 is what actually happened in 1975 (and mutatis mutandis for the others). We further assumed that the disagreeing agents have no other resources besides their apparent memories to try to figure out what actually happened. Hence, if the threatened linking claim is abandoned, it is much harder to see how the connection between their beliefs and their apparent memories could be re-established by an alternative linking claim. In other words, if one's memory is known to be bad, it is extremely difficult (if not impossible) to justify some alternative linking claim that evidentially connects one's seeming to remember that P to one's belief in P in the relevant way.9 The targeted linking claim is in this sense less replaceable than in the Jocko case.

Similarly, many moral disagreements are also at least partially due to the fact that the different participants rely on their "seemings" (or "intuitions") in the formation of their moral views and that their seemings differ, in that certain moral claims that seem true to one person do not seem true to others. This can be so both in the case of more general claims (e.g. that freedom is intrinsically good) and more specific ones (e.g. that some particular action in a certain situation is right). The linking claim that is pertinent in this case is therefore analogous to that relevant in the memory case: roughly, the things that seem to us to be good and right (and so on) really are good and right.10 And, just like in the memory case, it is hard to see how this claim could be replaced by a backup linking claim that reconnects our moral "seemings" to our moral beliefs.

This allows one to see how the suggestion that disagreement can serve as an undercutter also accommodates the generality of the conclusions that arguments from disagreement are taken to establish. Recall that the idea is that the arguments are aimed to undermine not only some limited subset of our substantive moral beliefs but rather all of them. One worry in this context is that although many substantive moral beliefs are contested, there are also those that are universally, or almost universally, shared. For example, most of us agree that it is typically morally right to look after one's family and friends, and it can seem mysterious why facts about disagreement should have the potential to undermine those shared beliefs in addition to the controversial ones.11 However, our suggestion provides an answer to that question as well. The reason is that the linking claim that is targeted – that the moral facts are how they seem to us to be – is one that we rely on just as heavily in the uncontroversial cases as
in the controversial ones. Hence, if peer disagreement gives us reason to abandon that claim, it undercuts both our controversial moral views and our uncontroversial ones to the same extent.12

There is of course more to say here. For example, whether facts about disagreement undermine uncontested beliefs plausibly depends on why those beliefs are uncontested. A non-skeptic might argue that the best explanation of this fact is that they are supported by evidence that is independent of any linking claim that the disagreement threatens. A skeptic may respond, however, by offering alternative explanations. For example, they could present an empirical "debunking" account of our moral beliefs that attributes them, at least in part, to the forces of natural selection and that does not suggest that uncontested moral beliefs are justified in a way that differs from the contested ones. Since the impact of Darwinian processes leads us to expect at least some overlap in our moral outlooks, such an account can help explain why some moral beliefs are uncontested, without vindicating the non-skeptic's response to our argument. While a full discussion of these interesting topics is beyond the scope of this chapter, we think that they illustrate how the debunking strategy and arguments from disagreement can interact to strengthen the skeptic's case (see Tersman 2017 for further discussion).

5 Higher-Order Evidence vs Undercutters Revisited

The discussion in Section 4 is meant to illustrate that skeptics can avoid the worries about higher-order defeat by pursuing the idea that disagreement (e.g. about morality) may instead serve as an undercutter. We also suggested that this idea enables the skeptic to account for both the gravity and the generality of the conclusions that they seek. But what if the puzzles about higher-order evidence apply equally well to undercutters? Then nothing seems to be gained by pointing out that disagreement can play an undercutting role. The purpose of this section is to address this objection.

As we explained in the introduction, the puzzles about higher-order evidence have to do with the fact that it can be misleading, in the sense that it may suggest that the subject's first-order evidence does not support their beliefs even though it actually does so. Level splitters take this to suggest that even if higher-order evidence may justify the subject in thinking that the target belief is unjustified, it does not (thereby) ensure that it is unjustified. The objection that we shall address rests on the observation that something similar can be said about undercutters. Consider again the Jocko case, and suppose that although we have gathered strong evidence against our background (linking) belief that Jocko drives a Hummer, that belief is nevertheless true. What this may be taken to mean is that although we have evidence indicating that our verdict about Jocko's guilt is not supported by our first-order evidence (the Hummer
sightings), that verdict is nevertheless supported by the evidence, in virtue of the truth of our background belief. If so, why should we think that the evidence that we have against the background belief establishes that our belief in Jocko's guilt is unjustified and not just that we should believe that it is unjustified?

It is notable, however, that if the argument for the indicated parity between higher-order evidence and undercutters is sound, then it not only motivates the relatively modest form of level splitting about undercutters just sketched, according to which what matters is whether the linking claim that we accept is true. It also motivates the more radical view that in order for our belief in Jocko's guilt to be justified by the Hummer sightings, it is enough that some such linking claim is true, regardless of whether it is one that we have pondered or have any reason whatsoever to accept. And this implication of the level-splitting approach has highly counterintuitive consequences. For example, it entails that we could be justified in thinking that Jocko is guilty on the basis of the Hummer sightings, not only if we know for certain that Jocko does not drive a Hummer, but also if we have ruled out all other ways that we can imagine in which the Hummer sightings may have anything at all to do with Jocko's guilt. All it takes is that, unbeknownst to us, there is some true story "out there" that connects the Hummer sightings to his guilt in the relevant way. It would suffice, for example, if undetectable aliens, for their sheer enjoyment, like to trick people with yellow Hummers into driving by whenever someone with Jocko's shoe size commits a crime. Even granted the truth of that story, it is close to absurd, we submit, to hold that it makes us justified in believing in Jocko's guilt on the basis of the Hummer sightings.

What this suggests, we think, is that the objectivist or externalist view on justification that fuels the level-splitting position about higher-order evidence does not give a satisfactory account of undercutting defeat. Undercutting is better explained by a non-objectivist or internalist view, which takes the justification provided by a subject's evidence to depend at least partially on personal features of the subject, such as which linking claims they in fact accept and their grounds for doing so. What it also suggests is that if one wants a unified account of higher-order defeat and undercutting defeat (i.e. one which explains both forms of defeat with reference to the same general views about justification), then it is the non-objectivist approach associated with undercutting that should be extended to cover both types rather than the objectivist approach that level splitters invoke in the case of higher-order evidence.

As for the latter suggestion, there is of course room to question whether we should seek such a unified account in the first place. For example, as we have seen, the picture offered by Christensen is a disunified one on which higher-order evidence and undercutting work
differently. Undercutters are taken to defeat by attacking a linking claim, while higher-order evidence defeats without attacking such a claim, which is also what underlies Christensen's puzzlement about higher-order evidence. However, we think that the distinction between cases where linking claims are involved and cases where they are not will ultimately be difficult to maintain. Suppose, for example, that we believe that somebody is in the park on the basis of videos from a surveillance camera. The claim that the camera is functioning properly appears to be a linking claim in the relevant sense, since our belief would plausibly be undercut if that claim were refuted. Compare this with a case where our belief that somebody is in the park is instead formed as a result of direct perception. The disunified view here requires that the claim that our eyes are functioning properly is not a linking claim in the relevant sense, since certain forms of higher-order evidence (i.e. evidence that we see poorly) could defeat the justification of our belief without attacking any such claim. But it is awkward to suppose that these cases should be treated differently. After all, the only difference is that the relevant equipment is in one case outside of our heads and in the other case on the inside. It is hard to see how this mere difference in location could ground a principled epistemological difference.13

The upshot of the view that we find most promising, then, is that the distinction between higher-order evidence and undercutting defeaters should be abandoned. On the view in question, the familiar examples of defeat provided by higher-order evidence simply are cases of undercutting defeat. Hence, this view avoids both the counterintuitive consequences of level-splitting approaches and the alleged puzzles about how higher-order evidence can defeat. In other words, information commonly viewed as higher-order evidence defeats, if at all, in the same way that undercutters do, namely by attacking a linking claim between the agent's belief and their grounds for the belief.14 Consider, for example, the doctor who learns that they are sleep-deprived (cf. Section 3). Which linking claim could that information plausibly be seen to attack? One good candidate, we submit, is simply the claim that they are currently capable of figuring out what the best treatment is on the basis of their observations of the patient's symptoms.

This view of course blatantly contradicts Christensen's intuition that higher-order evidence defeats without attacking a linking claim. But we think that intuitions to that effect can be explained away. Consider the case of causation. Even if an event is brought about by several quite different factors, we may still be disposed to think and speak of one of them as the cause of the event. What explains such dispositions is often just that the other conditions can usually be taken for granted in situations of the relevant type, that they are less easy to manipulate, or that they are irrelevant to the moral questions that we might have about the situation. For example, although the presence of oxygen is obviously needed for a
forest fire to start, we may be more inclined to highlight a recklessly managed campfire when offering a causal explanation of it. The role played by the oxygen is, in a sense, hidden or less visible to us through our background knowledge and our practical and moral concerns. The point is that if such mechanisms underlie our disposition to single out some factors in our explanations while ignoring others, then the disposition appears entirely consistent with thinking that there is no deep metaphysical difference between them.

Something similar can be said, we think, about the tendency to reconstruct some but not other indirect challenges to our beliefs as being cases of undercutting defeat. Undercutting defeat is typically illustrated with cases where the significance of the undercutter is at best highly limited, in that it may succeed in undermining the justification of the specific belief addressed but leaves the subject's other beliefs unscathed. In the Jocko case, for example, this is so because the relevant linking claim (that Jocko drives a yellow Hummer) has such a peripheral role in the agent's web of belief and is not a claim that they normally rely on in their reasoning. There are also cases, however, in which the relevant linking claims are more firmly entrenched in our system, such as the claim that we can trust our senses, or other fundamental belief-forming methods, in certain circumstances. In those cases, we are more committed to the claims in question, because the justification of large portions of our beliefs depends on them in such a way that dropping those claims would potentially command substantial revisions. Now, what we want to suggest is that their entrenchment may make those linking claims less salient to us, just as the causal contribution of a condition (like the presence of oxygen) may be less salient to us because we normally take for granted that it obtains. And since the linking claims that are relevant in the context of higher-order evidence are precisely of that kind, this explains why people are less prone to understand such examples as being cases of undercutting defeat. Just like in the causal case, this explanation of our dispositions is entirely consistent with thinking that the cases in question nevertheless are cases of undercutting defeat.15

6 Concluding Remarks

The controversies about higher-order evidence and the difficulty of upholding the distinction between higher-order defeat and undercutting defeat point, we think, to a broader problem. The broader problem concerns how to find the right mix between subjectivist and objectivist elements in our theory of epistemic justification. The account of undercutting defeat that we have employed relies on a view according to which the justification of a subject's beliefs crucially depends on their background beliefs.16 Those who are puzzled about how higher-order evidence can defeat, by contrast, are inclined to think that the objective fact
that a subject’s first-order evidence evidentially supports a certain claim, perhaps (but not necessarily) by logically implying it, may give them reason to accept the claim even if their background beliefs suggest that the claim does not obtain such support. By proposing that the view on justification that is associated with undercutting defeat applies also in cases of higher-order defeat, we side, in a way, with a non-objectivist approach. Even so, we think that the prospects of a purely subjectivist account of justification are bleak. A theory according to which the logical relations that in fact hold between the contents of a subject’s beliefs are completely irrelevant to what they are justified in believing, for example, is likely to get into serious problems fast. Any plausible theory will therefore have to involve both subjectivist and objectivist components. This much is at least implicitly acknowledged by most commentators in the debate about higher-order evidence. Consider, for instance, Thomas Kelly’s total evidence view, on which what a person is justified in believing depends partly on what conclusions their first-order evidence actually supports (independently of their own assessment of what it supports) and partly on the evidence they may have about what it supports, for example in the form of peer disagreement (Kelly 2010). As Kelly puts it, although one’s first-order evidence “typically counts for something,” it does not “count for everything” (Kelly 2010, 141). The total evidence view can therefore seem like a pleasantly reasonable compromise between the two extremes. A problem with the view, however, is that it is extremely difficult to come up with a principled theory about how the different kinds of considerations should be weighed against each other. Requiring such a theory is asking a lot, of course. But we nevertheless want to register our suspicion that in the absence of such a theory, the total evidence view is a way of simply sweeping the fundamental question of how to combine subjective and objective elements in a general theory of justification under the rug. What we would like to suggest is that to get a better understanding of higher-order evidence and its interaction with other forms of evidence, that question needs to be addressed in a more direct way, for example by articulating more clearly the background constraints that we want a theory of epistemic justification to satisfy.17

Notes

1. In the literature, the term higher-order evidence is used in two ways. Some use it to denote only evidence that pertains to what one's evidence supports. Others use it to denote information about one's being drugged, sleep deprived, or the like; for example, Lasonen-Aarnio describes higher-order evidence as "evidence that [one is] subject to a cognitive malfunction of some sort and hence, that [one's] doxastic state is the output of a flawed cognitive process" (Lasonen-Aarnio 2014, 315–16; Christensen 2010, 186). Since it is the epistemic implications of such information that we are mainly interested in, we use the term in the latter way.
2. See Tal (2018) for a relevant discussion. See also Christensen (2010), Coates (2012), DiPaolo (2018), Lasonen-Aarnio (2014), Weatherson (n.d.), and Turnbull & Sampson (this volume).
3. An argument along these lines is presented in Tiozzo (2019, ch. 3).
4. We say "substantive" since exceptions may be made for, for example, the belief that freedom either is or is not intrinsically valuable.
5. Thomas Kelly reaches a similar conclusion about the significance of disagreement when viewed as first-order evidence in 2010 (see esp. pp. 197f).
6. Christensen (2010). The phrase undercutting defeaters is due to John Pollock, who suggests that what is characteristic about undercutting defeaters is that they attack "the connection between the evidence and the conclusion [which constitutes the content of the target belief] rather than . . . the conclusion itself" (1986, 39).
7. Provided, of course, that he has no other, independent evidence for believing in Jocko's guilt.
8. Note that as we have reconstrued Christensen's account, it does not imply that in cases like the Jocko one, the subject's linking belief must be true in order for their evidence to justify their belief. This leaves room for the possibility that one can undercut the belief also by presenting compelling though misleading evidence to the effect that the linking claim accepted is false.
9. Perhaps one way to restore the link is by learning from God that what we seem to remember impacts what the past is like through some process of backwards causality. More realistic scenarios are difficult to imagine, however.
10. See Michael Huemer's contribution to this volume for a discussion about relevant questions here.
11. Thus, some writers have invoked the overlap in our moral views to try to question the soundness of arguments from moral disagreement (Smith 1994). Others reconstruct the challenge so that it only targets controversial moral beliefs (McGrath 2008).
12. If we were to give up all our moral beliefs, could we still regard each other as peers about moral questions? One might think that seeing somebody as our peer requires that we agree with them about some of the questions in the relevant domain, such as morality. In our view, however, the most plausible version of this idea is one on which two people can count as agreeing morally even if they do not have any positive moral beliefs at all, for example because they both suspend judgment about those issues instead.
13. A discussion about similar cases is provided by White (2010, 598–9). However, although White's view of them is different from ours, his arguments to that effect seem at crucial points to rely on the kind of externalist view about indirect defeat that we have questioned.
14. This accords with Feldman's suggestion that "whether it [higher-order evidence] is a different sort of undercutting defeater or a new kind of defeater is a terminological issue not worth worrying about" (Feldman 2005, 113). However, we disagree with Feldman's claim that higher-order evidence defeats "not by claiming that a commonly present connection fails to hold in a particular case, but rather by denying that there is an evidential connection at all" (Feldman 2005, 113). In the typical cases discussed as higher-order evidence, for example ones that involve sleep deprivation, the point might be exactly that one's faculties are generally reliable when one has slept well but not reliable in the particular case when one has not slept well.
15. A full defense of the suggestion that higher-order evidence defeats (if at all) by serving as undercutters would obviously require addressing a range of further objections. One might argue, for example, that it cannot accommodate
cases in which higher-order evidence (allegedly) defeats even though the subject's evidence appears to consist of facts that jointly logically imply the target belief, in which case no linking claim seems to be involved (see, e.g., Christensen 2010, 187–8). Alternatively, one might object that this view cannot account for the "retrospective aspect" that some think is characteristic of higher-order defeat, which is supposed to entail that, unlike undercutters, higher-order evidence shows not only that the target belief is unjustified but that it was never justified to begin with (Lasonen-Aarnio 2014, 317). Although we think those objections can be met, we shall ignore them in the present context because they have no bearing on the paper's main conclusion, which is that the worries about higher-order evidence do not sap the undermining force of skeptical arguments from disagreement.
16. This formulation of the view is obviously consistent with a number of different ideas about how the features of the subject's situation affect what they are justified in believing, including which of their background beliefs are relevant to defeat. However, the question of how to best spell out the view is beyond the scope of this paper. For further discussion of these issues, see Klenk (2019).
17. Many thanks go to Joshua DiPaolo and Michael Klenk for helpful comments.

References

Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Coates, Allen. 2012. "Rational Epistemic Akrasia." American Philosophical Quarterly 49 (2): 113–24.
DiPaolo, Joshua. 2018. "Higher-Order Defeat Is Object-Independent." Pacific Philosophical Quarterly 99 (2): 248–69. https://doi.org/10.1111/papq.12155.
Feldman, Richard. 2005. "Respecting the Evidence." Philosophical Perspectives 19 (1): 95–119. https://doi.org/10.1111/j.1520-8583.2005.00055.x.
Huemer, Michael. 2020. "Debunking Skepticism." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Disagreement, edited by Richard Feldman and Ted A. Warfield, 111–74. Oxford: Oxford University Press.
Klenk, Michael. 2019. "Objectivist Conditions for Defeat and Evolutionary Debunking Arguments." Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Lasonen-Aarnio, Maria. 2014. "Higher-Order Evidence and the Limits of Defeat." Philosophy and Phenomenological Research 88 (2): 314–45.
McGrath, Sarah. 2008. "Moral Disagreement and Moral Expertise." In Oxford Studies in Metaethics, Vol. 3, edited by Russ Shafer-Landau, 87–108. Oxford: Oxford University Press.
Pollock, John L. 1986. Contemporary Theories of Knowledge. Lanham, MD: Rowman & Littlefield.
Smith, Michael. 1994. The Moral Problem. Philosophical Theory. Oxford: Blackwell.
Tal, Eyal. 2018. "Self-Intimation, Infallibility, and Higher-Order Evidence." Erkenntnis 12 (1): 398. https://doi.org/10.1007/s10670-018-0042-4.
Tersman, Folke. 2017. "Debunking and Disagreement." Noûs 51 (4): 754–74. https://doi.org/10.1111/nous.12135.
Tiozzo, Marco. 2019. "Moral Disagreement and the Significance of Higher-Order Evidence." PhD thesis, Gothenburg University. https://gupea.ub.gu.se/handle/2077/57974.
Turnbull, Margaret Greta, and Eric Sampson. 2020. "How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Weatherson, Brian. n.d. Do Judgments Screen Evidence? Accessed July 30, 2018.

Part II

Rebutting Higher-Order Evidence Against Morality

5 Higher-Order Defeat in Realist Moral Epistemology

Brian C. Barnett

1 Introduction1

According to a realist moral ontology, moral properties are real features of the objective, mind-independent world. On a robust version of this thesis, these features occupy an ontological realm of their own: they are not part of the natural world and for this reason are often taken to be causally inefficacious.2 If cognitivism is correct, these non-natural properties manage to figure in propositions that serve as the contents of moral beliefs, which are true or false in the full-fledged sense of bearing a correspondence to reality (the correspondence theory of truth). Assuming there's no systematic reason for all such beliefs to be false (contra some error theories), at least some of them may be (indeed are likely to be) true, whether or not we can tell which. To rule out sheer luck in obtaining moral truth, a maximally robust moral realist package (hereafter realism) combines the aforementioned views with an optimistic moral epistemology: many of our ordinary moral beliefs constitute knowledge, or at least enjoy some weaker positive epistemic status, such as epistemic justification.3

Each component of this package – robust moral realism, cognitivism, the correspondence theory of truth, the denial of error theory, optimistic moral epistemology – has been the locus of attack.4 But how the package hangs together raises difficulties in its own right, since some of its components seem unlikely bedfellows. My focus here is on whether optimistic moral epistemology is plausible given the rest of the package. Specifically, I will address a generalized epistemological version of the debunking challenge. This challenge grants the non-epistemic components of realism, along with some modest epistemological footing: many of our ordinary moral beliefs are prima facie justified. However, the debunking challenge purports to debunk this justification by establishing that realist assumptions yield defeaters (debunkers), so that moral beliefs are not ultima facie justified.

Among the most prominent candidate debunkers are the so-called evolutionary debunking arguments5 (EDAs), which begin with the premise that our moral beliefs are shaped by Darwinian processes (e.g. random
gene introduction and mutation, genetic inheritance, and their survival value) – factors that have no connection to a non-causal, non-natural moral domain. Therefore, we should not expect moral beliefs to track the truth. They are unreliable and therefore defeated (at least upon grasping the relevant facts). The problem generalizes to beliefs about any non-causal, non-natural ontological realm, such as Platonic abstracta, yielding the so-called Benacerraf–Field challenge (BFC).6

Another prominent candidate debunker arises in the disagreement literature.7 Given widespread and persistent moral disagreement, realism implies that we cannot consistently maintain our moral beliefs without dismissing those of others. Dismissal is epistemically unproblematic in cases in which we can rationally attribute a greater probability of error to interlocutors. However, it appears that we are often confronted with epistemic peers – those who share the same information, are just as smart, are equally evidentially responsive, and the like. In short, they are just as likely as we are to be right.8 According to conciliationism (in its strongest form9), the rational response is to suspend judgment – that is, to give up the contested beliefs.10 If correct, the ramifications are potentially quite skeptical.11 After all, you can find a smart philosopher who will disagree about nearly anything. Perhaps we can salvage at least a small core of moral beliefs that are sufficiently agreed on that they are not subject to the problem. But we must consider whether even those beliefs are indirectly called into question by a conciliationist treatment of wider-ranging metaethical disagreement.

Many responses to debunking arguments have questioned their empirical details and metaphysical presuppositions.12 Unfortunately, these are shaky grounds on which to rest, even for those who are scientifically informed and well trained in metaphysical theorizing. So much the worse for the moderately informed layperson: an agent who possesses the minimal information about evolution and disagreement sufficient to give the debunking threats some foothold but who nevertheless has no recourse to the complex empirical and metaphysical challenges needed to dispel such threats. Realists presumably prefer a response that protects a wide range of ordinary moral beliefs of average folk. And in order to achieve this, we need our response to go epistemic.13

In my preferred epistemological framework – internalist evidentialism – the debunking challenge amounts to the claim that there is available evidence such that those who are sufficiently aware of the relevant facts (about evolution, disagreement, etc.) are no longer justified in their moral beliefs (given the remainder of the realist package). More specifically, candidate debunkers fall into a class of evidence of recent interest to epistemologists – higher-order evidence, or evidence about evidence.14 Peer disagreement provides evidence about how others assess the evidence, which in turn yields comparative evidence about
how well we have assessed ours. EDAs and the BFC provide evidence that casts doubt on the reliability of the evidence for our moral beliefs (moral intuition, reasoning, testimony, etc.). This higher-order evidence purports to defeat the first-order evidence for our moral beliefs. Whether the debunking challenge succeeds, then, is not determined solely by the debunkers' empirical and metaphysical claims but also in part by the nature of higher-order defeat.

My aim in this chapter is twofold: one primary and one secondary. My primary aim is to defend an optimistic moral epistemology given the rest of the realist package by appealing to a theory of higher-order defeat in such a way that the defense extends even to laypeople who are not themselves in a position to formulate a response. My secondary aim is to introduce a theory of higher-order defeat and demonstrate its utility in moral epistemology. I first outline the theory in the next section, which I then apply to candidate debunkers in subsequent sections. I will show that they fail on purely epistemic grounds, even granting their empirical and metaphysical premises. Despite this failure, I conclude in the final section by suggesting alternative roles that debunking may continue to play in moral epistemology.

2 Conditions on Higher-Order Defeat15

The proper starting point in a theory of higher-order defeat is the concept of higher-order evidence. Roughly, higher-order evidence is evidence about evidence. The evidence it is about is lower-order evidence, which may in turn be about further evidence. At some point, we reach first-order evidence: evidence directly concerning the proposition at the object level (one that is not evidence for some further proposition). Suppose, for example, that E2 is evidence about evidence E1, which is evidence about proposition P, which is at the bottom of the evidential chain. Then E2 is higher order (specifically second order), E1 is lower order (specifically first order), and P is at the object level. If E2 being about E1 suffices to render it higher order, then when some X is added to E2, E2 + X continues to be about E1 and is likewise higher order, even if X is first order. Also notice that E2 being about E1 in one way does not rule out E2 being about E1 in additional ways, nor does it rule out E2 being about some additional evidence E1*. I term this supportive complexity. We will soon see that such seemingly trivial details matter.

Higher-order evidence raises two primary questions about evidential support. First, what bearing, if any, does evidence at a higher level have on the object level? Second, there is the question of levels interaction: how do different evidential levels interact when combined? What, in other words, does the total evidence support?

One promising answer to the first question is that evidential support can "filter" down from higher levels, through lower levels, and to the object level. As Feldman (2006) puts it, "evidence of evidence is evidence." Here's a more careful formulation:

Filtration Principle: If E2 is evidence that there is evidence E1 for P, then E2 is evidence for P.

For example, let E2 be evidence that a reliable mathematician sincerely tells you that there is a sound proof (E1) for theorem P. If all you know is E2, you have enough evidence to believe P. So filtration is intuitively plausible.16 If true, this has immense epistemic importance, especially when one possesses higher-order evidence but lacks access to the first-order evidence. It potentially explains the evidential story in many cases of testimony17 and how we retain justification in cases in which we no longer remember our original evidence but nevertheless remember having once had evidence (a solution to the problem of forgotten evidence18).

Despite its plausibility, filtration has at least three exceptions. First, Hud Hudson observes that one can have a defeater D that defeats one's support for P without defeating E2's support for the claim that E1 supports P.19 For example, Hud convincingly lies to Rich that it's his birthday, giving Hud evidence E2 that Rich has evidence E1 that it's Hud's birthday (P), even though Hud knows it's not his birthday (~P). This is because he possesses prior evidence D against P, which defeats E2 but has no bearing on E1's support for P. Feldman responds that this doesn't refute filtration, since E2 continues to support P. What's true is that E2 + D does not support P. I agree. However, just relocate the defeater and voilà: if E2* = E2 + D, then E2* supports that E1 supports P, even though E2* does not support P, contrary to filtration. So filtration occurs only when the higher-order evidence does not contain (undefeated) defeaters.

A second exception arises when one doesn't understand much about what one has evidence for. So suppose that E2 supports that E1 is a sound argument for P. If E2 is testimonial evidence given by me in my logic class to a student who knows the standard definition of deductive validity but has a confused understanding of it, then perhaps E2 does not yield evidence for P. E2 needs to contain adequate conceptual information about the propositions that it supports in order for filtration to kick in.

For the third exception, suppose we do a little reflection in epistemology class and conjure up some intuitive evidence E2, which supports that certain experiences (or propositions) E1 are evidence of trees. Contrary to filtration, if no one is having those experiences (nor any reason to think the propositions are true), then E2 doesn't give us evidence of trees. For E2 to support P, it needs to ground E1 by supporting that E1 is an occurrent experience or contains true propositions.
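
Gathering these exceptions together yields a restricted schematic restatement of the principle. The restatement below is offered only as a rough consolidation of the three exceptions just described – the predicate names are shorthand introduced here for convenience, not part of the official theory, and the qualifications developed in the next paragraph still apply. Writing S(E, Q) for "E supports Q":

\[
\big[\, S\big(E_2,\ S(E_1, P)\big) \;\wedge\; \mathrm{NoDef}(E_2, P) \;\wedge\; \mathrm{Grounds}(E_2, E_1) \;\wedge\; \mathrm{Conc}(E_2, E_1, P) \,\big] \;\Rightarrow\; S(E_2, P),
\]

where NoDef(E2, P) says that E2 contains no undefeated defeater for P, Grounds(E2, E1) says that E2 supports that E1 is an occurrent experience or contains true propositions, and Conc(E2, E1, P) says that E2 contains adequate conceptual information about E1's relation to P.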

Only in some cases does filtration require E2 to ground E1 or contain adequate conceptual information about E1's relation to P. E2 needs to meet these conditions in order to support P in virtue of its support for the claim that E1 supports P. But due to the possibility of supportive complexity, E2 can support P in other ways. For example, E2 might also support that E1* supports P. If E2 grounds E1* and contains adequate conceptual information about E1*'s relation to P, then it can support P via this route without grounding E1. So there are various restrictions, and some of them apply only some of the time.

But suppose that we have a case in which filtration occurs. Then it is crucial to note what I term the dampening effect. Suppose that E1 supports P to degree N1. And suppose that E2 supports to degree N2 that E1 supports P (to degree N1). I claim that in general, even when E2 meets the conditions for filtration, E2 supports P to some degree M below both N1 and N2. This is because the further the evidential distance from a proposition, the greater the risk of error, the less likely the proposition is to be true, and the weaker the support – that is, the more it has dampened. To illustrate the simplest case, where N1 = N2 = N, compare the following two scenarios: you are confident to degree N that you have a proof of theorem T versus you are confident to degree N that you have a proof that there is a proof of theorem T. It seems to me that the first justifies more confidence in T. However, dampening does not entail that higher-order evidence always provides weaker support at the object level than does first-order evidence, since, for example, higher-order evidence can accumulate in the absence of first-order accumulation.

We now turn to cases in which one has both higher-order evidence and lower-order evidence, which raises the following question: what does the total evidence E1 + E2 support? In cases in which E2 is friendly to E1 (i.e. E2 supports E1 or agrees with E1 by supporting that it bears the relation to P that E1 actually does bear to P), there is little difficulty in deciding how the two levels interact. Although there may be disputes about whether E1 + E2 supports P more than E1 alone, there seems to be a widespread presupposition that E1 + E2 supports P. In hostile cases, E2 challenges E1 in some way. There are also inert cases – cases in which E2 is neither friendly nor hostile. Although this last case is not obvious and will play a crucial role in my treatment of debunking, we will come back to it later. For now, we focus on hostility.

The challenge to E1 in hostile cases can be tangential to E1: E2 supports that some further evidence E1* is evidence against P. Then, via filtration, E2 is evidence against P, thus serving as a rebutting defeater for E1. The challenge to E1 can also be direct: E2 might challenge E1's intrinsic merits (e.g. it contains false propositions, or it is a blurry perception) or challenge E1's bearing on P, thereby serving as a potential undercutting defeater.

Now comes what I term the latching problem. Suppose E2 is evidence that some evidence supports ~P, but you don't know what that evidence is. Unbeknownst to you, it is the evidence E1 that you are already relying on to support P. E2 cannot latch onto E1 and so cannot serve as an undercutter.20 From your perspective, E2 might as well be about some other evidence of which you know nothing. Even though it cannot undercut your evidence, E2 still has the potential to rebut E1 as evidence against P. But in order to do so, E2 will have to be evidence against P by meeting the conditions necessary for filtration. In that case, its negative impact is lessened due to dampening and can more easily be withstood by the first-order evidence.
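As a minimal numerical sketch of dampening – my rendering, on the simplifying assumptions that degrees of support behave like probabilities and that E2 bears on P only via the claim that E1 supports P, neither of which the chapter commits to – the two levels chain by multiplication:

$$M = N_1 \cdot N_2 < \min(N_1, N_2) \quad \text{whenever } N_1, N_2 < 1.$$

For instance, if E1 supports P to degree N1 = 0.9 and E2 supports that claim to degree N2 = 0.9, then E2 supports P only to degree M = 0.81; a rebutting defeater routed through filtration is weakened in just this way.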

We have a few last notes about levels interaction. First, as with filtration, some of the above claims won't apply in special cases, due to conceptual inadequacy and supportive complexity. Second, we need a no-defeater clause, since even in a case of higher-order defeat, the defeater can itself be defeated. Third, defeat can be partial. When I don't make that qualification, I mean full defeat, since that is the primary aim of candidate debunkers. So when hostility leads to at least partial defeat, we need a condition that guarantees it meets some degree threshold for full defeat. One plausible view is calibrationism, which is roughly the view that the degree to which P is supported should match the degree to which E2 says E1 supports it.21 For now, we'll just say that the degree has to be "sufficient" as a placeholder and leave details to be worked out as the need arises.

So the theory is the following: except in special cases (due to conceptual inadequacy and supportive complexity), higher-order evidence E2 (fully) defeats an agent's lower-order evidence E1 concerning P (i.e. E1 supports P, but E1 + E2 does not support P) iff all of the following obtain:

1. The agent possesses E2 (in addition to E1).
2. The agent does not possess an undefeated (full) defeater for E2.
3. Either: (a) E2 is directly hostile to E1 and latches onto it, in which case E2 has at least some undercutting power; or (b) E2 is tangentially hostile to E1 and has an object-level bearing in virtue of meeting the conditions necessary for filtration, in which case E2 has at least some rebutting power, albeit dampened.
4. E2's hostility is sufficiently strong: its rebutting or undercutting power (the greater of the two if E2 has the potential for both) surpasses the minimum threshold.
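For ease of reference, the biconditional can be abbreviated schematically – my shorthand, not the chapter's notation:

$$\mathrm{Defeat}(E_2, E_1, P) \iff \mathrm{Poss}(E_2) \wedge \neg\mathrm{Def}(E_2) \wedge \big[(\mathrm{DirHost} \wedge \mathrm{Latch}) \vee (\mathrm{TanHost} \wedge \mathrm{Filt})\big] \wedge \mathrm{Power} \geq \theta,$$

where $\theta$ stands in for the "sufficient" threshold left as a placeholder above.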

3 Debunkers as Higher-Order Defeaters

Having outlined the theory of higher-order defeat, we now apply it. In this section, we see how candidate debunkers can be thought of as potential higher-order defeaters, make an initial assessment of whether they meet the conditions of the theory, and in the process see why existing responses to debunking fall short, thereby motivating the need for a new solution.

We’re granting for the sake of argument that we have evidence for our first-order moral beliefs. By definition, candidate debunkers aim to challenge this evidence and so appear to be hostile. Indeed, by definition, they purport to be defeaters. However, they cannot consistently purport to offer direct evidence against first-order moral beliefs, since that would be to put forward exactly what they contest.22 Candidate debunkers must pose higher-order challenges, and the theory of higher-order defeat therefore applies. As a precondition of the theory, candidate debunkers must be possessed by the agents in question. Those who’ve never been exposed to the relevant considerations are not subject to epistemic threat. This is not a trivial point. Determining the relevant group of agents is more difficult than it might otherwise seem. One extreme is to think that only a handful of specialists who have read the literature, or discussed it in a seminar, are sufficiently aware of the relevant considerations. However, one need not know the details provided by such contexts for the challenge to kick in. It doesn’t take much to realize that there’s a lot of moral disagreement among smart, informed people. And most adults nowadays know about evolution, and at least some of the more reflective of them might have realized that this affects our moral judgments. It is plausible that these rudimentary considerations are enough to get the threat going. So I assume that a significant percentage of average folk are in possession of at least one candidate debunker. Besides, protection from the threat by appeal to ignorance is unsatisfactory. After all, those of us attempting to respond to the debunking challenge are well informed, and we too want our moral beliefs to be safe! So minimizing the relevant group doesn’t provide a satisfactory response. The other precondition to be satisfied is exclusion from special cases. The first such case, conceptual inadequacy, is easily dismissed: the group we have settled on (ranging from most average folk to experts) presumably has a sufficient handle on the basic concepts, such as evidence and support, and can carry out simple valid deductions. The other special case is supportive complexity. There is a simple trick for obtaining supportive complexity: plug into the theory one’s total body of higher-order evidence, which contains debunking evidence as a mere proper part. I will follow a more useful approach: isolate the debunking evidence, plug this alone into the theory, see what results, then separately consider after the fact whether the remaining evidence defeats those results. Since we are limiting consideration to consistent candidate debunkers (ones that do not presuppose first-order moral claims), there is no reason to think the isolated debunking evidence will enter into any evidential relations other than those that it wears on its sleeve: its negative relations to first-order moral evidence. So with the proper construal of the higher-order evidence under consideration, we can safely set aside supportive complexity. Having identified a relevant group of agents and ruled out special cases, the theory of higher-order defeat kicks in, and its more substantive

124

Brian C. Barnett

conditions must be applied. First, we should decide whether direct or tangential hostility is at issue. Some treatments of disagreement rely on the filtration principle: finding out that a peer disagrees yields evidence that there is evidence against one's view, which via filtration is evidence against the view, thereby yielding tangential hostility.23 This is a mistake, even if the conditions necessary for filtration are satisfied. The dampening effect weakens the bearing of the evidence from disagreement on our moral beliefs, increasing the ease with which it can be outweighed by the first-order evidence. So if treated as tangential hostility, the support for moral beliefs can remain intact in the face of disagreement, albeit to a weakened degree.

Of course, conciliationists might observe that we typically know numerous peers who disagree on any given topic and so might make the argument that dampened support accumulates. However, as frequently observed, it is dubious that our peers' beliefs are evidentially independent of one another, and to treat them as such would be illegitimate double-counting of the available support.24 So it is doubtful that repeated instances of dampened support accumulate. If they do, then presumably not according to a linear measure, thereby limiting the prospects for any cumulative effect to surpass the threshold necessary to fully override the competing first-order evidential support.

There is a more promising treatment of disagreement. Discovering that a peer's evidential assessment conflicts with one's original assessment yields reason to doubt whether the evidence really supports what one initially thought; that is, it's directly hostile. The case for direct rather than tangential hostility is even clearer in the case of EDAs and the BFC, since they explicitly attempt to make trouble for our evidential basis.

Given that candidate debunkers are best treated as cases of direct hostility, the next condition to examine is the latching requirement. But we can easily see that the candidate debunkers are about the very evidence on which we rely.25 There's only a little room for disconnect. Mental compartmentalization might do the trick, or Hume's (in)famous treatment of skepticism: philosophical reflection giving rise to defeaters will inevitably be overturned by habit, at which point we'll resume normal life and forget about the defeaters. Perhaps something like this can stop us from connecting the dots in everyday moral belief formation, preventing latching and saving us from defeat. But this will depend on how conscious evidence and belief need to be, which is a contentious matter.26 Moreover, most of us find such solutions unsatisfactory, since we prefer to retain justification even in circumstances of full and integrated awareness. Grant, then, that latching occurs, at least in most circumstances. As a result, there's no need to revert to the weaker tangential treatment with the dampening effect of filtration. Given all this, there's at least some degree of defeat unless the defeaters are defeated.

Now we must come to questions of degree. On a weak interpretation, candidate debunkers aim merely to show that our evidence is
less reliable than initially thought. Although I will later return to this possibility, I set it aside for now, since something stronger is normally intended: candidate debunkers aim to show that our moral evidence should not be relied on at all. On this stronger interpretation, since all the previous conditions are met, it follows from calibrationism that there is full defeat – unless the defeater is defeated.

This leads us to the dominant strategy in the current literature: defeat the defeater by providing direct evidence against the debunking premises. Rather than review the vast literature in the limited space here, it suffices to note that there are many versions of this strategy, each with its counterarguments, both sides appealing to subtle and complex empirical and metaphysical claims. This is not to deny that there is a more plausible side. But the level of dialectic and requisite background knowledge is largely inaccessible to non-experts. The weakness in this approach is that laypeople lack the defeater defeaters.

It seems, then, that candidate debunkers satisfy all of the conditions on higher-order defeat, for at least a relatively large group of average people – despite the usual strategies for rebuttal. If so, debunking succeeds for the relevant group. Is there a way out?

4 The Inertness Thesis

There is one detail we have missed. I have glossed over whether candidate debunkers are hostile. They purport to be, and upon first inspection, it seems obvious that they are. Certainly, they aren't friendly. I claim that they are inert (neither friendly nor hostile) and that this is the key to answering the debunking challenge. Seeing this requires a careful examination of the conditions under which unfriendly cases are hostile (hence have defeating power) or inert (thereby lack such power).

First, I distinguish evidential relations from what I term external measures of evidence. The former is any relation in virtue of which something qualifies as evidence concerning a proposition. The latter is any other evaluation of the evidence. External measures capture how well evidence objectively matches reality. For example, consider a visual impression of a tree, which (along with conceptual information about treehood) is evidence that there is a tree. The relationship between the visual impression and the proposition is an evidential relation. Whether there really is a tree (whether the evidence is veridical or misleading) is an external measure. Likewise, the objective probability that there is a tree (given the visual experience) is an external measure. If you prefer, feel free to think of external measures in terms of reliability, safety, tracking, and so on, though I will use these terms interchangeably since the details won't matter.27

Now suppose your tree impression is hallucinatory (or that your environment makes this objectively probable), but you have no indication
that this is so. Your tree impression continues to be evidence for the tree proposition despite its objective unreliability. The latter does not negate the former. In general, external measures on evidence do not alter the evidential relations themselves.28 Of course, if you find out about an external measure (regardless of whether your information about it is correct or misleading), this awareness is new evidence. If hostile, this new evidence undercuts the initial evidence. Still, the initial evidential relation holds. What fails to hold is a positive evidential relation between the new total body of evidence (the perception plus awareness of unreliability) and the proposition that there is a tree. In other words, defeat works not by nullifying an evidential relation that is defeated but by expanding the body of evidence to one that does not bear that evidential relation. So while external measures on one's evidence are inert, awareness of positive external measures (more objectively probable than not) is friendly, and awareness of negative external measures (less objectively probable than not) is hostile. This does not depend on whether the awareness is veridical. It can be misleading about the external measures on one's first-order evidence. Whether misleading or veridical, it makes the same contribution to evidential support.

Moreover, awareness of external measures is not required for evidential support. In other words, the absence of such higher-order evidence is inert. Think, for example, of a small child, who can have justified beliefs based on perceptual evidence without yet having a clue about external measures. In fact, it would lead to a vicious infinite regress to require that evidential support always be confirmed by additional evidence about external measures.

Not only is the absence of any information about external measures inert, but awareness of one's ignorance of such information is also inert. Having just observed in the last paragraph that we necessarily have good evidence without other evidence about any external measures to support it, you and I are aware in this moment that we are ignorant in this respect, and surely this does not defeat our justification. As another example, consider adults unindoctrinated into the world of epistemology. When asked why they think what they think, they may be able to identify their evidence and cite it (or maybe not), but when challenged with questions about the objective likelihood that their evidence actually fits the external world (especially in light of skeptical scenarios) and asked how to noncircularly confirm reliability, they probably cannot give answers no matter how hard they try. After all, it's controversial whether any expert epistemologists are up to the task.29 But this failure should not mean that our evidence is defeated, on pain of an extreme justificatory skepticism.30

On the other hand, suppose you learn that you've been drugged. There's a 50/50 chance of hallucination, which does seem to defeat. We now have a hostile situation. In contrast, the preceding observations make clear that being aware that one is ignorant of objective probability is inert. What's the difference? In the hostile case, you have reason to think that
the objective relation between your evidence and the proposition confers a 50/50 probability (i.e. the evidence confers what I term neutral support). In the inert case, there's an absence of support altogether (neither positive, negative, nor neutral). The evidence simply fails to yield a verdict about what the probability is (not even that it lies within a relevant ballpark). In other words, it is entirely inscrutable. Neutral probability is crucially different from inscrutability.31

To further illustrate the difference, suppose a quantum mechanical calculation yields a 50/50 probability that an electron will be spin up. Consider a layperson who is given the mathematical apparatus and told what it allows one to calculate but who is unable to perform the calculation. When asked what the probability is, their answer should be that they don't know, not that the probability is 50/50. The probability is inscrutable to them, not neutral.

This difference between a 50/50 probability and an inscrutable probability corresponds to a distinction in the philosophy of probability. According to the infamous indifference principle (or principle of insufficient reason), when there's no reason to prefer one proposition over another, they automatically receive equal probability. This principle is now in wide disrepute. In addition to not making sense of the quantum mechanics example, it is a well-known result that the principle yields incoherent probability assignments.32
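A standard illustration of that incoherence – my addition, not the chapter's – is the cube factory: a factory produces cubes with side length s somewhere between 0 and 1. Indifference over side length and indifference over volume v = s^3 then assign different probabilities to the very same outcome:

$$\Pr\!\left(s \le \tfrac{1}{2}\right) = \tfrac{1}{2} \quad \text{(indifference over } s\text{)}, \qquad \Pr\!\left(v \le \tfrac{1}{8}\right) = \tfrac{1}{8} \quad \text{(indifference over } v\text{)},$$

even though $s \le \tfrac{1}{2}$ and $v \le \tfrac{1}{8}$ describe one and the same event.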

Let's sum up. First, we are concerned here only with unfriendly cases that are directly about one's first-order evidence, since candidate debunkers are not best treated as tangential. We need to divide the direct unfriendly cases into the hostile and inert categories. The direct unfriendly cases that we have determined to be inert are the following:

(i) External measures on our evidence.
(ii) Lack of awareness of such external measures.
(iii) Awareness of our ignorance of external measures.
(iv) Evidence of their inscrutability.

The following are the direct unfriendly cases that are hostile:

(v) Evidence of negative/neutral evidential relations.
(vi) Evidence of negative/neutral external measures.

To determine whether candidate debunkers are hostile or inert, we need to determine which of these categories they fall under. First consider peer disagreement. By definition, if a person S1 has good reason to believe that another S2 is an epistemic peer, this is evidence for S1 that S2 is as likely as S1 to be right. When S1 also finds out that S2 disagrees, S1's total evidence yields a 50/50 shot that S1 is right. So peer disagreement yields neutral, hostile support. After all, it is isomorphic to the earlier drug case, which is well accepted as a case of defeat.33 As this result shows, the
theory of higher-order defeat entails conciliationism about peer disagreement (at least given peerhood as here conceived).

But how often is there good reason to accept peerhood? Although it is often irrational to deny peerhood, let's be careful what we infer from this. We often have new thoughts that we think haven't yet occurred to others. Moreover, people frequently change their minds when given new thoughts. So we should recognize that we sometimes have evidence that others lack that might change their minds – not because we are smarter but for practical reasons. Of course, we should also recognize that others sometimes have evidence that we lack that might change our minds. And since thinking happens continually, our evidential situations are ever in flux. As a result, in many cases of disagreement with at least near peers (that much is easily discerned), it's an open question whether I'm currently in a better position to judge, they're in a better position to judge, or we're equally well positioned. So I might be more likely to be right, they might be more likely, or we might be equally likely.34 The best description of this situation is that the probability is inscrutable, not 50/50. But peerhood is the minimal comparative evidential status that when combined with disagreement yields (full) defeat. Given this, along with peerhood inscrutability, the inertness thesis yields the conclusion that the total evidence in many ordinary cases of disagreement is inert rather than hostile, despite being unfriendly.35

Two qualifications are in order. First, it's not always like this. Sometimes we discuss an issue with another so thoroughly that it seems nothing further could be added to tip the scales, in which case it seems clear that we're on equal footing, yielding a 50/50 chance of being on the right side of the dispute. Such cases yield neutral support, which is hostile and therefore defeats. Plausibly, however, these are uncommon in moral disputes. Even for moral beliefs nearing universal acceptance, there are dissenters motivated by metaethical concerns, the disputes over which reintroduce peerhood inscrutability. If so, many moral disagreements are only of the inert sort.

Second, up to this point we've considered only full defeat. Partial defeat I happily concede in a wide range of cases. Even if peerhood is inscrutable, near peerhood (or some weaker positive status) is more easily discernible. Suppose you receive testimony against your belief, and your evidence affords the testifier some minimal positive evidential status. That there is some contest to your belief is not inscrutable, yet it is insufficiently hostile for full defeat. Its hostility is sufficient for mere partial defeat. Given partial defeat, you should maintain your doxastic stance with decreased confidence. Inscrutability doesn't save you from this result; it at best offers protection from full defeat.

Turn now to EDAs, which are spelled out in various ways, sometimes without much to do with evolution. Some are purely probabilistic: one might suggest that any given moral belief, being only one of many
competing beliefs, has a low probability due to this proportion.36 But this cannot be right, since it would yield reason to think the belief is probably false, from which one should infer its negation, yielding justification for a different moral proposition instead. This shifts justification rather than removing it. Better would be an argument establishing neutral probability. Such an argument cannot be made purely by appeal to the fact that my belief in P is only one option of two, the other being ~P. This would presuppose equal probabilities, which I have no reason to suspect given our assumption that I already have first-order evidence that seems to point in one direction over the other. No appeal to a lack of evidence about my evidence will yield a neutral probability, for that is to assume the indifference principle, ignoring the possibility of inscrutability.

To establish neutral probability, I must be given some evidence that my evidence is as likely reliable as not. Perhaps evolution gives me this evidence by explaining moral beliefs in a way that does not link them up to the moral properties that they are about. But the adequacy of such an explanation does not rule out there also being a "bridge" between those beliefs and the moral properties that they are about, a bridge that grounds reliability. One option is to appeal to the fact that moral properties are non-causal and mind-independent, making it difficult to imagine such a bridge. This takes us to the BFC, which denies the existence of a bridge on those grounds. But if we are to avoid lapsing back into any of the just-mentioned strategies, we need to be given evidence that renders the probability of the bridge less than or equal to 50%. An appeal to naturalism is question-begging in this context. Nor should it be argued that a bridge is impossible on other grounds, since we can conceive of options (a special faculty of rational intuition, a constitutive relation between moral beliefs and properties, or divine revelation).37

The best bet is Ockham's razor: we already have an evolutionary explanation for moral beliefs, so there is no need to posit anything further, and the fewer the entities, the simpler the theory and the more probable. But it's far from clear that this argument is legitimate in this context. First, we are already assuming there are non-natural moral properties. So we should grant that they cannot be ruled out on explanatory grounds. And if such grounds don't rule out such things, explanatory arguments against non-natural entities don't generally work. It's difficult to see why there would be a special explanatory reason that would rule out bridges in particular.

Actually, on explanatory grounds, we might have reason to posit bridges. Perhaps cognitivism, which we are granting, is best explained by positing a connection between beliefs and the corresponding non-natural moral properties, a connection in virtue of which beliefs can be about them. Such a connection need not track the truth, of course, so this leaves open the epistemological question. But once we have some connection, it isn't clear that explanatory considerations disfavor truth-tracking. Perhaps granting cognitivism was unwise. If debunkers wish to
drop it, fine. But then it's no longer a question about moral epistemology but about the nature of mental content and moral language. Here I am concerned only with epistemology; assuming cognitivism is necessary to isolate this concern. So, it seems, there's no basis on which even experts can rule out a bridge in the current context. Inscrutability is the closest that they can get – a fortiori for laypeople. Given inscrutability, EDAs and the BFC are inert and hence incapable of full defeat. But as with disagreement, it is not inscrutable that there's at least some chance that EDAs and the BFC are correct. Therefore, partial defeat remains possible.

5 Debunking Reconstrued

I've argued in this chapter that the debunking challenge is best understood in terms of higher-order defeat. I also outlined a theory of such defeat, which allowed us to systematically eliminate subpar responses to the debunking challenge, ultimately revealing a small window through which to escape. The result is that a significant range of ordinary first-order moral beliefs – of laypeople and experts alike – are safe from full defeat, even on the assumption of a maximally robust moral realist package.

I have conceded that many such beliefs are nevertheless partially defeated. Moreover, I conceded this "happily." The reason for this attitude is that it seems to promote the virtues of epistemic humility and open-mindedness without relinquishing our moral positions altogether. Insofar as these intellectual virtues are conducive to moral virtue, debunking may have the ironic result of moral improvement, especially in our approach to moral disputes.

On a final note, suppose we grant that our moral beliefs are fully defeated given realism. Still, it doesn't obviously follow that our moral beliefs are defeated – if we abandon realism. Perhaps there's a third way: keep the moral beliefs but abandon realism.38 Realism might still happen to be true, and our moral beliefs will be safe from defeat. In this way too, candidate debunkers can be inert at the object level, depending on what other views one wishes to retain alongside ordinary moral judgments. If debunkers are directed against realist metaethics rather than normative beliefs, they may circumvent the objection just suggested as well as my contentions in this chapter. My defense of optimistic moral epistemology from debunking thus has the potential advantage of accommodating other roles for debunkers.

Notes

1. I am grateful to Michael Klenk, Margaret Greta Turnbull, and Eric Sampson for helpful feedback on an earlier draft.
2. For an exception to the causal claim, see Oddie (2009).
3. Enoch (2010) and Shafer-Landau (2012, 2003) defend this sort of package.
4. A helpful survey is given by Campbell (2015).
5. Influential proponents include Joyce (2006) and Street (2006).
6. The BFC originates in the work of Benacerraf (1973) and Field (1989). Klenk (2017) mounts a strong argument that EDAs depend on the BFC.
7. E.g. Wedgwood (2010).
8. The notion of epistemic peerhood derives from Gutting (1982). Cf. Turnbull and Sampson's contribution in this volume for a weaker conception.
9. Sometimes termed the equal weight view (Kelly 2011), though there are weaker versions that require only decreased confidence in the face of peer disagreement.
10. For defenses of conciliationism, see Christensen (2007, 2009, 2010), Elga (2010), Feldman (2006, 2007, 2009), and Matheson (2009). For non-conciliatory views, see Kelly (2005, 2011) and Wedgwood (2010). See also Turnbull and Sampson's contribution to this volume, which develops an account of non-conciliationism in terms of rational level-splitting beliefs.
11. See Machuca (2013) on the prospective skeptical implications of disagreement.
12. E.g. Copp (2008), FitzPatrick (2015), Huemer (2016), Klenk (2017, 2018), and Tersman (2017).
13. For existing epistemic responses, see Clarke-Doane (2017), Vavova (2015), and the non-conciliationists referenced earlier.
14. For a sampling of this growing body of literature, see Christensen (2009, 2010), Feldman (2005, 2006, 2007, 2009), Kelly (2005, 2011), Matheson (2009), and Sliwa and Horowitz (2015).
15. Barnett (2016) presents a fuller account.
16. Fitelson (2012) and Kelly (2005, 2011) raise objections to filtration to which Barnett (2016), Feldman (2009), and Matheson (2009) respond.
17. See the contributions by Lee, Sinclair, and Robson in this volume for higher-order defenses of testimony.
18. For discussion of this problem, see Conee and Feldman (2010).
19. In p.c. to Feldman (2009).
20. See Sturgeon (2014) and Klenk (2019b) for the related debate over subjective vs objective defeat.
21. Sliwa and Horowitz (2015) defend a version of this.
22. Thus, candidate debunkers as construed here are not subject to the self-refutation raised by Rini (2016).
23. Feldman (2006, 2007, 2009) and Matheson (2009).
24. Christensen (2007), Elga (2010), Goldman (2001), and Kelly (2011) propose independence conditions.
25. Turnbull and Sampson argue that certain level-splitting beliefs can be rational – e.g. "P, but the evidence as construed by my peer doesn't support P" (my wording). I can accommodate this with latching. If the evidence as construed by my peer doesn't support P, yet I'm justified in maintaining my stance, I must have reason to expect an evidential difference between us. This can happen only given the weaker notion of peerhood, which doesn't require sharing the same evidence. As such, what I know about my peer's evidence doesn't quite map onto mine and hence doesn't latch and therefore doesn't undercut. I can still account for my peer's evidence without latching via filtration, but then there's dampening. Either way, the level-splitting belief remains rational. However, once faced with a peer in my stronger sense, this no longer applies.
26. See Feldman (1988) for discussion.
27. See also Conee and Feldman (2008) for this distinction.
28. This is part of the reason why, as Feldman (2009, 309–10) puts it, evidential relations are "timeless and eternal and necessary." For accounts of such relations, see Conee and Feldman (2008) and Barnett (2016).
29. Bergmann (2006) provides a survey and a worthwhile attempt of his own.
30. The usual responses to skepticism cannot bypass this point. Contextualism concedes skepticism in contexts in which skeptical questions are posed. Standard anti-skeptical theories (e.g. Mooreanism and explanationism) contend that there exist positive external measures on our evidence (even if we cannot say much about them). However, ordinary folk won't know these arguments and often admit they don't have knowledge of external measures – yet retain a strong intuition that they do know the first-order claims. This natural response seems rational. The only way to accommodate it is to accept that awareness of ignorance of external measures is inert. Hence, any adequate response to skepticism must supplement this point, not replace it.
31. Alexander (2013), Bergmann (2005), and Plantinga (1993) make the related distinction between suspended judgment (corresponding to neutral support) and the absence of a doxastic attitude (corresponding to inscrutable support).
32. Mellor (2005).
33. Given this isomorphism and the implausibility of escaping defeat in the drug case by endorsing epistemic optionalism (the denial of the uniqueness thesis), we should say the same about peer disagreement.
34. In King's (2012) phrase, "A good peer is hard to find."
35. Inertness is then bolstered by an additional factor: the familiar fact that people routinely talk past one another, which often flies far under the radar despite best efforts at mutual understanding. This tendency justifies at least modest hesitance concerning whether there's a real disagreement in the first place, which diminishes hostility (either by decreasing the probability that one's judgment is unreliable or by contributing to the inscrutability of that verdict).
36. Consider Street (2006) and Shafer-Landau's (2012) interpretation.
37. Klenk (2018).
38. See Sauer's (2018) argument and Klenk's (2019a) response, along with Carter (2018).

References

Alexander, David J. 2013. "The Problem of Respecting Higher-Order Doubt." Philosophers' Imprint 13 (18): 1–12.
Barnett, Brian. 2016. "Higher-Order Evidence: Its Nature and Epistemic Significance." PhD thesis, University of Rochester. https://urresearch.rochester.edu/institutionalPublicationPublicView.action?institutionalItemId=31040.
Benacerraf, Paul. 1973. "Mathematical Truth." The Journal of Philosophy 70 (19): 661–79. https://doi.org/10.2307/2025075.
Bergmann, Michael. 2005. "Defeaters and Higher-Level Requirements." The Philosophical Quarterly 55 (220): 419–36. https://doi.org/10.1111/j.0031-8094.2005.00408.x.
Bergmann, Michael. 2006. Justification without Awareness: A Defense of Epistemic Externalism. Oxford: Oxford University Press.
Campbell, Richmond. 2015. "Moral Epistemology." In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Winter. Accessed July 30, 2018. https://plato.stanford.edu/archives/win2015/entries/moral-epistemology.
Carter, J. Adam. 2018. "Meta-Epistemic Defeat." Synthese 195 (7): 2877–96. https://doi.org/10.1007/s11229-016-1187-9.
Christensen, David. 2007. "Epistemology of Disagreement: The Good News." The Philosophical Review 116 (2): 187–217.
Christensen, David. 2009. "Disagreement as Evidence: The Epistemology of Controversy." Philosophy Compass 4 (5): 756–67. https://doi.org/10.1111/j.1747-9991.2009.00237.x.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Clarke-Doane, Justin. 2017. "What Is the Benacerraf Problem?" In Truth, Objects, Infinity: New Perspectives on the Philosophy of Paul Benacerraf, edited by Fabrice Pataut, 17–44. Dordrecht: Springer.
Conee, E., and Richard Feldman. 2008. "Evidence." In Epistemology: New Essays, edited by Quentin Smith, 83–104. Oxford: Oxford University Press.
Conee, E., and Richard Feldman. 2010. "Internalism Defended." In Epistemology: Internalism and Externalism, edited by Hilary Kornblith, 231–60. Malden, MA: Wiley-Blackwell.
Copp, David. 2008. "Darwinian Skepticism about Moral Realism." Philosophical Issues 18: 186–206.
Elga, Adam. 2010. "How to Disagree about How to Disagree." In Feldman and Warfield 2010, 175–86.
Enoch, David. 2010. "The Epistemological Challenge to Metanormative Realism: How Best to Understand It, and How to Cope with It." Philosophical Studies 148 (3): 413–38. https://doi.org/10.1007/s11098-009-9333-6.
Feldman, Richard. 1988. "Having Evidence." In Philosophical Analysis: A Defense by Example, edited by David Austin, 83–104. Dordrecht: Kluwer.
Feldman, Richard. 2005. "Respecting the Evidence." Philosophical Perspectives 19 (1): 95–119. https://doi.org/10.1111/j.1520-8583.2005.00055.x.
Feldman, Richard. 2006. "Epistemological Puzzles about Disagreement." In Epistemology Futures, edited by Stephen C. Hetherington, 216–36. Oxford: Oxford University Press.
Feldman, Richard. 2007. "Reasonable Religious Disagreement." In Philosophers without Gods: Meditations on Atheism and the Secular Life, edited by Louise M. Antony, 194–214. Oxford: Oxford University Press.
Feldman, Richard. 2009. "Evidentialism, Higher-Order Evidence, and Disagreement." Episteme 6 (3): 294–312. https://doi.org/10.3366/E1742360009000720.
Feldman, Richard, and Ted A. Warfield, eds. 2010. Disagreement. Oxford: Oxford University Press.
Field, Hartry. 1989. Realism, Mathematics and Modality. Oxford: Wiley-Blackwell.
Fitelson, Brandon. 2012. "Evidence of Evidence Is Not (Necessarily) Evidence." Analysis 72 (1): 85–8. https://doi.org/10.1093/analys/anr126.
FitzPatrick, William J. 2015. "Debunking Evolutionary Debunking of Ethical Realism." Philosophical Studies 172 (4): 883–904. https://doi.org/10.1007/s11098-014-0295-y.
Goldman, Alvin I. 2001. "Experts: Which Ones Should You Trust?" Philosophy and Phenomenological Research 63 (1): 85–110. https://doi.org/10.2307/3071090.
Gutting, Gary. 1982. Religious Belief and Religious Skepticism. Notre Dame, IN: University of Notre Dame Press.
Huemer, Michael. 2016. "A Liberal Realist Answer to Debunking Skeptics: The Empirical Case for Realism." Philosophical Studies 173 (7): 1983–2010. https://doi.org/10.1007/s11098-015-0588-9.
Joyce, Richard. 2006. The Evolution of Morality. Life and Mind. Cambridge, MA: MIT Press.
Kelly, Thomas. 2005. "The Epistemic Significance of Disagreement." In Oxford Studies in Epistemology. Vol. 1, edited by Tamar S. Gendler and John P. Hawthorne, 167–96. Oxford: Oxford University Press.
Kelly, Thomas. 2011. "Peer Disagreement and Higher-Order Evidence." In Social Epistemology: Essential Readings, edited by Alvin I. Goldman and Dennis Whitcomb, 183–217. Oxford: Oxford University Press.
King, Nathan L. 2012. "Disagreement: What's the Problem? Or a Good Peer Is Hard to Find." Philosophy and Phenomenological Research 85 (2): 249–72. https://doi.org/10.1111/j.1933-1592.2010.00441.x.
Klenk, Michael. 2017. "Old Wine in New Bottles: Evolutionary Debunking Arguments and the Benacerraf–Field Challenge." Ethical Theory and Moral Practice 20 (4): 781–95. https://doi.org/10.1007/s10677-017-9797-y.
Klenk, Michael. 2018. "Third Factor Explanations and Disagreement in Metaethics." Synthese. https://doi.org/10.1007/s11229-018-1875-8.
Klenk, Michael. 2019a. Review of Debunking Arguments in Ethics, by Hanno Sauer. Utilitas: 1–5. https://doi.org/10.1017/S095382081900027X.
Klenk, Michael. 2019b. "Objectivist Conditions for Defeat and Evolutionary Debunking Arguments." Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Lee, Marcus, Neil Sinclair, and Jon Robson. 2020. "Moral Testimony as Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Machuca, Diego E., ed. 2013. Disagreement and Skepticism. New York, NY: Routledge.
Matheson, Jonathan. 2009. "Conciliatory Views of Disagreement and Higher-Order Evidence." Episteme 6 (3): 269–79. https://doi.org/10.3366/E1742360009000707.
Mellor, D.H. 2005. Probability: A Philosophical Introduction. London: Routledge.
Oddie, Graham. 2009. Value, Reality, and Desire. Oxford: Oxford University Press.
Plantinga, Alvin. 1993. Warrant and Proper Function. Oxford: Oxford University Press.
Rini, Regina A. 2016. "Debunking Debunking: A Regress Challenge for Psychological Threats to Moral Judgment." Philosophical Studies 173 (3): 675–97. https://doi.org/10.1007/s11098-015-0513-2.
Sauer, Hanno. 2018. Debunking Arguments in Ethics. Cambridge: Cambridge University Press.
Shafer-Landau, Russ. 2003. Moral Realism: A Defence. Oxford: Oxford University Press.
Shafer-Landau, Russ. 2012. "Evolutionary Debunking, Moral Realism and Moral Knowledge." Journal of Ethics and Social Philosophy 7 (1): 1–37.
Sliwa, Paulina, and Sophie Horowitz. 2015. "Respecting All the Evidence." Philosophical Studies 172 (11): 2835–58. https://doi.org/10.1007/s11098-015-0446-9.
Street, Sharon. 2006. "A Darwinian Dilemma for Realist Theories of Value." Philosophical Studies 127 (1): 109–66. https://doi.org/10.1007/s11098-005-1726-6.
Sturgeon, Scott. 2014. "Pollock on Defeasible Reasons." Philosophical Studies 169 (1): 105–18. https://doi.org/10.1007/s11098-012-9891-x.
Tersman, Folke. 2017. "Debunking and Disagreement." Noûs 51 (4): 754–74. https://doi.org/10.1111/nous.12135.
Turnbull, Margaret Greta, and Eric Sampson. 2020. "How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Vavova, Katia. 2015. "Evolutionary Debunking of Moral Realism." Philosophy Compass 10 (2): 104–16. https://doi.org/10.1111/phc3.12194.
Wedgwood, Ralph. 2010. "The Moral Evil Demons." In Feldman and Warfield 2010, 216–46.

6 Moral Peer Disagreement and the Limits of Higher-Order Evidence

Marco Tiozzo

1 Introduction

One of the most frequently invoked arguments against moral realism is the argument from moral disagreement. The argument holds that widespread and persistent moral disagreement is better explained by some antirealist alternative, such as moral error theory or moral relativism.1 For example, J.L. Mackie (1977) famously argued that moral disagreement is better explained by psychological and sociocultural facts about us than by the existence of objective moral facts. Moral realists, in turn, have responded to this challenge by providing so-called defusing explanations to suggest that moral diversity is the result of badly distorted perceptions of objective moral facts.2 Since no argument advanced so far has tilted the scales, the debate seems to have ended in stalemate.

Recently, another version of the argument from moral disagreement has received much attention.3 This type of argument draws on considerations involving cases of peer disagreement and the normative significance of higher-order evidence. Epistemic peers are, roughly, subjects who are in an equally good epistemic position with respect to finding out the truth of a certain matter. According to a popular view about peer disagreement and higher-order evidence more generally, one is rationally required to suspend judgment, or at least to be significantly less confident about one's view regarding the relevant matter, given that one is in possession of higher-order evidence. The argument I have in mind is based on this conciliatory view and holds that peer disagreement prevents most (if not all) of our moral beliefs from amounting to justified belief or knowledge. I will call this the argument from moral peer disagreement.4

In the following, I will argue that the argument from moral peer disagreement fails to make a case for widespread moral skepticism. I will not question the fact that there is a lot of peer disagreement about moral matters. Nor will I dispute that peer disagreement and higher-order evidence have the ability to defeat rational belief in many cases. Instead, I will argue that the connection between higher-order evidence and defeat is much weaker than many seem to presume. The main reason for this is
that peer disagreement (and higher-order evidence more generally) only contingently gives rise to defeat and, importantly, that the condition it is contingent on is often not satisfied when it comes to moral peer disagreement specifically, since the level of peer intransigence is high in the moral domain.

The chapter will proceed as follows. In Section 2, I will explicate how the argument from moral peer disagreement is supposed to work: through considering peer disagreement to be higher-order evidence and through taking higher-order evidence to function as an undercutting defeater of knowledge-level justification. Then in Section 3, I will present two principal ways that one might explain why higher-order evidence leads to defeat: the objective defeat explanation (ODE) and the subjective defeat explanation (SDE). I will argue that ODE is problematic and that it at best collapses into SDE, which in turn is able to provide a straightforward explanation of higher-order defeat. Finally, in Section 4, I will first explicate the contingency of higher-order defeat that follows given SDE. Then I will argue that the level of peer intransigence in the moral domain means that the condition on which higher-order defeat is contingent is often not satisfied when it comes to moral peer disagreement specifically. As a result, the argument from moral peer disagreement fails to make a case for widespread moral skepticism.5 Section 5 concludes.

2 The Argument From Moral Peer Disagreement

The argument from moral peer disagreement was first introduced by Sarah McGrath in her seminal "Moral Disagreement and Moral Expertise" (2008).6 Similar arguments have since been discussed by Vavova (2014), Locke (2018), and Rowland (2017). Here is a slightly modified version of McGrath's argument:7

P1 If, in the face of disagreement about x, you have reason to believe that your opponent is an epistemic peer, then your belief about x does not amount to knowledge.
P2 Many of most people's moral beliefs are subject to disagreements where they have reason to believe that the other party is an epistemic peer.
C Therefore, many of most people's moral beliefs do not amount to knowledge.

First, epistemic peers are people who share the same or at least comparable evidence with respect to the disputed matter and are roughly equivalent in terms of cognitive abilities and motivation to arrive at the truth.8 In this case, the shared evidence is supposed to consist of the non-moral
facts that bear on the relevant matter and the subsequent moral intuition or seeming. Notice that the argument from moral peer disagreement does not require that the other party is an actual epistemic peer, nor that you believe that the other party is an epistemic peer; merely having sufficiently good reason to believe that the other party is an epistemic peer suffices to get the skeptical challenge going.9

Second, the reason to believe that the other party is an epistemic peer is supposed to be higher-order evidence against your view regarding the disputed matter.10 The argument from moral peer disagreement is based on a conciliatory view about peer disagreement and higher-order evidence more generally. The general idea is that higher-order evidence takes away the justification or the rationality (I will use the terms interchangeably) of one's belief about the relevant matter. Given the uncontroversial assumption that rationality is required for knowledge, one's belief regarding the disputed matter will no longer amount to knowledge. Notice that the argument does not exclude that our controversial moral beliefs would have amounted to knowledge in the absence of peer disagreement. The argument from moral peer disagreement thus does not aim to show that we lack moral knowledge, however the world may be.11

Finally, the argument from moral peer disagreement does not target all of our moral beliefs, only many of most people's moral beliefs. The relevant subset of moral beliefs comprises, according to McGrath (2008, 92–3), the ones that tend to be hotly contested in the applied ethics literature but also in broader culture, such as in what circumstances it is morally permissible to enforce the death penalty, to have an abortion, or to eat meat; whether we are morally required to donate to charity; and so forth. The conclusion of the argument does not exclude, therefore, that we can have moral knowledge about less controversial moral matters, such as whether pain is bad or whether it is morally permissible to kill someone just to watch them die, and so on. The conclusion of the argument is therefore supposed to be local rather than global moral skepticism.12

Objections to the argument from moral peer disagreement fall into two broad categories. The first category of objections is directed at P1, whereas the second is directed at P2. Objections of the first type tend to grant P2 but argue against the conciliatory view of peer disagreement (e.g. Setiya 2012; Wedgwood 2010). Objections of the second type tend to grant P1 but claim that actual peer disagreements on moral matters are few and far between (e.g. Decker and Groll 2013; Vavova 2014). Although there might be a priori reasons to suspect that moral peer disagreement is a rare phenomenon, the only way to finally settle the matter is through careful empirical investigation. As Vavova astutely points out, "At some point, we have to go out in the world and count those peers" (2014, 313). Although studies exist on moral disagreement in general (e.g. Nisbett and Cohen 1996), there are, as far as I know, no studies on moral peer disagreement in particular. In the absence of the relevant empirical data, I think it is better to focus on P1 and set P2 aside.

Moral Peer Disagreement

139

So how is P1 supposed to work? The argument from moral peer disagreement rests on what is known as the conciliatory view about peer disagreement and higher-order evidence more generally. Higher-order evidence is, broadly, evidence about the epistemic status of first-order beliefs – for instance, evidence about one's reliability regarding the relevant matter or about the quality of one's evidence. Typical examples of higher-order evidence in the literature include sleep deprivation, mind-distorting drugs, and biases of various sorts. What these examples have in common is that the higher-order evidence gives the subject reason to think that their belief about the relevant matter fails to be epistemically appropriate. Evidence of peer disagreement is in a similar way supposed to provide higher-order evidence about the epistemic status of one's belief. Here is Thomas Kelly (2005, 186):

Given that reasonable individuals are disposed to respond correctly to their evidence, the fact that a reasonable individual responds to her evidence in one way rather than another is itself evidence: it is evidence about her evidence. That is, the fact that a (generally) reasonable individual believes hypothesis H on the basis of evidence E is some evidence that it is reasonable to believe H on the basis of E. The beliefs of a reasonable individual will thus constitute higher-order evidence, evidence about the character of her first-order evidence.

Suppose that your original evaluation of E makes you believe not-H and that you later find out that an epistemic peer believes H. The fact that an epistemic peer believes H provides you with higher-order evidence about H – that is, evidence to believe that it is reasonable to believe H on the basis of E. As a result, it seems that you can no longer trust your original evaluation of E. Taking not-H to be true rather than H appears in a way to be arbitrary in light of the higher-order evidence. A great number of authors argue, for reasons like these, that higher-order evidence of peer disagreement has the ability to defeat the rationality of one's belief about the disputed matter at issue.13

In the following, I will argue that peer disagreement does not have the sort of systematic defeating impact on our moral beliefs that the argument from moral peer disagreement presupposes. However, to make my argument, it will be crucial to first explain why higher-order evidence (more generally) will not always lead to defeat.
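As a minimal illustration of the conciliatory idea – my gloss, not Tiozzo's or Kelly's own formulation – the equal weight version of conciliationism is often modeled as splitting the difference between the two credences:

$$c_{\mathrm{new}}(H) = \tfrac{1}{2}\left(c_{\mathrm{you}}(H) + c_{\mathrm{peer}}(H)\right),$$

so that, for example, your credence of 0.2 in H and your peer's credence of 0.8 would both move to 0.5, the point at which neither evaluation of E is privileged.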

3 Explaining Higher-Order Defeat

A large number of philosophers think that higher-order evidence provides a defeater for justified belief and knowledge. But it is not evident why exactly higher-order evidence should lead to defeat. To explain this, we have to take a look at how defeaters in general are supposed to work. On the most general level, defeaters are supposed to be facts or mental
states that if present will defeat the justification or rationality of the relevant belief at issue. For instance, my belief that a certain vase is red is defeated if I learn that the vase appears red only because it is illuminated by red lighting. Most epistemologists agree that knowledge is incompatible with undefeated defeaters.

Defeaters come in different flavors. A common distinction is the one between rebutting defeaters and undercutting defeaters. Rebutting defeaters indicate that one's belief is false, whereas undercutting defeaters indicate that one's belief is not well grounded. I will follow others and take higher-order evidence to provide an undercutting defeater.14 But what matters most for our purposes is another important distinction between objective defeaters and subjective defeaters (Klenk 2019). An objective defeater is a fact that makes the relevant belief unjustified, typically evidence that is in one's possession. For instance, it might be argued that if one has sufficiently strong evidence against one's belief about p, then one's belief about p is defeated.15 By contrast, a subjective defeater is a belief that there is a defeater of some sort. For example, if I believe that my belief about p is false, lacks evidential support, or is epistemically inappropriate for some other reason, then I have a subjective defeater for my belief about p. Notice that an objective defeater does not have to be believed to be a defeater in order to provide defeat; it is sufficient that one possesses the relevant evidence (e.g. evidence to indicate that one's belief fails to be rational) for one's belief about p to be defeated in this sense. Taking a belief to be defeated is therefore neither sufficient nor necessary for objective defeat.

Given the distinction between objective and subjective defeaters, we can outline two explanations of why higher-order evidence has the ability to defeat the rationality of one's beliefs – either by providing an objective defeater or by providing a subjective defeater:

ODE: Higher-order evidence undercuts the rationality of one's belief about p by providing an objective defeater to one's belief about p.

SDE: Higher-order evidence undercuts the rationality of one's belief about p by providing a subjective defeater to one's belief about p.

The main difference between these two explanations lies in how they describe the relevant defeater. According to ODE, it is the mere possession of sufficiently strong higher-order evidence about one's belief about p that provides a defeater for one's belief about p, whereas according to SDE, it is coming to believe that one's belief about p is epistemically inappropriate that provides a defeater for one's belief about p.

I will now argue that ODE is unable to provide a satisfactory explanation of higher-order defeat and that it at best collapses into SDE. Then I will go on to argue that SDE, by contrast, is able to provide a straightforward explanation of higher-order defeat, at least given that rationality
demands that one’s beliefs satisfy a certain structural requirement of rationality. 3.1 The Objective Defeat Explanation According to ODE, sufficiently strong higher-order evidence undercuts one’s belief about p regardless of whether one comes to believe that one’s belief about p fails to be epistemically appropriate. But as I pointed out earlier, it needs to be explained why exactly the defeat happens. One might either argue that higher-order evidence undercuts the rationality of one’s belief in the propositional sense or by arguing that it undercuts the rationality of one’s belief in the doxastic sense. There is a difference between saying that it is rational to believe p and saying that one’s belief that p is rational. It is normally assumed that propositional rationality is a matter of having on balance good reasons or justification to believe a proposition, whereas doxastic rationality is a matter of believing a proposition in a way that is reasonable or well grounded. Doxastic rationality entails propositional rationality, but not the other way around. Having on balance good reasons to believe p is not enough for doxastic rationality; in addition, one’s belief must also be properly based on those good reasons. In short, doxastic rationality is propositional rationality plus proper basing.16 So a first alternative is to flesh out ODE by arguing that sufficiently strong higher-order evidence will prevent one’s belief about p from being rational in the propositional sense; that is, higher-order evidence prevents one from having on balance good reasons to maintain one’s belief about p. Now, there is nothing mysterious about the fact that additional evidence can rationalize a change of belief. The idea is roughly the following. S’s belief about p is rational in the propositional sense at t1. But then at a later time, t2, S acquires higher-order evidence about their belief about p. This changes the evidential situation since S’s total evidence at t2 is composed by (i) the original evidence and (ii) the higher-order evidence. It might then be argued that S’s belief about p is no longer rational in the propositional sense, given the more expansive body of evidence at t2. However, it is not at all obvious why higher-order evidence should have this effect on one’s total evidence. All sides in the debate agree that higher-order evidence has a bearing on what to believe about the rationality of one’s belief about p at a meta level – that is, whether it is rational to believe that one’s belief about p is rational. However, whether higher-order evidence also has a bearing on what to believe about p at the object level is something that needs to be established rather than merely presumed. Several authors (e.g. Coates 2012; Lasonen-Aarnio 2014; Worsnip 2018) have argued in favor of so-called level-splitting views; that is, it is possible for one’s total evidence to all things considered support believing p but also all things considered support believing that one’s

142

Marco Tiozzo

belief about p fails to be rational, or to have what might be called selfmisleading total evidence.17 So why believe that it is impossible to have self-misleading total evidence of this sort? One might point to the fact that one’s total evidence becomes mismatched in a certain way. In the standard case, we are to assume that one’s original evidence makes one’s belief about p rational and that one’s higher-order evidence supports believing that one’s belief about p fails to be rational. Some proponents of higher-order defeat (e.g. Feldman 2005; Horowitz 2014) emphasize that it is absurd to believe something of the form of p but my belief that p fails to be rational. If one does not take one’s evidence to support p, it just does not make much sense to believe p in the light of that assessment. To have attitudes that diverge in this sense is considered to be a form of epistemic akrasia. Moreover, as Horowitz (2014) has convincingly argued, to hold akratic combinations of attitudes will engage you in bad reasoning and irrational action. For example, she points out that “It seems patently irrational to treat a bet about P and a bet about whether one’s evidence supports P as completely separate” (2014, 728). Considerations having to do with the irrationality of holding akratic combinations of beliefs therefore seem to lend support to higher-order defeat and to speak against level-splitting views. However, it is not clear that considerations about epistemic akrasia lends any support to an explanation like ODE. The appeal to epistemic akrasia relies on the fact that S simultaneously believes p and believes that their belief that p fails to be rational. For example, what makes Moorean conjunctions absurd is that it is irrational to simultaneously believe or assert something of the form of p but my belief that p fails to be rational – it is not a problem about merely possessing certain combinations of mismatched evidence. Bad reasoning and irrational action also seem to be something that is caused by entertaining akratic combinations of attitudes and not by merely possessing higher-order evidence. Moreover, and more importantly, given that the explanation of higher-order defeat requires that S actually comes to believe that their belief fails to be supported by the evidence, it seems that ODE collapses into SDE.18 Other authors (e.g. Smithies 2015; Silva 2017; van Wietmarschen 2013) have instead framed higher-order evidence as a defeater for doxastic rationality. This type of explanation grants that one’s belief about p might be rational in the propositional sense despite the fact that one has higher-order evidence that supports believing that one’s belief about p fails to be rational. The general idea is that if one acquires sufficiently strong higher-order evidence to indicate that one’s belief about p fails to be rational, then one’s belief about p can no longer be well grounded. The reason why higher-order evidence prevents one’s belief about p from being well grounded is in turn supposed to be that the mere possession of this sort of evidence affects one’s reasoning or belief formation.19 More

Moral Peer Disagreement

143

precisely, one’s belief about p cannot be the result of a good reasoning process. For example, Han van Wietmarschen (2013) argues that doxastic rationality requires that “the subject engages in the right kind of process of reasoning” (van Wietmarschen 2013, 414). In a similar vein, Declan Smithies (2015) argues that one’s reasoning has to be sensitive to evidence about our cognitive imperfection. But why exactly should higher-order evidence impede good reasoning? To evaluate this explanation of higher-order defeat, we need to say something more about what is good reasoning. Modus ponens should be a paradigmatic example of good reasoning: given that p, and that if p then q, I can conclude that q. In a similar way, it seems that one is reasoning properly if one goes from believing that the evidence supports p to then believe p and conversely to not believe p if one believes that the evidence doesn’t support believing p – at least given the assumption that what is rational to believe is closely connected to what one’s evidence supports. Notice, however, that in neither case is it necessary to start out with a belief that is supported by the evidence. Whether your belief about whether the evidence supports p is in itself supported by the evidence seems to make little or no difference to the quality of your reasoning about what to believe as a result of what you think that the evidence supports. The lesson is that good reasoning does not require evidential support.20 Forming a belief about p on the basis of a belief that itself fails to be supported by the evidence therefore does not have to involve any bad reasoning on behalf of the subject in question. Nor does failing to evaluate one’s higher-order evidence correctly have to pose a problem for the propositional rationality of one’s belief about p. So the mere possession of higher-order evidence will not prevent one’s belief about p from being rational in the doxastic sense. In contrast, one is not reasoning properly if one goes from believing that one’s belief about p fails to be rational to believing p. However, a proponent of ODE cannot base their explanation on the fact that the subject in question actually comes to believe that their belief about p fails to be rational; in that case, ODE will collapse into SDE. As a result, ODE is unable to provide a satisfactory explanation of higher-order defeat also given a conception of rationality in the doxastic sense. 3.2 The Subjective Defeat Explanation According to SDE, if one in response to higher-order evidence comes to believe that one’s belief about p fails to be rational, then one cannot rationally maintain one’s belief about p. The target of SDE is rationality in the doxastic sense rather than in the propositional sense since the focus is on what one actually believes and not on what one is rational to believe. But why is it not rational in the doxastic sense to maintain

144

Marco Tiozzo

one’s belief about p given that one believes that one’s belief about p fails to be rational? A straightforward way to explain defeat in the relevant cases is to argue that rationality demands that one’s beliefs satisfy certain structural requirements of rationality.21 For instance, it is considered to be paradigmatically irrational not to intend to j if one believes that one ought to j. Much in the same way, it also seems irrational not to believe p if one believes that it is rational to believe p, and to believe p if one believes that it is not rational to believe p. I will follow Horowitz (2014) in calling this latter negative requirement on epistemic rationality the “non-akrasia constraint.” Notice that a structural requirement of rationality to this effect does not require you to hold any particular attitudes. Instead, what matters is what combinations of attitudes you hold. For example, to maintain one’s belief about p despite believing that one’s belief about p fails to be rational stands in clear violation to the non-akrasia constraint. So given that the non-akrasia constraint is a plausible requirement to make on epistemic rationality, it seems that SDE is able to provide a straightforward explanation of higher-order defeat. By contrast, for reasons already mentioned, proponents of ODE are not permitted to appeal to the nonakrasia constraint in order to defend their view; otherwise, their view will collapse into SDE. Moreover, SDE steers clear from the aforementioned pitfalls of ODE. As we have seen, in some cases, it seems intuitively plausible that people have self-misleading total evidence. At least one version of ODE relies on an argument that implies that this is impossible. By contrast, SDE is fully compatible with the view that one sometimes might have self-misleading total evidence. Connectedly, SDE is not committed to the claim that misleading higher-order evidence can affect the propositional rationality of a belief that in fact enjoys evidential support and consequently does not need to try to explain how this sort of defeat is possible. This is because SDE does not rest on the idea that one’s belief about p fails to be rational in the propositional sense given the evidence. Instead, it claims that one’s belief about p might fail to be doxastically rational given what one comes to believe about the rationality of one’s belief about p. However, a controversial feature of SDE and subjective defeaters more generally is the presupposition that unjustified beliefs can confer defeat. Remember that all it takes for one’s belief about p to be defeated according to SDE is that one comes to believe that one’s belief about p fails to be rational or that it is epistemically inappropriate for some other reason. This is how doxastic defeaters are supposed to work. Several writers have recently emphasized problems with this subjective account of defeat (e.g. Alexander 2017; Casullo 2018; Klenk 2019). There is a problem of arbitrariness: if justified belief requires justification, why do defeaters not require justification? Moreover, and perhaps more problematic, if we accept unjustified defeaters, it seems to follow that one can obtain

Moral Peer Disagreement

145

justification merely by being epistemically ignorant (Casullo 2018). For instance, I may retain an unjustified belief about p merely by ignoring all the potential defeaters for my belief about p that are being presented to me. But I think that Albert Casullo’s objection is misguided in relation to the sort of subjective defeater at issue here. It is true that given that we accept unjustified defeaters, it seems to follow that one can fight off potential defeaters to one’s belief about p merely by ignoring them and being pigheaded. But this does not mean that one’s belief about p remains rational, since the relevant considerations against one’s belief about p still can provide an objective defeater for one’s belief about p. Strong evidence against one’s belief about p will defeat one’s belief about p regardless of whether one comes to believe that one’s belief about p is defeated. In other words, ignoring considerations that speak against one’s belief about p will not prevent these considerations from providing an objective defeater for one’s belief about p, at least given that these considerations actually provide reason to give up one’s belief about p.22 So if subjective defeaters and objective defeaters are combined in this way, it seems that we are able to steer clear from Casullo’s objection. Michael Klenk (2019) argues that objective and subjective defeat cannot be reconciled in this way. The upshot of his argument is that objective defeat and subjective defeat offer fundamentally different explanations and what is even worse: they can deliver conflicting verdicts. For instance, if I rationally believe p and I also have good reason to do so but then without good reason come to believe that my belief that p fails to be rational, then it seems that I have a subjective defeater for my belief that p. However, the fact that I have good reason to believe p should in turn provide an objective defeater-defeater for the subjective defeater against believing p. From the subjective defeat perspective, my belief that p is defeated, but from the objective defeat perspective, it is not. So not only do objective defeat and subjective defeat function in different ways, but they also give rise to incompatible verdicts about defeat. However, the problem that Klenk is posing disappears once we distinguish defeat in the propositional sense from defeat in the doxastic sense. What an explanation like SDE presupposes is that unjustified beliefs have the ability to defeat the rationality of one’s beliefs in the doxastic sense, but it does not follow from this that unjustified beliefs also have the ability to defeat the rationality of one’s beliefs in the propositional sense. Unjustified beliefs can give rise to a flaw in one’s reasoning. For instance, it can no longer be rational to sustain one’s belief that p given that one comes to believe that one’s belief that p fails to be rational. The reason for this is that the subjective defeater (one’s belief that one’s belief that p fails to be rational) makes it so that one cannot rationally reason from one’s reasons for believing p to believing p. But this does not prevent that believing p still is rational in the propositional sense. The

146

Marco Tiozzo

unjustified belief that one’s belief that p fails to be rational defeats the doxastic rationality of one’s belief that p, but it does not make it so that believing p fails to be rational in the propositional sense. There is nothing strange about the fact that a belief can be rational in the propositional sense but fails to be rational in the doxastic sense. So in the light of the distinction between propositional and doxastic rationality, there is no deep conflict between subjective defeat and objective defeat. However, it might still be objected that there has to be something epistemically bad about ignoring higher-order defeat in this way. In order not to violate the non-akrasia constraint, one is forced to epistemic failure in another sense, since one will not respond correctly to the higher-order evidence: one fails to believe that one’s belief about p fails to be rational. I will give two quick responses to this objection. First, even if there is something epistemically bad about failing to correctly respond to one’s higher-order evidence, it does not follow that this affects the evaluation of the rationality of one’s belief at the object level. What happens if one ignores higher-order evidence about p is that one will end up with an irrational belief about the rationality of one’s belief about p. But again, this does not seem to exclude that one’s belief about p can be rational at the object level – unless, of course, one presumes that the failure to rationally respond to one’s higher-order evidence is something that “trickles down” and affects the rationality of one’s belief about the relevant matter.23 However, as several writers already have pointed out, this is something that needs to be established rather than merely presumed. Second, even if one grants that failing to correctly respond to one’s higher-order evidence can affect the epistemic status of one’s belief about p, it does not follow that this is a failure of rationality. There are other alternatives to take into consideration. For example, Lasonen-Aarnio (2014), argues that this type of mistake is better characterized as a manifestation of what she calls epistemic incompetence.24 Roughly, what she suggests is that we can evaluate beliefs from two perspectives. First, one can evaluate whether a belief is successful – roughly, to what extent one has correctly responded to the evidence. Second, one can evaluate whether a belief is competently formed – roughly, to what extent the believer is using stable methods of reasoning. Failing to correctly respond to higher-order evidence is arguably a failure in the latter sense, but not necessarily in the former sense.
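It may help to display the two explanations side by side. Writing B(x) for "S believes x" and r for the proposition that S's belief about p is rational (the notation is introduced here purely for perspicuity and is not standard), the contrast is this:

   ODE: S possesses sufficiently strong higher-order evidence against r  ⇒  S's belief about p is defeated, whether or not S forms any belief about r.

   SDE: S forms the belief B(not-r)  ⇒  S's belief about p is doxastically defeated, because the combination B(p) and B(not-r) violates the non-akrasia constraint.

On SDE, then, defeat runs through the agent's own higher-order belief; the higher-order evidence matters only insofar as it actually produces that belief.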

4 Why the Argument Fails to Make a Case for Widespread Moral Skepticism

Before I go on to explain why I think that the argument from moral peer disagreement fails to make a case for widespread moral skepticism, I want to emphasize the limits that an explanation like SDE sets on higher-order defeat more generally.

First of all, given that SDE is correct, it follows that higher-order evidence by itself does not have any defeating force. Merely possessing higher-order evidence against one's belief about p will not defeat one's belief about p. The idea is not supposed to be, as some authors (e.g. Titelbaum 2015) have argued, that higher-order evidence in the relevant sense is impossible. On the contrary, SDE acknowledges higher-order evidence as evidence. SDE does not contest that higher-order evidence makes it rational at the meta level to believe that one's belief about p fails to be rational. But then again, this does not entail that one's belief about p fails to be rational at the object level. Given the possibility of self-misleading total evidence, one might rationally believe p despite possessing higher-order evidence against the rationality of believing p.

Second, and more important for our purposes, given that SDE is correct, it follows that higher-order defeat is highly contingent. Higher-order defeat becomes contingent on whether one actually comes to believe that one's belief about p fails to be rational. As I explained earlier, according to SDE, one's belief about p fails to be rational only in the case that one actually comes to believe that one's belief about p fails to be rational, which makes sense given that rationality demands that one's beliefs satisfy a certain structural requirement of rationality: the non-akrasia constraint.

Now let me explain why the argument from moral peer disagreement does not appear to make a case for widespread moral skepticism. Recall the following argument:

P1 If, in the face of disagreement about x, you have reason to believe that your opponent is an epistemic peer, then your belief about x does not amount to knowledge.

P2 Many of most people's moral beliefs are subject to disagreements where they have reason to believe that the other party is an epistemic peer.

C Therefore, many of most people's moral beliefs do not amount to knowledge.

The problem with the argument is that the consequent of the conditional in P1 is too strong. As I have argued, P1 rests on a defective explanation of higher-order defeat (ODE). Having reason to believe that the other party is an epistemic peer (i.e. higher-order evidence) will not by itself defeat the rationality of one's belief. For higher-order defeat to arise, you have to come to believe that your belief about p fails to be rational. So it seems to follow that the skeptical impact of peer disagreement in a certain epistemic domain will depend on the extent to which people tend to consider peer disagreements to be evidence that speaks against their views. Call this the "level of peer intransigence." Given that people do not tend to take peer disagreement as evidence against their views in a certain epistemic domain D, the level of peer intransigence will be high in D; given that people tend to take peer disagreement as evidence against their views in D, the level of peer intransigence will be low in D; and so on. As a result (if we follow SDE), for peer disagreement to have a substantive skeptical impact in D, it also has to be the case that the level of peer intransigence in D is low.

However, I think that there is good reason to believe that the level of peer intransigence in the moral domain is high. Several authors (e.g. Elga 2007; Kalderon 2005; Pettit 2006; Rowland 2018; Setiya 2012; Vavova 2014) have argued that it is intelligible to remain intransigent in the face of moral peer disagreement. Although these authors do not strictly argue that people tend to be intransigent, I think it is also plausible to assume that peer intransigence is fairly common in the moral domain. For instance, recent empirical research in moral psychology indicates that people tend to stubbornly maintain their moral judgments even in the absence of supporting reasons.25 To systematically ignore higher-order evidence is of course something that is epistemically bad, but this need not concern us here.26 What matters for our purposes is that the level of peer intransigence in the moral domain is something that will take the edge off the argument from moral peer disagreement.

Of course, one might insist and argue not only that many of most people's moral beliefs are subject to peer disagreement but also that most people believe that many of their beliefs consequently fail to be rational. I think, however, that this claim is difficult to make sense of – especially given that most people seem to hold on to their controversial moral beliefs despite the fact that they are subject to peer disagreement. Things would be different if people in general were agnostic about controversial moral questions, but this does not appear to be the case. Moreover, a proponent of the argument from moral peer disagreement cannot back away from the claim that most people believe that many of their moral beliefs fail to be rational; otherwise, higher-order defeat will not apply, since believing that one's belief fails to be rational is what explains defeat given SDE. But again, it just does not seem plausible that people on a large scale maintain their moral beliefs while believing that they fail to be rational. Given that one believes that a certain belief fails to be rational, we also expect that person to give up that belief. It does not make much sense to maintain one's belief if one does not take it to be rational or if one takes one's belief to be epistemically defective in some other way.27 For this reason, I think that it is more plausible to assume that most people do not believe that a large number of their moral beliefs fail to be rational, even if they have reason to believe that those moral beliefs are subject to peer disagreement.

So given that the condition on which higher-order defeat is contingent is often not satisfied when it comes to moral peer disagreement specifically, it appears that moral knowledge is seldom threatened by moral disagreement and that the argument from moral peer disagreement therefore fails to make a case for widespread moral skepticism. Of course, the claim that people do not tend to consider higher-order evidence to speak against their moral beliefs can be finally settled only by empirical investigation. But as I see it, the burden of proof in this case lies with advocates of the argument from moral peer disagreement.

5 Conclusion

In this chapter, I have argued that the argument from moral peer disagreement fails to make a case for widespread moral skepticism. The main reason for this is that peer disagreement (and higher-order evidence more generally) only contingently gives rise to defeat. What explains higher-order defeat is the fact that one, in response to the higher-order evidence, comes to believe that one's belief about p fails to be rational. However, the argument from moral peer disagreement will not be successful even if restated along these lines. The main reason for this is that most people do not appear to take peer disagreement to be something that makes their moral beliefs fail to be rational. If my argument is sound, we should also expect the same type of mitigating factor to apply to skeptical arguments based on peer disagreement in other areas of knowledge (e.g. economics, philosophy, politics, religion) – that is, in areas of disagreement where a high level of peer intransigence is plausibly to be expected. The contingency of higher-order defeat will also cast some shadow over conciliatory views in general – at least strong conciliatory views that hold that one is always rationally required to suspend judgment or reduce confidence regarding the disputed matter in cases of peer disagreement. Given SDE, whether one is rationally required to conciliate will in the end depend on whether one perceives peer disagreement to be something that speaks against one's view regarding the disputed matter, which I have argued is something that might differ greatly, not only from one case to another but also from one epistemic domain to another.28

Notes

1. Various versions of the argument from moral disagreement can be found in Mackie (1977), Tolhurst (1987), Bennigson (1996), and Loeb (1996). For a comprehensive survey and discussion of different versions of the argument, see Tersman (2006).
2. See, e.g., Brink (1989). For a discussion of defusing explanations in relation to recent findings in experimental philosophy, see Doris and Plakias (2008).
3. According to McGrath (2008, 88), epistemological arguments purport to undermine moral knowledge by establishing that regardless of whether there are any objective moral facts, we are not in a position to have nearly as much moral knowledge as we take ourselves to have.
4. Some authors also argue that the success of the argument from moral peer disagreement is necessary for evolutionary debunking arguments to get off the ground (e.g. Bogardus 2016; Mogensen 2017; Wittwer, this volume); however, see Klenk (2018) for objections against the view that evolutionary debunking arguments provide defeating disagreement.
5. Again, note that this also has implications for evolutionary debunking arguments, since the success of these arguments might depend on the epistemic significance of moral peer disagreement.
6. This formulation of the argument is inspired by Decker and Groll (2013). McGrath's argument is also discussed in King (2011a, 2011b), Rowland (2017), Sherman (2014), and Locke (2018). A structurally similar argument has been raised against philosophical views in general, e.g. in Goldberg (2013) and Grundmann (2013).
7. McGrath does not explicitly state her argument in terms of peer disagreement. Instead, she refers to cases in which it is true of the other party that "you have no more reason to think that he or she is in error than you are" (2008, 91). However, in a footnote (ibid., no. 2), she indicates that her argument is about peer disagreement.
8. Peer disagreement is a technical expression that has been characterized in various ways in the literature. Some expositions (e.g. Kelly 2005) focus on the epistemic qualities of the disputants, while others (e.g. Elga 2007) focus on the idea that epistemic peers are equally likely to evaluate the relevant matter correctly. See Gelfert (2011) for an overview.
9. To presume that the disputants are actual peers risks making the argument question begging, since it becomes difficult to see how the relevant beliefs could satisfy a safety condition for knowledge. For S's belief to be safe, it is often presumed that S could not easily have falsely believed p. Given that one holds a true belief and that the other party is an actual peer, it seems that one equally well could have reasoned like one's peer and ended up with a false belief. See Hirvelä (2017) for an argument to this effect.
10. Notice that there are also other ways to interpret McGrath's argument. Lasonen-Aarnio (2013) points out that the claim that in cases of peer disagreement one ought to be equally confident that one's own opinion was correct as that one's disputant's opinion was correct is ambiguous. On one reading, it means that the other party's opinion is as likely to be true, and on another reading, it means that the other party's opinion is as likely to be reasonable given the evidence. Given the former interpretation, it seems that peer disagreement is better framed as a rebutting defeater than as an undermining defeater and thus not a species of higher-order evidence. However, in this chapter, I will interpret the argument from moral peer disagreement as an argument that draws on the epistemic significance of peer disagreement as higher-order evidence; that is, peer disagreement is evidence that the other party's opinion is as likely as yours to be reasonable given the body of evidence.
11. This point is also made by Vavova (2014).
12. Notice that the argument from moral peer disagreement should not be confused with the type of epistemological argument from disagreement that is discussed by, e.g., Bennigson (1996), Tersman (2006, ch. 4), and Tolhurst (1987). This latter type of argument does not merely purport to establish moral skepticism; it also contends that moral disagreement gives us reason to believe that moral realism is false. In contrast, the argument from moral peer disagreement, and moral skepticism more generally, does not hold that moral realism has to be false. See King (2011a). I think, however, that one might also construe the argument in a stronger way if one so wishes. It is plausible to assume that there are people whom we should take as epistemic peers who hold that all moral beliefs are false, e.g. moral nihilists. If this is the case, it follows that all of our moral beliefs are contested by epistemic peers.
13. See, e.g., Christensen (2007), Elga (2007), Feldman (2006), and Matheson (2009) for arguments to the effect that we should conciliate in the face of peer disagreement.
14. See, e.g., Christensen (2010) and Lasonen-Aarnio (2014).
15. Note that normative defeat does not have to be couched in terms of evidence. Lord (2018) argues that a reason to suspend judgment can fall out of one's other reasons and thus not depend on one's evidence.
16. See Turri (2010) for criticism against the orthodox way of drawing the distinction.
17. I borrow the expression self-misleading total evidence from Skipper (forthcoming).
18. In Tiozzo (2019) I argue for similar reasons that Horowitz's use of the non-akrasia constraint as an argument against level-splitting views is misguided.
19. See Christensen (2010), van Wietmarschen (2013), Smithies (2015), and Silva (2017).
20. Neither does good reasoning, more generally, require as its starting point a belief that corresponds to what one has normative reason to believe. I take this to be one of the many important results of John Broome's research program in the philosophy of normativity. See especially Rationality Through Reasoning (2013, ch. 12–16).
21. Broome (2013) gives an in-depth discussion and defense of structural requirements of rationality.
22. Note that this is not what is at issue in most of the cases of higher-order defeat that are discussed in the literature. In the typical case, the higher-order evidence provides good but misleading evidence against one's belief about p, which means that the considerations raised by the higher-order evidence do not necessarily provide an actual reason to give up one's belief about p.
23. See Kelly (2010) for a discussion about whether higher-order evidence about the epistemic status of one's belief about p is able to trickle down and have significance for what it is rational to believe about p at the object level.
24. If I understand Lasonen-Aarnio (2014) correctly, what it means to be epistemically competent is broadly to follow certain structural requirements, e.g. the non-akrasia constraint. She is nevertheless careful not to make epistemic competence part of epistemic rationality.
25. The phenomenon is known as dumbfounding in the literature. See McHugh et al. (2017) for references.
26. See Joshua DiPaolo's contribution to this volume for a discussion of the epistemic problem of fundamentalist beliefs that are intransigent to evidence in this sense.
27. Notice that if one assumes that moral judgments are insensitive to considerations that have to do with evidence in this way, there is less reason to take these mental states to be beliefs in the first place. For an argument against moral cognitivism along these lines, see Eriksson and Tiozzo (n.d.).
28. Versions of this paper were given at the Department of Philosophy at Gothenburg University and at the Department of Philosophy at Brown University. I thank the audiences at these occasions for feedback. Several people have provided helpful comments on previous drafts and close cousins to this chapter. I want especially to thank Gunnar Björnsson, David Christensen, John Eriksson, Ragnar Francén, Benoit Guilielmo, Michael Klenk, Maria Lasonen-Aarnio, Caj Strandberg, and Folke Tersman.


References

Alexander, David J. 2017. "Unjustified Defeaters." Erkenntnis 82 (4): 891–912. https://doi.org/10.1007/s10670-016-9849-z.
Bennigson, Thomas. 1996. "Irresolvable Disagreement and the Case Against Moral Realism." The Southern Journal of Philosophy 34 (4): 411–37. https://doi.org/10.1111/j.2041-6962.1996.tb00800.x.
Bogardus, Tomas. 2016. "Only All Naturalists Should Worry about Only One Evolutionary Debunking Argument." Ethics 126 (3): 636–61. https://doi.org/10.1086/684711.
Brink, David O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Broome, John. 2013. Rationality through Reasoning. Hoboken, NJ: Wiley-Blackwell.
Casullo, Albert. 2018. "Pollock and Sturgeon on Defeaters." Synthese 195 (7): 2897–906. https://doi.org/10.1007/s11229-016-1073-5.
Christensen, David. 2007. "Epistemology of Disagreement: The Good News." The Philosophical Review 116 (2): 187–217.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Coates, Allen. 2012. "Rational Epistemic Akrasia." American Philosophical Quarterly 49 (2): 113–24.
Decker, Jason, and Daniel Groll. 2013. "On the (in)Significance of Moral Disagreement for Moral Knowledge." In Oxford Studies in Metaethics. Vol. 8, edited by Russ Shafer-Landau, 140–67. Oxford: Oxford University Press.
DiPaolo, Joshua. 2020. "The Fragile Epistemology of Fanaticism." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Doris, John M., and Alexandra Plakias. 2008. "How to Argue about Disagreement: Evaluative Diversity and Moral Realism." In Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity, edited by Walter Sinnott-Armstrong, 303–31. A Bradford Book Vol. 2. Cambridge, MA: MIT Press.
Elga, Adam. 2007. "Reflection and Disagreement." Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Eriksson, J., and Marco Tiozzo. n.d. The Argument from Moral Dogmatism: A Challenge to Cognitivism. Manuscript.
Feldman, Richard. 2005. "Respecting the Evidence." Philosophical Perspectives 19 (1): 95–119. https://doi.org/10.1111/j.1520-8583.2005.00055.x.
Feldman, Richard. 2006. "Epistemological Puzzles about Disagreement." In Epistemology Futures, edited by Stephen C. Hetherington, 216–36. Oxford: Oxford University Press.
Feldman, Richard, and Ted A. Warfield, eds. 2010. Disagreement. Oxford: Oxford University Press.
Gelfert, Axel. 2011. "Who Is an Epistemic Peer?" Logos & Episteme 2 (4): 507–14. https://doi.org/10.5840/logos-episteme2011242.
Goldberg, Sanford C. 2013. "Defending Philosophy in the Face of Systematic Disagreement." In Disagreement and Skepticism, edited by Diego E. Machuca, 277–94. New York, NY: Routledge.
Grundmann, Thomas. 2013. "Doubts about Philosophy? The Alleged Challenge from Disagreement." In Knowledge, Virtue, and Action: Essays on Putting Epistemic Virtues to Work, edited by Tim Henning and David P. Schweikhard, 72–98. London: Routledge.
Hirvelä, Jaakko. 2017. "Is It Safe to Disagree?" Ratio 30 (3): 305–21. https://doi.org/10.1111/rati.12137.
Horowitz, Sophie. 2014. "Epistemic Akrasia." Noûs 48 (4): 718–44. https://doi.org/10.1111/nous.12026.
Kalderon, Mark Eli. 2005. Moral Fictionalism. Oxford: Oxford University Press.
Kelly, Thomas. 2005. "The Epistemic Significance of Disagreement." In Oxford Studies in Epistemology. Vol. 1, edited by Tamar S. Gendler and John P. Hawthorne, 167–96. Oxford: Oxford University Press.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Feldman and Warfield 2010, 111–74.
King, Nathan L. 2011a. "McGrath on Moral Knowledge." Journal of Philosophical Research 36: 219–33.
King, Nathan L. 2011b. "Rejoinder to McGrath." Journal of Philosophical Research 36: 243–46. https://doi.org/10.5840/jpr_2011_14.
Klenk, Michael. 2018. "Evolution and Moral Disagreement." Journal of Ethics and Social Philosophy 14 (2): 112–42. https://doi.org/10.26556/jesp.v14i2.476.
Klenk, Michael. 2019. "Objectivist Conditions for Defeat and Evolutionary Debunking Arguments." Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Lasonen-Aarnio, Maria. 2013. "Disagreement and Evidential Attenuation." Noûs 47 (4): 767–94. https://doi.org/10.1111/nous.12050.
Lasonen-Aarnio, Maria. 2014. "Higher-Order Evidence and the Limits of Defeat." Philosophy and Phenomenological Research 88 (2): 314–45.
Locke, Dustin. 2018. "The Epistemic Significance of Moral Disagreement." In The Routledge Handbook of Metaethics, edited by Tristram McPherson and David Plunkett, 499–516. New York, NY: Routledge.
Loeb, Don. 1996. "Moral Realism and the Argument from Disagreement." Philosophical Studies 90 (3): 281–303.
Lord, Errol. 2018. The Importance of Being Rational. Oxford: Oxford University Press.
Mackie, John Leslie. 1977. Ethics: Inventing Right and Wrong. London: Penguin Books.
Matheson, Jonathan. 2009. "Conciliatory Views of Disagreement and Higher-Order Evidence." Episteme 6 (3): 269–79. https://doi.org/10.3366/E1742360009000707.
McGrath, Sarah. 2008. "Moral Disagreement and Moral Expertise." In Oxford Studies in Metaethics. Vol. 3, edited by Russ Shafer-Landau, 87–108. Oxford: Oxford University Press.
McHugh, Cillian, Marek McGann, Eric R. Igou, and Elaine L. Kinsella. 2017. "Searching for Moral Dumbfounding: Identifying Measurable Indicators of Moral Dumbfounding." Collabra: Psychology 3 (1): 23. https://doi.org/10.1525/collabra.79.
Mogensen, Andreas L. 2017. "Disagreements in Moral Intuition as Defeaters." The Philosophical Quarterly 67 (267): 282–302.
Nisbett, Richard E., and Dov Cohen. 1996. Culture of Honor: The Psychology of Violence in the South. Boulder, CO: Westview Press.
Pettit, Philip. 2006. "When to Defer to Majority Testimony – and When Not." Analysis 66 (291): 179–87. https://doi.org/10.1111/j.1467-8284.2006.00612.x.
Rowland, Richard. 2017. "The Epistemology of Moral Disagreement." Philosophy Compass 12 (2): e12398. https://doi.org/10.1111/phc3.12398.
Rowland, Richard. 2018. "The Intelligibility of Moral Intransigence: A Dilemma for Cognitivism About Moral Judgment." Analysis 78 (2): 266–75. https://doi.org/10.1093/analys/anx140.
Setiya, Kieran. 2012. Knowing Right from Wrong. Oxford: Oxford University Press.
Sherman, Ben. 2014. "Moral Disagreement and Epistemic Advantages." Journal of Ethics and Social Philosophy 8 (3): 1–20. https://doi.org/10.26556/jesp.v8i3.82.
Silva, Paul. 2017. "How Doxastic Justification Helps Us Solve the Puzzle of Misleading Higher-Order Evidence." Pacific Philosophical Quarterly 98: 308–28. https://doi.org/10.1111/papq.12173.
Skipper, Mattias. Forthcoming. "Higher-Order Evidence and the Impossibility of Self-Misleading Evidence." In Higher-Order Evidence: New Essays, edited by Mattias S. Rasmussen and Asbjørn Steglich-Petersen. Oxford: Oxford University Press.
Smithies, Declan. 2015. "Ideal Rationality and Logical Omniscience." Synthese 192 (9): 2769–93. https://doi.org/10.1007/s11229-015-0735-z.
Tersman, Folke. 2006. Moral Disagreement. Cambridge: Cambridge University Press.
Tiozzo, Marco. 2019. "The Level-Splitting View and the Non-Akrasia Constraint." Philosophia 47 (3): 917–23.
Titelbaum, Michael G. 2015. "Rationality's Fixed Point (or: In Defense of Right Reason)." In Oxford Studies in Epistemology. Vol. 5, edited by Tamar Gendler and John Hawthorne, 253–94. Oxford: Oxford University Press.
Tolhurst, William. 1987. "The Argument from Moral Disagreement." Ethics 97 (3): 610–21. https://doi.org/10.1086/292869.
Turri, John. 2010. "On the Relationship between Doxastic and Propositional Justification." Philosophy and Phenomenological Research 80 (2): 312–26.
van Wietmarschen, H. 2013. "Peer Disagreement, Evidence, and Well-Groundedness." Philosophical Review 122 (3): 395–425. https://doi.org/10.1215/00318108-2087654.
Vavova, Katia. 2014. "Moral Disagreement and Moral Skepticism." Philosophical Perspectives 28 (1): 302–33. https://doi.org/10.1111/phpe.12049.
Wedgwood, Ralph. 2010. "The Moral Evil Demons." In Feldman and Warfield 2010, 216–46.
Wittwer, Silvan. 2020. "Evolutionary Debunking, Self-Defeat and All the Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Worsnip, Alex. 2018. "The Conflict of Evidence and Coherence." Philosophy and Phenomenological Research 96 (1): 3–44. https://doi.org/10.1111/phpr.12246.

7 Debunking Skepticism

Michael Huemer

1 Introduction

Some philosophers believe that we should be skeptical about morality and that this skepticism is supported by certain higher-order evidence – roughly, evidence about the reliability of our moral beliefs. It is sometimes suggested that our capacity for moral judgment evolved by natural selection, that our moral judgments are determined by the culture that we happened to be born into, or that our moral judgments are determined by personal biases. If our moral beliefs are explained by factors unconnected to moral facts, then our moral beliefs probably would not be reliably correlated with the moral facts, even if moral facts existed.1 Alternately, it may be argued that the widespread disagreement about moral questions directly suggests that humans are unreliable at identifying moral truths, if such truths exist. Therefore, it is said, we should conclude either that moral facts don't exist or that we don't know any of them.2

Let moral skepticism denote the view just described, namely that either moral facts don't exist or we don't know them, because we have no capacity for reliably detecting moral facts.3 I will not describe the arguments for moral skepticism in detail here; I shall assume they are familiar enough. My purpose here is to articulate a line of attack on moral skepticism analogous to the skeptic's own attack on ordinary moral beliefs. I contend that we have certain third-order evidence suggesting that the arguments for moral skepticism based on second-order evidence are themselves products of an unreliable belief-forming mechanism.4 Briefly, philosophers appear to have a general bias in favor of skeptical theses; in addition, the widespread disagreement among experts about metaethical issues directly suggests that our judgment about such issues is unreliable. This third-order evidence, I shall contend, supports skepticism about moral skepticism, which has the effect of restoring our common-sense moral beliefs.
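Schematically, the dialectic operates across three levels (the level labels below are added merely for orientation):

   Level 1: first-order moral beliefs (e.g., that gratuitous cruelty is wrong).
   Level 2: second-order evidence that the processes producing level-1 beliefs – evolution, enculturation, personal bias, the record of disagreement – are unreliable. This is the skeptic's evidence.
   Level 3: third-order evidence that the processes producing level-2 skeptical conclusions are themselves unreliable. This is the evidence marshaled in this chapter.

If the level-3 evidence debunks the level-2 debunkers, the level-1 beliefs are left standing.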

2 Skeptics Are Unreliable

To make a case that moral skepticism is the product of unreliable belief-forming mechanisms, I shall proceed in four stages. First, I shall describe the general tendency of philosophers to fall into skepticism with regard to a wide variety of subject matters. This is necessary but not sufficient for concluding that philosophers have a bias toward skepticism. Second, I shall discuss how moral skeptics and other philosophical skeptics rely on precisely the type of reasoning most easily influenced by bias – for instance, vague, subjective, ideologically charged arguments. Third, I shall describe some plausible accounts of why philosophers might have a bias toward skepticism in general and moral skepticism in particular, regardless of whether skepticism were a correct or justified view. Finally, I will give some reasons why it is implausible that philosophers' tendency toward skepticism results from reliable belief-forming methods.

2.1 The Peculiar Skepticism of Philosophers

Skeptical theses are hardly unusual in philosophy; moral skepticism is but one of many instances. For nearly any major phenomenon discussed by philosophers, a key position held by some respected philosophers will be that the phenomenon under study either (a) does not exist or (b) is unknowable. Consider the following examples:

Phenomenon                     Skeptical Views
physical reality               idealism, external-world skepticism
unobserved entities            inductive skepticism, instrumentalism
abstract objects               nominalism, skepticism about abstracta
practical reasons              normative nihilism
epistemic reasons, knowledge   global skepticism
free will                      hard determinism, hard incompatibilism
consciousness                  eliminative materialism
time                           unreality of time
causation                      Humean theories, inductive skepticism
meaning                        semantic nihilism5
truth                          theory that all statements are false6

Each of the views named in the right-hand column has been held by respected philosophers. Most are still treated as live topics of philosophical debate; one would not, for example, be surprised to hear a paper defending one of them at a meeting of the American Philosophical Association today. (Admittedly, one would be surprised to hear a paper defending idealism today – but even that view had its day of widespread popularity among philosophers.)

This situation is so familiar to philosophers that we may no longer be struck by it. But it is a striking fact in itself. No other academic discipline approaches its subject with such a skeptical attitude. No serious chemist denies that there are chemicals or that it is possible to know things about them. No geologist questions the existence of the Earth. No historian questions whether anything has a history. No linguist questions whether language exists. Granted, some putative objects of study are in question: historians debate whether the Trojan War ever occurred, and astronomers once debated whether black holes existed. But these cases are the exception. The overwhelming majority of things studied by human beings are things whose existence and knowability are not under debate in any field other than philosophy. Historians do not debate whether World War II occurred, whether Plato existed, or whether Babylon existed. Astronomers do not debate whether the Sun, Mars, or comets exist. Yet for nearly every phenomenon that philosophy touches, a prominent theory among philosophers is that the phenomenon does not exist or cannot be known about. Why is this?

2.2 When Beliefs Are Open to Bias

The preceding facts, I shall suggest, point toward a skeptical bias among philosophers – a bias toward rejecting whatever inquiry is at hand, whether by denying the reality of the objects of study or by denying the possibility of knowing anything about them. What is meant by calling this skeptical disposition a bias? Roughly, I understand a bias as a factor that systematically tends to lead one's beliefs in a certain direction, where the influence operates in a way that is not reliably truth-conducive. Bias is thus a broad category of explanation for belief. In ascribing a skeptical bias to philosophers, I am saying little more than that something about philosophers other than good reasoning or the proper exercise of reliable belief-forming methods tends to make them skeptical.

Of course, it is only some philosophers – usually a small minority – who endorse skeptical theses in any given area. So the "skeptical bias of philosophers" is not manifested by most philosophers' being skeptics; rather, it is manifested by the fact that philosophers as a class are more likely than non-philosopher scholars to take sweeping skeptical stances toward their subject matter, together with the fact that the profession takes such positions seriously. Whatever the source of the skeptical bias, it need not apply to all professional philosophers, nor need it be restricted to philosophers. It need only be sufficiently prevalent in philosophers to account for the phenomenon described in Section 2.1.

Not all beliefs are equally susceptible to bias. Before entertaining specific accounts of the skeptical bias, it is worth considering in general which sorts of beliefs and belief-forming processes are most easily affected by bias and which are most resistant to bias. Belief-forming processes that are relatively resistant to bias include observation by the five senses, mathematical calculation, and (with some qualifications) modern scientific reasoning. It is difficult for a researcher's personal preferences, prejudices, or idiosyncrasies to alter what the researcher actually sees, calculates, or concludes from properly designed experiments. A symptom of this robustness is that different researchers will generally agree on a wide range of observations, calculations, and scientific conclusions. This agreement does not entail reliability at uncovering the truth, but it is evidence of reliability. Of course, there are some cases in which experimental results cannot be replicated – and in the sorts of cases in which replication failure is common, we should indeed conclude that unreliable methods are being used.7

By contrast, reasoning that is highly susceptible to bias commonly has one or more of the following characteristics:

1. The reasoning requires premises based on abstract, intellectual reflection rather than observation or reflection on more concrete and specific propositions.
2. The reasoning uses premises stated in vague terms rather than precise terms. Mathematics is resistant to bias despite its basis in abstract reflection; this appears to be due to the precision of mathematical concepts. Thus, it is mainly arguments with both traits (1) and (2) that we must be wary of.
3. The reasoning relies on speculation about matters of fact for which there is no decisive experimental test, or none that has in fact been performed.
4. The reasoning is ideologically significant. Roughly, what I mean by this is that the premises or the conclusion are important to one's overall worldview, how one sees one's place in the world, or what sort of person one sees oneself as. Ideologically significant claims will generally have notable emotional appeal to many and will seem to fit with certain personality types.
5. The reasoning requires high-level judgment calls whose basis is difficult to articulate, such as an overall weighing of a complex set of theoretical advantages and disadvantages.

I take it to be self-explanatory why one should expect reasoning with the aforementioned characteristics to be open to bias. Now, each of the influential arguments for philosophical skepticism, including moral skepticism, has all or most of the aforementioned characteristics. Skeptical arguments (like other philosophical arguments) frequently rely on vague, abstract premises that are meant to be appreciated by intellectual reflection. For instance, a popular premise is that sincere acceptance of a moral claim entails having at least some motivation to act in some way that matches the moral claim (motivational internalism).8 Other skeptical arguments turn on empirical speculations that lack experimental verification, such as that the capacity for moral judgment is an evolutionary adaptation. Moreover, skeptical theses are highly ideologically significant. Being a skeptic about some major class of phenomena – especially about morality – has about as great an impact on one's overall worldview as anything. Skepticism fits with certain types of personalities; it appeals to some on an emotional level, even as it displeases others. Finally, the evaluation of the case for moral skepticism, like the evaluation of most philosophical theses, is commonly held to require a subjective, overall weighing of the theoretical advantages and disadvantages of competing philosophical theories. For these reasons, various forms of skepticism, including especially moral skepticism, are just the sort of view that we should expect to be easily influenced by bias.

2.3 Sources of Skeptical Bias

Why might (some) philosophers be biased toward skepticism? Here are a few possible accounts:

1. Perhaps conventional philosophical methodology is unusually epistemically weak – that is, our ways of investigating questions are ineffective at discovering truths, in comparison to the methods used in most other fields of inquiry. Philosophers thus have difficulty resolving questions, which tempts some to conclude that the questions we consider are unanswerable, and others to reject the very entities about which those questions were posed. Scientists, by contrast, have more effective methods of inquiry, by which they uncover many truths about their subject matter; this leaves scientists with little temptation to question the reality or knowability of that subject matter.
2. Perhaps philosophers hold unreasonable standards for knowledge and justification, standards that even the best investigations can scarcely meet. Perhaps philosophers tend to demand unreasonably high levels of certainty; to take seriously a much wider range of hypotheses, including extraordinarily implausible hypotheses; and to demand reasons and explanations for what is self-evident. Perhaps if philosophers' epistemic standards were applied in other fields, researchers in all or most other fields would also be tempted toward skepticism about their objects of study.
3. Perhaps there is a kind of selection effect: individuals with unreasonably skeptical dispositions either are excluded by other fields, or voluntarily exclude themselves from other fields, and collect in philosophy because only philosophy welcomes the most extreme and implausible positions. This account is compatible with any number of explanations for why some individuals might have excessively skeptical dispositions to begin with; hence, (3) may be combined with (4), (5), and/or (6).
4. Some individuals derive a sense of superiority and cleverness, or experience a pleasurable feeling of rebelliousness, from "debunking" the beliefs of others.

160

Michael Huemer

5. Some individuals have an abnormal fear of being duped, which they express by taking extreme skeptical philosophical stances. 6. Some may find appealing how skepticism makes intellectual life simpler and easier: adopting a skeptical stance about a class of phenomena relieves one of the potentially complicated and difficult task of developing and defending an account of those phenomena. 7. The profession rewards skeptics. In philosophy, publication decisions, scholarly attention, and hence academic prestige are determined in part by how interesting one’s work is considered to be. For this purpose, a clever defense of an extreme and initially incredible position is considered highly interesting. Forms of skepticism directed against seemingly obvious truths paradigmatically fit that bill. Hence, there are professional rewards attached to devising clever defenses of skepticism. Furthermore, even apart from professional rewards, there is the intrinsic satisfaction of devising an interesting position. These accounts seek to explain the general skeptical bias in philosophy. There are also some special reasons why many people might be biased toward moral skepticism in particular:9 8. Moral skepticism expresses toleration toward other cultures. Many other societies have practices judged immoral in our culture. To avoid chauvinistically judging our culture to be superior to many others, we must claim that the mores of all societies are equally valid. One way of doing this is by denying that there are any moral facts. 9. In modern society, it is considered a vice to be “judgmental,” and negative moral judgments directed at others are socially offensive. One way of expressing one’s radical commitment to non-judgmentalness is to deny the validity of all moral judgments, either by claiming that there are no moral facts or by claiming that no one knows any of the moral facts. 10. Many in contemporary society are influenced by an ideology sometimes called scientism, which requires “a sort of exaggerated respect for science . . . and a corresponding disparagement of all other areas of human intellectual endeavor.”10 Since the field of ethics is not part of, and does not follow the methods of, mathematics or natural science, there is a powerful bias against the field. Note that this point is compatible with hypothesis (1): the disposition to reject the field of ethics wholesale for this reason counts as a bias, even if ethics genuinely has less reliable methods than do the natural sciences. 11. Morality is onerous. It often demands that we sacrifice our interests or suppress our inclinations. It often harshly judges us and the people we like. There is therefore a widespread motivation to escape from morality. We could, of course, simply disregard morality’s demands; that is, we could choose to be immoral. But this solution to the


"problem" of morality's vexatious demands would remain unsatisfying, because we would still be left with the guilty knowledge of our own immorality. The "best" solution for immoral agents is to avoid that knowledge, by refusing to believe in morality or by adopting the belief that all moral claims are equally unjustified.

These are only a few possible accounts of philosophy's skeptical bias. The purpose of raising these possibilities is to show that there are prima facie plausible accounts of why philosophy might be biased toward skepticism, even if skepticism were not in fact a justified or correct stance. No doubt there are many additional possibilities not listed here.

2.4 Why Posit Bias?

There are two salient hypotheses about the peculiar skepticism of philosophers. One is that this susceptibility to skepticism reflects some epistemic virtue(s) of philosophers – for example, superior reasoning skill, more reliable intuitions, or less prejudice, compared to non-philosophers. Perhaps skepticism about a wide range of phenomena is actually correct or justified, and it is only philosophers, through their superior judgment and reasoning abilities, who are able to see the merits of skepticism. The other hypothesis is that the susceptibility to skepticism is a bias – that is, it reflects a greater susceptibility to erroneous thoughts or unreliable belief-forming methods.

Although I have reviewed some possible reasons why philosophers would have such a bias, we need not know precisely why philosophers are biased toward skepticism in order to see that the bias theory is much more likely than the epistemic virtue theory. There are at least three reasons for this:

1. Before entertaining philosophical arguments for skepticism, it is initially highly implausible that any given item in the list in Section 2.1 either does not exist or is not an object of knowledge. Prior to hearing arguments against these things, it seems extremely likely, to say the least, that there are physical objects, that we are conscious, that some of our actions are up to us, that there are good reasons to do some things, . . . and that we know all this. Perhaps if philosophers had discovered some surprising argument against one of these seemingly obvious things, an open-minded person might seriously entertain that that thing is not as real as it seems – or at least that there are good reasons for doubting its reality. But when one learns that philosophers have devised arguments, which are taken seriously in the field, against every one of the phenomena listed in Section 2.1, the most simple and plausible explanation is that philosophers are biased toward skepticism. The prior probability of philosophers' having such a bias is many times greater than the prior probability of a large subset of those phenomena being either nonexistent or unknowable.


2. Philosophers as a group appear to be more skeptical about the things that they study than are researchers in any other field. This cannot be explained by the hypothesis that the subject matter of philosophy is more doubtful than that of other fields, because at least one or another of the forms of philosophical skepticism mentioned earlier would impugn every branch of human intellectual endeavor. If, for example, inductive skepticism were correct, then all the sciences would be invalidated. So the skepticism of philosophers cannot be made coherent with the self-assurance of researchers in other fields. Therefore, it seems that either philosophers are overly skeptical or researchers in every other field are insufficiently skeptical. On the face of it, the former hypothesis is much more likely.
3. Philosophers disagree widely about the merits of skeptical arguments and the overall plausibility of skeptical theses. If skeptical philosophical views were produced by a reliable belief-forming process, we would expect to see more agreement on these views, at least among the leading experts in the field.

Argument (3) is parallel to one of the classic arguments for moral skepticism. Moral skeptics cite widespread, first-order disagreements about moral questions as evidence that we lack a reliable capacity for discerning moral truth. But there is at least as much disagreement about philosophical questions, including the merits of moral skepticism, as there is about first-order moral questions. Therefore, if it is reasonable to think that our first-order moral judgment is unreliable, it is at least as reasonable to conclude that beliefs in skeptical philosophical positions are unreliable.

3 Unreliability Undercuts Skeptical Arguments

3.1 The Prima Facie Case for Skepticism About Skepticism

Skeptical philosophers rarely merely assert their skepticisms; they usually articulate specific reasons for skepticism. Therefore, one might argue, we need only examine those reasons directly to determine whether they are cogent; once we have done that, speculation about the psychological biases of those who advanced those reasons will be irrelevant. The bias of philosophers would of course be relevant if we had to rely on the intellectual authority of philosophers to assess the reasonableness of skeptical theories. But if we have the skeptical arguments in front of us, so that there is no need to rely on authority, then what does it matter whether philosophers are biased?


In reply, the preceding reasoning would be cogent if, but only if, we could safely assume that our direct assessment of skeptical arguments would be reliable, independent of the truth of the "skeptical bias" hypothesis. But we cannot assume this. The evaluation of the case for moral skepticism is not the sort of perfectly objective cognitive task, like an arithmetical calculation, that would be unaffected by biases; the task is shot through with subjective judgments of plausibility and of the weight of theoretical advantages and disadvantages, which is part of why it is plausible to speak of bias in this area to begin with.

Now, if the bias theory is true, then our direct assessment of skeptical arguments – that is, the assessment made without taking account of the higher-order evidence concerning bias – is likely to be unreliable. This is because we, the assessors, are likely to be afflicted by the same bias that misleads (other) philosophers. It is uncertain precisely how far the philosophers' bias toward skepticism extends (e.g. do non-professionals who merely read philosophy recreationally suffer from the same biases as professional philosophers?), but most readers of this chapter are reasonably likely to suffer from the skeptical bias; at least, they cannot assume otherwise. Readers should take account of this potential bias in assessing moral skepticism.

3.2 How Higher-Order Defeat Works

Notice how this reasoning is parallel to the reasoning that moral skeptics themselves would have us apply to moral beliefs. When assessing whether, for example, animal cruelty is really wrong, the skeptics say we should not merely think directly about animal cruelty, using our ordinary capacity for moral judgment. Rather, the skeptics insist that we first consider the higher-order evidence concerning whether our moral judgment capacity is likely to be truth-conducive. Allegedly, once we see that that capacity is unlikely to be truth-conducive, we should override our first-order moral judgments, however compelling those judgments may otherwise appear.

Why should we do this? When our higher-order evidence challenges our lower-order judgments, why should we reject the lower-order judgments? Why not instead stick with our original, lower-order judgments and reject the higher-order evidence? This question treats the situation as though we were confronted with merely a rebutting defeater, as in the case of two witnesses who give us contradictory testimony. If Stu says that P, and then Sue says that ~P, why should we side with Sue rather than Stu? In cases like this, where we have conflicting first-order evidence, we simply rely on whichever evidence is stronger (diminishing our ultimate confidence in whatever conclusion we draw according to the strength of the counterevidence); there is no structural asymmetry.


But in the case of defeat by higher-order evidence, there is normally a structural epistemological asymmetry: the higher-order evidence undercuts the lower-order judgment (in the sense of providing an undercutting defeater), but the lower-order judgment does not, even if true, undercut the higher-order evidence.11

Thus, consider a paradigmatic case of defeat by higher-order evidence. I seem to see an aardvark on the road, whereupon I believe that there is an aardvark. Then I remember that I recently swallowed some LSD. This casts doubt on the reliability of my visual perception. Therefore, I diminish my confidence in the existence of the aardvark. I must do so to avoid a kind of incoherence in my belief system, in which my actual credence in a proposition would clash with my belief about how likely I am to be right about that proposition.12

Now, why do I not instead give up (or at least reduce my confidence in) my belief that I swallowed the LSD? Because I have no evidence supporting that adjustment. The proposition that there is an aardvark on the road, even if true, is not evidence that I did not swallow LSD, nor is it evidence that my memory is unreliable. Aardvarks have no effect on the reliability of one's memory, nor do they affect one's probability of ingesting hallucinogens. (Matters would of course be different if what I seemed to see was something that would cast doubt on the reliability of my memory. In that case, there would be an epistemological symmetry: the visual experience and the memory would each constitute higher-order evidence casting doubt on the trustworthiness of the other. But that is not the case that we are considering here.) So I cannot appeal to my first-order judgment or first-order evidence as a reason to reject the higher-order evidence. Thus, to rationally restore coherence to my belief system, I must modify my first-order credence.

I take the foregoing account of higher-order defeat to be acceptable to debunking skeptics and to be generally correct. In the case of debunking arguments for moral skepticism, the skeptic should say, we must adjust our first-order moral credences to cohere with what the higher-order evidence indicates about our degree of reliability about morality. We should not, says the skeptic, reject (or even reduce our credence in) the skeptic's claims about how reliable we are about morality, because we have no evidence supporting that revision. In particular, our first-order moral judgments provide no evidence against the skeptic's reliability claims. For instance, whether animal cruelty is permissible or impermissible is evidentially orthogonal to whether we have reliable moral fact detectors; the wrongness of animal cruelty would not render us more likely to have reliable wrongness detectors.

3.3 How Third-Order Evidence May Restore First-Order Belief

What the moral skeptic overlooks is that we may have even higher-order evidence casting doubt on our original higher-order evidence. We may have, for example, third-order evidence casting doubt on the reliability of the second-order evidence casting doubt on the reliability of our first-order judgment.


In such a case, the same argument that leads to privileging, so to speak, the second-order evidence over the first-order evidence also leads to privileging the third-order evidence over the second-order evidence.

In the case at hand, we have higher-order (specifically, third-order) evidence suggesting that philosophical arguments for skepticism are unreliable. If we were initially persuaded by some philosophical arguments for moral skepticism, we must reduce our credence in the conclusions of those arguments, so as to cohere with our rational assessment of how likely we are to be correct in making such arguments. We cannot instead simply reject the third-order evidence, because we have no evidential basis for doing so. In particular, the arguments for moral skepticism do not, even if correct, undermine the arguments given earlier to show that philosophers have a skeptical bias. Even if, for example, there are no moral facts, it remains highly plausible that philosophers are biased toward skepticism, for the reasons cited earlier.

Once we have given up the arguments for moral skepticism (the second-order evidence), the result is to restore our original first-order beliefs, or at least move our first-order credences closer to their original levels. (Since we cannot be certain that the arguments for moral skepticism are not correct, those arguments may leave us with reduced moral credences, relative to our credences before learning of those arguments. But since we should have low confidence in the arguments for moral skepticism, we should allow them to have only a small effect on our moral credences.) In this way, the effects of the third-order evidence filter down to the level of first-order belief.

Note, however, that this is not an application of the filtration principle discussed by Brian Barnett elsewhere in this book.13 Barnett's principle states, "If E2 is evidence that there is evidence E1 for P, then E2 is evidence for P." While I find Barnett's principle highly plausible, it is not at play here. It is not, for example, that the third-order evidence supports the claim that our first-order (or second-order, or any other) evidence supports some ordinary moral judgment. The third-order evidence is neutral on first-order questions. For example, the claim that philosophers are biased toward skepticism is in itself neutral with regard to whether animal cruelty is wrong. If you initially lacked any intuitions supporting the wrongness of animal cruelty, you will continue to lack support for that conclusion after reflecting on philosophers' skeptical bias.14 Rather, the third-order evidence simply undercuts the argument from the second-order evidence, thus diminishing the effect of the second-order evidence. The first-order evidence is then restored to something close to its initial strength.

For an analogy, imagine that a person has put in place a pillar holding up the roof of a house. A second person plants a bomb at the base of the pillar, which would destroy the pillar, whereupon the roof would collapse.


A third person, however, removes the bomb. As a result, the roof is safe. The third person's action does not in itself support the roof. It merely removes a threat that would have prevented the pillar from supporting the roof.

3.4 Comparison to the G.E. Moore Shift

The G.E. Moore shift is an argumentative move in which one rejects an argument, or a premise of an argument, simply on the grounds that the argument's conclusion is initially highly implausible.15 My reasoning may seem similar to the G.E. Moore shift because one of the reasons I have given for thinking that philosophers have a skeptical bias is the initial implausibility of many skeptical theses that philosophers have entertained (Section 2.4, point [1]).

My reasoning, however, is importantly different from the G.E. Moore shift. To make the G.E. Moore shift in the present context would be to claim that arguments for moral skepticism should be rejected because moral skepticism is intuitively implausible. I claim that we should reject arguments for moral skepticism, but I do not rely solely on the implausibility of moral skepticism itself to motivate this. Rather, I rely on the implausibility of various forms of skepticism – external-world skepticism, inductive skepticism, eliminative materialism, and so on – to argue that philosophers have a skeptical bias. Once we have recognized this general bias, we should discount other arguments that we encounter for forms of philosophical skepticism, including moral skepticism. Because of this difference between my approach and the traditional G.E. Moore shift, my approach in this chapter cannot be accused of begging the question in the way that the G.E. Moore shift sometimes is.16

4 Objections

4.1 "Philosophers Are Superior Thinkers"

An alternative to the bias theory is the epistemic virtue theory mentioned in Section 2.4: the reason philosophers are peculiarly skeptical is that only philosophers are sufficiently good at thinking to (periodically) overcome natural human prejudices and appreciate the power of skeptical arguments. There is at least some reason to think that this might be so: philosophers appear to be especially skilled at analyzing concepts, drawing distinctions, and understanding and evaluating arguments. Furthermore, philosophers tend to ask more fundamental questions than those in other fields and to consider alternatives that others would ignore. All of these traits appear, on their face, to be cognitive virtues; hence, without begging any questions concerning skepticism, we can cite some evidence that philosophers may be, on average, especially good thinkers.


We have already seen some reasons for preferring the bias theory over the epistemic virtue theory (Section 2.4). Here, I will more directly address the empirical evidence concerning philosophers' possibly superior reasoning abilities. I grant that philosophers tend to have certain cognitive advantages, including those listed above (skill at analyzing concepts, drawing distinctions, and so on) and that these advantages are probably positively correlated with reliability in general. However, these advantages, taken jointly, fall far short of including all the factors that bear on one's reliability. It is thus perfectly compatible with philosophers' having all the mentioned advantages that philosophers also tend to be, overall, relatively unreliable and biased toward skepticism.

Let us therefore look for evidence of the overall cognitive reliability of philosophers. What would we expect to see if philosophers were overall especially reliable compared with thinkers in other fields of study? I suggest that there are at least two things we would expect to see.

First, we would expect progress. It is reasonable to assume in general that fields of study, including philosophy, will not have already more or less arrived at their cognitive goals (e.g., an extensive, explanatory, and generally correct set of beliefs about their subject matter) right at their inception. For example, it is not the case that at the start of the Western tradition, in ancient Greece, the main philosophical questions were already pretty well answered, so that there was little room for progress. Given that the main answers were not known at the start, we should expect to see progress over time. Furthermore, the more reliable the methods and the practitioners in a given field are, the faster that progress should be. Thus, if philosophers are especially reliable compared to other researchers, then philosophy should make faster progress than most other fields of study.

Second, we would expect agreement. Reliable belief-forming methods should typically produce results that cohere with each other, and reliable believers (believers with a strong tendency to arrive at the truth) should form beliefs that cohere with each other's beliefs. By contrast, unreliable believers or belief-forming methods can be expected to produce less coherence. This is not guaranteed to occur, but it is what we should generally expect. Therefore, if philosophers are especially reliable compared to other researchers, then philosophers should generally agree with each other more than researchers in other fields.

Neither of these predictions fits the facts. By most accounts, the field of philosophy makes much slower progress than most fields, and philosophers have much less agreement than researchers in most fields.17 The reasonable inference is that philosophers are not superior but rather inferior thinkers, in the relevant sense. Granted, philosophers may be superior thinkers in certain respects; nevertheless, when it comes to overall reliability, philosophers are hardly to be envied. It is this judgment of overall reliability that is relevant to the arguments of Sections 2–3.


Perhaps, one might think, it is merely the skeptics who are good at thinking; non-skeptical philosophers are bad at thinking, and most philosophers are non-skeptics, which explains why the field has made little progress. There does not, however, appear to be any evidence for this. It is not as though non-skeptical philosophers commit more fallacies, fall prey to more misunderstandings, or have less-sophisticated theories than skeptics. Skeptics do not appear to have mastered the methods of the discipline to a greater degree than their non-skeptical colleagues. Nor do skeptical philosophers on the whole tend to agree with each other: those who are skeptical in one area often disagree with those who are skeptical in another area, and even those who are skeptical in the same area usually disagree with each other about other philosophical matters, about as much as philosophers in general disagree with each other.

4.2 "Skepticism Is Inherently Philosophical"

Perhaps philosophers are more likely to be skeptical than thinkers in other fields because skepticism (about things generally taken for granted by common sense) simply counts as a philosophical theory, rather than, say, a scientific theory. This observation, however, would not really explain why philosophers are more skeptical than anyone else. Non-philosophers are not barred from holding views about their field that count as philosophical. For instance, if there were good reasons for doubting that living things exist, one would expect many biologists to be skeptical about life, whether or not this stance would count as "philosophical." Yet this does not appear to be the case. Nor are many chemists skeptical about chemicals, geologists about the Earth, linguists about language, and so on.

A related idea is that the type of argument that leads to skepticism about a wide variety of phenomena simply counts as a philosophical argument, rather than, say, a scientific argument, regardless of the subject matter about which it encourages doubt. For this reason, perhaps, non-philosophers tend to be unaware of these arguments or to disregard them when they are aware of them.

I agree with this last suggestion. I do not think, however, that it undermines the skeptical bias hypothesis. If philosophers were to be biased toward skepticism, an extremely likely way that this bias might operate is that some types of reasons for skepticism would be taken seriously by philosophers but disregarded by researchers in other fields. This type of reason would likely be classified, by philosophers and non-philosophers alike, as a "philosophical" reason. Perhaps the type of reason would also be similar in important respects to other (non-skeptical) philosophical arguments – for example, it might rely on a priori intuitions, conceptual analysis, and other philosophical tools.


All of this is perfectly coherent with the hypothesis of a skeptical bias among philosophers. In other words, the fact that (most) skeptical arguments count as philosophical just indicates that philosophical reasoning is susceptible to falling into skepticism in a way that scientific or other non-philosophical reasoning is not. This is almost entailed by the bias hypothesis, so it does not disconfirm the bias hypothesis.

One might ask, however, why we need the bias hypothesis. We can explain philosophers' peculiar skepticism simply by noting that most arguments for skepticism qualify as philosophical. What is accomplished by adding that the tendency to take these arguments seriously counts as a bias? But this question has already been answered. A bias, as I use the term, is simply a non-truth-conducive factor that tends to lead one's beliefs in a certain direction. The evidence that philosophers' tendency toward skepticism is not truth-conducive was stated in Sections 2.4 and 4.1: the prior probability of many forms of skepticism being correct is extremely low; philosophers are more skeptical than researchers in any other field; philosophers disagree with each other to an unusually great extent; and philosophy makes unusually slow progress. All of this makes it reasonable to ascribe low reliability to the belief-forming mechanisms that lead to philosophical skepticism.

4.3 "Non-Philosophers Are Skeptics Too"

It is not only professional philosophers who take skeptical stances. Thinkers in other fields sometimes adopt skeptical philosophical beliefs about one phenomenon or another. Scientists are sometimes skeptical, for example, about morality, free will, or consciousness. This seems to undermine the claim of a philosophers' bias toward skepticism and suggests instead that these objects of philosophical contemplation are genuinely problematic.

In reply, it is unclear how many scientists or other non-philosopher thinkers are skeptics; anecdotally, it appears that philosophers are much more likely to be skeptical, and about many more things, than scientists.18 (Scientists seem much more likely to take reductionist views than skeptical views.) Be that as it may, however, it is compatible with the skeptical bias theory that the bias toward skepticism may extend beyond professional philosophers, to affect some laypeople and researchers from other fields who think about philosophy. This is plausible for at least two reasons.

First, people who think about philosophy, even if not themselves professional philosophers, are usually influenced by professional philosophers. They learn skeptical ideas and arguments from the philosophical community.


If the philosophical community has a skeptical bias, then people who think about philosophy non-professionally can be expected to share that bias.

Second, the "skeptical bias of philosophers" may plausibly be due to some features of philosophical reflection as conducted by human beings rather than something specific to the particular group of individuals presently working in philosophy departments or to the present culture of professional philosophy. Perhaps when human beings contemplate philosophical questions, there is a widespread tendency to begin entertaining extreme and implausible hypotheses, to unreasonably raise the standards for "knowledge," to demand explanations or reasons for the self-evident, or to adopt other misguided practices and attitudes that are especially liable to result in skepticism. It is plausible that these problems might afflict scientists who think about philosophy, just as they afflict full-time philosophers.

But if this is true, what becomes of the point (Section 2.4, point [2]) that philosophers are more skeptical than are researchers in other fields? The point there was that philosophers are more skeptical about their own subject matter than other researchers are about their subject matter. That is, no discipline but philosophy commonly calls into question the existence and knowability of the central objects of study in that discipline – and this is true even though some philosophical theories call into question the existence or knowability of the subject matter of other disciplines. This is significant because researchers can be expected to think most seriously and rationally about the subject matter of their own discipline. It is plausible that researchers in most fields implicitly follow a sound methodological rule that one should seek to account for the phenomena apparently before one rather than seeking to evade the responsibility to do so by denying the existence or knowability of those phenomena. It is only philosophers who commonly depart drastically from that rule.

4.4 "Metaphilosophical Skepticism Is Self-Defeating"

I have aimed my criticisms at philosophical skepticism in general and moral skepticism in particular. But at least two of the arguments that I have used could be given a much broader target: The slow pace of philosophical progress and the widespread disagreement among philosophers could be cited as evidence that philosophers and philosophical reasoning in general are unreliable. If so, the reasoning of this chapter itself can be impugned as likely unreliable. (It isn't as if this chapter, unlike typical philosophical papers, can be expected to garner widespread agreement from philosophers upon its appearance.) Therefore, it seems that some of my reasoning is self-defeating. Of course, this objection is itself a philosophical argument and therefore would also have to be judged unreliable. And that point is a philosophical point . . . and so on.


How can we think through these issues, when our very ability to reason cogently has been called into question? Perhaps we must simply set aside doubts about our philosophical reasoning ability, while engaged in philosophical reasoning.

In response, surely it would not be rational to completely disregard strong evidence of the unreliability of philosophical reasoning. The self-defeat puzzles raised by extreme forms of metaphilosophical skepticism surely do not establish that philosophical reasoning is really, somehow, reliable. We should, however, eschew the most extreme forms of skepticism. We should not claim, for example, that no philosophical argument provides us with any justification at all for anything – that would be a self-defeating and, in any case, obviously unreasonable position. Rather, the effect of our reflection on the general unreliability of philosophical reasoning should be to downgrade our confidence in philosophical arguments and theories. That is, we should consider them as having less weight than we would naturally ascribe to them before reflecting on the issue of reliability.

This conclusion can consistently be applied to itself. One may hold that we should have diminished confidence in philosophical conclusions (relative to the confidence we would have if we were unaware of the evidence for the unreliability of philosophical reflection), without holding this opinion with complete confidence. We may, in other words, diminish our confidence in the claim that we should diminish our confidence in philosophical conclusions; there is nothing incoherent here.

Moreover, we need not apply the same level of skepticism to all philosophical reasoning. While perhaps all or nearly all philosophical arguments should be taken with a grain of salt, some philosophical arguments are much more suspicious than others. We should view a philosophical argument with particular suspicion when

1. It contradicts widespread beliefs that antecedently appear extremely probable.
2. It follows a pattern of philosophical positions in a wide range of areas that appear extremely improbable.
3. It turns on highly subjective, speculative, or otherwise independently controversial judgments.

All three of these conditions apply to the case for moral skepticism: (1) Moral skepticism contradicts the antecedently extremely plausible belief, for example, that I know murder is wrong. (2) Moral skepticism follows a general pattern of radically skeptical positions that philosophers take with regard to a great many matters, and each of these other forms of skepticism is also antecedently extremely implausible. (3) Arguments for moral skepticism typically turn on subjective, speculative, or otherwise controversial premises. For example, one famous argument claims that moral values are too "queer" to be accepted into our ontology.19


Another influential argument appeals to the (independently controversial) epistemological theory of empiricism.20 Other arguments, alluded to in Section 1, rely on speculative generalizations about the source of moral beliefs. None of these are terribly secure premises, and all are of a sort that might easily be influenced by bias. This is why it is rational to be much more skeptical about arguments for moral skepticism than about philosophical arguments generally.

5 Conclusion

Despite what you may have heard, appearances can be revealing. Most things are probably pretty much the way they seem. That is the rational presumption.21 Of course, this presumption is not indefeasible; so, for example, if a friend tells you a seemingly implausible conspiracy theory, it may be worth listening to their evidence. But if that same friend repeatedly generates conspiracy theories, for nearly every event that they hear about, then you should probably simply disregard all of their conspiracy theories.

Philosophy is like a friend with a dozen conspiracy theories. Of course, philosophers typically do not literally advance conspiracy theories. The relevant point, however, is that the field consistently generates, for nearly any phenomenon it touches on, the most extreme and initially incredible theories about that phenomenon. In particular, one of the views under serious discussion will almost always be that the phenomenon doesn't exist at all or that nothing at all can be known about it. Philosophers have advanced so many radically skeptical theories that, by now, we should probably simply disregard all of their skeptical theories. Moral skepticism is just one of these many radical skeptical theories.

It is easy to see how philosophers might be unreliable on the subject of skepticism. To begin with, the sort of reasoning that philosophers use to support moral skepticism is just the sort that we should expect to be easily led astray by biases. For example, it relies on subjective judgments and intuitions about vague, abstract propositions, plus some empirical speculations, rather than on definite observations or precise calculations. The issue is highly ideologically significant, and moral skepticism appeals to some personalities on an emotional level. Philosophers may be biased toward moral skepticism for a variety of reasons, including unreasonably demanding epistemic standards, the feeling of cleverness resulting from debunking others' beliefs, the professional and emotional rewards of defending "interesting" positions, and the ideology of scientism, among others.

Although there are some ways that philosophers appear to be on average better thinkers than those in other fields, it is implausible that philosophers are, overall, reliable reasoners. This is made implausible mainly by the slow progress and the widespread disagreement in the field.


For all of these reasons, our direct assessment of a philosophical argument for moral skepticism cannot plausibly be taken as a reliable belief-forming method. We must take into account the unreliability of philosophical arguments of this kind and the likelihood of a bias toward skepticism and thus greatly diminish our confidence in any arguments for moral skepticism. This should leave us with our pre-philosophical opinions, which, for most, amount to some version of common-sense morality.

Notes

1. See Ruse (1985, 237–8), Joyce (2016, ch. 7), and Sinnott-Armstrong (2006).
2. See Mackie (1977, 36–8); Olle Risberg & Folke Tersman, "Disagreement, Indirect Defeat, and Higher-Order Evidence," this volume.
3. For present purposes, moral skepticism includes error theories as well as theories that merely deny moral knowledge. It does not include theories that reduce moral facts to something knowable, such as facts about conventions or preferences; thus, e.g., Street's (2006) constructivist position is not among my targets here, despite her argument's similarity to those of Ruse and Joyce. Cf. Mackie's (1977, 15–17) and Joyce's (2016, 1) use of moral skepticism.
4. I call this third-order evidence because it is evidence about the reliability of arguments for moral skepticism, which is itself a second-order view.
5. Semantic nihilism holds that few if any sentences of natural language express propositions; see Braun and Sider (2007) and Huemer (2018, ch. 3). Admittedly, this view does not exactly deny that meaning exists. For a denial of the existence of meanings, see Quine (1951, 22–3).
6. Wheeler (1979) and Unger (1979) have argued that all sentences using vague expressions are false. Unger (1975, ch. 7) has argued separately that there is no truth.
7. Ioannidis (2005).
8. See Mackie (1977, 40).
9. For elaboration, see Huemer (2005, 240–8).
10. The description is from van Inwagen (1983, 215).
11. Here I use undercutting defeater to refer to any defeater that is not a rebutting defeater. It is not required that undercutting defeaters in this sense must constitute evidence against an implicit premise used to support the original belief (as in Risberg & Tersman's usage in this volume). For example, an undercutting defeater may consist simply of evidence that one's original belief-forming method is unreliable, as in the debunking skeptical arguments.
12. I discuss the need for this sort of coherence in Huemer (2011).
13. Brian C. Barnett, "Moral Belief, Higher-Order Defeat, and Amplification," this volume.
14. This is a purely hypothetical possibility. Almost everyone has moral intuitions that can be used to show that animal cruelty is wrong; see Huemer (2019).
15. Moore (1953, 119–20) and Huemer (2005, 115–17).
16. See Preston (2006, section 2d) and Stroud (1984, 108–12) (though Stroud does not use the expression begging the question).
17. Some philosophers defend the cautiously optimistic thesis that there is progress in philosophy (Stoljar 2017). With this, I agree. I maintain only that this progress is slower than that which we observe in natural science and that philosophers have a smaller body of agreed-on knowledge than scientists have.


18. Skepticism about free will is the most common; see, e.g., the psychologist B.F. Skinner (1972) and the biologist Jerry Coyne (quoted in Harris 2018). Although reductionist views about morality and consciousness are fairly common among scientists (e.g., Carroll 2016), it is difficult to find any scientists who are skeptics or eliminativists. Hanson (2016) sounds close to denying the reality of consciousness but elsewhere clarifies that he does not intend this (p.c.).
19. Mackie (1977, 38–42).
20. Ayer (1952, ch. 6).
21. See Huemer (2006; 2007).

References

Ayer, Alfred Jules. 1952. Language, Truth and Logic. New York, NY: Dover Publications.
Barnett, Brian C. 2020. "Higher-Order Defeat in Realist Moral Epistemology." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York, NY: Routledge.
Braun, David, and Theodore Sider. 2007. "Vague, so Untrue." Noûs 41 (2): 133–56. https://doi.org/10.1111/j.1468-0068.2007.00641.x.
Carroll, Sean. 2016. The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. New York, NY: Dutton.
Hanson, Robin. 2016. "All Is Simple Parts Interacting Simply." www.overcomingbias.com/2016/09/all-is-simple-parts-interacting-simply.html.
Harris, Lee. 2018. "Meet Jerry Coyne, the University's Most Prolific and Provocative Emeritus Blogger." The Chicago Maroon. www.chicagomaroon.com/article/2018/2/15/meet-jerry-coyne-universitys-prolific-provocative/.
Huemer, Michael. 2005. Ethical Intuitionism. Basingstoke: Palgrave Macmillan.
Huemer, Michael. 2006. "Phenomenal Conservatism and the Internalist Intuition." American Philosophical Quarterly 43: 147–58.
Huemer, Michael. 2007. "Compassionate Phenomenal Conservatism." Philosophy and Phenomenological Research 74 (1): 30–55. https://doi.org/10.1111/j.1933-1592.2007.00002.x.
Huemer, Michael. 2011. "The Puzzle of Metacoherence." Philosophy and Phenomenological Research 82 (1): 1–21. https://doi.org/10.1111/j.1933-1592.2010.00388.x.
Huemer, Michael. 2018. Paradox Lost: Logical Solutions to Ten Puzzles of Philosophy. New York, NY: Palgrave Macmillan.
Huemer, Michael. 2019. Dialogues on Ethical Vegetarianism. New York, NY: Routledge.
Ioannidis, John P.A. 2005. "Why Most Published Research Findings Are False." PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124.
Joyce, Richard. 2016. Essays in Moral Skepticism. Oxford: Oxford University Press.
Mackie, John Leslie. 1977. Ethics: Inventing Right and Wrong. London: Penguin Books.
Moore, George Edward. 1953. "Hume's Theory Examined." In Some Main Problems of Philosophy, 108–26. London: Allen & Unwin.
Preston, Aaron. 2006. "George Edward Moore (1873–1958)." www.iep.utm.edu/moore/.


Quine, W.V. 1951. "Two Dogmas of Empiricism." The Philosophical Review 60 (1): 20–43.
Risberg, Olle, and Folke Tersman. 2020. "Disagreement, Indirect Defeat, and Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York, NY: Routledge.
Ruse, Michael. 1985. Sociobiology: Sense or Nonsense? Dordrecht: Springer.
Sinnott-Armstrong, Walter. 2006. Moral Skepticisms. Oxford: Oxford University Press.
Skinner, Burrhus F. 1972. Beyond Freedom and Dignity. New York, NY: Knopf.
Stoljar, Daniel. 2017. "Is There Progress in Philosophy? A Brief Case for Optimism." In Philosophy's Future: The Problem of Philosophical Progress, edited by Russell Blackford and Damien Broderick, 107–17. Hoboken, NJ: Wiley-Blackwell.
Street, Sharon. 2006. "A Darwinian Dilemma for Realist Theories of Value." Philosophical Studies 127 (1): 109–66. https://doi.org/10.1007/s11098-005-1726-6.
Stroud, Barry. 1984. The Significance of Philosophical Scepticism. Oxford: Oxford University Press.
Unger, Peter K. 1975. Ignorance: A Case for Scepticism. Oxford: Oxford University Press.
Unger, Peter K. 1979. "There Are No Ordinary Things." Synthese 41 (2): 117–54.
van Inwagen, Peter. 1983. An Essay on Free Will. Oxford: Oxford University Press.
Wheeler, Samuel C. 1979. "On That Which Is Not." Synthese 41 (2): 155–73.

Part III

Broader Implications of Higher-Order Evidence in Moral Epistemology

8

Moral Testimony as Higher-Order Evidence

Marcus Lee, Neil Sinclair, and Jon Robson

1 Introduction1

How should we respond to the moral testimony of others? In this chapter, we explore the view that it is never reasonable to form moral judgements on the basis of the moral testimony of others, but it is sometimes reasonable to revise one's moral judgements on this basis. On the view we wish to explore, the former comes about because moral testimony cannot act as first-order evidence for forming moral judgements and the latter because such testimony can act as higher-order evidence for revising those same judgements.2 Hence, this view combines pessimism about the first-order evidential role of moral testimony with a cautiously non-sceptical view about its higher-order evidential role. After giving some considerations in favour of this moderate position, we explore the subsequent question: are the circumstances in which moral testimony serves as higher-order evidence the same sort of circumstances in which mundane (roughly, non-moral) testimony serves as higher-order evidence? Our tentative conclusion is that they are not and that this may put some pressure on the moderate view.

In Section 2, we distinguish two different roles that testimony might play: first order and higher order. The first of these (roughly) concerns the legitimacy of testimony as a source of judgement. The second (even more roughly) concerns the legitimacy of testimony as grounds for revising our judgements. One might think that if there is good reason to reject the former, then there will be good reason to reject the latter (and it does seem that proponents of one sceptical view tend to endorse the other).3 However, in Section 3, we argue that the most prominent reasons for accepting pessimism concerning moral testimony in the literature would, even if cogent, provide no reason to reject the higher-order role of such testimony. In Section 4, we look in more detail at one particular strategy – appeals to considerations of autonomy – that we believe provides the best extant motivation for linking scepticism about the first-order and higher-order roles. Even here, though, we argue that the connections are not straightforward.


In Section 5, we consider whether there are particular restrictions on the higher-order evidential role of moral testimony, and we tentatively suggest some restrictions grounded in the often-observed practicality of morality. We also tentatively suggest that one of these strategies provides the most promising avenue for someone looking to reject the moderate view.

2 Two Roles for (Moral) Testimony

Consider two types of case. In the first, we have no prior view on whether p and receive testimony that p. In the second, we believe that not p on the basis of evidence E but receive testimony that p from someone with access to the same evidence.4 Further, assume that this someone is an epistemic peer in the narrow sense that there is no reason (antecedent to considering the question whether p) to think that either of us is any more reliable or competent with respect to p-like matters.5

The first kind of case (a first-order case) is a paradigm case of testimony and so needs no explication. By contrast, the second kind of case (a higher-order case) requires unpacking. Suppose, for example, that Able and Mable are history professors who have read the same books, attended the same seminars, and so forth. On the basis of the arguments presented, Able has come to believe that Richard III killed the princes in the tower, and Mable has come to believe that Richard did no such thing (the arguments are complex and the issue difficult). Able says to Mable, "Richard killed the princes in the tower," providing Mable with a piece of testimony that is contrary to her own judgement.

Note some general features of higher-order cases. First, they are cases in which testimony is not the sole source of evidence. In Mable's case, she begins with some evidence that serves as the basis for her judgement. If it provides any evidence for anything at all, Able's testimony is evidence that is additional to this existing evidence. Second, the testimony provided in such cases (and, we will assume, in first-order cases) is pure in content. Able says, "Richard killed the princes." He does not say, "We know that Richard killed the princes because . . ." where the ellipses point to some further putative evidence of which Mable was previously unaware.6 Our focus here is on the evidential role of such "pure" testimony. Third, although in the case of Able and Mable the latter receives testimony that is counter to her original judgement (counter-testimony), not all the cases with which we are concerned are like this. In a related case, suppose that Babel is somewhat confident that Richard is guilty, but Able is absolutely adamant. Able says to Babel, "Richard definitely killed the princes!" conveying as he does his absolute conviction in this matter. We are also concerned with the question whether it is now reasonable for Babel to be more confident about Richard's guilt.

Given these two kinds of cases, it seems that there are at least two roles that (moral) testimony might play.


The first, which we term the first-order role, involves testimony providing a basis for forming a new judgement that p.7 What precisely this basis is has been a source of much debate in the epistemology of testimony, but for ease of exposition, we will here speak as if it is straightforwardly evidential.8 On this construal, in its first-order role, testimony provides evidence that p, where p is the proposition (or one of the propositions) expressed by the content of the testimony.

The second role that testimony might play – the higher-order role – does not involve taking testimony to bear directly on the issue of whether p but instead involves taking the testimony to bear on the issue of whether one's initial response to the (first-order) evidence that p was appropriate.9 To make this thought more accessible, consider Mable. Mable has looked at the evidence carefully and subsequently formed the judgement that Richard did not kill the princes. Able, who has seen the same evidence, testifies that Richard did kill the princes. Mable then reflects: Perhaps Able's testimony shows that my initial response to the evidence was misguided. And perhaps because of this, I should lower my confidence that Richard did not kill the princes or even suspend judgement on the matter altogether.

More generally, to take testimony as higher-order evidence is to take that testimony to bear on the issue of whether one's response to the first-order evidence was appropriate. Where testimony plays this role, an appropriate response to it can be to alter one's confidence in an already-accepted proposition or suspend judgement entirely. Simplifying somewhat, the higher-order role of testimony relates to revising one's judgements, where this covers – as we will further discuss later on – both altering one's level of confidence and suspending judgement. This is the role of testimony in higher-order cases.

Two further points about the higher-order role are worth noting. First, considerations other than testimony can provide higher-order evidence. For example, evidence that one's judgement is subject to bias, that one has been slipped a smart drug (one that heightens reasoning abilities), or a belief pill (that makes you believe that p, independent of whether or not p) can also be higher-order evidence to revise one's judgement. (In ethics, some take evolutionary debunking arguments to be analogous to belief pill cases.10) Nevertheless, our focus is on a particular potential source of higher-order evidence, namely testimony. Second, discussions of moral testimony have tended to focus exclusively on the first-order role. Indeed, the focus is often on cases where moral testimony is the only evidence that one has, which rules out any higher-order role.

When it comes to higher-order cases, the obvious question to ask is to what extent the new piece of testimony serves as higher-order evidence, which should lead us to revise our judgement that p. Philosophers who have discussed higher-order cases have tended to agree that there are some cases in which testimony can play some legitimate higher-order role.


Consider, for example, how strange it would be for someone who believed, on the basis of a quick calculation, that their share of the bill was £17.88, to refuse to lower their confidence at all when they learn that the other nine diners (all epistemic peers) conclude that it is £17.99. The question, then, tends to be about what role we should allow higher-order evidence to play.

There are, however, a number of reasons for doubting whether this question will have a uniform answer. For example, Lackey (2010) has argued that the correct response is dependent on whether our initial judgement really was supported by the relevant evidence, and others argue that the correct answer varies depending on the domain under consideration.11 In this chapter, we will investigate one possibility of this second kind, by asking whether the circumstances in which moral testimony serves as higher-order evidence are the same circumstances in which mundane testimony serves a parallel role for mundane judgements. In doing so, we will assume that there are at least some mundane cases in which testimony does provide higher-order evidence that should lead to our revising our judgements.

In the next two sections, we consider arguments for a negative answer to our question that are based on pessimism concerning moral testimony. The suggestion is that the truth of pessimism clearly entails that moral testimony cannot play a higher-order role. We will argue that this is not the case, since some of the most prominent reasons for accepting pessimism in the moral case provide no reason at all for a blanket rejection of the higher-order role of moral testimony (Section 3). Other extant motivations for pessimism are more promising in this respect but still fall short of entailing that we must reject the higher-order role (Section 4). We do, however, suggest (Section 5) that there are some underexplored features of the moral domain which may justify both pessimism and scepticism concerning the higher-order role. Even here, though, the link between the two is by no means a straightforward one.

3 Moral Testimony and the Norms of Moral Judgement

Hopkins (2007) draws a distinction between two positions regarding the first-order role of moral testimony. Optimists think that there is in principle no special problem with relying on such testimony (as compared to cases of mundane testimony) as a (defeasible) source of first-order evidence for forming one's moral judgements, whereas pessimists think that although testimony is often a legitimate source of (defeasible) evidence in mundane cases, there is something in principle problematic about such reliance in the moral case. As we will see, though, pessimists disagree in some significant respects about where the problem(s) lie(s) here. In particular, some pessimists take the problem with such judgements to be a straightforwardly epistemic one, whereas others take the problem to be a moral, or otherwise non-epistemic, one. It is also worth noting that both optimism and pessimism come in degrees.


An extreme pessimist would claim that we can never legitimately form moral judgements on the basis of moral testimony. An extreme optimist would affirm that moral testimony is a legitimate basis on which to form one's moral judgements about as often as it is in (some) mundane cases. There is also a range of positions that one might hold between these two extremes, so it is better to think of there being a spectrum of views that tend towards optimism or pessimism rather than a binary choice.12 Nonetheless, even if somewhat artificial, the distinction that Hopkins sketches is a common and useful place to start.13

Since optimists think that there is no in principle difference between first-order reliance on moral testimony and reliance on testimony in mundane cases, it seems reasonable to assume they would have no in principle problem with reliance on moral testimony as higher-order evidence.14 This is not to suggest that they are unable to adopt scepticism about the higher-order role of moral testimony but merely to suggest that their view concerning the first-order role of moral testimony does nothing to motivate it. The interesting question is whether pessimists who deny the first-order role of moral testimony have, qua pessimists, reason to doubt the legitimacy of testimony in the higher-order role. This is an underexplored question in the literature.15

What pessimists think is problematic about relying on testimony in the first-order role varies. Here we will survey some of these reasons. Our concern is not to assess the plausibility of these explanations themselves but merely to ask whether, if cogent, they would also justify a rejection of the higher-order role.16

First, some suggest that pessimistic intuitions can be explained by the twin facts that (i) moral judgements are distinctively and intimately connected with affective responses (moral sentiments), such that moral judgements either cannot be formed in the absence of such responses or are inappropriate in the absence of such responses, and (ii) such affective responses are difficult or impossible to form on the basis of testimony.17

Does this kind of pessimism transfer to the case of revising one's moral judgement on the basis of testimony considered as higher-order evidence? No. In higher-order cases, the agent has already formed a moral judgement that p and, we can assume, done so in a way that satisfies the affective requirement. If they were to subsequently alter their credence in this judgement or indeed abandon the initial judgement altogether, then this would not (provided they don't take the further step of adopting the view of their opponent) involve introducing any new moral judgements whose status is open to suspicion. Therefore, the question whether we can (legitimately) possess moral judgements in the absence of affective states is moot when it comes to determining the legitimacy of the higher-order role.


Of course, we could make the further claim that it is illegitimate to alter our credence in a moral judgement without some corresponding alteration to our affective state, but crucially, this is an additional claim, one that isn't entailed by the initial claims that the affective pessimist makes. Further, it seems plausible to think that our affective states may well alter along with our credences. We may, for example, find ourselves becoming less resentful of a presumed enemy the less certain we become that they really have wronged us.

Parallel points seem to apply to other putative candidates for additional requirements on possessing moral judgements. Consider, for example, the influential view according to which moral testimony cannot play the first-order role, since legitimate moral judgement requires moral understanding, where this is cashed out as having a "grasp" of the relevant moral reasons.18 To explain: if an agent's moral judgement that p is based on someone's testimony, then the reason for their believing p is not the reason why p is correct. For example, suppose Karen judges that euthanasia is (in some cases) morally permissible solely on the basis that her partner judges it so. The issue is that even if Karen's judgement is correct, it is not based on considerations that underwrite its correctness; it is insensitive to the "permissible-making" features of euthanasia. To gain understanding, Karen would need to, inter alia, be capable of saying why these particular cases are morally permissible and what differentiates them from other, non-permissible cases.

This motivation for pessimism could be put forward as the epistemic complaint that the agent lacks warrant for the judgement because moral testimony fails to provide moral understanding, and without moral understanding, the agent does not know what the justificatory grounds for the judgement are.19 More commonly, however, concerns of this kind are phrased in terms of the agent's doing something morally (or otherwise non-epistemically) problematic in forming a moral judgement on the basis of testimony.20 For example, one common complaint is that without understanding the reasons for a moral judgement, any actions performed on the basis of that judgement will be lacking in (full) moral worth.21

On this view, relying on testimony as the basis for moral judgements is problematic because of the tight link between moral judgement and action. The idea need not be that a moral judgement formed solely on the basis of moral testimony is problematic in itself, but that such judgement formation becomes problematic given the practical nature of moral judgement. On this account, morally worthy action is the (or at least a) primary aim of moral judgement, and any actions performed in the absence of moral understanding will fail to be (fully) morally worthy. As we might say, the ideally virtuous person not only does the right thing but does so for the very reasons that make it right. This is not necessarily to say that someone who does the right thing without knowing the reasons is blameworthy in that respect. Indeed, they could be praiseworthy to the extent that they did the right thing because it was the right thing. Nonetheless, according to this kind of pessimism, they still fall short of an important ethical ideal, and insofar as they do, their actions are lacking in moral worth.

Moral Testimony as Higher-Order Evidence

185

an important ethical ideal, and insofar as they do, their actions are lacking in moral worth. Again, though, there seems to be no reason in all this to deny that moral testimony can play a higher-order role. The lack-of-moral-understanding explanation (along with various other pessimistic explanations of why moral testimony cannot legitimately play the first-order role) appeals to some additional norm of moral judgement such that forming (or sustaining) judgements on the basis of moral testimony inevitably leads to judgements that flout this norm. However, accepting pessimism about the first-order judgement-formation role is clearly compatible with holding that moral testimony can legitimately perform a higher-order revising role. This is because revising one’s level of confidence in a moral judgement, or suspending that judgement, does not in itself introduce any new moral judgement that is open to assessment on the basis of this additional norm. This is not, of course, to deny that revising one’s credences might sometimes involve forming a new moral judgement but merely to insist that it doesn’t always do so. We may, for example, move from believing that eating meat is morally wrong to being agnostic on this matter or retain the same judgement but with a reduced level of conviction. And we may do this while it remains the case that the relevant judgement was formed in such a way as to be accompanied by the appropriate affective responses or level of understanding. The overall message is that merely proposing some additional norm on moral judgement, a norm that some ways of forming moral judgements lead us to flout, provides no reason to deny that testimony can play a higher-order role, since a higherorder role concerns only the revision – and not the formation – of moral judgements.22

4 Moral Testimony and Autonomy

In the previous section, we considered two prominent motivations for pessimism, which, we argued, failed to provide clear grounds for rejecting the higher-order role of moral testimony (arguments that would also apply, mutatis mutandis, to many other forms of pessimism, which are motivated by specific additional norms on moral judgement). In this section, we look in more detail at a further motivation for pessimism that we take to represent the best prospect (among the arguments for pessimism that presently dominate the literature) for forging a link between a denial of the first-order role and a denial of the higher-order role: the connection between moral judgement and autonomy.

One common criticism of relying on moral testimony when forming moral judgements is that it involves a renunciation of one’s autonomy.23 This account posits a norm applying directly to the formation of moral judgements. On this view, forming moral judgements in particular ways is problematic, not because the resulting judgements are themselves problematic (e.g. insofar as they lack affect or understanding) but because such judgement-forming processes are themselves objectionable insofar as they are not autonomous (it is of course consistent with this that certain other processes concerning changes in judgement are similarly not autonomous). This is an important difference between the type of pessimism discussed in this section and those discussed in Section 3.

If the worry concerning autonomy were only about a total outsourcing of moral decision-making to the kind of “Google morals” system described by Howell (2014), then this would provide scant reason to refrain from forming isolated moral judgements on the basis of testimony (and still less to avoid revising such judgements on that basis). However, many autonomy-based criticisms of relying on moral testimony are significantly more demanding than this. For example, Crisp (2014) suggests that there are some cases in which relying on moral testimony on just one occasion “is detrimental to autonomy, unless we set the standards for rational self-government quite low” (2014, 138). The norm being flouted here is a moral (non-epistemic) norm on moral judgement formation qua autonomous action. There are, of course, various questions that could be raised concerning accounts of this kind. Why should we think that deferring to others in this way is especially autonomy-compromising? Why think that it is always or usually problematic to reduce our own autonomy?24 And so forth. Again, though, our concern won’t be with evaluating this motivation for pessimism but with asking whether, if cogent, it transfers to the higher-order case.

If all that is intended here is to introduce an additional non-epistemic (autonomy-based) norm on moral judgement formation, then this would fail to transfer to cases of moral judgement revision for reasons paralleling those discussed in Section 3 and more generally because the norms appropriate for one type of activity are not necessarily appropriate for a distinct type of activity. However, we think it is likely that those who have proposed a link between autonomy and pessimism take the demands of autonomy, as they relate to moral judgements, to be broader than this, covering other processes relating to moral judgement, such as their revision. Of course, if we merely stipulated that the species of autonomy at issue here required that we neither form nor revise moral judgements on the basis of testimony, then this would straightforwardly rule out the higher-order role as well. This would not, however, be of much theoretical interest. What we need, then, is a clearer explanation of the kind of autonomy involved here and the demands that it could make on us.

As Driver (2006, 634) notes, autonomy is a “murky” concept, involving many elements that are easy to conflate. Here we will focus on the kind of autonomy that she identifies, which “involves viewing the moral agent and the moral judge to be self-legislating in some sense.” Whether this kind of autonomy really rules out forming our moral judgements on the basis of testimony is, of course, controversial.25 But, once again, our concern
won’t be with evaluating the motivation itself but with asking whether, if cogent, it raises difficulty for the higher-order role. To examine this, consider the story that Crisp (2014, 136–7) offers concerning testimony in relation to autonomy of this kind according to which “a person who accepts moral testimony is handing control over their moral sensibility to another, and hence implicitly surrendering their status as a freely thinking and autonomous agent.”26 Note three points about this account. First, it is not the (implausible) claim that someone who takes moral testimony ceases to be an autonomous agent altogether but, rather, that “she fails to exercise her autonomy on this occasion, and in this sense her action is heteronomous and her life to that extent less autonomous overall” (Crisp 2014, 137). Second, the decision to defer can itself be an autonomous one: “heteronomy arises once the control has been transferred, not in the transference itself” (Crisp 2014, 137). Third, Crisp’s view is not the claim that relinquishing control of one’s “moral sensibility” in this way is, all things considered, always impermissible but rather the claim that “in certain cases, judging on the basis of testimony is always worse, to some degree, than judging for oneself, even though in those very cases, and indeed others, it may be better overall to rely on testimony” (Crisp 2014, 141). Insofar as relying on moral testimony is a failure to exercise one’s autonomy on that occasion, the thought goes, such reliance is suboptimal because reductions of autonomy are always pro tanto problematic. There is much that needs to be said to fill out this account, but for our purposes, this sketch should do. The key point is that in any instance when we opt for deference to someone else rather than exercising our own moral reasoning, we are making our lives less autonomous overall in a way which is pro tanto objectionable.27 Although Crisp (2014, 129) frames his account as a critique of the “acquisition of moral beliefs through reliance on what others say” (i.e. in terms of our first-order role), its consequences seem rather broader. It would, after all, seem strange for Crisp to hold that “a person who accepts moral testimony is handing control over their moral sensibility to another” (Crisp 2014, 136) in a way that problematically undermines their exercising autonomy but to deny that there are any autonomy-threatening consequences in someone’s altering their confidence in an existing moral judgement on the basis of another’s say-so. This is not to say that such a position would be incoherent but merely that it fits poorly with the general account of the nature and value of autonomy that Crisp proposes. The requirement to be autonomous with respect to our “moral sensibility” seems much wider than merely the requirement not to defer to the testimony of others when forming moral judgements. Here, then, we have an account of the importance of autonomy that could be used to motivate scepticism about the formation and revision of our moral judgements on the basis of testimony in certain cases. It is
important, though, to stress the difference between two kinds of moral judgement revision. In the first kind of case, someone might revise their moral judgements on the basis of first-order considerations (e.g. if they receive testimony from someone who has, or who they take to have, access to relevant evidence that they lack). This kind of judgement revision is plausibly ruled out by the autonomy-based considerations noted earlier; but it isn’t the kind of (higher-order) case that we are focusing on in this chapter. By contrast, it is by no means clear that considerations of autonomy do anything to undermine the higher-order motivation for judgement revision.

Consider again one kind of judgement revision that we have been focusing on: an agent reduces their credence in their moral judgement that p upon hearing testimony that not p, because they take the testimony as (defeasible) evidence that their own judgement-forming capacity is misfiring on this occasion (i.e. the testimony is playing a higher-order role). In this case, the immediate target of higher-order evidence isn’t any kind of moral judgement but, rather, the agent’s judgements concerning the reliability of their own cognitive faculties. And it is perfectly consistent to propose a strong requirement for self-legislation in the moral domain – one that refuses to allow the testimony of others to legitimately make even the smallest direct impact on our moral judgements – while still allowing that the testimony of others can provide good reasons to doubt the reliability of our own judgement-forming mechanisms (including those we use to form our moral judgements). Indeed, being sensitive to the reliability of one’s judgement-forming mechanisms could be argued to be a paradigm of reflective, autonomous agency (as opposed to being blindly led by one’s unexamined judgement-forming habits).

Further, any autonomy requirement that entailed that we can never legitimately accept a judgement on the basis of testimony, where that judgement then leads (via a fairly straightforward route) to the legitimate revision of a moral judgement, would be manifestly implausible. Imagine, for example, that Poppy judges that it is wrong to eat fish and judges this way solely on the basis of the pain that this causes to the individual fish involved. She is then told by a reliable source that owing to new evidence, there is now a consensus among experts that fish don’t feel pain. It seems clear that she should now abandon her previous moral judgement based on this (non-moral) testimony – yet this hardly strikes against Poppy’s autonomy. Indeed, responding for oneself to new information such as this seems to be a paradigm case of autonomous judgement formation (as opposed, for instance, to being “goaded” into accepting a view on the basis of threats or propaganda that obscure the facts). Nor is it the case that we can reasonably hold that testimony can never serve as higher-order evidence concerning our moral judgements. Consider a case where you are told (again via impeccably reliable testimony) that the last time you formed a moral judgement on the basis of your own reasoning, you
were under the influence of some introspectively undetectable drug that severely compromises reasoning (moral or otherwise).28 It seems clear that this information should, at the very least, lead you to become significantly less confident in that judgement (and again, doing so seems to be an affirmation of autonomy rather than a degradation of it). All of this is not, of course, to deny that there could be some way to motivate a form of autonomy in moral judgement that ruled out the particular kind of higher-order role that we are considering. Rather, our claim is merely that work needs to be done to show why any independently plausible version of the autonomy requirement would have this consequence.

5 A Practical Reason for Rejecting the Higher-Order Role

The foregoing suggests that there is no general reason to be pessimistic about the higher-order role of moral testimony. Relying on such testimony to revise one’s moral judgements does not undermine any affective basis that legitimate moral judgements might have, nor does it undermine moral understanding or agential autonomy. Or, more precisely, if reliance on moral testimony is problematic in any of these ways, then this would need a novel argument, independent of standard pessimistic concerns. This seems to raise two questions:

1. Are there any circumstances in which we should not grant moral testimony a higher-order role?
2. If so, does the best explanation for this also provide a motivation for pessimism?

Many philosophers have addressed a more general question: are there any circumstances in which we should not grant testimony – any testimony – a higher-order role? Kelly (2011) gives several plausible examples of circumstances that disqualify testimony from playing any higher-order role. For example, we should not grant testimony a higher-order role if that testimony is not independent of some other testimony that one has already considered.29 Nor should we grant testimony a higher-order role if there is compelling evidence that, in the case at hand, one’s peer has not responded appropriately to the evidence you share (e.g. if they have arrived at an obviously false judgement on the basis of that evidence) or, in related cases, when the agent possesses compelling evidence that they have responded entirely appropriately to the first-order evidence.30 These conditions apply equally to moral and non-moral cases; for example, even in moral matters, the number of people providing additional testimony is largely irrelevant if they are not independent of each other. Doubtless, there are other cases where (almost) everyone would agree that no revision is in order, regardless of the domain. The relevant question here is whether there are any circumstances particular to the moral case that undermine the higher-order role of testimony.

One possibility concerns viciousness. Suppose one’s peer is generally reliable on moral matters: their moral judgements are reliably accurate. But they are also, to use a piece of technical terminology, a bastard. They are a bastard, not because they think (for example) that the only duties that exist are duties to oneself – in that case their moral judgements would not be reliably accurate – but rather because although they generally make the correct moral judgements (that one ought, ceteris paribus, to keep promises, pay debts, not cause harm, and so on), they systematically fail to live up to them. (Naturally, for us to have reason to trust their testimony we would need to stipulate that, for whatever reason, they make an exception to their bastardly behaviour when it comes to truth telling.) There is no non-moral (or at least non-normative) correlate of the moral bastard; their bastardy is defined in terms of making a set of (reliably correct) normative judgements but then not living up to the way that those judgements prescribe or recommend. If a set of judgements does not have normative content, those judgements do not prescribe or recommend, and they therefore cannot be contradicted in action in this way. Therefore, if moral bastardy rules out moral testimony playing a higher-order role, this is not a restriction that transfers to mundane non-moral (or non-normative) cases.

And, intuitively, moral bastardy does undermine (or at least diminish) any higher-order role of moral testimony.31 Consider Sally the saint and Bertie the bastard. Both are equally reliable in their moral judgements, but only the former acts appropriately in light of those judgements. Suppose that we judge that eating meat is morally permissible. Both Sally and Bertie demur, and they testify that “eating meat is wrong” (assume we share the same first-order evidence). Sally lives up to her testimonial judgement and assiduously refrains from eating meat. Bertie, on the other hand, only eats steak and invests all his spare money in factory farming (“What do I care for morality?” he adds). Intuitively, Sally’s testimony should weigh with us more than Bertie’s. In particular, the fact that Sally has considered the same evidence as we have, and yet has come to a contrary conclusion on the basis of that evidence, should (other things being equal) be treated as higher-order evidence that our initial response to the first-order evidence was not appropriate and hence cause us to revise our judgement that eating meat is morally permissible. But Bertie’s countertestimony should, intuitively at least, not be given the same (higher-order) evidential weight – because Bertie is a bastard.32 Somewhat less intuitively but still plausibly, Bertie’s testimony should be given no (higher-order) evidential weight at all – again because Bertie is a bastard. In a slogan: don’t let the bastards get you(r moral credences) down!33

These are just intuitions, of course. But if correct, they suggest that there are particularly moral conditions on whether moral testimony
should be given higher-order evidential weight. If they are incorrect, they need to be explained away. By contrast, if the issue is non-moral (or non-normative) – for example, about the division of a restaurant bill – whether or not one’s peer is a moral bastard, again assuming they’re the honest kind, seems irrelevant to the higher-order evidential weight that one should afford their testimony. Similarly, even in the case of moral judgements, it seems crucial that the testimony has moral content. There is, in our view at least, nothing intuitively problematic about reducing your confidence in the claim that eating meat is permissible based on testimony from a(n honest) bastard, where that testimony is to the effect that you’ve ingested a pill that compromises your ability to reliably form moral judgements. Can we move beyond bare intuitions and provide theoretical grounding for the claim that moral bastardy undermines the higher-order evidential role of moral testimony? The next few paragraphs explore two putative grounds. The first is as follows. Quite generally, insincere or confused testimony should not be given higher-order evidential weight. Let’s return to the example of Able and Mable. If Mable discovers that Able’s countertestimony is insincere or that Able doesn’t really understand what the word “princes” or the word “kill” means, then the testimony provides no reason for Mable to revise her judgement that Richard did not kill the princes. Moreover – this view continues – in the case of moral judgements (and only that case) being appropriately motivated is a sincerity or competence condition for moral judgement: an agent who (apparently) judges that ɸ is wrong, for instance, and yet is not in some way motivated against ɸing thereby reveals themselves to be insincere or confused about the meaning of the term “wrong.”34 The argument, then, is that insincerity or semantic incompetence is in general a disqualifying condition for testimony playing a higher-order role. And since in the moral case, insincerity or semantic incompetence follows from a lack of appropriate motivation, it follows that in the moral case, lack of appropriate motivation is a disqualifying condition for testimony playing a higher-order role. However, this is a treacherous argument because it is not clear that being appropriately motivated is a sincerity or competence condition for moral judgement. As Brink has argued, amoralists – agents who sincerely make moral judgements but are not appropriately motivated – seem to be a conceptual possibility.35 Moreover, arguments for thinking that such agents cannot be competent with the moral terms they deploy – such as Smith’s fetishism argument – have generally failed.36 Further, it doesn’t seem particularly relevant that our bastard feels no motivation to act morally. There would also be something intuitively problematic about testimony from someone who experiences some (minimal) motivation to act morally but consistently experiences stronger motivation in the other direction (on the basis of self-interest, sadism, or what have you).37 It is
worth considering, then, alternative groundings for the intuition that one should not give any higher-order evidential role to the moral testimony of bastards.

This brings us to our second suggestion. The “internalist” idea that moral judgements are necessarily connected to appropriate motivational states is one way of cashing out the thought that morality is practical. But there are other ways of cashing out this thought, which might do better in explaining the intuitions at stake. Suppose, for example, that one goal of moral inquiry is interpersonal action alignment – that is, not just for you personally to come to the correct moral judgements but for you and your fellow enquirers to act rightly in light of these judgements (i.e. for not merely your collective judgements but also your collective actions to align with the moral facts – which is the distinctive interpersonal normativity of morality). Moral inquiry would not be over – moral matters would not be settled – if everyone professed the same moral judgements but some consistently acted contrary to those judgements.38 Therefore, forming moral judgements on the basis of pure testimony from bastards would not meet this goal39 and so would be inappropriate: the hypothesised goal of moral inquiry explains the norm relating to the formation of judgements that are the product of that inquiry.

This reason for not relying on the testimony of bastards in forming moral judgements is easily overlooked, however, because forming moral judgements on the testimony of anyone seems to many to undermine other important goals of moral inquiry and judgement – such as moral understanding or autonomy – that are sufficient to make this kind of deference dubious. Although we suggested earlier that these other goals aren’t (obviously) compromised by revising our moral judgements on the basis of the testimony of others, it does seem problematic to revise these judgements on the basis of the testimony of bastards (i.e. to give such testimony a higher-order role). This is because allowing their testimony to play the higher-order role would, again, fail to advance the goal of interpersonal action alignment (in a way that revising moral judgements in response to the testimony of bastards need not fail to advance other goals of moral inquiry – see, e.g., Fritz 2018, 131). Suppose, for example, that in response to Bertie’s testimony that eating meat is wrong we revise downwards our credence that eating meat is permissible or suspend judgement on the matter entirely. This might make us slightly more reluctant to eat meat, so we eat less. Bertie, on the other hand, continues to judge that eating meat is wrong and continues to scarf the stuff down. In no sense has this change furthered the goal of interpersonal action alignment. By contrast, responding in the same way to Sally’s saintly testimony does seem to further this goal. So in the moral case, not only does forming judgements on the basis of the testimony of bastards not meet (or further) this goal of moral inquiry but revising judgements on the basis of bastardly testimony does not do so either.40 This is why the latter is problematic – and
problematic in the way that relates to the distinctive practical nature of morality and to the essence of moral bastardy, neither of which has an analogue in non-moral cases. It is this account, we suggest, that offers the most promising link between pessimism and rejecting moral testimony’s higher-order role. As it stands, though, this explanation is rather underdeveloped, so it would be premature to make any definitive judgements as to its success in justifying either pessimism or scepticism concerning the higher-order role of moral testimony (and even more so about its ability to link the two).

6 Conclusion

If the tentative explanation that we offered in the previous section is correct, then we can explain the distinctive epistemic restrictions on the higher-order role of moral testimony in part in terms of the distinctive, practical, action-guiding goals of moral inquiry and judgement.41 In one sense, this is surprising, since it seems to derive “oughts” from social and psychological “ises” (concerning the aims or goals of moral inquiry). But in another sense, it is not surprising at all, since it is only by understanding the distinctive functions of the practices of forming and – for this may be distinct – revising moral judgements that we can understand the norms appropriate to those practices (the same applies, of course, to judgements of other types, such as those of beauty or taste). It may be that one underestimates those distinctive functions if one holds (as some versions of moral realism seem to) that the goal of such practices is simply to “see matters aright” and then to transmit that knowledge. In this sense, the arguments of this chapter are consonant with those of Hills and others, who have suggested that a more developed and nuanced account of the aims of moral inquiry may be key to explaining the permissible moves within it. Our approach is different only insofar as the relevant goals proposed are collective rather than individualist.

Notes

1. Lead author: Marcus Lee. The other authors made equal contributions. Thanks to Marco Tiozzo, Kurt Sylvan, Javier González de Prado, Katherine Puddifoot, Daniel Whiting, Mona Simion, and all the other attendees of the University of Southampton Higher-Order Evidence Workshop on 25 March 2019.
2. We use the term evidence here in a loose sense that is neutral both about whether testimony is ever strictly evidential as a source of judgement and about whether reasons for being sceptical about moral testimony are strictly evidential.
3. E.g. Hills (2010, 219–30).
4. Cases of the kind we are concerned with are often invoked in discussions of the role of peer disagreement; however, testimony of the relevant kind isn’t necessary, and may not be sufficient, for generating peer disagreement. It is not necessary because an agent could learn of a peer’s contradictory opinions by non-testimonial methods: their disapproving looks, examining scans of their brain, and so forth. It is (arguably) not sufficient because of the existence of the kind of relativist view discussed in MacFarlane (2007).
5. Kelly (2011, 183).
6. See Hopkins (2011, 138), Hills (2010, 222), McGrath (2009), and Fritz (2018, 128).
7. We do not mean to suggest that these are the only first-order or higher-order roles that testimony might play but merely that these are the roles that we will be focusing on. For example, testimony may also serve as a first-order reason for revising your judgement in cases where you take the judgement that your interlocutor is conveying to be based on first-order evidence that you haven’t yourself assessed. Fletcher (2016, §2) labels this a type of indirect reliance on testimony.
8. For an overview, see Lackey (2006).
9. Compare Kelly (2011, 200).
10. See Joyce (2006) and Sinclair (2018).
11. E.g. McGonigal (2006) and Choo (2018).
12. Robson (2012, 3–4) defends a view of this kind regarding parallel positions in aesthetics.
13. We will also talk as if the pessimist takes it to be impermissible to form moral judgements on the basis of testimony, but some pessimists make only the weaker claim that there is something problematic (or sub-optimal) about forming such judgements. See, e.g., Crisp (2014).
14. However, they could appeal to in practice differences of the kind discussed in Elga (2007, 495–6).
15. A recent exception is Fritz (2018), who asks a question that is distinct from, but closely related to, our own: whether pessimism about the first-order role supports steadfastness about moral disagreement (roughly, the view that disagreement in moral matters ought not affect our own judgements). His conclusion is that it does not. More precisely, he concludes that such pessimism does not rule out responding to disagreement by reducing one’s confidence in or by suspending one’s judgements (2018, 130, 133–4), although it does undermine the practice of switching judgements in response to such disagreement (2018, 129, 133). Although we are generally sympathetic to Fritz’s paper, we think, as will become clear later, that he is rather too quick in his dismissal of autonomy arguments (2018, 125–6). Other minor disagreements, and differences in focus, should become clear later.
16. This is a further contrast with Fritz (2018). The authors of this paper differ significantly in their assessments of the plausibility of these explanations (and of pessimism more generally).
17. Fletcher (2016, §§4–5).
18. See Nickel (2001), Hopkins (2007), Hills (2009), and McGrath (2011).
19. Cf. Nickel (2001).
20. Of course, these two complaints are compatible.
21. See Nickel (2001), Hills (2009), and McGrath (2011).
22. See Fritz (2018, 136).
23. See Annas (2004) and Crisp (2014).
24. Compare Sliwa (2012, 188–9).
25. See Driver (2006, 636). Driver (2006, 635–6) raises some reasons for doubting that it does.
26. Crisp (2014, 137–8) also discusses a second interpretation of autonomy-based requirements, but we will focus here on the first.
27. Crisp (2014, 138) takes this to be objectionable on both epistemic and moral grounds, but other versions of this account might focus on one of these at the expense of the other.
28. This case is similar to the one described in Christensen (2011, 6–7).
29. “numbers mean little in the absence of independence” – Kelly (2011, 204–5). However, see Coady (2006) for some exceptions to this general claim.
30. Kelly (2011, 207–8).
31. See, e.g., Annas (2004, 64).
32. We take this intuition to be widely, but by no means universally, shared (e.g. a referee reports not sharing it).
33. One suggestion, which we won’t explore here, might be that our contempt for such people makes us psychologically incapable of taking their testimony as reason to alter our credences in the relevant way (compare to DiPaolo (this volume) on the inability of certain “fanatics” to respond to disagreement).
34. See Smith (1994).
35. Brink (1989, ch. 3).
36. See Lillehammer (1997).
37. Further, this explanation doesn’t seem to vindicate the claim that there is something exceptional about moral testimony (since we should be suspicious of testimony in any domain from those who lack either sincerity or competence in that domain).
38. By contrast, non-moral inquiry would be over once the judgements were settled – see Stevenson (1963). This does not entail that moral inquiry possesses mechanisms that guarantee that all moral matters can be settled.
39. Either because one was oneself a bastard, in which case neither of your actions are aligned with the moral facts, or because you are not a bastard, in which case although your actions so align, the bastard’s do not.
40. The same applies, mutatis mutandis, to suspending moral judgements.
41. It follows that theories of the nature of moral inquiry and judgement that deny practicality cannot in this way explain the intuitive restrictions on the higher-order role of moral testimony. It is an interesting question whether this explanation extends to other normative judgements, such as aesthetic and prudential judgements. It seems to us that while such judgements are practical (or normative), they are not practical in precisely the same way as moral judgements, so it is not obvious that their practicality can similarly explain any analogous restrictions.

References

Annas, J. 2004. “Being Virtuous and Doing the Right Thing.” Proceedings and Addresses of the American Philosophical Association 78 (2): 61–75.
Brink, David O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Choo, Frederick. 2018. “The Epistemic Significance of Religious Disagreements: Cases of Unconfirmed Superiority Disagreements.” Topoi 6 (3): 336. https://doi.org/10.1007/s11245-018-9599-4.
Christensen, David. 2011. “Disagreement, Question-Begging, and Epistemic Self-Criticism.” Philosophers’ Imprint 11 (6): 1–22.
Coady, David. 2006. “When Experts Disagree.” Episteme 3 (1–2): 68–79. https://doi.org/10.3366/epi.2006.3.1-2.68.
Crisp, Roger. 2014. “Moral Testimony Pessimism: A Defence.” Aristotelian Society Supplementary Volume 88 (1): 129–43.
DiPaolo, Joshua. 2020. “The Fragile Epistemology of Fanaticism.” In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Driver, Julia. 2006. “Autonomy and the Asymmetry Problem for Moral Expertise.” Philosophical Studies 128 (3): 619–44. https://doi.org/10.1007/s11098-004-7825-y.
Elga, Adam. 2007. “Reflection and Disagreement.” Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Fletcher, Guy. 2016. “Moral Testimony: Once More with Feeling.” In Oxford Studies in Metaethics. Vol. 11, edited by Russ Shafer-Landau, 45–73. Oxford: Oxford University Press.
Fritz, James. 2018. “What Pessimism about Moral Deference Means for Disagreement.” Ethical Theory and Moral Practice 21 (1): 121–36. https://doi.org/10.1007/s10677-017-9860-8.
Hills, Alison. 2009. “Moral Testimony and Moral Epistemology.” Ethics 120 (1): 94–127. https://doi.org/10.1086/648610.
Hills, Alison. 2010. The Beloved Self. Oxford: Oxford University Press.
Hopkins, Robert. 2007. “What’s Wrong with Moral Testimony?” Philosophy and Phenomenological Research 74 (3): 611–34.
Hopkins, Robert. 2011. “How to Be a Pessimist about Aesthetic Testimony.” The Journal of Philosophy 108 (3): 138–57. https://doi.org/10.5840/jphil201110838.
Howell, Robert J. 2014. “Google Morals, Virtue, and the Asymmetry of Deference.” Noûs 48 (3): 389–415. https://doi.org/10.1111/j.1468-0068.2012.00873.x.
Joyce, Richard. 2006. The Evolution of Morality. Life and Mind. Cambridge, MA: MIT Press.
Kelly, Thomas. 2011. “Peer Disagreement and Higher-Order Evidence.” In Social Epistemology: Essential Readings, edited by Alvin I. Goldman and Dennis Whitcomb, 183–217. Oxford: Oxford University Press.
Lackey, Jennifer. 2006. “Knowing from Testimony.” Philosophy Compass 1 (5): 432–48. https://doi.org/10.1111/j.1747-9991.2006.00035.x.
Lackey, Jennifer. 2010. “A Justificationist View of Disagreement’s Epistemic Significance.” In Social Epistemology, edited by Adrian Haddock, Alan Millar, and Duncan Pritchard, 298–325. Oxford: Oxford University Press.
Lillehammer, Hallvard. 1997. “Smith on Moral Fetishism.” Analysis 57 (3): 187–95. https://doi.org/10.1093/analys/57.3.187.
MacFarlane, John. 2007. “Relativism and Disagreement.” Philosophical Studies 132 (1): 17–31. https://doi.org/10.1007/s11098-006-9049-9.
McGonigal, A. 2006. “The Autonomy of Aesthetic Judgement.” The British Journal of Aesthetics 46 (4): 331–48. https://doi.org/10.1093/aesthj/ayl019.
McGrath, Sarah. 2009. “The Puzzle of Pure Moral Deference.” Philosophical Perspectives 23 (1): 321–44.
McGrath, Sarah. 2011. “Skepticism about Moral Expertise as a Puzzle for Moral Realism.” The Journal of Philosophy 108 (3): 111–37. https://doi.org/10.5840/jphil201110837.
Nickel, P. 2001. “Moral Testimony and Its Authority.” Ethical Theory and Moral Practice 4 (3): 253–66.
Robson, Jon. 2012. “Aesthetic Testimony.” Philosophy Compass 7 (1): 1–10. https://doi.org/10.1111/j.1747-9991.2011.00455.x.
Sinclair, Neil. 2018. “Belief-Pills and the Possibility of Moral Epistemology.” In Oxford Studies in Metaethics. Vol. 13, edited by Russ Shafer-Landau, 98–122. Oxford: Oxford University Press.
Sliwa, Paulina. 2012. “In Defense of Moral Testimony.” Philosophical Studies 158 (2): 175–95.
Smith, Michael. 1994. The Moral Problem. Philosophical Theory. Oxford: Blackwell.
Stevenson, Charles Leslie. 1963. Facts and Values: Studies in Ethical Analysis. New Haven, CT: Yale University Press.

9 Higher-Order Defeat in Collective Moral Epistemology

J. Adam Carter and Dario Mortini

1 Introduction

According to a popular view in individual moral epistemology termed pessimism, there is something deeply problematic about believing a moral proposition purely on the basis of another’s say so (e.g. Jones 1999; Nickel 2001; Driver 2006; Hopkins 2007).1 For example, there is something amiss with believing that cruelty is wrong because your teacher told you so but for no other reason. Although pessimists disagree among themselves about why moral deference is problematic, a point of agreement is that believing a moral proposition purely on the basis of moral testimony violates an epistemic norm governing belief. Robert Hopkins (2007) articulates such a norm as follows:

Grasping norm (GN): You (epistemically) should believe a moral proposition only if you grasp the moral grounds for it.

Because a grasp of the moral grounds of a proposition can’t be straightforwardly transmitted from speaker to hearer via testimony,2 some pessimists reason from GN that moral knowledge can’t be acquired via testimony at all. For our purposes, we remain agnostic on this point.3 What we take to be more interesting, from an epistemological point of view, is what GN implies more generally about what moral knowledge demands. On one way of thinking, knowledge, per se, doesn’t require grasping grounds,4 so it follows from GN that there is no such thing as moral knowledge, because nothing corresponding with (mere) knowledge is normatively constrained in this demanding way; yet whenever GN is satisfied, one is in the market for moral understanding, for which such a grasp is essential.5 An alternative route open to pessimists is to maintain that knowledge, generally, doesn’t require grasping grounds; however, it does require this in the special case of moral knowledge.6

Our main interest here is not to adjudicate this dispute. Rather, we will investigate how the foregoing predicament that gives rise to it also generates some interesting and hitherto unnoticed epistemological puzzles when transposed to the collective level, particularly with regard to the relationship between collective moral knowledge, disagreement, and defeat.

Here is the plan for the chapter. Section 2 sharpens the foregoing problem in individual epistemology and argues that if there is moral knowledge, then it plausibly involves what robust virtue epistemologists (e.g. Greco 2003, 2010; Sosa 2009, 2015) call cognitive achievement, or cognitive success (e.g. true belief) creditable to the exercise of cognitive ability. An implication of this “achievement” requirement on individual moral knowledge is that it is, at least in principle, more easily defeated than otherwise, given that more is required to retain moral knowledge than non-moral knowledge. However, this apparent fragility is not particularly worrying at the individual level. This is because, as we will show, individual-level abilities that give rise to achievements are generally stable. Interestingly, as we will see, things are different at the collective level. In Section 3, we take as a starting point that if there is collective moral knowledge, then it must plausibly be primarily creditable to (collective) ability.7 But what does this involve? In Section 4, we taxonomize two mainstream accounts of collective knowledge, the joint acceptance model and the distributed model, and then, in each case, we show how the idea that collective moral knowledge requires collective achievement would be plausibly modelled. Section 5 argues that, on both models, collective moral knowledge is extremely fragile; it is not only difficult to acquire8 but also much easier than other kinds of group knowledge to defeat.

2 (Individual) Moral Knowledge, Credit, and Defeat

The metaethical antirealist might respond to the pessimist’s key idea by trying to undermine the initial puzzle motivating it. Perhaps there is no moral knowledge. This might be because surface-level moral claims are categorically false (i.e. error theory9) or non-truth-evaluable (i.e. noncognitivism). If this is right, then it’s no wonder that something seems amiss about gaining moral knowledge through testimony. Metaethical antirealist views that respond to the initial puzzle this way are committed to skepticism about moral knowledge (and, more generally, to skepticism about any kind of evaluative knowledge as it is traditionally – i.e. realistically – construed10). An assumption that we will be making in what follows is non-skeptical: moral knowledge, realistically construed, is possible, and there is at least some moral knowledge.

With this assumption in play, let’s focus in on what moral knowledge might involve over and above what non-moral knowledge involves. The most natural general answer is that it involves a certain kind of ability not required by non-moral knowledge. Why? First, “ability” is the sort of thing that can’t be transmitted simply through testimony. (Consider that David Gilmour can tell you how to play a guitar solo, but it doesn’t follow that you are thereby able to play that guitar solo.11) Second, grasping grounds implicates not only something that non-moral knowledge doesn’t essentially involve but also an ability non-moral knowledge doesn’t essentially involve.12 Third, if moral knowledge requires more by way of ability than non-moral knowledge, then this could help us to make sense of an important data point about the relationship between moral knowledge and moral action, to wit that the moral goodness of an action plausibly depends not only on doing the right thing but also on doing it for the right kind of reason. If moral knowledge, as such, requires abilities beyond what non-moral knowledge requires,13 then we can easily make sense of the idea that moral knowledge is important to morally good action. If moral knowledge lacked any such abilities, then this connection would be harder to explain.

Granted, if all knowledge demands a lot of us by way of ability, then it would be hard to see how moral knowledge might be distinctive in what it demands of us. There is one view of knowledge in particular – robust virtue epistemology – according to which all knowledge has to be primarily creditable to ability.14 As John Greco (2003, 116) succinctly puts it:

To say that someone knows is to say that his believing the truth can be credited to him. It is to say that the person got things right due to his own abilities, efforts and actions, rather than due to dumb luck, or blind chance, or something else.

This view has a number of well-known advantages, not least that it offers an elegant way to deal with standard Gettier cases.15 However, one of the main disadvantages of this kind of ability-heavy view of knowledge is that it is too strong to reconcile with the prevalence of testimonial knowledge gained cheaply – such as by trusting a reliable source in the absence of defeaters. A point notably made by Jennifer Lackey (e.g. 2007) is that in paradigmatic cases of testimonial knowledge exchange (e.g. as when one asks for directions in a new town), it should be the testifier rather than the testimonial recipient who deserves the credit (if anyone does) for the recipient getting things right when they do.16

While testimony cases pose a serious obstacle for robust virtue epistemology as a general account of knowledge simpliciter, they offer an interesting vantage point to appreciate just how closely two things line up together:

1. Moral knowledge, which we’ve shown plausibly requires ability in a way that non-moral knowledge does not.
2. Knowledge simpliciter, which, according to robust virtue epistemology, also requires the exercise of ability in a way that (as we’ve seen) is in tension with the thought that some knowledge is easily transmittable via testimony.

A plausible working hypothesis to draw here is that even if all knowledge does not require the kind of cognitive achievement (i.e. cognitive success due to ability) that the robust virtue epistemologist identifies with knowledge and so takes to be necessary for acquiring it, moral knowledge in particular does require this. For ease of reference, call this idea the credit condition on moral knowledge.

Credit condition on moral knowledge: S knows a moral proposition, pm, only if S’s believing pm truly is primarily creditable to S’s exercise of (morally relevant) cognitive ability.

There are various ways that a credit condition on moral knowledge might be substantively glossed. For instance, those sympathetic to Hills’s (2009) thinking might insist that the abilities involved in the credit condition include some of those that Hills takes to play a role in understanding.17 Alternatively, one might view the abilities referenced by the credit condition more standardly along virtue reliabilist lines.18 Either way, a credit condition on moral knowledge, no matter how the details are filled in, is going to carry with it an important commitment to thinking about defeat and knowledge asymmetrically in cases of moral and non-moral knowledge. In particular, an implication is that moral knowledge is going to be – at least in principle – less resilient to being undermined by defeaters than non-moral knowledge, and this comparative fragility is on account of its being more demanding. To make this idea more concrete, consider the following case:

Cognitive Saboteur: Through the exercise of his excellent moral reasoning, Theon comes to appreciate that selling high-risk subprime mortgages is morally wrong. Furthermore, through the exercise of his excellent math abilities, Theon comes to appreciate that Pythagoras’s theorem is true – viz, that the square of the hypotenuse of a triangle equals the sum of the squares of the other two sides. Unfortunately, Varys whispers to Theon ten confusing moral claims and ten confusing mathematical claims, with the sole purpose of sabotaging Theon’s cognitive life. Varys’s testimony has destabilized Theon’s moral and mathematical abilities, which leads Theon to rightly begin to distrust them and even forget how to exercise them, even though – crucially – he retains his beliefs (which he now takes only on reliable testimony) that selling high-risk subprime mortgages is morally wrong and that Pythagoras’s theorem is true.

A first point to note is that even without the mathematical abilities that he had before, he can know that Pythagoras’s theorem (i.e. that the square of the hypotenuse of a triangle equals the sum of the squares of the other two sides) is true simply by continuing to trust mathematical experts.19
However, while Varys’s testimony doesn’t defeat Theon’s mathematical knowledge, even if it has a deleterious effect on his mathematical abilities, it does seem not only to wreck Theon’s moral abilities (vis-à-vis the subprime mortgage proposition) but also, via the credit condition on moral knowledge, to defeat his moral knowledge. After all, these moral reasoning abilities are undermined such that Theon is now believing what he does about the morality of subprime lending by simply trusting others.

Fortunately, this difference in fragility is not particularly concerning – despite initial appearances – at least at the individual level where we are considering it presently. This is because abilities demanded by any plausible unpacking of the credit condition on moral knowledge must be stable in such a way that they will in practice withstand all but the strongest kinds of Cognitive Saboteur–style cases. To see why this point holds, consider how proponents of a credit condition on knowledge diagnose Plantinga’s (1993) brain-lesion case, in which an undetected brain lesion happens to reliably cause the subject, Al, to believe that he has a brain lesion despite having no other evidence to support this. Is this an ability to which we can credit Al? If so, then (oddly) it looks as though Al’s believing truly that he has a brain lesion is primarily creditable to ability rather than, say, luck. But this seems too permissive; Al’s getting it right seems to have nothing to do with his abilities.20 The way that robust virtue epistemologists such as Greco have dealt with such cases is to insist that the kind of abilities that can generate knowledge must be, as he puts it, grounded in the subject’s cognitive character. According to Greco, this means they must (2010, 152)

be (a) stable . . . and (b) well integrated with other of the person’s cognitive dispositions . . . the cognitive process associated with the lesion is not well integrated with other aspects of the person’s cognition. The process produces only a single belief, for example, and it is unrelated and insensitive to other dispositions governing the formation and evaluation of belief.

This point about stability has an important ramification for how we should think about cases like Cognitive Saboteur. What’s important is that knowledge-generating abilities will be, in virtue of being well integrated into the subject’s cognitive character, highly resilient to being undermined in the way described in Cognitive Saboteur. An implication is that while, say, testimony might suffice to undermine a poorly integrated disposition that falls short of a bona fide ability by the lights of a plausible credit condition, the wrecking of well-integrated cognitive abilities (at least, at the individual level) through the acquisition of new
beliefs will not be easy to do at all.21 And this is welcome news: it means that – at the individual level, at least – the comparative asymmetry in resilience to defeat between moral and non-moral knowledge is unlikely to generate any serious skeptical threat to the (individual) moral knowledge we have. Things, however, are different at the collective level. And it’s to this point that we’ll now turn.

3 Collective Moral Knowledge: Parity Principles

Just as individuals know things, so do groups. For example, the Federal Bureau of Investigation (FBI) knows where the president is at all times. Chevrolet knows that airbags must be put in cars before they can be sold. CERN knows that the 125 GeV/c² particle discovered in 2012 is a Higgs Boson. On one way of thinking about what group knowledge involves, the aforementioned knowledge ascriptions come out true provided that at least one (or perhaps several or most) individuals of the target group possesses the relevant item of knowledge. This view is termed summativism: what’s key to summativism is that group knowledge reduces to individual knowledge. Non-summativism, by contrast, is a more philosophically interesting way of thinking about group knowledge. According to non-summativism, which is gaining traction in social epistemology,22 groups can have epistemic properties even if no individual in the group possesses them, including the property of knowledge.23

In the next section, we’ll review some of the standard ways in the non-summativist literature to make good on this idea (and how these connect with different ways of thinking about non-summativist moral knowledge, in particular). But first we want to make explicit four assumptions that we will be making and that allow us to engage in some new ways with puzzles that arise in connection with (non-summative) moral knowledge, disagreement, and defeat. First, we are going to assume that there is non-summativist knowledge. That is, we assume that there are group subjects who know things, where this group-level knowledge is not reducible to a summation of the individual knowledge of its members.24 Second, we are going to assume that there is not merely non-summativist non-moral knowledge but also non-summativist moral knowledge – viz, in some cases, a group can know a moral proposition. The third and fourth assumptions will be the most important in what follows. The third assumption is what we’ll call Parity Principle 1:

Parity Principle 1: If an individual S knows a moral proposition, pm, only if S’s believing pm truly is primarily creditable to S’s exercise of (morally relevant) cognitive ability, then the same goes for (non-summative) group agents.

Parity Principle 1 is a special case of a more general parity principle that is more or less universally accepted in collective epistemology. This more general principle says (roughly) that epistemic conditions (e.g. justification) on individual knowledge carry over, mutatis mutandis, to the collective level.25 What we’re calling Parity Principle 2 is just another instance of the general principle: if individual moral knowledge requires that a credit condition be satisfied, then, mutatis mutandis, so does group moral knowledge. Parity Principle 1 and the credit condition on moral knowledge jointly imply Parity Principle 2:

Parity Principle 2: A group g (non-summatively) knows a moral proposition, pm, only if g’s believing pm truly is primarily creditable to g’s exercise of (morally relevant) cognitive ability.

Parity Principle 2 is, effectively, a group-level version of the individual credit condition on moral knowledge. The individual-level version of this principle – while it made moral knowledge easier to defeat than non-moral knowledge – did not do so substantially. This was due to the stability of individual-level abilities of the sort that are capable of generating individual-level knowledge. Whether the same holds for group-level knowledge remains to be seen.

4 Collective Moral Knowledge: Two Varieties

In this section, we first outline two strategies for fleshing out non-summativist knowledge, generally speaking:

1. the joint-acceptance model.
2. the distributed model.

Then, for each account, we outline what it would take to countenance Parity Principle 2. Given the differences between the two accounts, the shape the credit condition will take in each case will be different.

4.1 Joint-Acceptance Model

According to the joint-acceptance model of group knowledge, knowledge is built out of group belief (e.g. Gilbert 1987, 2002, 2013). A group belief is, itself, a function of conditional commitments on the part of its individual members. The key features of the joint-commitment account of group belief are as follows:

Joint-acceptance belief (JAB): (i) A group, g, believes p iff the members of g jointly accept p; (ii) the members of g jointly accept that p when the members conditionally commit to accept that p; (iii) members of g conditionally commit to accept that p when each is committed to acting as if p provided that the others do.

For example, according to JAB, a jury believes that the defendant is guilty provided that the members of the jury commit to act, in their capacities as jury members, as if the defendant is guilty. This will include voting guilty, such as by raising their hand at the appropriate time, responding in a way that is consonant with a guilty vote when queried by the foreman or judge, and so on. Note that on Gilbert’s JAB model, this conditional commitment does not extend to the individual members’ believing the defendant is guilty in a private capacity, just to their acting as if the defendant is guilty in their capacity as jurors.26

Nothing about JAB prevents a group from believing a proposition that is false; group knowledge, then, at least requires that the group belief be true, as well as justified. Notice, though, that the “justification” of a JAB-style group belief won’t be a matter simply of whether the individuals’ beliefs are justified.27 After all, as a non-summativist model, JAB (as the jury example illustrates) does not require individual belief for group belief. So what, then, is the source of group-level justification, when a group-level belief is justified? A natural answer here is a simple reliabilist answer, one that does not require additional group beliefs to function as group reasons.28 On the simple reliabilist model, a JAB-style group belief is justified just in case the process of joint acceptance is one that reliably gets to true beliefs.29 On such a model, then, we might think of group knowledge provisionally as a JAB-style belief that is true and that arises from truth-reliable joint acceptance.

There is, unfortunately, a lurking problem: joint acceptance issues in group-level propositional outputs with both mind-to-world and world-to-mind directions of fit. In this way, it is importantly disanalogous from traditionally reliable belief-forming processes at the individual level (e.g. perception), which, when functioning normally, issue only mind-to-world outputs. As such, it’s not clear how “joint acceptance” is plausibly a reliable process in a sense that would mimic the kind of reliable processes that we expect to issue in individual knowledge.

Jeroen de Ridder (2014) proposes a way to get around this problem. Taking scientific group knowledge as a paradigm for group knowledge, de Ridder maintains that a group is justified in believing something, p, only if the group belief is properly based on a reliable process of inquiry, where a process of inquiry implicates a joint commitment to getting to the truth about whether or not p. This caveat avoids the worry facing a flat-footed reliability account because while unqualified “joint acceptance” as a process type isn’t a viable candidate for reliability, specifically inquiry-directed joint acceptance, by comparison, is.

206

J. Adam Carter and Dario Mortini

Even more, de Ridder's basing condition closes an important potential gap between the doxastic output issued by joint acceptance of the group and the reliable inquiry-directed process.

Let's suppose that something like de Ridder's account is on the right track and that, with suitable supplementation (such as an anti-Gettier proviso), it offers a workable non-summativist account of group knowledge built out of JAB-style group belief. Even if these assumptions are granted, it follows from Parity Principle 2 that additional conditions will need to be satisfied if the group is to have specifically moral knowledge. After all, the credit condition that features in Parity Principle 2 isn't going to be secured simply through a basing condition such as de Ridder's, viz where the relevant group output must be based on a reliable inquiry-directed process. For one thing, not all processes are abilities (as the discussion of cognitive integration in Section 2 reveals at the individual level). And so what follows from Parity Principle 2 is that a workable JAB account needs to be supplemented with a further account of group ability if it is to countenance group moral knowledge. For another, such an account of group ability needs to be put to work in the account in such a way that the account can explain how (when a group knows a moral proposition) the group ability is what primarily explains why the group's doxastic output is true.

4.2 A Distributed Model

On a distributed model of non-summativist group knowledge, a group can know something even though none of the individuals knows it and even if the group doesn't jointly commit to the proposition in the way required by JAB. What's important, on the distributed model, is principally that the group generates propositional outputs in a reliable way. For example, on Alexander Bird's (2010) model, what's key to group knowledge is that the group plays functional roles that are analogous to the knowledge-generating cognitive powers of individuals.30 In particular, according to Bird, group knowers will have the following properties (Bird 2010, 43–4):

(1) They have characteristic outputs that are propositional in nature (propositionality).
(2) They have characteristic mechanisms whose function is to ensure or promote the chances that the outputs in (1) are true (truth filtering).
(3) The outputs in (1) are the inputs for (a) social actions or for (b) social cognitive structures, including the very same structure (function of outputs).

A group, for Bird, is functionally integrated when there is dependence among its members for their proper functioning, and such dependence determines group membership.
This is starkly different from the Gilbert-style requirement of joint commitment. One similarity, though, concerns a broadly reliabilist requirement that features in both models. On the JAB model (glossed with de Ridder's justification condition), the reliability requirement on group knowledge is a matter of a group belief's being properly based on a reliable process of inquiry. On a distributed model like Bird's, the reliability requirement is captured in clause (2) – viz Bird's truth-filtering requirement.

Let's now assume for the sake of argument that Bird's account – supplemented with a suitable anti-Gettier proviso – offers a viable account of non-summativist distributed group knowledge. Even if this is assumed, it follows from Parity Principle 2 that an additional credit condition will need to be satisfied if the group is to have specifically moral knowledge and not merely non-moral knowledge.

What would it take to satisfy this further credit condition on a distributed model? Given that it's possible for all of Bird's conditions to be met while the credit condition is not met, further elaboration is needed. One suggestion here can be extracted from recent work on group knowledge by S. Orestis Palermos (2015). Palermos, in discussing the conditions under which group knowledge might be an achievement on virtue reliabilist lines, proposes a kind of modal condition according to which "getting to the truth of the matter as to whether p (or not p) could only be collectively achieved and is thereby creditable only to the group as a whole."31 Transposed to the language of Bird's model, getting to the truth of the matter as to whether p (for some moral proposition p) is primarily creditable to g's exercise of a (morally) relevant ability if and only if the group's truth-filtering mechanisms are not only sufficient but also necessary for getting to the truth of the matter as to whether p. With this kind of supplement, then, we can in principle make sense of differential demands within a distributed model for (1) non-moral knowledge and (2) moral knowledge, respectively, where the latter, but not the former, is meant to countenance Parity Principle 2.
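To make the shape of this condition vivid, here is one schematic rendering; the notation is introduced here for exposition only and is not Bird's or Palermos's own formalism:

```latex
% A schematic rendering of the distributed-model credit condition
% (notation introduced here for exposition; not Bird's or Palermos's
% own formalism; requires the amsmath package for align*):
%   K_m(g, p):    group g has moral knowledge that p
%   Credit(g, p): g's truly believing p is primarily creditable to g's
%                 exercise of a (morally) relevant ability
%   Suff(M_g, p): g's truth-filtering mechanisms M_g are sufficient for
%                 getting to the truth of the matter as to whether p
%   Nec(M_g, p):  M_g are also necessary for getting to that truth
\begin{align*}
  &K_m(g, p) \rightarrow \mathit{Credit}(g, p)
    && \text{(Parity Principle 2)} \\
  &\mathit{Credit}(g, p) \leftrightarrow
    \mathit{Suff}(M_g, p) \wedge \mathit{Nec}(M_g, p)
    && \text{(Palermos-style supplement)}
\end{align*}
```

It is the necessity conjunct that does the distinctive work here: Bird's truth-filtering clause already secures something in the vicinity of sufficiency, and, as the Credit Swamp case in Section 5.2 will suggest, it is the necessity conjunct that easy moral knowledge threatens.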

5 Collective Moral Knowledge and Defeat: Two Negative Results

In this section, we want to show how both of the accounts of group knowledge sketched in Section 4 – viz the joint-acceptance and distributed accounts – face a dilemma concerning higher-order defeat: in short, each turns out to be able to accommodate group-level moral knowledge only by making such knowledge problematically fragile.32,33

5.1 Joint-Commitment Model

Recall that, on a JAB-style account of group knowledge, it needs to be clear how – when a group knows a moral proposition – a group ability
is what primarily explains why the group's doxastic output is true. But how, exactly, is a group going to acquire any abilities that it might have on a joint-acceptance model?

Miranda Fricker (2010) offers a suggestion. In her work on collective character traits, Fricker shows how we can make room for collective character virtues within a Gilbert-style joint-acceptance model, according to which joint acceptance is what fixes the non-summative properties of a group. On Fricker's proposal, a group g has a collective character virtue when the members of g jointly commit to a good motive as a body. For instance, a committee that jointly commits to being open-minded or impartial when undertaking some task type, T, can be considered open-minded vis-à-vis that task type.

It is contestable whether character virtues must themselves be reliable.34 Abilities, at least those that generate knowledge, on the other hand, must be. However, a reliability requirement can naturally be incorporated into the kind of character account that Fricker is proposing: the idea, in short, is that a group has an ability just in case it

(1) Jointly commits to achieving some good end, where "good" is relative to a given domain (e.g. moral, epistemic, aesthetic, etc.).
(2) Is reliably successful enough at bringing this good end about.

On this kind of template view, then, a JAB model could accommodate Parity Principle 2 by insisting that when a group knows a moral proposition, what primarily explains why the group's doxastic output is true is a group ability that is, itself, fixed in part by joint commitments to achieving some epistemically good end.

Unfortunately, no matter how we fill in further details with this proposal, there is a looming problem. To bring this problem to light, consider the following case:

Disagreement:35 A bioethics policy committee, C, is tasked with determining whether it is morally acceptable to use BrainEx technology to perform a perfusion on a dead human brain.36 The committee (with reference to JAB) jointly commits to two things. First, they jointly commit to investigating the matter of whether performing such perfusions is morally acceptable in an intellectually rigorous and open-minded way that accords with scientific standards. Then – following their detailed investigation – the group makes a second joint commitment: to the truth of the proposition that using BrainEx technology to perform a perfusion on a dead human brain is not morally acceptable. During the course of the committee's deliberation, one of the committee's members, member A, registered reservations about how the committee was weighing evidence about what counts as "brain death," and this led to a disagreement with another member, member B.
But A agreed with B and the rest that the total evidence overwhelmingly supported what the group jointly committed to (viz. that such perfusions are not morally acceptable); thus, A joined the rest in the eventual joint commitment that was made.

A first thing to note about the Disagreement case is that it (assuming for the sake of argument that the target proposition is true) looks like about as good a candidate for non-summativist moral knowledge within a JAB framework as you could expect to encounter in practice. It is, after all, not reasonable to expect that there will be no disagreements whatsoever within a group about such things as how certain kinds of evidence (in this case, concerning usage of the term brain death) should be interpreted scientifically. But, even so – and here is the crux of the problem – the kind of dispute that we find in Disagreement about the standards being used to evaluate the evidence turns out to be enough to defeat the group's moral knowledge.

The reasoning here is as follows: the defeasibility conditions for group ability on a JAB model are highly fragile. If, in Disagreement, some members submit during the course of the group investigation that the group is not following agreed-to scientific standards, then it trivially follows that (at least) some members are not acting in their capacity as group members as if the group is following such standards. But since joint commitments are, on the JAB model, conditional commitments, this means that even minority dissent has the power to release others from their conditional commitments.

Of course, a tempting gloss of the situation just described is to emphasize that the minority dissent that features in Disagreement is just a kind of higher-order disagreement about the methods used by the group to reach its conclusion. There was no first-order disagreement within the group about the truth of the target proposition itself. But, crucially, an attempt to minimize the epistemic significance of this (even lone) higher-order dispute is simply not on the table if Parity Principle 2 is to be upheld. Parity Principle 2, to reiterate, articulates a sense in which moral knowledge of a group must be creditable primarily to group ability (which we've fleshed out on the joint-acceptance model along Fricker's lines). Whenever that ability is undermined, group moral knowledge is thereby undermined too. And this is the case even if undermining a group ability is not sufficient for undermining non-moral group knowledge. So long as a lone higher-order disagreement within a group can suffice to undermine group ability, it can suffice to undermine group moral knowledge.

What we find, then, is an important disanalogy between the comparative fragility of (1) individual moral knowledge and (2) group moral knowledge, at least when group moral knowledge is theorized about within a joint-commitment framework.
A proponent of a joint-commitment model is forced to make a choice that (as we've suggested in Section 2) a proponent of individual moral knowledge is not forced to make. The choice is either to (1) give up the idea that group moral knowledge requires ability in a way that non-moral knowledge does not (i.e. give up Parity Principle 2) and sever the connection at the group level between moral knowledge and ability that we find well motivated at the individual level, or (2) accept Parity Principle 2 and accept that group moral knowledge is highly fragile,37 so fragile that it is likely to be undermined in cases that feature even outlying higher-order disagreement of the sort that we find in Disagreement. Either route is problematic.

5.2 The Distributed Model

A variation on the dilemma just sketched faces a proponent of distributed group moral knowledge. As we showed in Section 4.2, it's possible for all of Bird's conditions to be met while the credit condition implied by Parity Principle 2 is not met. The elaboration suggested, drawing from work by Palermos (2015), was as follows: a group, g, knows a moral proposition only if the group's truth-filtering mechanisms are not only sufficient but also necessary for getting to the truth of the matter as to whether p (for some moral proposition, p).

The combination of Bird's distributed account of group knowledge with a Palermos-style construal of a credit condition captures an important intuition about credit: the group deserves the credit if no subset of individuals could get the desired result alone. And indeed, in some cases of moral knowledge, that will be the case, especially perhaps difficult moral knowledge (e.g. as in the case of Disagreement, or perhaps even in the scientific case of CERN discovering the Higgs boson through widely distributed collaboration). The view, however, struggles when it comes to making sense of easy moral knowledge. To make this idea concrete, consider the following case:

Credit Swamp: A bioethics policy committee, C, is tasked with determining whether it is morally acceptable to allow, as a method of deterrence, the whipping of young children in hospitals who do not follow the doctors' advice. The committee realizes that this will be a short meeting. In a manner that satisfies all of Bird's three conditions – including a truth-filtering condition – the committee, working collaboratively and through the normal distribution of tasks across committee members, produces a prompt and professional report detailing why the answer is no.

Credit Swamp looks ex ante like a clear-cut case of distributed moral knowledge. The problem is that there's no straightforward way for a proponent of distributed knowledge to make sense of this while upholding Parity Principle 2.
In short, the situation is as follows: it's simply false that, in Credit Swamp, getting the right result could only be collectively achieved. Granted, the tasks are in fact distributed across the members of the group as in a typical case of distributed knowledge, but given how easy the moral question under consideration is, any one individual, we may assume, would have been able to get this correct result (i.e. that children shouldn't be whipped, even if it were a successful deterrent!). But once this point has been appreciated, the prospects that the credit condition is satisfied in this case look dim: after all, the truth-filtering mechanisms of the group were on display here, but is the group's getting it right primarily creditable to this distributed mechanism? It's not true that the result could have been achieved only through such collective mechanisms. Given the widespread agreement both in individual beliefs and individual abilities, each member has suitable epistemic coverage to do the relevant cognitive work that other members of the group happen to be doing.

The proponent of distributed moral knowledge might press back against our dilemma and simply deny that any moral knowledge is easy moral knowledge. Perhaps, as this line of thought would go, all moral knowledge is difficult. We reject this claim. However, our response to the worry doesn't require that we do. In fact, all that's needed to generate problems for a proponent of distributed knowledge who wants to uphold Parity Principle 2 is to point out that some moral knowledge is easy enough to come by that a collective effort is not needed to achieve it, even if a collective effort is used to achieve it. Our Credit Swamp case is meant to be such an example – one where the distributed filtering abilities are reliable and in fact are used to get to the truth but, given the wide coverage of the individual abilities and beliefs that bear on the target proposition, are superfluous.

6 Concluding Remarks

Our overarching aim here has been to motivate some new puzzles to do with defeat in collective moral epistemology, puzzles that have ultimately revealed collective moral knowledge to be surprisingly fragile compared to individual-level moral knowledge.

We took as a starting point, in individual epistemology, a distinction between (1) moral knowledge and (2) the kind of (non-moral) knowledge that is easily transferable by testimony. On the assumption that there is moral knowledge, we've argued that the best way to countenance it is with reference to the kind of credit condition that robust virtue epistemologists, albeit mistakenly, think that all knowledge, moral as well as non-moral, must answer to. We then argued, by parity, from the individual to the collective level as follows: if moral knowledge demands the satisfaction of a credit condition at the individual level, then the same should hold at the collective level.
In particular, the key parity principle that we defend maintains the following: a group, g, knows (non-summatively) a moral proposition, pm, only if g's believing pm truly is primarily creditable to g's exercise of (morally relevant) cognitive ability.

With this parity principle in play, we then outlined the two most prominent template accounts of non-summativist moral knowledge: the joint-acceptance account, notably defended by Gilbert, and the distributed model, defended by Bird. We showed what it would take to satisfy the credit parity principle on moral knowledge on each of these accounts, with reference to their substantive differences. The upshot was, in each case, a dilemma, an analogue of which we don't find at the individual level. Joint-acceptance accounts turned out to be capable of vindicating group-level moral knowledge in a way that satisfies the parity principle only at a substantial cost – viz by making group-level moral knowledge highly fragile, so fragile that (it was argued) a single intragroup dispute about methodology would (in the case Disagreement) be sufficient for undermining it. Distributed accounts faced a similar dilemma: for proponents of distributed group moral knowledge, the dilemma was to either reject the parity principle (and thus sever the connection at the group level between moral knowledge and credit that we find well motivated at the individual level) or retain that connection and accept that group moral knowledge is undermined (via credit swamping) whenever the moral knowledge at issue is too easy to require a collective effort.

These puzzles place the burden of argument on non-skeptical collective moral epistemologists to show us how (non-summativist) moral knowledge is possible, and in a way that avoids serious theoretical costs. We hope to have shown what some of these costs are and what kinds of considerations the non-skeptical collective moral epistemologist will need to grapple with in order to meet the challenge in a plausible way.38

Notes

1. For some recent critiques of moral pessimism, see, e.g., Sliwa (2012) and Enoch (2014).
2. See, e.g., Hills (2009), Hopkins (2007), and Lillehammer (2014).
3. For a more thorough discussion of moral testimony, moral deference, and its relationship to higher-order evidence, see Lee, Sinclair, and Robson (this volume).
4. See Lackey (2007) and Pritchard (2012). Cf. Greco (2010) for a reply.
5. See Nickel (2001) and Hills (2009) for developments of such an "understanding reply" to the problem of moral deference.
6. There is also logical space for the line that one epistemically should not have (mere) moral knowledge. Because this conflicts with the platitude that knowledge is at least epistemically permissible, we'll bracket this possibility.
7. Kallestrup (2016) maintains that all collective knowledge requires collective achievement.
8. It is, however, difficult to acquire on the two models, for different reasons.
9. See, e.g., Mackie (1977) and Olson (2014).
10. We say "traditionally construed" because it is available to the non-cognitivist to embrace, along with ethical non-cognitivism, also epistemic non-cognitivism, according to which knowledge attributions are expressions of epistemic approval. For discussion of this kind of a view, see Chrisman (2012).
11. Similar examples have been raised in recent work in the epistemology of know-how. See, e.g., Carter and Pritchard (2015) and Poston (2016).
12. For a discussion of grasping as a kind of ability, see Kvanvig (2003) and Grimm (2014).
13. This is of course not to imply that moral knowledge is always or even generally more difficult than non-moral knowledge to acquire. Some non-moral knowledge is obviously more difficult to acquire than some of the easiest moral knowledge (e.g. Goldbach's conjecture versus the wrongness of gratuitous evil). Rather, the idea is that moral knowledge categorically requires ability in a way that non-moral knowledge does not.
14. See, e.g., Greco (2003, 2010) and Sosa (2009, 2015) for representative defenses; cf. Zagzebski (1996) for a stronger version of the position which requires not only that these abilities be reliable dispositions but also that they feature distinctive motivations.
15. See, e.g., Greco (2010, ch. 6), Sosa (2009), Turri (2011), and Carter (2016) for representative discussions. Robust virtue epistemology also has the advantage of explaining why knowledge, qua achievement, has the kind of value often ascribed to it. For discussion, see Pritchard, Turri, and Carter (2018).
16. For related points, see Pritchard (2012) and Kallestrup and Pritchard (2014).
17. See, e.g., Hills (2016) for a development of the view of abilities and moral understanding from her (2009).
18. See, e.g., Sosa (2009) for an account of virtue reliabilist abilities as competences, which are dispositions with three components: seat, shape, situation (see also Sosa [2015]). A canonical presentation of virtue reliabilist abilities is given in Greco (2010).
19. For a discussion, see Graham (2006). While the idea that one can know mathematical propositions via testimony is widely accepted in the epistemology of testimony, it is contentious in the philosophy of mathematics, according to which proof is required for knowledge. Thanks to Justin Clarke-Doane for raising this point.
20. What goes for Al plausibly goes for other kinds of meta-incoherence cases in the classic reliabilist literature. For a discussion, see, e.g., Sosa (2000).
21. For a recent discussion of how abilities can be defeated, see Carter and Navarro (2017), who engage with this issue in the context of anti-intellectualism about know-how.
22. See Gilbert (2013) for an overview.
23. See especially the collections of papers in Lackey (2014) and Brady and Fricker (2016).
24. See Lackey (2014), Brady and Fricker (2016), and Gilbert (2013).
25. Note that "belief" is not an epistemic condition, per se.
26. For a critical discussion on this point, see Mathiesen (2006) and Carter (2015).
27. See, however, Lackey (2016) for an interesting kind of amalgamation of summativism and non-summativism in an account of group justification. For a criticism of Lackey's account, see Silva (2019).
28. However, such an account has been defended: see, e.g., Schmitt (1994). Cf., however, Lackey (2016, 346–7).
29. For a more sophisticated version of this view, see de Ridder (2014).
30. See Bird (2010, §4.3) for discussion of how this kind of functionalist approach draws from Durkheim's functionalism and the organismic analogy.
31. Our italics.
32. DiPaolo (this volume) discusses how a group of fanatics, given their dogmatic beliefs, might be more resilient to higher-order defeat than other more epistemically virtuous groups are.
33. For a more precise characterization of higher-order evidence that is congenial to the project that we pursue here, see Barnett (this volume, section 2).
34. This is a topic of longstanding debate in individual epistemology, in the literature on virtue responsibilism. See, e.g., Montmarquet (1993), Zagzebski (1996), and Baehr (2011).
35. For the sceptical significance of moral disagreement as an instance of higher-order evidence, see Tersman and Risberg (this volume). See also Turnbull and Sampson (this volume) for a steadfast account of moral disagreement.
36. For a recent discussion in Nature of some of the ethical quandaries surrounding perfusion on mammalian brains more generally, see www.nature.com/articles/d41586-019-01168-9.
37. We are neutral (here and elsewhere) on how these claims about knowledge would best interface with a linguistic theory of knowledge attributions.
38. Thanks to Justin Clarke-Doane and Michael Klenk for helpful comments on a previous version of this paper.

References

Barnett, Brian C. 2020. "Higher-Order Defeat in Realist Moral Epistemology." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Baehr, Jason S. 2011. The Inquiring Mind: On Intellectual Virtues and Virtue Epistemology. Oxford: Oxford University Press.
Bird, Alexander. 2010. "Social Knowing: The Social Sense of 'Scientific Knowledge'." Philosophical Perspectives 24 (1): 23–56. https://doi.org/10.1111/j.1520-8583.2010.00184.x.
Brady, Michael, and Miranda Fricker, eds. 2016. The Epistemic Life of Groups: Essays in the Epistemology of Collectives. Oxford: Oxford University Press.
Carter, J. Adam. 2015. "Group Knowledge and Epistemic Defeat." Ergo 2 (28): 711–35. https://doi.org/10.3998/ergo.12405314.0002.028.
Carter, J. Adam. 2016. "Robust Virtue Epistemology as Anti-Luck Epistemology: A New Solution." Pacific Philosophical Quarterly 97 (1): 140–55. https://doi.org/10.1111/papq.12040.
Carter, J. Adam, and Jesús Navarro. 2017. "The Defeasibility of Knowledge-How." Philosophy and Phenomenological Research 95 (3): 662–85. https://doi.org/10.1111/phpr.12441.
Carter, J. Adam, and Duncan Pritchard. 2015. "Knowledge-How and Epistemic Value." Australasian Journal of Philosophy 93 (4): 799–816. https://doi.org/10.1080/00048402.2014.997767.
Chrisman, Matthew. 2012. "Epistemic Expressivism." Philosophy Compass 7 (2): 118–26. https://doi.org/10.1111/j.1747-9991.2011.00465.x.
DiPaolo, Joshua. 2020. "The Fragile Epistemology of Fanaticism." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Driver, Julia. 2006. "Autonomy and the Asymmetry Problem for Moral Expertise." Philosophical Studies 128 (3): 619–44. https://doi.org/10.1007/s11098-004-7825-y.
Enoch, David. 2014. "A Defense of Moral Deference." The Journal of Philosophy 111 (5): 229–58. https://doi.org/10.2139/ssrn.2601807.
Fricker, Miranda. 2010. "Can There Be Institutional Virtues?" In Oxford Studies in Epistemology, edited by Tamar Szabo Gendler and John P. Hawthorne, 223–35. Oxford: Oxford University Press.
Gilbert, Margaret. 1987. "Modelling Collective Belief." Synthese 73 (1): 185–204. https://doi.org/10.1007/BF00485446.
Gilbert, Margaret. 2002. "Belief and Acceptance as Features of Groups." ProtoSociology 16: 35–69. https://doi.org/10.5840/protosociology20021620.
Gilbert, Margaret. 2013. Joint Commitment: How We Make the Social World. Oxford: Oxford University Press.
Graham, Peter J. 2006. "Can Testimony Generate Knowledge?" Philosophica 78: 105–27. https://philpapers.org/archive/GRACTG.pdf.
Greco, John. 2003. "Knowledge as Credit for True Belief." In Intellectual Virtue: Perspectives from Ethics and Epistemology, edited by Michael R. DePaul and Linda T. Zagzebski, 111–34. Oxford: Oxford University Press.
Greco, John. 2010. Achieving Knowledge. Cambridge: Cambridge University Press.
Grimm, Stephen R. 2014. "Understanding as Knowledge of Causes." In Virtue Epistemology Naturalized: Bridges between Virtue Epistemology and Philosophy of Science, edited by Abrol Fairweather, 329–45. Synthese Library, Studies in Epistemology, Logic, Methodology, and Philosophy of Science 366. Cham: Springer. https://doi.org/10.1007/978-3-319-04672-3_19.
Hills, Alison. 2009. "Moral Testimony and Moral Epistemology." Ethics 120 (1): 94–127. https://doi.org/10.1086/648610.
Hills, Alison. 2016. "Understanding Why." Noûs 50 (4): 661–88. https://doi.org/10.1111/nous.12092.
Hopkins, Robert. 2007. "What's Wrong with Moral Testimony?" Philosophy and Phenomenological Research 74 (3): 611–34.
Jones, Karen. 1999. "Second-Hand Moral Knowledge." The Journal of Philosophy 96 (2): 55. https://doi.org/10.2307/2564672.
Kallestrup, Jesper. 2016. "Group Virtue Epistemology." Synthese 24 (1): 23. https://doi.org/10.1007/s11229-016-1225-7.
Kallestrup, Jesper, and Duncan Pritchard. 2014. "Virtue Epistemology and Epistemic Twin Earth." European Journal of Philosophy 22 (3): 335–57. https://doi.org/10.1111/j.1468-0378.2011.00495.x.
Kvanvig, Jonathan L. 2003. The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
Lackey, Jennifer. 2007. "Why We Don't Deserve Credit for Everything We Know." Synthese 158 (3): 345–61. https://doi.org/10.1007/s11229-006-9044-x.
Lackey, Jennifer, ed. 2014. Essays in Collective Epistemology. Oxford: Oxford University Press.
Lackey, Jennifer. 2016. "What Is Justified Group Belief?" The Philosophical Review 125 (3): 341–96. https://doi.org/10.1215/00318108-3516946.
Lee, Marcus, Neil Sinclair, and Jon Robson. 2020. "Moral Testimony as Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Lillehammer, Hallvard. 2014. "Moral Testimony, Moral Virtue, and the Value of Autonomy." Aristotelian Society Supplementary Volume 88 (1): 111–27. https://doi.org/10.1111/j.1467-8349.2014.00235.x.
Mackie, John Leslie. 1977. Ethics: Inventing Right and Wrong. London: Penguin Books.
Mathiesen, Kay. 2006. "The Epistemic Features of Group Belief." Episteme 2 (3): 161–75. https://doi.org/10.3366/epi.2005.2.3.161.
Montmarquet, James A. 1993. Epistemic Virtue and Doxastic Responsibility. Lanham, MD: Rowman & Littlefield.
Nickel, P. 2001. "Moral Testimony and Its Authority." Ethical Theory and Moral Practice 4 (3): 253–66.
Olson, Jonas. 2014. Moral Error Theory: History, Critique, Defence. Oxford: Oxford University Press.
Palermos, Spyridon Orestis. 2015. "Active Externalism, Virtue Reliabilism and Scientific Knowledge." Synthese 192 (9): 2955–86. https://doi.org/10.1007/s11229-015-0695-3.
Plantinga, Alvin. 1993. Warrant and Proper Function. Oxford: Oxford University Press.
Poston, Ted. 2016. "Know How to Transmit Knowledge?" Noûs 50 (4): 865–78. https://doi.org/10.1111/nous.12125.
Pritchard, Duncan. 2012. "Anti-Luck Virtue Epistemology." The Journal of Philosophy 109 (3): 247–79. https://doi.org/10.5840/jphil201210939.
Pritchard, Duncan, John Turri, and J. Adam Carter. 2018. "The Value of Knowledge." In Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. https://plato.stanford.edu/archives/spr2018/entries/knowledge-value/.
Ridder, Jeroen de. 2014. "Epistemic Dependence and Collective Scientific Knowledge." Synthese 191 (1): 37–53. https://doi.org/10.1007/s11229-013-0283-3.
Schmitt, Frederick F. 1994. "The Justification of Group Beliefs." In Socializing Epistemology: The Social Dimensions of Knowledge, edited by Frederick F. Schmitt, 257–87. Lanham, MD: Rowman & Littlefield.
Silva, Paul. 2019. "Justified Group Belief Is Evidentially Responsible Group Belief." Episteme 16 (3): 262–81. https://doi.org/10.1017/epi.2018.5.
Sliwa, Paulina. 2012. "In Defense of Moral Testimony." Philosophical Studies 158 (2): 175–95.
Sosa, Ernest. 2000. "Reliabilism and Intellectual Virtue." In Knowledge, Belief, and Character: Readings in Virtue Epistemology, edited by G. Axtell, 33–40. Lanham, MD: Rowman & Littlefield.
Sosa, Ernest. 2009. Apt Belief and Reflective Knowledge. Oxford: Oxford University Press.
Sosa, Ernest. 2015. Judgment and Agency. Oxford: Oxford University Press.
Turri, John. 2011. "Manifest Failure: The Gettier Problem Solved." Philosophers' Imprint 11 (18): 1–11.
Zagzebski, Linda Trinkaus. 1996. Virtues of the Mind: An Inquiry into the Nature of Virtue and the Ethical Foundations of Knowledge. Cambridge: Cambridge University Press.

10 The Fragile Epistemology of Fanaticism

Joshua DiPaolo

1 Introduction

Westboro Baptist Church is labeled by the Southern Poverty Law Center as the "most obnoxious and rabid hate group in America."1 They are the fire-and-brimstone, anti-gay religious group that pickets military funerals while displaying deeply offensive signs that say things like "God Hates Fags," "Thank God for Dead Soldiers," and "Same-Sex Parents Doom Kids." The sheer breadth of their hate-mongering is remarkable; no one outside the church escapes their vitriol. They stomp on the American flag. They burn the Koran. They say God hates Jews. They even blame other Christians for same-sex marriage.

What's more remarkable than their fanatical hate speech is how they interpret their own behavior. According to the church, these are acts of love. By spreading this intolerant message, they are obeying the commandment to "love thy neighbor." They appeal to Leviticus 19:17–18, the famous dictum's first appearance in the Bible, to justify this interpretation.

Thou shall not hate thy neighbor in thine heart; thou shall in any wise rebuke him and not suffer a sin upon him. Thou shall not avenge, nor bear any grudge against the children of thy people, but thou shall love thy neighbor as thyself.

To hate your neighbor is to not rebuke him when you see him sinning; to love your neighbor is to warn him of the consequences of his sins.

During the filming of a BBC documentary on the church, documentarian Louis Theroux at one point tries to help a young church member, Jael Phelps, acknowledge the pain that the group causes. When Westboro members respond to criticism, they rapidly recite rehearsed rebuttals, appearing to be on "autopilot." However, in a moment of unexpected vulnerability and candor, Jael slowly, calmly, and thoughtfully answers (Theroux 2007):

Well, it's very simple. We read the scriptures and we tell people what the standard is. We don't do violence to people. We warn them that their sins are taking them to Hell. We do a courteous and loving thing to them. That's courteous and loving. And they hate us, and they beat on us, and they don't want to have anything to do with us. They're just downright mean to us sometimes, ya know? What did we do to them? Nothing but a courtesy.

As twisted as this all sounds, there's logic to it. Love can require intolerance and hurtful honesty. If a friend were planning on performing some heinous crime, I would try to stop them – not only for the sake of their potential victims but out of love and for their own sake, to prevent a corruption of their character. When silence would cost a loved one more than the pain associated with hearing a hard truth, telling the truth is the right thing to do. This is how Westboro members see their actions. Not sharing their message with the world would be wrong. It would be like letting my friend perform the heinous crime or remaining silent when the costs to a loved one are severe. Their beliefs are criticizable for many reasons, but not for being completely illogical.2

Consistent with this assessment of the Westboro belief system, several recently developed theories imply that the beliefs of fanatics, terrorists, and other extremists are often epistemically rational. Far from being wholly irrational creatures or crazy psychopaths, these characters are often embedded in social dynamics and informational networks that justify their hateful and intolerant worldviews. Perhaps these individuals have an informationally impoverished crippled epistemology that supports their worldview (Hardin 2002; Sunstein 2009). Or perhaps they're embedded in echo chambers, groups that foster disparities in trust between like-minded insiders and dissenting outsiders (Nguyen 2018; Sunstein 2009). According to these views, when beliefs are products of crippled epistemologies or echo chambers, those beliefs may be completely rational.

Although I don't take issue with these theories per se, neither adequately accounts for the distinctive relationship between fanaticism and higher-order evidence. Like other acts of extreme intolerance, Westboro's morally repugnant behavior targets precisely those people whose worldviews differ from their own. Jael doesn't see the hateful backlash against church members as providing reason to engage in soul-searching or to rethink her view. Holding fixed her interpretation of the church's behavior as loving and courteous, the backlash is incomprehensible to her as anything but unprovoked and undeserved hostility.

This intellectual behavior is characteristic of fanaticism. To be completely committed to their worldview, fanatics must have absolute confidence in its source. To avoid questioning their creed, fanatics must lack confidence in their own judgment. To steadfastly adhere to their doctrine, fanatics must interpret dissent and disagreement either as expressing straightforward, answerable objections to their worldview or as a hostile threat to their identity rather than as a reason to step back and engage in critical self-reflection.

I'm not going to epistemically evaluate fanatics' beliefs here. But I'll put my cards on the table. I tend to think the crippled epistemology and echo chamber stories too quickly grant rationality by neglecting the higher-order evidence that people have at their disposal even when they're operating with a crippled epistemology or stuck in an echo chamber. While defending the rationality of extremist belief, for instance, Cass Sunstein (2009, 121) acknowledges that extremists "frequently assume that their own group is not skewed or biased; they fail to make proper adjustments for the motivations and limited information of group members." But then he adds, "it is not easy to describe this failure as a form of irrationality." I don't see why not. Failing to adjust your beliefs in response to what you know about the quality of your evidence certainly looks epistemically irrational. If we're going to say fanatics' and other extremists' beliefs are epistemically rational, then we need to think seriously about the higher-order evidence that they typically possess.

These theories imply that fanatics are rational because what little they know supports their worldviews or because they're taught to distrust people with different worldviews. But, I'll argue, these theories don't take into account the higher-order evidence that fanatics typically possess that, on its face, undermines the rationality of their beliefs. Any theory that implies fanatics' beliefs are rational but fails to explain why this higher-order evidence doesn't undermine the rationality of those beliefs is inadequate.

In this chapter, I explain how fanatics, given the nature of fanaticism, treat or respond to their higher-order evidence. I do this to show what an argument for the rationality of fanaticism must do. The fanatic treats higher-order evidence, in particular disagreement from others, not as reason to rethink their worldview but as a threat to their identity. This treatment of higher-order evidence derives from how they understand their values qua fanatic: they're not to be questioned, and when they are questioned, their status is thereby threatened. If fanatics treat higher-order evidence in this way and their beliefs are rational, then their treatment of their higher-order evidence must be rational. Thus, this chapter lays the groundwork for further research: is the fanatic's treatment of higher-order evidence, which derives from the nature of his fanaticism, rational? Because the "nature of fanaticism" involves a distinctive stance toward the nature of certain values, this question blends the epistemology of higher-order evidence with moral epistemology: under what conditions is it rational to take this stance toward one's values?

I won't weigh in on whether fanatics' beliefs are rational or defend their treatment of higher-order evidence. Rather, I explain why fanaticism leads people to treat higher-order evidence in this way to lay the groundwork for a more adequate assessment of the rationality of their beliefs. This inquiry will also provide guidance on what must be done to help
fanatics change their minds. If we want higher-order evidence to make a dent in the fanatic’s belief system, we may need to unseat their convictions about the nature of the fanatic’s values.

2 Fanaticism: Pretheoretical Basics

The paradigmatic fanatic is the violent religious extremist who takes themselves to have divine sanction for terrible acts of cruelty and intolerance (Katsafanas 2019, 2). On 21 April 2019, nine suicide bombers attacked several locations in Sri Lanka, ultimately killing 258 people and injuring another 500. The Islamic State claimed responsibility for these coordinated attacks. Jihadis like these who embrace demanding values that require devotion and great personal sacrifice look like paradigmatic fanatics. But fanaticism needn't be religiously inspired or violent. The clearly fanatical Westboro Church doesn't engage in (physical) violence. And some forms of white nationalist fanaticism aren't religiously inspired. Just five weeks before the Sri Lanka bombings, an Australian man shot up two New Zealand mosques, ultimately killing 51 people and injuring another 49, in the name of white supremacist, anti-Muslim ideals rather than specifically religious values.

Katsafanas (2019) identifies six pretheoretical features associated with fanaticism. First, the fanatic has an unwavering commitment to some ideal. The fanatic is often willing to make extreme sacrifices to promote or preserve their ideal. Second, the fanatic typically has unwavering certainty about this ideal. Often aware of others' non-acceptance of the ideal, the fanatic remains steadfast in their absolute confidence in this ideal.3 Third, this certainty is often localized. Fanatics don't display a general inability to assess evidence or draw rational conclusions; they display rigid certainty in a narrowly circumscribed domain. Fourth, fanatics are intolerant and often violent. They typically attempt to impose their ideals or values on others who do not share them. Fifth, the fanatic is usually group oriented. They think of their identity as partly constituted by reference to a like-minded group and to a group to react against. Finally, Katsafanas notes that the ideals accepted by fanatics often have religious provenance, claiming divine revelation as the source of their unwavering commitment and certainty.

For the sake of understanding fanaticism, it's important to emphasize its logical independence from religion. Although I'll focus on a case of religious fanaticism, the chapter's implications should be understood more generally. Fanatics have strong moral commitments, derived from religion or elsewhere. These moral commitments are typically among their most troubling beliefs. Theories that imply that fanatical beliefs are rational imply that these commitments are rational. My purpose is to clarify what needs to be defended if we're going to accept the rationality of these moral beliefs.

Shortly, I’ll describe Katsafanas’s more theoretical, broadly Nietzschean account of fanaticism. First, I’ll focus on the rational warrant that fanatics have for their beliefs. Although the fanatic’s confidence seems to outstrip its warrant, the crippled epistemology and echo chamber theories question this. In the next section, I’ll explain how these theories attempt to rationalize fanatical belief, and I’ll argue that they fail to account for the higher-order evidence that fanatics typically possess.

3 Crippled Epistemologies and Echo Chambers

Why should we think that the beliefs of fanatics, extremists, terrorists, and other radicals are rational? The crippled epistemology and echo chamber theories both begin with the fact that we unavoidably, and uncriticizably, depend on others for information about the world. Few of our beliefs would be rational if it weren't rational to regularly trust the testimony of those around us. Because it usually is rational to believe what we are told by like-minded people, we are susceptible to rationally believing and maintaining belief in fanatical or extremist worldviews. Thus, dependence on others for information is the theoretical background shared by these two theories. But they differ in details.

The crippled epistemology (CE) story focuses on ignorance. The idea is that extremists know little, and what they know supports their extremism (Sunstein 2009, 41). Why are fanatics so ignorant, according to CE? Their group provides them only with information that supports the group ideology. They are exposed to little if any contrary information. And open questioning of this ideology is strongly prohibited. Furthermore, fanaticism essentially depends on exclusionary practices that affect the information that group members possess, keeping them largely ignorant of alternatives (Hardin 2002, 18). Anyone who disagrees with the group's creed exits – willingly, through excommunication, or worse. So why are fanatics often rational in their beliefs? They just don't know any better. The limited information, evidence, and knowledge they possess, most of which is supplied by like-minded group authorities, supports their beliefs.

In contrast, the echo chamber (EC) story focuses less on ignorance and more on trust. An echo chamber is, roughly, a group that fosters disparities in trust between like-minded insiders and dissenting outsiders (Nguyen 2018). Just as we rationally depend on others to provide ordinary information about the world, we also rationally depend on others to tell us whom to trust. Echo chambers are often constituted by an alignment between outputs of these two dependencies: group members agree about the truth of some ideology because purported authorities inform them of its truth, and this ideology tells its adherents to trust only those who share this commitment. For instance, Westboro relies on the Bible for its doctrine, which includes passages that the church interprets as encouraging distrust of those who don't share this commitment, like James 4:4: "friendship with
the world is enmity with God." Thus, fanatics in echo chambers needn't be ignorant of contrary information. Even if an outsider exposes them to information that contradicts their worldview, they can rationally dismiss it as coming from an untrustworthy source.

Thus, we have two arguments that fanatics' beliefs are often rational. Since fanatics have crippled epistemologies or are in echo chambers that favor their fanatical ideology, their beliefs in that ideology are rational. Rather than arguing against this conclusion, I want to show how both arguments rely on not taking into account higher-order evidence that fanatics typically possess, whether they're operating with a crippled epistemology or embedded in an echo chamber. I will do this to show what more needs to be said to defend this conclusion and to set the stage for the next sections, where I'll link fanaticism to higher-order evidence.

First, what is higher-order evidence? It can be two things: evidence about the quality of a body of evidence (e.g. about which relations hold between one's evidence and certain propositions, about how strong it is, about how representative it is, etc.) and evidence about a person's relation to evidence (e.g. about whether they have correctly assessed their evidence, about the capacities they have to assess evidence, etc.). For my purposes, information, empirical data, and a priori reasons all count as evidence. Epistemic considerations might better designate what I have in mind, but I'll stick with evidence. One of the main examples I'll use of higher-order evidence is disagreement, understanding it as potentially providing evidence of rational error.

3.1 Crippled Epistemology

Now let me explicitly state the argument from crippled epistemology to the rationality of fanatics' beliefs. (The argument from echo chambers is similar.)

Crippled Epistemology Argument

C1. If fanatics have a crippled epistemology that favors a certain ideology, then fanatics have been told that this ideology is true and that the people they know who agree with this ideology have given them only (and perhaps lots of) information that confirms it.
C2. If fanatics have been told that this ideology is true and that the people they know who agree with it have given them only (and perhaps lots of) information that confirms it, then all the information that fanatics possess all things considered supports this ideology.
C3. If all the information fanatics possess all things considered supports this ideology, then it's rational for fanatics to believe this ideology.
C4. Therefore, if fanatics have a crippled epistemology that favors a certain ideology, then it's rational for fanatics to believe this ideology.
C5. Fanatics have a crippled epistemology that favors a certain ideology.
C6. Therefore, it's rational for fanatics to believe this ideology.
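Since the argument's logical skeleton can get lost in the prose, here is one schematic reconstruction; the sentence letters are expository labels introduced here for convenience, not notation from the chapter itself:

```latex
% A schematic reconstruction of the Crippled Epistemology Argument
% (labels introduced here for exposition; requires amsmath for align*):
%   F: fanatics have a crippled epistemology that favors a certain ideology
%   T: fanatics have been told the ideology is true and given only
%      confirming information
%   S: all the information fanatics possess all things considered
%      supports the ideology
%   R: it is rational for fanatics to believe the ideology
\begin{align*}
  &\text{C1: } F \rightarrow T \qquad \text{C2: } T \rightarrow S \qquad
    \text{C3: } S \rightarrow R \\
  &\text{C4: } F \rightarrow R \quad \text{(from C1--C3, by hypothetical syllogism)} \\
  &\text{C5: } F \\
  &\text{C6: } R \quad \text{(from C4 and C5, by modus ponens)}
\end{align*}
```

So regimented, the argument is plainly valid: resisting the conclusion C6 requires rejecting a premise, which is why the discussion that follows targets C2 and C5.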

I won't question C1 or C3: C1 partially articulates what a crippled epistemology is, and C3 expresses a plausible total evidence principle. Instead, I'll question C2 and C5. The idea behind these claims is that the only relevant information fanatics possess is

(1) Testimony that the ideology is true.
(2) Any additional evidence they've been given that confirms this ideology (because if this were all the information they possessed, then all the information they possess would all things considered support their ideology).

Even if this information is misleading, and even if there exists veridical evidence out in the world that decisively refutes this ideology, those facts don't bear on the rationality of fanatics' beliefs. Only evidence, misleading or otherwise, that fanatics possess determines whether their beliefs are rational. The thought is that since (1) and (2) exhaust the relevant evidence that fanatics possess, their evidence all things considered supports their ideology.

The problem with this argument is that fanatics typically possess more evidence that bears on the rationality of their beliefs. I'm not merely claiming that there is evidence out in the world that would undermine their beliefs if only they possessed it. Rather, I'm claiming fanatics do typically possess information that bears on the rationality of their beliefs that isn't included in (1) and (2). According to the crippled epistemology argument, if the fanatic's relevant evidence is wholly constituted by (1) and (2), then their beliefs are rational. That may be true. But the claim that the fanatic's beliefs are rational follows only if their relevant evidence is wholly constituted by (1) and (2) – that is, they don't possess any additional relevant evidence. This is the claim that I'll argue against.

First, unlike children born into Westboro, many fanatics aren't raised to be fanatics. They radicalize. For instance, between 2014 and 2016, thousands living in Western liberal democracies fled their countries of residence to join the Islamic State of Iraq and Syria (ISIS). Far from only being exposed to confirming information, these people would have had plenty of information contradicting ISIS doctrine.

More importantly for our purposes, even fanatics raised in their fanaticism often have evidence that goes beyond (1) and (2) and that prima facie tells against the rationality of their fanaticism, namely several forms of higher-order evidence. One form of higher-order evidence that fanatics often possess is disagreement with outsiders.4 Because there are only 70–80 members in the Westboro Baptist Church, including children,
Westboro members know that nearly the entire world disagrees with them. Of course, epistemologists of disagreement argue that disagreement is epistemically significant only against a background of agreement (Vavova 2015). I think it's worth asking how strong this consideration is when the extent of disagreement is this massive. And there's nothing special about Westboro here: fanatics typically adopt fringe beliefs. But even if we grant that disagreement's significance depends on agreement, we can still say fanatics possess evidence of epistemically significant disagreement, since fanatics often encounter disagreement from those who largely agree with them. For instance, there are plenty of Christians who disagree with their practices but who nevertheless accept fundamentally similar worldviews.

Moreover, the exclusionary practices that sustain crippled epistemologies have systematic higher-order import. Often relative moderates exit fanatical groups, by choice or force, when they disagree with the direction in which the group is headed or when they themselves have changed their minds (Hardin 2002, 10). These are people who have shared, and may still share, a large background of agreement but who nevertheless disagree with those remaining in the group. Importantly, fanatical groups don't keep dissent-based exits secret. Enforcing prohibitions against dissent requires informing group members of its consequences. This means that when people exit or are excommunicated due to disagreement, the remaining members will know that people with shared backgrounds disagree with them.

Of course, disagreement's epistemic significance depends not only on background agreement but also on the relative epistemic abilities of disputants. Whereas disagreement with epistemic superiors or peers may demand belief revision, disagreement with epistemic inferiors arguably doesn't. It's open, then, to these theories to claim that outsiders and exiters are, by the lights of Westboro members, their epistemic inferiors and that this is why such disagreement doesn't affect the rationality of their beliefs. To assess the plausibility of this move, we need to know why it might be reasonable for Westboro members to judge these people their inferiors. If it's the mere fact of disagreement, this runs afoul of the independence principle (Christensen 2010), which says, roughly, that evaluating the epistemic credentials of those who disagree with you must be done independently of the disagreement. If it's not just the disagreement that rationalizes this stance, then what is it? It's true that many outsiders and some exiters see the world completely differently from how Westboro members do. But it's also true that many outsiders are in almost complete agreement with them, except when it comes to their more extreme commitments (e.g. the obligation to picket soldiers' funerals). These people trust the same sources, draw many of the same inferences, and think just as seriously about these issues as Westboro members do. What reason do Westboro members have for doubting their credentials if not merely
the disagreement? They might be told that outsiders and exiters are their epistemic inferiors, but that's just one piece of evidence that needs to be considered in conjunction with the rest.

At the very least, if defenders of these theories want to claim that it's rational for fanatics to be unmoved by disagreement with outsiders and exiters, they need to explain why it's rational for fanatics to treat these people as their inferiors. As I've suggested, this may require denying independence. In any case, defending the rationality of fanatics' beliefs must go beyond merely identifying some of the skewed and limited information that they possess. It must also seriously contend with the higher-order evidence that they possess. I think it's implausible to claim that the only information that fanatics possess that bears on the rationality of their beliefs, even if they have grown up with a crippled epistemology, is limited to information that confirms those beliefs. Fanatics typically possess higher-order evidence unaccounted for by CE. It may be possible to concoct a case where fanatics have only the information that CE says they have – though the relation between a crippled epistemology and exclusionary practices might provide grounds for pessimism. But regarding real-world cases, like the ones that CE was constructed to account for, it's implausible to make this claim.

3.2 Echo Chambers

What about echo chambers? EC has to contend with many of the same issues as CE does. To its credit, it does state how fanatics can dismiss disagreement with outsiders: they're not to be trusted. Information about who should be trusted when it comes to assessing evidence is a sort of higher-order evidence, so EC doesn't completely miss the phenomenon. Whether this strategy plausibly carries over to exiters and outsiders who largely agree with them should be investigated. But I want to identify a different form of higher-order evidence overlooked by EC.

The idea behind echo chambers is that asymmetries in trust between insiders and outsiders can rationalize resistance to contrary evidence and reasons for doubt presented by outsiders. But the insider/outsider asymmetry isn't the only trust asymmetry that fanatical groups rely on. These groups often require members to severely distrust themselves. Group members are encouraged to distrust their own faculties insofar as those faculties lead to questioning the group's core commitments.

To illustrate, consider remarks by Megan Phelps-Roper, a former rising star in Westboro who left the church in 2014. During an interview after her departure, she explains how church members are taught to distrust their own thinking (Harris 2015):

If you have a doubt or a question about these standards which are so clearly laid out in scripture, then you're doubting not just scripture, but God himself . . . and trying to substitute your judgment for God's. "And how dare you! How dare you! Who the Hell are you . . . to question?"

She continues by describing the cognitive effects of this rhetoric:

When you’re hearing that as you grow up, you have no confidence in your own thoughts and your own thinking. . . . You have to separate everything you think and feel and everything you see, and see it this way.

The encouragement to distrust herself was relentless. A few weeks before she left, presumably as doubts were bubbling over, her mother attempted to console her by continuing to instill this self-distrust: “You’re just a human being, my dear sweet child.” Megan understood this as a call for humility: “not to question but to trust God, and my elders” (Phelps-Roper 2017).

If EC is going to claim that fanatical beliefs are rational, then it must claim not only that trust disparities between insiders and outsiders are rational but also that such disparities between insiders are rational. Characteristic of how higher-order evidence works (Christensen 2010), insiders are encouraged to bracket their own thinking whenever that thinking conflicts with doctrine or views of purported doctrinal authorities. But it’s not obvious that this demand for self-distrust, often based in recognition of group members’ human fallibility, can be quarantined. After all, the authorities are themselves only humans who must rely on their own imperfect faculties to interpret doctrinal sources. EC recognizes our inescapable reliance on others for information about the world, including about whom to trust. But fanatical groups further rely on trust disparities within their groups to forestall questioning of group commitments. When the grounds for self-distrust are human fallibility, it’s hard to claim that this disparity is warranted. And if authorities deserve as much distrust as lower-tiered individual group members do, it may be difficult to maintain sharp trust disparities between insiders and outsiders.

Again, my aim is merely to identify gaps in the argument from echo chambers to rational fanatical belief. These gaps are based on failures to take into account the higher-order evidence that fanatics in echo chambers possess. To succeed in explaining why fanatical beliefs are rational by appealing to trust disparities, EC must also explain why trust disparities within the group are rational.

CE and EC both appeal to the social embeddedness of individual fanatics in order to explain why fanatical belief is rational. Although studying social dynamics when assessing fanatical belief is a good idea – fanaticism is an essentially social phenomenon – doing so makes it easy to lose track of the higher-order evidence possessed by individuals caught up in these dynamics.


Disagreement with outsiders and exiters and disparities in self-trust among insiders provide higher-order evidence that must be reckoned with by these individuals and accounted for by theories of fanatical belief if those theories aim to explain how fanatical belief is rational.

In the next section, I describe and adopt a theory of fanaticism. In the following section, I’ll show how the nature of fanaticism explains why fanatics respond to higher-order evidence as they do. Again, this will illuminate what must be done to defend the rationality of fanatical belief.

4 Fanaticism

We noted six pretheoretical features associated with fanaticism: unwavering commitment to and certainty about an ideal, localization of this certainty, intolerance of those who oppose the ideal, group orientation, and religious provenance. What, then, is fanaticism?

Adler (2007) claims that fanaticism resides in a lack of commonplace “self-restraints.” The fanatic can reason their way to the conclusion that they should kill non-believers, say, and they often act on this conclusion. In contrast, not only would the rest of us not act on this conclusion, but we also wouldn’t even reach it in the first place. Seeing its conclusion, we would be convinced that this reasoning is distorted, because the conclusion evokes in us a response that amounts to a restraint on our reasoning (Adler 2007, 268). The fanatic lacks such self-restraints.

Adler blames supernatural religious faith for this lack of self-restraint because, he claims, it promotes the denial of such restraints. How? It encourages

(i) Following divine commands when justification for them isn’t understood.
(ii) Making exceptions of religious ideas.
(iii) Limiting sources of critical control to only those who agree with the faith.
(iv) Self-deception.
(v) Shrinking the “belief-action gap.”

Regarding (v), Adler has in mind the gap between the forming of a belief that I should do something and the forming of the intention to do it. Most of us rely on a “delay principle”: as the costs of acting increase, we hesitate to follow the belief’s guidance out of respect for our fallibility. By delaying action, we increase our opportunities to discover whether the action-guiding belief is mistaken (Adler 2007, 276). But, Adler claims, the fanatic closes this gap by going immediately from the belief that they should do something to forming the intention to do it.

Nietzsche, who has written extensively on fanaticism, would agree that fanaticism is related to faith and lack of self-restraint, but he wouldn’t accept Adler’s explanation.


Rather than being the source of fanaticism, Nietzsche thinks faith meets the fanatic’s distinctive needs: regardless of what the fanatic believes or what their actual grounds for belief are, they need to believe that they possess unconditional truth. This need gives rise to narrow-mindedness; the fanatic clings to one point of view at the expense of others. And it makes the fanatic a “willing slave”: they not only submit to the regulation of an external authority but also actually seek it out (Reginster 2003, 75). Nietzsche explains (Nietzsche 1974, 347) that

Faith is always coveted most and needed most urgently where will is lacking; for will, as the affect of command, is the decisive sign of sovereignty and strength. . . . the less one knows how to command, the more urgently one covets someone who commands, who commands severely – a god, prince, class, physician, father confessor, dogma, or party conscience.

Nietzsche continues by linking this to fanaticism:

Fanaticism is the only “strength of will” that even the weak and insecure can be brought to attain, being a sort of hypnotism of the whole system of the senses and the intellect for the benefit of an excessive nourishment . . . of a single point of view.

An absence or weakness of practical and intellectual “will” gives rise to a desire or need to be commanded – that is, to be told what to do and think.5 When you lack the ability or have a substantially diminished ability to decide what to do or think and yet you have a need to possess unconditional truth, what emerges is a desire to outsource your reasoning. But to meet this need, the source to which reasoning is outsourced must be seen as expressing unconditional truth.

These remarks explain some features of fanaticism. Why doesn’t the fanatic exhibit self-restraint? The governance of their thinking has been outsourced to external authorities, whom they see as needing no regulation. Adler claims that the fanatic reasons to unacceptable conclusions. However, for Nietzsche, the fanatic does not restrain themselves, because they, in a sense, don’t participate in this reasoning. The belief-action gap is closed because they are a sort of functional algorithm; inputs lead directly to outputs. Earlier I said Westboro members appear to be on autopilot when they respond to criticism; Nietzsche would say they’re hypnotized.

This also explains the localization of the fanatic’s certainty. Where the fanatic thinks that they possess unconditional truth, they have abdicated their reasoning responsibilities. Elsewhere, however, the fanatic reasons like the rest of us because they admit contingency and uncertainty in these other domains.


All of this is suggestive. Katsafanas (2019) has developed a detailed account of fanaticism that captures many of these insights.6 His theory analyzes fanaticism in terms of seven properties:

(i) Unwavering commitment to an ideal.
(ii) Unwillingness to subject the ideal to rational critique.
(iii) Non-rational provenance of the ideal.7
(iv) Sacred values, in which the agent adopts sacred values.
(v) Fragility of the self, which involves the agent needing to treat a value as sacred to preserve unity of the self.
(vi) Fragility of value, in which the value’s status is taken to be threatened when it isn’t widely accepted.
(vii) Group identity, in which the fanatic identifies themselves with a group defined by shared commitment to a sacred value.

The first two features should be clear: the fanatic’s behavioral devotion to their ideal is absolute, and they refuse to subject their ideal or its basis to serious rational scrutiny. The third feature partly explains why. Often fanatics take their worldview to have religious provenance. But what’s essential is that the fanatic sees the source of their worldview as distinct from sources like human reason and empirical evidence and thinks it needn’t be constrained by these other sources, because it has more credibility or authority.

An account of fanaticism composed of these three features has been inherited from Enlightenment thinkers like Locke, Hume, and Kant. But Katsafanas rightly notes that analyzing fanaticism only in terms of (i)–(iii) doesn’t suffice, since there is no clear link between (i)–(iii) and the intolerance characteristic of fanaticism. Katsafanas contends that fanaticism resides not only in unwavering commitment and certainty toward a non-rationally sanctioned ideal but also in the nature of this ideal, how the fanatic relates to it, and how other people relate to it. This leads to properties (iv)–(vii).

According to the fourth condition, fanatics adopt “sacred values.” These are values that cannot be questioned and don’t admit of trade-offs or violations. According to Katsafanas, for those who adopt sacred values these values are

(a) Inviolable and uncompromisable.
(b) Unquestionable and not to be critiqued or doubted.
(c) Associated with emotions like love, hatred, veneration, contempt, etc.

The fourth condition says that fanatics treat their favored ideals as sacred in the sense of (a)–(c). Why does the fanatic treat their values as sacred? For the Nietzschean reasons already discussed: the fanatic needs, in order to preserve unity of self, to think of themselves as possessing unconditional truth.


On a standard picture, the self is constituted by its commitments. It’s an orientation toward certain principles, ideals, values, or narratives. But for some of us – like fanatics – self-integrity is inconsistent with viewing these ideals as uncertain or contingent. By treating certain values as sacred, the fanatic eliminates uncertainty about some of their commitments, thereby satisfying their existential need to think of themselves as possessing unconditional truth. Hence, the fanatic has a fragile self; only the most rigid commitments can sustain it.

Not only does the fanatic have a fragile self, but they also think that their own values are, in a sense, fragile: the status of their sacred values is threatened by the fact that other people do not share them. An example will help illustrate this idea. Some people oppose same-sex marriage because they think it threatens or undermines the institution of marriage. On this view, allowing same-sex marriage imperils the significance or status of opposite-sex marriage by no longer publicly marking it off as possessing a distinctive form of value (Katsafanas 2019, 14). This opponent of same-sex marriage therefore treats the value of marriage as fragile: the value’s status is threatened when it is not widely accepted.

Plenty of people do not treat their own sacred values as fragile. The Amish and Hasidic communities both maintain certain sacred values without viewing their status as dependent on how they’re treated by the wider society (Katsafanas 2019, 15). The fanatic, on the other hand, wants their values to be widely accepted because for them, the status of those values depends on others sharing them. This partly explains why the fanatic imposes their values on others.

Finally, the fanatic sees their identity as defined by membership in a group that shares their commitment to their sacred values and by the sense that membership in the group is necessary for preserving the status of these values.

What I argue now is that if we take this account of fanaticism for granted, we should expect fanatics to treat higher-order evidence in certain ways.

5 Sacred Values, Fragility, and Higher-Order Evidence

In Section 3, I identified three types of higher-order evidence that fanatics typically possess: disagreement with outsiders, disagreement with exiters, and trust disparities within their group. I claimed that CE and EC didn’t have enough to say about this evidence. Without explaining and evaluating fanatics’ treatment of this evidence, it’s premature to claim that fanatics caught up in crippled epistemologies or echo chambers have, for that reason, rational beliefs. In this section, I’ll argue that if a fanatic has a fragile epistemology, we can expect them to handle this higher-order evidence in the ways that fanatics typically do.


Among the interesting things that Megan Phelps-Roper has revealed about Westboro since she left is that church members were explicitly instructed on how to construe any evidence that pertained to their core beliefs (Harris 2015):

We were taught how to interpret evidence, how to see everything in the world. And to have every objection that might ever arise and have the answer to that objection all ready, having repeated it over and over again.

The epistemically troubling result, as she saw it, was that “there was literally no evidence that could be introduced to us to change our opinions.” She illustrates this with the following example:

If somebody [outside the church] says they love us, that they care about us, then they’re either lying or delusional. And if they say they hate us, then of course they hate us. So everybody hates us!

Westboro members weren’t given only one side of the religious story, as CE predicts, and they weren’t merely taught to trust insiders and distrust outsiders, as EC predicts. Their epistemic instruction was pervasive. From theological objections to professions of love, they were taught how to think about everything. This is a general feature of the fragility of fanaticism: its import isn’t merely psychological or axiological but also thoroughly epistemic.

To illustrate how a fragile epistemology works, let’s work with an example of an idealized fanatic, Frank. I’ll focus on the last four conditions of fanaticism.8 Frank adopts sacred values for the sake of psychic unity. Moreover, he treats these values as fragile: he sees them as threatened when they aren’t widely accepted. And let’s suppose that he knows they aren’t widely accepted. Finally, he partly defines himself by membership in a group that shares these commitments. How will Frank interpret intellectual opposition to these values?

First, because Frank sees these values as sacred, he’ll think that they must not be questioned, doubted, compromised, or violated. Insofar as Frank’s opponent does violate these values, Frank will see her as morally depraved. And since her opposition consists in her doubting these values, Frank will think she’s doing something that must not be done. This might be interpreted as a moral, prudential, or epistemic “must.” If moral, this violation will count as further evidence of her moral depravity. If prudential, this violation will show, by Frank’s lights, that she fails to act in her own self-interest, in which case Frank will likely see her as practically irrational. If the “must” is epistemic, then she will be violating her epistemic obligations. In this case, Frank will think that she has manifested a grave epistemic failure. Thus, the fact that Frank’s opponent violates and questions the values that Frank treats as sacred makes her, by Frank’s lights, deserving of various forms of (severe) criticism.


Second, since Frank treats these values not only as sacred but also as fragile, this has implications for how he’ll interpret widespread intellectual opposition. For Frank, this opposition doesn’t provide grounds for critical self-reflection or questioning of these values: they’re not to be questioned! Instead, because Frank treats these values as fragile, he sees this opposition as a threat to their status.

Finally, given how tightly bound up Frank’s individual and group identity is with these values, he’ll see the disagreement that threatens his values as also threatening his group and himself. Katsafanas (2019, 16) nicely explains this point:

The agent’s psychic integrity is vouchsafed by his commitment to a sacred value, where the value is taken as definitive of a group. The value is seen as compromised by dissent. Thus, the group’s identity, which hinges on its adherence to the value, is seen as compromised by dissent. So, too, the agent’s psychic integrity. . . . The fanatic sees outsiders as opposed to his group. These outsiders threaten not only his value, and not only his group, but his very identity.

Thus, the people who disagree with Frank’s values appear to Frank as deserving serious criticism for questioning something that mustn’t be questioned, and the fact that they’re questioning his values is also seen as a personal threat. It’s no surprise, then, that disagreement with outsiders or even exiters doesn’t move Frank. Given his worldview, their dissent is evidence not of his error but of their own questionable moral and intellectual characters. Moreover, for Frank, there is no such thing as unthreatening dissent regarding these core values. Disagreeing with him is threatening him.

This also explains why it’s important for fanatical group members like Frank to bracket their own thinking and not trust it when it leads to doubts about the group’s values. If they allowed these doubts to surface, they would be doing something that must not be done (because they’d be doubting sacred values): they would be threatening the values (because those values are fragile) and even threatening themselves (because their selves are fragile).

Of course, fanatics do sometimes respond to disagreement. Given our present analysis of fanaticism, this might seem strange. Why would you engage with disagreement if you think it’s a threat to your identity? This is where the autopilot/hypnotism point returns. I think while the fanatic remains in the grips of their fanaticism, they don’t genuinely engage with the disagreement-based reasons for doubt. Evidence of disagreement can be treated as higher-order evidence, evidence about what the evidence actually supports, and evidence of error. Or it can be demoted to a first-order objection to one’s views.


When treated as evidence of error, genuinely responding to it requires taking seriously the possibility of one’s own error. In this case, those of us who aren’t fanatics might take such evidence to provide reason to step back and take a detached view, considering whether the error lies with us. But agents in the grips of fanaticism can’t do this. So, instead, fanatics respond to disagreement by demoting it to a first-order objection. Then they do what Megan was trained to do: appeal straightforwardly to their doctrine to rebut the objection. Because in these instances, the agent isn’t really involved, it makes sense that they don’t step back and take a detached view.

The fanatic has two modes of orientation toward disagreement. When they are truly engaged with the disagreement, it’s a threat to their identity. When they’re not truly engaged – when they’re “hypnotized” – disagreement is simply answerable by appeal to claims derived from their worldview. Either way, because they’re a fanatic, their beliefs are impervious to this form of higher-order evidence.

6 Theoretical and Practical Upshots

CE and EC, I’ve argued, base their assessment of the fanatic’s beliefs on only a proper subset of the evidence available to the fanatic by leaving out of the picture higher-order evidence that fanatics typically possess. The fragile epistemology story is meant to do better in this respect. Not only do fanatics have ordinary evidence and higher-order evidence provided by disagreement, but they also have commitments – commitments that constitute their fanaticism – that tell them how to interpret this higher-order evidence in ways that prevent them from seeing it as providing reason to rethink their beliefs. In theoretical terms, this theory builds in a defeater for any potential higher-order defeater provided by disagreement. EC does a better job of this than CE, but neither accounts for as much of the higher-order evidence that fanatics possess as the fragile epistemology story does, and neither accounts for the distinctive ways that fanatics, qua fanatics, must think of higher-order evidence. A general recommendation, then, for theories attempting to explain the rationality of fanatical belief is to identify the higher-order defeater-defeaters that fanatics possess.

Still, fanatics’ beliefs will be rational only if their treatment of higher-order evidence is rational. It’s one thing to treat disagreement as a mere threat to one’s identity or as an answerable objection, rather than as evidence of error, because you have a fragile self committed to sacred values that you view as fragile. It’s quite another to do this rationally. While I haven’t taken a stand on this issue, I’ve attempted to identify what defending the rationality of fanatics’ beliefs requires.

For instance, we need to know under what conditions it’s rational to consider your values fragile.


There’s a debate in the higher-order evidence literature about whether there can be all-things-considered misleading evidence about what a body of evidence supports. There’s also a debate in the moral epistemology literature about the epistemic status of normative beliefs based in testimony. I think we’ll need to consider both to answer questions about the rationality of believing one’s values are fragile, since this belief will often be based in testimony and it has implications for how higher-order evidence should be interpreted. If normative testimony doesn’t rationalize belief and if there’s no other way to rationally believe your values are fragile, then perhaps fanatics’ beliefs cannot be rational. Or if facts about when certain evidence has defeating force are in some sense objective (Klenk 2019) and if a fragile epistemology necessarily gets some of those facts wrong, then perhaps fanatics’ beliefs cannot be rational. In any event, by drawing our attention to the nature of fanaticism, I hope to have shown that questions about the rationality of fanatics’ beliefs are intertwined with questions in moral epistemology and the epistemology of higher-order evidence.

In conclusion, these questions aren’t of merely theoretical interest. If fanatics are generally rational beings and if we want to prevent the persistence and growth of fanaticism, then we should want to know how to make it irrational for people to believe that their values are fragile. One strategy that experts use to fight terrorism combines radicalization prevention with the de-radicalization of radicals. However, I’m not aware of work dedicated to preventing the encroachment of fragile values. That may be exactly what’s needed to stall the growth of fanaticism.9

Notes
1. Southern Poverty Law Center. n.d. “Westboro Baptist Church.” https://www.splcenter.org/fighting-hate/extremist-files/group/westboro-baptist-church.
2. Much of the Westboro belief system strikes me as deeply confused, downright false, and completely incoherent. But what they’re most known for isn’t wholly illogical, even if it rests on mistakes.
3. Action characteristic of the fanatic doesn’t require fanatical belief, but in this chapter, I’m focusing on those who truly believe.
4. David Christensen is a prominent defender of the higher-order defeating power of disagreement. See inter alia his (2010).
5. Adolf Eichmann, one of the chief organizers of the Holocaust, exemplified this condition. Eichmann lamented the fall of the Nazi Party at the end of WWII because it meant he would no longer receive external directives: “I sensed I would have to live a leaderless and difficult individual life; I would receive no directives from anybody, no orders and commands would any longer be issued to me, no pertinent ordinances would be there to consult” (Arendt 1963, 27).
6. Due to space limitations, I must refer readers to Katsafanas (2019) for a fuller statement and defense of this account.
7. Non-rational doesn’t mean irrational; it doesn’t prejudice the question of the epistemic rationality of relying on these sources. Rather, the term derives from a contrast with reason, as understood by early modern philosophers.
8. My purpose is to illuminate how the fanatic’s relations to sacredness and fragility, in particular, predictably lead to their treatment of higher-order evidence, leaving discussion of the other conditions for another time.


9. I would like to thank Michael Klenk, Thi Nguyen, Olle Risberg, and Folke Tersman for helpful comments on previous drafts. Special thanks to Luis Oliveira and to Gina Schouten for providing speedy, yet excellent feedback on this project. Finally, I’d like to thank the students in my Spring 2019 Conversion and Radicalization seminar for all that I learned from them during our wonderful class discussions on these topics.

References
Adler, Jonathan Eric. 2007. “Faith and Fanaticism.” In Philosophers without Gods: Meditations on Atheism and the Secular Life, edited by Louise M. Antony, 266–85. Oxford: Oxford University Press.
Arendt, Hannah. 1963. Eichmann in Jerusalem: A Report on the Banality of Evil. New York, NY: Viking Press.
Christensen, David. 2010. “Higher-Order Evidence.” Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Hardin, Russell. 2002. “The Crippled Epistemology of Extremism.” In Political Extremism and Rationality, edited by Albert Breton, 3–22. Cambridge: Cambridge University Press.
Harris, Sam. 2015. “Leaving the Church.” The Waking Up Podcast. https://samharris.org/podcasts/leaving-the-church/.
Katsafanas, Paul. 2019. “Fanaticism and Sacred Values.” Philosophers’ Imprint 19: 1–20.
Klenk, Michael. 2019. “Objectivist Conditions for Defeat and Evolutionary Debunking Arguments.” Ratio: 1–14. https://doi.org/10.1111/rati.12230.
Nguyen, C. Thi. 2018. “Echo Chambers and Epistemic Bubbles.” Episteme 12: 1–21. https://doi.org/10.1017/epi.2018.32.
Nietzsche, Friedrich. 1974. The Gay Science. Edited by Walter Kaufmann. New York, NY: Vintage.
Phelps-Roper, Megan. 2017. “I Grew up in the Westboro Baptist Church: Here’s Why I Left.” TED Talk. www.ted.com/talks/megan_phelps_roper_i_grew_up_in_the_westboro_baptist_church_here_s_why_i_left.
Reginster, Bernhard. 2003. “What Is a Free Spirit? Nietzsche on Fanaticism.” Archiv für Geschichte der Philosophie 85 (1): 51–85. https://doi.org/10.1515/agph.2003.003.
Southern Poverty Law Center. n.d. “Westboro Baptist Church.” https://www.splcenter.org/fighting-hate/extremist-files/group/westboro-baptist-church.
Sunstein, Cass R. 2009. Going to Extremes: How Like Minds Unite and Divide. Oxford: Oxford University Press. http://site.ebrary.com/lib/academiccompletetitles/home.action.
Theroux, Louis. 2007. “The Most Hated Family in America.” BBC.
Vavova, Katia. 2015. “Evolutionary Debunking of Moral Realism.” Philosophy Compass 10 (2): 104–16. https://doi.org/10.1111/phc3.12194.

Part IV

Permissible Epistemic Attitudes in Response to Higher-Order Evidence in Moral Epistemology

11 How Rational Level-Splitting Beliefs Can Help You Respond to Moral Disagreement

Margaret Greta Turnbull and Eric Sampson

1 Introduction

The problem of disagreement between epistemic peers, individuals who are roughly each other’s intellectual equals with respect to the proposition under disagreement, has been the subject of much philosophical discussion and contention (Christensen 2007; Elgin 2018; Kelly 2010; Lackey 2010).1 Much of this literature has centered on whether rationality requires us to adjust our doxastic attitudes towards a proposition when we learn that an epistemic peer holds a different attitude towards that proposition.2 Non-conciliatory views of peer disagreement, including Jennifer Lackey’s justificationism and Tom Kelly’s total evidence views, hold that it can sometimes be rational to maintain the original attitude that you held before you learned that you disagree with a peer. Conciliatory views of peer disagreement, including David Christensen’s (2007) and Adam Elga’s (2007) conciliatory views, require individuals to adjust their doxastic attitudes towards a proposition in response to learning of peer disagreement about that proposition.

Relatedly, epistemologists have also recently invested effort in understanding the relationship between one’s higher-order evidence (one’s evidence about one’s evidence) and one’s first-order evidence (one’s evidence that bears directly on the truth of a proposition) (Christensen 2010). Peer disagreement provides us with one type of higher-order evidence, as it shows us that the first-order evidence which we originally took to support our attitude towards some proposition may not support that attitude, since our peer takes it to support some other doxastic attitude.3 Within the broader literature on the relationship between first-order evidence and higher-order evidence, Richard Feldman (2005), Michael Huemer (2011), Sophie Horowitz (2014), and others have argued that beliefs of the form “p, but my evidence does not support p” are irrational. On the face of it, it seems that they’re right; these level-splitting beliefs appear to be irrational since they involve believing p when our higher-order evidence indicates that our first-order evidence fails to support p. This holding onto belief in p might seem dogmatic and, in Feldmanian terms, an epistemic act of disrespecting our (higher-order) evidence rather than respecting it.


Similarly, non-conciliatory views of peer disagreement are sometimes understood as dogmatic since they involve holding onto one’s views even after learning that those views are not shared by those who resemble us in our abilities to reason well (Elgin 2018, 17). In this chapter, we will attempt to address this dogmatism problem for non-conciliationism by showing how non-conciliationists can adopt rational level-splitting beliefs which allow them to exemplify intellectual humility.

Many discussions of moral disagreement assume that non-conciliatory views of disagreement are false and that conciliatory views of disagreement are true (see, e.g., Klenk 2018). We will take the opposite tack. We will assume a non-conciliatory view of disagreement, argue that a peculiar type of level-splitting belief can be rational, and then show that when non-conciliationists adopt these level-splitting beliefs, they can demonstrate intellectual humility. By showing how non-conciliatory views of disagreement and level-splitting beliefs can be combined, we will provide an indirect argument in support of non-conciliationism, demonstrating that non-conciliatory views of disagreement need not be dogmatic.

In the second section, we identify a specific type of moral disagreement between peers. In the third section, we argue that when non-conciliationists find themselves in this type of moral disagreement, they can rationally adopt level-splitting beliefs. In the fourth section, we show how these level-splitting beliefs in response to moral disagreement allow those who hold them to exemplify intellectual humility rather than dogmatism. We conclude in the fifth section.

2 Divergence Moral Disagreements

Consider the following case of moral disagreement:

Veganism: Peyton and Brenna are careful ethical reasoners. They reflect on potential actions and on whether those actions are morally permissible before acting and are concerned to make sure that their actions are deemed permissible by their particular moral viewpoints. Since they have discussed ethical issues at great length over time, Peyton and Brenna view each other as peers about ethical matters, as roughly each other’s intellectual equals when it comes to moral issues. One day, Peyton and Brenna find that they disagree about the ethics of eating nonhuman animals. Brenna believes that Act V, eating nonhuman animals, is morally permissible, and Peyton believes that Act V is not morally permissible.4

But there is a further complication. Brenna and Peyton have been aware of the fact that although they view each other as equally careful, capable, and effective moral reasoners, Brenna is a cultural relativist and Peyton is an objectivist about morality – that is, Peyton thinks that the truths about morality do not constitutively depend on any cultural practices.


As a consequence, Brenna and Peyton discover after extended discussion about V that Brenna takes their shared culture’s practices (S) to bear on the moral permissibility of V and Peyton rejects their culture’s practices as bearing on V. As Brenna describes her moral reasoning to Peyton, Peyton learns that Brenna takes the widespread acceptability of eating meat in their culture as a portion of the reasoning that leads her to view V as permissible. Peyton, meanwhile, holds that eating nonhuman animals is morally impermissible regardless of and in spite of S. While Brenna takes S to bear on the permissibility of V, Peyton denies that S bears on V’s permissibility.

To review, Brenna views V (eating nonhuman animals) as permissible and takes S (their shared culture’s practices) to bear on the permissibility of V, while Peyton believes V is impermissible and denies that S bears on V. Initially, when Brenna learns that she disagrees with Peyton about the permissibility of V, she receives what we’ll call opposition evidence.

Opposition evidence: Evidence that an agent whom I regard as my epistemic peer with respect to p holds an opposing doxastic attitude to my attitude towards p.5

As we’ve noted, epistemologists have argued about what the rational response to opposition evidence is. Some have defended conciliatory views of the rational response to the opposition evidence provided by disagreement. If you hold a conciliatory view of disagreement, you think the rational response to opposition evidence is to adjust your original doxastic attitude towards the proposition under disagreement.6 On the other hand, if you hold a non-conciliatory view of disagreement, you think that in some cases, you can receive opposition evidence and maintain the original attitude that you held towards the disputed proposition.

But in Veganism, Peyton doesn’t merely receive opposition evidence. After discussing their disagreement, she learns from Brenna that Brenna not only disagrees with her about the permissibility of V but also disagrees with her about which considerations make V (im)permissible. We’ll call considerations that individuals take to make some act, A, (im)permissible A-bearing considerations. While Peyton does not view S as bearing on V, Brenna understands S as a V-bearing consideration. Let’s call the additional evidence that Peyton receives from Brenna, when she learns that Brenna takes a different set of considerations to bear on V, moral divergence evidence.

Moral divergence evidence: Evidence that an agent whom I regard as my epistemic peer with respect to moral considerations about some act, A, takes a different set of considerations to be bearing on A.


Note that moral divergence evidence is higher-order evidence. It is evidence about my evidence, because it gives me reason to think that I may have failed to rationally select the considerations that make it more or less likely that A is permissible when forming my attitude concerning A’s permissibility.

Moral divergence evidence has two possible roles. First, it can give me reason to think that the considerations that I took to bear on the permissibility of A do not tell the whole story. Call this the incompleteness role of moral divergence evidence. Moral divergence evidence in its incompleteness role suggests that the set of considerations that I presently take to bear on the permissibility of a specific action may be incomplete in an important way when I learn that my peer takes some consideration to bear on that action’s permissibility that I do not. In Veganism, Peyton receives moral divergence evidence in its incompleteness role from Brenna.

Second, moral divergence evidence can tell me that some consideration that I originally took to bear on the permissibility of an action may not bear on that action if I learn that a peer does not take that consideration to bear on the permissibility of the action under disagreement. Call this the extraneous evidence role of moral divergence evidence. In Veganism, Brenna receives moral divergence evidence in its extraneous evidence role.

For purposes of concision, we will focus on examining the rationality of holding level-splitting beliefs in response to receiving divergence evidence in its incompleteness role, as Peyton does in Veganism.7 In Veganism, we’ve seen that peers disagree both about the permissibility of an action and about the considerations that bear on the permissibility of that action. In the following section, we’ll consider how those who hold non-conciliatory views of disagreement might rationally hold level-splitting beliefs in response to situations where they receive both opposition evidence and moral divergence evidence in its incompleteness role.

3 In Defense of Level-Splitting Beliefs

Before defending level-splitting beliefs in the contexts of disagreements like Veganism, we should consider what kind of level-splitting belief Peyton might adopt in response to discovering her double disagreement with Brenna about the permissibility of Act V and about what considerations bear on V. In particular, we should consider what kind of level-splitting belief Peyton might adopt in response to Veganism if she is a non-conciliationist about peer disagreement, in keeping with the focus of this chapter. In discussing Peyton’s post-Veganism beliefs, we’ll use proposition v: eating nonhuman animals is permissible. We will also assume that Peyton is in the kind of context in Veganism in which her non-conciliationism indicates that it is rational for her to maintain her belief that ~v even after learning that her peer, Brenna, believes v.


While previous discussions of level-splitting beliefs have focused on beliefs of the form “p, but my evidence does not support p,” we can translate this belief into a form relevant to moral disagreement: “Act A is (im)permissible, but the considerations I take to be A-bearing do not support A’s (im)permissibility.” In Veganism, however, let’s consider how Peyton’s non-conciliationism could affect her understanding of the considerations that could possibly bear on Act V’s permissibility. Since Peyton’s non-conciliationism can allow her to maintain that ~v in response to learning that Brenna believes v, it seems plausible that her non-conciliationism will also apply to her understanding of which considerations bear on Act V’s permissibility. Peyton does not take S, the shared cultural practices, as bearing on V, whereas Brenna takes S to be bearing on V. Peyton’s non-conciliationism should apply equally to this disagreement about whether S is bearing on V as it does to the disagreement about v. In other words, thanks to her non-conciliationism, Peyton may rationally maintain that the set of considerations that bear on V does not include S. So Peyton may, in the aftermath of Veganism, rationally believe “~v, and the considerations I take to bear on V support ~v,” by the lights of non-conciliationism.8

But how might Peyton then hold a level-splitting belief if she can rationally believe that ~v and that the considerations that she takes to bear on V support her belief that ~v? In the spirit of intellectual humility and in recognition of her peerhood with Brenna, Peyton may reflect and realize that were she to take S to bear on V as Brenna does, this new set of V-bearing considerations (S + Peyton’s original set of V-bearing considerations) would support a different attitude towards v. If an agent takes certain shared cultural practices to be relevant to the permissibility of eating nonhuman animals, then it seems likely that they will conclude that it is permissible to eat nonhuman animals, at least on the assumption that the agent is a member of one of the many contemporary cultures in which eating meat is widely accepted. In many contemporary cultures, human beings have eaten nonhuman animals freely and have even viewed meat from nonhuman animals as culturally significant (e.g. barbecue, in its various forms). Peyton and Brenna’s shared culture’s general approval of eating nonhuman animals, if it bears on V, implies that Act V is permissible. So if Peyton at some point changes her mind and decides to take S to bear on V, she could conceivably come to believe v, eating nonhuman animals is permissible, rather than her original view of ~v.

This possibility shows us how Peyton might adopt a distinct and previously overlooked form of level-splitting belief, which we’ll call a moral divergence belief.

Moral divergence belief: A belief of the form “A is (im)permissible, but the set of A-bearing considerations as understood by my peer does not support the view that A is (im)permissible.”

As we have just explicated, Peyton could easily find herself holding a moral divergence belief. She could hold the belief “~v, but the set of V-bearing considerations as understood by my peer Brenna does not support ~v.” And if Peyton is in the particular epistemic context in which it’s rational for her as a non-conciliationist to go on maintaining that ~v and that S does not bear on V, as we’ve assumed, then it seems that at first blush, it could be rational and even an act of admirable epistemic humility for her to hold such a moral divergence belief. While rationality may not require that Peyton adopt a level-splitting belief, we will argue for the rest of the chapter that it is rationally open for Peyton to adopt a moral divergence belief in response to Veganism and, further, that this response to Veganism should be attractive to her because it exemplifies intellectual humility.

To show that it is rationally permissible for Peyton to hold a moral divergence belief in response to Veganism, we’ll now respond to two objections to level-splitting beliefs from Sophie Horowitz (2014) and will show that moral divergence beliefs can be defended from these objections in a way that previously discussed sorts of level-splitting beliefs cannot.

Call Horowitz’s first objection the lucky belief objection to beliefs of the form “p, but my evidence does not support p.” According to Horowitz, if I hold a level-splitting belief, then I will “naturally wonder how [I] came to have this particular true belief” (Horowitz 2014, 725). She notes that “usually, we come to have true beliefs by correctly evaluating our evidence” but that when we hold level-splitting beliefs, we believe that our “evidence doesn’t support P. So perhaps [we] should just think that [we] got lucky” (Horowitz 2014, 725). If my evidence does not support p and yet I still believe p, then Horowitz argues that I’m committed to thinking that I arrived at my true belief in p via luck or chance since I can’t maintain that I got it by assessing the evidence correctly. This consequence also seems to suggest that level-splitting beliefs are often unjustified, since many epistemologists are hesitant to award justification to beliefs that are true only by virtue of luck or chance.

But the lucky belief objection does not show that true moral divergence beliefs are necessarily arrived at via luck or chance. When Peyton holds a moral divergence belief of the kind we’ve discussed, she believes that the set of V-bearing considerations as understood by her peer does not support ~v. We have not argued that it would be rational for her to believe that the considerations that she does take to bear on Act V’s (im)permissibility do not support ~v while she continues to believe ~v. Instead, in holding a moral divergence belief of the kind we’ve discussed, she may believe that the set of V-bearing considerations according to Brenna does not support ~v in the aftermath of receiving divergence evidence that tells her that her epistemic peer has assessed the possible V-bearing considerations available to her differently than she has.


As we have just explicated, Peyton could easily find herself holding a moral divergence belief. She could hold the belief “~v, but the set of V-bearing considerations as understood by my peer Brenna does not support ~v.” And if Peyton is in the particular epistemic context in which it’s rational for her as a non-conciliationist to go on maintaining that ~v and that S does not bear on V, as we’ve assumed, then it seems that at first blush, it could be rational and even an act of admirable epistemic humility for her to hold such a moral divergence belief. While rationality may not require that Peyton adopt a level-splitting belief, we will argue for the rest of the chapter that it is rationally open for Peyton to adopt a moral divergence belief in response to Veganism and, further, that this response to Veganism should be attractive to her because it exemplifies intellectual humility. To show that it is rationally permissible for Peyton to hold a moral divergence belief in response to Veganism, we’ll now respond to two objections to level-splitting beliefs from Sophie Horowitz (2014) and will show that moral divergence beliefs can be defended from these objections in a way that previously discussed sorts of level-splitting beliefs cannot. Call Horowitz’s first objection the lucky belief objection to beliefs of the form “p, but my evidence does not support p.” According to Horowitz, if I hold a level-splitting belief, then I will “naturally wonder how [I] came to have this particular true belief” (Horowitz 2014, 725). She notes that “usually, we come to have true beliefs by correctly evaluating our evidence” but that when we hold level-splitting beliefs, we believe that our “evidence doesn’t support P. So perhaps [we] should just think that [we] got lucky” (Horowitz 2014, 725). If my evidence does not support p and yet I still believe p, then Horowitz argues that I’m committed to thinking that I arrived at my true belief in p via luck or chance since I can’t maintain that I got it by assessing the evidence correctly. This consequence also seems to suggest that level-splitting beliefs are often unjustified, since many epistemologists are hesitant to award justification to beliefs that are true only by virtue of luck or chance. But the lucky belief objection does not show that true moral divergence beliefs are necessarily arrived at via luck or chance. When Peyton holds a moral divergence belief of the kind we’ve discussed, she believes that the set of V-bearing considerations as understood by her peer does not support ~v. We have not argued that it would be rational for her to believe that the considerations that she does take to bear on Act V’s (im)permissibility do not support ~v while she continues to believe ~v. Instead, in holding a moral divergence belief of the kind we’ve discussed, she may believe that the set of V-bearing considerations according to Brenna does not support ~v in the aftermath of receiving divergence evidence that tells her that her epistemic peer has assessed the possible V-bearing considerations available to her differently than she


has. But this divergence evidence does not give Peyton any special reason to think that considerations that she has taken as bearing on V fail to indicate that ~v. Those who rationally hold moral divergence beliefs on our account are in the contexts in which their non-conciliationist views of disagreement indicate that they have good reason to believe the following: (1) They have correctly selected the considerations that bear on the (im)permissibility of the act in question. (2) This set of considerations supports the view of that act’s (im)permissibility that they originally held. Peyton can maintain that she has arrived at a true belief that ~v by correctly evaluating the considerations that she takes to bear on V’s (im) permissibility while holding the moral divergence belief “~v, but the set of V-bearing considerations as understood by my peer does not support ~v.” So the lucky belief objection fails to show that true moral divergence beliefs must be arrived at via luck or chance. We’ll call Horowitz’s second objection the better belief objection. First, Horowitz argues that level splitting allows level-splitting believers to conclude that their evidence, which they believe does not support p when they believe “p, but my evidence does not support p,” is misleading. Horowitz claims that epistemically akratic believers can reason as follows: “P is true. But all of my evidence [E] relevant to P does not support it. It supports low confidence in a true proposition, P, and therefore high confidence in a false proposition, ~P. So E is misleading” (Horowitz 2014, 726). Horowitz goes on to admit that while “it can even be rational, in some cases, to conclude that your total evidence is misleading,” epistemically akratic believers should not simply conclude that their evidence is misleading when they “can avoid being misled” (Horowitz 2014, 727). To avoid being misled, they “can point to a particular belief of [theirs] that is, [they think], unsupported by [their] total evidence” and then adjust that belief (Horowitz 2014, 727). When I believe “p, but my evidence does not support p,” Horowitz argues that I should not simply conclude that my evidence is misleading and believe akratically. Instead, I should follow where my evidence leads and adopt a better belief by either suspending judgment in p or believing ~p. Applied to our argument here, the better belief objection as we understand it results in the consequence that if Peyton holds a moral divergence belief and recognizes that the set of V-bearing considerations as understood by her peer does not support ~v, then she should avoid level splitting entirely and suspend judgement in v as well or even believe v if that’s what’s indicated by the set of V-bearing considerations as understood by Brenna. The better belief is to suspend judgement in v or believe v


when the set of V-bearing considerations as understood by her peer does not other support ~v. In other words, she is required to abstain from holding a moral divergence belief. In keeping with the focus of this project, we’ll provide reasons from a non-conciliatory view of disagreement for refraining from following the dictates of the set of V-bearing considerations as understood by one’s peer or suspending judgment in v when it’s unclear whose set of V-bearing considerations we should take as bearing on V. Recall that the non-conciliationist who rationally holds a moral divergence belief in Veganism possesses the right sort of justification for maintaining their original doxastic attitude towards v, even in the face of opposition evidence. So by the lights of their view of disagreement, the non-conciliationist holds the view of v that they ought to hold. To ask Peyton to suspend judgement in v in Veganism is to ask her to suspend judgement in a belief for which she believes she has justification and which she takes to be supported by the considerations that she understands as bearing on V. Even if the option to follow the dictates of the set of V-bearing considerations as understood by Brenna is perhaps rationally open to Peyton if she wishes to avoid level splitting, we will need an additional argument to show the non-conciliationist that she is rationally required to do so. Again, the better belief objection falls short when applied to the divergence beliefs of concern to the present project. But consider a pertinent objection, inspired by Horowitz’s objections, to the preceding argument. It seems that in some cases of disagreement that involve divergence evidence in its incompleteness role, the agent receiving the divergence evidence may be dogmatic if they don’t take the considerations that their peer takes as bearing on V as bearing on V themselves. Perhaps now that Peyton is aware that Brenna takes S to bear on V, she should take S as part of the total considerations that she takes to be bearing on V. To put the point more clearly, imagine a different version of Veganism in which Brenna is an expert on alimentary ethics and Peyton is not. If Peyton holds a moral divergence belief in this version of Veganism, it seems that she is likely believing irrationally. Brenna is an expert on ethical issues pertaining to food, so Peyton should likely privilege Brenna’s assessment of which considerations bear on V over her own and should potentially take S as bearing on V since Brenna, the expert, does. To fail to do so would be apparently irrational of Peyton, not unlike a citizen who refuses to update their beliefs in response to the information provided to them about climate change from scientific experts. When we reduce Brenna’s epistemic standing back to peerhood with Peyton, we might have a similar intuition. Perhaps since Peyton considers Brenna to be roughly her epistemic equal, her peer with respect to v, she should take S as bearing on V in order to be undogmatic in the face of their disagreement. This objection is worth listening to. Importantly,
there may be cases in which Peyton, within her non-conciliationism, is led away from a moral divergence belief and towards taking S, the relevant shared cultural practices, to bear on v: eating nonhuman animals is morally permissible. We’ll use Jennifer Lackey’s justificationist non-conciliatory view to provide some suggestions about when non-conciliationists who receive divergence evidence in its incompleteness role should hold a moral divergence belief. According to Lackey’s view, I can permissibly maintain the original doxastic attitude that I held towards p when we disagree about p if my “belief that p enjoys a very high degree of justified confidence” and if I have “a relevant symmetry breaker” that allows me to privilege my doxastic attitude over yours (Lackey 2010, 319). Lackey is most interested in the “relevant symmetry breakers” that she terms personal information: “information about myself that I lack with respect to you” (Lackey 2010, 309–10). According to Lackey, personal information is information that one has about the normal functioning of one’s own cognitive faculties. I may, for instance, know about myself that I am not currently suffering from depression, or not experiencing side effects from prescribed medication, . . . whereas I may not know that all of this is true of you. (Lackey 2010, 310) A high degree of justified confidence in my belief that p plus my access to personal information about my reasoning about p makes it rational for me to maintain my belief that p even when I learn that you, my peer, believe ~p. On the other hand, according to Lackey’s view, if my “belief that p enjoys a relatively low degree of justified confidence,” I am “rationally required to substantially revise the degree to which [I hold] the belief that p” (Lackey 2010, 319). We’ll use the proposition b, S bears on V, in our discussions of possible responses to Veganism. If we are non-conciliatory justificationists like Lackey, we will now have two propositions in Veganism whose degrees of justified confidence we must assess in determining how we will respond to Veganism. Since we are assuming that those who hold divergence beliefs are convinced non-conciliationists in the right contexts to hold onto their original beliefs, we will assume that Peyton has access to personal information about her cognitive processes that she lacks with respect to Brenna. Peyton, we saw, believes ~v and ~b in Veganism. But her degree of justified confidence in one of these propositions may be higher or lower than her degree of justified confidence in the other. Further, she may have a high degree of justified confidence in one that under Lackey’s view could permit her to maintain her original doxastic attitude towards that proposition and a lower degree of justified confidence in the other that
would require her to adjust her credence in that proposition, presumably in the direction of her peer’s credence. Noticing this possibility can help us to understand when Peyton can rationally hold a divergence belief consistent with non-conciliationism and when she cannot. First, consider the possibility that Peyton has a high degree of justified confidence in ~v but a low degree of justified confidence in ~b. If this is the case, she will not be able to rationally hold a moral divergence belief, even apart from worries about level-splitting beliefs. Her justificationist non-conciliationism will require her to adjust her credence in ~b, similar to a conciliatory response to disagreement about b, potentially even requiring her to take S as bearing on V. If her adjustment in her credence in b causes her to take S as bearing on V, then her degree of justified confidence in her belief that ~v could conceivably decrease, especially if she also believes that S does not support ~v, as we’ve suggested she easily could. So being a consistent justificationist in response to Veganism will likely require her to refrain from maintaining that ~v if she holds merely a low degree of justified confidence in ~b. Next, consider the possibility that Peyton has a low degree of justified confidence in ~v but a high degree of justified confidence in ~b. If she is a faithful justificationist, she will adjust her credence in ~v and will by no means find herself continuing to believe ~v. So her high degree of justified confidence in ~b, for the purposes of moral divergence beliefs, will be superfluous. She cannot rationally hold a moral divergence belief in this context, again merely given the strictures of her non-conciliationist justificationism. Finally, consider the possibility that Peyton has a high degree of justified confidence in ~v and a high degree of justified confidence in ~b. In this case, Peyton can rationally hold a moral divergence belief according to the defense we’ve given here, coupled with Lackey’s justificationist non-conciliatory view. Lackey’s justificationist view as we’ve interpreted it licenses her to maintain that her original selection of evidence was not incomplete by allowing her to maintain her high degree of justified confidence that ~b and will also allow her to maintain her high degree of justified confidence in ~v. Maintaining these high degrees of confidence in response to Veganism, however, will not prevent her from recognizing that the set of V-bearing considerations as understood by Brenna does not support ~v. By adopting a moral divergence belief, Peyton can acknowledge that if she were to gain future evidence that reduced her high degree of justified confidence in ~b, the new set of considerations that she took to bear on V, now including S, may not support ~v. So Peyton can rationally believe “~v, but the set of V-bearing considerations as understood by my peer does not support ~v.” While this explication may appear to be just a mere restatement of the non-conciliationist requirements for holding rational beliefs, we think it can help us to better see how we can know when we can rationally

Rational Level-Splitting Beliefs

249

hold moral divergence beliefs, in light of the defense of moral divergence beliefs given in this section. To avoid the lucky belief objection, we need agents who are justified in believing that the considerations that they take to bear on V support ~v. Having a high justified degree of confidence that ~v plausibly indicates that Peyton is justified in believing that the set of considerations she takes to bear on V supports ~v. To avoid the better belief objection, we need agents who are justified in believing that the set of considerations they take to bear on V is not incomplete. Having a high degree of justified confidence that ~b similarly seems to indicate that Peyton is justified in maintaining that the set of considerations she takes to bear on V is not incomplete, contra Brenna.
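The three possibilities can be summarized schematically. The high/low labels below stand in for Lackey's qualitative thresholds; no particular numerical credences are assumed:

\[
\begin{array}{lll}
\text{confidence in}\ \neg v & \text{confidence in}\ \neg b & \text{justificationist verdict}\\
\hline
\text{high} & \text{low} & \text{revise credence in}\ \neg b;\ \text{no rational divergence belief}\\
\text{low} & \text{high} & \text{revise credence in}\ \neg v;\ \text{no rational divergence belief}\\
\text{high} & \text{high} & \text{divergence belief rationally available}
\end{array}
\]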

4 Intellectual Humility and Divergence Beliefs

If what we've argued is correct, then if you hold a non-conciliatory view of disagreement, a new response to disagreement is now available to you. You can hold a moral divergence belief that allows you to stick to your non-conciliatory guns while admitting that if you understood the relevant A-bearing considerations the way your peer does, you would be led to a different belief. We've argued that it's rationally open for non-conciliationist agents like Peyton to adopt moral divergence beliefs by defending them against some of Horowitz's criticisms of level-splitting beliefs. We will now show why adopting a moral divergence belief should be attractive to non-conciliationists. More precisely, we will argue that moral divergence beliefs allow non-conciliationists to dispel some of the dogmatic appearance of their view and to exemplify intellectual humility instead. Moral divergence beliefs provide non-conciliationists with a unique way to exemplify intellectual humility in the face of peer disagreement without giving up their beliefs in the disputed propositions. In this section, we'll outline a few thoughts about intellectual humility and then show how moral divergence beliefs exemplify intellectual humility.

We begin by noting the obvious fact that intellectual humility is a virtue. Thus, it lies, as many virtues do, in a mean between two vices. In this case, the vices on either side of intellectual humility are intellectual arrogance and intellectual servility.9 The intellectually arrogant person characteristically overrates their intellectual abilities relative to others and is insufficiently sensitive to their own intellectual limitations.10 For example, as an undergraduate, one of the authors of this chapter wrote a paper arguing that all philosophical disputes are the product of imprecision in language. If philosophers would just get clear about the meanings of their terms, I thought, all philosophical problems would dissolve. I was able to see this fundamental problem plaguing philosophy. The professional philosophers had missed it. But fear not: I was going to set things straight in my five-page paper. Upon reflection, this strikes us as a paradigm case of intellectual arrogance. I (vastly) overrated my own intellectual abilities and was insufficiently sensitive to my own intellectual limitations.11 That's one extreme.

The intellectually servile, by contrast, characteristically underrate their intellectual abilities and are often so sensitive to their intellectual limitations that they defer too quickly, or too much, to their intellectual inferiors and peers. Take, for example, the straight-A philosophy student who, despite constant praise from their professors for their excellence in philosophy, is still reluctant to raise their hand in class for fear that their ideas are not up to snuff for an undergraduate philosophy discussion. This student thinks that they have nothing interesting to offer despite plenty of evidence to the contrary. Now surely this student is correct that they have much to learn about philosophy (don't we all?) and that nothing they say will decisively settle the philosophical matter under consideration. But they are also mistaken, and are surely underrating their abilities relative to their peers, if they think that they have nothing worthwhile to contribute to a casual philosophy discussion between fellow students. Even if we sympathize with this student and would prefer to be them rather than their intellectually arrogant counterpart described earlier, they are not intellectually humble. They're intellectually servile. This is the other extreme.

We want to hit the mean between these two. One way to do this in the context of moral disagreement, a way that doesn't involve giving up one's belief in the controversial proposition, is to conciliate to some degree at the level of one's credence without conciliating at the level of one's belief. For example, when Peyton disagrees with Brenna about both v and b, she may exemplify epistemic humility by decreasing, to some degree, her credence in ~v and ~b (as Lackey's and Kelly's respective non-conciliatory views allow) while retaining her all-or-nothing belief in these propositions. On the dualist view about the relation between beliefs and credences, this possibility exists since credences are those more fine-grained doxastic attitudes that we have that can change even as our all-or-nothing beliefs may not.12 In decreasing her credences in the disputed propositions, Peyton can show respect for Brenna's intellectual excellence (as intellectual humility plausibly requires) while maintaining her controversial moral belief. To fail to change her doxastic states at all would plausibly constitute intellectual arrogance on Peyton's part, but to fully conciliate at the level of both belief and credence would plausibly constitute intellectual servility. Peyton regards Brenna as an intellectual equal and therefore ought to take Brenna's views into some consideration, but Peyton must also respect her own opinion. After all, Peyton is no fool. She's smart, careful, and otherwise intellectually virtuous too. Thus, Peyton should not easily give up on her own views, which she has formed virtuously (even if not infallibly), as we've assumed.

But some still worry that when an agent retains belief in the face of disagreement with a peer, they are thereby committed to thinking that they are, or their view is, better, in some sense, than their dissenting interlocutor or their view. But this is also mistaken. Most non-conciliationists about disagreement think that it's possible that both parties to a peer disagreement are fully rational – that neither has made a mistake in assessing the evidence for the disputed proposition. This is because, for all we've said, permissivism may be true (see, e.g., Schoenfield 2014). It may be that, given a proposition, two agents, and a body of evidence, there is more than one permissible doxastic state concerning that proposition for the agents. Thus, it may be that two peers have diverging beliefs without either being guilty of irrationality. Given this possibility, an agent could disagree with a peer and hold onto their belief while also thinking that their peer is rational. They need not think that their own view is uniquely rational. (However, obviously, they must believe that their own view is uniquely true. That just follows from the nature of belief. Believing that p is believing that p is true and ~p is false.) Thus, they need not think that they are a better inquirer than their peer or that their view is rationally better than their peer's. So they may, without arrogance or servility, retain their belief in the face of controversy. This is especially true on the level-splitting view that we've outlined in this chapter, since it is part of our level-splitting story that the agent retaining their belief in the face of controversy recognizes that the considerations as their interlocutor sees them support the view that their interlocutor actually holds.

Finally, let us remember that virtues constrain one another. To see this, consider the moral virtues. It's not benevolent to cut up one innocent person to distribute their organs to three sick people. It's unjust. Justice constrains benevolence. It's not humble for a soldier defending their homeland against a modest force of unjust invaders to drop their weapons and surrender when their side is equally strong or stronger. It's cowardly. Courage constrains humility. It's not compassionate for a doctor to tell a terminally ill patient they're in perfect health to spare them from the unpleasant news. It's dishonest. Honesty constrains compassion.

It is the same with the intellectual virtues. It's not intellectually humble, we suggest, to become agnostic about value, meaning, justice, and the great questions of the moral life simply because there are smart folks out there who disagree.13 It's (at least potentially) intellectually cowardly. Some moral propositions are worth taking an intellectual risk for. They're worth the risk of believing falsely, or being duped (as William James famously put it)14 or being mistaken. Intellectual courage thus permits (or even requires) holding some views about morality in the face of disagreement, which is admittedly intellectually risky. This does not entail, of course, that anything goes – that one can just believe as one pleases. There is such a thing as being intellectually rash too – that is, taking intellectual risks when the risk of being wrong is too great. For example, being exceedingly confident that satisficing-hedonistic-rule utilitarianism (a highly specific form of utilitarianism) is correct may well be irrational in the face of so much disagreement about it from excellent philosophers. But plausibly it is not irrational to think that some version of consequentialism or some version of nonconsequentialism is correct. In any case, our suggestion that intellectual courage permits even humble belief in the face of controversy does not entail that one can believe whatever one pleases and call it intellectual courage. Some moral beliefs in the face of controversy may well be intellectually rash. But not all of them are.

With these thoughts about intellectual humility in mind, let's consider how moral divergence beliefs can help non-conciliationists to exemplify intellectual humility. First, moral divergence beliefs allow non-conciliationists like Peyton to occupy the mean between intellectual arrogance and intellectual servility. In holding the moral divergence belief "~v, but the set of V-bearing considerations as understood by my peer does not support ~v," Peyton avoids the intellectual arrogance and dogmatism often attributed to non-conciliationist views. Peyton is not merely believing ~v, as most non-conciliatory views of disagreement would suggest she do. Including "but the set of V-bearing considerations as understood by my peer does not support ~v," the level-splitting portion of her belief, allows her to explicitly call attention to her recognition of some broad overall uncertainty about which considerations bear on V. This recognition of overall uncertainty about the considerations that bear on V (and about the beliefs that stem from these considerations) seems to run counter to charges of dogmatism. And maintaining her belief that ~v in response to learning what Brenna believes allows her to avoid total intellectual servility. It seems that this moral divergence belief thus allows Peyton to occupy the narrow territory of humility, between servility and arrogance.

Further, this moral divergence belief is consistent with both Peyton and Brenna holding rational doxastic attitudes towards v and rational assessments of which considerations bear on V. In holding the belief "~v, but the set of V-bearing considerations as understood by my peer does not support ~v," Peyton is not in any way implying that Brenna's beliefs and assessment of which considerations bear on V are less than fully rational or that Peyton's belief is more or less rational than Brenna's.

Finally, we have not argued that it is always rational or in keeping with intellectual humility for non-conciliationists to hold moral divergence beliefs. Sometimes we will find that intellectual courage and humility are overruled by concerns about intellectual rashness, and it will no longer be intellectually virtuous for us to maintain our assessment of the moral (im)permissibility of an act and the considerations that determine that (im)permissibility. But a similar provision is already present, as we have noted in the previous section, within non-conciliatory views. Non-conciliatory views hold only that it is sometimes rational for individuals to hold onto belief in response to peer disagreement. Similarly, we argue that it is sometimes rational and intellectually virtuous for non-conciliationists to hold moral divergence beliefs in response to peer disagreements like Veganism.

5 Conclusion

Moral disagreement among both folk and moral philosophers is widespread and entrenched, and this isn't going to change any time soon. It sometimes appears that refusing to conciliate in response to this widespread disagreement is dogmatic. We think that this view is mistaken and have tried to show why. When agents disagree both about a moral question and the considerations that bear on that question (as often happens), they can rationally hold a kind of level-splitting belief in a way that permits non-conciliationists to humbly retain their belief in the controversial moral proposition. Indeed, if what we've argued is correct, retaining one's moral beliefs in the face of disagreement may, far from being intellectually vicious, exemplify intellectual courage, an oft-overlooked virtue of the mind needed in our current epistemic climate, where epistemic dangers – in the form of disagreement from excellent philosophers – lurk around every corner.

Notes

1. Thanks to Catherine Elgin, Branden Fitelson, Charity Anderson, Richard Atkins, the audience at the 2018 Significance of Higher-Order Evidence Conference in Cologne, Brian Barnett, and Michael Klenk for helpful comments and suggestions. Earlier versions of portions of this chapter appeared in Turnbull (2019, ch. 4).
2. We will use attitude or doxastic attitude to refer to the range of full or coarse-grained doxastic attitudes such as belief, disbelief, and suspension of judgement and to degreed attitudes, including credences.
3. This view that learning of peer disagreement provides one with higher-order evidence is widely accepted in related discussions. But see Risberg & Tersman (this volume) for an argument that disagreement provides one not with higher-order evidence but with an undercutting defeater.
4. We will use the capitalized V to refer to the act of eating nonhuman animals and the lowercase v later to refer to proposition v: eating nonhuman animals is permissible.
5. Although Elga's (2007, 493–4) account of peerhood will dismiss these individuals as epistemic peers, we are operating under a looser, non-idealized understanding of peerhood, on which peer disagreement is something that individuals encounter relatively frequently. As we understand the term, peers may not share the same evidence and identical reasoning powers, but it is nevertheless rational for them to view each other as roughly intellectual equals. Peers, on our account, share at least similar bodies of evidence and similar reasoning powers. This might seem to give us "non-conciliationism on the cheap," since by removing the requirement that individuals share all of the same evidence, we are allowing the possibility that individuals may disagree in part because they hold different bodies of evidence. But we are assuming, not arguing for, non-conciliationism. If it turns out that most philosophers are non-conciliationists in non-idealized contexts in which agents do not hold identical sets of evidence, so much the better for the broader applicability of our argument that level-splitting beliefs can be useful to non-conciliationists. Thanks to Brian Barnett for helpful comments on this point.
6. Some philosophers, so-called moral testimony pessimists, have argued that there is something distinctively problematic, either morally or epistemically, about deferring to the testimony of others on matters of morality. If this is correct, then perhaps one is not rationally required to revise one's doxastic attitudes in the face of moral disagreement – that is, conflicting moral testimony. We assume here, however, that pessimism is not correct. For a further exploration of this issue, see Lee, Sinclair, and Robson (this volume).
7. In addition, the possibility of rational level-splitting beliefs in response to receiving moral divergence evidence in its extraneous evidence role is less obvious, for various reasons that we don't have space here to detail.
8. Just when it will be rational for her to do so will depend on the particular non-conciliationist view she holds. For example, according to Lackey's (2010) justificationist view, she must have a high degree of justified confidence that ~v and that S does not bear on V as well as certain symmetry breakers that disrupt her perception of equality between her epistemic position with respect to v and Brenna's epistemic position with respect to v.
9. For an excellent overview of the literature on intellectual humility, see Whitcomb et al. (2017).
10. We don't intend for this to be a definition, or an analysis, of intellectual humility. It's meant only as a rough characterization to help us get a grip on something that we hope we recognize pretheoretically.
11. It was Eric.
12. For more on belief-credence dualism and its applications in epistemology, see Jackson (2019).
13. And let's be clear: there are plenty of smart folks out there who disagree. There are excellent philosophers on almost all sides of almost all morally important questions. If you're thinking, "Yeah, but no one thinks slavery or wanton torture is permissible," that's no doubt true. But there are plenty who think that neither is wrong. They're called error theorists. And though they are not well represented in metaethics, they are much better represented among philosophers more generally. This is for the same reason that there are few atheists in philosophy of religion: if you think the whole enterprise is bunk, you're less likely to go into the field.
14. See James (1897 [1979]).

References

Christensen, David. 2007. "Epistemology of Disagreement: The Good News." The Philosophical Review 116 (2): 187–217.
Christensen, David. 2010. "Higher-Order Evidence." Philosophy and Phenomenological Research 81 (1): 185–215. https://doi.org/10.1111/j.1933-1592.2010.00366.x.
Elga, Adam. 2007. "Reflection and Disagreement." Noûs 41 (3): 478–502. https://doi.org/10.1111/j.1468-0068.2007.00656.x.
Elgin, Catherine Z. 2018. "Reasonable Disagreement." In Voicing Dissent: The Ethics and Epistemology of Making Disagreement Public, edited by Casey R. Johnson, 10–21. New York, NY: Routledge.
Feldman, Richard. 2005. "Respecting the Evidence." Philosophical Perspectives 19 (1): 95–119. https://doi.org/10.1111/j.1520-8583.2005.00055.x.
Horowitz, Sophie. 2014. "Epistemic Akrasia." Noûs 48 (4): 718–44. https://doi.org/10.1111/nous.12026.
Huemer, Michael. 2011. "The Puzzle of Metacoherence." Philosophy and Phenomenological Research 82 (1): 1–21. https://doi.org/10.1111/j.1933-1592.2010.00388.x.
Jackson, Elizabeth Grace. 2019. "Belief and Credence: Why the Attitude-Type Matters." Philosophical Studies 176 (9): 2477–96. https://doi.org/10.1007/s11098-018-1136-1.
James, William. 1897 [1979]. The Will to Believe and Other Essays in Popular Philosophy. Edited by Frederick Burkhardt, Fredson Bowers, and Ignas K. Skrupskelis. Cambridge, MA: Harvard University Press.
Kelly, Thomas. 2010. "Peer Disagreement and Higher-Order Evidence." In Disagreement, edited by Richard Feldman and Ted A. Warfield, 111–74. Oxford: Oxford University Press.
Klenk, Michael. 2018. "Evolution and Moral Disagreement." Journal of Ethics and Social Philosophy 14 (2): 112–42. https://doi.org/10.26556/jesp.v14i2.476.
Lackey, Jennifer. 2010. "A Justificationist View of Disagreement's Epistemic Significance." In Social Epistemology, edited by Adrian Haddock, Alan Millar, and Duncan Pritchard, 298–325. Oxford: Oxford University Press.
Lee, Marcus, Neil Sinclair, and Jon Robson. 2020. "Moral Testimony as Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Risberg, Olle, and Folke Tersman. 2020. "Disagreement, Indirect Defeat, and Higher-Order Evidence." In Higher-Order Evidence and Moral Epistemology, edited by Michael Klenk. New York: Routledge.
Schoenfield, Miriam. 2014. "Permission to Believe: Why Permissivism Is True and What It Tells Us about Irrelevant Influences on Belief." Noûs 48 (2): 193–218. https://doi.org/10.1111/nous.12006.
Turnbull, Margaret Greta. 2019. "Uncovering the Roots of Disagreement." PhD thesis, Boston College.
Whitcomb, Dennis, Heather Battaly, Jason S. Baehr, and Daniel Howard-Snyder. 2017. "Intellectual Humility: Owning Our Limitations." Philosophy and Phenomenological Research 94: 509–39.

12 Epistemic Non-factualism and Methodology

Justin Clarke-Doane

1 Introduction1

I discuss methodology in epistemology. I argue that settling the facts, even the epistemic facts, fails to settle the questions of intellectual policy at the center of our epistemic lives. An important upshot of the discussion is that the methodology of analyzing concepts like knowledge, justification, rationality, and so on is misconceived. More generally, any epistemic method that seeks to issue in intellectual policy by settling the facts, whether by way of abductive theorizing or empirical investigation, no matter how reliable, is inapt. The argument is a radicalization of Moore's open-question argument. I conclude by considering the ramifications of this conclusion for the debate surrounding modal security, a proposed necessary condition on undermining defeat.

2 The Prospect of Pluralism

Alston (2005) argues that certain debates in epistemology might be merely verbal. One party may be using the target word in one way, while the other is using it in another. If Alston is right, then paradigmatic debates in epistemology would be like a "debate" between moving observers over the simultaneity of two events. There would be a plurality of properties in the neighborhood, giving intuitively opposite verdicts on the question whether X's belief that P counts as knowledge, is justified, or is supported by some evidence, and each party may be right about one of them ("intuitively" because the rival properties would strictly give verdicts on different questions – questions of knowledge1 and knowledge2, say – as with simultaneity).

What would become of epistemological debates if Alston were right? It might be thought that they would go away, just like debates about simultaneity. We have given up the question of what is simultaneous with what in favor of the question of what is simultaneous with what relative to R, for variable reference frame R. We would likewise give up the question of whether X's belief counts as knowledge, is justified, or is supported by some evidence. There would be different properties in the neighborhood, and all sides could agree that X's belief exemplified one but not another. There would be no question left to ponder.

However, there is an essential difference between the cases: epistemology is normative, while physics is not. Whatever that means exactly, it at least means that the truths that it discovers about knowledge, justification, evidence, and so on typically issue in policy. They issue in what to do, believe, infer, and so forth. It would be bewildering if an epistemologist were to say, "the evidential support relation is R, but do not worry about having beliefs that bear R to the evidence" (much as it would be if an ethicist were to say, "goodness is property G, but do not worry about seeking things that are G"). It would just invite the question, "why care about evidential support?" There are myriad properties that epistemologists could investigate. What makes knowledge, justification, and so on important is their connection to intellectual policy.

This means that pluralism in epistemology would be a problem in a way that pluralism is not in the case of simultaneity. There would be a residue of policy questions that would be left unresolved by the epistemic facts. There would be justification1, knowledge1, and evidential support1, as well as justification2, knowledge2, and evidential support2. Learning that, for example, our belief that P is supported1 by the evidence but not supported2 would leave the policy question unanswered: whether to believe P. And while it might be thought that this question would just be resolved by a higher-order fact about which properties we epistemically ought to consult, that assumes that Alston's worries do not arise for terms like epistemically ought themselves. If they did, then while we epistemically ought1 to consult properties P1, P2, . . . we epistemically ought2 to consult properties Q1, Q2, . . . . Our original question would merely get transposed. The policy question would now be whether to do what we epistemically ought1 or ought2 to do.

3 Illusive Questions

I have been writing as if epistemological facts would fail to settle policy questions only if Alston were right about natural language semantics. There is no problem so long as he is wrong and so long as we all happen to use knowledge, justification, and so on in the same way. But, actually, a problem like this one arises in any case. Let us grant that epistemic terms are systematically univocal. We can always stipulatively introduce epistemic-like terms that diverge in extension (Eklund 2017). And now the policy question simply rearises: whether to consult these new properties or the old ones when regulating our beliefs, inferences, and so on.

Could this question be settled by the epistemic facts? It could not, on pain of triviality. If the epistemic facts are good for anything, then they are self-sanctioning in the sense that we epistemically ought to use epistemic ought. But this banality does not help us. We epistemically ought to use epistemic ought. We also epistemically* ought to use epistemic* ought, for various alternative epistemic-like properties (or operators), epistemic ought*.2 Whenever ought-like concepts diverge in extension, they issue in conflicting policy. We are left with the question of which to follow. On pain of triviality, this cannot turn on the question of which we ought to follow.

This argument suggests that epistemic facts, even if there are any and even if they are the (determinate) subject of epistemic debates, fail to settle questions at the center of our epistemic lives. The argument is actually an example of a more general one. Moore (1903 [1988], Section 13) noted that an agent may believe that A is F, for any descriptive property, F, while failing to judge that A is morally good.3 For instance, if A is prioritizing one's offspring, then they may believe that this is natural, that it is what we would desire to desire, or that it would maximize a certain psychological state, while failing to judge that doing this is morally good. However, there is a sense in which Moore's point can be generalized even to evaluative properties, like goodness, themselves (Clarke-Doane forthcoming; Clarke-Doane 2015). An agent may believe that A is F, for any property whatever, while failing to "endorse" A in the sense that is characteristic of practical deliberation. This is because they may always wonder whether to do what is F, rather than F*, for some alternative F-like property, F*.4 As Blackburn puts it, "[e]ven if [a moral] belief were settled, there would still be issues of what importance to give it, what to do, and all the rest. . . . For any fact, there is a question of what to do about it" (1998, 70). Settling the facts, even the evaluative facts, fails to settle the policy questions at the center of our evaluative lives.

It might be objected that this "new open-question argument" merely shows that we have failed to identify the facts that settle policy questions. Perhaps, for instance, it is facts about which epistemic-like properties we ought to consult in some non-epistemic sense of "ought," whether moral, prudential, or all things considered (Das 2019). Many epistemologists would hold that epistemic norms are not overriding. Sometimes we ought to do what we epistemically ought not. If a gun is put to my head with the credible threat that my friend will be killed unless I believe that the number of stars is even, then I ought to believe even though I epistemically ought not. But if the argument works, then it works for any evaluative terms – whether moral, prudential, epistemic, or all things considered. Even if we all things considered ought to consult epistemic properties, we all things considered ought* not. And the policy question just rearises whether to do what we all things considered ought, or all things considered ought*, to do.

Nor could the remaining question be one of speculative metaphysics. We know from Goodman (1983) that having true beliefs about a subject is one thing, whereas "getting it right" is another. Getting it right at least arguably requires having true beliefs that ascribe natural kinds. So one might be tempted to suggest that the question of which epistemic-like properties to consult turns on the metaphysical question of which epistemic-like properties are natural kinds.5 But either the question of which epistemic-like properties are natural kinds is itself evaluative or not. If not, then Moore's open-question argument applies. Learning that epistemic ought is a natural kind would be like learning that it is heavy. It would be neither here nor there from the standpoint of policy. But if the question of which epistemic-like properties are natural kinds is evaluative, then the argument can just be rerun vis-à-vis natural kindhood. Even if epistemic ought is a natural kind, it is not a natural* kind, for some naturalness-like concept, natural*, and now the question is whether to theorize in terms of natural or natural* kinds (Dasgupta 2017).

Maybe policy questions turn on an ineffable question of fact (Eklund 2017)? It does not seem so. There are two ways that the target propositions – call them policy propositions – could be ineffable. First, they could be structurally ineffable in the sense of Hofweber (2017). Their ineffability could be due to their failure to share anything like sentential structure. But if this were so, then it would be impossible to explain the connection between our linguistic behavior with epistemic sentences and the policy propositions that we ponder. If you utter S and I reply ~S, where S is an epistemic sentence, then we should at least be able to conclude that the policy propositions that we believe are inconsistent (even if they are not expressed by the sentences S and ~S). But if those propositions are structurally ineffable, then we do not even know whether "consistency" makes sense as applied to them – since we do not know whether there is any operation on them corresponding to sentential negation. So it is more promising to suggest that policy propositions are ineffable, because while they share sentential structure, policy properties are ineffable. If this is why policy propositions are ineffable, however, then we can simply reformulate the problem that policy propositions were supposed to solve. If there are a plurality of epistemic-like facts, then there are a plurality of policy-like facts as well. (Whether those facts are effable is neither here nor there.) And the question remains which of them to consult.6
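The shape of this regress can be displayed schematically. The notation below is only shorthand for the argument just given, with the star marking any ought-like rival that diverges in extension:

\[
\begin{aligned}
Q_{1}&:\ \text{consult the epistemic facts, or the epistemic}^{*}\ \text{facts?}\\
F_{1}&:\ \text{we ought to consult the epistemic facts (and we ought}^{*}\ \text{to consult the epistemic}^{*}\ \text{facts).}\\
Q_{2}&:\ \text{do what we ought, or what we ought}^{*}\text{, to do?}\\
&\ \ \vdots
\end{aligned}
\]

Any fact \(F_{n}\) invoked to settle \(Q_{n}\) has a starred counterpart that issues opposite policy, so a further policy question \(Q_{n+1}\) always remains.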

4 Non-factualism and Methodology

So the policy questions at the center of our evaluative lives are not settled by the facts – whether evaluative or not. Since the only way that this could be is if those questions were not themselves questions of fact, it follows that the policy questions at the center of our evaluative lives are not questions of fact. They are the questions of "what importance to give [the facts], what to do, and all the rest" that remain after the facts have been settled. In particular, even if there are epistemic facts and even if they are the subject of typical epistemological debate, a central goal of a normative discipline like epistemology must be to issue in policy. And not even epistemic facts can do that – for much the reason that they could not if Alston were right about natural language semantics.

This conclusion, if correct, has significant ramifications for epistemic methodology. Consider what is perhaps the standard methodology in epistemology: conceptual analysis. We consider cases and ask what we would "say." Paradigms include Gettier's (1963) "counterexamples" to the justified-true-belief analysis of knowledge, Goldman's (1976) fake barn case, and BonJour's (1980) clairvoyant Norman. There is considerable debate about whether conceptual analysis is a reliable means by which to determine the facts about knowledge, justification, and so on. But for our purposes, we can assume that the methodology is infallible. What is important is that this is not enough to show that results arrived at by way of conceptual analysis settle intellectual policy. That assumes that we ought to defer to the concepts that we happen to have inherited.7 Even if we could accumulate epistemic theorems as robust as Euclid's, maybe our epistemic concepts are corrupt or merely submaximal. In that case, we could agree with Gettier, Goldman, BonJour, and so on about the epistemic facts while disagreeing with them in the way that matters. We could advocate consulting epistemic*, rather than epistemic, facts.

Let us consider a contemporary example. Modal Security is a currently discussed proposed necessary condition on undermining defeat (Clarke-Doane 2015, forthcoming, ch. 4). At first approximation, it says that if evidence undermines (rather than rebuts) our belief, then it gives us reason to doubt the belief's modal security. The intuition is that if evidence tells neither directly against P (as it would if it were rebutting) nor against the security of our belief as to whether P, then it makes no sense to give it up. More carefully, the principle reads as follows (Clarke-Doane and Baras 2019):

Modal Security: If evidence, E, undermines our belief that P, then E gives us direct reason to doubt that our belief is sensitive or safe or E undermines the belief that .

Sensitivity and safety are defined as follows.

Sensitivity: Our belief that P is sensitive just in case had it been that ¬P, we would not still have believed that P (had we used the method that we actually used to determine whether P).

Safety: Our belief that P is safe just in case we could not have easily had a false belief as to whether Q, where Q is any proposition similar enough to P (using the method that we actually used to determine whether P).
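Displayed compactly, and only as a rough gloss on the definitions just given, the two conditions can be symbolized as follows, writing \(\Box\!\!\rightarrow\) for the counterfactual conditional, \(B_{M}(P)\) for believing P via the actually used method M, and \(\approx\) for "nearby"/"similar enough":

\[
\mathrm{Sensitive}\big(B_{M}(P)\big) \iff \big(\neg P \;\Box\!\!\rightarrow\; \neg B_{M}(P)\big)
\]
\[
\mathrm{Safe}\big(B_{M}(P)\big) \iff \forall w \approx w_{@}\ \forall Q \approx P\ \big(B_{M}(Q)\ \text{at}\ w \rightarrow Q\ \text{is true at}\ w\big)
\]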

The primary interest of Modal Security is its bearing on epistemological arguments against realism. The two most prominent examples are genealogical debunking arguments against moral realism and the Benacerraf–Field Challenge against mathematical realism. If Modal Security is true, then all such arguments would seem to fail (Clarke-Doane forthcoming, ch. 4). Consequently, there has been considerable interest in whether it is true. For our purposes, it does not matter whether Modal Security is true, however. What matters is the relevance of arguments for and against it to questions of intellectual policy.

The most prominent argument against it is that knowledge that P requires a "connection," whether causal or not, between our belief that P and the fact that P (Faraci 2019; Korman and Locke forthcoming). It does not suffice that our belief that P is safe and sensitive. If knowledge is the norm of belief and if evidence that our belief that P fails to satisfy that norm undermines whatever justification the belief enjoyed, then evidence that there is no connection between our belief that P and the fact that P undermines that belief – even if it fails to tell against our belief's safety or sensitivity. Since Modal Security says that it does not, Modal Security is false if this argument is sound.8

Let us suppose that knowledge that P does require a connection between our belief that P and the fact that P; that knowledge is the norm of belief; that evidence that a belief falls short of that norm undermines it; and, hence, that Modal Security is false. Shall we adopt the policy of giving up (defeasibly) justified beliefs in light of evidence that neither tells directly against their contents nor against the security of their truth on this basis alone? Surely not. This argument just tells us what is "packed into" the concepts that we happen to have inherited – or, semantically descending, the conditions under which the corresponding properties are instantiated. This does nothing by itself to tell us how to regulate our beliefs (even if it does settle the question of how we epistemically ought to regulate them). Indeed, if our goal is anything like having true beliefs, then giving up a (defeasibly) justified belief on the grounds that it lacks a connection to the truth would seem to be like giving it up because it is not polite. The truth seekers among us would seem better served by opting out of such epistemic etiquette.

My point is not to advocate the policy of abiding by Modal Security. It is that its truth per se is of doubtful relevance to the policy question. An advocate of Modal Security who is convinced that knowledge requires a connection between our belief and the truth will just advocate aspiring to knowledge* instead – where knowledge* is like knowledge but does not require a connection between our belief and the truth. She is like one who advocates not retributing against the responsible. If deserving retribution is "packed into" the concept of responsibility, then such a person will just advocate holding people responsible* instead – where responsibility* is like responsibility, except that one can be responsible* without deserving (or deserving*) retribution. Settling the epistemic, and generally evaluative, facts fails to settle the policy questions at issue.

Of course, the methodology of conceptual analysis has been widely criticized for reasons that are independent of the present discussion. But it should be clear that this problem is not peculiar to it. It arises for any method that seeks to decide on epistemic policy by determining the epistemic facts. This includes the abductive method of Williamson (2007) and the naturalistic method of Quine (1969), no less than the method of conceptual analysis. The policy questions at stake in epistemology, as in other evaluative domains, are not questions of what we ought to do or believe. They are questions of what to do or believe. And these questions remain open even when the facts, including the evaluative ones, are closed.

5 Conclusion

I have argued that knowledge of the epistemic facts fails to settle the policy questions at the center of our epistemic lives. This means that the standard method of analyzing concepts like knowledge, justification, and so on is misconceived. But it means something stronger. It means that settling the facts, whether by conceptual analysis or in any other way, is an inappropriate method by which to arrive at intellectual policy. For example, if "the implication relation [classically] defined agrees with the pre-theoretic notion of implication between statements" (Zach 2018, 2079), then the policy question just gets transposed. The question is now whether to use our pretheoretical notion or another. This argument is a radicalization of Moore's.

Note that I have not argued that "anything goes." That would only follow if policy questions failed to be settled at all if they failed to be settled by the facts. But as non-cognitivists have long argued, this does not follow. Policy questions – whether moral, epistemic, or prudential – are settled by practical reasoning. The point is that such reasoning is not reducible to reasoning about the facts. Policy questions are impervious to how plentiful the facts turn out to be.

Notes

1. Thanks to J. Adam Carter, Michael Klenk, Christos Kyriacou, Chris Scambler, and Justin Vlasits for helpful comments.
2. I borrow the star notation from Eklund (2017).
3. See Greco (2015) for an epistemic analog of Moore's argument.
4. And not just in the sense that we all threaten to be weak in will.
5. Perhaps this is what Werner means in speaking of concepts that adequately characterize the "robustly normative properties" (Werner 2018, 627) or what McPherson means by "authoritatively normative concepts" (McPherson 2018).
6. Nor could the further fact be a mind-dependent one à la Street (2006, Sec. 7) or Korsgaard (1996). If this "new open question argument" works, it works equally to show that the evaluative facts constructivistically construed fail to settle policy questions. For instance, just as we can wonder whether to do what we ought1 as opposed to ought2 to do, realistically construed, we can wonder whether to be an agent or a shmagent (Enoch 2006).
7. See Stich (1990) for a worry in the same spirit as this one. More carefully, since we ought to defer to our concept of ought, we might say that the method of conceptual analysis recommends or advocates using "ought". But perhaps our stance is: stop using "ought" – even if we ought to use it!
8. See Bergmann (1997) for other reasons that one might think that Modal Security is false.

References

Alston, William P. 2005. Beyond "Justification": Dimensions of Epistemic Evaluation. Ithaca, NY: Cornell University Press. www.jstor.org/stable/10.7591/j.ctv2n7gqf.
Bergmann, Michael. 1997. "Internalism, Externalism and Epistemic Defeat." PhD thesis, University of Notre Dame.
Blackburn, Simon. 1998. Ruling Passions: A Theory of Practical Reasoning. Oxford: Oxford University Press.
BonJour, Laurence. 1980. "Externalist Theories of Empirical Knowledge." Midwest Studies in Philosophy 5 (1): 53–74. https://doi.org/10.1111/j.1475-4975.1980.tb00396.x.
Clarke-Doane, Justin. forthcoming. Morality and Mathematics. Oxford: Oxford University Press.
Clarke-Doane, Justin. 2015. "Objectivity in Ethics and Mathematics." In Proceedings of the Aristotelian Society: The Virtual Issue. Vol. 3, edited by Ben Colburn, 98–109. www.aristoteliansociety.org.uk/pdf/2015_virtual_issue.pdf.
Clarke-Doane, Justin, and Dan Baras. 2019. "Modal Security." Philosophy and Phenomenological Research 65 (1): 87. https://doi.org/10.1111/phpr.12643.
Das, Ramon. 2019. "Moral Pluralism and Companions in Guilt." In Companions in Guilt Arguments in Metaethics, edited by Christopher Cowie and Richard Rowland. London: Routledge.
Dasgupta, Shamik. 2017. "Normative Non-Naturalism and the Problem of Authority." Proceedings of the Aristotelian Society 3: 297–319.
Eklund, Matti. 2017. Choosing Normative Concepts. Oxford: Oxford University Press.
Enoch, David. 2006. "Agency, Shmagency: Why Normativity Won't Come from What Is Constitutive of Action." The Philosophical Review 115 (2): 169–98.
Faraci, David. 2019. "Groundwork for an Explanationist Account of Epistemic Coincidence." Philosophers' Imprint 19 (4): 1–26.
Gettier, Edmund. 1963. "Is Justified True Belief Knowledge?" Analysis 23: 121–3.
Goldman, Alvin I. 1976. "Discrimination and Perceptual Knowledge." The Journal of Philosophy 73 (20): 771–91. https://doi.org/10.2307/2025679.
Goodman, Nelson. 1983. Fact, Fiction, and Forecast. Cambridge, MA: Harvard University Press.
Greco, Daniel. 2015. "Epistemological Open Questions." Australasian Journal of Philosophy 93 (3): 509–23.
Hofweber, Thomas. 2017. "Are There Ineffable Aspects of Reality?" In Oxford Studies in Metaphysics. Vol. 10, edited by Karen Bennett and Dean W. Zimmerman, 124–70. Oxford: Oxford University Press.
Korman, Daniel Z., and Dustin Locke. forthcoming. "Against Minimalist Responses to Moral Debunking Arguments." In Oxford Studies in Metaethics. Vol. 15, edited by Russ Shafer-Landau.
Korsgaard, Christine M. 1996. The Sources of Normativity. Edited by Onora O'Neill. Cambridge: Cambridge University Press.
McPherson, Tristram. 2018. "Authoritatively Normative Concepts." In Oxford Studies in Metaethics. Vol. 13, edited by Russ Shafer-Landau, 253–77. Oxford: Oxford University Press.
Moore, George Edward. 1903 [1988]. Principia Ethica. Amherst, NY: Prometheus Books.
Quine, W.V. 1969. "Epistemology Naturalized." In Ontological Relativity and Other Essays, 69–90. New York, NY: Columbia University Press.
Stich, Stephen P. 1990. The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation. Cambridge, MA: MIT Press.
Street, Sharon. 2006. "A Darwinian Dilemma for Realist Theories of Value." Philosophical Studies 127 (1): 109–66. https://doi.org/10.1007/s11098-005-1726-6.
Werner, Preston J. 2018. "Why Conceptual Competence Won't Help the Non-Naturalist Epistemologist." Canadian Journal of Philosophy 48 (3–4): 616–37. https://doi.org/10.1080/00455091.2017.1410417.
Williamson, Timothy. 2007. The Philosophy of Philosophy. Malden, MA: Wiley-Blackwell.
Zach, Richard. 2018. "Rumfitt on Truth-Grounds, Negation, and Vagueness." Philosophical Studies 175 (8): 2079–89. https://doi.org/10.1007/s11098-018-1114-7.

Contributors

Brian C. Barnett is lecturer in philosophy at the State University of New York at Geneseo and St. John Fisher College in Rochester, New York. He received his PhD in philosophy from the University of Rochester for a thesis on the nature and significance of higher-order evidence. His primary areas of research are epistemology, philosophy of religion, and the philosophy of nonviolence. Additional interests include logic, abstract mathematics, and formal methods in philosophy. Brian is currently at work in his role as editor of the Introduction to Epistemology volume in the Introduction to Philosophy open textbook series (ed. Christina Hendricks).

J. Adam Carter is a lecturer in philosophy at the University of Glasgow, where he is director of Glasgow's COGITO Epistemology group. He works mainly in epistemology, with research interests in virtue epistemology, social epistemology, epistemic defeat, know-how, and relativism. His books include Metaepistemology and Relativism (2016, Palgrave Macmillan) and (with Ted Poston) A Critical Introduction to Knowledge-How (2018, Continuum).

Justin Clarke-Doane is an associate professor of philosophy at Columbia University, NYC. He is also an honorary research fellow at the University of Birmingham, UK, and an adjunct research associate at Monash University, Australia. He earned his PhD in philosophy from NYU in 2011. His work centres on metaphysical and epistemological problems surrounding apparently a priori domains, such as morality, modality, mathematics, and logic. He is the author of Morality and Mathematics (Oxford University Press, forthcoming).

Joshua DiPaolo is an assistant professor in the philosophy department at California State University, Fullerton. His current research concerns higher-order evidence, disagreement, intellectual humility, the epistemology of prejudice, and conversion and radicalization. His work has appeared in Philosophical Studies, Journal of the American Philosophical Association, Synthese, Pacific Philosophical Quarterly, and Journal of Ethics and Social Philosophy.

Michael Huemer received his BA from UC Berkeley and his PhD from Rutgers University. He is presently professor of philosophy at the University of Colorado at Boulder. He is the author of more than 70 academic articles in ethics, epistemology, political philosophy, and metaphysics, as well as six amazing books that you should immediately buy, including Ethical Intuitionism (2005), The Problem of Political Authority (2013), and Dialogues on Ethical Vegetarianism (2019).

Marcus Lee is teaching associate in the Department of Philosophy at the University of Nottingham. His research interests include ethics, metaethics, epistemology and Eastern philosophy.

Dario Mortini is a PhD candidate in philosophy at the University of Glasgow, where he is working on issues related to collective epistemology, knowledge-first epistemology, and epistemic defeat.

Norbert Paulo is a postdoctoral researcher at the Institute of Philosophy at the University of Graz and at the Department of Social Sciences and Economics at the University of Salzburg, Austria. His research interests are in empirically informed ethics and moral epistemology but also in moral psychology and experimental philosophy more generally. He also works in political philosophy, applied ethics, and legal philosophy.

Olle Risberg is a PhD student at Uppsala University (Sweden). He works mainly in ethics and metaethics and has published in venues such as Journal of the American Philosophical Association, Oxford Studies in Metaethics, Philosophical Studies, and Philosophical Quarterly.

Jon Robson is an assistant professor in philosophy at the University of Nottingham. His current research focuses mainly on issues in aesthetics, epistemology and the philosophy of religion. He is coauthor of A Critical Introduction to the Metaphysics of Time as well as coeditor of Aesthetics and the Sciences of Mind and The Aesthetics of Videogames.

Eric Sampson is a PhD candidate in philosophy at the University of North Carolina at Chapel Hill. He works in ethical theory and epistemology. His dissertation concerns how we ought to think about the metaphysical and epistemological status of normative inquiry (e.g. ethics, epistemology, political philosophy) given that there never has been, and likely never will be, agreement about the correct answers – not even among those specially trained to conduct normative inquiry.

Paul Silva is a professor at the University of Cologne and a member of the Alexander von Humboldt-funded group CONCEPT. He specializes in epistemology and has numerous publications in leading peer-reviewed journals.


Neil Sinclair is associate professor of philosophy at the University of Nottingham. He is the editor of The Naturalistic Fallacy (Cambridge University Press 2019) and Explanation in Ethics and Mathematics (with Uri D. Leibowitz, Oxford University Press 2016). His paper on evolutionary debunking arguments was awarded the 2016 Sanders Prize in Metaethics. His research covers the areas of moral semantics, moral metasemantics, moral epistemology, and methodology in metaethics.

Folke Tersman is the chair professor of practical philosophy at Uppsala University (Sweden). He works mainly in metaethics and is the author of Moral Disagreement (Cambridge University Press, 2006). His recent publications include "Debunking and Disagreement" (Noûs, 2017), "A New Route from Moral Disagreement to Moral Skepticism" (w. Olle Risberg, Journal of the American Philosophical Association, 2019), and "From Scepticism to Anti-Realism" (Dialectica, forthcoming).

Marco Tiozzo works as an adjunct lecturer of philosophy at the University of Gothenburg and as an associate professor in philosophy at Alströmergymnasiet, a senior high school in Sweden. He received his PhD from the University of Gothenburg in 2019. Tiozzo's dissertation was titled "Moral Disagreement and the Significance of Higher-Order Evidence." The thesis presents a new perspective on the normative significance of higher-order evidence. His primary research interests are in epistemology and in the theory of normativity, in particular theories of reasons and rationality.

Margaret Greta Turnbull is assistant professor of philosophy at Gonzaga University. Her research interests span social and traditional epistemology, philosophy of science, philosophy of religion, and public philosophy. She works to provide responses to real-world epistemic predicaments in scientific and nonscientific contexts and has published and forthcoming work on underdetermination, permissivism, and disagreement. She earned her PhD in the Department of Philosophy at Boston College.

Silvan Wittwer is a postdoctoral associate at Harvard University. His current project, funded by the Swiss National Science Foundation and jointly hosted by the Massachusetts Institute of Technology, examines the epistemic risks for political belief formation, especially in the age of social media. He is also interested in moral knowledge: what it is, where its limits lie, and how it fits into our scientific worldview.

Index

akrasia: epistemic 142, 146; see also akratic
akratic: beliefs 7, 8, 142, 245
belief: background 100–102, 108, 110–11, 113; high-stakes 86–7, 89; moral divergence 241–9, 252; see also disagreement
Benacerraf, Paul 118, 261
bias 139, 155–60, 181; cognitive 63, 65, 67; implicit 56
Christensen, David 3, 6, 12, 57–9, 99–103, 108–9, 224, 226, 239
conciliationism 14, 31–9, 46–7, 118, 128; conciliatory view 6–7, 12, 19, 136, 138–9, 241; non-conciliationism 240, 243–9, 251–3
consequentialism 93, 252
conspiracy theories 172
debunking 5, 6, 11–18, 31–8, 41–7, 97, 107, 117–19, 125–30, 164, 172, 181, 261; see also evolution
defeat: higher-order 6, 14, 16–19, 78, 102, 107, 110–11, 119–21, 123–8, 130, 137, 140–9, 164, 207, 233; objective 17, 137–40, 145–6; self-defeat 12, 14, 31–8, 46, 91; subjective 17, 111, 137, 140, 143–6; undercutting 6, 16, 56–8, 99–105, 108–10, 137, 140, 164, 256, 260
disagreement: moral 5, 10, 11, 13–17, 19, 42, 46, 106, 118, 123, 126, 136–8, 240, 250, 253; peer 6, 8, 13–18, 32–5, 37–40, 42–7, 55–6, 97, 99, 103, 104, 107, 111, 118, 124, 127–8, 136–9, 146–9, 180, 182, 190–1, 239–52
dogmatism 19, 240, 152
epistemology: collective 18, 204; moral 1, 3, 5, 8, 11, 13–19, 56, 58–61, 64–5, 69–70, 117, 119, 130, 198, 211, 219, 234; virtue 20, 200
evidence: first-order 2, 4, 6–7, 9, 16–18, 32, 34, 36, 38–43, 54, 81, 91, 98–100, 103, 105, 111, 119, 121, 123, 126–7, 129–30, 139, 162–5, 179–85, 187–90, 239; misleading higher-order 4, 7, 98, 107, 125–6, 144, 245
evolution 9, 35, 37–8, 42, 47, 55, 118, 123, 128–9; see also genealogical
extremism 221
fanaticism 19, 218–23, 226–34; see also extremism
Feldman, Richard 3, 5, 120, 142, 239
Field, Hartry 118, 261
genealogical 3–4, 8–14
Hills, Allison 13, 18, 193, 201
independence, principle 12, 14–15, 31–9, 46–7, 224–5
intuition, moral 2–3, 5, 7–8, 15, 35, 43, 54–6, 58–61, 63, 65–9, 89, 119, 138
Joyce, Richard 3, 9–10
justification: doxastic 82; epistemic 78, 80–2, 86, 97–102, 105, 108–11, 117, 124, 126, 129, 137–8, 141, 171, 244; of moral beliefs 3, 6–7, 9–11, 15–18, 31–5, 37–40, 42–3, 46, 87, 90–2; propositional 82
Kelly, Thomas 3, 6–7, 39–40, 42, 67, 111, 139, 189, 239
knowledge: moral 5, 10, 13, 18, 69, 70, 87, 138, 148–9, 198–213 passim
Lasonen-Aarnio, Maria 3, 6–7, 57–8, 60, 99, 141, 146
level-splitting 7–8, 16, 19, 98, 108–9, 141–2, 239–45, 248–9, 251
McGrath, Sarah 13, 137–8
Pritchard, Duncan 18, 80, 89
rationality: doxastic 141, 143; propositional 141, 143–4
realism, moral 10, 17, 31, 44, 117, 136, 193, 261
reflective equilibrium 54–5, 58, 63, 68
reliability 12, 15, 55, 58–70, 119, 125–6, 129, 139, 155, 164–5, 167–9, 171
safety, of beliefs 12, 70–80, 87–91, 260–1
scepticism, moral 17–18, 33–5
sensitivity, of beliefs 12, 79, 87, 90–1, 260–1
steadfast view 6–7; see also conciliationism, non-conciliationism
Street, Sharon 3, 8–10, 12, 31–3, 35, 40–1, 46, 100–101
testimony, moral 5, 13–14, 17–18, 179–93, 198
total evidence view 7, 14, 32, 38–9, 43, 45–7, 111
Vavova, Katia 8, 12, 14, 31, 36, 46, 137–8, 148, 224
virtue: epistemic 161, 166; intellectual 130, 199–202; moral 130, 251; -reliabilism 207–8, 201