Regard for Reason in the Moral Mind

Joshua May

Oxford University Press Published in May 2018

[Final draft | 6 December 2017 | 108,718 words]

Abstract: The burgeoning science of ethics has produced a trend toward pessimism. Ordinary moral judgment and motivation, we’re told, are profoundly influenced by arbitrary factors and ultimately driven by unreasoned feelings or emotions—fertile ground for sweeping debunking arguments. This book counters the current orthodoxy on its own terms by carefully engaging with the empirical literature. The resulting view, optimistic rationalism, maintains that reason plays a pervasive role in our moral minds and that ordinary moral reasoning is not particularly flawed or in need of serious repair. The science does suggest that moral knowledge and virtue don’t come easily, as we are susceptible to some unsavory influences that lead to rationalizing bad behavior. Reason can be corrupted in ethics just as in other domains, but the science warrants cautious optimism, not a special skepticism about morality in particular. Rationality in ethics is possible not just despite, but in virtue of, the psychological and evolutionary mechanisms that shape moral cognition.

Keywords: rationalism, optimism, skepticism, debunking arguments, moral judgment, moral motivation, moral knowledge, virtue, reason, emotion, rationalization


Table of Contents

Preface
List of Tables and Figures
Introduction

Ch. 1: Empirical Pessimism
    1.1 Introduction
    1.2 Pessimism about Moral Cognition
    1.3 Pessimism about Moral Motivation
    1.4 Optimistic Rationalism
    1.5 Coda: Appealing to Science

PART I: Moral Judgment & Knowledge

Ch. 2: The Limits of Emotion
    2.1 Introduction
    2.2 Moralizing with Feelings?
    2.3 Accounting for Slight Amplification
    2.4 Psychopathology
    2.5 Conclusion

Ch. 3: Reasoning beyond Consequences
    3.1 Introduction
    3.2 Consequences
    3.3 Beyond Consequences
    3.4 Moral Inference
    3.5 Conclusion

Ch. 4: Defending Moral Judgment
    4.1 Introduction
    4.2 Empirical Debunking in Ethics
    4.3 The Debunker’s Dilemma
    4.4 Emotions
    4.5 Framing Effects
    4.6 Evolutionary Pressures
    4.7 Automatic Emotional Heuristics
    4.8 Explaining the Dilemma
    4.9 Conclusion

Ch. 5: The Difficulty of Moral Knowledge
    5.1 Introduction
    5.2 The Threat of Selective Debunking
    5.3 The Threat of Peer Disagreement
    5.4 Conclusion

PART II: Moral Motivation & Virtue

Ch. 6: Beyond Self-Interest
    6.1 Introduction
    6.2 The Egoism-Altruism Debate
    6.3 Empirical Evidence for Altruism
    6.4 Self-Other Merging
    6.5 Dividing Self from Other
    6.6 Conclusion

Ch. 7: The Motivational Power of Moral Beliefs
    7.1 Introduction
    7.2 Ante Hoc Rationalization
    7.3 Rationalizing Immorality
    7.4 Motivating Virtue
    7.5 Conclusion

Ch. 8: Freeing Reason from Desire
    8.1 Introduction
    8.2 Anti-Humean Moral Integrity
    8.3 Neurological Disorders
    8.4 Special Mechanisms
    8.5 Aspects of Desire
    8.6 Simplicity
    8.7 Conclusion

Ch. 9: Defending Virtuous Motivation
    9.1 Introduction
    9.2 The Defeater’s Dilemma
    9.3 The Threat of Egoism
    9.4 The Threat of Situationism
    9.5 Containing the Threats
    9.6 Conclusion

Conclusion

Ch. 10: Cautious Optimism
    10.1 Introduction
    10.2 Lessons
    10.3 Enhancing Moral Motivation
    10.4 Enhancing Moral Cognition
    10.5 Conclusion

References
Index


To Jules, my mighty girl, may you grow up to be righteous.


Preface

During graduate school in Santa Barbara, I developed a passion for the interdisciplinary study of ethics. Fortunately for me, the field was just beginning to explode, fueled by fascinating discoveries in the sciences and renewed interest in their philosophical implications. Both philosophers and scientists have primarily taken the research to reveal that commonsense moral deliberation is in need of serious repair. I was energized by this burgeoning area of research and I too felt the pull toward pessimism about ordinary moral thought and action and particularly about the role of reason in them. However, as I began to dig into the research and arguments, many pessimistic conclusions seemed to be based on insufficient evidence. I’ve come to find that our moral minds are more defensible in light of the science than many have let on.

This book articulates and defends my optimistic rationalism. It argues that our best science helps to defend moral knowledge and virtue against prominent empirical attacks, such as debunking arguments and situationist experiments. Being ethical isn’t easy, as our understanding of the human brain certainly confirms. But our moral minds exhibit a regard for reason that is not ultimately beholden to blind passions. Although we are heavily influenced by automatic and unconscious processes that have been shaped by evolutionary pressures, virtue is within reach.

Gratitude: I am grateful to so many people who have aided in the development of this project. The number is enormous partly because some of the ideas have been in the works for nearly a decade, since my early years in graduate school. My colleagues have been invaluable at the University of California at Santa Barbara, then Monash University in Australia, and now at the University of Alabama at Birmingham. Many have provided incisive feedback and stimulating discussion that have indirectly helped along the ideas that appear in this book. So as to avoid a dozen pages of acknowledgments, I think it wise to confine my thanks here to those who have provided feedback (in oral or written form) on this particular manuscript or draft papers that have become core elements of it. These individuals are, to the best of my fallible memory and records: Ron Aboodi, Mark Alfano, C. Daniel Batson, Bradford Cokelet, Stephen Finlay, Jeanette Kennett, Charlie Kurth, Hyemin Han, Yongming Han, Julia Henke Haas, Richard Holton, Bryce Huebner, Nageen Jalali, Karen Jones, Matt King, Victor Kumar, Andy Lamey, Elizabeth Lanphier, Neil Levy, Dustin Locke, Heidi Maibom, John Maier, Colin Marshall, Kevin McCain, John Mikhail, Christian Miller, Brandon Murphy, Charles Pigden, François Schroeter, Laura Schroeter, Luke Semrau, Neil Sinhababu (the best “philosophical nemesis” one could ask for), Walter Sinnott-Armstrong, Michael Slote, Jesse Summers, Raluca Szekely, John J. Tilley, Brynn Welch, Danielle Wylie, and Aaron Zimmerman. My sincere apologies to anyone I’ve unintentionally left out.

The work on this book, and key papers leading to it, took place at several institutions outside of my home department at UAB. In 2014, I attended a summer seminar at the Central European University in Budapest ably directed by Simon Rippon. In 2015, at the Prindle Institute for Ethics at DePauw University, Andrew Cullison graciously hosted a writing retreat.
In 2017, I attended the fantastic Summer Seminar in Neuroscience and Philosophy at Duke University, generously supported by the John Templeton Foundation and directed by Walter Sinnott-Armstrong and Felipe De Brigard. Many thanks to the directors and funders for those opportunities. Part of these visits, and many others in recent years, has also been made possible
by my department at UAB. I’m forever grateful to our chair, Gregory Pence, for his considerable support of research, including many trips to present my work both in the U.S. and abroad.

A handful of people deserve to be singled out for special thanks. Throughout my philosophical development, one group has provided continuous mentorship and moral support that has sustained me through the uphill battle that is modern academia. That crew includes Dan Batson, Walter Sinnott-Armstrong, and Aaron Zimmerman. Special thanks also go to the reviewers and advisors of the book for Oxford University Press and to the editor, Peter Momtchiloff. Their guidance, comments, and support made the reviewing and publishing process a pleasure when it easily could have been demoralizing. Finally, I thank two talented philosophy majors, Elizabeth Beckman and Samantha Sandefur, who worked as research assistants.

Previous work: Some of this book draws on my previously published work. Little of it is merely recycled, as I have significantly updated both the presentation (organization and prose), as well as some of the content. Chapters 2 and 3 draw on “The Limits of Emotion in Moral Judgment” (forthcoming in The Many Moral Rationalisms, eds. K. Jones & F. Schroeter, Oxford University Press). Chapter 2 also draws partly on “Does Disgust Influence Moral Judgment?” (published in 2014 in the Australasian Journal of Philosophy 92(1): 125–141). Chapter 3 also draws partly from “Moral Judgment and Deontology: Empirical Developments” (published in 2014 in Philosophy Compass 9(11): 745–755). Chapters 4 and 5 draw on a paper I’ve co-authored with Victor Kumar: “How to Debunk Moral Beliefs” (to appear in The New Methods of Ethics, eds. J. Suikkanen & A. Kauppinen). Chapter 6 is based partly on two articles: “Egoism, Empathy, and Self-Other Merging” (published in 2011 in the Southern Journal of Philosophy 49(S1): 25–39, Spindel Supplement: Empathy & Ethics, ed. R. Debes) and “Empathy and Intersubjectivity” (published in 2017 in the Routledge Handbook of Philosophy of Empathy, ed. Heidi Maibom, Routledge). Chapter 7 draws a little bit from a short commentary piece, “Getting Less Cynical about Virtue” (published in 2017 in Moral Psychology, Vol. 5: Virtue & Happiness, eds. W. Sinnott-Armstrong & C. Miller, MIT Press, pp. 45–52). Chapter 8 draws a little bit from “Because I Believe It’s the Right Thing to Do” (published in 2013 in Ethical Theory & Moral Practice 16(4): 791–808). For permission to republish portions of these works, I’m grateful to the publishers and, in one case, to my brilliant co-author, Victor Kumar.

Audience: Given the interdisciplinary nature of the material in this book, I aim for its audience to include a range of researchers, especially philosophers, psychologists, and neuroscientists. I have thus attempted to write in an accessible manner, which sometimes requires explaining concepts and theories with which some readers will already be quite familiar. Another consequence is that I sometimes omit certain details that one otherwise might discuss at length. I hope that, all things considered, such choices actually make for a better read.

Title: Finally, a note on the book’s title. The phrase “regard for reason” came to me independently (or so it seems) years ago. I later found that over a century ago Henry Sidgwick used it when describing Kant’s view (see Sidgwick 1874/1907: 515).
I also discovered that Jeanette Kennett uses the similar phrase “reverence for reason” in her account of moral motivation (2002: 355).


List of Tables and Figures

Tables:
2.1: Example Data from a Disgust Experiment
3.1: Cases Varying Intention and Outcome
3.2: Two Modes of Moral Cognition
4.1: Example Processes Subject to the Debunker’s Dilemma
5.1: Five Moral Foundations
6.1: Proportion of Participants Offering to Help
7.1: Proportion of Later Indulgence by Choosing Cake
7.2: Mean Self-Reported Likelihood to Engage in Behavior
7.3: Mean Responses to Whether a Job was Suited for a Particular Race
7.4: Task Assignment and Moral Ratings of It
9.1: Situational Influences on Classes of Behavior
9.2: Example Factors Subject to the Defeater’s Dilemma

Figures:
1.1: Key Sources of Empirically Grounded Pessimism
3.1: The Switch Case
3.2: The Footbridge Case
3.3: Loop Track vs. Man-in-Front
5.1: Ideological Differences in Foundation Endorsement
8.1: Accounts of Moral Motivation


Introduction


Ch. 1: Empirical Pessimism

    Reason is wholly inactive, and can never be the source of so active a principle as conscience, or a sense of morals.
    – David Hume

[Word count: 8,971]

1.1 Introduction

Moral evaluation permeates human life. We readily praise moral saints, admonish those who violate ethical norms, and teach children to develop virtues. We appeal to moral reasons to guide our own choices, to structure social institutions, and even to defend atrocities. But is this a fundamentally rational enterprise? Can we even rely on our basic modes of moral thought and motivation to know right from wrong and to act virtuously?

Empirical research may seem to warrant doubt. Many philosophers and scientists argue that our moral minds are grounded primarily in mere feelings, not rational principles. Emotions, such as disgust, appear to play a significant role in our propensities toward racism, sexism, homophobia, and other discriminatory actions and attitudes. Scientists have been increasingly suggesting that much, if not all, of our ordinary moral thinking is different only in degree, not in kind. Even rather reflective people are fundamentally driven by emotional reactions, using reasoning only to concoct illusory justifications after the fact. As Jonathan Haidt has put it, “the emotions are in fact in charge of the temple of morality” while “moral reasoning is really just a servant masquerading as the high priest” (2003: 852). On such influential pictures, ordinary moral thinking seems far from a reasoned pursuit of truth.

Even if some ordinary moral judgments are rational and reliable, brain imaging research suggests that the intuitive moral judgments that align with commonsense morality are driven largely by inflexible emotional alarms instilled in us long ago by natural selection. The same apparently goes for our thinking about even the most pressing of contemporary moral issues, such as abortion, animal rights, torture, poverty, and climate change. Indeed, some theorists go so far as to say that we can’t possibly acquire moral knowledge, or even justified belief, because our brains have been shaped by evolutionary forces that can’t track supposed “moral facts.” As a result, virtue seems out of reach because most of us don’t know right from wrong.

And it gets worse. Even if commonsense moral judgment is on the right track, distinctively moral motivation may be impossible or exceedingly rare. When motivated to do what’s right, we often seem driven ultimately by self-interest or non-rational passions, not our moral beliefs. If our moral convictions do motivate, they are corrupted by self-interested rationalization or motivated reasoning. Scientific evidence suggests that people frequently lie and cheat to benefit themselves whenever they believe they can get away with it. Sure, we can feel empathy for others, but mostly for our friends and family. Those suffering far away don’t stir our sentiments and thus don’t motivate much concern. When we do behave well, it’s often to gain some reward,
such as praise, or to avoid punishment. Doing what’s right for the right reasons seems like a psychological rarity at best.

While theorists disagree over the details, there has certainly been an increase in scientifically motivated pessimism (a term I borrow from D’Arms & Jacobson 2014). These pessimists contend that ordinary moral thought and action are ultimately driven by non-rational processes. Of course, not all empirically informed philosophers and scientists would describe themselves as “pessimists.” They may view themselves as just being realistic and view the optimist as a Panglossian Pollyanna. But we’ll see that the label of “pessimism” does seem apt for the growing attempts to debunk ordinary moral psychology or to pull back the curtain and reveal an unsophisticated patchwork in need of serious repair.

This book aims to defend a more optimistic view of our moral minds in light of our best science. Knowing right from wrong, and acting accordingly, is indeed difficult for many of us. But we struggle not because our basic moral beliefs are hopelessly unjustified—debunked by evolutionary pressures or powerful emotions—or because deep down we are all motivated by self-interest or are slaves to ultimately non-rational passions. Science can certainly change our conception of humanity and cause us to confront our biological and cultural limitations. Not all of commonsense morality can survive, but we should neither oversell the science nor commit ordinary moral thinking to the flames.

Ultimately, I argue for an optimistic rationalism. Ordinary moral thought and action are driven by a regard for “reason”—for reasons, reasonableness, or justifiability. Pessimists commonly point to our tendencies toward irrationality, but perhaps paradoxically it is often our irrationalities that reveal our deep regard for reason. If ordinary moral cognition had little to do with reason, then we would not so often rationalize or provide self-deceived justifications for bad behavior. Driven by this concern to act in ways we can justify to ourselves and to others, moral knowledge and virtue are possible, despite being heavily influenced by unconscious processes and despite being sensitive to more than an action’s consequences.

In this chapter, I’ll introduce some key sources of pessimism about two core aspects of moral psychology. Some theorists are doubtful about the role of reason in ordinary moral cognition and its ability to rise to knowledge. Others are doubtful about the role of reason in moral motivation and our ability to act from virtuous motivation. After surveying a diverse range of opponents, I’ll explain the plan in the coming chapters for defending a cautious optimism about our moral minds, and one that lies within the rationalist tradition.

1.2 Pessimism about Moral Cognition

1.2.1 Sources of Pessimism

Contemporary moral philosophers have rightly turned their attention to the sciences of the mind in order to address theoretical and foundational questions about ethics. What is going through our minds when we condemn others or are motivated to do what’s right? Is moral thinking a fundamentally inferential process or are sentiments essential? To test proposed answers to such questions, some philosophers are now even running their own experiments. Unfortunately, though, philosophers and scientists alike have tended to hastily take this empirically informed movement to embarrass ordinary moral thinking or the role of reason in it.


Ethical theories in the tradition of Immanuel Kant, in particular, have taken a serious beating, largely for their reverence for reason. To be fair, Kantians do claim that we can arrive at moral judgments by pure reason alone, absent any sentiments or feelings. Contemporary Kantians likewise ground morality in rational requirements, not sentiments like resentment or compassion. Thomas Nagel, for example, writes: “The altruism which in my view underlies ethics is not to be confused with generalized affection for the human race. It is not a feeling” (1970/1978: 3). Instead, Kantians typically ground morality in reflective deliberation about what to do (Wallace 2006) or in reflective endorsement of one’s desires and inclinations. Michael Smith, for example, argues that moral approbation expresses a belief about “what we would desire ourselves to do if we were fully rational” (1994: 185). Similarly, Christine Korsgaard writes that “the human mind… is essentially reflective” (1996/2008: 92), and this self-consciousness is required for moral knowledge and virtue, for it allows us to make reasoned choices that construct our own identities. Morality, according to Korsgaard (2009), arises out of “the human project of self-constitution” (4), which involves a “struggle for psychic unity” (7).

Many empirical pessimists contend that reflection and deliberation do not play such a substantial role in our moral minds. Haidt even speaks of a “rationalist delusion” (2012: 103), and it’s not difficult to see why. The study of moral development in psychology was dominated in the 20th century by Lawrence Kohlberg (1973), who was heavily inspired by Kant. However, that tradition has largely fallen out of favor to make room for psychological theories in which emotion plays a starring role. Many psychologists and neuroscientists now believe that a surprising portion of our mental lives is driven by unconscious processes, many of which are automatic, emotional, and patently irrational or non-rational. Reasoning comes in to justify that which one’s passions have already led one to accept. As Haidt has put it, “moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post-hoc construction, generated after a judgment has been reached” (2001: 814). This is the challenge from a brand of sentimentalism which contends that moral cognition is fundamentally driven by emotion, passion, or sentiment that is distinct from reason (e.g., Nichols 2004; Prinz 2007). Many now take the science to vindicate sentimentalism and Hume’s famous derogation of reason. Frans de Waal, for example, urges us to “anchor morality in the so-called sentiments, a view that fits well with evolutionary theory, modern neuroscience, and the behavior of our primate relatives” (2009: 9). Even if reasoning plays some role in ordinary moral judgment, the idea is that sentiment runs the show (Haidt 2012: 77; Prinz 2016: 65).

Other critics allow that ordinary moral judgment can be driven by reason, but they attempt to debunk all or large portions of commonsense morality, yielding full or partial skepticism. Evolutionary debunkers argue that Darwinian pressures prevent our minds from tracking moral truths. Even if blind evolutionary forces get us to latch onto moral facts, this is an accident that doesn’t amount to truly knowing right from wrong.
As Richard Joyce puts it, “knowledge of the genealogy of morals (in combination with some philosophizing) should undermine our confidence in our moral judgments” (2006: 223; see also Ruse 1986; Rosenberg 2011). Other debunkers align good moral reasoning with highly counter-intuitive intuitions consistent with utilitarian (or other consequentialist) ethical theories. Peter Singer (2005) and Joshua Greene (2013), for example, argue that moral thinking is divided into two systems—one is generally trustworthy but the other dominates and should be regarded with suspicion. The commonsense moral intuitions supporting non-utilitarian ethics can be debunked since they arise
from unreliable cognitive machinery. Greene writes that our “anti-utilitarian intuitions seem to be sensitive to morally irrelevant things, such as the distinction between pushing with one’s hands and hitting a switch” (328). These pessimists are utilitarian debunkers who argue that the core elements of ordinary moral judgment should be rejected, largely because they are driven by automatic emotional heuristics that place moral value on more than the consequences of an action. While some moral judgments are rational, and can yield knowledge or at least justified belief, most of our ordinary intuitions are not among them. Such utilitarians are often content with imputing widespread moral ignorance to the general population, which likewise renders virtuous action exceedingly rare.

Many debunkers conceive of moral cognition as facing a dilemma in light of the science. As Singer has put it:

    We can take the view that our moral intuitions and judgments are and always will be emotionally based intuitive responses, and reason can do no more than build the best possible case for a decision already made on nonrational grounds. […] Alternatively, we might attempt the ambitious task of separating those moral judgments that we owe to our evolutionary and cultural history, from those that have a rational basis. (2005: 351)

It seems we can avoid wholesale sentimentalism only by undermining large swaths of ordinary moral thinking.

Whether by embracing sentimentalism or debunking, a pessimistic picture of ordinary moral thinking seems to result. The worry is that, if our best science suggests that our moral minds are driven largely by non-rational passions, then that way of thinking may be indefensible or in need of serious revision or repair. Now, sentimentalists frequently deny that their view implies that our moral beliefs are somehow deficient (see, e.g., Kauppinen 2013; D’Arms and Jacobson 2014), and of course emotions aren’t necessarily illicit influences. However, sentimentalists do maintain that genuinely moral cognition ultimately requires having certain feelings, which suggests that it’s fundamentally an arational enterprise in which reason is a slave to the passions.

At any rate, I aim to provide a defense of ordinary moral cognition that allows reason to play a foundational role. First, I’ll argue for an empirically informed rationalism: moral judgment is fundamentally an inferential enterprise that is not ultimately dependent on nonrational emotions, sentiments, or passions. Second, I’ll advance a form of anti-skepticism against the debunkers: there are no empirical grounds for debunking core elements of ordinary moral judgment, including our tendency to place moral significance on more than an action’s consequences.

1.2.2 Reason vs. Emotion?

Philosophers and scientists increasingly worry that the reason/emotion dichotomy is dubious or at least fruitless. We of course shouldn’t believe that reason is good and reliable while emotion is bad and biasing (Jones 2006; Berker 2009). Moreover, as we further understand the human brain, we find great overlap between areas associated with reasoning and emotional processing with apparently few differences. Like paradigm emotional processing, reasoning can be rapid and relatively inaccessible to consciousness. And emotions, like paradigm reasoning, aid both conscious and unconscious inference, as they provide us with relevant information (Dutton &
Aron 1974; Schwarz & Clore 1983), often through gut feelings about which of our many options to take (Damasio 1994). The position developed in this book is likewise skeptical of the reason/emotion dichotomy, but this won’t fully emerge until the end. For now, let’s begin by attempting to articulate a working contrast between reason and emotion. Reasoning is, roughly, a kind of inference in which beliefs or similar propositional attitudes are formed on the basis of pre-existing ones. For example, suppose Jerry believes that Elaine will move into the apartment upstairs only if she has $5000, and he recently learned that she doesn’t have that kind of money to spare. Jerry then engages in reasoning when, on the basis of these two other beliefs, he comes to believe that Elaine won’t move into the apartment upstairs. It’s notoriously difficult to adequately characterize this notion of forming a belief “on the basis” of other beliefs in the sense relevant to inference (see, e.g., Boghossian 2012). But such issues needn’t detain us here.

Some philosophers and psychologists define reasoning more narrowly as conscious inference (e.g., Haidt 2001: 818; Mercier & Sperber 2011: 57; Greene 2013: 136). This may capture one ordinary sense of the term “reasoning.” The archetype of reasoning is indeed deliberate, relatively slow, and drawn out in a step-wise fashion. For example, you calculate your portion of the bill, weigh the pros and cons of divorce, or deliberate about where to eat for lunch. But there’s no need to be overly restrictive. As Gilbert Harman, Kelby Mason, and Walter Sinnott-Armstrong point out: “Where philosophers tend to suppose that reasoning is a conscious process… most psychological studies of reasoning treat it as a largely unconscious process” (2010: 241). Moreover, ordinary usage and dictionary definitions don’t make conscious awareness essential to reasoning, presumably because rule-governed transitions between beliefs can be a rather automatic, unconscious, implicit, and unreflective process. For example:

• You just find yourself concluding that your son is on drugs.
• You automatically infer from your boss’s subtly unusual demeanor that she’s about to fire you.
• You suddenly realize in the shower the solution to a long-standing problem.

These beliefs seem to pop into one’s head, but they aren’t born of mere feelings or noninferential associations. There is plausibly inference on the basis of representations that function as providing reasons for a new belief. Reasoning occurs; it’s just largely outside of awareness and more rapid than conscious deliberation. Indeed, it is now common in moral psychology to distinguish conscious from unconscious reasoning or inference (e.g., Cushman, Young, & Greene 2010; Harman et al. 2010). The idea is sometimes emphasized by rationalists (e.g., Mikhail 2011), but even sentimentalists allow for unconscious reasoning, particularly in light of research on unconscious probabilistic inference (Nichols, Kumar, & Lopez 2016; see also Zimmerman 2013).

No doubt some of one’s beliefs are formed without engaging in reasoning, conscious or not. Basic perceptual beliefs are perhaps a good example. You believe that the door opening in front of you retains a rectangular shape, but arguably you don’t form this judgment on the basis of even tacit beliefs about angles in your field of vision.
Rather, your visual system generates such perceptual constancies by carrying out computational work among mental states that are relatively inaccessible to introspection and isolated from other patterns of belief-formation (such states are often called sub-personal, although sub-doxastic [Stich 1978] is probably more apt [Drayson 2012]). As the visual experience of a rectangular door is generated, you believe that the
door is rectangular by simply taking your visual experience at face value. So perhaps it’s inappropriate to posit unconscious reasoning (about angles and the like) at least because the relevant transitions aren’t among beliefs—not even tacit ones. Nevertheless, some inferential transitions between genuine beliefs are unconscious. Within the category of unconscious mental processes, some generate beliefs on the basis of prior beliefs (e.g., inferring that your son is on drugs). Other belief-generating processes don’t amount to reasoning or inference (e.g., believing that an opening door is constantly rectangular), at least because they are “subpersonal” or “subdoxastic.”

What about emotion? There is unfortunately even less consensus here. There are staunch cognitivist theories on which emotions have cognitive content, much like or even exactly like beliefs. Martha Nussbaum, for example, argues that our emotions contain “judgments about important things” which involve “appraising an external object as salient for our own wellbeing” (2001: 19). Non-cognitivist theories maintain that emotions lack cognitive content. Jesse Prinz, for example, holds that emotions are “somatic signals… not cognitive states” although they “represent concerns” (2007: 68). Moreover, while we often think of emotional processes as rapid and automatic, they can be more drawn out and consciously accessible. One can, for example, be acutely aware of one’s anxiety and its bodily effects, which may ebb and flow over the course of days or weeks, as opposed to occurring in rapid episodes typical of fear or anger.

I suspect the concept of emotion is flexible and not amenable to precise definition. I’m certainly not fond of classical analyses of concepts, which posit necessary and sufficient conditions (May & Holton 2012; May 2014b). In any case, we can be ecumenical and conceive of emotions as mental states and processes that have certain characteristic features. Heidi Maibom provides a useful characterization of emotions as “mental states associated with feelings, bodily changes, action potentials, and evaluations of the environment” (2010: 1000; cf. also Haidt 2003: 853). Suppose I negligently step on your gouty toe, so you become angry with me. Your anger has an affective element: a characteristic feel. The emotion also has motivational elements that often appear to activate relevant behavior: e.g., it motivates you to retaliate with verbal and physical abuse (but see Seligman et al. 2016: ch. 8). Emotions also seem to have physiological effects—e.g., your anger will lead to a rise in blood pressure, increased heart rate, and other bodily changes. Finally, feeling angry also typically involves or at least causes cognitive elements, such as thoughts about my blameworthiness, about the damage to your toe, about how you could best retaliate, and so on.

I will understand such cognitive elements as, roughly, mental items whose function is to accurately represent. A cognitive mental state, like a belief, can be contrasted with motivations, goals, or desires, which arguably function to bring about the state of affairs they represent (Smith 1994). Tim and I may both believe that there is a taco on the table, but only I want to eat it, for he is stuffed. My longing for the scrumptious taco involves a desire or a mental state whose function is to bring it about that I eat the taco. Importantly, cognitive elements represent how things are and can thus play a role in inference.
Insofar as emotions can have cognitive elements or at least effects on cognition, emotions can provide information and facilitate reasoning. The cognitive elements or effects of emotions make the apparent reason/emotion dichotomy blurry at best. Despite the similarities between the two, however, at least one important difference may remain: it’s commonly assumed that feelings are essential to emotions but not to the process of reasoning. Many researchers use the term “affect” to refer to a kind of feeling (see, e.g., Seligman et al. 2016: 50), although it is something of a technical term
with different meanings for some theorists. Perhaps, then, we should just speak of the dichotomy between inference/affect or cognitive/non-cognitive states. However, sometimes the connection to rationalism and sentimentalism is clearer if we operate with the working conception of reasoning and emotion and then contrast their cognitive vs. affective aspects. So far, the working conception respects the worry that there is no sharp division between reason and emotion. This overlap view, as we might call it, seems to satisfy many in empirical moral psychology (e.g., Greene 2008; Maibom 2010; Helion & Pizarro 2014; Huebner 2015). For others, however, it doesn’t go far enough. On the total collapse view, there is no difference between reasoning and emotional processing. Peter Railton, for example, construes the “affective system” quite broadly such that “affect appears to play a continuously active role in virtually all core psychological processes: perception, attention, association, memory, cognition, and motivation” (2014: 827; cf. also Damasio 1994; Seligman, Railton, Baumeister, & Sripada 2016). On this picture, it may seem that the debate between rationalists and sentimentalists is spurious, since affect and inference are inextricable. However, what motivates the collapse view is a range of empirical evidence which suggests that “emotion” turns out to be more like inference than we thought, not that “reason” turns out to be less like inference than we thought. As James Woodward has put it, areas of the brain associated with emotion are “involved in calculation, computation, and learning” (2016: 97). This would be a welcome result for the view to be defended in this book, which aims to emphasize the role of reasoning and inference in moral psychology. Indeed, the affective system broadly construed is something humans share with many other animals (Seligman et al. 2016). The total collapse view suggests that affective processes are necessary for moral judgment merely because they’re required for inference generally, moral or otherwise. So we give sentimentalists a better chance if we operate with the overlap view instead. To see this, we need to consider in more detail the debate between rationalists and sentimentalists.

1.2.3 Rationalism vs. Sentimentalism

Clearly, both reason and emotion play a role in moral judgment. Nevertheless, a traditional dispute remains between rationalists and sentimentalists over the comparative roles of inference vs. feelings in distinctively moral cognition (Nichols 2008: n. 2; Maibom 2010: 1000; May & Kumar forthcoming). The issue is interesting in its own right and we’ll eventually see that it has important practical implications for how to develop moral knowledge and virtue.

The empirical claim made by those in the rationalist tradition is that reasoning is central to moral cognition in a way that the affective elements of emotions are not. Such (empirical) rationalists hold that moral judgment, just like many other kinds of judgment, is fundamentally “a product of reason” (Nichols 2004: 70) or “derives from our rational capacities” (Kennett 2006: 70). However, as only a psychological thesis, “rational capacities” here is meant to be non-normative—even poor reasoning counts as deriving from one’s “rational” capacities. We can more clearly capture this idea by construing rationalism as the thesis that moral judgment is ultimately “the culmination of a process of reasoning” (Maibom 2010: 999). Emotions can certainly influence moral cognition, according to rationalists, but primarily insofar as they facilitate inference; they aren’t essential for making a judgment distinctively moral.

On the sentimentalist picture I’ll resist, mere feeling or the affective component of emotions is essential for moral cognition and thus moral knowledge (if such knowledge is
possible). Without emotions, a creature can’t make any moral judgments, because the feelings constitutive of emotions are in some way essential to having moral concepts. As Hume famously put it, when we condemn an action or a person’s character:

    The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflection into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. (1739-40/2000: 3.1.1)

Hume clearly conceives of such sentiments or passions as feelings, and it’s this aspect of emotions, not their role in inference, that sentimentalists see as distinctive of moral judgment. Contemporary sentimentalists, such as Shaun Nichols, continue this line of thought, stating that “moral judgment is grounded in affective response” (2004: 83, emphasis added). Moreover, sentimentalists don’t merely claim that lacking feelings or affect would hinder moral judgment, but rather that this would render one incapable of understanding right from wrong. Even when sentimentalists emphasize the importance of reasoning and reflection in moral judgment, they remain sentimentalists because they give “the emotions a constitutive role in evaluative judgment” in particular (D’Arms & Jacobson 2014: 254; cf. also Kauppinen 2013).

Rationalists can agree that emotions are commonly involved in human moral judgment and that lacking them leads to difficulties in navigating the social world. Humans are undoubtedly emotional creatures, and sentiments pervade social interactions with others. To build a moral agent, one might have to endow it with emotions, but only because a finite creature living in a fast-paced social world requires a mechanism for facilitating rapid reasoning and quickly directing its attention to relevant information. A creature with unlimited time and resources needn’t possess emotions in order to make distinctively moral judgments (cf. Jones 2006: 3). On the rationalist view, the role of emotions in morality is like the role of ubiquitous technologies: they facilitate information processing and structure our way of life. If the Internet was somehow broken, for example, our normal way of life would be heavily disrupted, but it’s not as though the Internet is fundamental to the very idea of communication and business transactions. Of course, in one sense the Internet is essential, as we rely on it for how we happen to operate. But a cognitive science of how communication fundamentally works needn’t feature the ability to use email. No doubt the analogy only goes so far, since emotions are not some recent invention in human life. They are part of human nature, if there is such a thing. The point is simply that, for sentimentalists, emotions are more than vehicles for information processing; they partially define what morality is. Thus, even if emotions aid in reasoning, we still can conclude that their affective elements aren’t necessary for moral judgment. The sentimentalist tradition isn’t vindicated if emotions are merely ways of processing information more quickly, rigidly, and without attentional resources (see Prinz 2006: 31).

Of course, emotions may be required for moral judgment, especially knowledge, merely because experiencing certain emotions seems necessary for knowing what another is feeling.
Indeed, sentimentalists sometimes draw an analogy between moral judgments and judgments about color: they are both beliefs typically caused by certain experiences (e.g., Hume 1739-40: 3.1.1; Prinz 2007: 16; Slote 2010; Kauppinen 2013: 370; Sinhababu 2017: ch. 4). The relevant experience may then be necessary for knowledge, particularly because such experiences are conscious, or essentially qualitative, mental states. And understanding what a sensation or experience is like seems impossible without having it oneself (Jackson 1982). In the moral domain, men in power have historically taken a paternalistic attitude toward women, and yet men
presumably don’t generally know exactly what it’s like to be a woman or to carry a child to term. As some liberals are fond of saying: If men were giving birth, there wouldn’t be much discussion about the right to have an abortion. Perhaps even women don’t know these things either until they have the relevant experiences (see Paul 2014). Similarly, an emotionless robot may be ignorant of some moral facts in virtue of lacking feelings of love, grief, pride, or fury. Even so, this doesn’t show that emotions are essential for making a moral judgment. At best certain experiences are sometimes required for understanding a phenomenon. A sophisticated robot could acquire the relevant knowledge by having the requisite experiences. In fact, this is just an instance of a more general problem of ignorance of morally relevant information. Suppose I visit my grandmother in the hospital in Mexico. I know what it is to suffer but I falsely believe that the Spanish word “sufre” refers to, not suffering, but the vegetarian option at a Chipotle restaurant. Then I won’t know that the nurse did wrong when she made “mi abuela sufre.” Does this imply that Spanish is essential for moral knowledge? In certain circumstances, I must know the relevant language, but this is too specific for a general characterization of what’s psychologically essential for moral judgment. Similarly, suppose one doesn’t fully understand, say, the anguish of torture or the humiliation of discrimination unless one experiences them firsthand. Such examples don’t demonstrate that feelings are essential for making distinctively moral judgments but rather judgments about specific cases. The theoretically interesting position for sentimentalists to take is the one that many have indeed taken: emotions are required for understanding right from wrong generally, not merely for understanding a subset of particular moral claims.

1.3 Pessimism about Moral Motivation

1.3.1 Sources of Pessimism

Suppose the previous challenges have been rebutted: ordinary moral cognition is a fundamentally rational enterprise capable of rising to moral knowledge or at least justified belief. Still, we might worry that we rarely live up to our scruples, for self-interest and other problematic passions too frequently get in the way. Even if we do end up doing the right thing, we do it for the wrong reasons. When we’re honest, fair, kind, and charitable, it’s only to avoid punishment, to feel better about ourselves, or to curry someone’s favor. Something seems morally lacking in such actions—let’s say that they’re not fully virtuous. Just as merely true but unjustified belief doesn’t seem to deserve a certain honorific (e.g., “knowledge”), merely doing the right thing, but not for the right reasons, doesn’t warrant another moniker (“virtue”). To be truly virtuous, it seems in particular that moral considerations should more frequently guide our behavior; reason cannot be a slave to non-rational passions, selfish or otherwise. Kant (1785/2002) famously thought that only such actions—those done “from duty”—have moral worth. For example, we’d expect a virtuous merchant not only to charge a naïve customer the normal price for milk but to do it for more than merely self-interested reasons—e.g., to avoid a bad reputation.

Many believe the science warrants pessimism: deep down we’re primarily motivated to do what’s right for the wrong reasons, not morally relevant considerations. Robert Wright, for example, proclaims that an evolutionary perspective on human psychology reveals that we’re largely selfish, and yet we ironically despise such egoism:
    [T]he pretense of selflessness is about as much a part of human nature as is its frequent absence. We dress ourselves up in tony moral language, denying base motives and stressing our at least minimal consideration for the greater good; and we fiercely and self-righteously decry selfishness in others. (1994: 344)

This disconcerting account paints us as fundamentally egoistic. On the most extreme version—psychological egoism—all of one’s actions are ultimately motivated by self-interest. We are simply incapable of helping others solely out of a concern for their welfare. An ulterior motive always lurks in the background, even if unconsciously. There is a wealth of rigorous research that seems to suggest that altruism is possible particularly when we empathize with others. However, compassion can be rather biased, parochial, and myopic. We are more concerned for victims who are similar to ourselves, or part of our in-group, or vividly represented to tug at our heartstrings, rather than a mere abstract statistic (Cialdini et al. 1997; Jenni & Loewenstein 1997; Batson 2011). Moreover, studies of dishonesty suggest that most people will rationalize promoting their self-interest instead of moral principles (Ariely 2012). Even if we’re not universally egoistic, we may not be far from it (Batson 2016).

A related source of pessimism draws on the vast research demonstrating the situationist thesis that unexpected features of one’s circumstances have a powerful influence on behavior. Many have taken this literature to undermine the existence of robust character traits or conceptions of agency and responsibility that require accurate reflection. However, even if we jettison commitments to character traits and reflective agency, results in the situationist literature pose a further challenge. If our morally relevant actions are often significantly influenced by the mere smell of fresh cookies, the color of a person’s skin, an image of watchful eyes, and the like, then we are motivated by ethically arbitrary factors (see, e.g., Nelkin 2005; Nahmias 2007; Vargas 2013; Doris 2015). A certain brand of situationism, then, may reveal that we’re chronically incapable of acting for the right reasons.

Suppose we do often do what’s right for more than self-interested or arbitrary reasons. Proponents of Humeanism would argue that, even when we behave morally, we are beholden to our unreasoned passions or desires (e.g., Sinhababu 2009; Schroeder, Roskies, & Nichols 2010). If Humeans are right, our actions are always traceable to some ultimate or intrinsic motive that we have independent of any reasoning or beliefs. Bernard Williams famously discusses an example in which a callous man beats his wife and doesn’t care at all about how badly this affects her (1989/1995: 39). On the Humean view, we can only motivate this man to stop his despicable behavior by getting him to believe that being more kind will promote something he already cares about. We must try to show him that he’ll eventually be unhappy with himself or that his treasured marriage will fall apart. Pointing out that he’s being immoral will only motivate if he happens to care, and care enough, about that. If, however, refraining from physical abuse will not promote anything this man already wants, then the Humean says there is nothing that could motivate him to stop except a change in his concerns.
The Humean theory can be conceived as a kind of pessimism if acting for the right reasons requires ultimately acting on the basis of recognizing the relevant reasons, not an antecedent desire. Some, like Thomas Reid, seem to think so:

    It appears evident… that those actions only can truly be called virtuous, and deserving of moral approbation, which the agent believed to be right, and to which he was influenced, more or less, by that belief. (1788/2010: 293)
We do often describe one another’s actions this way—e.g., “She did it because she knew it was the right thing to do”—without appealing to an antecedent desire to be moral. However, Humeans might retort that acting for the right reasons requires only being motivated by specific moral considerations (e.g., kindness, fairness, loyalty), not the bare belief that something is right per se (cf., e.g., Arpaly 2003: ch. 3). Perhaps, for example, a father shouldn’t have “one thought too many” about whether he should save his own drowning daughter over a stranger’s (Williams 1976/1981). In general, the virtuous person presumably wouldn’t “fetishize” morality but rather be ultimately concerned with the welfare of others, fidelity to one’s commitments, and so on (Smith 1994), and a moral belief might still be problematic in this way (Markovits 2010). We’ll grapple with this issue later (Chapters 7-8), but for now suffice it to say that a certain kind of pessimism about the role of reason in moral motivation remains if Humeanism is right. For a variety of reasons, pessimists conclude that the aim of doing what’s right for the right reasons is practically unattainable. On a common account of what’s required for virtuous motivation, it’s practically out of reach for most of us. I aim to show that we are capable of genuinely altruistic motivation and that our beliefs about what we ought to do can motivate action without merely serving or furthering some antecedent desire. Moreover, while features of the situation certainly influence what we do, the ethically suspect influences do not systematically conflict with virtuous motivation. I ultimately argue that humans are capable of acting from duty or doing the right thing for the right reasons. Morally good motives are not rarities.

1.3.2 Non-cognitivism & Relativism

The discussion so far has assumed that we can have moral beliefs, conceived as distinct from emotions, desires, or other passions. A complete defense of anti-Humeanism and rationalism requires showing that moral judgments don’t just express non-cognitive states. Consider, for example, the sentence “Slavery is immoral.” It seems such sentences don’t always merely express one’s negative feelings toward slavery. That is, it seems that non-cognitivism about moral judgment is false. Unlike beliefs, mere feelings and desires arguably can’t be evaluated for truth or accuracy, which makes it difficult to see how they can be part of a process of reasoning or inference.

Importantly, rejecting non-cognitivism needn’t commit one to denying relativism, the view that moral statements are only true relative to some framework, such as the norms of one’s culture. I don’t assume that moral judgments are robustly objective but rather that they can be cognitive, similar to other beliefs. When I say “LeBron is tall,” this may be true only relative to a certain contrast class (ordinary people, not basketball players), but it is nonetheless assessable for truth or falsity in a certain context. In a somewhat similar fashion, moral truths are nonetheless truths even if they are in some sense relative to a culture, species, or kind of creature. So we needn’t assume that moral truths are objectively true—a core element of moral realism (Shafer-Landau 2003)—in order to defend moral knowledge, conceived as justified true belief.

I don’t intend to argue at length against non-cognitivism. The view has largely already fallen out of favor among many researchers. A survey of philosophers conducted in 2009 reveals that only 17% lean toward or accept it (Bourget & Chalmers 2014: 476). There is good reason for this. The famous Frege-Geach problem, which I won’t rehearse here, shows that non-cognitivists struggle to make sense of moral language without drastically revising our best
conception of logic and semantics (Schroeder 2010). Non-cognitivism is not exactly a live empirical theory either, as psychologists and neuroscientists appear to assume that moral judgments express beliefs. For example, rather than simply identify moral judgments with emotions or desires, researchers look to whether emotions are a cause or consequence of the moral judgment. In fact, the vast majority of “pessimists” I’ll target assume cognitivism as well.

Moreover, we needn’t accept non-cognitivism to account for the various uses to which moral judgment can be put. For example, hybrid theories can capture the idea that we sometimes use sentences like “That’s just wrong” to express a negative reaction, like a feeling or desire, or to express a belief that an action or policy is wrong. Compare statements containing a pejorative, such as “Yolanda’s a Yankee,” which in some countries is used to express both a belief (Yolanda is American) and a distaste for her and other Americans (Copp 2001: 16). I favor something like this model (May 2014a), according to which moral judgments can express both cognitive and non-cognitive states (cf. also Kumar 2016a). However, I assume here only the falsity of non-cognitivism, which is compatible with either a hybrid view or a strong cognitivist theory on which moral judgments only or chiefly express beliefs.

1.4 Optimistic Rationalism

My primary aim is to resist the predominant pessimism about ordinary moral psychology that has developed in light of scientific research on the human mind. I will offer a more optimistic defense of ordinary moral thought and action in which reason plays a fundamental role—optimistic rationalism, if you will. Since pessimism comes in many forms, an optimistic view must be multi-faceted, with various components in opposition to the variety of pessimistic arguments. In particular, I aim to undermine some popular sources of empirically grounded pessimism (see Figure 1.1). I thus contend that moral judgments are generated by fundamentally cognitive and rational processes (rationalism), which are not subject to wide-ranging empirical debunking arguments (anti-skepticism). Moreover, moral motivation is not always ultimately egoistic (psychological altruism), is heavily driven by a concern to do what’s right, and is not always a slave to unreasoned passions (anti-Humeanism). All of this casts doubt on the idea that virtuous motivation is rare among ordinary individuals (anti-skepticism).

Figure 1.1: Key Sources of Empirically Grounded Pessimism

Note: Parentheses indicate in which chapters the source is primarily addressed.

Some may regard this cluster of views as closely associated with the Kantian tradition in moral philosophy. However, one can defend an optimistic picture of moral psychology without adopting a specific Kantian articulation of what precisely makes an action immoral. For example, Kant (1785/2002) says an action is wrong if the maxim on which it is based can’t be rationally chosen as a universal law. The theory developed in this book does not commit to knowledge of such specific accounts of fundamental moral principles. It’s similar in some important respects to the moral psychology of the great Chinese philosopher Mencius (Morrow 2009) and of some contemporary philosophers who are not particularly Kantian. So non-Kantian moral theorists—especially virtue ethicists, but even some consequentialists—may find much to agree with in what follows.

At any rate, few optimists have taken the empirical challenges seriously, let alone answered them successfully. Some valiant attempts are simply incomplete in that they only address one aspect of moral psychology, such as moral judgment (e.g., Maibom 2005; Kamm 2009; Kennett & Fine 2009; Mikhail 2011; Sauer 2017) or moral motivation (e.g., Kennett 2002; Kennett & Fine 2008; de Kenessey & Darwall 2014; Sie 2015). Others claim to be optimists but embrace what I regard as sources of pessimism, such as simple sentimentalism (e.g., de Waal 2009) or revisionary utilitarianism (e.g., Greene 2013). This book aims to provide a more complete and satisfactory defense.

I employ a divide and conquer strategy, breaking our moral minds into two key components (and their corresponding normative ideals): moral judgment (and knowledge) and moral motivation (and virtue). Consider how these two may come together or apart. Suppose you’re deciding whether you ought to press charges against your thieving son who is in the grips of a severe drug addiction. If all goes well, you form the correct judgment, it’s warranted or justified, and you thus know what to do. Suppose you decide it’s best to proceed with the charges. Next is the important task of living up to your standards. If you’re virtuous, you will act according to this judgment and for the right reasons, yielding moral motivation that exhibits virtue.

One of my overarching aims is to reveal the deep connections and parallels in these two aspects of our moral minds—judgment and motivation—which are often addressed separately
and by different sets of researchers. In subsequent chapters, we’ll see that our moral beliefs are formed primarily by on the basis of unconscious inference, not feelings, and that these moral beliefs play a prominent role in motivating action.

1.4.1 From Moral Judgment to Knowledge

The next four chapters form Part I, which tackles moral judgment and to what extent it rises to knowledge or at least justified belief. Chapter 2 (“The Limits of Emotion”) argues that, contrary to the current sentimentalist orthodoxy, there is insufficient reason to believe that feelings play an integral role in moral judgment. The empirical evidence for sentimentalism is diverse, but it is rather weak and has generally been overblown.

Chapter 3 (“Reasoning Beyond Consequences”) turns to some of the complex inferential processes that do drive ordinary moral thinking. Ample experimental evidence establishes in particular that we often treat more than just the consequences of one’s actions as morally significant. Ultimately, much of ordinary moral judgment involves both conscious and unconscious reasoning about outcomes and an actor’s role in bringing them about.

But don’t we have empirical reasons to believe that core elements of ordinary moral judgment are defective? Chapter 4 (“Defending Moral Judgment”) argues that ordinary moral cognition can yield justified belief, despite being partly influenced by emotions, extraneous factors, automatic heuristics, and evolutionary pressures. I rebut several prominent, wide-ranging debunking arguments by showing that such pessimists face a Debunker’s Dilemma: they can identify an influence on moral belief that is either defective or substantial, but not both. Identifying a substantial influence on moral belief implicates a process that is not genuinely defective, and vice versa.

By restoring reason as an essential element in moral cognition, the foregoing chapters undermine key sources of support for the sentimentalists and the debunkers. Such pessimists have tended to accept the idea that feelings play an important role in ordinary moral judgment. Sentimentalists embrace this as a more or less complete characterization. Debunkers instead use the apparent power of emotion as a source of skepticism about either all of moral judgment or only some of its more intuitive bases. With a regard for reason, ordinary moral thinking is on safer ground.

However, while moral knowledge is possible, Chapter 5 (“The Difficulty of Moral Knowledge”) admits that we are far from flawless moral experts. There are two key empirical threats to the acquisition or maintenance of well-founded moral beliefs. First, empirical research can indeed reveal questionable influences on our moral views. While wide-ranging debunking arguments are problematic, this does not hinder highly targeted attacks on specific sets of moral beliefs (e.g., some influenced by implicit biases). Second, while people share many values, most ordinary folks have foundational disagreements with others who are just as likely to be in error (“epistemic peers”). However, this threat is likewise constrained since many moral disagreements aren’t foundational or aren’t with what most people should regard as their peers.

1.4.2 From Moral Motivation to Virtue



Part II consists of four chapters that focus on ordinary moral action and whether it’s compatible with virtuous motivation, which involves doing the right thing for the right reasons. Chapter 6 (“Beyond Self-Interest”) argues that we can ultimately be motivated by more than egoistic desires. Decades of experiments in social psychology provide powerful evidence that we are capable of genuine altruism, especially when empathizing with others. The psychological evidence, moreover, cannot be dismissed as showing that empathy blurs the distinction between self and other so much that it makes helping behavior non-altruistic.

Even if we can rise above self-interest, we may just be slaves to our largely, if not entirely, egoistic passions. Chapter 7 (“The Motivational Power of Moral Beliefs”) argues that the motivational power of reason, via moral beliefs, has been understated. A wide range of experimental research shows that when we succumb to temptation it’s often due in part to a change in moral (or normative) belief. Rationalization, perhaps paradoxically, reveals a deep regard for reason—to act in ways we can justify to others and to ourselves. The result is that, even when behaving badly, actions that often seem motivated by self-interest are actually ultimately driven by a concern to do what’s right (moral integrity). This addresses a second form of egoistic pessimism but also sets up a challenge to the Humean theory addressed in the next chapter.

Chapter 8 (“Freeing Reason from Desire”) picks up on the idea that our beliefs about which actions we ought to perform have a pervasive effect on what we do. Humean theories would of course insist on connecting such beliefs with an antecedent motive, such as a desire to do what’s right. However, I first shift the burden of proof onto Humeans to motivate their more restrictive, revisionary account. I then show that Humeans are unlikely to discharge this burden on empirical grounds, whether by appealing to research on neurological disorders, the psychology of desire, or the scientific virtue of parsimony.

Chapter 9 (“Defending Virtuous Motivation”) considers further empirical threats to our ability to act for the right reasons. There are two main threats: self-interested rationalization and arbitrary situational factors. However, wide-ranging versions of such empirical challenges resemble sweeping attempts to debunk moral knowledge, and they’re likewise subject to a dilemma. One can easily identify an influence on a large class of actions that is either substantial or defective but not both. Thus, like moral knowledge, the science suggests that the empirical threat to virtue is limited.

1.4.3 Moral Enhancement

The previous chapters defend the idea that, based on our regard for reason, ordinary moral thought and action are capable of rising to knowledge and virtue. But of course such optimism must be cautious. We do often behave badly, or do what’s right for the wrong reasons, or lack justified moral beliefs. Chapter 10 (“Cautious Optimism”) serves as a brief conclusion with a recapitulation of the main claims and moves made in the book, along with a discussion of how moral knowledge and virtue can be enhanced. One broad implication of optimistic rationalism is that the best method for making more of us more virtuous will not target our passions to the exclusion of our cognitive, reasoning, and learning abilities. However, sound arguments aren’t enough, for human beings are fallible creatures with limited attention spans. Still, the impediments to virtue are not primarily the absence of reason or our basic modes of moral thought; rather we must combat ignorance, self-interested rationalization, and the acquisition of misinformation and vices.


There is further reason for caution and caveat. For all I will say here, one might adopt a truly global skepticism and conclude, on empirical grounds, that we don’t know right from wrong and can’t act virtuously because reason itself is thoroughly saturated with defective processes, both inside and outside the moral domain. It’s beyond the scope of this book to grapple with such a deep skepticism about our cognitive and inferential capacities. A vindication of moral knowledge or virtue, especially given a rationalist moral psychology, would ultimately require defending the reliability of our cognitive faculties generally. I’ll be content here, however, if I can show that empirical research doesn’t reveal that reason is largely absent or defective in our basic modes of moral thought and motivation.

1.5 Coda: Appealing to Science

We’ll encounter a great deal of empirical research throughout this book. We should proceed with some caution given heightened awareness of concerns arising in experimental psychology and other sciences.

First, there is a somewhat surprising amount of fraud, in which researchers fabricate data—and moral psychologists are no exception (Estes 2012). Second, there is an unsettling amount of poor scientific practice. Much of this falls under the heading of p-hacking, as when researchers continuously run participants in a study until they find a statistically significant result, which increases the likelihood of a false positive. Third, the scientific process itself has flaws. For example, there are publication biases in favor of shocking results and against null findings, including failures to replicate a previous result. One consequence is the file drawer problem in which failures to detect a significant effect are not published or otherwise circulated, preventing them from being factored into the cumulative evaluation of evidence. Related to this, the rate of replication seems unfortunately low in the sciences generally, including psychology in particular—an issue some call RepliGate (e.g., Doris 2015). A recent group of over 200 researchers attempted to carefully replicate 100 psychological studies and found that only about 39% succeeded (Open Science Collaboration 2015).

An additional problem is that much of the empirical research in moral psychology is done on a small portion of the population, typically undergraduates in North American and European universities. That is changing, as researchers are increasingly recruiting participants from outside of universities, including some from multiple cultures. Still, as Joseph Henrich and his collaborators (2010) have put it, the majority of research participants are from societies that are predominantly Western, educated, industrialized, rich, and democratic (WEIRD people). This is especially problematic when we have empirical evidence that what appear to be psychological universals are not, at least not to the same degree in all societies.

Of course, we shouldn’t overreact. The vast majority of scientists are not frauds and many conduct careful and rigorous studies. While participants are often WEIRD, such a subject pool may suffice if one’s aim is merely to establish the existence or possibility of certain psychological mechanisms, not their universality. Moreover, replication attempts shouldn’t necessarily be privileged over the original studies. The original could have detected a real effect while the later study is a false negative. The cutoff for statistical significance (typically p < .05) is somewhat arbitrary, after all. A statistically significant result only means, roughly, that there is a low probability (less than .05) that the observed difference, or a greater one, would appear in the sample, even when there is no real difference in the population (that is, when the null hypothesis is true). The p-value importantly doesn’t represent the probability that any hypothesis is true but rather a conditional probability: the probability of observing a certain result assuming that the null hypothesis is true. Thus, if a replication attempt is close to passing the conventional threshold—nearly yielding a successful replication—we may still have some reason to believe in the effect. Observing a difference between experimental groups that yields a p-value of .06, for example, doesn’t exactly amount to conclusive reason to accept the null. In general, it’s more difficult to prove a negative (e.g., that an effect is bogus) than it is to establish the existence of a phenomenon.

There is certainly room for improvement in science, including larger sample sizes, more replication attempts, and more cross-cultural research. But science can clearly advance our knowledge, even about the mind and our complex social world, provided we aren’t overly credulous. For example, as Machery and Doris (2017) emphasize, one shouldn’t stake a conclusion on a single study, ideally not even on a few studies from one lab, especially when sample sizes are low. It’s best to draw on a large set of studies in the literature, appealing where possible to meta-analyses and reviews, while recognizing of course that these aren’t definitive either. Caution and care can ultimately yield strong arguments based on scientific data.

Despite judicious appeal to the science, I tread lightly when drawing conclusions from empirical studies or philosophical analysis. Like Hume, I suspect the truth about such perennial issues will be difficult to uncover, and “to hope we shall arrive at it without pains, while the greatest geniuses have failed with the utmost pains, must certainly be esteemed sufficiently vain and presumptuous” (1739-40: intro, 3). So I don’t claim to have conclusively proven the theses in this book. Thankfully, though, my main aim is more modest. Defending a more optimistic conception of our righteous minds requires merely showing that it’s a plausible approach given our best evidence to date. No chapter is meant to establish definitively the relevant claim it defends. The value of the book is largely meant to arise from all of the parts coming together to exhibit a counterweight to the pessimistic trend.
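To make the earlier gloss on statistical significance concrete, consider a minimal sketch in Python (the ratings are invented purely for illustration and correspond to no study discussed in this book). A permutation test estimates the relevant conditional probability directly: shuffle the group labels many times, as if the null hypothesis were true, and count how often a difference at least as large as the observed one arises by chance.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented wrongness ratings (0-100 scale) for two hypothetical groups.
    control = np.array([64, 70, 58, 62, 67, 61, 66, 63])
    manipulated = np.array([71, 75, 66, 72, 69, 74, 70, 73])
    observed_diff = manipulated.mean() - control.mean()

    # Pretend the null hypothesis is true: both groups come from one population,
    # so the group labels are arbitrary and can be reshuffled.
    pooled = np.concatenate([control, manipulated])
    n_control, n_sims, extreme = len(control), 20_000, 0
    for _ in range(n_sims):
        shuffled = rng.permutation(pooled)
        diff = shuffled[n_control:].mean() - shuffled[:n_control].mean()
        if abs(diff) >= abs(observed_diff):
            extreme += 1

    # The p-value: how often a difference this large (or larger) appears when
    # the null hypothesis is true, not the probability that any hypothesis is true.
    print(f"observed difference: {observed_diff:.2f}")
    print(f"permutation p-value: {extreme / n_sims:.4f}")

With these made-up numbers the estimated p-value comes out very small, but the point is only that the quantity being estimated is a probability of the data given the null hypothesis. That is why a p-value just above .05 is weak evidence for the null rather than proof that there is no effect.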



PART I: Moral Judgment & Knowledge



Ch. 2: The Limits of Emotion

Word count: 11,252

2.1 Introduction

Emotions and moral judgment seem to go hand in hand. We feel outraged at injustice, compassion for victims of abuse, repugnance toward corrupt politicians, and a warm joyous elevation toward moral saints and heroes. Suppose, for example, that you hear breaking news of a deadly terrorist attack. Watching the story unfold, learning details of the gratuitous suffering and the loss of so many innocent lives, you experience a mixture of feelings—sadness, anger, disgust—and you naturally judge the relevant action to be immoral. But which came first, the feelings or the judgment? Do you believe the act of terror was unethical because of your negative feelings or do you have those feelings because of your moral evaluation? Empirical evidence could help to settle the issue.

Psychological science, particularly in the tradition of Lawrence Kohlberg (1973), used to emphasize the role of inference and reflection in mature or adult moral judgment, not emotion. The tradition fit well with the rationalist idea that reasoning is integral to moral cognition. Feelings are either merely the natural consequences of moral judgment or provide just one way of instigating or facilitating reasoning that leads to such judgments. More recently, however, there has been something of an “affect revolution” in psychology generally, as well as moral psychology in particular (Haidt 2003: 852). There is apparently converging scientific evidence for the sentimentalist idea that the affective aspects of moral emotions, or feelings, play a foundational role in moral judgment. Jesse Prinz, for example, proclaims: “Current evidence favors the conclusion that ordinary moral judgments are emotional in nature” (2006: 30).

Again, there is no bright line dividing reason from emotion (see Chapter 1, §1.2.2), and each clearly influences moral thinking to some degree. However, sentimentalists maintain that feelings play a foundational role in distinctively moral judgment (see Chapter 1, §1.2.3). This chapter and the next together form an empirically grounded argument against the new sentimentalist orthodoxy. We’ll see in this chapter that there is no compelling evidence that the affective elements of moral emotions are causally necessary or sufficient for making a moral judgment or for treating norms as distinctively moral. Chapter 3 then shows that, while it is misguided to emphasize reflection and the articulation of reasons, moral cognition is chiefly driven by unconscious inference, just like other forms of thought.



2.2 Moralizing with Feelings?

A wealth of studies purport to show that feelings substantially influence moral judgment. There are certainly studies showing that moral judgments are correlated with emotions (Moll et al. 2005), but that is no evidence in favor of sentimentalism. Since we care deeply about moral issues, rationalists can happily accommodate emotions being a consequence of moral judgment (Prinz 2006: 31; Huebner et al. 2009). Ideally sentimentalists would be able to show that simply having strong feelings can make us moralize an action—e.g., come to believe that an action is wrong when we previously thought it morally acceptable.

To establish this, we must be able to disentangle emotions from their cognitive components or their effects on inference. As Prinz points out, rationalists could happily admit a causal role for emotions by holding, for instance, that they “merely draw our attention to morally relevant features of a situation” (2006: 31) at which point reasoning processes could play a substantial role (cf. Huebner et al. 2009; Nichols 2014: 738; Scanlon 1998: ch. 1.8). Moreover, many moral emotions, such as compassion and indignation, are intimately bound up with beliefs about the morally relevant facts.

Empathy nicely illustrates the issue. Many classical and contemporary sentimentalists have pointed out that putting ourselves in the shoes of a victim can lead us to condemn the perpetrator’s actions. The moral judgment was surely driven at least by the cognitive side of empathy, in which we acquire a vivid understanding of the victim’s plight. But in empathizing we also share in the victim’s feelings of anguish. Is this affective side of empathy essential? It’s difficult to tell. Sentimentalists have accordingly been drawn to the explosion of research on incidental emotions in which the feelings are unrelated to the action evaluated. Being grossed out by someone using the bathroom, for example, can be divorced from morally relevant thoughts about embezzlement.

2.2.1 Moralizing Conventions

One prominent sentimentalist strategy is to establish that feelings are essential to making distinctively moral judgments. For this to work, we need a characterization of the concept of morality or some core aspect of it. One mark of moral norms is that they appear to be distinct from mere conventions. The norms of etiquette require that I utter certain words to people after they sneeze, and some school rules dictate that children wear a certain uniform. Such conventions can be contrasted with moral norms, such as those that prohibit physically harming innocent people or invading someone’s privacy. Violating moral norms is rather serious and this isn’t contingent on an authority’s decree. Wearing pajamas to school, by contrast, seems less serious generally and certainly more acceptable if the teacher says it’s okay. Moreover, explanations for why one shouldn’t violate a convention are less likely to point to considerations of harm, fairness, or rights.

A large body of empirical evidence seems to confirm that people ordinarily draw some sort of moral/conventional distinction. In general, compared to moral transgressions, we treat violations of conventions as less serious, more permissible, contingent on authority, valid more locally than universally, and involving distinct justifications that don’t primarily appeal to another’s welfare or rights (Turiel 1983). This distinction between types of norms appears to develop quite early—around age 4—and appears to be universal across many cultures, religions, and social classes (Nucci 2001).


Drawing heavily on this research, Shaun Nichols (2004) has argued that what makes us moralize a norm is that it’s backed by strong feelings or affect. While rules or norms are essential to moral judgment, they aren’t sufficient, for they may be conventional, not moral. What makes a judgment moral has to do with our feelings toward the norm that has been violated (or upheld, presumably).

The key test of this “sentimental rules account” comes from studies in which Nichols (2004) sought to demonstrate that people would moralize the violation of a convention if they were especially disgusted by it (e.g., a person snorting and spitting into his napkin at dinner). In the first experiment, Nichols found evidence that participants would treat repulsive transgressions of etiquette as more like moral transgressions (that is, less conventional) compared to violations of emotionally neutral conventions. The people in his small sample were inclined to rate the disgusting transgressions as slightly more serious, less permissible, and less authority contingent (while justifications varied). In the second experiment, Nichols divided participants up into those that are highly disgust-sensitive, based on their score on a disgust scale, previously validated by other researchers. Participants especially sensitive to disgust tended to treat disgusting transgressions as less conventional, compared to the other group. However, while disgust-sensitive participants rated repulsive transgressions as more serious and less authority contingent, there was no difference between the groups’ permissibility ratings (2002: 231).

Does this provide strong evidence that feelings alone can moralize? There are several reasons for doubt. First, disgust was not manipulated in either experiment, and in the second study disgust was merely identified as likely to be more intense in a certain group. We can’t be sure that the different responses these groups provided were merely due to differing levels of disgust experienced, rather than another factor. Second, permissibility ratings are arguably a key element of moral judgment, yet there was no difference among those participants who were especially disgust-sensitive. While these participants did rate disgusting transgressions as slightly more serious and less contingent on authority, this is a far cry from moralizing. It is interesting that elevated disgust seems to correspond to treating a transgression as less authority contingent. However, third, Nichols did not directly measure whether more disgusting violations strike people as involving more psychological harm, which fails to pry the emotion apart from a morally relevant belief and would explain any tendency to treat disgusting transgressions as a bit more of an ethical issue.

Follow-up studies by Royzman et al. (2009) suggest that perception of harm accounts for some of the moralization of disgusting transgressions. Moreover, with a much larger sample size Royzman and colleagues were not able to replicate Nichols’s original result when the disgust scale was administered two weeks prior to soliciting moral reactions to the hypothetical transgressions. With this improved design, participants were less likely to be aware of the hypothesis being tested or to have their assessments of the transgressions influence their responses on the disgust scale.

Nichols, along with his collaborator David Yokum, has conducted a related study that directly manipulated another incidental emotion: anger (reported in Nichols 2014: 737).
Some participants were randomly assigned to write about an event that made them particularly angry and then judged the appropriateness of an unrelated etiquette violation. Some of the participants feeling greater incidental anger were more likely than controls to say that if someone disagrees with them about the etiquette violation, then one of the disputants “must be mistaken.” This study might seem to further support sentimentalism (Prinz 2016: 55). However, the small effect was found only among women. More importantly, even if such an effect had been found for more than a subgroup (as in Cameron et al. 2013), the data suggest a change in judgments of objectivity, not authority-independence in particular—and certainly not a change in all or even most of the characteristic features of norms that transcend mere convention.

A broader problem here is that it’s unclear whether the moral/conventional distinction does appropriately measure moralizing anyway. Daniel Kelly et al. (2007) had participants evaluate a broader range of harmful actions than the usual “school yard” transgressions found in work on the moral/conventional distinction. The results provide some evidence that not all violations of moral rules yield the signature pattern of responses. For example, most of their participants thought that it’s very bad to train people in the military using physical abuse—but only if government policy prohibits it. The norm is apparently regarded as a moral one even though its status is authority-dependent.

While there may be concerns about some aspects of the study conducted by Kelly and colleagues (Kumar 2015), there are good theoretical reasons for expecting such data. As Heidi Maibom (2005: 249) points out, many norms that would be dubbed mere “conventions” often seem moral. For example, if I speak without the talking stick in hand, then I’ve violated a rule that’s not very serious, not exactly highly impermissible, and dependent on an authority that set the rule. If the councilor says anyone can talk, with or without the stick, then there’s no transgression. Nevertheless, when the rule is in place, consistently speaking without the stick and interrupting others is rude, pompous, and inconsiderate. A line between moral and merely conventional is difficult to discern when one is treating others poorly by violating a local convention.

In sum, it doesn’t seem sentimentalists can find strong support in research on incidental emotions and the moral/conventional distinction. The distinction is certainly a valuable heuristic for distinguishing many moral rules from non-moral ones, perhaps even as a rough way of characterizing the essence of a norm’s being moral (Kumar 2015). But it’s unclear in this context whether one group of people count as moralizing a norm just because they treat a transgression as slightly less conventional than another group does. More importantly, even if treating a rule as slightly less conventional suffices for moralization, we lack solid evidence that this is driven by mere feelings, such as incidental disgust or anger, rather than tacit thoughts about increased psychological harm.

2.2.2 Amplifying with Incidental Emotions

A better route to sentimentalism appeals to research that manipulates emotions specifically and directly measures moral judgment. However, recall that rationalists predict that emotions can influence moral judgments by influencing reasoning. For example, emotions can draw one’s attention to morally relevant information that then facilitates inference. The best evidence for sentimentalism, then, would demonstrate that manipulating incidental feelings alone substantially influences moral cognition.

Dozens of such experiments purport to demonstrate just such an effect. And many philosophers and scientists champion them as vindicating the role of emotions in practically all of moral judgment (e.g., Haidt 2001; Prinz 2007; Chapman & Anderson 2013; Sinhababu 2017) or at least large swaths of it (e.g., Nado, Kelly, & Stich 2009; Horberg, Oveis, & Keltner 2011; Kelly 2011; Plakias 2013; Greene 2013). The evidence, however, again underwhelms. Rather than support sentimentalism, the studies suggest that incidental emotions hardly influence moral judgment and are instead typically a mere consequence. But let’s first consider some of the key evidence.

Most of the experiments again involve the manipulation of disgust immediately before participants provide their moral opinions about hypothetical scenarios described in brief vignettes. Some participants are randomly assigned to a control group that isn’t induced to feel heightened levels of disgust before evaluating the vignettes. Those in the manipulation group, however, have this emotion elevated in various ways. Thalia Wheatley and Jonathan Haidt (2005), for example, hypnotized some people to feel disgust upon reading a certain word. Other experiments induce disgust by having participants sit at a dirty desk with remnants of food and sticky substances; smell a foul odor; watch a gruesome film clip involving human feces; or recall a disgusting experience (Schnall et al. 2008). Still other researchers had some participants drink a bitter beverage, as opposed to water or something sweet (Eskine, Kacinik, & Prinz 2011), or listen for one minute to the sickening sound of a man vomiting (Seidel & Prinz 2013a). A related set of experiments manipulate incidental feelings of what seems to be disgust’s opposite: cleanliness. But the results are rather mixed: some studies suggest that cleanliness reduces the severity of moral judgments while others suggest the exact opposite (see Tobia 2015 for discussion).

In all of these studies, and several others, incidental disgust alone has tended to make moral judgments harsher. If such effects are real, widespread, and substantial, then they provide powerful evidence in favor of sentimentalism. However, the data are rather limited, for many reasons (cf. May 2014a).

(1) Generalizing from Subgroups: Many of the effects were found only among certain types of people or subgroups of the sample. Subjects in Wheatley and Haidt’s (2005) experiments were only people who were “highly hypnotizable.” Similarly, Schnall and her collaborators (2008) found the disgust effect only among those who were especially aware of their own bodily feelings (they scored high on a Private Body Consciousness scale).

(2) Scarce Effects: While participants respond to many vignettes, the disgust effect was detected only among a minority of them. In Wheatley and Haidt’s (2005) first experiment, for example, only two out of six vignettes produced a statistically significant result, although the “composite mean” of responses to all vignettes together was also significant (see Table 2.1). So the effects on moral judgment are scarce, which means it’s not quite right to say: “Across the board, ratings [of moral wrongness and disgustingness] were more severe when disgust was induced” (Kelly 2011: 25). It could be that disgust does affect our moral judgments about most of the individual vignettes, but the researchers didn’t find it in their sample. After all, failing to find an effect doesn’t mean there isn’t one—unless of course the study has the statistical power to accept the null hypothesis that there isn’t an effect. But experiments in the social sciences are often underpowered, which precludes this inference. At best, then, we have no evidence either way, in which case we still shouldn’t say there is an effect “across the board” when one wasn’t found.

Table 2.1: Example Data from a Disgust Experiment

                        Morality Ratings
Vignette                Disgust absent    Disgust present
Cousin incest               43.29             67.63**
Eating one’s dog            65.64             65.26
Bribery                     78.73             91.28*
Lawyer                      59.82             73.26
Shoplifting                 67.75             79.81
Library theft               69.40             71.24
Composite Mean              64.67             73.94*

100-point scale (0 = not at all morally wrong; 100 = extremely morally wrong), * = p < .05, ** = p < .01. Table adapted from Wheatley & Haidt (2005: 781).

(3) Small Effects: Even when detected, the effect is rather small (an issue also briefly noticed by others, such as Mallon & Nichols 2010: 317–8; Pizarro, Inbar, and Helion 2011). For example, in one of Wheatley and Haidt’s (2005) vignettes, which described an act of bribery, the average moral ratings differed between the control and disgust group by only 12.55 points on a 100-point scale (see Table 2.1). This mean difference between the groups is statistically significant, but that at best warrants the conclusion, roughly, that the difference was not likely due to chance. More precisely, the probability is rather low (less than 0.05) that we’d observe this difference, or a greater one, in a sample even assuming there’s no real difference in the population. At any rate, statistical significance alone doesn’t license the conclusion that the observed difference is substantial or significant in the ordinary sense. If anything, the observed difference between groups seems rather small and fails to shift the valence (or polarity) of the moral judgment. Disgusted or not, both groups tend to agree about whether the hypothetical action was right or wrong. At best, these studies only provide support for the idea that incidental emotions can color or intensify a moral judgment whose existence is due to some other factor.

Of course, sentimentalists might predict only a small shift in moral judgment from a small increase in emotion (Sinhababu 2017: 76). But part of the reason the disgust experiments are important to examine is that the emotional inductions are often quite powerful, as reflected in manipulation checks. Yet the (rare) effect on moral judgment is minuscule at best, even when participants are in the presence of a truly foul smell, sipping on a conspicuously bitter beverage, listening to someone vomiting, or watching a scene from a film in which a man rifles through a used toilet while visibly struggling not to lose his lunch. Just thinking about being a participant in such experiments is disgusting enough!

All of the key disgust experiments ask participants to rate moral transgressions. Wheatley and Haidt (2005), however, did run one vignette which interestingly tests whether incidental disgust alone can lead one to judge an action wrong that one would ordinarily consider perfectly acceptable. Wheatley and Haidt included a “Student Council” scenario, in which a student performs a mundane, morally neutral action:

    Dan is a student council representative at his school. This semester he is in charge of scheduling discussions about academic issues. He [tries to take/often picks] topics that appeal to both professors and students in order to stimulate discussion. (2005: 782)

Those who read this without their disgust-inducing word present (“take” vs. “pick”) tended to rate Dan’s action as “not at all morally wrong” (providing marks near this end of the scale). But ratings were significantly elevated for those who read the version with the trigger word. Moreover, the experimenters offered participants an opportunity to explain their judgments, and some wrote of Dan that “It just seems like he’s up to something” or that he seems like a “popularity-seeking snob” (783). Wheatley and Haidt conclude that disgusted subjects “condemned Dan” and made “severe judgments” (783).

If this is an accurate description of the results, then that would clearly be powerful and surprising, as many have noticed. Plakias, for example, deems it a “striking demonstration of the power of disgust [to affect moral judgment]” (2013: 264). The crucial Student Council case, however, is underwhelming. The mean rating of moral wrongness for those who did not receive the version of this story with their disgust-inducing word was 2.7 (recall: 0 = “not at all morally wrong” and 100 = “extremely morally wrong”). Disgusted participants, however, had a mean rating of 14, which still seems to count the action as not morally wrong (cf. Mallon and Nichols 2010: 317–8).

Some researchers are not concerned about whether their participants’ responses tend to fall on opposite sides of the midpoint, so long as the difference is statistically significant. For example, in their study of how moral judgments affect various intuitions in folk psychology, Dean Pettit and Joshua Knobe explicitly propose to disregard whether responses tend to straddle the midpoint (2009: 589–90). While this may be a fine approach to some research questions, it can over-inflate the import of certain results, and the disgust experiments are a clear example. It’s of course unclear whether we should take the scales used in such research to have a genuine midpoint at all, or to otherwise clearly deliver information about whether participants tended to judge the action as right or wrong, rather than being uncertain or ambivalent. But that would only further pose a problem for the sorts of claims many have made regarding these studies, especially the Student Council case. Still, it is useful to consider where on these various scales subjects were tending to fall, even if it is difficult to determine a valence for the mean response.

Consider how the data conflict with the usual descriptions of Wheatley and Haidt’s hypnotism studies. Prinz, for example, summarizes one of their experiments thus: “when the trigger word is used in [morally] neutral stories, subjects tend to condemn the protagonist”—“[they] find this student morally suspect” (2007: 27–8). (Note: There was only one neutral story.) Likewise, Richard Joyce writes that people responding to the Student Council story “were often inclined to follow up with a negative moral appraisal” (2006: 130). Kelly similarly writes: “Participants maintained their unfavorable judgment of Dan despite their complete lack of justification for it….” (2011: 25). And Plakias says, “subjects who had been hypnotized judged Dan’s actions morally wrong” (2013: 264), which is similar to Valerie Tiberius’s statement that “for the students who did feel disgust… there was a tendency to rank Dan’s actions as wrong” (2014: 78).

Contrary to all of the above descriptions of Wheatley and Haidt’s results, if anything it appears their subjects tended to regard the representative’s action as not morally wrong. The studies certainly don’t provide evidence that “disgust is sufficient to bring about an appraisal of moral wrongness even in the absence of a moral violation” (Plakias 2013: 264).

While the different morality ratings between the groups may not straddle the midpoint, one might contend that the effect is nonetheless substantial. Kelly, for example, claims Wheatley and Haidt’s disgust-manipulation “increased judgments of disgustingness and moral wrongness by factors of roughly 10 and 6, respectively” (2011: 25). While it’s true that the morality ratings of subjects increased by a factor of 6 (mean responses were 2.7 vs. 14 in the Student Council case) in the direction of the “extremely morally wrong” end of the scale (100), again this looks if anything to be on the side of counting Dan as not having done something wrong. The factor by which it increased along the “moral wrongness” scale would have to be much greater just to get it barely in the realm of being judged somewhat morally wrong (i.e., above 50). So, while disgust may have made participants’ judgments more “harsh” (as some more carefully put it), we do not have evidence that it tended to alter their valence—e.g., from permissible to wrong. Such data only warrant something like the conclusion that disgust slightly amplifies moral judgments in the direction of condemnation (as briefly noted by some commentators, e.g., Huebner, Dwyer, & Hauser 2009; Pizarro, Inbar, & Helion 2011; and Royzman 2014).
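The contrast drawn here between statistical significance and the size of an effect can be illustrated with a small sketch in Python (the numbers are hypothetical, not data from Wheatley and Haidt or any other study discussed here): with enough participants, a shift of just a few points on a 100-point scale passes the conventional significance threshold, while the standardized effect size (Cohen’s d) remains small and both group means stay on the same side of the scale.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical ratings: the "disgust" group averages only 4 points higher
    # on a 0-100 wrongness scale, with plenty of individual variation.
    control = rng.normal(loc=65, scale=20, size=800)
    disgust = rng.normal(loc=69, scale=20, size=800)

    result = stats.ttest_ind(disgust, control)

    # Cohen's d: the mean difference measured in pooled standard deviations.
    pooled_sd = np.sqrt((control.var(ddof=1) + disgust.var(ddof=1)) / 2)
    d = (disgust.mean() - control.mean()) / pooled_sd

    print(f"p = {result.pvalue:.4f}")
    print(f"Cohen's d = {d:.2f}")
    print(f"group means: {control.mean():.1f} vs. {disgust.mean():.1f}")

With a draw like this one, the p-value typically lands well below .05 even though d hovers around 0.2 and both group means sit on the same side of the scale’s midpoint, which is why the two notions are kept apart in the text.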


One might retort that in fact some of the disgusted participants rated Dan’s action as immoral. Joshua Greene, for example, says, “Many subjects who received matching posthypnotic suggestions indicated that his behavior was somewhat wrong, and two subjects gave it high wrongness ratings” (2008: 58). Such claims are apparently based on an earlier version of the manuscript that circulated prior to publication, which discusses some additional details about earlier versions of the data. (Thanks to Thalia Wheatley, via Walter Sinnott-Armstrong, for clarifying this issue and providing the earlier version of the paper.) But the “many” to which Greene refers was a minority of the group (about 20% by my calculations), and their ratings are only reported (in the manuscript) as being “above 15” which is still well on the “not morally wrong” side of the 100-point scale. Furthermore, the two subjects (out of sixty-three) who allegedly provided “high wrongness ratings” were at most in the area of judging the act somewhat morally wrong (“above 65”). More importantly, these data points are mere outliers—the kind that are often removed from analysis in experimental work. However, even if we included the data points from the older manuscript and the authors’ description of them, Greene’s gloss is fairly misleading and the outliers are irrelevant anyhow. What matters are the central tendencies of subjects’ ratings, which we can subject to statistical analysis. Yet the means from both groups are still quite low (14 in the published article; 15 in the prior manuscript), indicating either way a tendency to count the act as morally permissible.

Finally, to further support the alleged effect of disgust, many authors also point to the written explanations subjects provided regarding the Student Council story. While some disgusted participants did explain their morality ratings by reporting suspicions of Dan and so forth, Wheatley and Haidt don’t report the percentages. They tell us only that “some participants” engaged in this apparently post-hoc “search for external justification” (2005: 783). And these existential generalizations can be true even if only a small minority of participants provided such explanations (e.g., the two outliers). Indeed, while Wheatley and Haidt provide no explicit indication either way, it is likely that only a small minority provided these rationalizations, since only a small minority provided harsher moral judgments, and only two outliers provided a response that indicates a condemnation of Dan’s behavior. So we shouldn’t be led into thinking that the above reports from some of the participants are representative of the experimental group.

The problems with the disgust experiments have been buttressed by a recent meta-analysis of the effect of incidental disgust on moral cognition. Landy and Goodwin (2015) combed the literature for published studies and collected numerous unpublished ones, yielding fifty experiments and over 5,000 participants. Using Cohen’s standard, the estimated effect size based on all of these studies was officially “small” (d = 0.11). Moreover, the effect disappears when one considers only unpublished experiments, which suggests a bias against publishing the null results or replication failures. The mainstream and underground studies cleave on this point: “the published literature suggests a reliable, though small, effect, whereas the unpublished literature suggests no effect” (2015: 528).
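For a rough sense of how small that estimate is (a back-of-the-envelope calculation, not anything reported by Landy and Goodwin), one can ask what a standardized difference of d = 0.11 would amount to if ratings in the two conditions were normally distributed with equal spread:

    from math import sqrt
    from scipy.stats import norm

    d = 0.11  # the meta-analytic estimate discussed above

    # Share of area under two equal-variance normal curves whose means
    # differ by d standard deviations (the overlapping coefficient).
    overlap = 2 * norm.cdf(-abs(d) / 2)

    # Chance that a random rating from the disgust condition exceeds a
    # random rating from the control condition (common-language effect size).
    prob_superiority = norm.cdf(d / sqrt(2))

    print(f"distribution overlap: {overlap:.1%}")                  # roughly 96%
    print(f"probability of superiority: {prob_superiority:.1%}")   # roughly 53%

On those idealized assumptions, the two conditions overlap almost entirely, and a participant exposed to incidental disgust would give the harsher rating only slightly more often than chance.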
Given publication bias and possible confounds, Landy and Goodwin conclude that incidental disgust’s amplification effect on moral cognition is extremely small at best, perhaps nonexistent.

While disgust has received the most attention, some researchers have also manipulated other incidental emotions, often using audio piped through headphones. For example, in one experiment, researchers manipulated either of two positive emotions—mirth or elevation—by having participants evaluate moral dilemmas while listening to clips from either a stand-up comedy routine or an inspirational excerpt from Chicken Soup for the Soul (Strohminger et al. 2011). Interestingly, the experiments found that mirth slightly increased utilitarian responses to moral dilemmas while elevation had the opposite effect. One worry is that the audio clips involved different statements that were not properly controlled for in the various conditions. Other studies, however, avoid this problem by having participants listen to instrumental music. To manipulate incidental anger, for example, Seidel and Prinz (2013a) had participants listen to Japanese “noise music,” which is irritating to most people. In another experiment, they induced positive feelings of happiness with uplifting classical music (Seidel & Prinz 2013b). The key results in these studies were that incidental anger slightly amplified condemnation of autonomy violations and happiness slightly amplified judgments of praise and moral obligation (while anger reduced such judgments).

Do these few additional sound studies demonstrate the power of incidental emotions in moral judgment? One worry is that certain noises, particularly irritating ones, could significantly distract participants from fully processing morally relevant information in the vignettes. More importantly, though, all of the findings are similar to those of the disgust experiments. While the effects weren’t restricted to subgroups in the samples, and sometimes the effects were found for all or most vignettes tested (not just a minority), the differences between groups are again consistently small shifts on a fine-grained scale. Now, in these studies, the extraneous emotion does sometimes shift the valence of the moral judgment on average compared to controls (as emphasized by Prinz 2016: 54). But the shift is consistently from roughly the midpoint (the mean response in the control group) to slightly beyond (in the relevant manipulation group). So these few studies from one lab don’t provide sufficient evidence that incidental emotions have more than a negligible effect on moral judgment. Further research, replications, and meta-analyses are required before we can confidently conclude that the effects are stable and substantial enough to support sentimentalism.

2.3 Accounting for Slight Amplification

Ultimately, rigorous empirical studies are converging on the idea that incidental emotions are hardly sufficient for moral judgments and instead are often elicited by them. Now we may be inclined to ask: If an emotion, such as disgust, is primarily a consequence of moral judgment, then why does it sometimes slightly amplify such judgments? As Prinz has recently put it, “Why should emotions have any effect?” (2016: 54). This does call for explanation if we are to deny that incidental emotions play a significant role in forming moral beliefs. Further research is required, but there are plausible proposals available. To appreciate these, however, we first need to see why it’s plausible to suppose the emotions often follow moral judgments.

2.3.1 Emotions as Effects of Moral Judgments

No doubt emotions can sometimes influence cognition, as affect in general sometimes provides us with information, even outside of the moral domain (Dutton & Aron 1974; Schwarz & Clore 1983). However, feelings alone do not appear to be a substantial cause (or sustainer) of a sizeable class of distinctively moral beliefs. Instead, it seems that emotions are often effects of moral judgment. The phenomenon is familiar from various emotions. When empathizing with others in distress, for example, the compassion that typically results is modulated by pre-existing moral beliefs. There is evidence in particular that one’s empathic response toward people in need is dampened if one judges them to be blameworthy for their distress (Betancourt 1990; cf. Pizarro 2000: 366).

But let’s focus once more on disgust. Of course, some reactions of revulsion are not connected to moral beliefs at all. Eating an insect might disgust many of us but it needn’t have any relation to our moral beliefs, either as an effect or as a cause. When there is a connection between a moral belief and repugnance, however, the emotion is often elicited by the belief, not the other way around (cf. Huebner et al. 2009; Pizarro et al. 2011; May 2016a).

Behavioral data: Moralization

Consider first changes in reactions of disgust following a change in specific moral beliefs. A natural example concerns omnivores who become vegetarians and are eventually disgusted by meat. Not every vegetarian becomes repulsed by meat, perhaps for various reasons. Some may be vegetarian primarily for reasons of health, not ethics. Even for so-called “moral vegetarians,” the desire for meat may be too entrenched, given one’s personal preferences or length of time as a meat eater. Nonetheless, there is some empirical evidence that moral vegetarians are more disgusted by meat than health vegetarians (Rozin et al. 1997). And further research suggests that this result is not simply due to moral vegetarians already being more disgust-sensitive (Fessler et al. 2003). Thus, it seems the ethical beliefs of many moral vegetarians eventually elicit disgust as a consequence. The emotional response is related to the moral judgment by following it.

This general phenomenon, which Rozin has called “moralization,” is not restricted to vegetarianism either. Few people are vegetarians, let alone for moral reasons, but many more are now disgusted by cigarette smoke. Just in the past fifty years, attitudes toward smoking tobacco have radically changed. Interestingly, there is some evidence that people in the United States have become more disgusted by cigarettes and other tobacco products after forming the belief that it’s a morally questionable habit and industry to support (Rozin & Singh 1999). Such research confirms a common phenomenon in ordinary experience: emotions commonly follow one’s moral judgments.

Neuroscientific data: ERP

Extant experiments on disgust and moral judgment have not precisely measured temporal ordering. Qun Yang et al. (2013), however, attempted to do precisely that using a complicated, yet clever, application of a Go/No-Go task with a sample of people in China. In this paradigm, researchers instruct participants to respond (Go) to certain cues but not others (No-Go). Researchers then can use an electroencephalogram (EEG) to measure increased electrical activity on the surface of participants’ brains, specifically event-related potentials, which indicate preparation for motor activity. By identifying “lateralized readiness potentials” in particular, the experimenters could discern when participants were preparing to move either their left or right hands to respond to a specific cue. Importantly, detecting preparation to move in a No-Go condition indicates that participants were prepared to move (Go) before they realized they shouldn’t (No-Go). The paradigm allows researchers to determine which of two judgments people tend to make first by essentially asking them to eventually make both judgments but act on one only if it has a certain relationship to the other.
For example, we can gain evidence about whether people tend to process an individual’s race or gender first by asking them to press a button to indicate the gender of a person (Go) but not if the person is Caucasian (No-Go), and then swapping the Go/No-Go mapping. Preparation to move in the No-Go condition suggests that participants first processed the information mapped to Go (e.g., gender) but only afterward realized it met the No-Go condition (Caucasian) and so didn’t follow through with pushing the button.

To test the temporal order of judgments of morality vs. disgust, Yang and colleagues divided the experiment into two sessions. In the first session, participants were instructed to report a moral judgment about a scenario (Go) by pressing a button with their left or right hands but not (No-Go) when the action was physically disgusting. For example, subjects should report their moral judgment in response to “A person at a party is drinking water” but not for “…is drinking blood.” In the second session, the Go/No-Go instructions were reversed: participants were supposed to report their assessment about whether the act was disgusting but not when they think the action is immoral. For example, they should make an assessment of disgustingness (Go) for “A person at a party is drinking water” but not for “…is stealing money” (No-Go). The experimenters’ hypothesis was that participants would make their moral judgments prior to their assessments of disgust but not vice versa.

The resulting data confirmed this hypothesis by detecting brain activity that indicated participants were prepared to make an assessment of morality before disgustingness in the relevant setting (Session 1) but not vice versa (Session 2). In the first session, the researchers detected significant preparation to move in the No-Go trials. This indicates participants generally made a moral judgment first and then realized they’d need to inhibit reporting it when they judged the act to be disgusting. Of course, this should be predicted by any hypothesis, given the instructions for the first session. Participants were supposed to make a moral judgment unless the act was disgusting, so they may have chosen to make moral judgments first and then decided to respond or not based on whether they found the act disgusting. However, in the crucial second session, participants were instructed to assess disgustingness only if the act was immoral. Yet there was no evidence of preparation to move in No-Go trials, which suggests that participants weren’t already preparing to respond to whether the act was disgusting. It seems they knew no response was required because they continued to make their moral judgment first. Thus, the data from the two sessions indicate that it’s more natural for people to make moral judgments first and judgments of disgust second. There is then no evidence that disgust informs or causes the moral judgments, because the emotional response seems to occur too late.

These EEG data provide some rigorous evidence in favor of the idea that disgust often follows moral judgments, rather than serves as an input to them. And similar results were achieved by Yang and his collaborators using EEG again but with a different research paradigm (Yang et al. 2014). Two EEG studies from a single lab certainly do not settle the matter. However, combined with the other empirical studies (e.g., on moralization and compassion), there is growing evidence that disgust tends to follow negative moral judgments, not vice versa.

2.3.2 Misattribution of Arousal

Now, to understand how incidental emotions could sometimes slightly amplify moral judgments despite not generally being important causes, consider what has come to be known as “misattribution of arousal,” an established phenomenon in the social psychology literature.

In an early study, Schachter and Singer (1962) conducted an experiment that they led participants to believe was about how an injection of vitamin supplements (“Suproxin”) affects vision. The “vitamin” injections were actually either a placebo for one group or adrenalin (epinephrine) for another, the latter of which causes noticeable physiological responses, such as elevated heart rate and respiration. Some participants were informed of these real side effects of the injection, while others were misinformed of false side effects, and a third group remained ignorant. Subjects were then paired with a confederate of the experiment (“the stooge”) who pretended to react to the injection either with noticeable euphoria or anger. Eventually, participants provided self-reports of their own mood. Schachter and Singer found that those who didn’t have the right information to attribute their symptoms to the adrenalin shot reported feeling more euphoria or anger (like the stooge). The authors conclude that “given a state of physiological arousal for which an individual has no explanation, he will label this state in terms of the cognitions available to him” (395). In other words, when we have unexplained feelings, we often attribute them to some source, even if inaccurately.

In a similar vein, Dutton and Aron (1974) famously had men interviewed by an attractive woman either over a wobbly suspension bridge with a 230-foot drop or a more solid bridge only 10 feet above a small creek. The researchers measured how many in each group accepted the woman’s phone number and how many later called to find out more about the experiment. As expected, participants who interviewed on the scary bridge more often accepted the number and later called. Dutton and Aron suggest that on one interpretation of the results such participants misattributed (or “relabeled”) the source of their racy feelings to the attractive woman, not to fear of the bridge. Making a mistake in the source of their feelings affected their assessment of their attraction to the woman. A number of experiments have uncovered similar findings. In addition to two field experiments, Dutton and Aron found similar results in a lab experiment involving fear of electric shocks. More recently, a meta-analysis of the effect of arousal on attraction indicates that it’s robust, albeit small to moderate in size (Foster et al. 1998). The authors of the meta-analysis conclude that the data are consistent with the misattribution theory of the arousal-attraction link, even if other plausible theories are not ruled out either.

Now consider how the idea of misattributing sources of emotion can extend to disgust. Experimenters induce incidental feelings of this emotion—e.g., via a foul smell, watching a repulsive film clip, tasting a bitter beverage—that are unrelated to the moral scenarios under consideration. To what source, other than the actual source, might participants attribute their feelings? There are two main possibilities, but only the first affords disgust a direct causal impact on moral judgment.

The first appeal to misattribution can be found in the work of Simone Schnall and her collaborators (2008). They deliberately attempted to “induce low-level, background feelings of disgust” so that “any disgust elicited by the moral dilemmas” (1106) wouldn’t be correctly attributed to the real cause of the incidental feelings of the emotion, such as a dirty desk. On this account, the experimental manipulation elicits disgust and participants are expected to misattribute the source of the incidental feelings to the act or actor in the vignette. If it’s too obvious that the source of the disgust is really from, say, a dirty desk, then participants in the disgust condition will not amplify their negative moral judgments.
On this account, misattribution explains the effect but on the assumption that disgust does influence moral judgment. However, we can appeal to misattribution in a different way that doesn't rely on disgust directly amplifying moral judgment. Some people may misattribute their elevated levels of disgust to their moral judgment about the story, not the actor in the vignette. This misattribution is then combined with the tacit knowledge that we tend to feel disgust toward those acts we think are especially heinous, which leads to a tendency among some to report a harsher moral belief.
Compare anger. Incidental feelings of anger might make me rate an act as worse just because, usually, the angrier I am about a violation, the worse I think it is. Just as we automatically take smoke as evidence of a fire, we tacitly take an emotional reaction to be a natural consequence of a moral judgment.

So there are three possible sources for the incidental feelings of disgust: the real source (dirty desk, film clip, etc.), the moral violation in the vignette, and the moral judgment about the vignette. Schnall and company maintain that participants misattribute their elevated feelings of disgust to the vignette (rather than the real source). Again, this assumes that disgust typically causes relevant moral judgments, rather than the other way around. On an alternative theory, however, participants misattribute the feelings to the moral judgment. This assumes only that people are tacitly aware that disgust typically follows relevant moral judgments.

The second misattribution account explains why researchers sometimes find (slightly) harsher self-reported moral judgments among people feeling incidental disgust. And the small effect is explained without providing any reason to believe that disgust plays an important role in an interesting class of moral judgments. Compare: the arousal-attraction link does not provide strong evidence that fear plays an important role in judgments of attraction; rather such studies indicate that incidental and unexplained feelings strike us as calling for explanation, even if unconsciously. How we unconsciously reconcile such feelings is left open. The misattribution account of the disgust studies shows that we can explain this reconciliation without assuming that disgust is primarily a cause of moral judgment. In particular, the affective element of this emotion needn't be a normal part of the mechanism for producing moral judgment, just as fear isn't a normal part of the cause of judgments of attractiveness. Rather, people sometimes tacitly take the disgust as evidence that they think the act is worse.

Misattribution accounts also explain why disgust only sometimes amplifies moral judgment. After all, the account predicts the effect will show up only among some people who tacitly make the error of misattribution. This might seem problematic since, in some experiments, disgust appears to affect moral judgment only among those who are more skilled at detecting their internal physical states (e.g., Schnall et al. 2008). But these participants can only be expected to be adept at noticing the arousal, not its true source; misattribution is still plausible at least for some. Indeed, it makes perfect sense that only those who notice the unexplained emotion will (unconsciously) rationalize it. Moreover, in keeping with this misattribution account, there is some evidence that the slight effect of incidental disgust on moral judgment disappears in participants who can more finely distinguish their own emotions (Cameron, Payne, & Doris 2013).

We thus have a clear way to explain amplification that's consistent with denying that incidental disgust plays an important role in moral judgment. In fact, as the arousal-attraction studies indicate, misattribution accounts can generalize to other emotions. Of course, there may be multiple explanations for amplification, which aren't mutually exclusive.
Either way, there are explanations for how incidental emotions might slightly influence moral judgment indirectly, without supposing that feelings ordinarily play an important direct causal role.

2.4 Psychopathology

We have seen that various popular experiments fail to show that mere feelings play an integral role in mature moral judgment. Other avenues of support come primarily from psychopathology.
By studying when moral judgment breaks down, we can perhaps uncover whether an emotional deficit best explains the problem.

2.4.1 Psychopathy

Not all psychopaths are alike, but they are typically characterized as callous, lacking in remorse and guilt, manipulative, having a superficial charm, impulsive, irresponsible, and possessing a grandiose sense of self-worth (Hare 1993). Most studied are incarcerated men, and many have committed violent crimes or engaged in reckless actions that leave innocent people injured, dead, or destitute. Psychopathy is similar but not exactly equivalent to antisocial personality disorder in the Diagnostic and Statistical Manual of Mental Disorders. Researchers instead typically diagnose psychopaths using Robert Hare's Psychopathy Checklist (Revised), which has a range of criteria pertaining to the individual's psychological traits and history of past infractions. While the exact causes of psychopathy are unknown, it's clearly a developmental disorder due to some combination of factors present early in life. Some of these factors are genetic predispositions, but they can also include traumatic experiences or environmental influences on gene expression, which may involve profound neglect, abuse, and even lead exposure (Glenn & Raine 2014: ch. 6).

Psychopaths seem to be prime examples of adults who are morally incompetent due to a severely impaired capacity for moral judgment. If this is correct and best explained by emotional deficits, then psychopathy seems to provide evidence in favor of sentimentalism (Nichols 2004; Prinz 2007). However, we'll see that what's at issue here isn't incidental feelings but rather broad emotional capacities that are intimately bound up with cognition, attention, learning, and reasoning.

Moral Competence

Psychopaths don't just behave badly; some research suggests they don't properly grasp moral concepts. Some theorists point to their poor use of moral terms, as when some psychopaths don't appear to properly understand what it means to regret hurting someone (Kennett & Fine 2008). More striking is the apparent failure to draw the moral/conventional distinction, which some theorists believe is necessary for a proper grasp of morality (see Chapter 2, §2.2.1). In particular, some research on adult incarcerated psychopaths suggests that they treat conventions like moral rules by categorizing them as just as serious and independent of authority (Blair 1995). One hypothesis is that such inmates incorrectly categorize conventional norms as moral in a futile attempt to show that they know it's wrong to violate most norms.

However, other evidence suggests that psychopaths do not have such a substantial deficit in moral judgment. One study attempted to remove the motivation to treat all transgressions as serious by telling inmates that the community regards only half of the transgressions as moral violations. Yet the researchers found that a higher score on the Psychopathy Checklist did not correlate with less accurate categorization of the norms as moral vs. conventional (Aharoni et al. 2012). Still, the researchers in this later study did find that two sub-factors of the Psychopathy Checklist (affective deficits and anti-social traits) correlate with a diminished ability to accurately categorize transgressions.

Another line of research focuses on patterns of moral judgments about sacrificial moral dilemmas in which one innocent person can be harmed for the greater good. Most people believe it's immoral to save five people by killing one person in a "personal" way, such as pushing him to his death.
Yet one study found that incarcerated psychopaths were more "utilitarian" than
other inmates, as they were more inclined to recommend sacrificing one to save several other people in such personal dilemmas (Koenigs et al. 2012). However, abnormal responses to sacrificial dilemmas might not indicate a deficit in moral judgment as opposed to a different set of moral values. The resulting moral judgments may be somewhat abnormal, but utilitarians like Greene (2014: 715) would have us believe that psychopaths happen to be morally correct. In any event, other studies provide conflicting data regarding “utilitarian” values. One found that incarcerated and non-incarcerated psychopaths responded like most other people, categorizing personal harm as morally problematic even if it could bring about a greater good (Cima et al. 2010). Moreover, Andrea Glenn and her colleagues observed no difference in nonincarcerated psychopaths’ moral judgments about personal vs. impersonal dilemmas (Glenn et al. 2009). Thus, while there is some evidence of impaired moral cognition in psychopaths, it’s decidedly mixed. A recent, even if limited, meta-analysis (Marshall et al. forthcoming) examined dozens of studies and found at best a small relationship between psychopathy and impaired moral judgment (assuming that abnormal “utilitarian” responses to sacrificial moral dilemmas are evidence of a deficit in moral judgment). The researchers take their meta-analysis as “evidence against the view that psychopathic individuals possess a pronounced and overarching moral deficit,” concluding instead that “psychopathic individuals may exhibit subtle differences in moral decision-making and reasoning proclivities” (8). In sum, a diverse array of evidence suggests a rather attenuated conclusion about moral cognition in psychopathy. There is most likely some deficit in the psychopath’s grasp and deployment of moral concepts, but the extent of it is unclear. And much of a criminal psychopath’s behavior can be explained by abnormal motivation, such as a lack of concern for others, even if knowledge of right and wrong is roughly intact (Cima et al. 2010). As Glenn and her collaborators put it: “Emotional processes that are impaired in psychopathy may have their most critical role in motivating morally relevant behavior once a judgment has been made” (2009: 910). Such a conclusion may require admitting the possibility of making a moral judgment while lacking motivation to act in accordance with it—a form of “motivational externalism.” But rationalists can happily accept that the connection between moral judgment and motivation breaks down when one isn’t being fully rational (Smith 1994: ch. 3). Rational Deficits So what in psychopaths explains their (slightly) impaired capacity for moral cognition? The most popular account points primarily to emotional deficits, based on various studies of the behavioral responses and brain activity of either psychopaths or people with psychopathic tendencies. For example, key brain areas of dysfunction are the amygdala and ventromedial prefrontal cortex (VMPFC), both of which appear to be implicated in processing emotion, among many other things, including implicit learning and intuitive decision-making (Blair 2007). Moreover, as already noted, the sub-factors in psychopathy that have been correlated with diminished ability to draw the moral/conventional distinction involve emotional deficits (e.g., lack of guilt, empathy, and remorse) and anti-social tendencies (Aharoni et al. 2012). 
Further evidence comes from studies which indicate that, compared to normal individuals, when psychopaths evaluate moral dilemmas they exhibit decreased activation in the amygdala (Glenn et al. 2009). The idea that psychopathy primarily involves an emotional deficit seems bolstered when compared to autism (Nichols 2004). On many accounts, autism typically involves difficulty understanding the thoughts and concerns of others. People on the spectrum can be in some sense
“anti-social” but they aren’t particularly aggressive or immoral, and they don’t have such difficulty feeling guilt, remorse, or compassion for others. Moreover, autism doesn’t seem to yield a lack of moral concepts, at least because high-functioning children with autism seem to draw the moral/conventional distinction (Blair 1996). However, autism, especially when severe, can impair moral judgment by limiting the understanding of others’ projects, concerns, and emotional attachments (Kennett 2002). There is evidence, for example, that adults with highfunctioning autism don’t tend to treat accidental harms as morally permissible (Moran et al. 2011), although neurotypical adults do (see Chapter 3, §3.3.1). Importantly, though, there is ample evidence that psychopaths have profound deficits that are arguably in their inferential or reasoning capacities. Notoriously, they are disorganized, are easily distracted, maintain unjustified confidence in their skills and importance, and struggle to learn from negative reinforcement (see, e.g., Hare 1993; Blair 2007; Glenn & Raine 2014). Moreover, a meta-analysis of twenty studies suggests that individuals with psychopathy (and related anti-social personality disorders) have difficulty detecting sad and fearful facial expressions in others (Marsh & Blair 2008). Such deficits can certainly impair one’s reasoning about both morality and prudence at least by preventing one from properly assessing the merits of various choices and resolving conflicts among them (cf. Kennett 2002; 2006; Maibom 2005). Consider an example. One psychopath tells the story of breaking into a house when an old man unexpectedly appears, screaming about the burglary. Annoyed that the resident wouldn’t “shut up,” this psychopath apparently beat the man into submission, then lay down to rest and was later awoken by the police (Hare 1993: 91). Such aggressive and reckless actions, common in psychopathy, are easily explained by impaired cognitive and inferential capacities, such as overconfidence in one’s abilities, inattention to relevant evidence, and the failure to learn from past punishment. Affective processes certainly facilitate various forms of learning and inference, particularly via the VMPFC (Damasio 1994; Woodward 2016; Seligman et al. 2016), and such processes are no doubt compromised in psychopathy. But that is just evidence that affective deficits are disrupting domain-general inferential and learning capacities, not necessarily moral (or normative) cognition specifically. Abnormal Development It’s important that psychopathy is a developmental disorder. There is some evidence that people who acquire similar brain abnormalities in adulthood—VMPFC damage—retain at least some forms of moral judgment (more on this in the next section). The difference with psychopaths is that their brain abnormalities are present at birth or during childhood (or both), which prevents or hinders full development of social, moral, and prudential capacities in the first place. Some worry then that psychopathy at best indicates that the affective elements of emotions are merely developmentally necessary for acquiring full competence with moral concepts (Kennett 2006; Prinz 2007: 38). However, even this concedes too much ground to sentimentalism. As we’ve seen, there’s reason to believe that psychopaths have only some deficits in moral judgment and they experience domain-general problems with learning, attention, and inference. 
This rationalist-friendly view is actually bolstered by the fact that these problems in psychopathy arise early in development and continue throughout a psychopath's life. The rationalist needn't explain the psychopath's immoral and imprudent behavior by positing a lack of conscious understanding of a moral argument written down with explicit premises and conclusion. Rather, the rationalist can point to a lifetime of compromised attention, learning, and inference.
Emotions are certainly part of the explanation. Like all development, social and moral learning begins early in one's life before one fully acquires many concepts and the abilities to speak, read, write, and engage in complex reasoning. Yet moral development must go on. Emotions—with their characteristic package of cognitive, motivational, and affective elements—can be a reliable resource for sparking and guiding one's thoughts, actions, and learning. We are creatures with limited attentional resources in environments with more information than we can take in. The predicament is even more dire when we're young and developing key social concepts, associations, and habits. Our default is not generally to pay attention to everything, for that is impossible. Instead, we rely on our attention being directed in the right places based on mechanisms that are quick, automatic, often emotionally driven, and sometimes fitness-enhancing (cf. Pizarro 2000; Huebner et al. 2009). In ordinary people emotions combine with a suite of cognitive and motivational states and processes to guide one's attention and aid one's implicit learning and inferential capacities. Like a hearty breakfast, emotions facilitate the healthy development of a human being's normative concepts and knowledge.

Due to certain genes, neglect, abuse, and so on, psychopaths' relevant emotional responses, such as compassion and guilt, are missing or significantly attenuated. But these are connected to broader cognitive and inferential deficits—such as delusions of grandeur, inattention, and poor recognition of emotions in others. It's no surprise that what typically results is a callous, manipulative, and aggressive person who behaves badly and lacks a healthy grasp of the normative domain (both morality and prudence).

In Sum

Ultimately, there are several issues to highlight that work in concert to neutralize the threat to rationalism from psychopathy. First, extant evidence does suggest that psychopaths lack normal moral competence, but the deficit in moral cognition is often overstated in comparison to motivational deficiencies, such as impulsivity and a lack of concern for others. Feelings may directly affect motivation and behavior, but that isn't in conflict with the rationalist's claim about moral judgment and needn't conflict with a rationalist account of all aspects of our moral psychology. Second, while psychopathy plausibly involves some emotional dysfunction, especially in guilt and compassion, the condition involves at least an equal impairment in learning and inference. A lifetime of delusions of grandeur, impulsivity, poor attention span, difficulty processing others' emotions, diminished sensitivity to punishment, and so on can alone explain a slightly diminished competence with moral concepts and anti-social behavior.

2.4.2 Lesion Studies

When patients suffer brain damage, we can correlate impaired moral responses with the dysfunctional brain areas and their usual psychological functions. Ideally, this can help determine whether feelings play an important role in normal moral judgment. Two kinds of brain damage have been studied in relation to emotions and moral judgment: lesions of the ventromedial prefrontal cortex and neurodegeneration in the frontal or temporal lobes.

Patients with lesions of the VMPFC typically develop what Antonio Damasio (1994) has somewhat unfortunately called "acquired sociopathy." The famous Phineas Gage is just one popular example: after a rod accidentally passed through his skull, a once upstanding Gage reportedly became crass, had difficulty keeping jobs, and so forth, despite apparently
maintaining his level of general intelligence. There is some controversy about the various details of Gage's story, but now such patients are better documented. Acquired sociopathy has some similarities with psychopathy, at least in that both involve abnormal function in the VMPFC (Blair 2007). But Damasio's label can mislead, as the two conditions are rather different. For one, since psychopathy is a developmental disorder, the brain dysfunction begins early in life and thus has much more serious effects. In contrast, adults who suffer damage to the VMPFC later in life have typically developed a fuller grasp of moral concepts, and there is evidence that they retain at least some forms of moral judgment (Roskies 2003). For example, patients are generally able to reason about hypothetical moral (and prudential) dilemmas and render a verdict about what one should do. The problem is more with personal decision-making, as patients struggle to settle questions about whether to lie to their spouse or about which apples are best to purchase at the grocery store. Those with acquired sociopathy can know or cognize the various options for a given decision, but they seem to lack the proper guidance from their gut feelings about what they themselves ought to do all things considered in this particular situation—"in situ" as Jeanette Kennett and Cordelia Fine (2008) put it. Based primarily on studying the physiological responses of such patients while they make decisions, a key impairment seems to be in what Damasio (1994) calls "somatic markers," or bodily feedback that guides such decision-making. Diminished or missing somatic markers can leave patients prone to make imprudent and morally questionable choices, but unlike psychopaths they're not characteristically manipulative, violent, or grandiose (we'll encounter "acquired sociopathy" again in Chapter 8, §8.3.1).

The brain's VMPFC does seem crucial for intuitive and personal decision-making, which is at least typically guided by affective feedback. But such deficits affect learning and decision-making both within and outside the moral domain. Moreover, is there evidence of impaired moral judgment generally? Some studies suggest that VMPFC damage does yield abnormal processing of scenarios involving personal harm for the greater good. Such patients seem to be more inclined to provide the abnormal "utilitarian" judgment that one should sacrifice an innocent individual for the sake of saving a greater number of other innocents, even if it involves up-close and personal harm (e.g., Koenigs et al. 2007; Ciaramelli et al. 2007).

A similar phenomenon arises in people who have related brain abnormalities. Patients with frontotemporal dementia (FTD) can have a wide variety of symptoms, including overeating and poor hygiene, since their neurodegeneration can occur in two of the four lobes of the cerebral cortex. But some common symptoms include blunted emotions and antisocial behavior, which are typical among those with lesions of the VMPFC. Importantly for our purposes, when presented with moral dilemmas requiring personal harm, FTD patients also provide more "utilitarian" moral judgments than controls (Mendez et al. 2005).

Even if we take these lesion studies at face value, they don't show that feelings are essential for moral judgment, for at least two reasons.
First, as we’ve already seen, it’s a stretch to assume that providing more “utilitarian” responses to sacrificial moral dilemmas yields a profound moral deficit, lest we’re prepared to attribute moral incompetence to utilitarians. Second, while the VMPFC and the frontal and temporal lobes may be associated with emotional processing broadly speaking, they are also associated with many other non-affective processes. The frontal and temporal lobes are two out of the four lobes in the cerebral cortex. The VMPFC is a much smaller region located in the frontal lobe, but it too isn’t specific to moral emotions, such as guilt and compassion, and certainly not their affective aspects in particular. The area does indeed appear to, among other things, receive affective information from other pg. 44 of 206

Regard for Reason | J. May

structures, such as the amygdala, often having to do with goals and reinforcement learning (Blair 2007). However, as James Woodward puts it, the VMPFC is clearly “involved in calculation, computation, and learning, and these are activities that are often thought of as ‘cognitive’” (2016: 97). So, even if being more utilitarian demonstrates impaired moral judgment, it’s not clear that this is best explained specifically by a deficit in moral emotions rather than integrating information acquired through past experience with present circumstances in order to make a personal decision. Now, much like the experiments manipulating incidental emotions, one might argue that the lesion studies provide an alternative way of showing that damage to apparently “emotional areas” in the brain at least leads to different moral judgments. Even if patients retain the general capacity for moral judgment, their emotional deficiencies seem to lead to some change in moral cognition. The problem of course is that damage to relatively large areas of the brain implicates dysfunction of a wide range of mental capacities, not just emotional responses—let alone incidental ones. Associating areas of the brain with emotional processing is far from isolating the feelings from the thoughts associated with them. Frontotemporal dementia involves a wide range of neurodegeneration. While blunted emotion is a common symptom of FTD, it may simply hinder the patient’s ability to pay close enough attention to morally relevant information (Huebner et al. 2009), or hinder other reasoning capacities. In sum, much like psychopathy, lesion studies support sentimentalism only if they establish two theses: (a) that the relevant patients have impaired moral judgment and (b) that this impairment is best explained by a deficit in moral feelings. Both of these crucial claims are sorely lacking in empirical support. The lesion studies actually support the rationalist idea that moral cognition can proceed even with some blunted emotions. The patients certainly seem capable of making moral judgments about what other people should do in hypothetical situations, even if their responses tend to be a bit more “utilitarian.” Gut feelings help to guide certain personal decisions, but they aren’t necessary for the general capacity to judge actions as right or wrong. As with psychopathy, affective deficits alone don’t seem to reveal a substantial impairment in moral cognition specifically.

2.5 Conclusion

Based on the science, many believe that mature moral judgment crucially depends on the affective aspects of moral emotions. The evidence for this sentimentalist conclusion has been diverse. As we've seen, however, much of it is rather weak and has generally been overblown. First, while the moral/conventional distinction may partly characterize the essence of moral judgment, we lack compelling evidence that moral norms transcend convention by being backed by affect. Second, priming people with incidental feelings doesn't make them moralize actions. Third, moral judgment can be somewhat impaired by damage to areas of the brain that are associated with emotional processing, but these areas also facilitate learning and inference both within and outside the moral domain.

Feelings are undoubtedly frequent characters in the moral life. But this is because we care deeply about morality, so feelings tend to be the normal consequences, not causes, of praise or condemnation. I feel angry toward Nicki because I believe she wronged me; the feeling goes away, or at least begins to subside, once I realize it was all a misunderstanding. Now it may well be impossible for a creature to make moral judgments in the absence of certain motivations and concerns. Perhaps, for example, moral agents must have a concern to act for good reasons or
more generally to act in ways one can justify to others (see Chapters 6-8). Imagine, for example, a robot that can assess an action's consequences, an agent's intentions, and so forth but that is motivationally inert. It perceives but doesn't act and has no concerns of its own. Such a robot is arguably not a moral agent, but adding certain feelings isn't the missing ingredient. We may want to add anger, disgust, and joy, but only because these indicate that the machine has concerns and attachments. It's the cognitive and perhaps motivational elements of emotion that are essential, not feelings.

It's important to emphasize that the sentimentalist orthodoxy is threatened regardless of whether one construes emotions as partly cognitive or not, for there remains a sentimentalist dilemma.

• Horn 1: If the relevant emotions are entirely non-cognitive feelings, then rationalists can and should deny that there is compelling evidence that such emotions substantially influence moral judgment. Mere feelings of even a powerful emotion, such as disgust, at best only sometimes slightly amplify one's existing moral judgments and perhaps only through indirect means. Emotions can substantially influence moral judgment but only if they carry morally relevant information and facilitate inference.

• Horn 2: If the relevant emotions are partly cognitive, then rationalists can and should point to such cognitive elements as doing the work in moral judgment, as opposed to the motivational or phenomenological aspects of emotions. Even if moral judgment is largely driven by automatic intuitions, these should not be mistaken for feelings.

The next chapter explores this second horn in more detail. We'll examine a growing body of research that fits with the rationalist idea that both conscious and unconscious reasoning heavily influence moral cognition. This reasoning is at least systematically sensitive to an action's outcomes and the agent's role in generating them. While these considerations are often processed implicitly, conscious reasoning isn't predominantly post-hoc rationalization.
Ch. 3: Reasoning beyond Consequences

Word count: 11,098

3.1 Introduction

Consider a famous riddle. A man looks at a picture of a man and states, "Brothers and sisters I have none, but this man's father is my father's son." Who is the man in the picture in relation to the person looking at the picture? Is it the viewer himself? That's not right, because his own father can't be his own father's son. The man depicted must be the viewer's son, for his son's father (himself) is his own father's son. Reasoning through this riddle involves slowly and consciously thinking about the possible family relationships and checking to see whether they fit the given constraints of the puzzle.

What we just went through exhibits the archetype of reasoning: slow, drawn-out, explicit, and conscious inference in a step-wise fashion—perhaps even with chin perched on fist—in which you're aware of deliberating and aware of at least some of the steps in the process. Many researchers in moral psychology focus only on this archetype (see, e.g., Haidt 2001: 818; Mercier & Sperber 2011: 57; Paxton & Greene 2010; Greene 2013; Doris 2015: 50, 136; Prinz 2016: 49). But conscious deliberation is only one way in which reasoning or inference can occur. Often our reasoning is rapid and automatic with little conscious access to the inferential steps; we're aware at best of the output. Consider, for example, passively watching a silent film: there is no explicit dialogue telling you what's going through the minds of the characters or what complex events and social interactions are occurring. Yet you nevertheless find yourself with various beliefs about the significance of the events on the screen. These beliefs don't pop into existence out of thin air. You infer them from tacit reasoning about the actions of the characters as well as their body language and facial expressions. This implicit process can be inference even while the reasoner isn't consciously aware of the various steps or even that reasoning is afoot. The simple point, albeit an important one, is that reasoning can occur quickly, implicitly, and unconsciously (cf. Arpaly 2003; Horgan & Timmons 2007; Harman, Mason, & Sinnott-Armstrong 2010; Mallon & Nichols 2010).

Moral cognition is no different from social cognition in this respect. Given how essential moral judgment is to our ordinary lives, it should be unsurprising that it too can become so quick and automated that it often goes unnoticed. Our mental lives frequently involve such unconscious, unreflective, or implicit processes that nonetheless amount to reasoning, even if emotions are involved in such processes. Just as Hume famously urged that calm passions can be mistaken for reason, tacit reasoning can be mistaken for (mere) passion.
Of course, quick and unreflective information processing doesn’t always count as reasoning or inference. However, we’ll see in this chapter that experimental evidence suggests our moral judgments are often governed by rule-based inference. Morality is often thought of as codified in terms of rules, particularly norms that transcend mere convention (as we saw in the previous chapter). Much like other forms of cognition (Reber 1989; Seligman et al. 2016), moral judgment involves in part the application of abstract concepts or rules that identify relevant considerations. These rules can then be applied automatically, even in the absence of conscious access to their content. Often moral judgments that seem driven by mere feelings are simply quick and automatic with complex computation occurring unconsciously. We’ll see in particular that moral cognition is rife with elements that place moral weight on more than an action’s consequences, such as the actor’s intention or the type of action performed. Consider, for example, that many people believe it’s immoral for a physician to kill a suffering patient with a terminal illness, even if the patient is competent and requests it. Yet many people also believe that it’s morally acceptable for a physician to comply with competent patients’ requests to refuse treatment and let them die. Moral thinking contains many such nonconsequentialist elements that we might label “deontological” specifically given their relationship to general rules (e.g., killing is worse than letting die). Understanding the rules that influence moral cognition is the final step toward undermining sentimentalism on scientific grounds. Sentimentalists have argued that the affective aspects of moral emotions are integral to ordinary moral cognition while moral principles are either ultimately driven by such feelings or are mere rationalizations we concoct after the fact to make sense of our emotional reactions to moral scenarios. However, we’ll consider various ways in which reasoning does the real work in generating moral judgment. Of course, sentimentalists would claim that moral inference is grounded in emotions or dispositions to feel them (Prinz 2007: 24-6) or that moral inference merely assesses emotional responses, which remain constitutive of moral judgment (Kauppinen 2013; D’Arms & Jacobson 2014). But either brand of sentimentalism is compelling only if we have evidence that emotions are essential to moral judgment independently of such reasoning processes. We saw in Chapter 2 that this is unconvincing, as there is no sound empirical evidence that mere feelings play a crucial role in moral judgment. Thus, the previous chapter and the present one combine to provide a forceful empirical argument for the rationalist conception of moral cognition.

3.2 Consequences

Much of the recent experimental work on moral judgment asks ordinary people to evaluate behavior in hypothetical scenarios, often ones that involve runaway trolleys, previously discussed by philosophers, starting with Philippa Foot (1967). Ethicists have been developing theories about how to explain patterns of "intuitions" or automatic, pre-theoretical judgments about specific cases. But in the mid-1990s an experimental strategy developed in which researchers began to systematically gather data on intuitive moral judgments about such scenarios, testing theories about which principles, if any, shape moral cognition, even if participants cannot articulate the relevant principles. This methodology allows researchers to explore the brain's moral software to uncover the computations underlying moral reasoning, even if it occurs unconsciously.
Most of the hypothetical scenarios feature harmful behavior, such as battery or homicide. Harm and care are certainly core values in commonsense morality, but other research suggests there is much else besides, including the values of fairness, liberty, loyalty, authority, and sanctity (see Chapter 5). There is some evidence that factors central to our judgments of wrongful harm, such as intention, aren't as important in other domains. For example, there is some evidence that people treat incest as morally problematic even if it's accidental, whereas they don't tend to condemn accidental harm, unless it arises from negligence (Young & Saxe 2011). So the distinctions that follow may well vary across moral or other normative domains. Nevertheless, much of the research on moral judgment involves studying participants' reactions to hypothetical scenarios involving harm and death, as in the famous trolley thought experiments.

In one study involving thousands of participants (Hauser et al. 2007), the researchers focused on multiple trolley cases, but consider first a scenario originally introduced by Judith Jarvis Thomson (1985), commonly known as Switch (or Side Track). A protagonist is faced with a choice: he can allow a runaway train to run over and kill five innocent people, or he can throw a switch to divert the train onto a sidetrack and away from the five but toward one innocent person who will die as a result (for illustration, see Figure 3.1).

Figure 3.1: The Switch Case

In dozens of studies with samples from a variety of cultures and ages, people overwhelmingly think that it's morally acceptable to sacrifice the one to save the greater number (e.g., Hauser et al. 2007; Pellizzoni et al. 2010; Mikhail 2011; Gold et al. 2014). Not only do people's patterns of judgments reflect a commitment to the relevance of consequences, but participants in studies will also explicitly endorse utilitarian principles stated in general form (Horne et al. 2015), although there are naturally some differences among individuals (Lombrozo 2009). Ample data therefore suggest that, even when the outcome is the death of innocent human beings, we treat the number of bad outcomes as generally relevant to the permissibility of an act. We might thus posit a moral principle that shapes moral cognition: Consequentialist Principle: All else being equal, an action is morally worse if it leads to more harm than other available alternatives. The "all else being equal" (or "ceteris paribus") clause allows us to make a generalization that may have exceptions—a common tool in psychological and moral theorizing (e.g., Fodor 1987; Wedgwood 2011).

Outcomes can figure in rather complex calculations. Nichols and Mallon (2006: 538), for example, found that their participants tended to consider it morally acceptable to push an innocent person to his death if it would save billions of people in the area from a biochemical catastrophe. In this situation, the protagonist effectively chooses between either (a) not getting herself involved by killing an innocent man but allowing a catastrophe to kill herself, the man,
and billions of others, or (b) killing an innocent person and saving herself as well as billions of others. Since the one innocent person sacrificed would die anyway, we let the numbers decide. Intuitions about such cases may be sensitive to the so-called Pareto optimality principle, which states roughly that it’s acceptable to take an option that benefits more people if it doesn’t otherwise make anyone worse off. A number of other experiments exploring this phenomenon further confirm that Pareto considerations systematically impact intuitive moral judgments (Moore et al. 2008; Huebner et al. 2011). Some of this reasoning may occur unconsciously, but research suggests that calculating consequences tends to engage conscious reflection. Brain areas associated with controlled, reflective processing—particularly the dorsolateral prefrontal cortex (DLPFC)—are more active when evaluating scenarios like Switch (Greene 2008). Some evidence suggests that being encouraged to reflect makes people weigh outcomes more heavily in their moral evaluations (see e.g., Paxton et al. 2012), at least when evaluating moral dilemmas in which bringing about greater consequences wouldn’t make anyone worse off. Finally, in moral evaluation people prioritize better outcomes more when they have damage to areas of the brain associated with automatic and intuitive reasoning (e.g., Koenigs et al. 2007; Ciaramelli et al. 2007). However, conscious reasoning may have less to do with calculating consequences than it does with providing the more counter-intuitive resolution to a moral dilemma, regardless of whether that involves minimizing harm (Kahane et al. 2012). In any event, ordinary moral judgments are clearly sensitive to the quantity of harmful consequences that follow from an agent’s options. We even factor in complex considerations about which options wouldn’t make anyone worse off. But, as we’ll see, weighing outcomes is only one element of moral reasoning, and much of it is more clearly automatic and unconscious.
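Before moving on, the structure of such Pareto comparisons can be made concrete with a minimal sketch in Python. The labels and outcome values below are merely illustrative stand-ins of my own, not materials from the studies; the point is only to show what it takes for one option to leave no one worse off while making someone better off.

```python
def pareto_improvement(option, baseline):
    """True if `option` leaves no one worse off than `baseline`
    (1 = survives, 0 = dies) and makes at least one person better off."""
    no_one_worse = all(option[p] >= baseline[p] for p in baseline)
    someone_better = any(option[p] > baseline[p] for p in baseline)
    return no_one_worse and someone_better

# A catastrophe case in the style of Nichols and Mallon (2006): the one
# would die either way, so sacrificing him spares everyone else.
do_nothing = {"the one": 0, "agent": 0, "bystanders": 0}
sacrifice  = {"the one": 0, "agent": 1, "bystanders": 1}
print(pareto_improvement(sacrifice, do_nothing))   # True

# Standard Footbridge: the one would otherwise survive, so pushing him
# makes someone worse off and is not a Pareto improvement.
spare = {"the one": 1, "agent": 1, "the five": 0}
push  = {"the one": 0, "agent": 1, "the five": 1}
print(pareto_improvement(push, spare))             # False
```

On this way of carving things up, the catastrophe case, but not Footbridge, satisfies the Pareto condition, which is just what the intuitive verdicts reported above appear to track.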

3.3 Beyond Consequences

In addition to outcomes, moral intuitions are tacitly shaped by how involved an individual was in bringing them about. For example, most people find it intuitively wrong to kill an innocent man and harvest his organs in order to save five others from certain death (Horne et al. 2015). At first glance, it may seem that we make such moral judgments based on an emotional aversion to prototypically violent acts (Greene 2013). However, as we'll see, systematic studies reveal that such moral verdicts are sensitive to subtle features of the acts being evaluated—particularly intention, commission, and personal force. While quick and intuitive, our responses are evidently shaped by general principles about the moral relevance of these features. We'll see that one promising way to unify these various factors is what I'll call "agential involvement" (appropriating a term from Wedgwood 2011).

3.3.1 Intentional vs. Accidental Outcomes

One way for a moral principle to be non-consequentialist is for it to treat the motivation behind an action, not merely its consequences, as intrinsically relevant. There's a great deal of evidence suggesting that ordinary moral intuitions are strongly influenced by such considerations, especially those mental states that make us think an action was done intentionally (for an overview, see Young & Tsoi 2013).
Consider, for example, an experiment conducted by Liane Young and her collaborators (2007), which compared the effect of outcome and intention (via belief) on moral judgment. The researchers constructed different types of vignettes that systematically varied these two factors, an example of which involved an agent, Grace, who puts a white substance in her friend’s coffee. The four versions of this scenario (Table 3.1) varied whether Grace believed the substance was sugar or poison (intention) and whether it actually was poison and thus whether her friend died (outcome). Table 3.1: Cases Varying Intention and Outcome

Intentional Harm (Negative Intention, Negative Outcome): Grace believes the substance is poison and it is poison (so her friend dies).
Attempted Harm (Negative Intention, Neutral Outcome): Grace believes the substance is poison but it is sugar (so her friend lives).
Accidental Harm (Neutral Intention, Negative Outcome): Grace believes the substance is sugar but it is poison (so her friend dies).
Neutral (Neutral Intention, Neutral Outcome): Grace believes the substance is sugar and it is sugar (so her friend lives).

Adapted from Young et al. 2007.

Participants’ moral evaluations of the actions in such cases revealed some impact of both intention and outcome, but intention played a much more prominent role. People were inclined to count both Intentional Harm and Attempted Harm as wrong, whereas Accidental Harm and Neutral were both regarded on the whole as permissible. So it appears the agent’s bad intention was the primary factor in perceived wrongness, whereas an innocent intention was the primary source of perceived permissibility. Apparently the adage “no harm, no foul” applies only when no harm comes from an accidental risk of it. Neurobiological evidence corroborates this psychological picture. The focus of Young and her colleagues (2007) was actually an area of the brain associated with detecting others’ mental states: the right temporal parietal junction (rTPJ). Using functional magnetic resonance imaging, they found that the rTPJ was most active during the assessment of merely attempted harm, where there was malicious intent but no harmful outcome. In two further experiments, Young and her collaborators (2010) found that temporarily disrupting neural activity in the rTPJ, using transcranial magnetic stimulation, made innocent intent less important for participants’ moral judgments of accidental harm. A different group of researchers increased the excitability of the rTPJ by using transcranial direct current stimulation (Sellaro et al. 2015). They found predictably that the excitation increased the role of intent in moral judgments about harmful acts. Computing information about an actor’s intent is clearly a crucial input to at least some forms of moral judgment, and it occurs rapidly. In a series of studies using electroencephalography (EEG), Jean Decety and his collaborators have tracked this processing pg. 51 of 206

down to fractions of a second. Computing intent in the rTPJ appears to occur around 62 milliseconds after viewing a morally relevant scenario (Decety & Cacioppo 2012). Within 300 to 600 milliseconds, as one evaluates the action, it appears that prefrontal circuits become more active after receiving input from the amygdala (Yoder & Decety 2014). In under a second, one can categorize an action as morally good or bad by unconsciously inferring the actor’s intention and evaluating the outcomes. The importance of intentionality in moral cognition appears to be something of a cultural universal. One large study examined this phenomenon specifically in a diverse sample of largeand small-scale societies across the globe—from urbanites in Los Angeles to hunter-gatherers in Africa to hunter-horticulturists in the Australian Outback (Barrett et al. 2016). While the researchers found some variation in the degree to which intent affects moral judgment, its impact was observed among every group in the study. Given the massive cultural differences between these groups, the commonality in the data is striking. As Barrett and his collaborators put it, “participants in all 10 societies moderated moral judgments in light of agents’ intentions, motivations, or mitigating circumstances in some way” (4692). Now, perhaps intentions are only important for blame, not wrongness, as utilitarian ethicists and other consequentialists typically maintain. Participants may often confuse these or count an action as wrong just to blame the transgressor, in which case ordinary intuitions about right and wrong might only value consequences after all. Experimental evidence suggests otherwise. Fiery Cushman (2008) had participants evaluate variations of cases that resembled Grace’s in being examples of intentional harm, accidental harm, merely attempted harm, or successfully avoiding harm. However, Cushman explicitly measured judgments of wrongness, blame, and punishment. One set of scenarios, for example, described Jenny who had an opportunity to burn her partner’s hand while welding. The vignettes varied whether Jenny actually did harm her partner (outcome) and whether she wanted to do so and was aware of the risk (intentionality). Naturally, all judgments on average were influenced to some degree by both outcomes and intentionality. However, intentionality more powerfully affected how wrong participants thought Jenny’s action was whereas outcomes were comparatively more important for judgments about how much blame and punishment Jenny deserved. Thus, it seems ordinary people do explicitly treat intentionality as more relevant to wrongness than blame. Intentions, then, seem rather important for moral evaluations of harm. This should be unsurprising given the importance of mens rea in the legal system, as when it grounds punishment for some merely attempted crimes or the difference between types of homicide. Even the moral evaluations of children as young as four are sensitive to intentionality (Cushman et al. 2013). We thus have reason to posit something like the following that shapes moral cognition: Intentionality Principle: All else being equal, it’s morally worse to cause harm intentionally as opposed to accidentally.

3.3.2 Actions vs. Omissions

Another basic moral distinction is between acts (or commissions) and omissions, which connects with the famous "Doctrine of Doing and Allowing." To take an example from Foot, there's an important difference between "allowing people in Third World countries to starve to death" and
“killing them by sending poisoned food” (1984: 177). Distinguishing acts from omissions is a common theme in the law too, as there are usually prohibitions on actively causing harm, say, but rarely against inaction (although some jurisdictions do have “Good Samaritan laws” or a “duty to rescue,” according to which, it is illegal to fail to help a fellow citizen in great need). Different moral and legal attitudes toward assisted suicide also rest on the distinction between actively killing patients and passively letting them die. The American Medical Association draws a sharp distinction between letting patients die in certain circumstances and actively killing them, even when it’s done with the patient’s consent and for merciful reasons (famously discussed by Rachels 1975). Several studies have now confirmed that this is a recurring element in moral cognition, even beyond legal contexts (e.g., Spranca et al. 1991; Cushman et al. 2006; Cushman & Young 2011). In most of the experiments, participants are presented with numerous vignettes that systematically vary action vs. omission. For example, in one scenario, James decides to poison an innocent person in order to save five, while in the contrasting story Robert saves five by merely allowing another to die by not administering an antidote (Cushman et al. 2006). Participants consistently treat actively harming as worse or “more wrong” than harming by omission. However, responses are typically recorded on fine-grained scales and the differences between such pairs of moral judgments are sometimes rather small (see esp. Cushman & Young 2011, Exp. 2). Moreover, scenarios contrasting acts and omissions often also differ in terms of either bodily contact or the number of times assault or battery is committed, creating confounding variables (Mikhail 2014). Still, such differences are stable and easily detectable across a range of experiments. There is even some evidence that children as young as five draw the distinction in moral evaluation (Powell et al. 2012). So we might follow Cushman and colleagues (2006) and posit something like the following: Action Principle: All else being equal, harm caused by action is morally worse than harm consequent upon omission. There is some evidence suggesting that our moral evaluations are influenced by this distinction because we think people are more causally implicated in an outcome if it results from action rather than omission (Cushman & Young 2011). In that case, it may be more appropriate to say that the relevant rule is a more general Intentional Harm Principle: All else being equal, it’s morally wrong to intentionally cause harm. This captures both the significance of commissions (via causing harm) and intention. Either way, there is reason to posit tacitly moral rules that turn on the nature of the act and its relation to the agent, independent of outcomes. The case is bolstered by studies of abnormal moral cognition. We already saw this in studies that use brain stimulation to manipulate one’s ability to compute an agent’s role in bringing about an outcome. And we can see similar results when one simply is a morally deviant individual. For example, there is evidence that people who take pleasure in being cruel and brutal or watching such activities— those with “trait sadism”—discount intention and causation when making moral judgments (Trémolière & Djeriouat 2016). 
Computing a person’s intent and their causal role in bringing about an outcome appears to be rather engrained in much of normal moral cognition.

3.3.3 Personal Harm: Force, Contact, and Battery
Let’s now consider some evidence for more complex, yet still arguably non-consequentialist, principles in ordinary moral cognition. Recall the standard Switch scenario (Figure 3.1): a runaway train can be diverted from careening into and killing five innocent people, but as a side effect this would put the train on the path to killing one innocent person. Now contrast that situation with the famous Footbridge case. The protagonist can again allow the train to kill the five, but the only alternative is to save the five by pushing one large innocent man off of a bridge onto the tracks, where he will die but his body will stop the train (for illustration, see Figure 3.2). Figure 3.2: The Footbridge Case

Is it morally permissible to sacrifice one for the greater good? We saw that many people believe that diverting the train in Switch is morally permissible but most think pushing the man in Footbridge is wrong, despite the fact that the consequences appear to be the same in both. This asymmetry in moral intuitions is robust. It was found with apparently little variation in age, gender, and some cultures and nationalities (Hauser et al. 2007; Mikhail 2011: Appendix: Exps. 1-3). It has even been identified in children as young as three (Pellizzoni et al. 2010; Mikhail 2011: Appendix, Exp. 6) and is quite generalizable to harms other than death—such as financial loss, bodily harm, and emotional distress (e.g., Gold et al. 2013). There is some evidence for slight East-West cultural variation in intuitions about some trolley cases (e.g., Gold et al. 2014). But, even if there is substantial variation, it doesn’t count against the existence of internalized rules, only perfectly universal ones. How should we explain the different responses to Switch and Footbridge? Both involve commission and some level of intentionality. One important difference between the cases involves the personal way in which harm is inflicted. A cluster of related factors in this category appear to influence moral judgments. Joshua Greene (2008) famously argues that we tend to think it’s wrong to push someone in cases like Footbridge because this involves up-close and personal harm (cf. Singer 2005; Prinz 2007: 1.2.2). Greene appeals to a wide range of empirical evidence for this claim but focuses on his own brain imaging studies, which suggest greater automatic intuitive processing when judging “personal” moral dilemmas, primarily given greater activity in the ventromedial prefrontal cortex (VMPFC). However, there are some methodological worries about the early studies (see, e.g., Berker 2009) and the effects might not be driven by the personal nature of the cases (McGuire et al. 2009) but instead by how counter-intuitive the response to such dilemmas are (Kahane et al. 2012). Moreover, the personal/impersonal distinction does not appear to capture a wide range of cases. There are many situations in which inflicting personal harm is pg. 54 of 206

regarded as permissible, e.g., when defending one’s family, honor, or life (cf. Mallon & Nichols 2010). Still, various experiments suggest that something like contact generally affects moral intuitions, even if other factors do as well. Studies led by Cushman present participants with a host of scenarios that systematically contrast a number of factors, including physical contact (Cushman et al. 2006; Cushman & Young 2011). For example, one pair of cases includes a version of Footbridge and a modified case in which the protagonist can sacrifice one in order to save five by merely pulling a lever that drops the victim in front of a runaway boxcar. People treat this “drop” version of the case as more morally acceptable compared to pushing. Physical contact, however, is distinct from personal harm, since the latter includes harm inflicted in the absence of physical contact—e.g., poisoning Grandmother’s tea. Greene and his collaborators (2009) have since switched to the notion of harm via personal force, defined as force that directly impacts another and is generated by one’s muscles. Pushing someone with one’s hands or a pole counts as using personal force, for example, but directing a drone attack doesn’t. In one experiment, participants rated the Footbridge scenario and three variations of it. In one the protagonist can push the large man with a pole; in another he can drop him on the tracks through a trap door by pulling a lever from a distance; in the last variant the protagonist is right next to the large man but can sacrifice him by merely pulling a lever to open a trap door. The researchers found that people were less likely to say killing the one to save the five is permissible if it involved personal force, not mere close proximity. However, importantly, Greene and colleagues recognize that personal force often involves intention or harming as a means rather than a byproduct (more on this in the next section). They found in a further experiment that personal force alone affects moral judgments only when it’s combined with something like intent. For this reason, Greene’s latest proposal (2013: 246) is that characteristically non-consequentialist intuitions arise from a rigid heuristic (a “myopic” brain “gizmo”) that responds negatively to prototypically violent acts. Such acts combine three factors: personal force, action rather than omission, and harming as a means rather than a side effect. Related to personal force is the notion of battery, often colloquially termed “assault.” John Mikhail identifies a relevant moral rule, embodied in the law, as the Prohibition of Intentional Battery, which “forbids purposefully or knowingly causing harmful or offensive contact with another individual or otherwise invading her physical integrity without her consent” (2011: 117). Of course, as with other rules, this is not without exception. For example, many think it permissible to injure a frail person by pushing him out of the way of an oncoming train in order to save his life (see the case of “Implied Consent” in Mikhail 2011). As in the law, we treat such harm via personal contact as morally justified (Mikhail 2014). It’s difficult to draw firm conclusions about how contact, personal force, or battery affect moral cognition. Battery is somewhat similar to contact and personal force, but it needn’t be especially personal or physical, as when from a distance one causes bodily injury by intentionally directing a trolley toward someone (Mikhail 2014: §1.3). 
Perhaps these three factors can be unified into a single principle, or perhaps one factor can explain all the relevant data. Further research is clearly needed. Until then, we can't gain much precision. I leave the matter with a recent quip from Paul Bloom: "Here is a good candidate for a moral rule that transcends space and time: If you punch someone in the face, you'd better have a damn good reason for it" (2013: 10).

3.3.4 Means vs. Byproduct

There is another key explanation of the Switch/Footbridge asymmetry, and related patterns of moral judgment. It involves the distinction between generating an outcome as a mere side effect or byproduct of one's plan vs. bringing it about as a means to some goal. In Switch, it seems that harming the one is an unintended side effect of saving the five. In Footbridge, however, the protagonist must harm the one as a means to saving the five.

The means/byproduct distinction is often employed in the famous Doctrine of Double Effect, which has been widely discussed both empirically and theoretically. The Doctrine can be formulated in various and often complicated ways (see, e.g., Mikhail 2011: 149). The core idea is that there's an important moral difference between bringing about an outcome as a means vs. a side effect of a noble goal. Such a principle has been used to defend the moral permissibility of various killings. For example, while killing an innocent human fetus is deemed by some as always immoral, abortion may be thought permissible if it's the foreseen but unintended side effect of saving the mother's life (cf. Foot 1967). Similarly, many people condemn active euthanasia, even when competent and terminally ill patients request it, on the grounds that doctors shouldn't end the life of a patient as a means to the goal of ending suffering. However, causing the death of a terminally ill patient is sometimes treated as acceptable if it's a merely foreseen byproduct of palliative care. Importantly, according to several theorists, something like the Doctrine of Double Effect tacitly operates in our ordinary moral thinking (e.g., Harman 1999; Mikhail 2011).

Some variations on the original trolley cases apparently provide support for this idea. Consider an important second pair of cases (illustrated in Figure 3.3). In Loop Track, the first option is again to do nothing and let the five die while the alternative is to divert the train onto a sidetrack. But now there is a large innocent man there who can stop the train, dying in the process. The important difference here is that the sidetrack loops back onto the main track where the five are, such that killing the one is required to save the five (this case is originally articulated and discussed in Thomson 1985). Contrast this with Man-in-Front, which is meant to be exactly like Loop except that there is a large object behind the one man on the side track, and it's enough on its own to stop the train. However, since the man is in front of this object, the protagonist knows the one must die if the switch is flipped (Mikhail 2011: Appendix, Exp. 4).

Figure 3.3: Loop Track vs. Man-in-Front

If people implicitly think in terms of Double Effect, then presumably one would find roughly the following pattern of results. On the one hand, participants would be more likely to say that flipping the switch in Man-in-Front is morally permissible, since this would involve a harmful consequence that's not intended but merely foreseen. On the other hand, people would tend to treat flipping the switch as impermissible in Loop, since like Footbridge the protagonist must sacrifice the one as a means to saving the five. Adding or removing the heavy object should make a hefty moral difference.

Initial experimental results roughly fit with the predictions of Double Effect. In one study, Mikhail (2007) found that most participants judged throwing the switch to be morally permissible in Man-in-Front but fewer said this about Loop (62% vs. 48%), and the difference is statistically significant. This even holds for the large-scale online study done by Hauser and his collaborators (Hauser et al. 2007).

There are several criticisms one could make of the Loop studies. One problem is the apparently small size of the effect. For example, looking just at the percentages of people who thought flipping the switch is permissible (from Hauser et al. 2007), there is a large difference between Switch and Footbridge (85% vs. 12%) but not between Loop and Man-in-Front (56% vs. 72%). The differences between each pair are statistically significant, but that only warrants the conclusion (roughly) that the observed differences were not likely due to mere chance. As some commentators have noted, a principle like Double Effect should presumably treat the two pairs of cases similarly, so it's odd that the difference in responses to the Switch/Footbridge pair is so much larger than in the Loop/Man pair (Phelan 2012; Enoch 2013: 10; Mikhail 2011: 341).

Another issue is that some researchers have failed to replicate the original results. For example, Waldmann and Dieterich (2007, Exp. 2) presented participants with, among other scenarios, a version of Loop and measured moral evaluations on a scale. Waldmann and Dieterich's loop case is not identical to Mikhail's, but it's similar enough that one would expect similar results. Yet participants tended to treat flipping the switch in this case as permissible, even though it involved harming as a means. One possible explanation is that Waldmann and Dieterich's vignettes involved causing bodily harm, not death. However, if harming as a means substantially influences moral judgment, one would expect at least a slight difference in morality ratings between cases like Loop and a case like Switch, but these researchers did not find one.

Waldmann and Dieterich's experiment was not a fully direct replication attempt, but other attempts have fewer if any differences from the original Loop studies. Sinnott-Armstrong and his collaborators (2008), for example, found that subjects tended to count turning the trolley in their version of Loop as not morally wrong. Even more strikingly, Greene and colleagues (2009, Exp. 2a) used versions of Loop and Man-in-Front ("loop weight") modeled closely on Mikhail's. Yet participants tended to treat both of these cases as morally acceptable, and there was no statistically significant difference between responses, despite using fine-grained scales to measure moral evaluations.
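To see concretely how a difference in responses can be statistically significant and yet modest in size, here is a minimal sketch in Python using the permissibility rates just quoted. The per-case sample size of 500 is a placeholder rather than a figure reported by any of these studies, and Cohen's h is simply one standard way of quantifying the gap between two proportions.

```python
import math

def two_proportion_test(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def cohens_h(p1, p2):
    """Cohen's h: a standard effect-size measure for a difference in proportions."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

# Permissibility rates quoted in the text; the sample size (500 per case)
# is an assumption for illustration, not a figure from Hauser et al. (2007).
pairs = {
    "Switch vs. Footbridge": (0.85, 0.12),
    "Man-in-Front vs. Loop": (0.72, 0.56),
}
for label, (p1, p2) in pairs.items():
    z, p = two_proportion_test(p1, 500, p2, 500)
    print(f"{label}: z = {z:.1f}, p = {p:.3g}, Cohen's h = {cohens_h(p1, p2):.2f}")
```

On these assumed sample sizes both gaps come out "significant," but the Switch/Footbridge gap is roughly five times larger by the effect-size measure, which is just the asymmetry noted above.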
Zimmerman (2013) presented participants with slightly modified versions of Loop and Man-in-Front, and again most in both groups regarded sacrificing one to save five as morally acceptable (89% vs. 83%), regardless of whether this was measured using a forced choice (Yes/No) or a Likert-type scale. These issues aren't necessarily devastating for Double Effect. Some of the attempts to replicate the results originally found for Loop and Man-in-Front are still not perfectly direct replication attempts (cf. Mikhail 2013). Moreover, numerous other experiments do suggest that the severity of our moral judgments is sometimes sensitive to the means/byproduct distinction
(e.g., Cushman et al. 2006; Hauser et al. 2007; Moore et al. 2008; Sinnott-Armstrong et al. 2008; Cushman & Young 2011; Millar et al. 2014; Barak-Corren et al. forthcoming). Often these other experiments use non-trolley cases, which suggests that the effect isn't merely an artifact of the trolley paradigm. Still, these further studies are somewhat limited for two reasons. First, they do not always isolate the means/byproduct distinction from confounding variables, such as personal force, in contrast with the Loop and Man-in-Front scenarios. Second, the effects when observed are consistently rather small. Even Cushman, who has reported the means/byproduct effect in several of his own experiments, says it's "Lilliputian even by the forgiving standards of social psychology" (2016: 763).

Clearly, the extant evidence provides conflicting results. To help adjudicate the issue, Adam Feltz and I conducted a meta-analysis of over 100 studies of the means/byproduct effect on moral judgment involving over 24,000 participants (Feltz & May 2017). Our results suggest that there is a small effect, such that generating a bad outcome as a means is perceived as slightly morally worse than when generated as a byproduct, even across multiple experimental designs and dependent measures. However, the effect is heavily mediated by contact or personal force. That is, moral evaluations are harsher primarily when an outcome such as harm is inflicted as a means and in an up-close and personal manner—e.g., pushing or throwing. So it seems cases like Footbridge are treated as particularly impermissible because they involve both harming as a means and something like personal force (as in Greene et al. 2009). These results may seem problematic for those who believe that Double Effect is a common element of ordinary moral cognition. However, a more promising picture emerges when we consider a principled way to unify the impact of diverse factors like intention, commission, and personal force.
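For readers unfamiliar with how a meta-analysis pools many small studies, the following sketch shows a standard random-effects calculation (DerSimonian-Laird) applied separately to two subsets of studies. The per-study effect sizes and variances are invented for illustration and are not the Feltz & May (2017) data; the point is only to show how a modest overall effect can coexist with a clearly larger effect in the subset where the means manipulation also involves personal force.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes."""
    w = [1 / v for v in variances]
    mean_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Invented per-study effect sizes (Cohen's d) and variances, split by whether
# the means/byproduct contrast also involved personal force; not real data.
personal   = ([0.55, 0.40, 0.62, 0.48], [0.02, 0.03, 0.04, 0.02])
impersonal = ([0.10, 0.05, 0.15, 0.02], [0.02, 0.03, 0.04, 0.02])

for label, (d, v) in {"personal force": personal, "no personal force": impersonal}.items():
    est, se = pool_random_effects(d, v)
    print(f"{label}: pooled d = {est:.2f} (SE {se:.2f})")
```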

3.3.5 Unifying with Agential Involvement

What's clear from dozens of experiments is that our moral intuitions are at least sometimes affected by how an agent brought about a bad consequence. Consider a principle proposed by Ralph Wedgwood (2011), albeit without reference to any empirical data, which is meant to explain commonsense moral judgments: "When your act has a bad consequence, the more agentially involved you are in bringing about that consequence, the stronger the reason against the act will be" (393). According to Wedgwood, a person can be more "agentially involved" in an outcome if she brings it about intentionally (yielding Double Effect) or via action rather than omission (yielding the Doctrine of Doing and Allowing).

As it happens, Wedgwood's proposal seems to fit the empirical data quite well. Cushman and Young (2011) found that moral judgments about cases differing in terms of harming as a means vs. a byproduct were mediated by participants' attributions of intentionality to the agent. That is, it looks as though we are (slightly) more inclined to say an act is morally worse if it brings about a bad outcome as a means rather than a byproduct, precisely because we view the act as more intentional. Strikingly in line with Wedgwood's view, Cushman and Young also found that participants were more likely to treat actions as worse than omissions because they were less likely to think the agent was the cause of the outcome when it results from omission.

A person can become more "agentially involved" in generating an outcome in various ways. Commission, intention, and causing as a means seem to contribute to agential involvement, and perhaps we could add the use of contact or personal force. At any rate, it seems
we can posit a general principle along these lines that unifies at least some of these factors that impact moral cognition:

Principle of Agential Involvement: All else being equal, it is morally worse for an agent to be more involved in bringing about a harmful outcome.

Of note is the decidedly non-consequentialist character of this unifying principle: it's not just morally significant that the outcome is harmful but that it's brought about in a certain way.

Now, the evidence for positing this and other moral principles has often involved dilemmas that seem strange, unusual, or unrealistic. One might worry about the use of intuitions about such "exotic" scenarios "for purposes of moral and legal analysis" (Sunstein 2005: 541). There is some empirical evidence that a sizeable number of people regard certain trolley cases, especially Footbridge, as not only unrealistic but humorous (Bauman et al. 2014). So one might worry that the studies mentioned in this chapter fail to reveal much about normal moral judgment—a matter of ecological validity.

However, this worry is unfounded. The trolley scenarios, especially the most worrying ones (e.g., Footbridge), aren't representative of all of the vignettes used in the relevant research. Even when researchers highlight trolley cases, exploring the supplementary materials often reveals that many involve quotidian or humorless situations, such as poisoning an enemy, pulling hair, saving children from a burning building, breaking one's neck after slipping on a wet floor, and slipping off of an accelerating speedboat. And some experimental vignettes involve more everyday outcomes than death, such as financial loss, emotional harm, property damage, and getting up off the floor. So it's difficult to challenge the ecological validity of the large and diverse body of research that has by and large converged on some core elements of moral cognition.
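Before moving on, it may help to make the Principle of Agential Involvement concrete by treating it as a simple additive model in which each form of involvement raises the predicted severity of a judgment. The sketch below is only a toy formalization under assumed weights; neither the weights nor the functional form comes from any of the studies discussed in this chapter.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    harm: float            # badness of the outcome (0 to 1)
    commission: bool       # action rather than omission
    as_means: bool         # harm brought about as a means rather than a side effect
    personal_force: bool   # harm inflicted via the agent's own muscles

# Illustrative weights only: each factor adds to "agential involvement".
WEIGHTS = {"commission": 0.3, "as_means": 0.3, "personal_force": 0.2}

def perceived_wrongness(s: Scenario) -> float:
    involvement = (WEIGHTS["commission"] * s.commission
                   + WEIGHTS["as_means"] * s.as_means
                   + WEIGHTS["personal_force"] * s.personal_force)
    # Worse outcomes and greater involvement both raise predicted wrongness.
    return s.harm * (1 + involvement)

cases = [
    Scenario("Switch",     harm=1.0, commission=True, as_means=False, personal_force=False),
    Scenario("Loop",       harm=1.0, commission=True, as_means=True,  personal_force=False),
    Scenario("Footbridge", harm=1.0, commission=True, as_means=True,  personal_force=True),
]
for c in cases:
    print(f"{c.name}: predicted wrongness = {perceived_wrongness(c):.2f}")
```

On these placeholder numbers the model ranks Footbridge as worse than Loop, and Loop as worse than Switch, which mirrors the rough ordering in the experimental findings reviewed above.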

3.4 Moral Inference

A clear picture is emerging from the science of moral judgment. We often rapidly infer the moral status of an action in part by relying on general principles that identify as morally relevant various features of agents, actions, and outcomes. In other words, we categorize the act as either moral or immoral on the basis of the presence or absence of what we take to be morally relevant factors. This is a matter of reasoning or inference, whether implicit, explicit, or some combination of the two across time.

Compare ordinary non-moral inferences. Suppose I'm tasked with categorizing various objects as furniture or non-furniture. I direct my attention to each object one by one and consider its various furniture-relevant features. As with moral judgment, we could posit general principles that identify what I take to be the conditions for being furniture. Or what's more plausible for both the concepts of furniture and morality is that the principles merely identify prototypical features that are statistically frequent in the category, or exemplars with which I can compare the object in question (cf. Park 2011; Wylie 2015). Either way, I eventually categorize the object, making a relevant judgment. Such categorization is a matter of reasoning or inference (as characterized in Chapter 1, §1.2.2). It involves forming a new belief—"This is a piece of furniture"—on the basis of other beliefs or belief-like states (e.g., "The function of this object is for sitting," or "This object resembles sofas, chairs, and tables").

Is it plausible, though, that we have even implicit beliefs in moral principles, such as the Doctrine of Doing and Allowing? Perhaps, but a more modest position is available, according to
which we're merely disposed to reason in accordance with the relevant principles (Horgan & Timmons 2007: 280; Zimmerman 2013: 6). Compare logical inference. One can count as reasoning in accordance with, say, Modus Ponens, even if it isn't represented in the content of a belief (cf. Boghossian 2012). So, regardless of whether we posit beliefs in the relevant moral principles or mere dispositions to reason in accordance with them, we have evidence of moral reasoning or inference. Of course, sentimentalists might insist that, while this may be inference, applying moral concepts requires relevant feelings or dispositions to feel them. But we saw in the previous chapter that we lack empirical grounds for such claims. What's left is inference. Again, emotions may aid in moral inference by drawing one's attention to morally relevant information, but that's not enough to establish sentimentalism (see Chapter 1, §1.2.3).
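The furniture analogy can be made a bit more concrete with a toy similarity-based classifier of the kind prototype and exemplar theories describe. The feature lists and the 0.6 threshold below are invented for illustration; nothing here is drawn from the empirical studies cited in this chapter.

```python
def similarity(features: set, prototype: set) -> float:
    """Jaccard overlap between an item's features and a category prototype."""
    return len(features & prototype) / len(features | prototype)

def categorize(features: set, prototype: set, threshold: float = 0.6) -> bool:
    """Classify an item as a category member if it overlaps enough with the prototype."""
    return similarity(features, prototype) >= threshold

# Hypothetical prototype features, chosen only to illustrate the mechanism.
FURNITURE = {"has a function in the home", "sat on or placed upon", "movable", "rigid"}
WRONGFUL_HARM = {"causes harm", "commission", "intended as a means", "no consent"}

stool = {"has a function in the home", "sat on or placed upon", "movable"}
footbridge_push = {"causes harm", "commission", "intended as a means", "no consent"}
switch_divert = {"causes harm", "commission"}

print(categorize(stool, FURNITURE))                # True: enough overlap with the prototype
print(categorize(footbridge_push, WRONGFUL_HARM))  # True: all the wrong-making features present
print(categorize(switch_divert, WRONGFUL_HARM))    # False: fewer wrong-making features present
```

The same sorting routine works whether the category is "furniture" or "wrongful harm": in each case the verdict is inferred from the features one takes the item to have, which is the point of the analogy.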

3.4.1 Moral Reasoning, Fast and Slow?

We've already encountered evidence that moral cognition can be automatic or controlled and conscious or unconscious. This picture fits with a familiar dual process theory of the mind more generally. Suppose that you're running in a race and pass the person currently in second place. What place are you in? Your immediate gut reaction is likely: first place. If that's the only answer you came up with, think again. Further reflection should suggest a different result: second place. If you pass the person in second, you're merely taking their place in the hierarchy, not jumping into the lead.

The question about the race is an item on a new version of the Cognitive Reflection Test (Thomson & Oppenheimer 2016). Such tests nicely reveal the dual process nature of our minds. We have two systems that correspond to thinking that is, roughly, either fast or slow (cf. Kahneman 2011). Automatic processing ("System 1") is faster, less amenable to direct conscious control, and more often involves the application of heuristics. Such mental processing yielded the intuitive conclusion of "first place" in response to the question about the race. Controlled processing ("System 2") is slower, more amenable to deliberate control, and often involves the application of conscious reasoning. Naturally, controlled processes can be recruited to regulate one's automatic responses (Helion & Pizarro 2014). For example, using cognitive behavioral therapy, one might dampen one's crippling anxiety by consciously reappraising the situation, thinking positive thoughts, and rehearsing strategies for regulating one's emotional response.

Consider two analogies for illustrating dual process theory that are useful if not taken too far. The first is the distinction between the automatic and manual settings of a camera (Greene 2013). We can simply rely on automatic settings or mental heuristics to navigate our world, or we can switch into manual mode and calculate the solutions deliberately. Another analogy is to an elephant and its rider (Haidt 2012). A wealth of research suggests that much of our mental lives are driven by powerful automatic processes, like an elephant, while conscious deliberation has comparatively less influence and is often just along for the ride. The rider can take over to some extent when one pauses to carefully deliberate, but the powerful elephant still exerts much force.

A natural explanation is that we evolved to have two systems because each has its advantages. Automatic settings are quick and generally more efficient—no time to sit and deliberate when one spots a predator! But controlled cognition is more flexible, allowing one to forgo a heuristic or automatic response and carefully work out a complex solution to a novel problem.

Dual process theory turns on a distinction that is so general that it's almost trivial. Moreover, there is no bright line dividing the two systems, and they may be ultimately intertwined (Seligman et al. 2016: ch. 3). Nevertheless, the distinction has proven helpful in understanding cognition across many domains, including moral judgment, helping to mark out modes of moral cognition that tend to have different properties (see Table 3.2).

Table 3.2: Two Modes of Moral Cognition

Automatic                                Controlled
Faster                                   Slower
Less conscious effort and attention      More conscious effort and attention
More efficient                           More flexible
"The Elephant"                           "The Rider"
"Automatic settings"                     "Manual mode"
Less reliable (crossed out)              More reliable (crossed out)
Emotional (crossed out)                  Emotionless (crossed out)
Deontological (crossed out)              Utilitarian (crossed out)

Moral cognition certainly is often automatic and immediate. When one learns about the Holocaust, there's no need for conscious deliberation to determine that it's morally abhorrent. Upon learning about all the relevant non-moral facts (e.g., the torturous conditions of the concentration camps, the mass graves, the human experiments), one's moral judgment immediately follows. Other cases, though, do seem to require slower, controlled processes, as when someone lies awake deliberating about whether to come clean about an extramarital affair.

This dual process framework for moral judgment is extremely minimalist (even more than Campbell & Kumar 2012). It jettisons further, more controversial claims, particularly the following three (crossed out in Table 3.2).

First, some theorists argue that each system is tied to specific kinds of moral values. Greene (2008), in particular, contends that moral judgments arising from controlled cognition tend to favor the utilitarian maximizing of good consequences while automatic moral intuitions embody characteristically deontological (or broadly non-consequentialist) moral values, such as an actor's intent or a person's rights. However, the studies that provide evidence for this conclusion often involve moral dilemmas in which the utilitarian resolution (e.g., actively kill one to save five) is counter-intuitive. Matters look different when participants are also presented with dilemmas in which the non-utilitarian option is counter-intuitive, as when one can tell a little white lie in order to promote the greater good (Kahane et al. 2012). Such cases suggest that automatic processing is associated with intuitive responses generally, not with deontological or non-utilitarian values in particular. Moreover, there is some powerful evidence that "characteristically utilitarian" responses to sacrificial dilemmas aren't associated with utilitarian values at all, such as impartial concern for the greater good, but rather with various egoistic and anti-social tendencies (Kahane et al. 2015). This suggests that people who are more inclined to push an innocent man to his death to save five others—such as those with VMPFC damage (see Chapter 2, §2.4.2)—may be more anti-social than they are utilitarian.

Second, some proponents of dual process theory argue that automatic moral intuitions are largely emotionally driven. Some theorists go so far as to claim that "the cognitive/affective and
conscious/intuitive divisions that have been made in the literature in fact pick out the same underlying structure within the moral mind” (Cushman, Young, & Greene 2010: 49). Some neuroimaging studies do suggest that automatic, compared to controlled, moral intuitions are correlated with areas of the brain independently associated with emotional processing (Greene 2008), and some changes in moral judgment seem to result after damage to these areas (see Chapter 2, §2.4). However, what are really intuitive responses can easily be mistaken for emotional responses. Again, other brain imaging studies suggest that automatic moral intuitions are correlated with evaluating dilemmas for which the normal response is intuitive, not necessarily emotional (Kahane et al. 2012). Moreover, fine-grained psychological measures of emotion suggest there isn’t a strong correlation between automatic moral judgments and the experience of emotions (Horne and Powell 2016). Finally, some researchers insist that automatic moral intuitions are less reliable. Sometimes, for example, automaticity arises from uncritically absorbing norms from one’s culture, such as rules prohibiting lying or homosexuality (Greene 2008). Other moral intuitions may be more universal but arise from uncritically accepting one’s evolved tendencies—e.g., norms against incest and unfair distributions of goods (Haidt 2012; Bloom 2013). But we’ve already encountered ample evidence that automatic moral intuitions can be responsive to, and change in light of, new information. Intuitive moral judgments are at least sensitive to information about an action’s consequences and how involved an agent was in bringing them about. Indeed, intuitions generally can be unconsciously attuned based on experience. In one famous experiment, healthy participants were able to automatically and unconsciously learn the patterns in decks of cards by drawing individual cards from them (Damasio 1994: ch. 9). Yet this ability is markedly absent or diminished in patients with damage to an area of the brain that Greene and others believe is a key player in automatic processing (the VMPFC). In fact, a range of studies on both human and non-human animals suggests that we regularly learn by unconsciously calculating probabilities and expected values (see, e.g., Railton 2014; Seligman et al. 2016). Moreover, sometimes our automatic moral intuitions have been put in place by prior deliberate reasoning that is, by hypothesis, the paradigm of flexible cognition (Kennett & Fine 2009; Sauer 2017). For example, suppose many years ago I came to believe that affirmative action is morally acceptable by engaging in controlled deliberation that overrode my automatic intuitive moral judgment. I may then automatically judge an instance of affirmative action as morally acceptable but only because I quickly recognize it as belonging to a category that I previously determined to be ethical through conscious deliberation.
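The claim that intuitions can be attuned by implicitly tracking probabilities and expected values can be illustrated with a small simulation. The payoffs and learning rule below are hypothetical, loosely inspired by card-drawing tasks of the kind Damasio describes, and are not a model of any particular experiment.

```python
import random

random.seed(0)

# Two made-up decks: "safe" pays modestly every time; "risky" pays more but
# frequently imposes a large loss, so its expected value is actually negative.
def draw(deck: str) -> float:
    if deck == "safe":
        return 50.0
    return 100.0 if random.random() < 0.5 else -300.0

values = {"safe": 0.0, "risky": 0.0}   # running estimates of each deck's expected value
counts = {"safe": 0, "risky": 0}

for trial in range(200):
    # Mostly exploit the current estimates; occasionally sample a deck at random.
    if random.random() < 0.1:
        deck = random.choice(["safe", "risky"])
    else:
        deck = max(values, key=values.get)
    payoff = draw(deck)
    counts[deck] += 1
    # Incremental update toward the sample mean: v_new = v_old + (payoff - v_old) / n
    values[deck] += (payoff - values[deck]) / counts[deck]

print(values)
```

Nothing in the loop is labeled "deliberation," yet once a few losses are sampled the estimate for the risky deck falls well below the safe deck's and choices shift accordingly, the kind of unconscious attunement to experience the studies above attribute to automatic processing.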

3.4.2 Post-Hoc Rationalization?

One might worry that when conscious reasoning does play a role in moral judgment it merely works to rationalize a moral judgment one has already made intuitively, lacking any power to change one's mind (Haidt 2012). Conscious moral reasoning may, at least typically, be causally inert. The evidence for such claims comes primarily from the famous moral dumbfounding studies. Haidt and his collaborators (1993) presented participants from the United States and Brazil with various "harmless taboo violations," such as eating road kill, cleaning the toilet with one's national flag, and failing to fulfill a promise to someone who has died. Most participants, especially those lower in socioeconomic status, regarded these actions as immoral. Yet, in
interviews with an experimenter, participants tended to look for reasons to justify their judgments, but generally weren’t able to find a satisfying one. Often they would look for possible harms, but the scenarios were described in a way that lacked any clear harms. Moreover, participants’ moral judgments correlated more strongly with their ratings of how bothered they were by the scenarios than with their judgments about how much harm was caused. These studies don’t show that conscious moral reasoning is mere post-hoc rationalization. Asking about how bothered one is by an action can easily be interpreted as a question about how appropriate, acceptable, or morally permissible the act is (May 2016a: 46). In that case, we’d expect a strong correlation between a judgment and itself (or something quite like it). In light of this, the smaller correlation between moral judgments and harm doesn’t sound so insignificant. Moral judgments may well have been responsive to perceived harm, or risk of harm, that was simply difficult to articulate (compare Dwyer 2009; Jacobson 2012; Railton 2014). This suggests that conscious attempts to articulate harms were not mere post-hoc rationalization. In fact, some later studies suggest that many people do think such “harmless” taboo violations involve risk of harm to the violator, such as damaging relationships with others or later on being tormented by the transgression (Royzman et al. 2011; Royzman et al. 2015). Some people may come to reject their intuitive condemnation when given time to reflect on a persuasive argument against it (Paxton et al. 2012), but that’s compatible with the intuitive reaction being driven by implicit reasoning. One recent study supports the idea that people are just highly motivated to remain consistent and avoid capriciously abandoning their initial judgments when challenged. Hall, Johansson, and Strandberg (2012) asked pedestrians at a park if they would indicate their agreement or disagreement with various moral statements. Some participants evaluated general moral principles (e.g., “To be moral is to follow the rules and regulations of the society, rather than weighing the positive and negative consequences of one’s actions”) while others responded to specific moral issues (e.g., “It is morally defensible to purchase sexual services in democratic societies where prostitution is legal and regulated by the government”). After participants provided their responses, they were asked to read and explain them. However, the experimenters surreptitiously changed two of the responses so that the statement endorsed was in fact its opposite (or close to it). Most participants (69%) failed to detect at least one of the altered responses and attempted to justify the opposite of the response originally recorded. It seems that anxiety about appearing inconsistent, confused, or uncooperative led many participants to try to justify a position that they did not in fact originally take. Even if the dumbfounding studies provide good evidence of post-hoc rationalization in these contexts, that conclusion doesn’t necessarily apply to all or even most moral reasoning. After all, numerous experiments demonstrate that conscious reasoning can override automatic heuristics. A key example is implicit bias. We are unfortunately influenced by unconscious biases against various groups based on morally irrelevant factors, such as race, gender, and sexual orientation. 
There is experimental evidence, however, that some deliberate efforts to correct for such biases are effective (for review, see Kennett & Fine 2009). For example, while we're more prone to misidentify an item as a weapon if it's held by a black man rather than a white man, there is some evidence that the influence of this bias can be mitigated by consciously thinking: "Whenever I see a Black face on the screen, I will think the word, safe" (Stewart and Payne 2008: 1336). Now, once this "implementation intention" is set, it may be triggered automatically at the time of action without deliberation. But the behavior has been automated by a prior deliberate moral choice.

We can also look to studies that directly measure moral judgments after manipulating reasoning. In one set of experiments, Paxton, Ungar, and Greene (2012) found that participants’ automatic moral intuitions about sacrificial dilemmas could be overridden after receiving ample time to consciously deliberate about strong counter-arguments. Reflective participants tended to ultimately conclude that sacrificing one for the greater good was morally preferable, at least when this wouldn’t make anyone worse off than failing to intervene. Such results fit with other findings that, as we already noted, suggest that people will overwhelmingly agree with relevant utilitarian principles, particularly, “In the context of life or death situations, you should always take whatever means necessary to save the most lives” (Horne et al. 2015). However, Horne and his collaborators found that people are less inclined to accept this unrestricted principle after reasoning about a single counter-example, such as the famous Transplant dilemma in which a surgeon can save five ailing people only by harvesting the organs of one healthy person. Indeed, participants were less inclined to accept the utilitarian principle—which would justify killing the one—even when asked about it six hours later. Thus, we have converging evidence that conscious reflective reasoning prompts us to both override automatic moral intuitions in light of evidence and update our moral beliefs (or credences). Indeed, a growing body of evidence suggests that consistency reasoning in particular is a common source of belief revision in ethics (Campbell & Kumar 2012; Holyoak & Powell 2016; Barak-Corren et al. forthcoming; Lawrence et al. 2017). Haidt (2001) does present additional evidence for the idea that conscious moral reasoning is largely inert. Many of the data, however, don’t directly speak to moral judgment. For example, he discusses general evidence for dual-process theory. A relevant finding is that, more often than we naturally think, quick unconscious processes determine our thoughts and actions while more deliberate ones play at most a post-hoc or interpretive role. In one famous example, Nisbett and Wilson (1977) asked people which of a set of stockings they preferred most. Unbeknownst to the participants, the garments were qualitatively identical. Yet most people reported preferring the stocking on the far right and attempted to consciously justify the decision by appeal to other features that weren’t there. The real source of the choice seems to be an unconscious bias toward objects on the right. And, in the absence of any salient differences between the garments, this bias generates a hesitant choice. But the choice has been made, and conscious reasoning plays the role of interpreting and rationalizing it. These are considerations about cognition generally, however. They do suggest that moral reasoning will, like other kinds of thinking, involve more rationalization and confabulation than we tend to expect. But that only means moral reasoning is similar in this regard to other forms of cognition. While we’re certainly influenced by unconscious processes, they don’t necessarily render conscious reasoning typically inert. We’d need more evidence to ground skepticism about the ability of deliberate moral argument to persuade. In sum, moral intuitions can be shaped by conscious reasoning or controlled processing. 
Reasoning can revise more than philosophers' moral judgments, such as the utilitarian's rejection of the significance of killing vs. letting die. In light of reflection, ordinary individuals can revise deeply held moral beliefs too—such as those concerning racism, sexism, or factory farming—although often this will involve the interplay between conscious and unconscious thinking (Craigie 2011; Campbell & Kumar 2012).

3.4.3 Universal Moral Grammar?

One theme of this chapter is that a good deal of moral judgment is automatic and intuitive. Some theorists have taken this idea to help support the case for a human moral faculty, akin to our plausibly innate capacity for developing knowledge of a language (e.g., Harman 1999; Dwyer 2009; Mikhail 2011). Moral judgment does seem to resemble the application of the rules of grammar in being relatively automatic and developing in childhood with little explicit instruction. For example, people intuitively categorize commissions as morally worse than their corresponding omissions without necessarily having the ability to articulate the Doctrine of Doing and Allowing or its relation to Agential Involvement. And the moral relevance of intent appears to naturally develop early in childhood (Cushman et al. 2013). As Mikhail puts it, "the overall length, complexity, and abstract nature of these computations, along with their rapid, intuitive, and at least partially inaccessible character, lend at least modest support to the hypothesis that they depend on innate, domain-specific algorithms" (2011: 121).

Proponents of the hypothesis of universal moral grammar typically make at least three distinct claims about moral cognition that can be separated. First, they contend that all or some moral judgments arise from a kind of moral module in the mind whose operations are relatively quick, unconscious, and encapsulated from information that's otherwise consciously accessible. Second, proponents of moral grammar claim that some elements of moral cognition are in an important sense innate, unlearned, or organized in advance of experience. Finally, proponents typically claim that some elements are universal or widely shared across the human species.

Ultimately, the picture developed in this chapter is compatible with some degree of modularity, innateness, and universality. We've already seen that some core elements of moral cognition, particularly intent, appear to be shared across cultures. While there may be significant variation in the details of moral systems within a culture and around the globe (more on this in Chapter 5), there do appear to be some structural commonalities, such as the significance of outcomes and agential involvement. Our intuitive moral judgments are also in some sense modular. They are relatively automatic and we're unaware of at least some of their operations. However, as we saw in the previous section, this does not preclude our being able through conscious reasoning to investigate or modify the principles driving our moral judgments. Finally, even if specific moral principles are not themselves innate and can be modified with experience, there may still be a sense in which we possess an innate moral faculty (Mikhail 2013). Basic moral principles might still resemble the principles of grammar to some degree in being the result of both one's genetic endowment and experience, including implicit and explicit reasoning.

So we humans might possess a moral faculty. Even if we don't, though, there are plausibly some universal elements in moral cognition that arise from a somewhat modular mechanism that is to some degree organized in advance of experience. At any rate, our focus is on abstract principles that are common elements of adult moral judgment, not innateness or moral development.
Often theorists take the moral grammar hypothesis as somewhere between sentimentalism and "rationalism," where the latter is conceived as the view that moral judgments are preceded by conscious reasoning. But this chapter has argued that both conscious and unconscious reasoning can generate moral judgments. Even sentimentalists recognize that this view is compatible with the rationalist tradition and the rejection of sentimentalism, since it claims that inference, not affect, is essential to moral judgment (see, e.g., Nichols 2008: n. 2; Mallon & Nichols 2010: 302-3; Zimmerman 2013; Prinz 2016: 49). As Prinz puts it: "Moral grammar is just unconscious rationalism" (2016: 56).

3.5 Conclusion We’ve encountered ample research that directly favors rationalism by showing that both conscious and unconscious reasoning heavily influence moral cognition. This reasoning is at least systematically sensitive to an action’s outcomes and the agent’s role in generating them. While these considerations are often processed implicitly or unconsciously, we have little reason to believe that explicit, conscious reasoning is predominantly post-hoc rationalization. Our dual process minds afford both kinds of influence on moral thinking. Even if implicit processes are much more prominent than we ordinarily expect, they’re often driven by complex computations. Whether moral cognition is fast or slow, it often involves a complex interplay between implicit and explicit reasoning. One might now ask: Must moral cognition involve reasoning “all the way down”? Won’t reasoning bottom out in some basic moral beliefs, such as “Treat others as you’d like to be treated” or “It’s wrong to fail to maximize happiness”? Sentimentalists, after all, can admit that reasoning plays a role in moral cognition, just not at its foundations (Nichols 2008: n. 2; Prinz 2016: 65). In response, rationalists could simply embrace a conception of moral cognition without foundations. Perhaps we simply continue to reason to the most internally coherent system of moral beliefs until we reach reflective equilibrium. Or perhaps we continue to justify our moral beliefs without end. I prefer a more “foundationalist” framework and instead simply deny that basic moral beliefs must be backed by affect in order to be genuinely moral beliefs. Either way, the sentimentalist alternative is compelling only if the affective aspects of emotions are essential to distinctively moral judgment, independently of reasoning processes. We saw in the previous chapter that the empirical case is unconvincing. If feelings alone don’t moralize or underwrite non-pathological moral judgment, then we needn’t posit them as causally necessary for moral inference. Moreover, emotions are not always evoked by many of the dilemmas used to uncover tacit moral computations (e.g., when a hypothetical person’s property is damaged), and it’s a stretch to think that we’re even disposed to feel such emotions under appropriate conditions. There is certainly some role for emotions in moral cognition, but we should recognize their limits. We lack evidence that the affective aspects of emotions make judgments distinctively moral independent of inference. Emotions typically aid inference generally by, for example, drawing one’s attention to relevant information (Pizarro 2000; Prinz 2006: 31; Nichols 2014: 738). Deficits in emotion then might only indicate poor input to an intact capacity for reasoned moral cognition (Huebner et al. 2009). Rationalists should admit that genuine moral judgment is often automatic and unreflective (contra Kennett & Fine 2009) but they needn’t concede that mere feelings are necessary apart from their contribution to inference (contra Sauer 2017). Now, “affect” of some sort may be necessary for much of human cognition generally. Seligman, Railton, Baumeister, and Sripada (2016) argue that the brain is “built around affect” (25), as the affective system “makes a key contribution to our ability to learn about, anticipate, evaluate, estimate, and act upon the prospects or perils of the world” (54). 
Importantly, though, on this picture "automatic affect" isn't a "full-blown emotion" at all but rather "a twinge of feeling" that "may be entirely unconscious" and "might not involve any physiological response, such as a state of arousal" (210). Even if this "affective primacy" picture is right, it's no consolation for the sentimentalist tradition in ethics. Affect is not afforded a special role in distinctively moral cognition and it functions to facilitate unconscious learning and inference. This comports well with the rationalist tradition, which holds that moral cognition resembles
other forms of cognition in being fundamentally a matter of learning and inference. Even if “affect” supplies an essentially evaluative (good/bad) ingredient in cognition, this isn’t a matter of mere feeling but of supplying relevant information, which is something rationalists can happily accept. This chapter has defended a rationalist account of moral cognition as commonly involving reasoning, which might seem to paint it in a positive light. Whether moral thinking is fast or slow, it exhibits a regard for reason. But of course, bad reasoning is still reasoning. Pessimism lurks if the science reveals that ordinary moral inference is a kludge. Our automatic moral intuitions, which often are non-consequentialist in character, might seem to arise from defective psychological and evolutionary processes that make them overly sensitive to morally irrelevant factors. In other words, we haven’t addressed whether ordinary moral judgment can be empirically debunked. We address such epistemic issues in the next two chapters.


Ch. 4: Defending Moral Judgment

Word count: 10,925

4.1 Introduction

Scientific evidence can lead to a critique of some class of beliefs by exposing their sordid origins. Consider the familiar accusation of wishful thinking. Why in 2003 did so many people in the United States think Saddam Hussein had weapons of mass destruction, despite the absence of sound evidence? Why do some people believe in crooked televangelists who pose as faith healers, like the despicable Peter Popoff of the 1980s? One common answer to these questions is: because they wanted it to be true. And one might point to empirical evidence to support such debunking explanations.

Moral beliefs can similarly be threatened by an examination of their genealogy. So far, we have covered evidence that moral judgment is a process that's fundamentally a matter of reasoning or inference. In particular, while emotions may play a role in forming moral beliefs, that is largely by drawing attention to relevant information—e.g., about positive and negative consequences of an action and how involved the agent is in the outcome. Moral cognition on this picture has the promise of being a rational process that can yield justified moral beliefs. However, empirical evidence may cast doubt on the possibility of moral knowledge based on ordinary moral reasoning. Much of moral thought is automatic, based on opaque sources, and even conscious reasoning can be mere confabulation. Science has the power to reveal whether the human capacity for moral judgment is a kludge. Pessimism remains if our ordinary moral beliefs are unjustified.

Empirical debunking arguments in ethics have tended to be global or at least wide-ranging, targeting all moral beliefs or a large class of them. Indeed, Guy Kahane argues that debunking arguments (evolutionary ones at least) are all-or-nothing. It "seems utterly implausible," he concludes, that such arguments can "have a legitimate piecemeal use in normative debate" since "to work at all" they are "bound to lead to a truly radical upheaval in our evaluative beliefs" (2011: 120-1; cf. Rini 2016). Many have recently argued in exactly this fashion that we cannot remain justified in our ordinary moral beliefs after realizing that evolutionary forces have substantially influenced them (e.g., Ruse 1986; Joyce 2006). But one needn't appeal to evolution to generate such sweeping skepticism. Some argue, for example, that most ordinary moral intuitions are unreliable because they are distorted by cognitive biases, such as the order in which information is presented (e.g., Sinnott-Armstrong 2008; Nadelhoffer & Feltz 2008). Absent evidence that's independent of such intuitions, one might conclude that most ordinary moral beliefs are unjustified.

Some empirical debunking arguments target only certain moral beliefs, but the classes are apparently rather large. Some debunkers argue that disgust is an unreliable emotion in ethics, and therefore that all moral beliefs based on mere repugnance are unwarranted (e.g., Nussbaum 2004; Kelly 2011). Others argue that many intuitively compelling moral beliefs are based on automatic,
unconscious, emotion-driven heuristics (e.g., Singer 2005; Greene 2014). Deontologists, the charge goes, mistakenly offer elaborate, sophisticated justifications for these non-utilitarian beliefs, when in fact we have them for very simple, unsophisticated reasons. If we purge ourselves of the irrational moral beliefs that animate deontology, perhaps only utilitarianism remains standing.

This chapter argues for a more optimistic picture of ordinary moral cognition, even though it contains elements that have roots in our evolutionary history, including non-consequentialist intuitions that rely in part on automatic heuristics. We'll see that any empirical debunking argument will likely struggle when targeting a large class of moral beliefs. Given their diversity of influences, it's inevitably difficult to identify one substantial influence on an array of moral beliefs that is also systematically defective in all or most contexts. What emerges is a debunker's dilemma: one can identify an influence on a large class of moral beliefs that is either defective or substantial, but not both. When one identifies a genuinely defective influence on a large class of moral beliefs (e.g., morally irrelevant framing effects, incidental disgust), this influence is insubstantial, failing to render the beliefs unjustified. When one identifies a main basis for belief (e.g., attuned heuristics, adaptive solutions to social problems), the influence is not defective. As a result, we lack solid empirical reasons to think that most moral beliefs are unjustified due to their origins.

4.2 Empirical Debunking in Ethics

The term "debunk" has multiple meanings. Sometimes it just means to prove wrong (cf. Lillehammer 2003), as when someone aims to debunk astrology, phrenology, or a religious doctrine by showing that its assumptions or predictions are not borne out. A challenge to the truth of a moral belief is indeed a challenge to moral knowledge, insofar as knowledge requires at least justified true belief. Similarly, one lacks knowledge if one personally finds the genealogy of the relevant moral belief so disturbing that one loses any confidence in it (cf. Rini 2017). However, our focus will be on distinctively epistemological challenges to moral belief, which target the rationality or justification of maintaining belief. So we'll proceed with the now common epistemological notion of debunking: to undermine the grounds for belief or to show that our existing reasons for accepting a view are, or become, bad reasons. What's believed is not attacked as false; the belief is shown to be unjustified (see Kahane 2011; Wielenberg 2014; Nichols 2014). Rather than leading us to deny the view once believed, the immediate conclusion of the debunking arguments at issue is that we must withhold judgment. However, insofar as knowledge requires justification or well-founded belief, debunking arguments do ultimately threaten moral knowledge.

There is a long intellectual tradition of debunking in ethics. Nietzsche and Freud, for example, tried to debunk religious morality by exposing its historical roots in activities unconcerned with the search for truth. Nietzsche argued that morality arose merely as a tool for social control while Freud argued that religion is based on wishful thinking (compare also Spinoza's account of how we arrive at ordinary conceptions of religion). Similarly, Peter Singer has long worried that commonsense moral judgments are untrustworthy. We should rely instead on "self-evident moral axioms" given the worry that:

    …all the particular moral judgments we intuitively make are likely to derive from discarded religious systems, from warped views of sex and bodily functions, or from customs necessary for the survival of the group in social and economic circumstances that now lie in the distant past…. (1974: 516)

These debunking arguments conclude that commonsense moral beliefs are unjustified even if not obviously false. We will likewise focus on empirically driven arguments that target the warrant of ordinary moral beliefs. But we'll set aside attempts to debunk a particular meta-ethical theory, such as moral realism or the view that there are objective moral truths (e.g., Street 2006). One's moral beliefs can be challenged regardless of whether they presuppose a robust form of objectivity. Even if the truth of any moral belief is response-dependent or relative to some degree, any plausible account of moral truths (and how we know them) will allow room for error and groundless belief. For example, one can be incorrect or unjustified in assuming that one's culture approves of incest.

As we saw in previous chapters, philosophers and scientists working in empirical ethics often use experimental methods to probe the moral intuitions of naïve participants. Cases of particular interest are those to which most people have the same reactions—not just philosophers but laypersons too. Because intuitions are automatic, gut-level responses, we typically can't know all the bases of our intuitions through introspection alone. In some cases, researchers may find that our intuitions are based on factors that are plainly morally irrelevant.

A general worry about empirical debunking is that it illicitly jumps the "is-ought gap" or attempts to derive normative conclusions from purely empirical premises. To understand the structure of debunking arguments, it's important to see that they require a normative premise: roughly, that some basis for moral belief is morally irrelevant (Kumar & Campbell 2012; Greene 2014). For example, suppose a debunking argument pointed to implicit racial bias as an influence on moral beliefs about a criminal's culpability and blameworthiness. The argument would then rest on the normative premise that race by itself is, of course, a morally irrelevant basis for such beliefs. The justification for normative premises like this need not be empirical. A debunking argument is successful, in part, to the extent that its normative premise is more plausible than the moral beliefs that the argument attempts to debunk. Plainly, then, empirical debunking arguments can respect the is-ought gap.

The best debunking arguments combine empirical claims about the sources of moral beliefs with one or more normative premises to draw a normative conclusion, which is that certain target moral beliefs are unjustified. Note that the conclusion is epistemic and second-order—yielding a verdict about the justification of one's beliefs. Nonetheless, the conclusion can have first-order moral implications. Discovering that some moral belief is unjustified motivates abandoning it. Furthermore, if there is a tension among a set of beliefs and we find out that one subset is unjustified, then that lends support to the other, conflicting subset. Empirical research, then, doesn't simply tell us what our moral beliefs are: it can offer suggestions about what they ought to be, when combined with a plausible normative assumption.

4.3 The Debunker's Dilemma

Philosophers who develop wide-ranging debunking arguments defend an empirical theory about the grounds of all or some large class of moral beliefs. They also defend a normative claim that these grounds are not sufficient to confer justification. Proponents of evolutionary debunking arguments, for example, claim that natural selection gave rise to moral beliefs and that natural selection does not track moral truth. Proponents of psychological debunking arguments claim that affective heuristics or framing effects substantially ground certain moral beliefs and do not
confer justification on them. If any of these arguments are sound, it seems we must abandon many or all of our moral beliefs.

These forays into empirical ethics suggest a general schema. All of the debunking arguments attack one or another class of moral beliefs as unjustified on the grounds that they are based on a defective process. The process is epistemically defective in the sense that it is unreliable, insensitive to evidence, or otherwise yields beliefs that are unjustified or unwarranted (Nichols 2014). As many have pointed out, this is similar to arguments that debunk a belief by arguing that it is based on wishful thinking, guesswork, motivated reasoning, rationalization, and paranoia. A process is typically epistemically defective in ethics if it is a poor indicator of moral rightness or wrongness, which includes general processes like wishful thinking but may also include other processes specific to forming moral beliefs, such as egocentricity, prejudice, favoritism, jealousy, and narrow-mindedness. Thus, for a given subject or group (S), class of moral beliefs (B), and an epistemically defective process (P), a Process Debunking Schema can be constructed as follows (cf. Kahane 2011; Nichols 2014):

1. For S, B is mainly based on P. (empirical premise)
2. P is epistemically defective. (normative premise)
So: 3. S is unjustified in holding B.

Arguably, one lacks justification for the targeted beliefs only if one is aware of these premises, or perhaps if one should be aware of them (cf. Sinnott-Armstrong 2008). On that assumption, the conclusion applies only to those people who are aware or should be aware. For simplicity's sake, however, I won't make this explicit in the schema.

Once we understand the structure of sweeping debunking arguments, we begin to see their shortcomings. Wide-ranging debunkers confront a kind of dilemma or predicament (Kumar & May forthcoming), due to a tension between their premises ("dilemma" in the ordinary, not logical, sense of the term). First, debunkers can fail to identify a process of moral belief formation that is epistemically defective (the normative flaw). If we aren't confident that the process is genuinely defective, then we can't use it to challenge moral beliefs about which we are relatively more confident. Second, one might fail to identify a main basis for belief (the empirical flaw). If some genuinely defective process is a cause of belief, but it's not the main cause of belief, and the belief is also based on other processes that don't seem defective, then the debunking argument is weak. It may lower the justification of the relevant moral beliefs, but only to a degree that does not render them unjustified overall. Those convinced by the argument might be pressured to reexamine their moral beliefs, but they needn't abandon the beliefs altogether.

Each sweeping debunking argument struggles with one of these aspects of the debunker's dilemma. It will become clear that this struggle is largely due to a tension or trade-off: establishing a plausible empirical premise leads to a corresponding normative premise that is implausible (and vice versa). We'll examine four wide-ranging debunking arguments that have garnered much attention. But the discussion of any single one of them must be somewhat compressed since the aim is to bring to light difficulties common to them all.

4.4 Emotions

Some of our moral beliefs can be influenced by emotions, which needn’t be problematic. As we’ve seen in previous chapters, emotions can at least facilitate moral reasoning by drawing one’s attention to relevant information. However, emotions can be incidental, failing to be a response to some relevant feature of the situation one is evaluating. It would certainly be suspect to base one’s belief on such mere feelings, and perhaps some emotions are systematically incidental in certain conditions. The many studies on disgust, in particular, might threaten to debunk all moral beliefs based on this peculiar emotion. The consensus among scientists is that disgust arose as a biological adaptation for detecting and avoiding pathogens that cause disease and infection (Rozin et al. 2008; Tybur et al. 2013). Only later on was disgust “co-opted” in moral cognition, and now it seems we can find ourselves disgusted by “impure” behavior, the violation of taboos, cheating, hypocrisy, and the like (Kelly 2011; Kumar 2017). Is this a defective basis for moral belief? The mechanisms underlying moral disgust seem to inherit the functionality of its pathogen-oriented precursor. Daniel Kelly (2011) argues that disgust has been designed to follow the rule “better safe than sorry,” as it’s much better to be oversensitive to germs than undersensitive. That is, it’s a mechanism that frequently generates false positives, for the sake of minimizing false negatives, and is therefore unreliable. Kelly concludes that all moral beliefs driven primarily by feelings of disgust are unjustified: “repugnance is simply irrelevant to moral justification” (148). In a similar vein, Martha Nussbaum (2004) argues that disgust is part of a class of emotions, which includes shame, that are “unreliable as guides to public practice” (13). Disgust, she writes, is “very different from anger, in that its thought-content is typically unreasonable, embodying magical ideas of contamination, and impossible aspirations to purity, immortality, and nonanimality, that are just not in line with human life as we know it” (2004: 14). A key problem with disgust, according to its critics, is that it seems to influence moral judgment even when it provides no morally relevant information. Studies that induce incidental feelings of an emotion suggest an influence on moral belief that is epistemically defective, at least when they are irrelevant to the action evaluated. This raises the worry that some such influences of emotion provide an unreliable heuristic guiding moral decisions. As Daryl Cameron and his collaborators put it, “the influence of irrelevant emotions is problematic because it suggests that moral judgment may be capricious and unreliable” (2013: 720; cf. also Sauer 2017). So, while disgust is the focus here, other incidental emotions may likewise have debunking potential. Even focusing only on beliefs influenced by disgust we can reveal the tension in the debunker’s dilemma. Some moral beliefs can be influenced by disgust to some extent, especially when the response is attuned to morally relevant information (see Chapter 2, §2.2). But this plausible empirical premise compromises the normative claim that this influence is epistemically defective. The eminently plausible normative premise, which most debunkers seem to have in mind, focuses instead on incidental disgust. We’re certainly unwarranted in forming moral beliefs based on feelings of repugnance that have nothing to do with the situation being evaluated. 
The problem then, however, is with the empirical premise. Incidental emotions may influence moral judgment to some extent. But we saw in Chapter 2 that the effect is ever so slight, particularly in the well-studied case of disgust. Across over a dozen experiments, incidental disgust only sometimes makes moral beliefs slightly harsher, consistently failing to alter the valence of moral judgments, whether concerning moral violations or morally neutral scenarios. The researchers sometimes—in fact, rarely—find that
these differences are statistically significant, but at best that just means, roughly, that the difference is not likely due to chance. On the face of it, however, small differences on the same side of a fine-grained scale aren't substantial for moral judgment. In particular, for our purposes, they don't provide strong evidence that disgust alone can be a main basis for moral belief.

Sometimes an experimental effect is substantial even if it's only a small, but statistically significant, shift on the given scale of measurement. John Doris raises this point in a slightly different context: "While the impact of each individual goofy influence may be statistically small, just as with medical interventions, the aggregate effect may be quite potent" (2015: 64). Implicit biases, for example, may exert small effects on individuals but generate large-scale social injustices (Greenwald, Banaji, & Nosek 2015). However, it matters greatly what's being measured and what questions researchers are attempting to address. For example, relatively small movement on a scale can be substantial if researchers are measuring rates of infant mortality because any decrease would be important no matter how slight. In experimental research on moral judgment, however, very small shifts on a fine-grained scale are not clearly substantial. At any rate, the effects of incidental disgust cannot support the empirical premise of a process debunking argument, which seeks a main basis for belief. So extant evidence only suggests that incidental disgust may sometimes make one think an action is slightly worse, but one judges the action as right or wrong regardless of incidental feelings of disgust. While feeling disgust may be a common consequence of judging an action immoral, we don't have evidence that it's a main basis for a large class of moral beliefs.

Of course, non-incidental or integral feelings of disgust may have a much stronger influence on moral belief. Some research suggests that feelings of disgust can be flexibly attuned by learning mechanisms and are therefore not generally unreliable (Kumar 2017). Now we have a plausible empirical premise, but the normative premise consequently suffers. When disgust is integral and carries with it morally relevant information, it's not an epistemically defective basis for one's beliefs (cf. Plakias 2013). The tension between the debunker's two premises rears its ugly head. It's not simply that the argument is unsound because one premise is false. More specifically, when one targets a large class of moral beliefs, one can satisfy one of the premises only at the expense of the other.
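To put the earlier point about statistical significance in concrete terms, here is a minimal numerical sketch. The figures are hypothetical, not drawn from any of the studies cited in this chapter; the sketch only illustrates how a difference can clear a significance test while remaining a tiny shift on the same side of a rating scale.

```python
# Hypothetical illustration: with a large enough sample, a small shift on a
# 7-point wrongness scale can be "statistically significant" yet substantively tiny.
import math

n = 2000                                  # hypothetical participants per condition
mean_control, mean_disgust = 4.0, 4.2     # hypothetical mean ratings (1 = fine, 7 = very wrong)
sd = 1.5                                  # hypothetical standard deviation in both conditions

se = sd * math.sqrt(2 / n)                # standard error of the difference in means
t = (mean_disgust - mean_control) / se    # roughly 4.2, well past the ~1.96 cutoff for p < .05
d = (mean_disgust - mean_control) / sd    # Cohen's d of roughly 0.13, conventionally a small effect

print(f"t = {t:.1f} (significant), d = {d:.2f} (small)")
# Both means sit on the same side of the scale's midpoint; the valence is unchanged.
```

Significance tests only speak to whether a difference is likely due to chance; they do not show that incidental disgust changes whether an action is judged right or wrong.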

4.5 Framing Effects

An even more sweeping debunking argument targets all moral intuitions, not just those moral beliefs influenced by emotions. The class of moral beliefs based on intuitions is arguably quite large, perhaps all of them. Some experimental evidence, however, suggests that moral intuitions are subject to framing effects: the way that a moral problem is framed can affect intuitive beliefs about which solutions are appropriate. Is it problematic to base one's moral belief on how a dilemma is framed? It depends on the frame. As we've seen, some moral beliefs are influenced by whether an outcome is framed as arising from an action rather than an omission (Chapter 3, §3.2.2). Some regard the distinction between actions and omissions as morally irrelevant (e.g., Sunstein 2005; Greene 2013), but that takes on a controversial normative premise. Such a premise could be defended by a different debunking argument that directly targets such deontological heuristics, but we'll address that later in this chapter. Debunkers who point to framing effects (e.g., Sinnott-Armstrong 2008; Nadelhoffer & Feltz 2008) instead have in mind frames that are clearly morally irrelevant, such
the order in which a series of scenarios is presented (e.g., Liao et al. 2012; Schwitzgebel & Cushman 2012). Here we have a plausible normative premise for a debunking argument, but we’ll see that, in keeping with the debunker’s dilemma, the corresponding empirical premise is thereby compromised. Some of the relevant studies involve the trolley dilemmas. Petrinovich and O’Neill (1996, Study 1), for example, presented hundreds of participants with variations of scenarios in which one could only either kill one to save five or avoid killing one by letting five die. Participants were randomly assigned to evaluate versions in which the dilemma focused on who would die or be killed: either one will die or five will. The other group evaluated versions in which the choice was framed in terms of who would be saved: either five will be saved or one will. The researchers found that people’s judgments about the various trolley dilemmas were affected by this framing, which appeared to involve describing the very same dilemmas in different but equivalent terms. Similarly, Nadelhoffer and Feltz (2008) presented participants with a version of the Switch dilemma in which one could divert the trolley to save five or let it run the course and kill one. Participants were randomly assigned to read either a version in which they themselves were the actors or in which another person was and so they were simply observers. The choice was either “What should you do?” vs. “What should John do?” Again, the moral dilemma seems to be the same either way, yet participants’ moral judgments on average were noticeably affected by this “actor-observer bias.” Now, suppose one’s moral intuitions could be successfully debunked based on framing effects. Perhaps moral beliefs can still be justified if based on more than mere intuition, yielding inferential justification, as Sinnott-Armstrong (2008) maintains. However, this non-skeptical conclusion is difficult to establish if many or nearly all of our moral beliefs are based on intuitions. As Sinnott-Armstrong puts it himself: “We could never get started on everyday moral reasoning about any moral problem without relying on moral intuitions” (47). One might expect that we’ll eventually acquire evidence that certain moral intuitions are reliable, but it’s unclear how we could be confident that some evidence confirms the reliability of some moral intuitions without relying on intuitions about what counts as moral accuracy. Moreover, there is some empirical evidence that even philosophers’ moral intuitions are subject to the same sorts of framing effects (e.g., Schwitzgebel & Cushman 2012). At the very least, anyone who is (or perhaps should be) aware of such framing effects arguably lacks moral knowledge, unless one can acquire the elusive, independent confirmation required for justification. In this way, psychological debunking arguments targeting moral intuitions might be extended to show that moral beliefs in general are unjustified. This wide-ranging skeptical attack relies on what seems to be an eminently plausible normative premise. Mere framing, such as the order in which one considers moral dilemmas, certainly seems to be an epistemically defective basis for moral belief. Now, in some cases, it may be rational to update one’s beliefs in light of the order in which evidence is presented (Horne & Livengood 2017). Suppose, for example, that you’re one of those poor souls for whom cilantro tastes like soap, and you’re anxious to know whether your burrito contains any. 
If you first form the belief that it does based on looking inside it and finding cilantro, then you're unlikely to change your mind if you subsequently see written on the wrapper "no cilantro." Yet the reverse order of evidence does warrant change in belief. Does the same apply to moral beliefs? In general, it's not clear that having different moral intuitions based on order is relevantly similar to such cases in which it's virtuous to update one's beliefs based on ordering. Moreover, other framing effects on moral intuitions needn't appeal to order but instead to other
problematic frames, such as whether sacrificing one to save five is described as killing one person or saving five (e.g., Petrinovich & O’Neill 1996). What about the empirical premise: are moral intuitions mainly based on framing effects? Like other intuitions, moral intuitions are in general “subject to framing effects.” However, this phrase is ambiguous, as it leaves the extent of the effect unspecified. It could mean that moral intuitions are only slightly affected by framing effects—e.g., a small proportion of responses change or overall confidence changes to a small degree. What a debunking argument requires, however, is that framing effects alone alter moral beliefs, such that people regularly tend to lose their belief or change its content (a point made independently by Shafer-Landau 2008). Yet the evidence fails to establish this, for two main reasons. First, in the vast majority of studies, moral judgments do not substantially change merely due to order or word choice. Some experiments that test for framing effects find no statistically significant difference between responses whatsoever. Other studies report finding effects but not substantial ones. For example, in Nadelhoffer and Feltz’s study, most participants still thought it was morally permissible to sacrifice one to save five. As the researchers recognize, only “25% of the participants’ intuitions were affected by the actor–observer difference” (2008: 140). While the difference in responses is statistically significant, this alone does not show that the difference is substantial. Other researchers measure moral judgments on scales and report only a slight shift on the same side of a scale of measurement, suggesting that the valence of the relevant beliefs doesn’t tend to change across conditions. Some results do straddle the midpoint, but only barely, suggesting that on average participants were ambivalent anyway and tend to lack confidence. Thus, the framing seems to at best only slightly affect some moral judgments, often to a small degree. This assessment of the evidence fits with a recent meta-analysis that suggests framing effects do not generally exert a substantial influence on moral intuitions. Roughly 80% of people’s moral intuitions subject to framing effects don’t change, and that figure excludes studies that found no effect (Demaree-Cotton 2016). Now, some particular framing effects do appear to have a powerful effect on people’s responses. Consider Tversky and Kahneman’s (1981) famous experiment in which participants were asked to choose between two hypothetical policies meant to address a fictional “Asian disease” outbreak that threatened a society. The vast majority of people would rather opt for the policy that would definitely save only 1/3 of the endangered group than go for a policy with only a 1/3 chance of saving everyone. However, on the alternative framing, most participants would rather take a 1/3 chance that no one dies than adopt a policy that would ensure that 2/3 die. In other words, people avoid the gamble when they can instead definitely save 1/3 of the group, yet they no longer prefer this very same option when it’s framed as ensuring that 2/3 will die. It’s quite striking that the vast majority of participants’ preferences reversed merely based on whether the policies were framed in terms of gains or losses. 
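The reversal is striking precisely because the two framings describe the very same pair of policies. A minimal sketch, using the 600-person group from the original version of the scenario, makes the equivalence explicit (the arithmetic, not the psychology, is the point):

```python
# The "gain" and "loss" framings of the disease scenario pick out the same options:
# every policy below has the same expected number of survivors.
GROUP = 600  # size of the threatened group in the standard version of the scenario

def expected_survivors(policy):
    """policy: list of (probability, survivors) outcomes."""
    return sum(p * survivors for p, survivors in policy)

sure_gain   = [(1.0, GROUP / 3)]                # "1/3 of the group will be saved"
gamble_gain = [(1/3, GROUP), (2/3, 0)]          # "1/3 chance everyone is saved"
sure_loss   = [(1.0, GROUP - 2 * GROUP / 3)]    # "2/3 of the group will die"
gamble_loss = [(1/3, GROUP), (2/3, 0)]          # "1/3 chance no one dies"

for name, policy in [("sure (gain frame)", sure_gain), ("gamble (gain frame)", gamble_gain),
                     ("sure (loss frame)", sure_loss), ("gamble (loss frame)", gamble_loss)]:
    print(f"{name}: expected survivors = {expected_survivors(policy):.0f}")  # all print 200
```

Whatever one thinks of risk attitudes, preferring the sure option under one description and the gamble under the other cannot be explained by any difference in the options themselves.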
It turns out, though, that this huge effect is an outlier, for a meta-analysis of 230 framing effects on risky choices indicates much smaller effects across the board (Kühberger 1998: 45). This is partly explained by Kühberger's finding that such effects aren't moderated just by loss/gain framing. In particular, presenting one alternative as risky while the other is riskless, as Tversky and Kahneman did, yields larger effects (1998: 36). Yet then the difficulties in such dilemmas don't arise for moral judgment specifically; rather, a wealth of studies reveals that we're just bad at reasoning with probabilities and risk generally (Kahneman 2011).

This brings us to a second major issue with framing effect studies: many aren't suited to drawing conclusions about all, or even most, moral beliefs. Researchers don't always measure
moral judgment specifically, asking only which of various policies participants "would prefer" or which action a participant "would perform" (as in, e.g., Petrinovich & O'Neill 1996; Tversky & Kahneman 1981). Kühberger's meta-analysis suggests that this is important, since framing effects were roughly five times larger when participants were asked about what they would choose vs. what they judge or rate to be the best option (1998: 36). Thus, the two main problems with framing effect studies are linked: when we home in on distinctively moral judgment and examine trends, not one or two provocative experiments, the effect is much smaller. In the end, like incidental disgust, framing effects may sometimes exert a negligible influence on moral judgment. As we've seen, though, there are numerous other experiments suggesting that rather different, and more morally relevant, factors are the central determinants of many moral beliefs (see Chapter 3). For example, people overwhelmingly condemn intentional harms but not those brought about accidentally. Consider the example of condemning someone for intentionally and successfully poisoning an innocent coworker. To my knowledge, no experimental evidence suggests that the valence of this moral judgment is substantially influenced by mere wording or order of presentation. There may be some moral beliefs for which framing effects play a substantial role, but moral judgment research taken as a whole suggests the relevant class is restricted to specific contexts (e.g., unconfident assessments about dilemmas whose non-moral content is especially difficult to process).

The trade-off in the debunker's dilemma also arises for framing effects. Cass Sunstein (2005) points to studies showing that moral intuitions are systematically affected by whether a bad outcome arises from someone's action rather than omission (as we saw in the previous chapter). Now we have a plausible empirical premise demonstrating a substantial influence on moral belief, but is this moral heuristic prone to error? Such a normative premise is not obvious and is certainly question-begging in this context. Again, establishing one premise in a wide-ranging debunking argument incurs the cost of adopting an implausible version of the other.

Some who point to framing effects on moral cognition may welcome the limitations imposed by the debunker's dilemma. Again, Sinnott-Armstrong admits that one can get independent confirmation that some moral intuitions are reliable. His concern is primarily to attack moral intuitionism, which claims that some moral beliefs are justified non-inferentially. Rightly or wrongly, he considers nearly any additional evidence beyond one's intuitive reaction to be independent confirmation (rather than simply undermining the debunking challenge). Our main concern here, however, is with whether morally irrelevant framing effects debunk ordinary moral belief, and careful attention to the relevant empirical studies suggests that they do not. The experimental evidence may make us slightly less confident in some or all of our moral judgments; we might even have less justification if justification comes in degrees. But the influence of framing effects does not render one's moral beliefs unjustified.

4.6 Evolutionary Pressures

One of the most ambitious, and most popular, debunking arguments of late appeals to evolutionary pressures on our moral beliefs, suggesting they're mere biological adaptations (see, e.g., Ruse 1986; Joyce 2006; Rosenberg 2011). Perhaps forming judgments about right and wrong facilitates cooperation in large groups with complex social hierarchies by providing a common code that maintains order and promotes cooperation. An evolutionary source of the general concept of morality might not seem problematic, but there are plausible evolutionary influences on the content of particular moral beliefs as well. Just consider common beliefs about
incest, reciprocity and fairness, cheating and free riding, special duties to one's kin, loyalty to one's group, retribution, and so on (Sober & Wilson 1998; Street 2006; Haidt 2012). Given that evolutionary forces plausibly influence some of our most basic moral beliefs, it might seem that we can't rationally maintain them.

Evolutionary debunkers formulate their arguments in different ways, so we'll have to determine which, if any, pose a threat. Some formulations turn on explanatory dispensability: the idea that the best evolutionary explanation of the existence of our moral beliefs doesn't presuppose their truth (cf. Joyce 2006: 211). One's moral beliefs may then be unjustified if one is aware of this fact or should be aware of it. Either way, presumably one then ought to give up commitment to moral facts because one lacks evidence for the existence of moral truths. This, however, does not amount to an epistemological debunking argument of the sort at issue here (cf. Wielenberg 2014: ch. 4). Instead, this is an iteration of the longstanding debate about whether moral facts can really explain anything and, if they can't, whether they deserve a place in our ontology. Explanatory dispensability could have a distinctively epistemic upshot if one's beliefs become unjustified upon learning that their contents are explanatorily dispensable. We needn't scrutinize this controversial epistemological claim (but see Wielenberg 2014: ch. 4; Clarke-Doane 2015). Instead we can focus on the more empirical problem: an evolutionary explanation of one's moral beliefs is only partial. It provides only the ultimate explanation of some of our moral dispositions (and perhaps moral concepts) in terms of natural selection. However, the best explanation of our moral beliefs would be complete, including an account of their proximate causes. Yet, even if the ultimate evolutionary explanation needn't appeal to moral facts, the true proximate explanation might (Mogensen 2015). In fact, no evolutionary debunker has provided a compelling empirical premise to the effect that the complete explanation of our moral beliefs does not appeal at all to their contents. This key premise is often simply assumed for the sake of argument to see what it implies (see, e.g., Lillehammer 2003; Locke 2014). Our concern, however, is with whether our moral beliefs are empirically debunked, not with whether they would be if we make an implausible empirical assumption. So we ought to consider directly what the proximate causes of our moral beliefs are, how evolutionary pressures have shaped those causes, and whether evolution's influence renders moral beliefs unjustified.

The best evolutionary debunking arguments point to a defective process of belief-formation (cf. Nichols 2014). The idea is that natural selection favors moral judgments that are expedient, not those that correspond to moral facts and properties. If we have our moral beliefs merely because they enhance survival and reproduction, at best some moral beliefs would be true by mere dint of luck, a mere coincidence akin to forming an accurate belief about some historical fact by swallowing a pill (Joyce 2006: 179). For all the debunking argument says, there may well be moral truths and they may somehow figure in the complete explanation of some moral beliefs. However, understanding the process of natural selection somehow forces, as Joyce puts it, "the recognition that we have no grounds one way or the other for maintaining these beliefs" (211).
To evaluate the debunker's claim that evolutionary processes fail to track the moral truth, we need some rough, but uncontroversial, conception of human evolution and moral truth. Of course, it's tendentious, to say the least, which is the correct moral theory. But debunkers need only appeal to an uncontroversial aspect of moral truth that we have reason to expect evolutionary processes did not track (compare Shafer-Landau 2012). In particular, we need a story about how natural selection (the ultimate cause) generated psychological mechanisms (proximate cause) that, along with environmental factors (e.g., cultural transmission), lead
humans to form moral beliefs. What uncontroversial conceptions of human evolution, the environment, and moral truth render these in tension? Our best theories of the evolution of human psychology suggest that our deeply social living conditions required navigating cooperation with others in groups not exclusively comprised of kin. Over time, it was plausibly fitness-enhancing for individuals and their groups to have various altruistic tendencies, compassion for the suffering of others, and importantly concerns about reciprocity, fairness, cheater detection, loyalty, harm reduction, flourishing of one’s group, and the like (see, e.g., Sober & Wilson 1998; Haidt 2012). Compare the way that evolution has instilled in us various prudential values and preferences. For example, we strongly desire sweet and fatty foods, plausibly for evolutionary reasons, and such desires strongly influence our judgments about what is good—e.g., sweet foods are valuable. Even if we sometimes come to judge that we ought to forgo sweets, that is likewise due to our desire to maintain good health, attract mates, or just garner social status. In this way, simple evolutionary forces can pervasively influence our evaluative tendencies, which then influence our judgments about what is good and bad, right and wrong (cf. Street 2006). Of course, one’s moral beliefs are also influenced by individual experience and the reasoning and experience of one’s ancestors, which one learns through either explicit instruction or the adoption of cultural norms that have evolved over generations (Henrich 2016). Ultimately, a confluence of psychological factors—some adaptive, some not—come together to generate one’s particular moral beliefs and the way in which they hang together. We end up with different sets of moral beliefs but we share some common general values, such as fairness, loyalty, and kindness (more on this in Chapter 5). Indeed, this can be expected from our evolutionary past, our regard for reason, and our sociality. As Darwin once put it, “the social instincts… with the aid of active intellectual powers and the effects of habits, naturally lead to the golden rule, ‘As ye would that men should do to you, do ye to them likewise,’ and this lies at the foundation of morality” (1871: ch. 4). Would such a genealogy lead us to form moral beliefs that fail to track the moral truth? It depends partly on what constitutes moral truth, but many plausible theories pose a problem for evolutionary debunking. One common thread in common sense morality and ethical theory is indeed something like the Golden Rule. If anything like it is a core element of moral truth, then Darwinian processes are not necessarily disconnected from the moral facts. For example, if either Kantianism, contractualism, or contractarianism is correct, then it will be far from a coincidence that evolutionary forces (e.g., reciprocal altruism) nudged human moral beliefs to track facts about social norms, respect for persons, or even what rules others can reasonably reject (see James 2009; Wielenberg 2014). The same goes for theories, such as virtue ethics or an ethics of care, that focus on valuing prudence, justice, loyalty, benevolence, honesty, courage, temperance, and so forth. 
Consider even utilitarianism: combined with individual reasoning, experience, and cultural transmission, our evolved tendencies plausibly shape moral beliefs that are sufficiently sensitive to what promotes happiness, well-being, or preference satisfaction in our own groups and those we cooperate with. This is especially plausible for "indirect" consequentialism, which holds that actions often maximize well-being if they proceed from rule-based reasoning and deliberation prevalent in ordinary moral thought. Even many act consequentialists believe, as Greene does, that as "private individuals we should nearly always respect the conventional moral rules" (2014: 717), since this will typically maximize overall well-being. Thus, given various plausible conceptions of moral truth, there is no reason to think that evolutionary forces, along with more proximate causes, would lead to beliefs that are
sufficiently off track or distorted. (Note: The point in this paragraph is somewhat similar to those made by proponents of so-called "third factor" views. But keep in mind that such views are often developed in an effort to save moral realism, not the justification of ordinary moral beliefs.)

Now, we're certainly unwarranted in believing that an action is moral simply because it promotes the survival of the fittest or helps to propagate one's genes. But, despite being influenced by Darwinian forces, we don't form moral beliefs in this simplistic sort of way. Compare the process of punishing a child when she makes her parent angry, which by itself seems utterly blind to the moral truth. How could angering someone have anything to do with the moral facts? But this process amounts to moral learning for the child if combined with the further fact that her parent regularly becomes angry when people lie, cheat, steal, free ride, assault others, treat people unfairly, and so on. Of course, on some ethical theories, survival of one's group and oneself can't possibly have any connection whatsoever to the moral truth, even when combined with exercises of individual experience, reasoning, and transmission of cultural wisdom. But evolutionary debunkers don't, and shouldn't, take themselves to be relying on a controversial conception of ethics. As Rosenberg puts it, the debunker's claim is that natural selection "can't be a process that's reliable for providing us with what we consider correct moral beliefs" (2011: 221, emphasis added).

But don't evolutionary debunkers often assume that moral truths could turn out to be radically unlike how we ordinarily conceive of them? Isn't the idea that it could turn out that the morally right thing to do is to, say, stare at the sun all day while blinking every other second? No, because, as Katia Vavova says, if we assume "we have no idea what morality is about" then we can't run the debunking argument, for we'd "have no idea if evolutionary forces would have pushed us toward or away from the truth" (2015: 112). Compare an attempt to debunk belief in something called "gronk" based on wishful thinking. The debunker must assume something about what would make true the thing that's believed. How could we know whether wishful thinking fails to track the truth about the existence of a gronk if we don't take a stance on what it would be for that being to exist? If, for example, "gronk" simply means, "whatever makes me think there is meaning in life," then wishful thinking isn't a bad approach. Of course, "gronk" is a made-up word, but the point is that a process can be defective for one kind of belief but not another, and whether it's defective depends in part on some minimal characterization of what's believed. Similarly, debunkers must either take a stance on what makes moral beliefs true or show that evolutionary pressures fail to track any account of moral truth worth its salt. Failure to do so is self-undermining (cf. Locke 2014).

The question-begging charge may be fitting when addressing evolutionary arguments that target an extreme brand of moral realism. On any uncontroversial conception of moral truth, evolution may well provide a challenge for those who think additionally that such truths are entirely independent of how moral agents think, feel, and behave (see Street 2006). This provides a way to argue that evolution and moral truth are disconnected without making any controversial assumptions about ethics (beyond that its objectivity is essential).
However, recall that our aim is to assess epistemological debunking arguments targeting ordinary moral beliefs, regardless of whether they are construed as presupposing objectivity, especially of such an extreme form. The main problem for our evolutionary debunkers, then, is that they offer no reason to believe that a plausible account of human evolution will lead to unjustified moral beliefs. These debunkers take on a hefty burden of spelling out exactly what the evolutionary forces are and why they must lead to beliefs that are entirely disconnected from the moral truth (see Vavova 2014a). One might retort that moral beliefs are debunked unless we can show that they aren't
based on defective processes, but this misconceives the dialectic (cf. Locke 2014). Debunking arguments aren’t like skeptical hypothesis arguments one encounters about the external world, where the skeptic merely raises the possibility of error. Debunking arguments are instead tasked with providing evidence of actual unreliability, not merely challenging us to provide justification for our moral beliefs (cf. May 2013b). At any rate, our concern is to evaluate arguments that take on the burden of showing moral beliefs to be based on defective processes. In the case of evolutionary debunking arguments, this burden has not been met. In particular, the normative premise is unsupported. The tension inherent in the debunker’s dilemma is on full view here. Evolutionary debunkers could establish a plausible normative premise: it’s surely epistemically defective to form one’s moral beliefs merely on the basis of what’s fitness-enhancing. But then the empirical premise is implausible: we don’t form our moral beliefs primarily on this basis. While fitness considerations are part of the ultimate explanation, the complete proximate explanation appeals also to our concerns about fairness, detection of cheaters, the welfare of others, and so on. Now we have a plausible empirical premise. What about its corresponding normative premise? Are these considerations morally irrelevant or epistemically defective? Certainly not, given that any worthwhile conception of moral truth will treat fairness, welfare, and the like as morally relevant considerations. Again, an evolutionary debunker might try to charge us with begging the question, for we’re making assumptions about moral truths. But, as we’ve seen, both we and the debunker must do this; and we have not helped ourselves to any contentious assumptions about moral truth.

4.7 Automatic Emotional Heuristics

Other empirical attacks don't target all moral beliefs but still a wide range of them, such as all characteristically deontological ones (e.g., Singer 2005; Greene 2014). As we saw in Chapter 3, there are many intuitions that typically fit with deontological theories in ethics, such as the intuition that lying or breaking a promise is immoral even though it has better consequences, or that sacrificing one person as a means to saving five others is wrong. Also targeted are patterns of intuitions that treat actions with bad outcomes as always worse than omissions with the same outcomes. Greene proclaims that "we can explain our tendency to draw a fundamental moral distinction between actions and omissions in terms of more basic cognitive mechanisms, ones that have nothing to do with morality per se" (2013: 241). Sunstein (2005) likewise regards our implicit commitment to such a distinction as an overly simplistic heuristic, which amounts to a "moral mistake" that "pervades both commonsense morality and law" (541).

Are these intuitively compelling distinctions treated as morally significant for morally irrelevant reasons? We've already seen that Greene and others have gathered some evidence that some characteristically deontological intuitions are influenced partly by whether the harm involves something like personal harm (see Chapter 3, §3.3.3). There is some contrary empirical evidence that the dilemmas used in this research don't track characteristically utilitarian vs. deontological attitudes (Kahane et al. 2012; Kahane et al. 2015). However, even setting that aside, Greene's evidence alone doesn't debunk the relevant intuitions. At best it only suggests that we can't rationally maintain different judgments about cases like Switch and Footbridge primarily on the grounds that one involves personal force (Kumar & Campbell 2012). In order to be consistent, we must treat like cases alike, either by judging it morally acceptable to sacrifice the one to save the five in both kinds of cases (the "utilitarian" resolution) or in neither (the "deontological"
resolution). Of course, showing that we should withhold judgment about a pair of cases is an important result (more on this in the next chapter). But that alone won’t debunk nonconsequentialist intuitions specifically. Greene does further argue that we should in general resolve such inconsistencies by accepting the counter-intuitive consequentialist verdicts, because these are allegedly more trustworthy in such contexts. Drawing on some of his own research and that of others, Greene maintains that characteristically deontological intuitions are generated by a quick-and-dirty system heavily influenced by automatic emotional heuristics. Key to this automatic system are brain regions, such as the ventromedial prefrontal cortex, which are associated with the use of gut feelings in decision-making (recall Chapter 2, §2.4). These brain areas apparently give rise to an array of automatic heuristics, some of which we’ve learned through experience while others evolved to work in situations common long ago in the environment of evolutionary adaptedness, when our ancestors developed the ability to cooperate in small groups. In novel situations, such moral heuristics are likely to lead us astray, according to Greene (cf. also Singer 2005; Sunstein 2005). Especially for the complex and controversial issues that plague our modern world, these heuristics are operating out of their element. Our automatic moral intuitions, which happen to be characteristically non-utilitarian, are therefore unjustified at least when applied to contemporary moral disputes. Whether commonsense moral intuitions can be debunked empirically depends on what’s supposed to be the morally irrelevant factor or defective process on which they’re based. This has proven to be something of a moving target. We’ll see that the target largely moves because it struggles to jointly satisfy both premises of a wide-ranging debunking argument. Sometimes utilitarian debunkers suggest that the problematic factor is emotion or evolutionary pressures or both (cf. Singer 2005; Greene 2008). However, while these factors may explain some key differences between certain kinds of moral intuitions, we’ve seen that evolutionary influences as such are not necessarily debunking. The same goes for emotions, at least those that are integral and not merely incidental. Integral emotions can alert us to morally relevant factors, so it’s not problematic in general to base one’s belief on them (see, e.g., Berker 2009; Kumar & Campbell 2012). Suppose, for example, that someone tried to debunk your belief that bullying, but not shaming, is wrong simply by demonstrating that bullying makes you angry while shaming doesn’t. Since feeling angry about one kind of action may track a morally relevant difference, identifying an emotional process alone isn’t sufficient to render the belief unjustified. As Greene himself now seems to recognize, the allegedly defective process can’t be emotion, even if we can tell a compelling story about how such reactions were fitness-enhancing. More recently, Greene suggests that the relevant factor in many cases is personal force. We intuitively treat bodily harm as worse when “the agent directly impacts the victim with the force of his/her muscles” (2014: 709). This certainly seems morally irrelevant, but Greene recognizes that this factor only substantially influences moral judgments about harm when it interacts with certain non-consequentialist factors, roughly intention and commission. 
Greene now (2013) unifies these under the heading of prototypically violent acts (245ff.): we have an "alarm gizmo" that makes us more likely to condemn a harm if it involves such acts, even if we think the act maximizes utility. Evolution and personal experience have instilled in our moral minds a "myopic module" that specifically condemns prototypically violent acts, which involve at least three features: (i) action rather than omission, (ii) harming as a means, and (iii) personal force. Importantly, according to Greene: "It seems that these are not three separate criteria,
employed in checklist fashion. Rather, they appear to be intertwined in the operation of our alarm gizmo, forming an organic whole" (246).

This myopic module theory doesn't quite account for the data, however. As we saw in the previous chapter, multiple experiments consistently suggest that these elements of prototypically violent acts do have independent moral significance. The meta-analysis concerning the means/byproduct effect suggests that the distinction alone does have a small impact on moral cognition, even if it's heavily mediated by whether a harm involves something like contact or personal force (Feltz & May 2017). And perhaps personal force merely draws one's attention to the fact that the actor is harming as a means, which amplifies its impact on moral judgment. Moreover, we do systematically consider acts to be worse than omissions, regardless of whether they involve contact or personal force. Finally, various studies suggest that the combination of personal force and causing as a means influences moral intuitions about cases that don't involve violence. Natalie Gold and colleagues (2013) report that the Switch/Footbridge asymmetry holds for outcomes other than bodily harm or death, including emotional suffering and financial loss. So it appears that the moral significance attached to the three factors in the alleged gizmo—commission, causing as a means, and personal force—is not tied specifically to representations of prototypically violent acts. Instead, different lines of evidence suggest that we have characteristically "deontological" intuitions about some cases but not others at least partly because they involve something like greater agential involvement in the generation of a bad outcome. Evaluations of cases like Switch and Footbridge, for example, likely diverge so strongly because they differ in terms of both personal force and harming as a means.

At any rate, the more refined empirical premise predictably runs into normative problems, since being prototypically violent isn't necessarily a morally irrelevant factor. We may be warranted in distinguishing unintentionally harming by omission, say, from harming another actively, purposefully, and in a personal way. Often researchers overlook how such acts relate to the legal notion of battery (or assault), which is an important element of moral and legal reasoning (Mikhail 2014). In general, insofar as prototypical violence involves deontological elements, it's question-begging to assert that this factor is part of an epistemically defective process of belief formation. Again, it's not question-begging to assert that mere personal force is morally irrelevant, but the data don't suggest that this alone substantially influences moral cognition.
Thus, Greene (2014) argues that it would be a "cognitive miracle" if we "had reliably good moral instincts" (715) about these "unfamiliar" moral problems, which is a technical term referring to: "ones with which we have inadequate evolutionary, cultural, or personal experience" (714). Here Greene has in mind the disputed moral issues of our time, such as abortion and euthanasia. Surely we shouldn't trust intuitions about which we have inadequate experience. Now we're back to an eminently plausible normative claim, but the requisite empirical claims become
dubious. The crucial question is whether our automatic moral intuitions lack adequate experience with the relevant moral problems. The answer depends on the case. Such intuitions can sometimes be obstacles to forming rational beliefs about how to resolve current crises. Automatic intuitions about property rights, for example, may hinder us from addressing the yawning wealth gap. But we can accept such a limited critique without impugning other “deontological” intuitions about, say, autonomy and self-respect that arguably help many of us conceptualize the immorality of slavery (regardless of such an institution’s effects on overall utility). There is no reason to think that in general automatic moral intuitions are inadequately attuned to the problems to which they are typically brought to bear. In fact, we should expect certain widespread and fundamental deontological considerations (such as intention and commission) to serve as common ground among diverse groups with differences in specific norms (cf. Wielenberg 2014: 131, n. 33). After all, as Greene realizes, “the means/side-effect distinction is widely regarded as morally relevant” (2013: 220). Part of the implausibility of Greene’s present empirical claim is the assumption that automatic intuitions are modular in a way that makes them inflexible to changing circumstances or insensitive to new information. While moral cognition resembles linguistic cognition’s automatic and unconscious application of implicit rules, such rules may be flexibly shaped by one’s experiences. We’ve seen that there is some evidence that ordinary moral intuitions are driven in part by some elements that are fairly universal or at least wide-spread across the world (e.g., Hauser et al. 2007; Barrett et al. 2016). These elements may be in some sense innate (Dwyer 2009; Mikhail 2011), but this does not necessarily entail inflexible modularity. We already saw that moral reasoning can shape one’s automatic emotional responses—e.g., when one becomes disgusted by eating meat after coming to believe it’s immoral (see Chapter 2, §2.3.1 and Chapter 3, §3.4). Indeed, there’s a growing literature in moral learning theory suggesting that automatic intuitions are flexibly shaped by local material and social conditions, a process that can be modeled computationally similar to non-moral forms of unconscious learning (see, e.g., Crockett 2013; Seligman et al. 2016). Consider even moral problems that Greene and others think clearly reflect the irrationality of automatic intuition, such as the case of adult siblings, Julie and Mark, who decide to have sex with each other. Many participants in various studies believe that this act is immoral, even though the siblings use ample protection and the one-off encounter doesn’t affect their healthy sibling relationship (Haidt 2012). On Greene’s view, this judgment stems from an automatic, emotional aversion to incest that is part of a generally unreliable guide to novel situations (2014: 712). After all, while incest normally has rather bad consequences, we’re bringing our automatic moral reactions to bear on a decidedly abnormal situation in which these consequences are absent or extremely unlikely. However, this concedes that one’s automatic intuitions are not mere emotional aversions but are tracking the potential harms that incest regularly inflicts, the threat of which may still be perceived to be very real in Julie and Mark’s case, averted only by luck (cf. Jacobson 2012; Railton 2014). 
Indeed, recent empirical evidence suggests that most people don't think actions like Julie and Mark's are truly harmless (Royzman et al. 2015).

In sum, the attempt to undermine moral judgments driven by automatic intuitions actively struggles to satisfy both premises of the process-debunking schema. On the one hand, while personal force is a morally irrelevant factor, it's not empirically plausible that it grounds a wide range of moral beliefs. Even if the dubious distinction influences our judgments about some types of scenarios, we have not thereby impugned a large class of moral beliefs—certainly not all
those influenced by automatic moral intuitions, which may commonly be “deontological” or non-consequentialist. Rather, such intuitions are driven broadly by how involved an agent is in bringing about an outcome. On the other hand, establishing an adequate empirical premise identifies a process that’s not clearly defective. An aversion to prototypically violent acts, defined in terms of non-consequentialist factors like the act/omission distinction or other elements of agential involvement, is not necessarily defective. The same goes for affective heuristics. Even in what seem the most obvious cases in which our automatic intuitions are inflexible and untrustworthy (e.g., adult consensual incest with protection), matters are more complicated than debunkers might have hoped. One cannot cast doubt on the veracity of all automatic moral intuitions on the grounds that they’re generally unable to be attuned to today’s moral problems through evolutionary, cultural, or personal experience. If anything, core elements of ordinary moral cognition serve as common ground by which to resolve disputes among moral tribes.

4.8 Explaining the Dilemma

Wide-ranging debunkers face a dilemma. Either they do not identify a genuinely defective belief-forming process or they do but the defective process is not a sufficiently central factor in the genesis or maintenance of many moral beliefs (see Table 4.1).

Table 4.1: Example Processes Subject to the Debunker's Dilemma

General Process        | Specific Influence                                               | Defective Influence? | Substantial Influence?
Emotions               | Incidental (e.g., hypnotic disgust)                              | Yes                  | No
Emotions               | Integral (e.g., anger toward injustice)                          | No                   | Yes
Framing Effects        | Irrelevant frames (e.g., equivalent wording)                     | Yes                  | No
Framing Effects        | Relevant frames (e.g., act/omission)                             | No                   | Yes
Evolutionary Pressures | Merely fitness-enhancing considerations                          | Yes                  | No
Evolutionary Pressures | Adaptive solutions to social problems                            | No                   | Yes
Automatic Heuristics   | Personal force or inflexible responses to unfamiliar problems    | Yes                  | No
Automatic Heuristics   | Agential involvement or flexible responses to familiar problems  | No                   | Yes

Either way, these ambitious arguments fail to establish one of the premises of the debunking schema. Of course, unsound arguments often fail to establish one or more of their premises because they're implausible. But the debunker's dilemma identifies a distinctive problem due to a relationship between its premises that makes them difficult to jointly satisfy. There's specifically a trade-off between the empirical and normative premises: when one is well established, the other becomes much less plausible.

The problem arises because ordinary moral beliefs are a heterogeneous class. They concern a wide range of topics, such as harm, care, fairness, cheating, liberty, respect for authority, sanctity, loyalty, and betrayal (Haidt 2012). Moral beliefs also have a wide range of influences. The fraction of empirical research that we've already canvassed (Chapters 2-4) begins to show that these influences include emotions, framing effects, evolutionary
pressures, implicit reasoning with automatic heuristics, and even conscious reasoning about an action's outcomes and the agent's role in bringing them about. Given the variety and complexity of moral beliefs, it's no coincidence that we find this trade-off yielding a dilemma for wide-ranging debunkers. It's implausible that there is a single kind of process that both substantially influences a heterogeneous class of beliefs and is defective across this diverse class. Whether a process is defective depends greatly on the content of the beliefs and how exactly this process influences them in a context. Processes that are plausibly defective are thus fit to indict a specific kind of belief, not a large and diverse class. Of course, we can describe a single category of influence on all moral beliefs or a large class of them. Evolutionary pressures or affective heuristics are candidates. But, as we've seen, such causes will be too general to be uniformly debunking. While it's certainly logically possible to successfully navigate the debunker's dilemma, it's a general problem that will likely plague all empirical debunking arguments that are wide in scope.

Consider a comparison with politics. During the 2016 presidential campaign in the United States, political analysts tried to explain how such unconventional candidates as Donald Trump and Bernie Sanders appealed to so many voters. Many commentators pointed to job loss and other economic woes among the candidates' supporters; others pointed to race or level of education. But looking for a single informative influence began to look futile. The large electorate is so diverse that their primary reasons for supporting a candidate will differ drastically. While we may be able to describe a single influence in sufficiently general terms so as to capture at least most voters, it's unlikely to be as informative as we hope it to be, especially if we aim not only to explain voters' opinions but also to rationally evaluate them. Similarly, many factors influence people's moral beliefs. While we may be able to describe a single kind of influence, it's unlikely to support sweeping and informative claims about moral epistemology.

Some other philosophers have also said that certain debunking arguments are more likely to succeed with narrow targets (e.g., Lillehammer 2003; Zimmerman 2010: 30-1; Vavova 2014a, sect. 8.2). Vavova argues that evolutionary debunking in particular is problematic because it entails less common ground between the debunkers and their targets. We can perhaps draw a similar and more general conclusion from the debunker's dilemma. Additionally, though, the dilemma brings to light that all wide-ranging debunking arguments are likely to succumb to a specific trade-off that rests heavily on the relevant empirical details and the normative assumptions made in a variety of arguments. We have, it seems, a general form of reply to a wide range of debunking arguments that grounds a form of optimism about ordinary moral cognition.

Importantly, the lesson to take from all of this is not that experimental research is good for nothing in ethics. Rather, it's that experimental research is not very good at providing a simple and complete story about the sources of all of our moral beliefs or a large, heterogeneous class of them, such that the sources are roundly defective. There is an inherent tension between targeting a wide range of moral beliefs and identifying a defective process that influences them all.
In general, moral beliefs are based on many factors, some legitimate and some not. Empirical research is just unlikely to reveal a single cause of our moral beliefs that is defective across a diverse range of contexts.

4.9 Conclusion

The science of moral judgment can certainly be a source for debunking moral belief, indeed for moral progress. We are now even better positioned in the twenty-first century to examine the genealogy of morality than many of our intellectual predecessors. In general, however, wide-ranging debunking arguments in ethics share a common ambition, and for that reason face the debunker’s dilemma. Targeting a large set of commonsense moral opinions involves the daunting task of identifying processes that substantially influence a motley set of beliefs and are defective in all relevant contexts. The examples of evolution, emotion, and framing effects are general processes to be sure, but in specific contexts they either hardly influence moral beliefs or aren’t defective. The larger the class of moral beliefs targeted, the more difficult the task of empirically identifying a single undermining influence. Empirical debunking arguments in ethics have more promise if they are highly selective. Emerging empirical research is unlikely to cast doubt on morality as a whole, but it is especially suited to debunking relevantly similar pairs of moral beliefs, along with any general moral distinctions based upon them. Pending independent support for the beliefs, we should withhold judgment. This approach, which we’ll examine in the next chapter, offers our best hope for debunking moral beliefs empirically, but it funds a much more limited critique. Let me close this chapter by emphasizing that not all forms of sweeping skepticism are impaled by the Debunker’s Dilemma. As already noted (in Chapter 1, §1.4.3), it’s beyond the book’s scope to argue against a deep skepticism about the reliability of our general cognitive, learning, and reasoning capacities. The aim is merely to show that there isn’t a special epistemic problem with moral cognition in particular.


Ch. 5: The Difficulty of Moral Knowledge Word count: 9,797

5.1 Introduction

Soon after Barack Obama launched his campaign for the American presidency in 2008, conspiracy theories developed that the Illinois senator was not born in the United States. Well-known people, such as Ted Nugent and Donald Trump, publicly doubted that Obama was an American citizen and demanded to see his birth certificate. Doubts about Obama's origins were surprisingly widespread, even after the issue had been discussed and investigated far longer than necessary. Remarkably, even three years into his presidency, but just before Obama released the full "long-form" version of his birth certificate, polls indicated that roughly a quarter of Americans were so-called "birthers," who explicitly believed in the conspiracy theory (CBS News Poll 2011). And only about 57% of Americans at the time positively believed that Obama was born in the USA.

Presumably, a number of Americans' moral and political opinions were influenced by the belief that Obama may have lied to the American people about his origins. A number of cognitive biases are likely responsible. Those already opposed to Obama selectively attended to evidence that seemed to confirm suspicions about his origins, and they likewise ignored or unwarrantedly discounted contrary evidence. The desire to remove the president from office, or at least to discredit him, triggered wishful thinking and other forms of motivated reasoning that are common among us all. (As the cartoon character Space Ghost once remarked about a sales pitch: "I believe every word that man just said, because it's exactly what I wanted to hear.") We can't forget the truism that powerful passions surrounding ethics and politics can easily cloud judgment.

One's moral beliefs obviously can't amount to knowledge if they rest heavily on unjustified non-moral beliefs. Indeed, moral knowledge, while possible, is difficult to acquire and maintain for many reasons. Knowledge generally seems to require at least justified true belief. Vladimir Putin doesn't know that homosexuality is immoral if either he's wrong, lacks good grounds for believing it, or for whatever reason no longer believes it. So, if one side of a given moral dispute is correct, then everyone on the other side lacks moral knowledge, even if both parties are being reasonable and remain justified. However, our focus is not on whether empirical research undermines the truth of any moral claims but rather on the warrant of our moral beliefs. While being warranted needn't require constructing a proof or even the ability to fully articulate one's evidence, moral knowledge is unattainable in the absence of good grounds for belief or in the presence of good reasons for withholding judgment.

The previous chapter concluded that sweeping debunking arguments are unlikely to succeed. However, empirical research is suited to a more limited critique of moral judgment. This chapter discusses two key empirical threats. First, as scientific research increasingly


unearths the grounds of our moral beliefs, we must be willing to accept discoveries of biases that distort some ordinary moral beliefs. Existing empirical evidence already suggests that many people lack some moral knowledge due to cognitive biases, such as wishful thinking, overconfidence, and a lack of intellectual humility. A second and related threat comes from evidence of deep moral disagreements within and among cultures. In some cases, we have reason to believe that our opponents are no more likely than we are to be in error, which militates against either party's claim to knowledge.

We'll see that these threats are limited or restricted in several ways, however. While wishful thinking and other irrational processes certainly do afflict some moral beliefs, many are not infected. Similarly, while moral disagreements are common, few foundational disputes are among what one should regard as epistemic peers. Ultimately, we share many fundamental values, and moral knowledge is often elusive because the relevant non-moral beliefs are unjustified. Thus, the empirical threats are limited in scope and don't necessarily expose flaws in our basic modes of moral thinking. Even though empirical evidence can selectively undermine some moral beliefs, large swaths are not thereby threatened. Whether we know right from wrong is certainly open to empirical scrutiny, but the process is piecemeal.

5.2 The Threat of Selective Debunking

We saw in the previous chapter that wide-ranging debunking arguments face a powerful problem, given that large classes of moral beliefs have diverse influences that are only sometimes problematic. Sweeping skeptical attacks will struggle to empirically identify a factor that substantially influences the entire class yet is defective in all contexts. While incidental emotions, framing effects, evolutionary pressures, and automatic heuristics affect many moral beliefs to some degree, relying on them in part does not commonly lead to unwarranted beliefs. None of this, however, precludes more targeted attacks. Let's consider some ways in which empirical evidence has a better shot at undermining narrower classes of moral beliefs.

5.2.1 Process Debunking

We can begin with more restricted forms of the debunking arguments considered in the previous chapter. Compelling scientific evidence could emerge showing that mere incidental disgust, for example, substantially influences a specific kind of moral belief, perhaps even for a specific group of people. There is some evidence that, compared to liberals, conservatives or those on the political right are more easily disgusted and that disgust more heavily influences their moral beliefs about the purity of the body and mind (e.g., Inbar et al. 2012). While incidental disgust doesn't appear to substantially influence a wide range of moral judgments, it may become clear that it is a key reason many conservatives tend to condemn a specific action or policy. Of course, some conservatives, such as Leon Kass (1997), have explicitly invoked repugnance in opposition to biotechnologies, such as human cloning. However, even Kass should regard such influences as pernicious if the disgust turned out to be incidental and thus failed to track morally relevant information. Now, empirical evidence so far suggests that disgust is not a prominent emotion felt toward cloning (May 2016b). Nevertheless, we can see how the debunking argument might succeed if the empirical data shook out in the right way.


The most promising appeal to disgust might target condemnations of homosexuality. We could imagine acquiring compelling evidence that, say, conservative opposition to homosexuality is driven primarily by disgust. One group of researchers has found that disgust sensitivity is associated with greater intuitive disapproval of homosexuality (Inbar et al. 2009). We would need much more evidence, of course, but supposing we had it and a conservative became aware of it (or perhaps should be aware of it), then his belief in the immorality of homosexuality would be seriously threatened. If he has no other evidence (or only spurious evidence) in support of this belief, then it doesn't amount to knowledge. Now, many conservatives who disapprove of homosexuality do so primarily because they believe it's a threat to the stability of society or conflicts with their interpretation of the moral rules handed down by an almighty god. Further evidence could reveal such considerations to be mere post-hoc rationalizations of repugnance. Either way, the question is what mainly grounds the belief, and rigorous empirical evidence has the power to provide an answer.

Appeals to evolutionary pressures can also be restricted. We could, for example, acquire evidence that beliefs about the immorality of all forms of incest are sufficiently insensitive to mitigating factors. People might condemn consensual intercourse between cousins, even when they use ample protection and it won't destroy any relationships (cf. Haidt 2012). The intuitive reaction that there is still something wrong with such acts might be rational and even justified given the great risks (cf. Jacobson 2012; Railton 2014). But we could imagine compelling evidence that certain moral intuitions are automatically driven by our evolved tendencies in a way that seems suspect upon reflection. One experiment does suggest that people will override their automatic condemnation of one-off sex among consenting adult siblings when given time to reflect on an evolutionary explanation meant to undermine the intuitive reaction (Paxton et al. 2012). Moreover, the studies on moral dumbfounding (see Chapter 3, §3.4.2) suggest that wealthier and more educated people are less inclined to condemn such apparently harmless taboo violations (cf. Levy 2007: 307). Additional evidence is certainly required, and must go beyond a mere "just so" story, but arguments with such specific targets can evade the debunker's dilemma from the previous chapter.

Other process debunking arguments might connect well-established cognitive biases with certain kinds of popular moral beliefs. Consider just two examples of this approach. People overwhelmingly believe it's immoral to create a human baby via cloning (such as somatic cell nuclear transfer). In fact, among Americans at least, opposition to human cloning is not in dispute; both liberals and conservatives condemn it as widely as marital infidelity and polygamy (Gallup Poll 2014a). However, rates of condemnation appear to drop dramatically, though not entirely, when cloning is explained accurately and straightforwardly (cf. May 2016b). As many bioethicists are aware, there are numerous myths among the general public about the process (Pence 1998). People often falsely, and arguably unwarrantedly, believe that cloning involves making an exact copy of an individual that would then somehow merely be an incompetent lemming fit only for slave labor or for having its organs harvested.
Contrary to popular belief, human clones would be just like other people who share nearly all of their genes with another individual. A clone of thirty-year-old George Clooney, for instance, would in effect yield a delayed twin of Clooney who will be thirty years old when the "original" Clooney is sixty. Moreover, just as with ordinary twins, the clone could have substantially different physical or mental characteristics—from a slightly different nose-shape to an entirely distinct combination of preferences. Cloned Clooney might have neither the looks nor the desire to be a successful actor. Of course, not everyone will find human cloning morally unobjectionable upon


having the common myths dispelled, and perhaps for good reason. Still, this bioethical issue provides a vivid example of how moral beliefs can be influenced by beliefs that are non-moral yet unwarranted, incorrect, or otherwise fall short of knowledge. Well-established cognitive biases are probably lurking here. The cloning myths are developed from misrepresentations in science fiction and the media that serve as prototypes in one's mind. When people evaluate the ethics of cloning, many probably rely on this unrepresentative prototype as a guide. This is an instance of the famous "availability heuristic," which can serve as a bias when the more accessible stereotype isn't representative of the target of the judgment (Kahneman 2011: ch. 13). Of course, the science of one cognitive bias is unlikely to reveal that any particular person's belief is unjustified. However, compelling and converging evidence can eventually put rational pressure on many of us to at least dampen our confidence in certain moral beliefs, especially those that rest on controversial or complicated claims.

Meat consumption provides another example. Only a small proportion of academics who aren't philosophers believe it's morally problematic to regularly consume meat (about 19%), but this contrasts sharply with a much larger portion (60%) of academics who are ethicists (Schwitzgebel & Rust 2013). The difference in moral judgment is presumably explained in large part by the ethicists' knowledge of the topic. Most moral philosophers oppose eating meat at least in current conditions because most of it comes from factory farms, at least in developed nations. However, many people object to factory farming once they become fully informed of the torturous conditions of the animals subjected to it. While few then change their behavior, for various reasons, the change in moral belief is not so recalcitrant. Here the non-moral facts aren't particularly complicated, but they are often willfully ignored even as the conditions in factory farms become more widely and easily known. So an increasing number of people may be unjustified in believing that the animals they regularly eat have not experienced considerable suffering, which undermines their acceptance of the modern omnivorous diet.

Various cognitive biases stand out as especially relevant here too. Many people may believe that their meat didn't arise from torture because they uncritically accept the status quo or ignore evidence to the contrary. Believing otherwise would militate against one's strong desire to consume meat. Often wishful thinking is supported by confirmation bias or related forms of close-mindedness (sometimes termed the "Semmelweis reflex"). In 1998, some reporters at CNN were clearly subject to this bias in their coverage of Operation Tailwind, when they placed excessive weight on weak evidence that U.S. soldiers used sarin nerve gas (a war crime) during the Vietnam War. The desire to have the scoop on such a shocking story drove the reporters to be more credulous. The broader area of study here is motivated reasoning (see, e.g., Kunda 1990). Mountains of research demonstrate not only that we sometimes engage in wishful thinking and related biases but also that such biases are quite pervasive in nearly all forms of reasoning (so there is no special problem for moral reasoning). The next half of this book, especially Chapter 7, will delve into the vast literature that demonstrates our propensities to come up with ways to rationalize our choices.
For now, it’s enough to notice that empirical research can help reveal that such biases substantially infect particular kinds of moral beliefs. Now, one might worry that if we concede that some of our moral beliefs can be empirically debunked, then the floodgates are open to more wide-ranging skepticism. After all, if some of our moral beliefs are driven by morally irrelevant factors, how do we know that the rest of our moral beliefs are safe? Accepting a more selective debunking argument may seem to place a burden on one to prove that it doesn’t generalize. Given such a generalization worry, Regina pg. 90 of 206


Rini concludes that “if psychological debunking of moral judgments works, it works globally” (2016: 694; cf. also Doris 2015: 64). However, this generalization worry fails to distinguish between epistemic and dialectical aims in a debate. Suppose I give you a reason against your position on a moral issue right before you have to rush off to a meeting. Insofar as you haven’t responded to my challenge, you may have failed to win the debate, which is a dialectical goal (Rescorla 2009). But what if my challenge is unreasonable or based on unwarranted assumptions—e.g., “Prove that racism is immoral using only terms posited by physicists”? Whether you have to leave or you can’t articulate a response even after blowing off the meeting, your failure to answer such a challenge doesn’t mean it’s irrational to maintain your belief—a decidedly epistemic goal. Indeed, the burden is on the skeptic to show that the challenge undermines your justification regardless of whether you can articulate a response to it. Similarly, to support a more sweeping moral skepticism, it’s not sufficient to merely raise the possibility that a challenge will generalize (cf. May 2013), particularly given the Debunker’s Dilemma of the previous chapter which provides a principled reason against such generalization.

5.2.2 Consistency Reasoning

Consistency reasoning in ethics involves recognizing that one has conflicting moral beliefs about two similar kinds of cases that don't seem to differ in any morally relevant respect. To treat like cases alike, one can either withhold judgment about both kinds of cases or revise one judgment to match the other (Campbell & Kumar 2012; Holyoak & Powell 2016). We've already encountered experimental evidence of this type of moral reasoning (see Chapter 3, §3.4.2). When people evaluate morally similar scenarios together or one soon after the other, their otherwise different moral beliefs, or confidence in them, often change in order to be consistent. We even have evidence that this occurs for both hypothetical moral dilemmas (Horne et al. 2015; Barak-Corren et al. forthcoming) and disputed moral issues, such as eating meat (Lawrence et al. 2017).

Another route to selective debunking capitalizes on such consistency reasoning. The debunker focuses just on opposing pairs of moral beliefs that some people hold and then points to experimental research revealing that people make these different judgments based on a morally irrelevant factor. We encountered the basic structure of this debunking strategy in the previous chapter; it was just taken too far. Rather than targeting all deontological intuitions, utilitarians and other ethicists can merely target our apparently irrational tendency to privilege the needs of others close to us over those far away (e.g., Singer 1972; Greene 2014). For example, we tend to think people have a moral obligation to help a drowning child nearby but not a starving child that faces death far away on another continent. Now, as many ethicists have noted, there are multiple factors that differ between such cases, not just distance: the kind of need involved, the extent of the need, the chances of success, and so on. However, experimental research is able to control for such factors and determine whether we generally provide different moral judgments due to distance alone. Indeed, some experiments, albeit as yet unpublished, use this methodology and suggest that we do think it's more acceptable to fail to aid those in need simply because they are far away (Musen 2010). If this turns out to be a reliable and substantial effect, then it has the power to undermine the relevant beliefs about our obligations to help those in need, insofar as mere differences in distance are morally irrelevant factors.


There is one important limitation to this broad category of research. It only identifies what we might call difference effects, which reveal that we render different moral judgments about a pair of similar cases due to a factor that has been isolated experimentally. Such evidence, however, is insufficient on its own to reveal which belief in the pair should be rejected (Kumar & Campbell 2012). Rather, when combined with the requisite normative premise, difference effects show that we draw a distinction between two cases when we shouldn't: we aren't treating like cases alike. To be consistent, we need to hold the same moral judgment about both scenarios. In the case of providing aid, we must resolve the inconsistency by judging either that we have a moral obligation to help both those near and far or that we lack a moral obligation toward either. At best, then, consistency reasoning on the basis of difference effects shows that we should withhold judgment on the pair of beliefs.

As with process debunking arguments (see Chapter 4), we can construct a schema for consistency debunking arguments (cf. Kumar & Campbell 2012: 322; Kumar & May forthcoming). For any subject or group (S) and causal factor (F):

1. F is the main basis for why S holds opposing moral beliefs about two similar cases. (empirical premise)
2. F is a morally irrelevant difference. (normative premise)
So,
3. S is unjustified in holding that pair of moral beliefs.

Two points of clarification are required. First, again, to debunk one's belief or pair of beliefs, it may be necessary that one is aware of the illicit influence or merely should be aware of it. However, as with process debunking arguments, we'll avoid making this explicit in the basic schema. Second, the schema does not apply to cases in which one's moral belief is mainly based on more than one factor (a kind of overdetermination). Your belief that it's morally optional to aid starving people in other countries, for example, could be based primarily on greater distance and on the lower probability of success, such that you may remain justified in holding the belief even if one of your main reasons is undermined. Even in such situations, however, there are negative epistemic consequences—namely, a main basis of one's moral belief is eroded.

Consistency debunking arguments are well equipped to avoid the debunker's dilemma from the previous chapter because they focus narrowly on pairs of similar beliefs. However, they are limited to concluding that one ought to withhold judgment about the pair. Knowing how to resolve the inconsistency would require further empirical research and moral argument. Still, consistency debunking arguments provide a model for how empirical debunking arguments can succeed in ethics.

Let's consider another application of the schema. Judges are supposed to make decisions based solely on the merits of the case at hand. However, a recent study by Shai Danziger and colleagues (2011) provides evidence that judges' parole decisions are based in part on whether they are hungry or have eaten recently. After a meal, judges in their study were likely to grant parole to roughly 65% of the applicants appearing before them. Just before a meal, by contrast, the proportion of applicants granted parole was close to 0%. One difference that explains judges' beliefs about when parole is merited is whether or not they are hungry (or have the negative feelings associated with hunger). Of course, this difference is utterly irrelevant.
The conclusion, then, is that the judges are not justified in believing that candidate A (before lunch) doesn't deserve parole but candidate B (after lunch) does. Popular commentary on this study often infers that the judges are too harsh when hungry. However, our schema helps make clear that without further assumptions the data warrant at best the conclusion that either the judges are too harsh


when they are hungry or that they are too lenient when they are sated. That is, the judges should treat like cases alike, and either grant or deny parole in similar cases, rather than differentiate them based on how irritable they feel at the time of decision.

Notice that our examples show that a range of factors can make the difference in one's assessment of two cases. The feature (F) might directly concern oneself (I'm hungry) or the cases (She's far away). However, facts about the case presumably make a difference in one's assessment ultimately due to a difference in oneself, such as greater compassion for those nearby. Difference effects are ultimately mental phenomena.

Consistency debunking arguments are not without limitations. When appealing to difference effects, there are two main hazards. First, the effect of a morally irrelevant difference on a pair of moral judgments may be insubstantial. We already saw in the previous chapter how this arises with incidental emotions and framing effects. But, even when targeting only a pair of moral beliefs, one must be sure that the relevant effect is a main basis for the difference in moral evaluation of the cases. Otherwise, one might primarily hold the different judgments for a morally relevant reason.

A second potential pitfall with appealing to difference effects is that the factor that differentiates one's assessments is often difficult to identify. For example, the trolley problem contrasts impersonal harm with personal harm, as in the cases of Switch vs. Footbridge (see Chapter 3, §3.2). It is now clear, however, that these scenarios confound many factors. Not only do they contrast harming as a means vs. a byproduct; they also contrast physical contact with the absence of contact (or perhaps something like personal force). Thus, what may initially seem like a defensible grounding of differential responses (means/byproduct) might not upon further reflection (contact), and might later seem defensible again (agential involvement). Which factor is making the difference matters greatly for debunking arguments. Yet the relevant difference-making factor might be misidentified if described at the wrong level of explanation or in overly simplistic terms. Singer (2005: 348) claims that there is no morally relevant difference between killing someone in a way that was "possible a million years ago" (e.g., pushing) and doing so in a way that "became possible only two hundred years ago" (e.g., flipping a switch). But the evidence doesn't show that what influences judgments in trolley dilemmas is whether the method of killing is anachronistic, rather than, say, prototypical violence or agential involvement (see Chapter 3, §3.3). Differences that may be morally relevant can too easily be redescribed so that they more clearly seem irrelevant.

Consider another example of oversimplifying difference effects. Suppose we treat as morally acceptable employing affirmative action policies for blacks but not whites. One way to describe the difference-maker in our moral judgments might be merely skin color, which seems morally irrelevant. But presumably our judgments differ here not just in terms of skin color or even race as such. The relevant difference between the judgments seems best explained by the different histories of mistreatment faced by each group and the discrimination they currently face.
What might seem like a morally irrelevant difference (skin color or race) may be more relevant if properly described (historical mistreatment or susceptibility to discrimination). Nevertheless, when well constructed, such debunking arguments can have great epistemic force. While consistency reasoning has a long history in moral philosophy, what's novel is harnessing the power of empirical, especially experimental, research. Finding difference effects from the armchair is difficult, since introspection is limited in its ability to identify commonsense intuitions and their unconscious influences. Moreover, it's easier to dismiss genealogical speculations from the armchair, but converging empirical evidence is harder to ignore.


The above are merely examples of how one might appeal to scientific evidence to undermine a specific kind of moral judgment or pair of moral beliefs. My aim is not to undermine any particular set of beliefs but to show that the science can contribute essential elements to powerful debunking arguments (contrast Berker 2009). Nevertheless, we're ultimately left with an optimistic conclusion. We shouldn't expect that large swaths of ordinary thinking will be undermined. At best, the threat is narrow in scope and still requires plenty of rigorous evidence to substantiate. Moreover, changes in moral beliefs, or even just in confidence, via consistency reasoning are arguably a means to moral progress (Campbell & Kumar 2012). This can at least provide more warrant for one's moral beliefs, perhaps even yielding moral knowledge.

5.3 The Threat of Peer Disagreement

Let's now turn away from the origin of our moral beliefs and consider skeptical challenges that point to foundational, pervasive, and intractable moral disagreements. How can most Canadians claim to know that homosexuality is morally acceptable when people from other cultures deem it immoral? Or how can Americans on the political right claim to know that abortion should be outlawed, even when intelligent liberals disagree? We'll see that empirical evidence does uncover some, even if not many, foundational disagreements that threaten to undermine moral knowledge and expose overconfidence. Even if one retains a great deal of justification, some disagreements call for intellectual humility.

5.3.1 Which Disagreements?

Arguments from disagreement are most often used to challenge the idea that morality is in some sense objective, not that it's unknowable (Shafer-Landau 2003: ch. 9; Prinz 2007: ch. 5; Doris & Plakias 2008). After all, if moral truth is relative to, say, one's culture, then we should expect moral disagreement, not convergence, across cultures. Learning that ethics is subjective could challenge one's moral beliefs if they presuppose otherwise (cf. Nichols 2014). However, I have explicitly avoided conceiving of ordinary moral beliefs as presupposing a robust form of objectivity (recall Chapters 1 and 4), partly because there isn't sufficient space to delve into the issue here. Another reason is that initial empirical investigations have not consistently shown that ordinary thinking systematically presupposes that moral statements, when true, are objectively true (see, e.g., Sarkissian et al. 2011).

In order to conceive of a difference of moral opinion as a genuine disagreement, we may have to assume that there is some room for error among disputants. It makes little sense to disagree with others if you assume they too must be right. But many forms of subjectivity allow for such error. Even a cultural relativist can accept that, when two Australians disagree about whether abortion is immoral, one of them is wrong about whether the practice is generally consistent with their culture's norms. Ordinary moral judgment need only assume enough room for error to make sense of such disagreements.

Provided our moral beliefs allow for some error, they can be challenged by widespread disagreement among our peers. Imagine you and a friend are both confident about your different answers to a math problem. If you have no reason to think your friend is in error, it seems you can't claim to know the correct answer. For such an epistemic peer, "you have no


more reason to think that he or she is in error than you are," as Sarah McGrath puts it (2008: 91), echoing Sidgwick (1874/1907). Even if you retain some level of justification for your belief, it doesn't seem sufficient for knowledge, and it may be irrational to do anything but withhold judgment until you acquire further evidence. The skeptical argument, then, runs as follows (cf. Vavova 2014b: 304):

1. In the face of peer disagreement about a claim, one does not know that claim.
2. There is a lot of peer disagreement about foundational moral claims.
3. Therefore, we lack much moral knowledge.

This form of argument has received much attention, even if few philosophers have used it to defend fully global moral skepticism—although some come close (e.g., Miller 1985; Sinnott-Armstrong 2006: §9.4.2; Joyce 2013). Some epistemologists deny the first, conciliatory premise and hold instead that it's rational to remain steadfast in your disputed belief (cf. Wedgwood 2010; Setiya 2012). However, let's grant the skeptical premise that, all else being equal, disagreement among true epistemic peers does preclude knowledge and focus instead on the second, empirical premise.

It's notoriously difficult to tell whether two or more individuals actually meet the criteria for being epistemic peers. Epistemologists are primarily interested in idealized circumstances in order to determine whether it's even possible for peer disagreement to undermine one's claim to knowledge. But our concern is with the extent of moral knowledge among real people, so we'll have to consider whether people do generally have many epistemic peers. However, we should first ask whether people even have fundamentally different moral values.

Clearly, there are many moral disagreements within and among cultures. But how deep do the disagreements go? Identifying foundational moral disputes can help determine the depth of the threat from disagreement. A non-foundational moral judgment, such as "Setting birds on fire is morally acceptable," is indeed unjustified if it rests on an unwarranted empirical belief that birds can't suffer. But the moral belief is threatened indirectly by targeting a non-moral belief (cf. May 2013b: 343). So, if many disagreements among epistemic peers turn entirely or primarily on disputes about non-moral facts, then the problem is with the non-moral beliefs, not the moral ones.

Ethicists have long noted that many moral disagreements are largely rooted in disputes over relevant non-moral facts (see, e.g., Rachels 2010: ch. 2.5; Zimmerman 2010: 28-9). Religious beliefs, for example, ground many condemnations of abortion, premarital sex, euthanasia, and even war. Religion can also determine one's perceived moral obligations, such as special duties to help the poor or convert non-believers. Of course, for people who ground their morality in religion, it might seem specious to call religious beliefs "non-moral" (cf. Prinz 2007: 191). However, people seem comfortable drawing some line between morality and religion. Even in a country as religious as the United States, a survey of more than 35,000 adults indicates that only 33% "look to religion most for guidance on right and wrong" (Pew Research Center 2014). Most respondents chose instead either common sense (45%), philosophy/reason (11%), or science (9%). Even in the highly religious state of Alabama, it's striking to find that only 50% identified religion as their greatest source of ethical guidance.
At any rate, in some cases at least, there is a line to be drawn between moral and religious beliefs, even if it's a fine one. Cross-cultural research, such as ethnography, may seem an ideal place to look for foundational moral disagreements. The Tiv of Nigeria, for example, appear to disagree with the Western idea that one shouldn't obstruct justice in order to prevent a family member from being


punished for a crime (Miller 1985). And Westerners believe anger is often justified while Buddhists (and Stoics) apparently think it’s unhealthy and immoral (Flanagan 2017: Part III). Such cross-cultural research is certainly an important place to look for foundational moral disagreements. However, it’s notoriously tricky terrain for a number of reasons. First, consider culturally specific conventions, such as “Use ‘Sir’ or ‘Ma’am’ when addressing elders” or “Use only your right hand for eating and for shaking hands with others.” On most moral theories, once a conventional rule is prevalent within a culture, a more general principle (e.g., respect others; maximize overall happiness) typically prescribes following the convention (cf. Scanlon 1998: ch. 8; Zimmerman 2010: 28; Greene 2014: 717). “When in Rome…” isn’t merely a prudential prescription but a moral guide. Second, finding truly foundational disagreement is difficult precisely because other cultures can be so difficult to properly understand. Indeed, the more radically different the culture (and thus perhaps the more likely any disagreement will be truly foundational), the more limited is an outsider’s ability to properly see the world from their perspective (Moody-Adams 1997; Vavova 2014b: 314). This problem is compounded when looking for foundational disagreement among epistemic peers. The more radically different a culture is from one’s own, the more justified one might be in doubting that a person from the other culture is an epistemic peer. Of course, an ordinary individual’s confidence in her moral framework might only seem justified because she is imprisoned by her own upbringing; “the way we do things around here” might seem “original, natural, and necessary, without in fact being so” (Flanagan 2017: 184). Still, like ancient beliefs in a flat Earth, most ordinary people might be epistemically blameless. A final issue is particular to our inquiry and dialectic. We’re effectively assuming for the sake of argument that ordinary moral beliefs don’t presuppose robust objectivity—indeed that some form of moral relativism could be true. If relativism is true, however, then cross-cultural disagreement needn’t pose a challenge to one’s moral beliefs. (Compare: If “It’s raining here” is relativized to one’s locale, then Laura in gloomy London isn’t disagreeing with Sandra in sunny Santa Barbara.) So, even if ordinary people assume that it makes no sense to disagree with someone from a radically different culture, they can still sincerely disagree with their neighbor who shares a basic set of norms against which actions can be evaluated. (Compare: Sandra and Sam, both in Santa Barbara, can disagree about whether “it’s raining here.”) Given these difficulties, let’s focus instead on moral disagreements that occur within a culture that has a relatively fixed set of conventional norms. At any rate, we can identify an intracultural challenge from peer disagreement and it will be our focus. A useful contrast is between liberals and conservatives within a culture. Although these labels (somewhat crudely) mark out political outlooks, they are partly grounded in different sets of moral beliefs (Haidt 2012). One might be tempted to treat these as two separate cultures— Americans are allegedly engaged in political “culture wars”—but there are important differences. Children who grow up in a culture naturally take on many, although not all, of its conventional norms; they identify with that culture (and perhaps others too). 
Much to the chagrin of many parents, however, children from different cultures and backgrounds find themselves adopting either a more liberal or more conservative worldview, regardless of their parents' moral beliefs. (Ronald Reagan had five children, and only some of them adopted their father's Republican "culture.") At any rate, for our purposes, the study of liberal vs. conservative moral views is an ideal place to look for disagreements that are genuine and foundational, since they arguably occur within a culture that has a relatively common set of conventions.


5.3.2 Moral Foundations

Many disagreements between liberals and conservatives appear to be non-foundational. Their moral disagreements often arise from other differences in non-moral belief or, at any rate, from beliefs that don't reveal commitments to different foundational moral values. Consider the policy issues that commonly divide liberals and conservatives. Both groups want to protect their fellow citizens but disagree about the likelihood of various threats, such as climate change, mass shootings, government take-over, and terrorist attacks. Both liberals and conservatives value social stability but disagree about whether same-sex marriage is likely to erode it. The rapid change in attitudes toward such marriages is arguably explained in part by greater exposure to same-sex couples, which has undermined many worries that their families and long-term relationships would threaten the fabric of society. Thus, like cross-cultural variation, intra-cultural variation in moral beliefs can rest on relatively non-moral disagreements.

Still, some apparently non-moral beliefs may be post-hoc rationalizations of one's intuitive moral judgments. Perhaps many liberals believe the fetus isn't a person because they believe abortion is morally acceptable, not vice versa (Prinz 2007: 192). Even if the phenomenon of moral dumbfounding is limited (see Chapter 3, §3.4.2), we are generally poor at articulating the reasons for our beliefs and some of this may be confabulation. Moreover, experimental evidence suggests that moral considerations directly affect what is traditionally considered our non-moral understanding of the world—such as our attributions of intention, causation, and knowledge (Knobe 2010).

Even more telling is positive evidence that some disagreements between liberals and conservatives are foundational. Some research suggests that Americans from the more conservative South embrace a "culture of honor" that values more violence and retaliation than more liberal Northerners typically tolerate (Nisbett & Cohen 1996). More recently, Moral Foundations Theory suggests even more fundamental variation in moral beliefs within a culture. Jonathan Haidt and others (Haidt 2012: ch. 7; Graham et al. 2009) have argued that considerations affecting moral judgment naturally cluster around at least five universal moral foundations, each of which breaks into a positive and negative value (see Table 5.1).

Table 5.1: Five Moral Foundations

Foundation              Examples
Care/Harm               charity, murder
Fairness/Cheating       paying taxes, fraud
Loyalty/Betrayal        keeping a promise, infidelity
Authority/Subversion    respecting one's elders, treason
Sanctity/Degradation    chastity, sexual perversion

Evidence from a wide variety of disciplines suggests that these values deserve the label "foundations." Cluster analyses of tens of thousands of responses from a wide range of cultures suggest that people's moral judgments naturally fall into these basic groups. Moreover, each value is involved in the assessment of others' behavior, is widespread across cultures, provokes intuitive moral responses, and helps to adaptively solve evolutionary problems of distant ancestors (Graham et al. 2013). Future research may motivate collapsing some of these foundations (cf. Schein & Gray 2015) or adding additional ones. Haidt (2012) has proposed


Liberty/Oppression as a likely sixth foundation, but only the initial five are currently well studied. Now, Haidt and some theorists also hold that the moral foundations are in an important sense innate. The foundations at least provide some organization to the human mind in advance of experience. However, we aren't focusing on the dispute over moral nativism. Thankfully, Moral Foundations Theory, much like the hypothesis of universal moral grammar (see Chapter 3, §3.4.3), is compatible with either position on this separate debate. Regardless of whether the five moral foundations are innate, they may be universal across the species. Indeed, it may be unsurprising that many of these foundations can be found in philosophical discussions of ethics in non-Western traditions, such as the "sprouts" of human nature posited by the ancient Confucian philosopher, Mencius (Flanagan 2017: ch. 4).

Even if each foundation can be found across cultures, though, different moral traditions certainly vary in how much they value each. Like the dials on a sound mixer or the settings on a stereo equalizer, even liberals and conservatives within a culture rank some of these values higher than others. In the past few decades, numerous studies—conducted by various labs using multiple methods on large and diverse samples—now indicate that liberals place greatest importance on Harm and Fairness while conservatives embrace all foundations more equally (Figure 5.1).

Figure 5.1: Ideological Differences in Foundation Endorsement (adapted from Graham et al. 2009: 1033)

For example, conservatives are more outraged when their country's flag is disrespected (Betrayal), when police are demonized (Subversion), and when people engage in deviant sexual behavior (Degradation). Liberals, by contrast, are more concerned with protecting civil rights (Fairness) and instituting social safety nets that prevent vulnerable people from spiraling into poverty, illness, or incarceration (Care). Importantly, the best version of Moral Foundations Theory does not suggest that one group's core values are unrecognizable to the other. Rather, liberals seem to share many of the


same moral intuitions as conservatives (e.g., about the importance of Loyalty and Sanctity) but sometimes override those intuitive reactions in order to explicitly place greater value on Harm and Fairness (see Graham et al. 2013: 96). Moreover, sometimes liberals place the same weight on certain foundations just for different topics. For example, they seem as concerned about the purity of the environment as conservatives are about the purity of the body (Haidt 2012: ch. 7.5). Some scientists even more strongly doubt that liberals and conservatives differ much at all in their fundamental moral values, as both groups may rely primarily on a Harm/Care framework (see, e.g., Schein & Gray 2015). So both opponents and proponents of Moral Foundations Theory should agree that the evidence converges on a modest upshot: some disputes are foundational but not so divergent that one camp can easily disregard the other as utterly unfit to be an epistemic peer on moral matters.

5.3.3 Epistemic Peers

Even if not conclusive, we have some rigorous empirical evidence of foundational moral disagreements within a culture—evidence that frees us from mere speculation. But are such disagreements among people who are just as likely to be right? It's easier to figure out whether people disagree than to determine whether they are epistemic peers. Do political opponents really have all of the same key evidence? Even if they do, are they apportioning their beliefs to the evidence? Moreover, even if one can be confident that some disputant is a genuine peer, it matters how many there are on both sides (McGrath 2008: 95). If most of a scientist's epistemic peers agree with her that humans have contributed to climate change, then this belief can amount to knowledge even if a lone Nobel laureate disagrees. We will do better, then, to examine the aggregate to see whether there are likely to be many genuine peers who disagree with many of each other's moral beliefs.

In general, we too quickly write off opponents as motivated by evil or idiocy. We tend to attribute bad motives to those who don't share our moral and political ideologies. Both Democrats and Republicans think of themselves and their group as motivated more by love than hate, but they think the opposite of the opposing camp. Yet there's evidence that this asymmetry in motivational attribution is significantly reduced when participants are rewarded with extra money for being accurate (Waytz, Young, & Ginges 2014). It's tempting to demonize one's opponents. However, most everyone is motivated to do what they think is right—we're all "morally motivated," as Haidt is fond of saying (more on this in Chapter 7). And a moment's reflection reveals that there are plenty of intelligent people on the other side of the dispute. More often than we tend to expect, our opponents aren't ignorant but just place greater weight on other moral concerns, such as respect for authority and loyalty to one's group, that are accepted by all parties, even if to differing degrees.

Empirical research can help us determine whether many moral opponents are just as likely to be right. But it's better suited to figuring out what amounts to the same: whether many people on the other side of a moral issue are just as likely to be wrong. A novice's peer, after all, is another novice. Scientific evidence commonly reveals irrationalities and biases that affect us all, which should shake many people's confidence in being more of a moral expert than their opponents. In this way, the previous empirical threat of process debunking can bolster the threat of disagreement.

Consider some recent polling data on Americans. We've already seen that in 2011 a disappointing number didn't believe that their president was born in the United States. A number


of other beliefs that inform various moral judgments are similarly held by many people without sufficient warrant. For example, about 42 percent of Americans believe that humans did not evolve—that is, that a god created humans in their present form—which is virtually the same percentage reported in 1982 (Gallup Poll 2014b). Now, just as the ancients justifiably held some beliefs we now know to be false, some Americans may be justified in holding some false beliefs. But the vast majority of adults in America have at least a high school diploma and access to the Internet. On such topics, many people maintain unjustified beliefs.

Some of the targeted beliefs might seem to be disproportionately held by conservatives on the political right (e.g., Republicans), which liberals on the left would seize upon to disqualify their opponents as epistemic peers. But there are plenty of other dubious beliefs that are either non-partisan or even more common among Democrats and other liberals. For example, slightly more Democrats than Republicans believe in fortune telling, astrology, and ghosts (Chapman University Poll 2014). Moreover, while 88% of members of the American Association for the Advancement of Science believe it's safe to eat genetically modified foods, only about 37% of Americans do, and the skepticism doesn't appear to be higher among conservatives than liberals (Pew Research Center 2015). These are just some particularly striking examples of beliefs that many readers of this book, chiefly academics, would recognize as unwarranted. The point is not that most Americans are ignorant but that dubious moral and political beliefs crop up on both sides of the liberal-conservative divide.

Some of these dubious beliefs stem from religious views, but the irrationality is not necessarily in any religious belief itself but in the relevant cognitive biases. Such well-documented biases affect us all (see, e.g., Kahneman 2011), even if not to such a degree that we all confidently believe that horoscopes reliably predict the future. Confirmation bias, for example, makes one focus selectively on evidence that supports one's favored views, and a corresponding bias makes one excessively discount or ignore disconfirming evidence. Indeed, a number of cognitive biases and environmental factors can lead intelligent people toward a variety of wishful thinking, close-mindedness, rationalization, inattention, distraction, and overconfidence. (Again, while these biases are fairly widespread, they don't afflict moral judgment in particular but reasoning generally.)

Consider a classic study of biased assimilation of information that further polarizes disputants. Lord, Ross, and Lepper (1979) presented opponents and proponents of capital punishment with two fictional studies: one provided evidence that the death penalty effectively deters crime while the other found evidence against its effectiveness. Proponents of capital punishment tended to rate the pro-deterrence study as well conducted and compelling, while opponents had the opposite reaction. Similarly, on the quality of the anti-deterrence study, proponents and opponents of the death penalty had correspondingly different takes. In addition to assimilating the evidence in a biased manner, participants also exhibited polarization in their moral beliefs. After they read and evaluated the studies providing evidence for and against deterrence, proponents came away favoring capital punishment even more while opponents became even more opposed.
A more recent example concerns the debate over the severity of climate change and whether it's necessary to strictly regulate carbon emissions. Unlike conservatives, liberals tend to believe that, partly due to human activity, the earth's climate is changing and we have a moral obligation to aggressively address it. Both sides of the debate, but especially liberals, likely assume that the other side is simply misinformed or deluded. Among the masses, smarter and more scientifically savvy individuals would presumably agree with the scientific consensus, for


example. However, there is some evidence against this. Dan Kahan and his collaborators (2012) asked a large sample of Americans “How much risk do you believe climate change poses to human health, safety or prosperity?” Participants also answered a slew of questions that measured their political orientation, scientific literacy, and mathematical or numerical aptitude. Strikingly, across the sample, the more scientifically and mathematically skilled individuals perceived climate change as slightly less threatening. Moreover, these intellectual skills correlated with increased polarization: among more liberal types, the savviest perceived climate change as more threatening, but among more conservative individuals the savviest perceived less risk. It thus seems that greater intellectual ability leads to rationalizing the position on climate change that fits best with one’s existing moral and political outlook, perhaps due to confirmation bias or deferring more selectively to those who share one’s outlook. Other studies do indicate that bias is more prevalent among proponents of certain political ideologies. What we need is a broader view that can give us a sense of the overall trend. A recent meta-analysis focused on this key issue of “partisan bias” or the tendency to evaluate information more positively simply because it favors one’s own political views (Ditto et al. 2017). The analysis of forty-one experiments (from twenty-eight articles with over 12,000 participants) found that neither group was consistently more biased across a range of controversial moral and political issues. Liberals tended to exhibit more partisan bias on certain issues while conservatives were more biased on others. On the whole, however, both liberals and conservatives exhibited nearly identical levels of partisan bias. Of course, it is possible to show that some of one’s moral opponents are particularly irrational or ignorant in general or on a particular issue. However, in light of the science of judgment and decision-making, it’s increasingly difficult for many ordinary people to reasonably regard themselves as epistemically privileged, at least when it comes to controversial moral issues. Thus, many people do not know that their moral opponents are epistemic inferiors or that only a negligible number of them are peers. Whether by identifying fellow experts, novices, or something in between, empirical evidence can help reveal that there likely are such peers—even if not particular individuals, then relevant groups. While some people may be moral experts with few disagreeing peers, the average person, whether conservative or liberal, has little claim to being such a moral guru. Indeed, many ordinary people should recognize that a sizeable number of their opponents are their epistemic superiors. My aim, again, is not to undermine any particular moral belief but rather to take seriously an empirical threat to moral knowledge. However, as an example, perhaps peer disagreement alone precludes many conservatives from knowing that affirmative action policies unfairly discriminate against whites; and perhaps many liberals shouldn’t claim to know the moral status of a human fetus in the womb. The science suggests that by and large both sides of a debate are similarly biased, which makes it difficult to consider oneself without epistemic peers. Of course, especially intelligent and informed individuals—such as yourself, dear reader—may have few epistemic peers who dispute their foundational moral beliefs. 
However, disagreement poses a more serious threat to the masses. When there are many otherwise reasonable people who disagree with them about a complicated moral issue, the mere existence of such peer disagreement has epistemic consequences. Haidt (2012) draws a somewhat similar conclusion. However, he tends to see the threat as asymmetrical in that it's more troubling for liberals than conservatives, even well-educated and highly informed liberals. Since liberals discount some of the five foundations, Haidt thinks they are in greatest danger of unwarrantedly ignoring certain universal values that conservatives


already acknowledge. However, framing the problem in terms of peer disagreement provides a more symmetrical problem that applies more to the general public. When confidence should be reduced in the face of disagreement, the prescription applies to both sides of the dispute. Moreover, while Haidt’s skeptical conclusion may rely on his controversial “social intuitionist” model of moral judgment (Sauer 2015), peer disagreement poses a threat to some claims to moral knowledge even on more rationalist views. The conclusion to draw in light of this threat is also distinct from Joshua Greene’s (2013) position that we shouldn’t trust commonsense moral intuitions in the face of widespread disagreement. Recall from the previous chapter that he believes our automatic moral intuitions are unreliable when brought to bear on novel problems with which we lack sufficient experience. Moreover, Greene implores us to resolve moral disagreements by switching to a conscious deliberative mode of moral reasoning that he takes to be characteristically utilitarian. Yet we’ve already cast doubt on the claim that slow deliberative moral reasoning only or primarily values good consequences (see Chapter 3, §3.4.1). More importantly, while Greene positively recommends a certain form of moral reasoning in the face of disagreement, we have only drawn a negative conclusion—that moral knowledge is threatened. The best way to proceed in light of disagreement likely depends on the case. Conscious deliberation won’t always help, as it can just lead to rationalizing one’s existing position. In sum, the threat from peer disagreement is real but relatively circumscribed, for two reasons. First, as we’ve seen, many disagreements among potential peers aren’t foundational, turning primarily on disputes about the non-moral facts. This may still undermine many people’s claims to moral knowledge but the problem is not with morality in particular. Second, since the criteria for being an epistemic peer are demanding, many opponents will fail to meet them. As disagreements become more fundamental, they more clearly occur between parties that share fewer moral values, even if there is great overlap. Moreover, the empirical research seems to reveal that we are most biased on controversial moral issues, so it’s those cases that most warrant doubt that one is in a privileged epistemic position. Thus, for the average person, peer disagreement only threatens some of their more controversial moral beliefs. While many people should be less confident in their controversial moral beliefs, rationality doesn’t demand they withhold judgment about most of their moral framework.

5.4 Conclusion
Pessimists about moral cognition draw on the science of moral judgment to doubt the possibility or promise of ordinary moral knowledge. Sentimentalists might argue that we do know right from wrong but that it’s ultimately a non-rational affair: one just needs the right sorts of feelings toward people and their actions. Debunkers can admit that moral judgment is primarily a matter of reasoning, but they attempt to expose deep flaws that set stringent limits on ordinary moral thinking. We’ve seen that the pessimists are right that empirical research undermines moral knowledge to some degree. The greatest threats are from research that targets narrow classes of moral beliefs that either have illicit influences or are controversial enough to conflict with the judgments of others who are just as likely to be right (or wrong). Thomas Reid may be right that one’s conscience can generally be trusted:
…in order to know what is right and what is wrong in human conduct, we need only listen to the dictates of our conscience, when the mind is calm and unruffled, or attend to the judgment we form of others in like circumstances. (Reid 1788/2010: 290)
However, our best science suggests that our minds are not always “calm and unruffled,” even when it seems otherwise from the inside. Moreover, as we’ve seen, we can’t simply aim to treat like cases alike, for that alone does not tell us how to resolve any inconsistencies. Finally, the study of moral disagreements suggests that the unruffled and informed conscience does seem to deliver slightly different verdicts on controversial moral issues, even among individuals who share a culture. Still, the empirical threats only go so far. The sciences have not uncovered, and are unlikely to uncover, widespread and fundamental flaws in ordinary moral judgment. Moreover, we have positive empirical reasons for optimism. Judging right from wrong is ultimately a rational enterprise. We are concerned to form moral beliefs based on reasons, such as consistency and coherence with other beliefs we hold. Moral judgment does as a result inherit all of the biases and irrationalities of human cognition generally. But that puts morality on a par with other domains that are likewise capable of yielding knowledge. By parity of reasoning, pessimism about our basic modes of moral thinking is no more warranted than pessimism about, say, our basic modes of mathematical reasoning. Our capacity for forming mathematical beliefs is also far from infallible. Empirical research is uncovering its inner workings, which reveals numerous ways in which it can falter or utterly break down, especially when we reason about probabilities. Moreover, as our world becomes increasingly complicated, our basic mathematical capacities must be augmented with additional education and assistive technology. While basic addition may come fairly naturally as part of normal development, mastery of calculus or probability theory does require great effort but not fundamentally different reasoning capacities. But the basic capacity is not fundamentally flawed, and the way to improve it will require harnessing our given reasoning abilities. Flawed judgment in a domain can be a cause for concern without being so fundamental as to warrant pessimism. As Kahneman says about judgment and decision-making generally, “the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health. Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time” (2011: 4). Moral beliefs are likely somewhat more susceptible to cognitive bias because we care so much about right and wrong. Ethics provides powerful desires that can lead to motivated reasoning, including confirmation bias, wishful thinking, and close-mindedness (more on this in Chapter 7). Moreover, since moral judgments help to mark group membership and bind one’s moral tribe together (Haidt 2012; Greene 2013), one often has others who share one’s views, supporting a bandwagon or false consensus effect. One also risks losing membership in a group by rejecting certain moral beliefs. Adults often embed themselves among friends who are fellow liberals or conservatives. Rejecting core elements of a moral framework often risks dramatic changes in one’s way of life, which is rather unlike beliefs about, say, thermodynamics.
Finally, given that one’s opponents disagree about one’s moral beliefs, one is more likely to conceive of them as depraved and thus ignore what they have to say. Still, these flaws can largely be attributed to cognitive biases present in other domains, not to something particular about moral cognition itself. Optimism is warranted because wide-ranging debunking arguments face the Debunker’s Dilemma. The chapters that make up the first part of this book reinforce the Dilemma by
demonstrating that moral judgments truly are influenced by a variety of factors—from the weighing of good and bad outcomes and agential involvement to cognitive biases and the interplay between emotion and reasoning. Sweeping skeptical arguments will inevitably struggle to identify one kind of influence on our moral beliefs that is defective in all the relevant contexts. So, at best we should admit a limited skepticism about moral knowledge. This might seem to be a form of pessimism, but acknowledging our limits in a rationalist framework makes way for empirical optimism. We can do better, for example, by correcting for cognitive biases (including overconfidence), developing intelligent emotional responses, and increasing deference to experts on complicated topics that warrant it (more on this in Chapter 10). Optimistic rationalism about moral judgment, however, is only half the battle. Even if we can know right from wrong, the science may warrant pessimism about our being able to act appropriately. The next part of the book addresses evidence that we’re primarily motivated by self-interest and non-rational passions, not our moral beliefs. Virtuous action may be a rarity at best because, even when we do what’s right, it’s often motivated by the wrong reasons.

PART II: Moral Motivation & Virtue

Ch. 6: Beyond Self-Interest

…if [people] suddenly see a child about to fall into a well, they will without exception experience a feeling of alarm and distress. They will feel so, not as a ground on which they may gain the favor of the child’s parents, nor as a ground on which they may seek the praise of their neighbors and friends, nor from a dislike to the reputation of having been unmoved by such a thing.
– Mencius

Word count: 9,031

6.1 Introduction
The first part of this book argued that ordinary moral thinking is a fundamentally rational enterprise that can yield justified moral beliefs, despite being heavily influenced by processes that are commonly automatic and unconscious. We now turn our attention to moral motivation and its empirical challenge: even supposing we can know right from wrong, this knowledge may utterly fail to properly guide our behavior. There is of course no doubt that people frequently behave badly. Even when we know better, immoral actions can result from failures at many stages in the process, from lack of knowledge to just plain bad luck. Our focus, however, will be on motivation. Certainly, we can behave well or badly for the wrong reasons. But our guiding question now is: Are we often motivated to do what’s right for the right reasons? Consider the brave Holocaust rescuers in Nazi Europe, such as Raoul Wallenberg, who famously used his wealth and power to save thousands of Hungarian Jews. Why do such people voluntarily engage in these magnanimous acts, risking their own well-being in the process? We often suppose they were ultimately motivated by their moral convictions—they recognized it was the right thing to do. Suppose we discovered, however, that the drive was, even if unconsciously, for immortality via posthumous fame. Our moral praise and esteem for such people would likely diminish, if not extinguish. Even if they did the right thing, they weren’t being fully virtuous (Arpaly 2003; Markovits 2010). Scientific evidence could reveal that we rarely act for the right reasons, which generates another kind of pessimism, but this time about virtuous motives. This second half of the book will accordingly focus on motivation. We’ll follow the standard philosophical convention of using the term desire broadly to denote any mental state that itself is, constitutes, or includes motivation. While seeing a giant spider may lead me to yelp and run away, the visual experience is not itself a motive or “motivation-encompassing” state (Mele 2003: ch. 1). Some theorists have more specific and controversial theories of desire (e.g., Schroeder 2004), but our concern is
with motivation, not the proper use of the term “desire.” So “desire” in our sense will be a quasitechnical term referring roughly to states whose function is to bring about some state of affairs. The contrast is with more cognitive states, like beliefs or perceptions, which aim to accurately represent some state of affairs (see Chapter 2, §2.1). Everyone in a stadium may believe that the home team will win the competition, but not everyone desires that state of affairs. There has been substantial debate about how one’s motives or desires are relevant to attributions of virtue, and we will address that in due course (Chapters 7 and 9). But our initial question is a more psychological one: Can we do what’s right for the right reasons? After all, empirical studies seem to suggest again and again that we are often ultimately driven by selfishness, mood, or arbitrary features of the situation—considerations we don’t regard as morally relevant. Over the next few chapters, I aim to show that we can be, and often are, motivated by the right reasons. In this chapter, we’ll see that we aren’t entirely motivated by self-interest and are often ultimately concerned for the welfare of others (altruism). The next chapter then argues that we’re often motivated to do what we believe is right (moral integrity)—whether this is a specific action (e.g., telling the truth), a general moral principle (e.g., be fair), or a commitment to morality as such (e.g., do whatever is right). In Chapter 8, I’ll resist neo-Humean theories on which beliefs, including moral ones, can only ever tell us how to get what we happen to antecedently want. Reason, I’ll argue, is not a slave to the passions in this way. Finally, in Chapter 9, I’ll examine pessimistic views that admit that we’re capable of being motivated by altruism and moral integrity but that these virtuous motives are rare. Such pessimism is inspired by studies suggesting that self-interest often overpowers our moral commitments or by situationist experiments that suggest we’re often ultimately motivated by arbitrary features of our circumstances. In this second half of the book, we’ll see that properly understanding the theories and the science defuses these various threats to virtuous motivation. In this chapter, we’ll consider and reject the view that we’re always ultimately motivated by self-interest. Few philosophers defend this view, but it is taken seriously by many scientists. Indeed, there is a wealth of scientific evidence that bears on the debate, much of it concerning empathy-induced helping. And some philosophers and scientists argue that this work fails to rule out all the relevant egoistic explanations of the data. We’ll focus on one of the most powerful criticisms based on the idea of self-other merging. One might worry that when empathizing we’re acting on ultimately non-altruistic motives, even if unconsciously, because compassionate feelings for someone in distress tend to cause us to blur the distinction between ourselves and the other. Some conceive of self-other fusion as supporting an egoistic hypothesis (we’re not motivated to benefit another), but it might also support an extreme form of selflessness (we’re not motivated to benefit anyone in particular, including ourselves). We’ll see that both approaches are flawed. The evidence suggests that we are capable of ordinary altruism—of being ultimately motivated by a concern for the well-being of other particular individuals, conceived as distinct from ourselves.

6.2 The Egoism-Altruism Debate
No one doubts that we’re all motivated by our own motives. (Who else’s motives could one act on?) But one might go further and say that we’re always ultimately motivated by self-interest. On this view, our motives always ultimately have a certain object or content: namely, what we take to be in our own self-interest. Many philosophers think this extreme theory—psychological
egoism—obviously can’t be right, because there are many examples of genuine altruism, in which one isn’t ultimately motivated by self-interest. Consider acts of heroism, like that of Matthew McQuinn who died in 2012 during the tragic mass shooting at a theater in Aurora, Colorado. McQuinn shielded his girlfriend from the bullets that the assailant, James Holmes, spread around the smoke-filled theater (Goode & Frosch 2012). Fictional examples of heroic self-sacrifice are no less familiar and are commonly considered by philosophers (e.g., Hume 1751/1998: App. 2.9, 167; Hutcheson 1725/1991: 278, Raphael sect. 327; Feinberg 1965/1999: 496). We can also consider more common and mundane examples, such as caring for one’s sick child or friend. These do seem like examples of genuine altruism. Introspection suggests to many of us that our motivations behind helping friends and family aren’t self-interest. We ultimately help because we love and care about them, not because we want to curry favor, feel righteous, or avoid a bad reputation. However, the consensus among psychologists is that a great number of our motives aren’t easily accessible via introspection. While introspection isn’t a worthless source of knowledge of our own minds, it’s quite limited in its ability to discern the nature of our most basic motives that may very well be unconscious (see, e.g., Nisbett & Wilson 1977). It could be that none of our desires to help others are ultimate or intrinsic; they’re merely instrumental to some unconscious desire to ultimately benefit oneself. This egoistic picture of motivation is considered a live empirical possibility by some, albeit few, philosophers (e.g., Slote 1964; Morillo 1990; Mercer 2001) and arguably by even more scientists (e.g., Hornstein 1991; Cialdini et al. 1997). To properly understand the egoism-altruism debate, we need to clarify the terminological terrain. Both philosophers and psychologists in this debate are concerned with the motives that ultimately underlie behavior. Philosophers often call these intrinsic desires, which are desires for something for its own sake, not as a way of promoting or gaining something else instrumentally (see, e.g., Mele 2003: 33-4). However, psychologists typically concentrate on the motives themselves, giving them labels like the following:
• Egoism: an intrinsic desire for only one’s own benefit.
• Altruism: an intrinsic desire for the benefit of another.
Philosophers, on the other hand, focus on the relevant theories in the debate and give them similar labels:
• Psychological egoism: the thesis that all our intrinsic desires are egoistic.
• Psychological altruism: the thesis that some of our intrinsic desires are altruistic.

Naturally, egoistic desires concern one’s own benefit, while altruistic desires concern the benefit of someone other than oneself (this is egoism and altruism conceived as motives). Now, sometimes we have desires that concern both oneself and another (dubbed “relational desires” in Sober & Wilson 1998). For example, is it egoistic or altruistic for you to have an intrinsic desire to play a mutually enjoyable game of tennis with me? Such desires can be altruistic, despite concerning one’s own benefit, provided they also include another’s benefit (May 2011a). Psychological egoism insists that one only ultimately acts to benefit oneself. This theory makes a claim about the motives people do in fact have, as opposed to those they should have. Yet much of the philosophical discussion over the centuries has concerned the evaluation of arguments that would establish a priori that we are universally motivated by self-interest. Given the poor quality of such arguments, there appears to be something of a consensus among philosophers today that genuine altruism exists.

Arguments against psychological egoism that appeal to introspection and common sense are rather weak, but there are others that have received more attention. For example, philosophers often worry that the theory isn’t empirically falsifiable. But it certainly is. Suppose I lend my sister some money because I want to help her post bail and avoid jail time. Further suppose that I don’t want to help her in order to make myself happy, or in order to avoid being shunned by my mother, or in order to achieve any other goal. Instead, my desire to help my sister is ultimate or intrinsic: I desire that state of affairs for its own sake. If even one of us is like this, then psychological egoism is false. So we should take the theory as an empirical hypothesis capable of being falsified. Many philosophers take Bishop Joseph Butler to have refuted psychological egoism in the eighteenth century. Broad proclaims that Butler “killed the theory so thoroughly that he sometimes seems to the modern reader to be flogging dead horses” (1930/2000: 55; see also Feinberg 1965/1999: 497). Yet Sober and Wilson (1998: ch. 9) have shown that this argument utterly fails to appreciate the importance of establishing that desires for things other than self-interest are ultimate or intrinsic. In fact, to my mind, Sober and Wilson show that all of the traditional philosophical arguments against psychological egoism are surprisingly flawed. Without rehearsing all the a priori philosophical arguments and their problems, let’s take psychological egoism as a live empirical claim, as some have now done (e.g., Stich, Doris, & Roedder 2010). While we shouldn’t rule out the possibility that non-empirical, philosophical arguments can advance or even settle the issue, there is a body of empirical research that deserves examination.

6.3 Empirical Evidence for Altruism
6.3.1 Early Empirical Debates
For much of the twentieth century, the empirical debate focused on the apparent puzzle of how a “ruthless” process like evolution, grounded in so-called “selfish” genes, could possibly give rise to altruistic motives. We now know that this pseudo-problem rests on a fallacy of “inferring the ‘true’ psychology of the person from the fact that his or her genes have proved good at replicating over time” (Blackburn 1998: 147; see also de Waal 2009: ch. 2). Compare: my desire for water when parched presumably exists in part due to its ability to aid in the propagation of my genes, but the desire is for water, not for the propagation of my genes. Indeed, discussions of altruism in evolutionary biology often just concern an entirely separate issue. What Elliott Sober and David Sloan Wilson call evolutionary altruism concerns mere behavior rather than motives: “An organism behaves altruistically—in the evolutionary sense of the term—if it reduces its own fitness and augments the fitness of others” (1998: 199). So a bird’s behavior might count as evolutionarily altruistic without being altruistic in our sense—that is, without being ultimately motivated by a desire to benefit another individual. None of this is to say that evolutionary considerations can’t help determine whether psychological egoism is true. Such considerations might even suggest it’s false, since evolutionary pressures might have positively favored altruistic motives. Sure, it’s possible that humans developed concern for their offspring, kin, and members of their community by coming to believe that it best ensures their fitness. But a more secure, reliable, and parsimonious mental
mechanism might be an ultimate or intrinsic desire for the welfare of such individuals (Sober & Wilson 1998). Nevertheless, since the egoism-altruism debate concerns human motives, more direct evidence would come from the sciences of the mind. Some philosophers have argued that the evidence from psychology points in favor of psychological egoism. Michael Slote (1964), for example, once took the theory to be a serious empirical claim and argued that it may well be true. However, he appealed to dated behavioristic learning theory, which has been widely rejected since the “cognitive revolution.” A more recent example is Carolyn Morillo (1990) who grounds her hedonistic “reward event theory” in the neuroscience of motivation and pleasure, especially as studied in rats. The behavior of humans and other mammals is heavily influenced by unconsciously learning which actions and events are likely to be rewarding (and aversive). Positive reinforcement learning involves internal reward events or signals in the brain, generated in part by releases of dopamine based on the difference between expected and actual rewards attained. According to Morillo “we intrinsically desire these reward events because we find them to be intrinsically satisfying” (173). The support for this theory lies in our understanding of the brain’s reward center, a set of primarily sub-cortical structures deep in the brain (e.g., the ventral tegmental area, hypothalamus, substantia nigra) involved in reward-based learning. Since these structures regulate the release of dopamine and influence pleasure and motivation, many regard the reward system as the spring of desire. Importantly, though, Morillo interprets the reward event as always centered on pleasure, thus making it the basic reward driving action. If correct, this would certainly provide an empirical vindication of psychological egoism (particularly psychological hedonism). However, Morillo readily admits that the idea is “highly speculative” and based on “empirical straws in the wind” (173). As it happens, later work in neuroscience casts serious doubt on construing the reward event as inextricably tied to pleasure. Single-cell recording of dopamine neurons in monkeys, for example, suggests that these neurons so central to motivation and learning respond to information about expected rewards, not pleasure (Schultz et al. 1997). Other studies, conducted by Kent Berridge and his collaborators, have found similar results by manipulating rats’ brains. They’ve produced substantial evidence that being motivated to get something or “wanting” is entirely separable from its generating pleasure or “liking” it (see, e.g., Berridge 2009). For example, by injecting drugs directly into targeted areas of the brain, rats experience increased motivation to engage in behavior, such as pressing a lever, without exhibiting any evidence of finding it more pleasurable. Against Morillo, Timothy Schroeder (2004) appropriately concludes that extant neuroscientific data are better explained by the hypothesis that the reward center of the brain “can indirectly activate the pleasure center than by the hypothesis that either is such a center” (81, emphasis added; see also Schroeder et al. 2010: 105-6). While motivation and pleasure often go hand in hand, they are not only dissociable in principle but in practice in the brain’s neural circuitry.

6.3.2 Developmental Evidence
Egoism may seem inevitable since humans apparently begin life with only self-interested drives. Using rewards and punishments, parents and caregivers spend considerable time trying to get children to overcome their selfish impulses. A satirical news headline from The Onion once read, “New Study Reveals Most Children Unrepentant Sociopaths”—it’s humorous in part because it
hints at a grain of truth. The intuitive line of argument for egoism here was once considered by Joel Feinberg (1965/1999: §4d). Although not a proponent of psychological egoism, Feinberg never adequately addressed this developmental argument, perhaps because it relies on decidedly empirical claims. But we now have experimental evidence that undermines such arguments. Numerous studies suggest that young children exhibit altruism without instruction. Toddlers as young as fourteen months often spontaneously help a stranger they perceive to be in need. In controlled studies, toddlers frequently help pick up an object—e.g., a clothes pin or marker—that the adult accidentally dropped, tried to retrieve, but cannot reach (Warneken & Tomasello 2007). Toddlers help significantly more than those in the control condition in which the adult does not appear to be in need, as he or she intentionally drops the object and doesn’t reach for it. Moreover, the children don’t seem to help just for the fun of it—as if picking up a marker is fun—since they continue to help at significantly higher rates even when helping requires temporarily ceasing an entertaining activity, such as playing in a pit of colored balls, and even when helping requires surmounting physical obstacles (Warneken et al. 2007). Perhaps the child helps others when they’re in need because they provide a cue (e.g., reaching for an object) that help is expected. However, by around two years of age, children even proactively help a stranger in need when the stranger isn’t aware of her own need. In one study, while the child is playing with toys, an adult is putting away some cans. One of the cans then falls on the ground and the adult, who has her back to the child, doesn’t appear to notice that a can fell. Children still frequently stop playing, pick up the can, and try to alert the adult (Warneken 2013). This line of studies suggests altruistic motivation, particularly because such helping behavior seems to arise before a child learns, even unconsciously, that such actions may promote one’s self-interest by winning friends and influencing people. Alternative egoistic explanations are always available, of course, since theory is always underdetermined by evidence. But many plausible egoistic explanations have been ruled out and those remaining seem strained. Still, it’s difficult to discern whether children ultimately help for the sake of another person’s well-being rather than some unconscious concern for self-interest, especially given that infants are quick automatic learners. Even children as young as six months of age can discern helpers from harmers and prefer the former (Hamlin et al. 2007). If being a helper themselves makes ordinary children happy and is generally expected by adults, then even young infants may well pick up on these facts, just as they pick up on their parents’ dialect, mannerisms, and of course outbursts of profanity. To further support the existence of altruism, then, we must turn to experimental work in social psychology that is more suited to uncovering the ultimate goal of helping behavior.

6.3.3 Empathy-Induced Altruism
Throughout his career, C. Daniel Batson has made a powerful case that humans can have genuinely altruistic motives. He defends the empathy-altruism hypothesis, which holds that empathy in particular tends to induce in us intrinsic desires for the well-being of someone other than ourselves. In its most succinct form: “Empathic concern produces altruistic motivation” (Batson 2011: 11). While Batson admits that the empathy-altruism hypothesis hasn’t been established with absolute certainty, he maintains that we have sufficient evidence to conclude: “Contrary to the beliefs of Hobbes, La Rochefoucauld, Mandeville, and virtually all psychologists, altruistic concern for the welfare of others is within the human repertoire” (1991: 174).

We should first clarify the term empathy. There are many uses, but researchers commonly conceive of it now as the ability to take on the perspective of another and as a result have similar feelings (and perhaps thoughts). For example, watching my daughter erupt in joy while celebrating an accomplishment, I empathize and am similarly filled with joy at her success. Many contrast empathy with sympathy and related forms of concern, which are likewise otheroriented but needn’t generate feelings that are similar to those of the person with whom one sympathizes. For example, I may sympathize with someone on the radio who mentions struggling with post-traumatic stress disorder, even though I don’t know what it’s like and don’t at the time feel similar anguish. Now, Batson (2011) doesn’t focus on empathy as a capacity but rather on empathic concern defined as “other-oriented emotion elicited by and congruent with the perceived welfare of someone in need” (11). As Batson recognizes (12), empathic concern may correspond primarily to what many would consider compassion (see also Nussbaum 2001) or perhaps a kind of sympathy. But our general capacity for empathy may well be doing the work in this line of research, and Batson does construe empathic concern as involving a similar valence of emotion as the person in need. Batson begins by concentrating on a robust effect of empathy on helping behavior discovered empirically in the 1970s. The empathy-helping relationship is the finding that the experience of relatively high empathy for another perceived to be in need (“empathic arousal”) causes people to help the other more than relatively low empathy. However, as Batson recognizes, this doesn’t establish psychological altruism or egoism, because it only specifies behavior, not the intrinsic motive, much less whether it’s altruistic or egoistic. Given that there can be both egoistic and altruistic explanations of the empathy-helping relationship, Batson and others have devised experiments to test some of their differing predictions. The general experimental approach involves placing people in situations in which they have an opportunity to help someone they believe is in need while manipulating other variables in the situation. For example, in some early experiments, participants were led to believe that they were randomly assigned to watch a fellow undergraduate named “Elaine” (actually a confederate of the experiment) complete a task while randomly receiving electric shocks (see, e.g., Batson et al. 1981). After experiencing some of the shocks, Elaine reacted strongly and asked to pause the experiment, explaining that as a child she had been thrown from a horse onto an electric fence. The experimenter then asked participants if they would be willing to switch places with Elaine and take the shocks in her stead. In addition to providing an opportunity to help another in distress, experimenters could induce higher levels of empathy in various ways, such as describing the victim as similar to the participant and instructing participants to vividly imagine what the victim is going through. Such experiments provide circumstances in which egoistic vs. altruistic explanations of empathically induced helping behavior make different predictions about what people will do. Several egoistic explanations of the empathy-helping relationship are in competition with the empathy-altruism hypothesis. 
Each one claims that empathy causes people to help more because they tacitly believe that helping will serve an intrinsic desire to benefit themselves in some way. Consider one of the egoistic alternatives that was developed early on. On the “aversive-arousal reduction hypothesis,” empathizing with someone suffering makes one suffer too. Empathy thus tends to make one ultimately seek to eliminate this aversive feeling, and typically this can be done by helping the other in need. This egoistic hypothesis seems to predict that participants experiencing relatively high levels of empathy will not help another in need at higher rates if they’re able to easily exit the situation. Since empathy tends to fade after
one is no longer vividly witnessing the suffering of another, the easiest option, as far as self-interest is concerned, is to simply leave without helping (see, e.g., Batson 1991: 109ff). In other words, compassion for another in need is like the feelings of annoyance someone might experience when hearing a loud noise coming from a broken machine. One is expected to attempt to unplug the machine in order to make the annoying noise stop, unless one is able to simply leave the vicinity of the noise. Similarly, empathically aroused individuals are expected to help only because they can’t easily escape the situation. Batson and his collaborators (1981) designed six studies to test the aversive-arousal reduction hypothesis. They randomly assigned participants to one of four conditions that manipulated two variables: level of empathy and the ease of exiting the experiment. The data all conformed to the predictions of the empathy-altruism hypothesis, not the egoistic theory of reducing aversive-arousal. The results of an early experiment (see Table 6.1) show the common pattern predicted by the empathy-altruism hypothesis, in which high empathy leads to greater helping even when an alternative option is available that should better serve one’s self-interest.

Table 6.1 Proportion of Participants Offering to Help (adapted from Batson et al. 1981: 296)

                   Low Empathy    High Empathy
Easy Escape        .18            .91
Difficult Escape   .64            .82
Such data indicate that there is still an empathy-helping relationship even when people who are empathically aroused can easily opt to no longer witness the other in need (for review, see Batson 1991: 109-27). Importantly, ease of escape lowers rates of helping on its own, yet, as the empathy-altruism hypothesis predicts, empathy protects against this effect. When participants feel more empathic concern for the perceived victim, they help regardless of whether they can just exit the situation. These and similar studies cast serious doubt on the popular aversive-arousal reduction hypothesis. The intrinsic desire of empathically aroused individuals doesn’t seem to be the desire to avoid feeling empathy. Of course, there are other egoistic explanations available. Perhaps participants feeling higher empathy anticipate feeling especially guilty or regretful if they don’t help, or especially proud if they do. As Batson notes, these can sometimes seem rather implausible on their face, since the explanations must be “empathy-specific”—participants must perceive a greater threat to self-interest only when feeling more empathy. Perhaps, though, empathy draws our attention to one’s suffering in a way that makes the prospect of regret or similar feelings more salient. Nevertheless, numerous experiments test both antecedently plausible and implausible egoistic explanations (Batson & Shaw 1991; Batson 1991, 2011). The most prominent egoistic accounts posit that empathy activates intrinsic desires to either:
• relieve personal distress or aversive arousal,
• avoid self-punishment (e.g., feelings of regret),
• avoid social-punishment (e.g., a bad reputation),
• obtain rewards from self or others (e.g., praise, pride),
• obtain feelings of vicarious joy at relief from the suffering, or
• gain a mood-enhancing experience by helping.

A number of ingenious studies have been conducted which seem to rule these hypotheses out. For example, if empathy leads to seeking praise from others, then why does it increase helping rates even when participants are led to believe that no one will ever know whether they helped? And why aren’t they disappointed upon learning that they in particular can’t help while someone else can? Or, if empathy makes us ultimately want to avoid feeling guilty or regretful for not helping, then why does it continue to increase helping when we have an easy way to rationalize avoiding it—e.g., being told that others have also declined helping or that there are others who will likely help instead? Or, if empathy makes us want to vicariously share in the joy of being helped, then why do helping rates remain high among empathically aroused participants even when they don’t expect to know whether their helping was effective? Not only do the data fail to support an array of egoistic hypotheses, they all conform to the empathy-altruism hypothesis. Indeed, the data ultimately motivate egoistic theories to look more and more like the empathy-altruism hypothesis. A viable egoistic hypothesis should predict that empathy leads people to feel bad about the other person’s plight but that not just any boost in mood will relieve this pain. Empathy must produce an intrinsic desire with something like the following content: I am relieved from feeling bad about the suffering of this person by this person being helped. However, while one’s own benefit is essential to such a desire, the other person’s is essential too (which makes this a “relational desire”). But then it isn’t clear that we have an egoistic desire, since one is ultimately concerned about the welfare of another, even if one’s own benefit is also desired (May 2011a). In the end, something like the empathy-altruism hypothesis is well supported. Empathy does indeed appear to consistently induce an intrinsic desire for the person in need to be helped, even if other concerns are also present. Combining these social-psychological experiments with the developmental evidence provides a strong empirical case against psychological egoism.

6.4 Self-Other Merging The empirical defense of altruism relies heavily on helping behavior generated by empathic concern for others in need. A potentially powerful challenge to this line of research draws on the idea that empathy involves taking the perspective of another and feeling their pain. In this way, empathy may so thoroughly blur the distinction between self and other that it actually eliminates the possibility of either altruistic or egoistic motivation. There are two ways to formulate this challenge. One is positive: the merging of self and other leads to a kind of no-self that is an ideal praised by various moral and religious traditions. The other option is more negative: empathy so merges self and other that it looks more like egoism than altruism. Either way, though, we lose the ability to say that empathy yields concern for the welfare of someone distinct from oneself. However, after describing both ways of framing self-other merging, we’ll see that there is an overarching conceptual problem with the idea.

6.4.1 Merging as No-Self?

The idea that empathy leads to self-other merging is not particularly new. Some, however, consider it a kind of moral ideal, not a threat to altruism. After all, empathy leads to feeling another’s pain or joy as if it were one’s own, which can lead to a distinctive form of fellow feeling. Indeed, this thought crops up in a number of ethical traditions. Schopenhauer writes that acting from compassion for another “requires that I am in some way identified with him” such that the “entire difference between me and everyone else, which is the very basis of my egoism, is eliminated, to a certain extent at least... the difference between him and me is now no longer absolute” (1840/1999: 143-4). For these reasons, Schopenhauer believes compassion is the cement of the moral universe (see also Slote 2010). Derek Parfit (1984), in a different philosophical tradition in the West, famously argues that we must develop fully impersonal principles of morality, reminiscent of utilitarianism, that don’t rely so heavily on what he sees as a dubious separateness of persons. This picture of personhood and its moral implications is similar to the Buddhist notion of no-self (anātman or anattā), according to which our concept of an individual person—designated by “I”—is merely a useful fiction (Carpenter 2014: ch. 2; Flanagan 2017: ch. 11). Drawing on these traditions, Mark Johnston (2010) has taken such considerations to their extreme. He urges us to embrace the Christian notion of agape “or radical altruism” (49) based on a merging of self and other: The good person is one who has undergone a kind of death of the self; as a result he or she lives a transformed life driven by entering imaginatively into the lives of others, anticipating their needs and true interests, and responding to these as far as is reasonable. (14, emphasis added) Moreover, Johnston strikingly argues that by being “really good” one can “survive death,” without appeal to a supernatural afterlife. If we were to follow “the command of agape,” we “would survive wherever and whenever interests are to be found,” living on “in the onward rush of humankind” (296). These are no doubt lofty claims, connecting personal identity to morality. But there is increasing empirical evidence suggesting that this isn’t far from how we conceive of people across time. While continuity of memory accounts for much of how we identify individuals, changes to one’s moral opinions and behavior are apparently the greatest perceived threat to loss of one’s identity (see, e.g., Strohminger & Nichols 2014). Such research suggests there may be something to the way Johnston connects moral character to individuality. However, while this research helps to support one aspect of Johnston’s theory, it puts pressure on the claim that extreme empathic altruism amounts to survival. Very few of us are so altruistic. If I were to care about others the way I care about myself (extending beyond my close friends and family), then I might have already failed to survive as the same person. Suppose, say, that through meditation a previously callous and crass grandmother is able to undergo a “death of the self” and become one with everyone. She is suddenly kind and considerate, for she loves all just as much as she loves herself. If truly radical enough, this may make us think she’s no longer really with us, similar to the effects of severe dementia. Moreover, it’s not clear that radical altruism via self-other merging is psychologically realistic or morally ideal. 
People with mirror-touch synesthesia commonly report a kind of “hyper-empathy,” in which they seem to experience the very same bodily sensations they perceive in others, even in the same locations. Fiona Torrance (2011), for example, reports
feeling “as if my body was being beaten” when watching an unexpected torture scene in a film. Mirror-touch synesthesia thus comes with the burden of vividly experiencing the plight of others. However, the condition does have some positive effects on how one relates to others. Torrance’s experience is reminiscent of the commonly posited link between empathy and compassion: “I’m hugely considerate of other people—after all, I know exactly what it feels like to be them.” Some neuroscientific evidence suggests that the hyper-empathy characteristic of mirror-touch synesthesia is connected to empathy specifically. One study compared gray matter in people with mirror-touch synesthesia to controls. In synesthetes’ brains, there was less gray matter in an area that includes the right temporal parietal junction, which is associated with empathy, understanding the minds of others, and the distinction between self and other. The researchers note the possibility that the hyper-empathy that patients experience may be “a consequence of faulty self-other monitoring” (Holle et al. 2013: 1049).

6.4.2 Merging as Non-altruistic?
Empathy’s self-other merging may lead to a radical form of compassion for others, but then it seems superhuman and for that reason practically impossible. Other theorists have gone in a more negative direction and considered self-other merging to preclude any kind of altruism—ordinary or radical. Empathy may so blur the distinction between self and other that one is in some sense concerned with oneself, or at least not concerned with a distinct other. Consider the great love we have for family and friends and the personal sacrifices we make for them. The thought that this may in fact be non-altruistic goes back to at least Francis Hutcheson who considered (but ultimately rejected) the challenge: “Children are not only made of our bodies, but resemble us in body and mind; they are rational agents as we are, and we only love our own likeness in them” (1725/1991: 279, Raphael sect. 327). The materials for this kind of view are also in the work of contemporary psychologists engaging with the empathy studies, such as Daniel Wegner, Harvey Hornstein, and Art Aron, among others (see Batson et al. 1997). And the basic idea has been defended on the basis of case studies, though not in response to Batson’s project (Monroe et al. 1990: 122). The merging hypothesis is most explicitly formulated, and directed against Batson, by Robert Cialdini and his collaborators (1997). They contend that blurring of self and other motivates one to help the other for whom empathy is felt, but the “other” is represented in the content of one’s intrinsic desire as, to at least some extent, oneself. Empathy allegedly provides an “experience of oneness—a sense of shared, merged, or interconnected personal identities” (483). When one helps another in need due to self-other merging, Cialdini and his collaborators “doubt whether those helpful acts reflect the selflessness required for true altruism” (1997: 490; see also Neuberg et al. 1997; Maner et al. 2002). On this view, empathy doesn’t lead to altruism, even if the relevant motive isn’t necessarily egoistic. The empirical evidence Cialdini and his co-authors (1997) advance in favor of their theory involves teasing apart empathy from perceived oneness as variables and testing how predictive they are of helping. They had participants imagine various people in need and report their level of empathy, personal distress, and oneness. Crucially, oneness was measured using two items. One asked participants to indicate “the extent to which they would use the term we to describe their relationship” with the person imagined to be in need (484). The other item involved pairs of increasingly overlapping circles that represent self vs. other, and participants “selected the pair of circles that they believed best characterized their relationship” with the other
(484). Across several experiments, high measures of oneness, not empathic concern, predicted increased helping. Although Cialdini and his collaborators don’t always make this clear, what they’re suggesting is that the empathy-helping relationship is really a oneness-helping relationship: perceived oneness is the real cause of increased helping behavior, and empathy is a mere “concomitant” (483). There are several issues with these experiments. First, they did not, as Batson typically does, get participants to believe that someone is actually in need, providing a perceived opportunity to help. Instead, participants were asked to predict what they would do in an imagined situation, which might not be a reliable means of measuring helping behavior or generating empathic concern. Second, the measure of oneness is rather metaphorical and ambiguous (Batson, Sager, Garst, et al. 1997: 497; cf. also Badhwar 1993: §2). For example, use of the pronoun “we” is hardly indicative of mentally representing oneself as merging with another. It is interesting that oneness apparently correlated with predictions of helping. But oneness and empathy cannot so easily be separated with such measures, as some participants may report increased oneness as a way of indicating greater compassion or closeness. Third, Batson and colleagues (1997) conducted a series of experiments attempting to avoid these problems. To better measure only oneness, for example, the researchers compared participants’ ratings of themselves, and a person believed to be in need, on various personality traits. Yet no support for the oneness account surfaced. These issues raise serious problems for the experimental support of the oneness account. However, let’s turn to a more fundamental and conceptual problem with the very idea of selfother merging, either in its positive or negative form. We’ll see that we can’t take self-other merging too literally, and as a result we should conclude that empathy leads to genuinely altruistic motives.

6.5 Dividing Self from Other
So far, we have taken seriously the idea that empathy can involve a merging of self and other. A common view puts a positive spin on this: merging leads to entirely selfless altruism, impartial concern for others, and perhaps even immortality. Taken quite literally, however, the fusion of self and other seems to lead to a negative (or at least neutral) upshot: the intersubjectivity in empathy actually motivates egoism, or at least precludes altruism by making the egoism-altruism divide inapt. It’s not clear, however, that we should take self-other merging literally (Deigh 1995; Nussbaum 2001; May 2011b). Many scientists explicitly characterize empathy as involving separateness from others or “no confusion between self and other” (Decety & Jackson 2004: 75). On this view, as Nussbaum puts it, at least typically empathy is like “the mental preparation of a skilled (Method) actor: it involves a participatory enactment of the situation of the sufferer, but is always combined with the awareness that one is not oneself the sufferer” (2001: 327). There are several skeptics about the extent of self-other fusion, but few arguments in favor of this position. We can, however, articulate a powerful objection to literal self-other merging—one that suggests we’ll never find solid experimental evidence that oneness, instead of empathy, increases helping behavior. There are three main interpretations of self-other merging, each of which is problematic, so we can think of the overarching problem as a trilemma.

6.5.1 Peculiar Beliefs The most straightforward and literal interpretation of merging is that one mentally represents oneself as strictly (quantitatively or numerically) identical with “another.” Compare someone, say Kanye, who believes he is Jesus Christ—not just similar to the historical individual but literally him. This interpretation of self-other merging has the implausible implication that feeling empathy for someone in need tends to induce what would normally be classified as delusional beliefs (compare Nussbaum 2001: 328). Of course, we have seen that some prominent philosophers and traditions have denied there’s a real distinction between self and other. But they recognize that this isn’t typically part of ordinary people’s thinking, even when empathizing. Self-other merging is meant to be a counter-intuitive concept that requires great effort to accept and implement in one’s life (cf. Flanagan 2017: ch. 11). Consider even the so-called “hyper-empathy” in mirror-touch synesthesia: patients don’t appear to perceive themselves in the other person’s shoes. Rather, a feeling is generated in themselves, which is represented as occurring in their own body. The other person’s experience merely triggers a similar feeling in oneself.

6.5.2 Indeterminate Identities
Some proponents of merging, however, seem to explicitly reject this interpretation. Cialdini and his collaborators write: “We are not suggesting that individuals with overlapping identities confuse their physical beings or situations with those of the other” (1997: 482). These researchers speak only of “blurring” the distinction between self and other, in which the self is “dynamic” and “malleable.” We find a similar take in Owen Flanagan’s (2017) defense of the plausibility of the Buddhist conception of no-self. He says that we shouldn’t necessarily think of selves “as literally fused,” for it “might be true that selves are normally metaphysically and psychobiologically bounded in the minimal sense such that each of us is a subject of experience, an I” (248). Still, Flanagan suggests that the ego vanishes at some psychological level: at what I “identify with” or at the level of “what I care about and judge is important” (246). However, this interpretation of self-other merging still seems to require a sharp division between self and other. Suppose John looks across the room and sees a man with a spider in his hair. If he mentally represents that individual “de se” or as himself (say he believes he’s looking in a mirror), then this will normally motivate vigorous patting of his own head. If, on the other hand, John represents the individual as another (as distinct from himself), then this will typically motivate rather different behavior, such as calling out to notify the other of the spider. In predicting and explaining behavior, then, it’s normally crucial whether the actor conceives of an individual first-personally or third-personally. As John Perry (1979) famously points out, this is in some ways similar to indexical expressions—like “I,” “here,” and “now”—which seem to have irreducible reference to the self (e.g., to the speaker or to the speaker’s location in space or time). Perhaps we needn’t posit irreducibly perspectival facts in language, but there is a distinctive first-personal mode of presentation that seems necessary to explain certain actions (cf. Paul 2017). Emotions also often contain an essential reference to the self. For example, there may be a sense in which I “fear” for a soldier as she charges into battle, but normally fear is for a predicament one conceives as one’s own (cf. Williams 1970; Nussbaum 2001: 30-1). Similarly, I
would not feel guilt unless I distinguished what I did from what someone else has done. I might in a loose sense feel “guilty” for what my child has done, but arguably this is really a feeling of guilt for not teaching her better or taking some other action myself that relates to her mistake. Even if self-reference isn’t an essential component for the emotion, when a self-referential element conjoins with emotion its character and effects on action are again distinctive. I may empathize with my coworker’s financial windfall and share in the excitement, but the feeling of excitement would be markedly different if I were to represent myself as winning the lottery—and I would then behave rather differently. Return now to empathy-induced helping behavior. If we suppose that empathically aroused individuals mentally blur the distinction between themselves and the other, then we would expect rather different behavior. Perhaps they would be unsure how to act, who to help, which pronouns to use (not just “we” but “me” vs. “her”), and so on. Mentally blurring the selfother distinction would normally have dramatic and distinctive effects on behavior. Yet even infants help others without any confusion or evidence of self-other blurring. This makes it plausible to attribute to them a third-personal representation of the other as distinct from themselves. There is certainly a sense in which empathy causes us to experience the world from another’s perspective. People with mirror-touch synesthesia often describe their experience as if they’re living another person’s experience. Torrance (2011) writes: “When I watch a film, I feel as if I’m starring in it.” Compare ordinary reactions of flinching when witnessing someone about to incur bodily damage. However, such descriptions are compatible with representing the threatening force and its subsequent effects as occurring in another’s body. When empathizing with someone in physical danger at a distance, one doesn’t jump out of the way as if representing oneself as literally in the other’s shoes. Instead, we cringe, flinch, or avert our eyes—actions that presume a representation of oneself as distinct from the other. Indeed, common descriptions patently represent another as distinct from oneself: “I was constantly crying—not because something had happened to me, but because I had seen someone else crying or felt someone else’s pain” (Torrance 2011, emphasis added).

6.5.3 Sharing Properties Finally, one might weaken the merging proposal to simply say that empathy tends to make us locate, not our identities, but shared properties or qualities in others. We don’t represent ourselves as strictly identical but rather as similar to the other. Cialdini and colleagues do at times seem to conceive of their proposal along such lines when they suggest that empathy makes people “locate more of themselves in the others to whom they are closely attached” (1997: 483). Here the explanation of the act of helping another is supposed to be explained, it seems, by appealing to an intrinsic desire to benefit oneself combined with a belief that aspects of oneself are, as it were, over there in that other body. This provides the tools for explaining participants’ behavior of helping another distinct from themselves, since two individuals are represented (it’s only the properties that are identified). This account has two problems, which aren’t mutually exclusive. First, it seems again rather implausible empathy leads people to believe that aspects of themselves exist in other people’s bodies. Second, and more importantly, this account can’t appeal to non-altruistic desires while still explaining the helping behavior. Ultimately, these “aspects” must be properties pg. 119 of 206
of individuals, and believing that one shares certain properties with another doesn’t entail that the intrinsic desire of the individual is not to help another. Empathizing with you may lead me to think we have the same taste in music, but that doesn’t preclude a concern for your well-being. Maybe it depends on the property. The motive might be non-altruistic if what’s apparently shared is specifically the property of being happy (or some such benefit). Perhaps my intrinsic desire is to benefit myself and I believe that I can do this by promoting the property of being happy in you, since I also believe this is the exact same property I have. However, in addition to attributing a peculiar belief to people, this explanation would be too narrow to account for the empathy-helping relationship in its entirety. Research on empathy indicates that there are various similarities (e.g., same race, gender, age, background, etc.) that can induce higher levels of empathy for another perceived to be in need. People needn’t believe they are similar to the needy other in respect of harm or benefit to demonstrate the empathy-helping relationship (see the discussion of the “similarity manipulation” of empathy by Batson 1991: 114). While it’s plausible that normal individuals will always believe that the other will benefit from help, there’s no reason to think that empathy always induces the belief that they are similar in respect of that benefit or harm, let alone identical.

6.5.4 Separating Self and Other It's thus difficult to take literally the idea that empathizing with another in need makes us collapse or blur the self-other distinction. The intersubjectivity in empathy is most easily captured by positing a strong mental divide between oneself and another. So it doesn't seem that we can appeal to merging to ground a non-altruistic account or one in which there is a death of the self. Indeed, agape (and the immortality it may promise) may be impossible for humans, because it may not be a psychological or conceptual possibility. Psychologically, it may require pathology to achieve (compare mirror-touch synesthesia), which comes with great burdens. Conceptually, agape may prevent anyone from appropriately interacting with the world. How, after all, could John get the spider out of his own hair (or another's) if he mentally blurs or abandons the distinction between self and other? The Upanishads are certainly right: "Who sees all beings in his own self and his own self in all beings, loses all fear" (Isha Upanishad 6). But this might be a terrible thing to lose. There is a first-personal element in fear, and many other mental states, that is important for the kind of prospection that allows animals like us to navigate our environments via first-personal "egocentric" maps (Seligman et al. 2016: 68-71, 242-3). Humans are certainly capable of third-personal mental representation too, but we can't rely on this alone when helping ourselves or others. Thus, a death of the self may, paradoxically, yield a failure to effectively help anyone, including oneself. The necessity of self-representation, even for beneficence, is crucial. Some conceptions of the Buddhist doctrine of no-self grant that we must posit some conception of the individual as distinct from others. However, the idea still seems to be that this watered-down notion of "self" lacks first-personal representation; it "does not invite identification" or "prompt 'mine'-thoughts," as Amber Carpenter puts it (2014: 28; cf. also Flanagan 2017: 246). Again, while we sometimes do engage in such third-personal representation, we cannot purge first-personal representation from the minds of creatures like ourselves.


At this point, one might ask how we can explain the effects of empathy without positing at least some substantial self-other blurring, especially given that one empathizes better with those similar to oneself. Otherwise, why should empathy make us more concerned for others if we don’t somehow merge ourselves with them? The answer may lie in the simple idea that empathy for another’s plight draws our attention to them, connecting them to ourselves (compare Hume 1739-40: 2.2.7). We needn’t conceive of this in self-interested terms. Empathy may simply induce relational desires, which concern both self and other. I might, for example, desire to be the one to help Aashka or desire to have a mutually enjoyable game of tennis with Sasha. Again, such desires aren’t egoistic, since they represent another person as an essential beneficiary, not an individual who is merely essential to one’s own benefit (May 2011a). In this way, empathy can increase altruism, not because we merge self and other, but rather because empathy focuses one’s attention on others and connects one’s concerns with theirs.

6.6 Conclusion Once we take psychological egoism seriously as a live thesis about our intrinsic motives, there is every reason to address empirical work that bears on it. Evolutionary pressures certainly shaped our minds, but they alone do not establish that we’re all ultimately motivated by self-interest. The best evidence comes from decades of experiments in social and developmental psychology on empathic concern and helping behavior. Empathy seems to motivate more helping behavior, and this isn’t best explained by egoistic desires to gain praise or avoid self-censure. One powerful objection, however, maintains that the role of perspective-taking in empathy leads to self-other merging. Whether this is conceived as requiring superhuman feats of agape or as precluding ordinary altruism, merging is a problem. However, no matter how we interpret such a self-other merging account, it’s untenable. The empathy-merging hypothesis can’t appropriately explain the increased helping behavior observed in experimental settings. Even if empathy’s intersubjectivity makes us neither self-interested snakes nor selfless saints on the path to immortality, it does ground ordinary altruistic concern for others who are conceived as distinct from oneself. We can thus rule out one form of pessimism about moral motivation—psychological egoism—for we are not always ultimately motivated by self-interest. Even if we perceive net personal losses, we can still intrinsically desire to help a friend in need, especially if we feel compassion for them. This opens up the possibility that we can also ultimately desire much else for its own sake—to destroy enemies, attain freedom, achieve towering fame, acquire massive wealth, or even do the right thing. Other forms of pessimism, however, admit the existence of a plurality of intrinsic desires that includes altruism. It is to those theories that we now turn.


Ch. 7: The Motivational Power of Moral Beliefs Word count: 10,242

7.1 Introduction Suppose the previous chapter has successfully shown that we can be ultimately motivated by altruism. Egoism as an intrinsic motive no doubt exists, but the theory of psychological egoism, which takes self-interest as the only ultimate goal of all human action, is utterly implausible in light of common experience and rigorous experiments. At least two intrinsic desires are within the human repertoire: egoism and altruism. Pessimists will argue that this alone is no consolation. We only tend to strongly empathize with those already in our inner circle, such as friends and family, but not distant strangers equally deserving of consideration. Since altruism has limited benefits and some liabilities, we need moral convictions to keep us in line. But are we sufficiently motivated by our moral principles? There appear to be numerous examples in which we are so motivated. Martin Luther King Jr. sacrificed so much for the civil rights movement, not because he sought fame and fortune, but because he believed it was right to fight against injustice. A similar motive was likely present in Edward Snowden when he disclosed classified information about the National Security Agency, including secret surveillance programs that illicitly amassed personal data on many ordinary American citizens. Snowden knew he would have to endure hefty sacrifices and risk infamy, such as living in exile for what many would consider treason. It seems a stretch to think he ultimately did it merely for notoriety, riches, or an indefinite stay in Moscow. Joseph Gordon-Levitt met with Snowden in preparation to play the man in a film, and the actor said he "left knowing without a doubt that what [Snowden] did, he did because he believed it was the right thing to do" (Lamont 2015). Such explanations of each other's behavior are quite commonplace (more on this in the next chapter). It's important, though, to recognize that our moral beliefs can be corrupted and motivate unequivocally bad behavior, even atrocities. In the antebellum South, whites didn't just institute and maintain slavery for explicitly economic reasons; most rationalized it by dehumanizing Africans as not fully persons. More ordinary moral failings—from infidelity to embezzlement—are rationalized too, whether by shirking responsibility or selective attention to evidence. Even when bad behavior is rationalized in order to avoid the cost of doing what's right, the prudential concerns that motivate are normative ("I really oughtn't risk my political career just to help some preacher from Georgia; it can wait"). That is, moral considerations aren't the only considerations
that provide reasons for action, and thus are normative. So rationalizations can be guided by either moral or prudential concerns, assuming these domains are distinct. Appearances can be deceiving, of course. Despite accepting the existence of altruism, Batson describes himself as "cynical" about moral motivation generally. Based on the science, he writes that "motivation to actually be moral isn't as prevalent as often thought" while a desire to "appear moral yet, if possible, avoid the cost of being moral is common" (2016: 122). Perhaps we rarely have noble goals and are more often motivated by subtle forms of self-interest that are difficult to detect without experimental methods. Batson believes we rarely act from what he terms moral integrity: "Motivation with the ultimate goal of promoting some moral standard, principle, or ideal" (29). As we'll see in this chapter, moral integrity is meant to be expansive, including not only the motivation to do some action that one believes is right but also the motivation to uphold a moral principle or even morality in general. Even if moral integrity in any of these guises is possible, though, it may be rare—present perhaps in only some historical heroes. If so, virtue likewise may be rare, and pessimism prevails. In this chapter, we'll closely examine the evidence and draw a more optimistic conclusion: moral beliefs play a frequent and pervasive role in motivating action. We have a deep regard for reason that is not merely instrumental to some egoistic concern. Somewhat paradoxically, this is especially highlighted when we fail to live up to our normal ideals, as we frequently attempt to rationalize or justify such failings to ourselves and to others. Research on motivated reasoning, moral licensing, and related phenomena reveals that we frequently justify both immoral and imprudent behavior in a self-deceived manner. Indeed, we'll go so far as to temporarily lower our standards or ignore evidence that our actions conflict with them. Self-interested drives are certainly involved but often in the service of reason.

7.2 Ante Hoc Rationalization The goal of this chapter is to show that normative beliefs play a more prominent role in ordinary motivation than we often realize. To demonstrate this, we’ll consider a wide range of scientific research on motivation concerning two kinds of normative domains: morality and prudence. Much of this work addresses temptation of various sorts—whether temptation to cheat on one’s spouse or on one’s diet. But first let’s consider a framework for understanding how such beliefs can influence motivation.

7.2.1 Temptation by Appeal When we're tempted to do something, we are drawn to it—we desire it—despite our having at least some sense that the action is not in fact desirable. So it's quite natural to think that desire is the ultimate source of motivation when we succumb to temptation. But that's a bit too quick. Often we rationalize succumbing to temptation, in the pejorative sense of attempting to justify what's not in fact justifiable. Sometimes this occurs after the fact, as in Haidt's moral dumbfounding studies (see Chapter 3, §3.4.2), which yields post-hoc rationalization that involves coming up with reasons that didn't actually drive one's behavior (cf. Rust & Schwitzgebel 2014). However, we're also capable of what we might term ante hoc rationalization, which occurs just before the choice and can then motivate one's ensuing action.


Rationalizing before acting is a frequent and familiar phenomenon. For example, when one contemplates eating a delectable donut after resolving to stay on a strict diet that prohibits such food, one might think something along the lines of “It’s okay just this once” or “I deserve this, because I worked out earlier.” Or, in evaluating someone else’s work, where I have no personal stake in the results, I will spot many problems. Yet, when evaluating my own work, my attention will be drawn to all the positive features and away from mistakes. Evaluating myself in a more positive light than I do others provides a justification for all the self-serving actions I will later take, such as letting myself off the hook for being inconsiderate or late to meetings. Such self-serving rationalizations were even recognized in antiquity, as in the Buddhist saying: “It is easy to see the faults of others, but difficult to see one’s own faults.” Compare also Aesop’s fabled phenomenon of “sour grapes.” Often when we expect some goal is out of reach, such as being honest, we rationalize giving up on it: “It’s really for the best; it would have hurt her feelings too much anyway.” Even writers during the Enlightenment recognized the ways in which their beloved reason can devolve into rationalization (see, e.g., the “natural dialectic” in Kant 1785/2002: Ak 4:405). Modern science now allows us to demonstrate such phenomena experimentally, and we know that in the right circumstances some of us can rationalize allowing or committing atrocities by disengaging with or exempting oneself from common moral standards (Bandura 1999)—e.g., “It’s none of my business” or “It’s really for the greater good of humanity.” So it often seems that when we give in to temptation we delude ourselves into thinking that it was really acceptable or even the thing to do. As Gary Watson (1999/2004) puts it, “desire enslaves by appeal, rather than by brute force” (66) so that our better judgment is “not so much overpowered by brute force as seduced” (71). In this way, normative judgments may in fact play an important role in motivation that on the surface seems only governed by desire (cf. also Scanlon 1998). The existence of rationalization should perhaps be unsurprising in humans. Our intentional actions tend to make sense from the actor’s point of view by having what philosophers call a “rationalizing explanation,” but devoid of pejorative connotations. As Donald Davidson famously puts it: A reason rationalizes an action only if it leads us to see something the agent saw, or thought he saw, in his action—some feature, consequence, or aspect of the action the agent wanted, desired, prized, held dear, thought dutiful, beneficial, obligatory, or agreeable. (1963/2001: 3) For example, if a person’s act of turning on a radio is caused and explained simply by his belief that it’s off, the act remains in some sense unintelligible. The explanation doesn’t reveal any “favorable light” in which this perplexing individual saw the action, as John McDowell would put it (1978/1998: 79). Yet desiring to alleviate one’s anxiety about silent radios arguably is intelligible—or at least let us grant (contra Quinn 1993/1995). Perhaps more clearly, any action is rationalized by believing that the action is right (or best, called for, etc.) simply because such a normative belief clearly specifies that the individual saw something favorable about the action. Of course, the relevant normative belief might not be justified. 
The action remains rationalized from that person's deluded perspective, but it's then rationalized in the pejorative sense: one's rationalizing explanation is a poor one. While ante hoc rationalization is common enough, such a picture of temptation may seem obviously over-intellectualized, and thus doomed from the start. Attributing changes in
normative judgment to oneself or to others is of course rather implausible if we’re supposed to always be aware of such judgments or if we deny the existence of brute urges that don’t involve such judgments. But both of these claims can be jettisoned (cf. Arpaly 2003; Schapiro 2009). We admittedly don’t always rationalize consciously at the time, and sometimes we succumb to temptation without a change in normative judgment. While rationalization can involve inference, it’s not always a matter of conscious deliberation. Temptation by appeal is a matter of normative or evaluative thought that isn’t essentially conscious—what we might call normative cognition (compare the account of unconscious reasoning in Chapter 3). In ordinary cases of temptation, the motivational chain that leads to succumbing at least sometimes involves an implicit reevaluation of one’s options and their merits. On this view, normative cognition needn’t always involve explicit deliberation, like the mental process one engages in when deciding who to vote for in an upcoming election. This conception of temptation is so far largely based on armchair reflection. Nevertheless, we’ll see that it fits well with a wide range of empirical studies. A first step in that direction is to note the abundance of research demonstrating the ubiquity of implicit normative cognition. For example, we find across cultures that humans have a “norm psychology” such that even as young children we automatically and unconsciously acquire social norms, readily detect norm violators, and promptly enforce the rules (for review, see Henrich 2016). We can also consider a wealth of data produced by Joshua Knobe and others, which indicates that normative and evaluative considerations unconsciously influence much of our ordinary thinking. We’re more inclined, for example, to count a side effect of a person’s action as intentional if we think the side effect is bad rather than good. Similar effects have apparently been found for ordinary ascriptions of deciding and advocating, judgments about causation and knowledge, and more (Knobe 2010). Even ordinary attributions of weakness of will appear to be influenced by whether one gives into a temptation to do something bad (e.g., May & Holton 2012). Now, perhaps we should explain some of these cases away such that normative considerations don’t affect practically all areas of folk psychology. Perhaps Knobe is wrong that these effects reflect our ordinary competencies rather than mere performance errors. But we need only rely on this line of research to reveal what should be a rather uncontroversial truth: norms permeate our mental lives, even if unconsciously. Of course, the ubiquity of implicit normative cognition doesn’t alone establish that it frequently affects motivation. We must turn to empirical research on temptation, which reveals that both adults and children easily rationalize bad behavior and imprudent choices by appeal rather than brute force. We already saw (in Chapter 3) that rapid and unconscious inference shapes moral judgment; now we’ll see that rapid and unconscious changes in moral judgment shape behavior.

7.2.2 Rationalizing Imprudence Our focus is on moral beliefs and how they influence behavior, but morality isn't special in this regard. The phenomenon seems to be tied more generally to normative beliefs, or a concern for acting justifiably by some standards or other. Assessments of prudence, in particular, influence action even if we assume that they extend beyond the domain of ethics. Briefly examining the power of prudential beliefs will help explain how moral beliefs pervasively impact what we do because they too are normative.


Numerous studies on consumer choice show that we can rationalize or "license" imprudent actions, such as making indulgent purchases. As common experience suggests, when we previously exhibit restraint by not buying an unnecessary luxury item, this leads to an increased tendency to later make impulsive purchases of such items. Earlier restraint makes you feel warranted in indulging later—after all, you deserve it. Later indulgence is "licensed" by having earlier exhibited virtuous restraint. On the other hand, when you indulge earlier, you feel guilty and can't justify indulging later. Some experiments show that justifiability in particular is at least one way in which we license imprudence. Consider one series of studies in which participants could choose to buy (or vividly imagine buying) an indulgent product (Mukhopadhyay and Johar 2009). In one experiment, people had an opportunity to purchase some fancy chocolate with some of the money they would earn for participating in the study. After making this decision, they were asked to choose either cake (indulge) or fruit salad (restrain). As in previous experiments, the researchers found that, when there was no justification available for earlier indulgence, earlier restraint led to marginally greater indulgence later (see the first column of data in Table 7.1). Such findings alone suggest rationalizing via prudential licensing (e.g., "I didn't buy that chocolate earlier, so I deserve the cake").

Table 7.1: Proportion of Later Indulgence by Choosing Cake (adapted from Mukhopadhyay and Johar 2009: 341)

                        No Justification for       Justification for
                        Earlier Indulgence         Earlier Indulgence
Indulged Earlier               36.8%                     82.4%
Restrained Earlier             58.5%                     37.5%

“Justification” refers to the condition in which the proceeds from the earlier purchase of chocolate would be donated to charity. “Indulged earlier” refers to purchasing the chocolate, even if it’s not the best term in the donation context.

However, the researchers aimed to explicitly test whether manipulating justifiability would alter behavior. Half of the participants were randomly assigned to a group told that the proceeds from purchasing the fancy chocolate would go to charity, which provides a justification for indulging. As a result, those who didn't purchase the charitable chocolate couldn't see that as particularly virtuous or as restraint, since declining involved not only resisting sweets but also passing up a chance to support charity. Prudential licensing then wasn't available to increase later rates of indulgence, reversing the previous pattern of results when no justification was available (see the second column of Table 7.1). Many other experiments have indeed demonstrated prudential licensing and implicate rationalization in the explanation of this and related phenomena. As a recent review of the research puts it, "a justification-based pathway is an important and common route to self-regulation failure in many behavioral domains" (De Witt Huberts et al. 2014: 132). Perhaps what immediately motivates people is the positive feelings associated with acting justifiably. But the point is that what drives the feeling—or at any rate the motivation—is the implicit normative cognition that changes with one's rationalization.
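To make the reversal concrete, here is a minimal sketch in Python of how one might check the two comparisons in Table 7.1 with a standard two-proportion z-test. The cell counts are hypothetical (40 participants per cell), chosen only to reproduce the table's percentages; they are not the study's actual sample sizes, and this is not the authors' own analysis.

# Illustrative only: two-proportion z-tests on hypothetical counts that
# match the percentages in Table 7.1 (not Mukhopadhyay and Johar's data).
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    # Normal-approximation z-test for a difference between two proportions.
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return round(z, 2), round(p_two_sided, 3)

n = 40  # hypothetical participants per cell

# No justification: restraint earlier vs. indulgence earlier (58.5% vs. 36.8%)
print(two_proportion_z(round(0.585 * n), n, round(0.368 * n), n))  # z about 1.8, p about .07

# Justification available: the pattern reverses (37.5% vs. 82.4%)
print(two_proportion_z(round(0.375 * n), n, round(0.824 * n), n))  # z about -4.1, p < .001

With these hypothetical counts, the first contrast is only marginally significant while the reversal under justification is large, which is consistent with how the results are described above.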


Prudential licensing isn’t the only relevant phenomenon. Other choices within the domain of prudence also seem to involve changes in one’s beliefs about what’s best, from the famous studies of self-control and “ego depletion” to the notorious marshmallow studies and other research on “delay of gratification” (see, e.g., Karniol & Miller 1983; Holton 2009; Levy 2011). Our primary concern is with moral beliefs, though, so we’ll focus our attention on them.

7.3 Rationalizing Immorality We rationalize immorality just as much as imprudence. Consider how politicians rationalize their voting choices and other morally motivated actions. In the United States, for example, if a seat on the Supreme Court becomes vacant, presidents are able to have a powerful influence on the future of the country by nominating a new justice. When this occurs in a president’s final year of office, as in the case of Barack Obama in 2016, the Senate can often delay considering the nomination until a new president enters the White House. In such cases, those in congress commonly tout the importance of following their Constitutional “duty” to proceed with the confirmation process—but only when the current president is a member of their own political party, of course. Otherwise, we should absolutely “let the people decide” with their choice of the next president in the upcoming election. Neither Democrats nor Republicans are immune to this kind of casuistry.

7.3.1 Motivated Moral Reasoning It might seem that a natural starting place for moral rationalization is moral dumbfounding, which we encountered many chapters ago (in Chapter 3, §3.4.2). Recall that Haidt and his collaborators (1993) found that participants would often intuitively condemn taboo violations that were apparently harmless, such as using one's national flag to clean a toilet. When asked for reasons, people seemed to engage in post-hoc rationalization of their initial gut feeling. Being post-hoc, however, such rationalization doesn't necessarily affect participants' initial judgments and behavior. People might be concocting justifications to please the experimenter who is explicitly asking for them. A better source of evidence is the literature on motivated reasoning. In both the lab and the field, it's clear that we evaluate evidence more favorably when it matches our preferences, goals, and values (for review, see Kunda 1990; Mercier & Sperber 2011). In fact, our motives can influence our reasoning about both what to do and what to believe—wishful thinking knows no boundaries. Only recently, however, have researchers focused heavily on motivated moral reasoning and its effects on moral motivation. Consider first a provocative study which shows, as ordinary experience would suggest, that sexual arousal corrupts judgment. Dan Ariely and George Loewenstein (2006) had men report their judgments about what they would do in a range of hypothetical scenarios involving sexual intercourse and barriers to it. All participants viewed erotic photographs, but some were randomly assigned to be in a group that viewed the images while being especially aroused at the time by privately engaging in self-stimulation (or what Kant would describe as "wanton self-abuse"). As predicted, participants' judgments changed when especially aroused: they were more inclined to say they would have unsafe sex and to engage in a range of unethical behavior, such
as drugging a sexual partner or encouraging her to drink more alcohol in order to increase the likelihood of intercourse (see Table 7.2).

Table 7.2: Mean Self-Reported Likelihood to Engage in Behavior (adapted from Ariely & Loewenstein 2006: 94)

Would you take a date to a fancy restaurant to increase your chance of having sex with her?  (Low Arousal: 55; High Arousal: 70)
Would you tell a woman that you loved her to increase the chance that she would have sex with you?  (Low Arousal: 30; High Arousal: 51)
Would you encourage your date to drink to increase the chance that she would have sex with you?  (Low Arousal: 46; High Arousal: 63)
Would you keep trying to have sex after your date says "no"?  (Low Arousal: 20; High Arousal: 45)
Would you slip a woman a drug to increase the chance that she would have sex with you?  (Low Arousal: 5; High Arousal: 26)

Answers measured on a visual-analog scale from "No" on one end (0) to "Yes" on the other (100) with "Possibly" in the middle (50). All differences between Low and High Arousal groups are statistically significant at p < .01.

Ariely and Loewenstein conclude that, at least for men, sexual arousal can "decrease the relative importance of other considerations such as behaving ethically toward a potential sexual partner" (95). While the experimenters didn't explicitly measure moral beliefs, changes in choices about what one would do suggest a change in one's judgment about what one should do. Other studies provide more direct evidence that we can implicitly rationalize the use of different principles when making moral judgments. In one experiment, Uhlmann, Pizarro, Tannenbaum, and Ditto (2009) had participants rate the morality of saving 100 innocent people by pushing a single innocent person to his death. This variant of the Footbridge dilemma (see Chapter 3, §3.3.3) apparently pits a "consequentialist" rationale for pushing (save more people) against a "deontological" rationale for abstaining (don't actively kill). The researchers found that liberals were less likely to approve sacrificing one if he had a name that was stereotypically black ("Tyrone Payton") rather than white ("Chip Ellsworth III"). Liberals, in other words, were less inclined to adopt a more "utilitarian" rationale, which focuses on good overall consequences, when it came to sacrificing a minority. In another study, Uhlmann and his colleagues primed students at American universities with either the idea of patriotism or of multiculturalism and then asked about the morality of collateral damage. Everyone then rated the morality of a military action that would lead to some civilian deaths but take out key leaders of the opposition, preventing many future deaths. Those students primed with patriotic thoughts were more likely to rationalize the "utilitarian" choice when the hypothetical involved American forces sacrificing Iraqi civilians, rather than Iraqi forces sacrificing American civilians. Spock's famous line should apparently be qualified: The needs of the many outweigh the needs of the few when the few are outsiders. Moral judgments in these studies and more were affected, even though most people surveyed say that the race or nationality of potential victims shouldn't affect whether one ought to promote the greater good at their expense. However, the temptation to uphold certain moral or political ideologies apparently made certain moral principles or rationales seem more plausible.
(As Groucho Marx once quipped: “Those are my principles, and if you don’t like them, well, I have others.”) One might argue that in cases of rationalization feelings are doing the real work. Participants feel more white guilt, feel more patriotic, or feel more sexually aroused. Sure, but participants whose feelings were elevated weren’t just motivated to act differently; they judged certain practices to be more morally acceptable than they would otherwise. In other words, the feelings worked via changes in moral judgment. Some might regard such normative cognition as inherently emotional, but we’ve already seen the limitations of such a sentimentalist approach (in Chapters 2 and 3). Now these studies of rationalization didn’t directly test whether participants would be significantly more motivated to opt for collateral damage or engage in dubious dating practices. To make the connection to motivation more explicit, we must turn to studies on moral licensing and moral hypocrisy.

7.3.2 Moral Licensing Licensing immorality can be just as easy as licensing imprudence. Feeling morally smug, for example, can lead to unconsciously rationalizing bad behavior. Suppose you're tempted to make up an excuse for why you missed a recent meeting (e.g., "My mother was ill"). As it happens, you also recently received a note in the mail thanking you for a recent charitable donation, and you're struck by how much you've donated to charities throughout the year: "What a generous person I am!" Consciously or not, you then rationalize the lie based on being such a good person otherwise. This is an instance of moral licensing: recognition of a previous good trait or deed later makes us more comfortable with doing something unethical or morally questionable. A number of studies have demonstrated this phenomenon both in the lab and in the wild. In one set of experiments, expressing support for Barack Obama led participants to exhibit more favorable attitudes towards white people (Effron et al. 2009). Participants were, for example, slightly more inclined to describe a job advertisement as more fitting for white rather than black people, but only for those participants who were recently able to explicitly endorse Obama vs. McCain (in the Credentials Condition). The effect might seem due to a liberal selection bias or merely priming thoughts about Obama. However, as Table 7.3 shows, moral licensing didn't occur among participants who were only able to endorse either a white Democrat or a white Republican (the Political Expression Condition) or among those who were merely instructed to circle the picture of the younger politician, which was Obama (Priming Condition).

Table 7.3: Mean Responses to Whether a Job was Suited for a Particular Race (data from Effron et al. 2009: 591)

Condition                          Mean Response
Political Expression (control)         -.05
Priming (control)                      -.15
Credentials (endorsement)               .50*

Responses measured on a scale ranging from -3 ("much better [job] for a Black") to +3 ("much better [job] for a White"). *Mean differences between the Credentials condition and the two control conditions are statistically significant at p < .05.


It seems that specifically demonstrating one's endorsement of America's first Black president made people feel licensed to later express a slight preference for Whites. Effron and his collaborators conclude that affirming "moral credentials increase confidence that subsequent ambiguous behavior will appear non-prejudiced" (592). Another set of studies found that people were slightly less altruistic and slightly more willing to cheat after having done their part supporting more organic and environmentally friendly products (Mazar & Zhong 2010). In one of the experiments, for example, more participants lied and cheated after choosing more "green" items to potentially win in a raffle, but only because they were randomly assigned to a virtual store that happened to have more organic and environmentally friendly products. This effect does not appear to be driven by mere priming either, since the researchers found that mere exposure to such "green" products had the opposite effect: the amount of money people were willing to share with another participant in an anonymous economic game (the dictator game) slightly increased. Many other experiments have reported similar findings across a range of moral behaviors, such as lying, discrimination, cooperation, and charitable giving. A recent meta-analysis of over ninety studies across thirty articles suggests that moral licensing is a real, even if limited, phenomenon (Blanken et al. 2015). The effect size is officially "small-to-moderate" (Cohen's d = 0.31), which is only slightly smaller than one typically finds in psychology. Moreover, the meta-analysis did not find a difference in the effect when the traits or behaviors measured were in different domains. Just as exercise can license overeating, good deeds of one type can license more questionable deeds of another type. Note, for example, that the later licensed behavior of cheating doesn't concern the same topic as the previously virtuous action of supporting the environment. Especially significant for our purposes is that moral licensing seems to involve rationalization. One experiment, for example, explicitly revealed how "rationalizability" can moderate the effect (Brown et al. 2011). Participants were randomly assigned to either affirm the good moral character of themselves or an acquaintance—by stating whether they would likely donate some money to charity after a windfall from an inheritance (apparently, most would). The researchers then had participants play a math game on a computer that allowed for the opportunity to secretly cheat to receive more rewards. In one condition, however, it was much easier to rationalize cheating because a "bug" in the program left little time to press a button before the correct answer would pop up on the screen. Thus, unlike those in a condition with ample time, failure to block the spoiler could more easily be seen as a mere oversight or temporary lapse in attention, rather than a willingness to cheat. Naturally, cheating increased when rationalization was easier. In keeping with licensing, moreover, cheating was even higher among those who were previously able to express their own virtuous dispositions, rather than those of an acquaintance. I submit that much, if not all, of moral licensing is due to rationalization. Many studies have shown that the phenomenon is more than a mere priming effect.
The studies mentioned above, for example, found licensing to be specific to the affirmation or recognition of one's character—endorsing Obama (not merely thinking about him), choosing green products (not merely being exposed to them). It's called "moral licensing" for good reason: the later misconduct is regarded by the individual as more acceptable or justifiable, a minor exception to one's ordinarily good character. This is significant since, even though moral licensing isn't laudable, it reveals that we are at least motivated to act in ways we can justify not only to others
but to ourselves (a point also made independently by de Kenessey & Darwall 2014: §3.2.1). The vice at least reveals an underlying regard for reason. However, few studies focus on rationalization as a mediator of moral licensing, and the recent meta-analysis unfortunately didn’t explore it. Another line of research, though, does track rationalization more explicitly in connection with moral motivation. So it’s worth adding more fuel to the evidential fire.
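To get a feel for what the "small-to-moderate" effect size reported by Blanken and colleagues amounts to, here is a brief illustrative sketch in Python. It uses only standard textbook conversions for a standardized mean difference, assuming two normal distributions with equal variance; the conversions are mine, not figures reported in the meta-analysis.

# Illustrative only: interpreting a standardized mean difference (Cohen's d).
# The d = 0.31 figure is the meta-analytic estimate cited above; the
# conversions below assume two normal distributions with equal variance
# and are not results reported by Blanken et al. (2015).
from math import sqrt, erf

def normal_cdf(x):
    # Cumulative distribution function of the standard normal.
    return 0.5 * (1 + erf(x / sqrt(2)))

d = 0.31

# Probability that a randomly chosen "licensed" participant behaves more
# questionably than a randomly chosen control participant.
prob_superiority = normal_cdf(d / sqrt(2))   # about 0.59

# Proportion of the licensed group falling above the control group's mean.
above_control_mean = normal_cdf(d)           # about 0.62

print(round(prob_superiority, 2), round(above_control_mean, 2))

In other words, a d of 0.31 corresponds to a modest but genuine shift: a bit under a 60 percent chance that a licensed individual behaves worse than a control, rather than the 50 percent expected if licensing did nothing.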

7.3.3 Moral Hypocrisy We began this chapter with a range of cases in which moral beliefs seem to play a role in motivating behavior, whether virtuous or vicious. The studies of motivated moral reasoning and moral licensing, which implicate ante hoc rationalization, reveal a way in which motivation is commonly driven, at least in part, by such beliefs. These cases may, then, suggest a motivation to be moral. But some other evidence might suggest this is an illusion. As Batson puts it, "much behavior thought to be directed toward the ultimate goal of promoting some moral principle or ideal may instead be directed toward appearing moral while, if possible, avoiding the cost of actually being moral" (2016: 4). Through a kind of grandstanding, self-interested motives "masquerade" as moral integrity. Batson and his collaborators claim to have uncovered this phenomenon of "moral hypocrisy" in a series of brilliant experiments. In most variations, the researchers ask participants to assign one of two tasks to themselves and another (actually fictitious) person. One task is positive or rewarding (e.g., yielding raffle tickets) while the other is boring or negative. Left to their own devices, about 90% of people tend to allocate the positive task to themselves, and the negative one to the other person. Results are different when participants are offered a randomized procedure, such as flipping a coin, and told that most others think it's fair to give each participant an equal chance at the better task. When a coin is available to flip privately, about half use it, but—and here's the kicker—somehow such participants still overwhelmingly award themselves the positive task (typically 85-90%). Later studies monitored participants' ostensibly private choices, and it turns out that people fiddle the flip by, for example, secretly manipulating it or ignoring the coin to benefit themselves (Batson et al. 2002). Importantly for our purposes, those who flip the coin tend to rate their actions as moral (see Table 7.4, based on Batson, Kobrynowicz, et al. 1997), a result that has been replicated in later experiments (e.g., Batson et al. 1999).

Table 7.4: Task Assignment and Moral Ratings of It (adapted from Batson, Kobrynowicz, et al. 1997, Study 2)

                                 Didn't Flip Coin    Did Flip Coin
Assigned self to positive task         90%                90%
Moral rating of assignment             4.00               7.30

Moral ratings of the task assignment on a 9-point scale from 1 (“not at all [morally right]”) to 9 (“yes, totally [morally right]”).
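A quick back-of-the-envelope calculation clarifies why "somehow" is the right word: if flippers genuinely abided by a fair coin, the rate of self-assignment should hover around 50%, and a rate near 90% would be extraordinarily unlikely. Here is a minimal sketch in Python; the sample size of 20 flippers is hypothetical, chosen only for illustration, not Batson's actual N.

# Illustrative only: how improbable is a ~90% self-assignment rate among
# participants who supposedly followed a fair coin? Exact binomial tail.
from math import comb

def binomial_tail(k, n, p=0.5):
    # P(X >= k) for X ~ Binomial(n, p).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 20                # hypothetical number of coin flippers
k = round(0.9 * n)    # roughly 90% assigning themselves the positive task

print(binomial_tail(k, n))   # about 0.0002

Even with such a small hypothetical sample, the probability is on the order of two in ten thousand, which is why the natural reading of the data is that many flippers fiddled the result rather than followed it.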


One might object that the average morality rating among flippers is especially high simply because about half of them presumably did honestly win the flip and for this reason rated their action as highly moral. However, follow-up studies reveal that people still rate their action as moderately moral as long as they flipped the coin, even if they didn’t follow it (Batson et al. 2002). Cheating in these studies clearly requires some fancy footwork. Participants are secretly fudging their use of a fair allocation procedure in order to ensure they get the chance at some cash. However, the wiggle room doesn’t provide an opportunity to avoid looking egoistic to others. Since participants make their choices privately, there is no real risk of others finding out whether the coin was even flipped. Cheating here requires self-deception. As Batson recognizes, this could happen by merely misremembering how the benefit was to be distributed (e.g., “Heads meant I get the good task, right?”), and there is some evidence that we rationalize dishonesty through such “motivated forgetting” (Shu, Gino, & Bazerman 2011). However, some evidence tells against this particular form of self-deception in the hypocrisy studies, since people still appear to fiddle the flip when the two sides are clearly marked with who gets the positive task (Batson et al. 1999). Fiddling was found to be practically eliminated, however, when participants made the task-assignment in a room with a mirror just leaning up against the wall (thus clearly not a two-way mirror), which other studies suggest can provide a reminder of one’s moral standards. The main method of self-deception, Batson concludes, involves avoiding an explicit comparison of the self-interested action to one’s moral standards. Batson prefers a cynical take on these studies: many people are ultimately motivated by self-interest (egoism). The specific form here is moral hypocrisy: to appear moral while avoiding the costs of actually being moral if possible (e.g., when there’s sufficient wiggle room). As Batson notes, this fits with a growing literature that suggests we’re highly prone to cheat if it yields greater benefits and we think we can fudge the results to our own satisfaction (e.g., Mazar, Amir, & Ariely 2008). While Batson believes we are sometimes ultimately motivated by a concern for others (see Chapter 6) and perhaps moral principles, he believes egoism is more common. Indeed, echoing Kant, Batson at one point explicitly espoused ignorance about whether we regularly act from duty: “I do not think we know whether principlism [or moral integrity] is a distinct form of motivation or only a form of egoism” (2011: 224). Now Batson believes moral integrity exists but is rare, especially in contexts of cheating, fairness, and other interpersonal conflicts (2016: ch. 5). We needn’t be so cynical. While it may seem that we succumb to temptation by the brute pull of passion or selfish desire, rationalization is again a key player. Even when we can just say “To hell with what’s right,” we instead more often say (or think) “I deserve the reward” or “Anyone else would do the same.” There is nearly always enough wiggle room in the application of moral principles that we can bend them toward a more favorable verdict. And we’re quite willing to act in a way that is in fact immoral (or imprudent) if we can somehow portray it to ourselves as acceptable. Much as the road to hell is paved with good intentions, the route to bad behavior is paved with rationalization. 
Indeed, the main theme of this chapter is that much of our behavior, good and bad, is driven by moral judgment or, more broadly, normative or evaluative cognition. As with moral licensing and other forms of motivated moral reasoning, the moral hypocrisy experiments actually provide evidence of a concern to act in ways one can justify to oneself in terms of moral principles or ideals. So we have genuine moral integrity, not moral hypocrisy or pretense (a similar reply to Batson is developed independently in Sie 2015; compare also the discussion in Miller 2013: 89-
99). Of course, it may seem inapt to call motived reasoning a form of “integrity,” since it involves succumbing to temptation and failing to stick to one’s considered judgments. But the terminology doesn’t matter. The point is that such rationalizations are compatible with being morally motivated. Consider the hypocrisy experiments in more detail. It’s significant that participants are sensitive to whether their actions can be morally justified, despite no perceived risk of being caught. In all of the experiments, participants are led to believe that no one will know how they made their choice about who will receive the good task. So why do people commonly fiddle with the flip? Why not just bypass the coin flip altogether? And why does fairness increase when one is more liable to compare one’s behavior to moral standards? A plausible explanation posits a concern to conform to one’s scruples—even if they become unwittingly corrupted. After all, those who fiddled the flip later rated their actions as more morally acceptable, presumably because they were able to justify it to themselves (“It’s fair; I flipped the coin”). This should be expected since the process of rationalization only makes sense in creatures who are ultimately concerned to uphold their principles, even in private. An intrinsic goal of merely appearing moral to others is undoubtedly egoistic, but ultimately aiming to appear moral to oneself involves moral integrity, for one must then represent the act as moral in order to achieve one’s goal. (Again, a similar point has been made independently, but about the moral licensing studies, by de Kenessey & Darwall 2014: §3.2.1.) Compare someone who waffles on moral issues but not due to anything like wishful thinking. Suppose Sasha is a judge regularly tasked with deciding whether defendants deserve the death penalty or merely life imprisonment. She frequently goes back and forth on whether it’s immoral to reduce offenders’ sentences merely because they live in extreme poverty. However, Sasha has to make her legal judgments and, being a woman of conviction, she does it based on her moral beliefs. One morning she becomes convinced that extreme poverty does warrant a reduced sentence and thus the relevant criminals are not sent to death row. Later in the day, however, she becomes convinced that poverty isn’t a legitimate basis for such decisions and acts accordingly. (Similar cases of moral uncertainty are considered, but for different purposes, by Lillehammer 1997 and Carbonell 2013.) All along Sasha is morally motivated. Indeed, despite drastic changes in moral belief, she remains motivated by moral integrity. Even when one’s moral beliefs change due to less noble factors—self-interest rather than the pursuit of truth—moral motivation remains, even if corrupt and worthy of criticism. One might object that, while rationalization is relevant in the hypocrisy studies, it merely reveals that egoism is the ultimate motive. But how could participants’ moral judgments merely satisfy some intrinsic desire for self-interest? Doing what’s right might promote one’s interests if others will witness it and dole out praise or other rewards. But, again, participants in these studies made their choices privately. Perhaps morality, albeit corrupted by rationalization, serves one’s self-interest because acting immorally makes one feel bad. But why would privately being moral yield positive feelings (or avoid negative ones) in the absence of a prior concern for morality or justifiability? 
Feeling bad about violating one's own moral scruples seems to presuppose a concern to be moral. Absent such a concern, it makes little sense to feel bad about doing wrong—that is, to feel guilt. Merely seeking to avoid punishment or censure calls for fear, not guilt. Now, Batson considers guilt, shame, and the like to be egoistic, at least typically. It's allegedly a "form of egoism," for example, if "I toe the moral line not because I intrinsically value the standard but in order to avoid self-censure/guilt and to gain self-esteem/pride" (2016:
51). Batson similarly regards as egoistic the intrinsic goal of “seeing ourselves and being seen by others as a good person” (61; cf. also Mazar et al. 2008). For this reason, he doesn’t even consider guilt and shame to be moral emotions (153). In Chapter 6, we treated the desire to avoid guilt as not a genuinely altruistic motivation (contra Slote 2013), since it doesn’t directly concern another’s well-being. Now that our topic is moral motivation, though, it’s important to see that aversion to guilt isn’t necessarily egoistic. Admittedly, the desire to merely eliminate guilt can be egoistic and morally problematic. A bit of dialogue from the television drama, Orange is the New Black, provides an illustrative example (season 4, episode 10). One of the leading characters, Piper Chapman, is feeling guilty for smoking crack cocaine with Nicky, who had been clean for three years after recovering from a devastating drug addiction. Piper tries to express concern for Nicky who is now back to using heroin: “I’m trying to say I’m worried about you.” But Nicky, irritated, replies, “Are you? Or are you attempting to assuage some of that guilt you’ve been carrying around?” Nicky is clearly charging Piper with lacking the right motivations. Part of what makes Piper’s motivation inappropriate is that in these particular circumstances she’s dealing with a friend in need. Guilt alone doesn’t seem sufficient here since Piper is expressing concern for, not just some stranger, but a friend, which calls for an intrinsic desire for Nicky’s well-being (altruism). But the desire to avoid guilt needn’t sour one’s motivations so long as it’s combined with such appropriate motives. Interestingly, Piper ultimately retorts, “Can it be both?” That is, she attempts to redeem her motives by saying that she is both intrinsically concerned about Nicky’s well-being and about her own guilt for enabling a friend’s drug addiction. It’s not the presence of the desire to avoid guilt that’s problematic but that apparent lack of altruistic concern for a friend. In fact, guilt-induced anxiety indicates that one is ultimately motivated by moral integrity. As Thomas Nagel once wrote regarding altruism: “guilt is precisely the pained recognition that one is acting or has acted contrary to a reason which the claims, rights, or interests of others provide—a reason which must therefore be antecedently acknowledged” (Nagel 1970/1978: 80 n. 1; see also Slote 2013 and de Kenessey & Darwall 2014: 3.3). This makes sense of why we rarely find it problematic when people are honest, helpful, or kind ultimately because they otherwise “couldn’t live with the guilt.” At the very least, feeling guilty for doing something while not being ultimately motivated to do what’s right calls for a special explanation. More to the point, the hypocrisy studies don’t necessarily implicate a desire to avoid guilt anyway, but rather to see oneself as moral. After all, Batson’s participants apparently acted unfairly by justifying it to themselves as moderately moral. As we’ll see in Chapter 9, the desire to see oneself as moral is in some way self-regarding but not in a way that conflicts with virtue. Batson (2017) interprets my rationalization account as a matter of moral integrity being overpowered by self-interest. Coin fiddlers are motivated by an intrinsic desire to be moral but this drive is weaker than the intrinsic desire to benefit oneself. However, the account is more complex. 
While we do have two competing motives, it's not so much that egoism overpowers integrity. Rather, egoism corrupts some other thoughts in order to allow integrity to win, so to speak. The situation is similar to when a clever salesman gets a customer to think she really needs to buy the more expensive car. The customer splurges due to two competing motivations: the salesman's and the customer's. In a sense, the salesman wins because he got what he wanted, but it's not as though he got his way by brute force. Rather, he did it through the customer's goals, perhaps even by corrupting her judgment through spurious reasoning. Similarly, in the
hypocrisy experiments, cheating occurs ultimately because egoistic desire finds a way to make its interests promote the concern to uphold a moral principle. As with the egoism-altruism debate, one can always portray any action as ultimately egoistic. A solely self-interested explanation is theoretically possible, but not necessarily probable. It can seem odd to construe individuals as acting from moral integrity when they're acting immorally. But moral integrity does change as one's moral beliefs change, even if only temporarily. This, again, is precisely what an egoistic account struggles to explain: Why do people's moral beliefs change at all if they aren't ultimately motivated to be moral? Moral integrity via rationalization better accounts for changes in moral judgments than a purely egoistic account in terms of moral hypocrisy. At any rate, we needn't rule out all egoistic alternatives since we need only show that there is a plausible story to tell that features moral integrity. The rationalization hypothesis avoids the two extremes of painting us as either primarily egoistic or moral saints. What's clear, rather, is that many of us are deeply concerned to act in ways we can justify to ourselves and to others. Egoism is surely playing a role for most participants, since they are rationalizing the self-interested choice. The best explanation is likely mixed, involving more than one intrinsic goal, particularly egoism and moral integrity (compare Ariely 2012; de Kenessey & Darwall 2014), perhaps also altruism since some participants might be intrinsically, albeit weakly, motivated to benefit the other person. Positing multiple intrinsic desires, however, admits that we haven't ruled out moral integrity in the hypocrisy experiments. Now, it doesn't seem particularly virtuous to uphold principles corrupted by ante hoc rationalization. Our tendency to rationalize is toxic, not merely inert (to borrow a term from Rust & Schwitzgebel 2014), which seems to taint one's motivations. However, at this stage, we're merely challenging the claim that moral integrity is impossible or rare. It's a separate question whether the prevalence of rationalization threatens the prominence of virtue. In Chapter 9, we'll see that, to a certain extent, it does. Whether the situation is truly tragic, though, depends partly on how pervasive egoistic motives like moral hypocrisy are.

7.4 Motivating Virtue

7.4.1 Moral Integrity & Willpower The many experiments covered in this chapter converge on at least one important point: beliefs about what one ought to do play a pervasive role in human motivation. Unfortunately, these normative beliefs often are corrupted by self-interest and self-deception to rationalize the immoral or imprudent choice. We've had to focus on bad behavior since self-regulatory failures are a common research topic (no doubt given our interest in human flaws and how to fix them). However, the general mechanism is available for virtue as well. The same regard for reason can play a role in temptation's enemy: willpower. While our wills may become weak due to rationalization, they may be strengthened by it as well. Sometimes we simply evaluate the options, make a judgment, and follow it. Just as the weak dieter can succumb to temptation by re-evaluating the donuts as not all that bad, the stronger person can defeat temptation by maintaining the proper normative beliefs—by avoiding corruption of judgment. When one's normative belief is not corrupted, moral integrity can drive behavior in the right direction.


In fact, some of the research we've already discussed suggests that our normative beliefs can motivate good actions. In the moral hypocrisy studies, for example, not everyone cheated when given the wiggle room to justify it to themselves. Some participants assigned the positive task to the other individual, whether by skipping the coin toss altogether or by not fiddling the flip. This provides some room for optimism, even if few participants were so gracious (only 18% in Batson et al. 2002, Study 2). Moreover, some experiments reveal conditions in which moral hypocrisy may be mitigated, such as when people were reminded of their moral standards by seeing themselves in a mirror (Batson et al. 1999).

Batson isn't confident that even these apparently scrupulous people were acting from moral integrity. They could have felt it was too risky to act selfishly—their standard for wiggle room is higher than most (Batson 2016: 116). However, this skepticism rests on the idea that the moral hypocrisy experiments demonstrate hypocrisy, which we've challenged. A higher standard for "wiggle room" just is a higher standard for what's justifiable to oneself, which suggests moral motivation. So there's no reason for doubt about the even clearer cases of moral integrity.

Besides, moral integrity is also evident in other lines of research, which reveal internal conflicts between morality and self-interest. Greene and Paxton (2009) used fMRI to scan the brains of participants under the guise of a study on paranormal abilities to predict the future. Participants were paid for every correct prediction made about the outcomes of random coin flips. Some trials, however, allowed them to cheat by privately noting their prediction and only reporting whether it was accurate. Those who were obviously dishonest in their set of trials (reporting accurate predictions well above chance) took longer to make a choice when they had the opportunity to cheat than when they didn't. Moreover, when these noticeably dishonest individuals didn't lie, they exhibited greater activity in brain areas associated with controlled cognition and resolution of conflicts (e.g., anterior cingulate cortex and dorsolateral prefrontal cortex). These differences were not detected, however, among noticeably honest individuals who didn't appear to cheat. These results suggest that deciding whether to be honest involves overriding an impulse to serve self-interest, at least when one is willing to be dishonest and is thus conflicted. For those who were consistently honest, there was no evidence of greater conflict to resolve, which can also easily be explained by moral integrity. Thus, as Batson himself recognizes (2016: 115), this study provides some neurobiological evidence of moral integrity.

We can also find some evidence of the power of moral integrity in the study of moral identity. Researchers have validated a tool for measuring how important being moral is to one's self-conception by asking people about nine moral traits: caring, compassionate, fair, friendly, generous, hardworking, helpful, honest, and kind. Participants are asked ten simple questions about their relation to these traits, such as "Being someone who has these characteristics is an important part of who I am" and "I am actively involved in activities that communicate to others that I have these characteristics" (Aquino & Reed 2002).
Interestingly, a high score on the moral identity scale predicts a number of morally relevant behaviors, such as donating more to charity (Aquino & Reed 2002, Study 6) and being less susceptible to rationalizing overly punitive responses to a wrongdoer (Aquino et al. 2007, Study 1). Moreover, in one study, researchers found that being explicitly offered rationalizations of prisoner abuse makes people less likely to feel bad about such events. However, the effect is mitigated when one's moral identity is primed by having to write a brief story about oneself that employs the nine moral traits rather than nine non-moral but positive traits, such as being carefree and open-minded (Aquino et al. 2007, Study 2). Not only do we have evidence of the concern to be moral, but also evidence that it can have positive effects on actual behavior by protecting against (bad) rationalization. Of course, we
can’t rule out with absolute certainty that the concern for morality is merely instrumental to an egoistic desire to gain pleasure or avoid pain (Batson 2016: 202). But the evidence we’ve encountered for moral integrity makes such explanations look strained. Experiments do involve some degree of artificiality. Let’s return to the real cases we began this chapter with, in which ordinary individuals did good deeds ostensibly in the name of morality. Now that we’ve considered compelling empirical evidence for the existence of both altruism and moral integrity, there is little reason to construe such individuals as instead motivated by self-interest. From high-stakes political activism to Southern hospitality, we can sometimes act from duty. Batson regards these individuals and their virtuous motivations as rare at best (2016: 116). Often when we’re kind to others we may be merely motivated by compassion, not moral principle. However, given how pervasively unconscious moral motives can drive moral failures, it’s likely that moral integrity is frequently present in moral successes. Imagine, for example, someone who inconspicuously donates some money to the American Red Cross to help disaster relief in Haiti after their devastating earthquake in 2010. Perhaps she ultimately did it to feel good about herself or merely because she felt compassion for the victims. However, given that the science suggests that moral considerations pervasively impact motivation through unconscious thought processes, it’s likely that in many cases we are motivated at least in part by a concern to be moral, even when making a donation to charity on a whim of compassion. The recognition that helping would be morally good likely taps into moral integrity, not merely egoism or altruism. In most situations we’re probably motivated by all of these factors independently (cf. Kennett 1993)—donating will help those people, it’s the right thing to do, and gosh darn it I’ll feel good about it. This is true even if only one of these considerations, or none, bubbles up to consciousness.

7.4.2 Many Virtuous Motives

Virtue seems to require being motivated by the right reasons. But which motives are appropriate? Is it always moral integrity? It depends on how we conceive of it. Let's understand moral integrity broadly to include two key forms of motivation. One is the intrinsic desire to uphold specific moral principles—such as Be fair, Pay it forward, or Maximize happiness—which requires only a desire to do some particular thing which one happens to also believe is right (what we'll identify in the next chapter as a "de re" desire to do what's right). However, moral integrity also includes the intrinsic motivation to do whatever is right as such—which is often identified with the infamous "motive of duty" (later: a "de dicto" desire). To illustrate, suppose I'm deliberating about whether I should be honest with my mother or instead lie to spare her feelings. I'm ultimately motivated to do whatever is right, and I value honesty, but I opt to lie because I conclude that being kind in this situation is the right thing to do. Such an action would be driven by the motive of duty or a desire to do what's right as such.

Ethicists disagree about whether the motive of duty is virtuous. Some follow Kant (1785/2002) and champion it. After all, it's not particularly noble to help someone in need out of self-interest or simply because you happen to feel like it. The virtuous are guided by their values, even if they experience some internal conflict (cf. Kennett 1993). Other ethicists think the motive of duty involves an obsession with abstract morality, a kind of fetishizing that amounts to a vice (e.g., Stocker 1976; Smith 1994; Miller 2013; Arpaly & Schroeder 2014).


Imagine, for example, that Michael visits a friend in the hospital ultimately because he just wants to do the right thing. Michael exhibits moral integrity, but wouldn't it be more appropriate—more virtuous—to just be motivated ultimately by a concern for his friend's well-being (i.e., altruism)? Or consider patients with scrupulosity, a form of obsessive-compulsive disorder that commonly involves an excessive concern with doing the right thing (Summers & Sinnott-Armstrong 2015). Suppose Bridget's anxiety about violating moral rules leads her to have an overwhelming concern to make sure she's not accidentally poisoning her customers' food. That's considerate, but isn't the concern to be moral leading to a problematic motivation for preparing safe food? Shouldn't she just be motivated by a concern for others' well-being rather than obsessing over avoiding immorality? Such examples may suggest that the motive of duty alienates us from the personal attachments that often provide us with moral obligations in the first place.

However, the motive of duty needn't always conflict with virtue. For example, it seems quite appropriate to be ultimately motivated to do whatever is right (as such) when one is reasonably uncertain about what's right (Carbonell 2013). Consider again the white lie to my mother: the motive of duty generates the appropriate motivation based on whatever I decide is the right thing to do in such circumstances. Moreover, the problem in scrupulosity isn't necessarily with the motive of duty but with the attendant anxiety that generates irrational uncertainty about what's right or overconfidence that one is liable to violate one's moral principles. Likewise, when one is helping close friends or family, the motive of duty may only be problematic when it prompts unnecessary reflection or "one thought too many" about whether to help. Yet this motive only sometimes generates such reflections and they aren't always problematic (cf. Aboodi 2015). So we shouldn't reject the motive of duty as always alienating or utterly irrelevant to virtue. An appealing approach is pluralistic: the desire to do what's right as such is sometimes morally appropriate, but not always (Hurka 2014).

Moral integrity isn't sufficient for virtue of course, at least because one's moral belief must also be accurate. Many Nazis might have been motivated by duty to carry out the Final Solution but they sorely lacked virtue nonetheless. Moral integrity may not always be necessary either, since being motivated by the right reasons may not always require that one conceive of what one is doing as morally right—certainly not consciously. Visiting a friend at the hospital, for example, might be virtuous only if motivated by an intrinsic desire to help him feel better (altruism). Still, a broad form of moral integrity provides a plausible necessary condition for many cases: people aren't generally virtuous if their actions are rarely motivated ultimately by at least an unconscious belief or desire with some moral content.

7.5 Conclusion

Throughout this chapter, we've discussed an exhausting number of experiments. But together they provide ample evidence that ordinary temptation often affects our actions by affecting our normative cognition or judgment. The familiar idea of rationalizing poor choices is borne out by the empirical research, which reveals the pervasive impact normative beliefs have on motivation. The irony of rationalization is that it reveals, perhaps when least expected, our regard for reason—for doing what we think is reasonable, rational, or justifiable. This picture of motivation is of a piece with recognizing that normative beliefs were at play in even the most evil deeds of Nazis, who struggled against their natural sympathies for their victims in order to uphold a code they gravely mistook to be moral.


Whatever cases we focus on, they serve our purposes well, since we have only been concerned to demonstrate the motivational power of moral beliefs, not necessarily moral knowledge. Establishing this claim makes room for the principled motivation of virtue as well. Thus, whether normative beliefs lead to virtuous or vicious behavior, they clearly play a prominent role in guiding our actions.

In the moral domain, philosophers would likely follow Kant and describe us as having a "motive of duty." This characterization, however, suggests that acting from duty requires acting from an antecedent desire to do what's right. (Again, we're using "desire" in the broad philosophical sense of a general motivational state whose function is to bring about its object.) If our moral beliefs merely tell us how to get what we ultimately want, then we can't become motivated to do what's right simply because we recognize it's our duty. We must already want to do it or come to want to do it through something other than reasoning. Even when we do what's truly right for the right reasons, acting from duty may be driven ultimately by desire. Hume may be right that reason is a slave to the passions.

Moral integrity, however, needn't take on such a Humean flavor. Throughout this chapter, I have often described moral integrity as a "concern," which might be ambiguous between a motive and a mere disposition. Perhaps one can exhibit moral integrity without pursuing any goal that one conceives of in moral terms, such as duty or what's right. Instead of moral concepts figuring in the content of one's motive or desire, one merely has a disposition to do what one believes is right. As Marcia Baron puts it: "One acts from duty not by virtue of doing what is right because one wants to do what is right; rather, one does what is right because… [one] recognizes that one should" (2002: 101, emphasis added). Similarly, Christine Korsgaard writes:

The person who acts from duty… chooses the action because she conceives it as one that is required of her. […] The point is not that her purpose is "to do her duty"… she chooses the action for its own sake: her purpose is to help. The point is that she chooses helping as her purpose because that is what she is required to do. (1996/2008: 179)

While Korsgaard's conception leaves open what the source of the ultimate goal is, we have construed it as normative beliefs. And the contents of such beliefs can include any significantly normative or evaluative content to the effect that the action is: what one ought to do, what one has most reason to do, what one should do, what is best for one to do, etc. So, as we'll see in the following chapter, the intrinsic desire of the person of moral integrity needn't make reference to one's duty; only the normative belief has such content. Assuming that only goal-directed states (desires) count as motives, the normative belief that does make reference to duty is not a motive, even if it generates a motive (a subsequent desire). Nevertheless, the belief is the source of the motive and plays a role in the (non-pathological) explanation of the action, and so it is motivational. The moral belief can function much like empathy in relation to altruism: it generates the intrinsic desire to help but isn't itself what's ultimately desired. Throughout this chapter, I have tried to remain neutral on whether acting from moral integrity always involves moral beliefs serving or furthering an antecedent desire to do whatever is right. In the next chapter, we'll see that reason can be freed from these passions.


Ch. 8: Freeing Reason from Desire

Word count: 9,212

8.1 Introduction

So far, the science of moral motivation suggests that we are often ultimately motivated by our moral principles—we exhibit moral integrity. A pessimist about reason might argue that we're still mere slaves to our arational desires. Such intrinsic desires may include compassionate concern for others and even respect for morality in general, but they needn't have their source in reason. As Hume famously said: "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them" (1739-40/2000, 2.3.3.4).

Suppose, for example, that civil rights leaders were motivated, not by fame or merely compassion for others, but by their belief that they were morally required to fight against injustice. Still, this moral belief may have motivated such heroes only because they happened to care deeply about doing whatever they believe is right. That is, they happened to have an antecedent desire to be moral whose source is not reason. Even if moral beliefs often have an effect on what we do, perhaps all motivation has its source in desire, and thus such beliefs must serve or further an antecedent desire. Moral beliefs, and thus moral reasoning, only motivate those already possessed of good character.

According to such Humeanism, all normal, non-pathological motivation is ultimately driven by desires. So a normative belief can never produce an action (in a non-pathological way) without serving or furthering an antecedent desire. A canonical reading of Bernard Williams (1979/1981) has him assuming such a view, since he denies that a new motive can be generated in an agent when "there is no motivation for the agent to deliberate from" (109). Reasoning, the presumed source of beliefs, can only tell us how to get what we want. Have a yen for sushi? Reasoning can direct you to the best place in town, like a smartphone's virtual assistant. But what about when moral beliefs play a role in motivation? On the Humean view, this only makes sense if that belief is merely informing you about how to satisfy an antecedent desire, such as a desire to do whatever it is that you believe is right.

Unlike psychological egoism, the Humean picture is arguably the dominant theory of motivation among philosophers. Of course, the view is associated with Hume, but some now resist attributing such a theory to him. A reader may prefer to mentally substitute another label, even if more cumbersome, such as "instrumentalism" (Schroeder et al. 2010) or "antecedent motivation theory" (Mele 2003). At any rate, there's no shortage of theorists accepting such an account of human motivation. Some defend it explicitly (e.g., Lenman 1996; Mele 2003; Sinhababu 2017), while it's more of an assumption among some philosophers (e.g., Williams 1979/1981; Railton 1986) and some working in other disciplines.


On a common picture in economics, for example, reasoning to the "rational" choice always just involves getting what you happen to most want.

Humeanism has plenty of detractors, of course, from Kantians to consequentialists. The opposing theory, anti-Humeanism (often labeled "rationalism"), maintains that normative beliefs can causally produce a normal, non-pathological action without merely serving or furthering an antecedent desire. On this picture, we can be motivated to act ultimately by recognizing that it's the right thing to do. Reason is not a slave to such passions.

It's difficult to discern from the armchair alone which account of motivation is correct. There are certainly non-empirical considerations one could adduce. Some philosophers are Humeans, not because they doubt that moral beliefs can ever possibly generate intrinsic desires, but because they think such causal and explanatory chains, if they occurred, wouldn't even be intelligible or can't ground genuinely rational action (e.g., Lenman 1996). As I and many others have argued elsewhere, such a priori arguments for Humeanism aren't compelling (see, e.g., Darwall 1983; Korsgaard 1986/1996; Wallace 1990; Smith 1994; Parfit 1997; Shafer-Landau 2003; Barry 2010; May 2013a). But Humeans often defend their picture on the grounds that it is more naturalistic, psychologically realistic, or fitting with a scientific account of human motivation. Simon Blackburn, for example, believes it's a "fantasy" that we can deliberate:

…from a standpoint independent of any desire or concern: independent of a desire for our own good, or for the happiness of humanity, or respect for this or that, or the myriad other passions that make up our individual profiles of concern and care. (1998: 252)

More recently, Alfred Mele (2003) argues that the Humean view is "consonant with a familiar empirical approach to the explanation of motivated behavior that has proved fruitful" (99) while anti-Humeanism is "mysterious" (100).

This chapter will focus on the more empirical aspects of the debate. We'll first see that anti-Humeanism can provide perfectly sensible explanations of each other's actions that don't appeal to antecedent desires. We'll examine empirical arguments for positing such desires, including appeals to research on neurological disorders, normal motivational mechanisms, characteristic aspects of desire, and parsimony. However, as these arguments fail to properly understand the opposition, they fail to provide empirical reason to always posit antecedent desires to which our moral beliefs are slaves.

8.2 Anti-Humean Moral Integrity

In the previous chapter, we encountered a wealth of empirical evidence suggesting that normative beliefs play a pervasive role in many of our everyday actions. Such beliefs not only motivate good behavior, they can lead to rationalizing bad behavior, such as lying to get more raffle tickets or indulging in unhealthy snacks. While the extent of ante hoc rationalization may be surprising, explanations of actions in terms of normative beliefs are quite ubiquitous in ordinary discourse, and we don't need to go beyond them to explain the scientific evidence.

In addition to the examples of Martin Luther King Jr. and Edward Snowden (from Chapter 7), consider some lesser-known and more ordinary cases. While Governor of Florida in 2010, Charlie Crist proposed to have Jim Morrison pardoned for some allegedly lewd behavior in 1969 at a concert in Miami. Regarding the proposal, Crist said, "I've decided to do it, for the pure and simple reason that I just think it's the right thing to do" (Itzkoff 2010).


Similarly, the son of the first Australian to successfully sue a tobacco company for her lung cancer said of his late mother, Rolah McCabe: "She did it because she believed it was the right thing to do" (Rintoul 2011). We often describe one another, and ourselves, this way—as ultimately motivated by beliefs with normative or evaluative content.

Humeans, of course, can admit that normative beliefs play a role in motivation, but such beliefs can't be the ultimate source of motivation. Some antecedent desire must be responsible for the change, such as the desire to do what's right. The Humean strategy is always to trace an action to an ultimate or intrinsic desire that's not generated by any processes of reasoning, inference, or non-pathological belief formation. So, on the Humean view, moral integrity must always involve an intrinsic desire, such as the desire to do what's right.

However, we have to be clear about how we read the phrase "desire to do what's right" (see Smith 1994)—it could be read either "de dicto" (about the statement) or "de re" (about the object of the statement). On a de dicto reading of "Crist wants to do what's right," rightness as such is part of what's desired (Crist wants to: do whatever is right). On the de re reading, the individual simply wants to perform the action (Crist desires to: pardon) and separately believes that the action is right. Here the rightness is only part of what's believed, not the intrinsic desire. In other words, the different readings affect what features in the content of the desire one is attributing to the person in question. The following provides a more schematic representation of the kinds of explanations of an action that correspond to each reading (parentheses indicate what is in the content of the mental state and the arrow represents causation):

De re: Believe (A-ing is right) → Intrinsic desire (to A) → A

De dicto: Intrinsic desire (to do what's right) + Believe (A-ing is right) → Desire (to A) → A

Anti-Humeans can allow the kind of explanation that employs only a de re reading of the attribution of a "desire to do what's right." Humeans, however, deny that the de re model ever accurately explains normal, non-pathological motivation.

Now, with this distinction in mind, how do Humeans account for moral integrity? They might dispense with the de dicto desire to do whatever is right and instead say that people who do what's right merely have a desire to perform that specific action. Crist, for example, just desired to pardon Morrison and happened to separately believe it was the right thing to do. However, the evidence from the previous chapter suggests that our moral beliefs often play a causal role in what we do, even when we act from moral integrity. In such cases, Humeans must posit an antecedent desire that this moral belief serves, furthers, or promotes. Presumably that would be the desire to do whatever is right (de dicto). Some Humeans, such as Nick Zangwill, rather explicitly embrace this: "The motivating desire is the desire to do the morally preferable thing—or perhaps to do the right thing" (2003: 144). A Humean could perhaps posit a different antecedent desire (cf. Finlay 2007), but they must come up with some antecedent desire or other.

Anti-Humeans, on the other hand, can attribute merely the intrinsic desire to perform the action but hold that it is brought about by the individual's moral belief. On this approach, we can take at face value the ordinary intelligible explanation of actions like Crist's, which has the moral belief generating an intrinsic desire to do the thing (read de re). We needn't redescribe or read into the case to conform with a theory that requires an antecedent desire. Indeed, anti-Humeans can avail themselves of either model in order to capture moral integrity.


Either one acts ultimately from an intrinsic desire to do what's right (de dicto) or one acts from the intrinsic desire to perform some specific action (de re) which is produced by one's belief that it's right.

Humeans would no doubt insist that anti-Humean explanations of action are either erroneous or elliptical. Saying "She did it because she believed it was right" is an abbreviated way of attributing an antecedent desire to do what's right (de dicto). Let's now consider whether there is any empirical reason to prefer this form of explanation in every case.

8.3 Neurological Disorders

Neuroscience is a natural place to look for evidence of Humeanism. Some neuroscientists believe that our decisions are based on unconscious urges that arise in the brain prior to a conscious decision or intention to act (e.g., Libet 1985; Haggard 2005). Does the science show that we're merely slaves to these passions? Or can we control them, not only through some conscious mental state or other, but specifically through our judgments about what we ought to do? There are two reasons we might favor a Humean answer, each of which points to certain neurological disorders. First, patients with so-called "acquired sociopathy" seem to be counterexamples to the apparently anti-Humean claim that moral judgment issues in corresponding motivation. Second, examining brain function in various neurological disorders might seem to reveal the necessity of desire in non-pathological motivation. Addressing these challenges both defends anti-Humeanism and fleshes it out within naturalistic parameters.

8.3.1 Acquired Sociopathy

We encountered so-called "acquired sociopathy" in Chapter 2 when considering patients with lesions of ventromedial prefrontal cortex (VMPFC). Recall that these patients, like Phineas Gage, don't appear to have impairments in paradigm "cognitive" functions, such as memory, language, and learning. Nonetheless, they often make imprudent and socially unacceptable choices because they apparently have diminished affect or gut feelings that unconsciously guide ordinary decision-making. Such patients can often reason and make a judgment about how one ought to act in a hypothetical situation, but they struggle when making a decision about what they themselves ought to do in some particular circumstances.

Again, Damasio (1994/2005), the leading researcher studying these lesions, argues that the patients suffer from diminished "somatic markers." One of his patients, Elliot, had an "especially pure version of the condition" being "a modern Gage" (34). Elliot suffered VMPFC damage from a brain tumor that left him with "a normal intellect" but "unable to decide properly, especially when the decision involved personal or social matters" (43). Damasio describes his mental profile as "to know but not to feel" (45), because a battery of tests showed that Elliot's lesion "did not destroy the records of social knowledge" but this "contrasted sharply with the defective decision-making he exhibited in real life" (49).

Given this psychological profile, patients with acquired sociopathy might seem to be a problem for certain views about moral motivation, as Adina Roskies (2003) has argued. The strictest form of motivational internalism, for example, maintains that moral judgments necessarily yield some motivation to act in accordance with them.


If patients with lesions of VMPFC can make genuine moral judgments while lacking any motivation to act in accordance with them, then they are living counterexamples to such an internalist thesis, which is often associated with anti-Humean conceptions of motivation.

There are two claims one must make about these patients in order to raise problems for this extreme form of internalism. First, those with acquired sociopathy have genuine moral concepts and can make genuine moral judgments using them. Second, these patients sometimes lack any motivation whatsoever to act in accordance with these judgments. The key evidence for the second claim is that they fail to produce normal skin-conductance responses to emotionally charged stimuli, and clinical observation suggests their relevant behavior is abnormal or in conflict with their better judgments. This all suggests that the problem is a deficit in moral motivation—or, more broadly, normative motivation, which involves considerations about what one has reason to do, whether this is a matter of morality, prudence, or some other normative domain. Yet, turning to the first claim, these patients seem to possess normal moral and other normative beliefs, as opposed to, say, psychopaths who arguably use moral terms in impaired ways (see Chapter 2, §2.4.2). Roskies in particular points to studies suggesting that people with VMPFC damage have relatively normal responses to hypothetical moral dilemmas.

There has been significant debate about whether the empirical evidence warrants such conclusions. Recall, in particular, that patients with acquired sociopathy may struggle with decision-making not merely for lack of motivation but because they struggle to form an all-things-considered judgment about what they themselves ought to do in this situation (Kennett & Fine 2008). Such patients may struggle to form the relevant normative judgment in the first place. However, the question that concerns us is whether Roskies's account of acquired sociopathy, even if correct, poses a threat to anti-Humeanism.

We should first clarify the nature of the debate between motivational internalists and externalists. Many internalists deny the strict form of the view that Roskies attacks. For example, Christine Korsgaard, in attempting to make room for "true irrationality," maintains that we can be irrational "not merely by failing to observe rational connections—say, failing to see that the sufficient means are at hand—but also by being 'willfully' blind to them, or even by being indifferent to them when they are pointed out" (1986/1996: 320). This can occur in all of us through ordinary processes of weakness, including rationalization and self-deception. Considering the same sorts of cases, other anti-Humeans such as Michael Smith (1994) similarly conclude that only a weaker form of internalism is tenable, which maintains that one will act in accordance with one's normative judgments provided one is acting rationally. Roskies recognizes this but argues that such a view isn't substantive enough to address. It seems "trivially true" assuming that "as is often held, to be practically rational is merely to desire to act in accordance with what one judges right or best" (53). But it's unclear why this renders such a view irrelevant here. Perhaps Roskies thinks the weaker form of internalism is not very substantive because it doesn't make a claim about the causal or explanatory efficacy of various mental states.
After all, as Smith makes clear, his internalism is only meant to be an a priori conceptual claim about what counts as a moral judgment—namely, a person's judgment is moral only if, while being rational, she is somewhat motivated to act in accordance with it. But the strong form of internalism that Roskies seeks to undermine is likewise devoid of causal claims. Strict internalism says that a judgment is moral only if one has the corresponding motivation, even while one is being irrational. Roskies considers this brand of internalism sufficiently substantive, even though it doesn't claim that the moral judgment caused the motivation. There is a causal thesis that is at least closely related to, though distinct from, internalism—namely, anti-Humeanism.


This makes the causal-explanatory claim that moral (or broadly normative) judgments can at least sometimes causally produce intrinsic or ultimate desires. Roskies appears to assume internalism is a causal thesis when she characterizes classical versions as holding that the "motivation to act in accordance with one's beliefs or judgments… must stem from the moral character of a belief or judgment itself" (2003, 52, emphasis added; cf. also Smith 1994: 179). But this blurs the distinction between internalism and anti-Humeanism.

The question now is whether anti-Humeanism, closely linked as it is to internalism, is threatened by what we know about acquired sociopathy. Several theorists appear to think that internalism is central to anti-Humean claims about motivation. For example, Schroeder, Roskies, and Nichols (2010) discuss a position they call "cognitivism" which is a form of anti-Humeanism as we've construed it. They claim that a population "apparently capable of making moral judgments but not at all motivated by them" will "present an obvious challenge" to the view (95). Yet acquired sociopathy, even as Roskies describes it, poses no problem for anti-Humeans, whether or not they're internalists.

It's important not to conflate theories about the causal powers of moral judgments with theories about what could possibly count as a moral judgment. The two issues cut across one another. Anti-Humeans who embrace the kind of weak internalism espoused by Korsgaard and Smith maintain that one is either motivated by the relevant judgment or else being irrational. The existence of patients with acquired sociopathy simply forces such theorists to maintain the truth of the second option: these patients have as part of their deficit an increased tendency to act irrationally. And this should not be tendentious if, as Roskies claims, this view of rationality is "trivially true." There is another option as well. If anti-Humeans only concede that the connection between normative beliefs and motivation is contingent and not universal, they can simply rest content with no diagnosis at all (compare Shafer-Landau 2003: 159-60). While we all sometimes fail to be motivated by our normative judgments, VMPFC patients simply do so more often in certain circumstances. Compare: striking matches generally causes them to light even if this fails when a match is wet. While some story eventually needs to be told about why a wet match doesn't light, this does not threaten the initial causal claim. Similarly, anti-Humeans only hold that normative beliefs can sometimes motivate a person to act in accordance with them without serving an antecedent desire. They needn't go further, as Smith likely would, and commit themselves to the claim that it is their practical irrationality that is the problem; it could, for instance, be a non-rational character flaw (cf. McDowell 1978/1998). Our concern is whether "reason" can motivate, not precisely how.

Acquired sociopathy, then, poses no threat to anti-Humeanism, even if we accept the characterization offered by Roskies, Schroeder, and Nichols (2010). In fact, the research provides some support for anti-Humeanism. After all, as these authors point out, VMPFC damage seems to "sever the link between the cognitive judgments and motivation, leaving intact the judgment and its content, but not causing motivation that might normally result" (98). Researchers studying the phenomenon have documented the negative impact such brain damage can have on behavior.
While many philosophers seem to take this as evidence that normative judgments play a small or non-existent role in motivation, compared to gut feelings, it seems to only highlight once again the pervasive impact of normative cognition on normal, non-pathological motivation. Of course, Humeans would insist that these normative beliefs are ultimately in the service of antecedent desires, such as the desire to do whatever is right (read de dicto). However, this is just a restatement of Humeanism that has not been foisted upon us by the study of acquired sociopathy.

Regard for Reason | J. May

We do not have positive evidence that what's lacking in VMPFC damage is such an intrinsic desire. So the anti-Humean theory is perfectly compatible with acquired sociopathy, even if we conceive of such patients as suffering in part from a failure to be motivated by their judgments about what they ought to do in particular circumstances.

8.3.2 Parkinson's & Tourette's

Perhaps the Humean theory can be supported empirically by an appreciation of other neurological disorders. Schroeder, Roskies, and Nichols (2010) discuss Parkinson's disease, which involves impaired communication between the reward system and the basal ganglia. On their account of the brain structures that bring about certain psychological phenomena, intrinsic desires are realized by the reward system, which facilitates reward-based learning, and action-selection is carried out by the basal ganglia (81-2). On this model, Parkinson's is a disorder in which intrinsic desires "slowly lose their capacity to causally influence motivation" (93), regardless of the person's beliefs about what they should do. So the proposal appears to be that the difficulty in controlling movement so common in those with Parkinson's can be interpreted as the inability to translate into action intrinsic desires to, say, hold one's coffee mug steady. According to Schroeder and colleagues, this shows that "desires are necessary to the production of motivation in normal human beings," which "would seem to put serious pressure on the [anti-Humean] position" (93).

A similar issue arises with Tourette syndrome. Looking at the neurophysiology, they argue that the brain of a person with this disorder should be the paradigm of purely cognitive motivation that bypasses desires. Such patients overwhelmingly report that their tics are often voluntary: they intentionally curse, for example, because the urges are too difficult to resist. Yet the tics characteristic of the syndrome appear to involve a failure of the basal ganglia to inhibit certain motor commands from "higher cognitive centers," bypassing the reward center, which we're supposing realizes intrinsic desires. For example, just thinking about uttering an expletive in public might lead one to have a Tourettic urge to do so. This appears to be another source of evidence that normal action is dependent on desires while only pathological behavior can deviate from this by relying only on more cognitive structures. But, as one can readily see, it would be quite problematic to assimilate normal moral action and its motivation to such pathological behavior (2010: 94). As Schroeder (2004) has put the point in the past, "this would put moral motivation on a par with Tourettic urges" (160).

These objections presume anti-Humeans hold that beliefs motivate independently of any desires whatsoever. Such a purely cognitive theory is committed to the existence of so-called "besires"—mental states with the characteristic functional role of both beliefs and desires. Some anti-Humeans have held this view (e.g., Dancy 1993). Yet sophisticated anti-Humeans have long held that beliefs can only motivate by producing subsequent desires. Thomas Nagel (1970/1978), for example, maintains that "considerations about my future welfare or about the interests of others cannot motivate me to act without a desire being present at the time of action" (29). In this way, "all motivation implies the presence of a desire" (32). Nagel is not always read as acknowledging that intentionally performing an action requires a preceding desire to perform that action, but it is certainly easy to do so (cf. Darwall 1983: ch. 3; Wallace 1990; Dancy 1993: ch. 1).

Such sophisticated forms of anti-Humeanism can happily admit what some misleadingly call a "Humean" belief-desire psychology (cf. Smith 1994).


In particular, they can agree that intentionally performing an action, A, always requires a desire to A, in the broad sense of "desire" as a state whose function is to bring about its content. Moreover, anti-Humeans should agree that beliefs alone are not essentially motivational states because they lack the relevant "direction of fit" (see esp. Darwall 1983). So they needn't believe in special hybrid mental states like "besires." Opponents of Humeanism must maintain only that sometimes a (normal, non-pathological) intrinsic desire can be generated by a normative belief. Even if moral beliefs don't themselves constitute or encompass motivation, they can produce such motivation-encompassing states (Mele 2003: ch. 1). Counting moral beliefs only as motivation-producing states is sufficient for the anti-Humean claim that one can do something ultimately because one believes it's the right thing to do (see Figure 8.1).

Figure 8.1: Accounts of Moral Motivation

Arrow: causation; D: intrinsic desire; B: moral belief; A: action. (Figure inspired by Dancy 1993: 9.)

So anti-Humeans can and should explain moral motivation by appeal to desires that are subsequent to the relevant moral beliefs. At one point, Schroeder, Roskies, and Nichols (2010) seem to acknowledge this, as they characterize the anti-Humean theory as only holding that sometimes "beliefs lead to motivation… quite independently of any antecedent desires" (76, emphasis added). But such a view is perfectly compatible with neurological disorders revealing that intrinsic desires are "necessary to the production of motivation in normal human beings." On the sophisticated form of anti-Humeanism, sometimes a cognitive state is just the ultimate source of intrinsic desire.

One might worry that this fails to yield a genuinely anti-Humean position on which reason can motivate. After all, reason can only allegedly motivate on this picture by producing some desire. But I am in agreement with Derek Parfit (1997: 105-6) that this is no better than maintaining that a bomb can't be destructive because it can be so only by producing some explosion. Moreover, if reason can produce a desire that furthers its dictates without serving some antecedent desire, then we can resist the Humean idea that reason's role in motivation is only to tell us how to satisfy our desires. Anti-Humeanism yields the conclusion that in an important sense reason is not a slave to unreasoned desire. It opens up the psychological possibility that one can be motivated to do something by coming to believe it's right, regardless of whether the act promotes something one already wants.


For example, I needn't have an antecedent desire to do whatever I think is right in order to become motivated to keep a promise, once I recognize that I should keep it. Of course, many of us may well want to do whatever is right (read de dicto), but anti-Humeanism allows that one's moral belief can motivate without the aid of such a desire. Compare: While most of us have standing desires to acquire more money, the more virtuous among us may turn a fugitive in merely to uphold justice, not to gain the handsome reward.

8.4 Special Mechanisms

Aside from specific neurological disorders, perhaps more general empirical considerations could establish the need to always posit antecedent desires. Alfred Mele (2003) argues for the Humean theory partly on the grounds that it's "not at all mysterious how a desire to A would derive some of its force from a relevant antecedent desire" (94). Anti-Humeans, on the other hand, maintain that certain beliefs can motivate by producing a corresponding desire but without serving an antecedent desire. Mele objects that the "uncaused coming into being of attitudes of this kind would be mysterious" (100), failing to be "consonant with a familiar empirical approach to the explanation of motivated behavior that has proved fruitful" (99). Similarly, Bernard Williams famously criticizes anti-Humeanism as committed to the existence of beliefs about reasons that generate new motivations "in a special way" (1979/1981: 108). It is a common refrain among Humeans that the opposition is somehow at odds with a scientifically respectable account of human action (cf. Railton 1986: 206; Blackburn 1998: 252). Can we reconcile the motivational power of moral beliefs with a naturalistic approach to human motivation?

The rejection of Humeanism might seem to require positing special mechanisms in human motivation. But the best anti-Humean theory merely posits a disposition for normative beliefs to generate the corresponding desires. While a strong-willed person, for example, may lack the antecedent desire to throw her pack of cigarettes away, she may simply have a disposition to do so if she believes it's best to trash them.

Humeans may be tempted to count such dispositions as desires and declare victory. However, while we're working with a rather broad conception of desire, we shouldn't broaden it to any mere disposition of a person to do something. A desire is a goal-directed, conative, motivation-encompassing state with some content, so it's a state whose function is to bring about what's desired. But desire is not the sole proprietor of dispositions relevant to action. To adapt an example from Darwall (1983: 40), even if I'm disposed to eat a piece of pie upon seeing one, this disposition alone need not constitute a desire for some pie. A mere disposition can graduate to a full-fledged desire only if it has the requisite function and content (cf. Dreier 1997: 94). Of course, Humeans would explain the eating of some pie in terms of an antecedent desire to eat delicious food when it's available. However, the pie example is not meant to refute Humeanism but rather to show that being disposed toward some action (e.g., eating pie) doesn't entail antecedently desiring it in particular.

Moreover, Humeans should want to avoid counting mere dispositions as desires. Otherwise they would be unable to provide their characteristic explanation of a person's moral belief as promoting an antecedent desire. Suppose, for example, that Simone believes she ought to hold her tongue and refrain from insulting her sexist coworker. Furthermore, she doesn't have, or isn't in this case motivated by, an antecedent goal to do whatever is right. She merely has a disposition to desire to do something after coming to believe it's right. Humeans could call this disposition a "desire" but then the explanation is not a Humean one, since a mere disposition
lacks the specification of anything like a goal that can then be served or furthered by the subsequent desire. Such explanations are part and parcel of the Humean theory. Mele anticipates the appeal to mere dispositions or capacities. But his characterization of it is uncharitable: “Perhaps it will be said that rationality, or practical rationality, is partly constituted by an indefeasible disposition to desire to A upon coming to believe that one has a reason for A-ing” (100). The disposition needn’t be indefeasible, given that people can sometimes be rational and sometimes not. Indeed, all theorists should agree that people can fail to do what they believe they have reason to do. It’s only virtuous people who will, among other things, transition from the normative belief to the new desire to act as it dictates. Anti-Humeans need only hold that this can occur without the subsequent desire serving an antecedent one. While one might provide arguments against this conception of what a virtuous person’s psychology is sometimes like, it’s not mysterious or puzzling on its face. The process involves ordinary causation: an intrinsic desire is causally generated by a normative belief and the relevant disposition. Anti-Humeanism would be mysterious if it held that moral beliefs generate desires randomly. Then we could rightly question why this would ever happen at all, regardless of how it’s implemented in the brain. But it’s perfectly unsurprising that an individual would have this disposition insofar as she is being rational, strong-willed, or whatever is most appropriate here (compare Davis 2005: 256-8). The demand for an alternative explanation simply betrays a bias in favor of Humean explanations. The point is amplified when we consider the fact that whatever anti-Humeans say here, Humeans must say something quite similar. The only difference is that, instead of a disposition, Humeans posit a full-blown desire. While the motivation attached to such a desire is perfectly explicable (since desires are by hypothesis motivational states), the fact that it appears in the individuals it does would be mysterious unless the Humean holds, as Mele seems to, that it’s partly constitutive of their rationality or good character to possess such antecedent desires. But this isn’t importantly different from the anti-Humean claim that it’s partly constitutive of being a rational, virtuous, or strong-willed person that one possesses the disposition to desire in accordance with one’s normative beliefs. So the relevant version of Mele’s question applies to the Humean theory as well: How does a person’s being rational contribute causally to her coming to desire to do something? Humeans must likewise provide some answer, but the inability to provide certain details, such as how this is implemented at some lower level, is no count against them. Humeans, then, must treat certain causal processes as privileged, with no further explanation needed. Consider what Mele takes to be the familiar empirical approach: “(setting aside wholly intrinsic desires for action) motivation for specific courses of action is produced by combinations of antecedent motivation and beliefs” (99). It’s instructive to focus on Mele’s parenthetical remark, which shows that Humeans must treat intrinsic desires as special, for they can be generated without the help of antecedent desires—otherwise a regress ensues (see May 2013a). 
Yet we should readily see that this makes anti-Humeanism equally compatible with the familiar empirical approach to motivation provided it is described at the relevant level: setting aside certain cases, motivation for specific courses of action is produced by combinations of antecedent motivation and beliefs. Put more intuitively, the familiar empirical approach is simply that normal, non-pathological actions are generated by intrinsic desires combined with beliefs about how to satisfy them. This is silent on how intrinsic desires are generated, including whether they can be generated by normative or evaluative beliefs.


Normative beliefs are indeed special in a certain sense since not all beliefs can generate intrinsic desires in a non-pathological way. Beliefs about what one ought to do are special precisely because they are normative or evaluative—they represent something as good, valuable, best, ideal, right, what one has most reason to do, and so on (recall Chapter 7, §7.2.1). But there is no obstacle to situating this anti-Humean theory of motivation within the standard architecture of the brain.

Consider the brain's reward system, which is intimately tied to motivation and pleasure. Suppose I intrinsically desire to play music. My representational capacities treat that scenario (playing music) as "rewarding" by generating well-studied "reward signals" that aid in learning, primarily through the release of dopamine. As Schroeder puts it, reward signals in the brain generally "make people happy, motivate them directly in a manner mediated by conscious deliberation, inculcate behavioral tendencies unconsciously, create intrinsic desires" and so on (2004: 54). Very roughly, playing music tends to please me; to motivate me to play; to lead me to learn the means to playing; to facilitate relevant habits; and so on.

Schroeder prefers a Humean approach to how desire arises in the reward system, but anti-Humeans can also use it to illuminate how normative beliefs can motivate in a non-pathological way. Normative beliefs are rather fit for representing states of affairs as rewards in the sense relevant to reward-based learning. For example, suppose I represent helping a friend in need as good or right. By representing helping in this positive light, I am disposed to be pleased by helping, to be motivated to help, to identify ways of helping, and even to generate the relevant intrinsic desire to help. Of course, there is perhaps little difference between intrinsically desiring to help and simply believing that helping is good, since both are normally disposed to generate reward signals, and thus motivation. However, even Schroeder recognizes (132) that beliefs about what's right should be distinguished from desires—a subtle but important difference when it comes to the role of reason in moral psychology.

In sum, the anti-Humean account is not inexplicable or deeply mysterious on its face. We can, in particular, assuage such worries by appealing to the relevant dispositions that can be realized by the brain's reward system. Moral beliefs are special in one respect—they have normative content—but no special mechanism is necessary. Indeed, the normative content of such beliefs can exploit the same brain mechanisms involved in generating intrinsic desires. There is no reason to reject this approach as positing special mechanisms that are incompatible with a scientifically sound approach to the explanation of human action.

8.5 Aspects of Desire

Another empirical route to Humeanism appeals to special properties of desires that beliefs, even moral ones, may seem to lack. Suppose I have a craving for some coffee ice cream. This desire will motivate me to head over to the local creamery; I'll feel pleasure on the way as I imagine devouring it; and this pleasure and motivation will increase as my attention is directed toward more vivid representations of the ice cream (e.g., a billboard image) and things I associate with it (e.g., images of the creamery, other flavors of ice cream, coffee beans). Thus, as Sinhababu (2009) points out (cf. also Schroeder 2004):

• desires are motivational;
• we experience pleasure at the thought of satisfying desires (and displeasure at the thought of not satisfying them);
• desires direct our attention towards their objects and what's associated with them;
• desiring that something isn't the case is a particular flavor of desire, aversion, which has different characteristic pleasures or displeasures (relief and anxiety, rather than delight and disappointment);
• desires are strengthened by more vivid representations of their objects (or of what's associated with them).

These properties of desire don't just explain simplistic examples. Sinhababu (2017) uses these properties to explain a wide range of actions and mental states in Humean terms—e.g., intention, deliberation, willpower, and weakness of will—thus revealing the theory's explanatory power.

Consider one of our key cases, in which a moral belief plays a role in motivating action, such as Crist's proposal to pardon Morrison. On the anti-Humean account on offer here, Crist's action needn't be motivated ultimately by an intrinsic desire to do whatever is right (read de dicto). He could be motivated simply by the recognition that it's the right thing to do, which combines with a virtuous disposition to generate an intrinsic desire to propose the pardon. Humeans like Sinhababu would urge that we posit an antecedent desire here because what's allegedly a mere disposition has all the characteristic features of a desire. Crist's virtuous disposition is certainly motivational: combined with the moral belief, it generates a desire. He will presumably experience pleasure upon doing what he regards as right (e.g., pardoning Morrison), and his attention will presumably be directed toward actions he believes to be right. Finally, vivid representations of doing things that are right will increase the motivational strength of his disposition (e.g., viewing iconic photographs of civil rights leaders). So why not graduate the relevant disposition to a full-fledged desire?

The problem is that some mental states other than desire, such as beliefs, share these aspects or properties as well. That is, these aspects, while perhaps characteristic of desires, are not unique to them. Let's consider four of the five key aspects in turn.

Motivation. As we've already seen, there are at least two senses in which a mental state can be motivational. A state can itself be, constitute, or encompass motivation or it can merely produce a motivation-encompassing state (Mele 2003). Various mental states other than desire are motivational in the second sense, such as perceptual states. Suppose, for example, that you simply fell into the habit of flossing before brushing your teeth, rather than vice versa. Habits are often directly motivated by perception of certain cues. Perceiving that it's nighttime and that you're in front of the sink produces motivation in you to carry out the teeth-cleaning routine, which involves grabbing the floss first. Indeed, as we saw, it behooves Humeans to allow that there are some mere dispositions to produce action that don't amount to desires. And normative beliefs are well suited to be motivation-producing, since they represent some state of affairs in a positive normative light and they can combine with the virtuous disposition to be motivated to do what one regards as right.

Hedonic effects. While we typically gain pleasure upon getting what we want or imagining as much, these effects occur in the absence of a relevant desire as well. Consider getting pleasure from something unexpected and undesired, such as the notoriously smelly but apparently tasty fruit durian.
Other examples might include an unexpectedly pleasant hug from one's enemy or a high from a long run. Humeans would probably want to explain these cases by positing more general antecedent desires that these events satisfy—a desire to eat tasty things, to be loved, or to feel euphoric. But suppose we focus on examples of newborns who experience pleasure or displeasure upon encountering many new objects and events for the first time. Or consider the famous patient Clive Wearing, who perpetually lives in the present due to profound anterograde and retrograde amnesia. Surely, Humeans don't want to posit innate desires for pleasurable experiences that all unexpected future pleasures serve. Otherwise, much like psychological egoism, the theory can't allow that someone gains an unexpected pleasure that isn't somehow due to an antecedent desire. Instead, we should all recognize that some mental states other than desire produce pleasure, including normative beliefs. If I believe, for example, that I have most reason to tell the truth, then I'll be disposed to feel pleasure upon telling the truth or vividly imagining doing so.

Attention. One's attention can be directed to many things that one does not desire, as Sinhababu seems to grant (2017: 85). Consider again cases of encountering what's unexpected. My attention can be drawn to a new funny-looking mural on campus, the absence of a tree I normally see on my walk but never before recognized, or a ring tone on my phone that isn't the one I set up. In such cases it seems strained to explain these grabs of attention in terms of a desire to see funny things, a desire to see what's familiar on walks, or an aversion to the world being other than as I expect it to be. Perhaps I merely associate these things with what I do have desires or aversions towards, such as the expected or the unexpected, but again the explanation is strained. At the very least, anti-Humeans have a perfectly plausible explanation that doesn't suppose directing attention is exclusive to desires.

Another relevant kind of example is of what's old hat—what one is indifferent to but isn't exactly unexpected. Here are two examples from my own life. Ever since Kanye West released his mediocre album 808s and Heartbreak, my attention is often powerfully directed to clocks that read 8:08, even when caught in my peripheral vision. Now I don't have any strong desires or aversions towards this album; I neither hate it nor like it. The situation is similar to another common phenomenon: one's attention being directed toward words one has recently learned. I distinctly recall as a child when the term "grovel" jumped out at me on an episode of the show Roseanne; I had recently learned the word in a school vocabulary lesson. I didn't have and have never had an infatuation with the word "grovel" nor with words generally. "Grovel" was just more recognizable once I knew its meaning. It caught my attention for reasons unrelated to the things I desired or even associated with them.

Plausibly the explanation for why one's attention is directed in such cases will appeal to more cognitive elements in our minds, such as the recognition that there are relationships among certain objects or ideas present or exemplified in one's environment. Of course, in the case of normative beliefs, recognizing one has most reason to do something isn't merely recognizing that an action is related to the idea of rationality. Plausibly our attention is directed toward things we consider reasonable because we view them positively. Humeans can of course account for this in terms of a desire, but the point is that one needn't. We can view things positively merely by recognizing they're reasonable or right. So it's no surprise that what we regard as reasonable or right (or associated with such normative properties) sometimes grabs our attention.
Our regard for reason can manifest itself as a desire to do what we ought (de dicto) but also as a mere disposition to desire to do what we ought (de re). In either case, we regard normative properties positively, which explains why our attention may be directed toward such ideas and what we associate with them. Compare: even if I have no desires regarding the Mona Lisa, that famous painting and similar representations of it will no doubt grab my attention given that I regard that work of da Vinci's positively or as noteworthy.

Vividness. Vivid sensory or imaginative representations of what I regard as significant, even if not desired, will increase their tendency to direct my attention and, when relevant, to motivate. For example, vivid representations of one's tic (e.g., the use of curse words in a film) will increase the impulse to tic. Similarly, vivid perception of novel but pleasing objects will cause increased pleasure and command even more of one's attention. We can expect such vividness effects for normative beliefs in particular, since they precisely mark certain actions or traits as normatively significant.

In sum, many of the characteristic features of desire are not unique to that mental state. Normative beliefs and virtuous dispositions involve regarding certain things as noteworthy or significant, which naturally affects one's motivation, attention, and pleasure. Of course, many of the properties of desire will show up on an anti-Humean explanation of action simply because such explanations posit that moral beliefs motivate only by generating a desire—namely, a desire to do the thing that one believes is right (de re). Consider again Crist: we can explain his pleasure upon pardoning Morrison (or vividly contemplating it) by simply appealing to his intrinsic desire to pardon the musician, not an antecedent desire to do whatever is right (read de dicto). So, even when the properties characteristic of desire are due to the presence of a desire in the explanation, we don't necessarily have reason to posit an antecedent motive that the moral belief serves.

8.6 Simplicity

So far, it seems Humeans can't justify always imputing antecedent desires to people based on what we know empirically about human motivation. Perhaps one could simply employ Ockham's razor, a prized theoretical virtue of scientific theories. Humeans do often admit a sort of explanatory stalemate—both they and their opponents can provide adequate explanations of human motivation—but the famed razor is used to shift the case in favor of Humeanism. Sinhababu, for example, writes:

The Humean theory offers us the attractive promise that a simple explanation invoking only desire-belief pairs for motivation will be sufficient to account for all cases of action. If this promise cannot be kept, we will have reason to go to theories drawing on a more expansive set of explanatory resources—perhaps theories according to which beliefs about our reasons are capable of causing action or generating new motivational forces without any assistance from desire. (2009: 466)

There are two distinct problems with this final empirical strategy. First, parsimony isn't an uncontroversial virtue of empirical theories, for it alone only increases the probability of a hypothesis in rather specific conditions (Sober 2015). In the particular case of motivation, there is special reason to worry about staking one's account solely on simplicity. The history of psychological theory has shown a trend toward the proliferation of moving parts, such as types of mental states, processes, or modules. At this point, the value of even seeking to appeal to Ockham's razor may be suspect, at least given the domain in which it's being employed. Reductive programs might seem appropriate in physics, but psychology and related fields have tended toward introducing new entities and mechanisms, not eliminating them (cf. Haidt & Bjorklund 2008: 205-6; Mikhail 2013). Consider memory as an example (cf. Holton 2009: xii-xiii). Rather than develop a unified conception of memory, psychologists have posited quite distinct kinds with rather different functions (e.g., short-term, long-term, declarative, procedural, episodic, semantic). Of course, these are still all memory, having features that unify them under that genus (e.g., they're all ways of storing information in one's mind). Similarly, Humeans do not shy away from distinguishing different types of desires. However, the general point still holds: we should expect in advance that the architecture of our evolved minds will be disjointed and modular rather than simple and elegant. While the razor may still be of some value, we might at least bet that its role in psychological theorizing will be limited.

Perhaps, though, we shouldn't be antecedently dubious of parsimony in moral psychology. Even so, a second problem is that it's unclear whether the Humean theory is in fact any more parsimonious. Some argue that it isn't (e.g., Barry 2010: 209), but we needn't rely on that strong a claim. The kind of anti-Humeanism defended here doesn't posit a distinct kind of motivational mental state. Given that we can agree that beliefs are not motivation-encompassing, we needn't posit "besires" which have both directions of fit. Instead, we can hold that there is no need to always appeal to an antecedent desire. This involves a kind of transition that Humeans don't admit: a normative belief causes (non-pathologically) a corresponding intrinsic desire. But roughly the same kind of transition is allowed by Humeans: sometimes we transition, in an ordinary way, from a belief to a desire, but only when there is a relevant antecedent desire.

On one way of counting, there is a certain kind of mental process here that Humeans needn't posit (Sinhababu 2017: 57). Perhaps we should individuate processes by what states can be involved in the relationship (the "relata"). On the Humean theory, motivational relationships only arise between one mental state and a desire, and the latter must initiate the process (compare the relation of biological parenthood). Anti-Humeans just allow more than desires to initiate a motivational relationship between mental states (compare the relation of being next to). Sometimes we do seem to distinguish processes based on what's related, as when we distinguish becoming a mother from becoming a father. However, even in such cases we can recognize that the processes or relations are fundamentally the same (here, becoming a parent). And many processes are counted the same while relating different things so long as they aren't importantly different. For example, we don't posit two kinds of baking or two kinds of corrosion just because the relationship can hold between different entities. A human or a robot can bake a cake (or a quiche); water or acid can corrode a pipe (or a rock). Similarly, the motivational process posited by anti-Humeans is not radically different from the one their opponents recognize. We needn't posit two kinds of motivational process just because one is initiated by a desire while the other is initiated by a belief.

Given just these two problems with the appeal to parsimony, I submit we follow Hume himself in being wary of that "love of simplicity which has been the source of much false reasoning in philosophy" (Hume 1751/1998: App. 2.6).

8.7 Conclusion

Once we recognize that our beliefs about what we ought to do frequently play a role in motivation, Humeans must posit an antecedent desire with normative content. Yet we've seen that there is no empirical reason to doubt that a virtuous person's normative beliefs can generate desires to act in accordance with them without serving or furthering antecedent desires. Anti-Humeanism is entirely compatible with what we know about the neuroscience of motivation and psychological explanations of action. Moreover, we needn't cut anti-Humeanism out of our theorizing in the name of parsimony. Perhaps, then, we can in fact be ultimately motivated to act for the pure and simple reason that we believe it's the right thing to do.

Moral integrity can thus take on one of two forms. We can act from an intrinsic desire to do whatever is right (de dicto). This is the only possibility on the Humean view. But the anti-Humean theory allows another possibility. By coming to recognize that something is right, a virtuous person is then motivated to act accordingly (de re). Either way, one acts on a desire to do what's right (read either de dicto or de re) for its own sake, not as a means to something else.

But how do we come to acquire moral integrity? We could discover that the relevant desires are innate, but more plausibly they are acquired over time. On the Humean theory, since beliefs can never generate intrinsic desires, we can eventually come to care about being moral for its own sake only through non-rational processes, such as habituation or cultural osmosis. We could reason our way to caring about morality in a Humean fashion, but only through means-end reasoning (e.g., concluding that being moral serves our interests), which yields an extrinsic desire to be moral, not moral integrity. Either genealogy subjugates reason in moral motivation. What anti-Humeanism provides is a way in which we can come to desire to do the right thing (de re) through processes of cognition and reasoning that are not beholden to our given passions. Early in our lives, we come to understand and recognize instances of what's right and good, which leads to intrinsic desires for such things, and perhaps even an intrinsic desire for whatever we end up deeming to be right (read de dicto). In either case, the anti-Humean genealogy allows reason to direct our desires, even if at other times it is subservient.

Now, I don't pretend to have conclusively ruled out Humeanism as a hypothesis about the structure of motivation. I only aim to show that moral beliefs play an important role in human action and that we lack empirical reason to always posit antecedent desires that they serve or further. Perhaps our best theory will require always positing such antecedent desires. As we saw in the previous chapter, construing moral integrity as involving such antecedent desires isn't always incompatible with virtue anyway.

The next source of pessimism to address doesn't turn on whether our moral beliefs motivate without serving an antecedent desire. Even if reason isn't a slave to our desires, we still have to address scientific evidence that suggests that we often fail to act for the right reasons, due to being influenced by corrupted rationalizations or arbitrary situational factors.

Ch. 9: Defending Virtuous Motivation

Word count: 11,012

9.1 Introduction

Like most professors, whenever my students take a test, I take measures to curb cheating. During the exam I project an image onto the screen at the front of the class: a big pair of penetrating eyes. I introduce this non-standard practice by informing the students that it's based on experimental evidence that people cheat less when there are eyes on the wall (e.g., Bateson et al. 2006). It always gets a laugh. Students presumably find it humorous in part because it sounds preposterous that a mere image of eyes deters cheating. Cheaters are going to cheat regardless of eyes, and good students don't cheat because they believe it's wrong, unfair, or unwise, right? As John Doris puts it, no one is going to say "I did it because of the eye spots" (2015: 43).

Given the sheer number of similar findings, scientific evidence may still seem to warrant wide-ranging pessimism by showing that, even when we do what's right, we're frequently motivated by the wrong reasons. Some theorists argue that arbitrary or morally irrelevant aspects of the situation—such as an image of eyes or the pleasing smell of cookies—unexpectedly motivate us to act (e.g., Nelkin 2005; Nahmias 2007; Vargas 2013; Doris 2015). Even when more stable factors drive us to be kind, fair, and considerate, other commentators argue that these are often forms of self-interest, masquerading as morality, often due to rationalization (e.g., Cialdini 1991; Wright 1994; Batson 2016). In either case, the claim to noble motivation is defeated. Virtue remains elusive, even if we can know right from wrong (Chapters 1-5) and can be motivated by compassion for others (Chapter 6) or by our moral beliefs (Chapters 7-8).

These remaining threats to moral motivation resemble the wide-ranging debunking arguments encountered in Chapter 4. Pessimists contend that moral behavior, by and large, has defective influences that render us lacking, not in knowledge, but in its motivational analog, which we've been calling "virtuous motivation." Some of these influences may be rather stable and familiar (e.g., self-interest), but others are often surprising and vary with the circumstances (e.g., watchful eyes). We thus have two final "defeater" threats to virtuous motivation—frequent, even if not universal, egoism and situationism—which suggest that virtue is at best found only in the few moral saints among us.

As with ambitious debunking arguments, the wide-ranging defeater threats succumb to a dilemma: one can identify influences on many of our morally relevant behaviors that are either substantial or ethically arbitrary, but not both. Our actions are influenced by many factors, only some of which are problematic in certain contexts. Naturally, it's difficult to identify one influence on a wide range of actions that is both sizeable and defective enough that it "defeats" any claim to virtuous motivation. Our best science so far bears this out, suggesting a familiar trade-off. When skeptics identify substantial influences on many morally relevant actions, it comes at the cost of identifying influences that aren't roundly defective. Yet, when skeptics identify genuinely arbitrary influences on a wide range of behaviors, it comes at the cost of such influences not being consistently substantial.

The dilemma provides a principled and systematic reply to the sweeping egoistic and situationist challenges. What's warranted is a cautious optimism: virtuous motivation is prevalent even if sometimes elusive. Often we are ultimately motivated by a concern for others (altruism) and morality (moral integrity), not just self-interest (egoism) or situational non-reasons. We only have pressure to accept a more limited critique of our motivations. Like moral knowledge, virtuous motivation by and large is not under siege.

9.2 The Defeater's Dilemma

Being primarily motivated by self-interest clearly precludes many morally relevant actions from being virtuously motivated. Situational factors are incompatible with virtuous motivation if they similarly prevent one from acting for the right reasons. According to our pluralistic theory of virtuous motivation (see Chapter 7, §7.4.2), what's commonly necessary for virtuous motivation is our broad conception of moral integrity: motivation by an intrinsic desire either to do whatever is right (read de dicto) or to do some particular action that one believes is right (de re). The anti-Humean conception of motivation (Chapter 8) allows for moral integrity in either of its guises. However, virtue is rare if moral integrity is too, crowded out by either self-interest or arbitrary situational factors.

The two remaining empirical threats to virtuous motivation—near-universal egoism and situationism—bear a deep resemblance to debunking arguments. Each threat involves targeting a wide range of mental states by identifying an influence on them that prevents one from achieving a certain morally relevant honorific, here "virtue" instead of "knowledge." To distinguish the cognitive from the motivational challenge, let's follow Doris in calling illicit motivational influences not defective ones but "defeaters" (a distinct usage from the one popular among epistemologists). He defines defeaters as causes of an actor's behavior that "would not be recognized by the actor as [good] reasons for that… behavior, were she aware of these causes at the time of performance" (2015: 64-5; cf. "moral dissociation" in Merritt et al. 2010: 363 and "agential disunity" in Rini 2017). The final clause containing the counterfactual condition ("were she aware…") is arguably unnecessary for our purposes. Doris primarily aims to undermine a common conception of agency or moral responsibility, which is a broader category than virtuous motivation (cf. also Nahmias 2007). And, on common accounts, I lose my agency only if I act for what I wouldn't even consider to be reasons. If, however, I'm honest because of the watchful eye spots on the wall and I would consider that a good reason for being honest, even if it isn't, then I act of my own free will but not virtuously. Virtue and freedom diverge in this way, because one needn't recognize an influence as problematic in order for it to conflict with acting for the right reasons. I need to act for the truly right reasons, not what I merely believe are the right reasons (or even what I would believe upon reflection). Erroneously thinking the eye spots are a good reason for honesty may help me retain my agency, but that's not enough to save virtuous motivation. Similarly, Ayn Rand might regard the self-interested motivations for her acts of charity as enlightened. But, if her actions are ultimately motivated only by a concern to promote her business, they aren't virtuous, even if she remains responsible for them (cf. Arpaly 2003: 84).

So, for our purposes, let's define defeaters in a slightly different way so that we focus on influences that are in fact morally irrelevant (cf. Alfano 2013: ch. 2):

A defeater is an influence on one's action that renders it motivated by the wrong kind of reason or increases the degree to which it's so motivated.

The reason is of the wrong kind in the sense that, if the influence is substantial enough, one's motivation is morally problematic or less virtuous. Again, even if it's not sufficient for virtue to act for good reasons, it's generally a necessary condition that one's action is not significantly influenced by defeaters. Importantly, defeater challenges to moral motivation apply regardless of whether there are such things as robust character traits. Selfishness, for example, is commonly a defeater regardless of whether it's a stable vice or arises from a fleeting feature of the situation.

Both egoism and the effects of arbitrary situational factors often seem to be defeaters for morally relevant actions. If one does what's right primarily for self-interested reasons, then one's action is morally lacking in some respect, as Kant famously recognized (see also, e.g., Stocker 1976; Miller 2013: 48). Virtuous people can often allow inappropriate reasons to play a minor role in their behavior, but such reasons cannot be the main basis (cf. Kennett 1993). Situational factors often affect one's behavior by tapping into self-interest, as when someone decides against embezzlement in one instance only because a crime drama on television primed her to dwell on the risk of punishment. But acting on situational factors can conflict with doing what's right for the right reasons even when it taps into more other-regarding concerns, as when you help a man in need because you can fully empathize with a white person's plight. Virtuous motivation requires instead that we be motivated primarily by the fact that the action would help someone in need, would be fair, would promote the greater good, or would simply be moral.

As with debunking arguments, let's construct a schema into which factors, such as self-interest and mood, can be plugged, using variables for any morally relevant behavior (B) and motivational influence (M):

Defeater's Schema
1. B is mainly based on M. (empirical premise)
2. M is a defeater. (normative premise)
3. B is not virtuously motivated.

Notice that, unlike defective epistemic influences, it's clearer that motivational influences are problematic even if one is unaware of them. There may be some room for debate here, but it needn't detain us. So let's likewise omit any such parameter from this schema.

Much like the epistemic case, we'll see that the schema tends to work well for smaller rather than larger classes of behavior. Many of our actions, including morally relevant ones, are based on multiple factors. Some are stable versions of the intrinsic motivations familiar from the second half of this book: egoism, altruism, and moral integrity. More fleeting factors specific to one's circumstances can also play an independent role, although they sometimes operate through one of our three intrinsic motives. These various motivational factors are morally problematic only in some contexts. So there is reason to suspect that sweeping challenges to virtuous motivation are subject to a Defeater's Dilemma: one can easily establish that an influence on a wide range of behavior is either substantial or morally problematic but not both. As before, we should accept the empirical challenges but only regarding a limited range of our behaviors.

9.3 The Threat of Egoism

Pessimism about moral motivation may be warranted even if moral integrity exists, provided that egoistic motives, such as moral hypocrisy, are much more frequent and potent (Batson 2016). Chapter 7 suggests that moral integrity is possible, even fairly common, but determining the frequency and potency of a motive is challenging. We can make further progress, however, by considering the key evidence for the prevalence of egoism vs. the prevalence of non-egoistic motives, such as moral integrity and altruism.

9.3.1 Moral Integrity or Near Enough

Batson supports his pessimism primarily by pointing to his experiments on moral hypocrisy (see Chapter 7, §7.3.3). Even when we appear to be motivated to be moral, our concern instead may be self-interested. The studies can only hope to directly demonstrate the possibility of being motivated to merely appear moral when it promises personal gain. But the results have the power to cast doubt on the assumption that moral behavior is always morally motivated.

The pessimist will hasten to point out that (ante hoc) rationalization is common across a variety of contexts. After all, we saw in Chapter 7 that motivated moral reasoning can make sexually aroused individuals rationalize sleazy behavior. Moral licensing can make us rationalize bad behavior in light of recent good deeds. And we often rationalize dishonesty when presented with opportunities to cheat for financial gain.

In all of these contexts, however, we saw that succumbing to temptation often works through changes in normative or evaluative judgment. For example, by devaluing what was previously thought more valuable, one can rationalize a different option by a change in one's perception of what's best. In this way, the studies appear to be revealing moral motivation through rationalization. Participants in the experiments don't just fudge the results of a coin flip to benefit themselves; they regard their fudged flip as more moral than simply taking the self-interested option without flipping. People aim to appear moral to themselves, not just to others, which suggests moral integrity: either a desire to be moral (de dicto) or a disposition to transition from the belief that an action is moral to desiring that it occur (de re).

It's tempting to see rationalization as driven entirely by egoism. Self-interest may seem to be the only intrinsic motive on the scene that can drive changes in normative beliefs that amount to rationalization. But it's unclear how the change in normative judgment is supposed to serve self-interest, except insofar as one ultimately wants to avoid doing what's immoral, unreasonable, or unjustifiable. If the only intrinsic goal is self-interest, then presumably there would be no need to change one's normative assessment. One can simply accept that, say, cheating is wrong but then cheat anyway in order to gain the benefits. Cheating while believing it's wrong may induce unpleasant feelings of guilt, but again this seems to presuppose a prior concern to be moral. So, as Chapter 7 argued, the evidence fails to rule out that the tempting option appears better in order to serve the motive of moral integrity. Having a personal stake in the evaluative assessment simply paints the object of evaluation in a better light. Perhaps an intrinsic desire for one's own benefit also partly drives the process of ante hoc rationalization, since one is justifying a self-interested choice. But, even if we posit two intrinsic goals, what seemed like an entirely egoistic motivation becomes partly a matter of moral integrity.

Let's concede, though, that it's difficult to adjudicate this aspect of the debate. All we need to rebut the threat to virtuous motivation is to show that the relevant motive isn't a defeater. What's clear is that rationalization is plausibly driven by a concern to see oneself as moral. The labels "moral integrity" and "egoism" don't matter so much as determining whether this motivation is often morally problematic or incompatible with virtuous motivation, and it isn't. You can criticize a politician who passes anti-discrimination legislation only to acquire votes, but not if she does it primarily because she wants to see herself as doing the right thing and would otherwise feel guilty. Even if the intrinsic desire is self-focused—on seeing oneself as moral or avoiding guilt—it isn't typically a defeater.

9.3.2 Limited Egoism & Rationalization

Let's focus on the kind of egoism that commonly serves as a genuine defeater. Here the problem for the pessimist is that we have some positive evidence that such egoism isn't so commonly serving as the main basis for behavior. We have strong evidence—from common experience to rigorous experiments—that egoism often competes with other morally laudable motives. Experiments consistently suggest that egoism is commonly tempered in particular by both altruism and moral integrity.

Consider altruism first. As we saw in Chapter 6, a wealth of research suggests that we can have an ultimate concern for the well-being of others, particularly when we empathize with them. Many of the experiments on altruism involve the choice of either helping at some personal cost or avoiding that cost at another's expense. Yet individuals strongly empathizing with others don't tend to rationalize the self-interested option. For example, a range of experiments show that empathizing participants will go so far as to suffer electric shocks so that another person in distress does not. A common egoistic explanation of why empathy increases motivation is that feeling another's pain is unpleasant and empathizers implicitly believe that helping is the best way to eliminate it. But numerous studies have consistently shown that people feeling especially high levels of empathy are still inclined to help when it's easy for them to simply exit the experiment. The motive of altruism prevents many people from letting the person in need suffer.

Empathy is limited in many ways, of course. We are less inclined to feel empathy in the first place for those we loathe or for depersonalized individuals, such as large groups of victims described in terms of mere statistics (e.g., Jenni & Loewenstein 1997). When we do empathize, the altruism studies suggest that such feelings can help us overcome self-interest, but not unconditionally. When experimenters describe the electric shocks as more than mild, for example, helping rates naturally go down, even among the empathizers (Batson 2011: 191). Excessive compassion can also lead to injustice, as unfairly privileging the target of empathy leads to nepotism and other forms of partiality. In one experiment (Batson, Klein, et al. 1995), participants were told details about a 10-year-old girl (actually fictional) with a terminal illness and then suddenly had the opportunity to move her off a waiting list into immediate treatment ahead of clearly more deserving children. The majority of people feeling high empathy for little "Sheri Summers" took the opportunity to help her cut in line (73%), in stark contrast to a minority who made this choice in the low empathy group (33%).

As Batson realizes, however, the science shows that the relative prevalence and power of egoism and altruism vary with the circumstances. The only general conclusion we can draw is that "empathy-induced altruism can at times be overridden by self-concern, and at times empathy-induced altruism can override self-concern" (Batson 2011: 203). Moreover, while we are less inclined to empathize with those suffering in faraway lands, in ordinary interactions with friends, family, and acquaintances one's compassion is unlikely to dramatically diminish or collapse. Rigorous empirical research does suggest that empathy has considerable limits, but doubt on this score should only be accepted as far as it goes.

Let's now turn to the ways in which moral integrity limits egoism. We can begin by continuing with a discussion of helping behavior. It is well established that increased feelings of guilt motivate morally relevant actions, particularly helping behavior (for review, see Miller 2013: ch. 2). For example, in one brief field experiment, participants were more likely to let someone know that candy is leaking out of a shopper's bag if they were feeling guilty about having apparently broken someone's camera (Regan et al. 1972). The explanation of the relationship between guilt and helping is controversial. On various plausible accounts, though, guilt generates more helping because guilty people are more motivated to be moral. Again, some regard such motivation as incompatible with virtue (including Miller 2013), but we have rejected such an overly restrictive account. Thus, there are at least two other motivations for helping behavior that constrain egoism's influence: altruism (via compassion) and moral integrity (via guilt).

Studies of cheating and dishonesty provide another rigorous and extensive line of research that pits egoism against integrity. A series of such experiments—conducted by multiple labs in different countries using thousands of participants—has consistently shown that most people cheat a little but not a lot (Ariely 2012). Many of the studies involve presenting participants with an opportunity to lie with impunity about the number of problems they solved, thus increasing the cash they can pocket at the end of the experiment. One of the most prominent paradigms has been carried out by Dan Ariely and his collaborators (see, e.g., Mazar, Amir, & Ariely 2008). Participants are asked to solve as many math problems as they can in a short amount of time, after which they receive payment for the number correctly solved. Those in the control condition do not have an opportunity to cheat, typically because they must return their answer sheet to the experimenter. The group with an opportunity to cheat is told to simply report the number correctly solved and to either keep or shred their answer sheet. In such experiments, participants who had an opportunity to cheat claimed on average to solve only two more math problems (six out of twenty total) than those without an opportunity to be dishonest (four out of twenty). Importantly, the average difference wasn't due to a select minority of amoral liars who claimed to solve most of the problems, but rather to "lots of people who cheated by just a little bit" (Ariely 2012: 23). Such results are the norm in this line of research. Small amounts of cheating are even observed when the payout per problem solved is substantially increased and even when the probability of being caught varies (Mazar et al. 2008). So the best explanation again appeals to rationalization: we tend to cheat only a little because that's the amount we can implicitly justify to ourselves (and to others if necessary), possibly in part by disengaging with or paying insufficient attention to the relevant moral rules (Bandura 1999; Shu & Gino 2012).
Whatever the specific mechanism, increasing the "fudge factor," as Ariely puts it, appears to increase cheating because it allows one to rationalize bending the moral rules.

Other studies of rationalization also suggest it has limits. Various studies of motivated reasoning demonstrate that it is tempered by the desire to be accurate (for review, see Kunda 1990: 481-2). Consider one recent and elaborate study of how we rationalize our preferred political candidates. David Redlawsk and his collaborators (2010) had participants periodically evaluate fictional candidates in a simulated presidential primary in which new information about the candidates came in over time. The researchers did find that negative information about one's initially preferred candidate didn't always lead to correspondingly more negative evaluations of that individual. Such data confirm the conventional wisdom that a politician's followers are inclined to rationalize continued support even in the face of rather egregious scandals. But, as any politician can tell you, the love isn't unconditional and supporters will eventually jump ship. Accordingly, in the simulated campaign study, there was a tipping point: as negative information built up, participants' evaluations of their preferred candidate did eventually change accordingly.

Similar limitations are present in moral licensing, in which people justify bad behavior to themselves based on recognizing their previously good deeds or traits—e.g., "I've already done my good deed for the day" (see Chapter 7, §7.3.2). A meta-analysis of ninety-one studies suggests that the effect size is officially small to moderate (Blanken et al. 2015). Moreover, the experiments tend to show that self-interested desires combine with moral motivation to rationalize only slight deviations from moral norms. For example, Mazar and Chen-Bo Zhong (2010: Experiment 3) randomly assigned some participants to engage in an action that could be seen as particularly virtuous (selecting environmentally friendly products from the virtual "green" store, rather than a conventional one). However, all participants then engaged in another task that required ninety trials of reporting which of two visual patterns appeared on a computer screen, one of which paid out more money. On any of the trials, participants could earn more by misreporting which of the patterns they saw. Moreover, after completing the trials, everyone had the opportunity to steal money because they were allowed to freely collect their earnings from an envelope containing a total of $5. Thus, this is the maximum amount that lying and stealing could yield. Yet on average the moral licensing group walked away from the experiment with only $0.83 more than the controls (who previously had an opportunity only to choose ordinary items from the "conventional store"). Thus, moral licensing also appears to be limited in how much it can rationalize immorality. Many people may feel licensed to be a bit dishonest but not, say, licensed to kill.

Rationalizing bad behavior may also be limited to only certain contexts, such as secretly stealing a little money from experimenters. It would be shocking if there were no situations in which being primed to recall one's good deeds instead increases moral behavior. Appropriately, there is evidence of such moral reinforcement, particularly in the context of charitable donations. In one experiment, one hundred people were offered $5 for participating, but at the end of the study there was an unexpected opportunity to donate any portion of it to a charity (Young et al. 2012). Participants wrote about at least five recent acts of theirs, but they were randomly assigned to write about only one of three types of action: good deeds, bad deeds, or mere conversations. The researchers then coded participants who wrote about good deeds as either "social signalers," who wrote about feeling appreciated or unappreciated for doing their good deed, or "do-gooders," who didn't. The results provided evidence of moral reinforcement: people assigned to write about their past good deeds donated more than the other two groups did (about $3 vs. $2).
Moreover, within the good deeds group, average donations differed even more drastically between the do-gooders and the social signalers who focused on whether their good deed was appreciated: $4.50 vs. $2.76, respectively. Not only can past good deeds motivate more good behavior; the effect is more powerful if one conceives of the previous deeds as done for the right reasons—namely, because it's right or helps others, not because it makes one feel appreciated.

All this research on moral behavior suggests that two well-matched motives—egoism and moral integrity—often battle to win over action. Positing moral integrity seems necessary at least to explain why people rationalize self-interest. After all, if people tend to care about being, say, honest only instrumentally—e.g., to avoid punishment—then why don't they cheat more when they know they can get away with it? Pessimists might again point to a desire to avoid guilt or avoid seeing oneself as a bad person, which one is more liable to feel when one cheats more than just a little (cf. Batson 2016: 145). But again, it's difficult to distinguish this motivation from moral integrity. At any rate, a desire to see oneself as good or righteous isn't enough to defeat one's claim to being virtuously motivated. So moral integrity seems to compete well with egoism in terms of both frequency and potency.

And it's significant that moral integrity is present even if egoism is as well. As many ethicists have noted, the virtuous among us may well have mixed motives (see, e.g., Kennett 1993; Arpaly 2003). Perhaps fulfilling one's moral duty requires doing it primarily for the right reasons, but the wrong reasons may play some role. Moreover, occasionally self-interest is a morally appropriate consideration, as when you duck to avoid being hit by a rock while knowing full well it will strike the unsuspecting man behind you. Such cases reveal that self-interest doesn't always conflict with virtue.

Another source of optimism is that it's often quite easy to be redirected back onto the moral path. Researchers have found that dishonest behavior is mitigated or eliminated when, for example, participants are initially asked to sign an honor code, to sign the top of the answer sheet, to write down the Ten Commandments, to complete the task in front of a mirror, or to simply not be a cheater (for review, see Ariely 2012; Miller 2016). Of course, these studies are not the final word. Perhaps it's just as easy to increase dishonesty. Just dim the lights, mention that other similar people cheat, or introduce some psychological distance (e.g., rather than pay participants with cash directly, use tokens they can later exchange for real money). Moreover, most of the experiments are done in artificial laboratory settings with Western university students, which do not represent all ways of being dishonest or human. Fortunately, though, our task is largely defensive: to show that the science doesn't at this stage warrant pessimism grounded in egoism.

9.3.3 The Dilemma for Egoism

In sum, it's difficult to identify self-interest as the main motive that systematically prevents most of our actions from being virtuously motivated. The science does not suggest that egoism dominates our social interactions. Rather, we're commonly motivated also by altruism and moral integrity, which place significant limits on egoism in terms of both frequency and potency. So, while egoism is a defeater in many contexts, it isn't necessarily the main basis for most morally relevant behavior. If we water down "egoism" to merely a desire to see oneself as morally good, then it substantially influences a wide range of morally relevant behaviors, but then it looks more like moral integrity or, at any rate, no longer a defeater. More schematically, sweeping egoistic threats are subject to the Defeater's Dilemma:
• Rationalizing self-interest (e.g., a desire to see oneself as moral) is a main basis of many morally relevant actions, but it isn't a defeater in most contexts.
• Paradigm egoism (e.g., enlightened self-interest) is a defeater, but it's not the main basis of many morally relevant actions (given the frequency and potency of altruism and moral integrity).
Ultimately, like our propensities to cheat, we should be a little bit pessimistic about the nobility of our motives, but not a lot.

9.4 The Threat of Situationism

Situationism is, roughly, the idea that human behavior is influenced by features of one's circumstances far more heavily and more often than we tend to think. Now, the specific threat we'll examine isn't necessarily situationism in particular, but a view closely associated with it: that much of our behavior is motivated by factors we would recognize as arbitrary, alien, or non-reasons. For example, in the watching eyes experiments mentioned at the beginning of this chapter (e.g., Bateson et al. 2006), many may be concerned that they avoided cheating primarily because an image of eyes was placed on the wall. Some prominent neuroscientists argue that in general we're mostly consciously aware of mere confabulations we concoct that have little connection to the unconscious influences on action (e.g., Gazzaniga 2014). In other words, the sciences of the mind seem to have shown that, while we may be able to reliably report some of our motives, many are unexpected, even shocking and unwanted. Many of the studies marshalled in support of this view primarily lie in the situationist tradition. Moreover, the identification of arbitrary influences on human behavior is what many regard as "the heart of the situationist challenge" (Alfano 2013: 50). So I'll continue to speak of the threat of "situationism."

Some theorists explicitly espouse pessimism in light of such findings. Or at least these theorists tend to accept that our conception of human agency must be drastically revised or reconceived in order to accommodate situationism and related empirical literatures (e.g., Nelkin 2005; Nahmias 2007; Vargas 2013; Doris 2015). While such philosophers have not always discussed virtuous motivation directly, they recognize it to be the core issue. As Dana Nelkin puts the problem (in the context of discussing moral responsibility): "the experiments challenge the idea that we can control our actions on the basis of good reasons" (2005: 204). Similarly, Manuel Vargas argues that "contemporary psychological science" poses a threat "by showing that the basis of our actions is disconnected from our assessments of what we have reason to do" (2013: 327). Writing with Maria Merritt and Gilbert Harman on the science of character, Doris and his co-authors argue that situations often yield "behavioral outcomes inconsistent with the agent's (often reflectively affirmed) evaluative commitments" (Merritt et al. 2010: 371). More recently, Doris (2015) argues that the situationist literature reveals a "divergence of reasons and causes" (44) such that an ordinary person would think "she's done the right thing, but not for the right reasons; doing it because you're watched is not the same thing as doing it because it's decent, honest, or fair" (43). Such divergence, according to Doris, is "widespread in everyday life" (61), given that "many studies identify causes of behavior that are not plausibly taken as reasons for behavior" (43).

The common challenge in these discussions is that scientific evidence exposes influences on behavior that "defeat" our common claims to be acting for the right reasons. Of course, some of these theorists wouldn't consider themselves to be arguing for pessimism about moral motivation. But such frameworks can easily lead to it. Indeed, we might not be able to evade this challenge by just downplaying the role of reflection or emphasizing the role of circumstances in agency (à la, e.g., Arpaly 2003; Merritt et al. 2010; Vargas 2013; Doris 2015). As we'll see, the problem is driven more by unconscious influences on action than by reflection, and those influences needn't be features of one's circumstances.

9.4.1 Some Evidence

There are many studies in the situationist literature that concern morally relevant behavior. Perhaps most famous are Philip Zimbardo's Stanford prison experiment and Stanley Milgram's study of obedience to authority. However, the significance of these studies for virtue has already been thoroughly discussed. Let's instead focus on experiments that are more recent or that have received less treatment by ethicists. We've already encountered some relevant experiments in this chapter and others, including those on cheating, dishonesty, fairness, and framing effects—most of which focus on situational influences.

Many of the prominent studies we haven't yet discussed suggest that seemingly arbitrary changes in one's circumstances substantially influence helping behavior. In one classic experiment, Darley and Batson (1973) found that seminary students were more likely to stop to aid a slumped and groaning man clearly in need when they weren't in a hurry to reach their destination. What's more surprising is that helping was not affected by whether the students were heading to give a speech on the parable of the Good Samaritan. A more recent study found that people in a shopping mall were more than twice as likely to help a stranger make change for a dollar when the request was made in front of a store emitting pleasing fragrances, such as those of cookies or coffee, as opposed to shops emitting neutral smells (Baron 1997).

Other studies of helping behavior famously document that we're less prone to act when part of a group that is unresponsive. Imagine you're participating in an experiment and you seem to hear someone in another room fall and cry for help. Numerous studies show that if you're the only one in the room, then you're very likely to attempt to help this person. However, if there are more strangers around you who don't get up to help or even investigate, the chances that you'll buck the trend and help are shockingly low. This so-called "bystander effect" and related group effects appear to further document ethically problematic influences on morally relevant actions (for review, see Latané & Nida 1981).

It's not just helping behavior that's influenced by defeaters. Studies of implicit bias suggest that we sometimes treat others differently partly based on morally irrelevant factors, such as race, gender, age, and sexual orientation (for review, see Brownstein 2015). For example, when making quick decisions we're more likely to mistakenly believe that an individual is holding a weapon when that person is black rather than white (e.g., Payne 2001). Other studies involve sending out resumes to potential employers that are identical except that the resumes are randomly marked as originating from a person with either a stereotypically white or black name. Such field experiments have found that employers are more likely to pursue the (actually fictitious) job applicants perceived to be white rather than black (e.g., Bertrand & Mullainathan 2004; Uhlmann & Cohen 2005). Such experiments often focus on morally questionable acts, but implicit bias also infects the motivations behind morally good actions. For example, an optimist might say "Your grandfather is a man of virtue: always kind and considerate toward people," but a pessimist might mutter in reply "It just seems that way because he only hangs around white people." Or a self-reflective manager might know he absolutely hired the most qualified person but worry about the thoughts that flashed through his mind when evaluating the candidates: "He'll remain levelheaded under pressure" and "She'll eventually just go on maternity leave." The manager might wonder: "While Jack was certainly much more qualified than Jill, was I actually motivated by misogyny?" Even when we're fair or kind, the influence of implicit attitudes might defeat any claims to virtuous motives.

We also previously encountered framing effects on morally relevant decisions (in Chapter 4, §4.5). Recall that Tversky and Kahneman (1981) found that participants would opt for one approach to dealing with a disease outbreak based on whether the approach was framed in terms of the number of lives lost as opposed to saved. Now, the researchers only directly measured judgments about what to do in a hypothetical scenario, but presumably people's choices would be similar if the disease dilemma became a reality. At any rate, other studies directly measure real-world decisions. Consider the choice of political candidate, a morally relevant behavior. Extensive analyses of voting records have established that candidates tend to win more votes when they occupy the top of the ballot (see, e.g., Ho & Imai 2008). The order effect tends to be quite small and most pronounced when voters do not have a wealth of other information to go on, but it's surprising nonetheless.

We can also look to studies of fairness, cheating, or dishonesty, reviewed earlier in this chapter. In addition to the watching eyes experiment, there are the studies of moral hypocrisy, which suggest that people will allocate a benefit fairly based partly on whether there is a mirror in the room (Batson et al. 1999). Similarly, Ariely (2012) and his collaborators found that participants cheated more or less based partly on whether they were asked to recall the Biblical Ten Commandments or sign an honor code.

This is just a sampling of some of the relevant situationist studies that appear to reveal the influence of non-reasons on ethical behavior. There are three general features to notice in order to appreciate the threat to virtuous motivation. First, many of the studies cover a wide range of morally relevant behavior, such as harm, lying, beneficence, cheating, and even discrimination. Second, the studies themselves cover a wide range of designs, from tightly controlled laboratory experiments to more ecologically valid field studies. Finally, the data seem to reveal defeaters. By and large, ethical behavior shouldn't be substantially motivated by the presence or absence of morally irrelevant factors, such as narcissism, racial or gender bias, anxiety about cartoon eyes, or the mere order in which one receives information.

Again, while pessimists often point to studies which show people apparently failing to live up to moral obligations or failing to behave a certain way due to an ethically arbitrary factor, one can also point to studies showing that arbitrary factors increase good behavior or compliance with a norm. People may cheat less when a mirror is present, for instance, but this appears to be an inappropriate influence regardless of whether it enhances or inhibits morally desirable action. Either way the influence goes, it suggests that we aren't motivated for the right reasons.

9.4.2 The Dilemma for Situationism

Given our focus on wide-ranging pessimism, the question before us is not whether there are some problematic influences on our moral behavior. There absolutely are. The question is whether many of our morally relevant actions are substantially motivated by genuine defeaters that significantly diminish any claim to such motivations being virtuous. We'll see, however, that wide-ranging pessimism struggles to identify a range of factors that both substantially influence action and are genuine defeaters. While we may not see in every study a clear trade-off between identifying a substantial influence and identifying a defeater, a trend will emerge that reflects the Defeater's Dilemma.

Before proceeding, notice an important difference between attempts to debunk moral beliefs and attempts to defeat moral motivation. An influence is a main basis for some class of moral beliefs if it causes a shift in the polarity or valence of those beliefs. Measures of moral behavior, however, don't always have a clear valence. The studies of helping behavior, for example, typically measure rates of helping or amount of help provided (e.g., How many papers did they help pick up?). In the abstract we can only say: an influence is a main basis for behavior if the action likely would have been importantly different had the influence been absent. As we evaluate the threat from situationism, we must consider whether arbitrary factors exert this kind of influence on a wide range of morally relevant behaviors.

It will be futile to look for a single problematic influence on all, or even the vast majority, of morally relevant behavior. Sure, many kinds of behavior are plausibly influenced by egoism, framing effects, and transient mood. However, whether any one of these is substantial and arbitrary depends on the case. So, while it is useful to consider a diverse array of experiments, we must ultimately address them on a case-by-case basis. We can't evaluate them all here, but we can at least divide some of them into useful groups and attempt to look for general trends. Let's examine three different categories of morally relevant behavior and the various influences on them that have been identified empirically (Table 9.1).

Table 9.1: Situational Influences on Classes of Behavior

Behavior Type          Some Influences
Helping/Beneficence    haste, mood, group, implicit bias
Non-maleficence        implicit bias, framing (order, wording)
Honesty/Fairness       moral reminders (mirror, honor code, recall commandments)

We’ll see that in all three categories studies identify only some defeaters among other appropriate influences. Indeed, when a genuinely illicit influence is found, it tends to have a small effect across a wide range of behavior, which suggests that it’s not the only basis, and not the main basis, for action. No doubt some studies provide evidence of a substantial influence that is ethically arbitrary, but we cannot generalize from one study to a wide range of behavior. Moreover, some people in these experiments surely lack virtuous motives. However, our concern is not with particular individuals but with whether the experimental data they produce warrant pessimistic conclusions about people in general who find themselves in a variety of circumstances. Perhaps a fully virtuous person wouldn’t be so liable to distraction or bias in so many circumstances, but recall that our present concern also is not with whether people possess global character traits, are fully virtuous, or possess practical wisdom. Our question is whether the science gives us reason to doubt that people are frequently motivated by the right reasons. While the science may warrant a limited critique of the motivations behind ethical acts, the Defeater’s Dilemma only constrains more ambitious projects.

9.4.3 Helping/Beneficence

There have been many more investigations of how circumstances affect helping behavior than the few examples recounted above. A meta-analysis of sixty-one studies of positive mood, in particular, indicates that its effect on helping behavior is relatively large and reliable (Carlson et al. 1988). Negative mood apparently has a smaller but detectable effect on helping behavior, although the relationship between the two is more complex and heavily moderated by distinct variables (Carlson & Miller 1987). If the general trend is a reliable effect of mood on such prosocial behavior, then we have some support for the relevant empirical premise
of the Defeater Schema, at least for a wide range of morally relevant actions. What about the normative premise? Does circumstantial mood generally constitute a defeater? It depends greatly on the helping behavior at issue. Christian Miller (2013: 75) considers the influence of positive mood to be non-altruistic and perhaps even fully egoistic if “the motive is concerned merely with keeping the person in a positive mood.” But many of the studies at issue present opportunities to act that are arguably supererogatory or morally optional. When out in public, surely one has a moral obligation to aid someone who is in danger of suffering death or bodily injury. But, especially in a crowd, you typically go beyond the call of duty when you help pick up some dropped papers, exchange four quarters for a dollar, or alert a shopper that her bag is spilling candy. It’s important that the opportunities involve supererogation because often when an action is morally optional it’s perfectly appropriate to be guided by one’s mood. In general one’s mood is certainly irrelevant to why one shouldn’t lie, cheat, steal, or maim, but the reasons to do what’s morally optional often do depend precisely on whether one feels like it. For example, sometimes it’s appropriate to give a gift or a compliment in part because the thought occurred to you while in a good mood. Perhaps we all have an “imperfect” duty to be benevolent and charitable from time to time, but which times may be dictated by circumstantial mood. Indeed, such discretionary duties, as we might call them, can be appropriately influenced by a range of circumstantial mindsets—such as guilt, embarrassment, shame, compassion, or elevation—which numerous studies have shown influence benevolence (for review, see Miller 2013; Doris 2015; Batson 2016). Even when a transient change in mood has a substantial impact on a decision, it does not necessarily rule out being moved by other relevant considerations. As Arpaly puts the point by analogy: “When a person whimsically asks for milk instead of cream in the coffee she has with her chocolate cake, one does not doubt that she does it for health reasons but doubts merely the seriousness of her concern” (2003: 88). Similarly, as Arpaly makes clear, both a committed and a capricious philanthropist may do what’s right for the right reasons, even though the latter would have blown his extra money on a yacht if the charity hadn’t called while he was in a good mood. A committed philanthropist may deserve more praise, but both actions are praiseworthy. So, when one does what’s right in part due to the right situational factors being present, we must ask, not whether one could have been better motivated, but whether the situational factor defeats any claim to virtuous motivation. (Compare our treatment of debunking arguments in Chapter 4: our concern there was with whether certain influences render one’s moral beliefs unjustified, not less justified.) The Good Samaritan experiment requires similar but slightly different treatment. It may present at least some participants with a situation in which they have a moral obligation to at least check on the man, who looks in serious need with no one else around to help him. It is surprising how much helping rates drop when people are in a hurry. 
However, it's not obvious that one is obligated to stop and help someone whose need is deliberately "ambiguous" and who is only "possibly in need of help" (Darley & Batson 1973: 102), especially when one is in a hurry to fulfill another obligation that is unambiguous (Sabini & Silver 2005: 558). It is also surprising that seminary students in particular so rarely stopped despite recently pondering the parable. But those students who did help may have been motivated by the right reasons, even if they wouldn't have stopped had they been in a hurry. Being in a rush isn't an excellent candidate for a morally inappropriate influence. Studies of group effects do often involve situations in which one arguably has a moral obligation to help, given that the person often appears to be seriously injured.
One's mood is commonly a morally irrelevant factor in such contexts. However, the social inhibition of helping isn't driven by mere mood. Instead, as Latané and Nida (1981) make clear, the effect is plausibly due to at least three factors related to the unresponsiveness of the group (see also Miller 2013: ch. 6.4). When more people are around, one is more likely to think that others bear some of the responsibility to help (diffusion of responsibility); that there is no real need to help since others aren't (social influence); and that the risk of embarrassment is high given the audience for one's potentially foolish attempt to help when it's unnecessary (audience inhibition). The support for this account is morally significant. Experimental evidence suggests, for example, that even in large groups helping rates remain high when it's clear (rather than ambiguous) that there is need for aid, even when one can just see that other group members are startled by the sound of something crashing down on the victim. In other words, there is evidence that helping someone in need is driven by at least some morally relevant considerations, particularly whether help really is needed. What seems like a defeater (being in a group) may turn out to be driven by a number of sub-factors, many of which provide appropriate reasons.

What about implicit biases? They certainly affect prosocial behavior and are better candidates for inappropriate influences. There is ample evidence that implicit attitudes about race and gender, in particular, influence a wide range of actions. However, while the matter is somewhat controversial at this stage, the effects do appear to be rather small. Take, for example, the weapon identification task, developed by Keith Payne and his collaborators. Over many trials, participants quickly categorize an image as either a handgun or a hand tool after briefly viewing an image of either an African American or Caucasian male's face (200 milliseconds). Payne (2001) tracked, among other things, the rates of errors that participants made, either from misidentifying a tool as a gun or vice versa. When participants were not under time pressure, there were no differences in error rates, which were quite low anyway (6%). In a second experiment, however, participants had to respond within 500 milliseconds while still trying to remain accurate, which led to more errors (29% on average). Under these conditions, participants were more likely to misidentify a tool as a weapon after seeing a black rather than a white face flash up on the screen (error rates of 37% compared to 31%). Importantly, however, the difference, while statistically significant, is quite small (6 percentage points). This is arguably characteristic of studies of implicit bias across a range of behaviors and reactions. One large meta-analysis suggests a small to medium effect size, but implicit attitudes were only more predictive of behavior than explicit attitudes in two out of the nine domains examined (Greenwald et al. 2009). A more recent meta-analysis of implicit racial attitudes in particular indicates that they are hardly predictive of discriminatory behavior (Oswald et al. 2013). Now, perhaps the Implicit Association Test isn't the best measure of implicit biases, which is a main subject of these meta-analyses. However, another recent meta-analysis of 426 studies included other measures as well, and of a variety of implicit biases (Forscher et al. 2017).
Yet the researchers found that, while implicit attitudes can be changed, these changes don't appear to substantially influence explicit biases or behavior. Of course, even minuscule effects can add up to great social injustices. As Neil Levy points out, "thousands of small slights and instances of disrespect over the life of a single black person" add up and could sometimes make the difference between "being hired or not" or "being taken to carry a wallet or a gun" (2015a: 803; see also Greenwald et al. 2015). However, our current focus is one's reasons for acting, not the action itself or other downstream effects. And the data suggest that implicit biases exert minor influences on action. While this is still cause for great concern when it comes to social justice, the threat is much less significant for virtuous motivation.
When someone does what's right—e.g., gives the benefit of the doubt to an apparent stranger walking late at night in one's neighborhood—the action or omission may have been partly influenced by the stranger's white, rather than black, skin. But in general, this is only one minor influence among many, such as the concern to not hassle an innocent person merely taking a late-night stroll. At any rate, the implicit attitude doesn't clearly disqualify the considerate act from being virtuously motivated.

Now, implicit or explicit attitudes about race and gender can sometimes substantially influence behavior. Some people are so chauvinistic that their choices are primarily based on racist or sexist attitudes, whether or not they're aware of it. However, the existence of some such cases doesn't substantiate sweeping pessimism about most people's motives. Moreover, in some cases we should be motivated by facts about race or gender. Affirmative action policies, for example, intentionally yield decisions substantially based on an applicant's race. Even opponents often agree that public policy should be sensitive to whether someone is a member of a systematically disadvantaged group, given that a "color blind" approach to racial disparities often only deepens them. In such cases, there's no doubt that implicit (or explicit) attitudes about race or gender substantially influence action, but they are not defeaters.
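It is worth pausing over what "statistically significant but small" amounts to here. The sketch below is only an illustration: it takes the two error rates reported above from Payne (2001) and computes a standard effect size for a difference in proportions (Cohen's h); the figure of 1,000 encounters used to illustrate Levy's accumulation point is a hypothetical number, not data from any study.

```python
from math import asin, sqrt

# Error rates reported above from Payne's (2001) speeded condition:
# tools misidentified as guns after a black vs. a white face prime.
p_black, p_white = 0.37, 0.31

# Cohen's h, a standard effect size for a difference between two
# proportions: 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2)).
h = 2 * asin(sqrt(p_black)) - 2 * asin(sqrt(p_white))
print(f"Cohen's h = {h:.2f}")  # ~0.13, under the 0.2 benchmark for a 'small' effect

# Levy's accumulation point, with a purely hypothetical number of
# split-second judgments: small per-encounter gaps still add up.
encounters = 1000
extra_errors = (p_black - p_white) * encounters
print(f"expected extra misidentifications over {encounters} encounters: {extra_errors:.0f}")
```

On this measure the racial difference is genuine but modest, which is just the combination the Defeater's Dilemma predicts: a real defeater that is rarely the main basis of any particular action, even though, as Levy stresses, its costs can still accumulate into serious injustice.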

9.4.4 Non-maleficence

Situational effects on acts of harm and fairness also tend to exhibit the trade-off between being substantial and arbitrary. Implicit biases, for example, also affect decisions regarding justice and fairness, but we've already seen that they are commonly insubstantial. What about framing effects? Surely they are all ethically arbitrary factors fit for being defeaters. And they influence many moral decisions, including those for which harm and fairness are relevant, such as whom to save when a disease or a trolley threatens a large group. The discussion of moral judgment in Chapter 4, however, should make us wary of jumping to conclusions, for at least two reasons.

First, like implicit biases, arbitrary framing effects often aren't substantial. Recall that a recent meta-analysis suggests that moral decisions are hardly influenced by framing effects (Demaree-Cotton 2016; see Chapter 4, §4.5). Even only focusing on studies that report an effect (rather than a null result), the vast majority of moral intuitions (about 80%) remain unchanged when subject to differences in mere framing. So, while mere order plausibly is arbitrary in many, if not all, contexts, it is unlikely to be a main motivational basis for many morally relevant behaviors. Of course, some framing effects appear to exert powerful influences on decisions. Tversky and Kahneman (1981) reported a full flip in which policy the vast majority of participants would choose simply due to framing the decisions as either involving losses or gains. However, one example does not suffice for a trend. Again, a meta-analysis of 230 similar effects suggests that Tversky and Kahneman's is an outlier (Kühberger 1998).

Second, framing effects that are substantial often don't constitute defeaters. Recall that for order effects (e.g., Liao et al. 2012; Schwitzgebel & Cushman 2012) it's sometimes reasonable for a moral decision to be different depending on the order in which information is presented, as one rationally updates on different sets of evidence (Horne & Livengood 2017). Or consider the frames used in Tversky and Kahneman's disease problem. The two sets of policies are equal in terms of expected outcomes, and it's surprising how mere framing influenced which policy in each set was perceived as yielding losses. But it's not ethically arbitrary to have one's decision influenced by whether a choice involves a loss or a gain. That, after all, is a fundamental feature of ordinary human judgment and decision-making (Kahneman 2011).
We certainly shouldn't be overly sensitive to the prospect of losses vs. gains, but the loss/gain distinction isn't obviously irrelevant. At any rate, if our greater aversion to losses than to gains is unwarranted, the problem isn't so much improper motivation but rather improper moral beliefs or values.
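For readers who want the equivalence spelled out, here is a quick check using the standard numbers from Tversky and Kahneman's scenario (600 people at risk, a sure option and a gamble in each frame); the figures come from the original problem, not from anything argued here.

```python
# The disease problem: 600 lives at stake, described in two frames.
TOTAL = 600

# Gain frame: Program A saves 200 for sure; Program B saves all 600
# with probability 1/3 and saves no one with probability 2/3.
saved_A = 200
saved_B = (1/3) * 600 + (2/3) * 0

# Loss frame: Program C lets 400 die for sure; Program D lets no one
# die with probability 1/3 and lets all 600 die with probability 2/3.
dead_C = 400
dead_D = (1/3) * 0 + (2/3) * 600

# The sure options describe one and the same outcome, as do the gambles.
assert saved_A == TOTAL - dead_C
assert abs(saved_B - (TOTAL - dead_D)) < 1e-9
print(saved_A, saved_B, dead_C, dead_D)  # 200 200.0 400 400.0
```

Nothing about the outcomes differs between the frames; what shifts is only whether the same result is described as lives saved or lives lost. That is why the preference reversal is treated as a framing effect, even though, as noted above, sensitivity to losses versus gains is not itself an ethically arbitrary disposition.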

9.4.5 Honesty/Fairness

We've already covered numerous studies on honesty and related duties of fidelity from Batson, Ariely, and others (above and in Chapter 7), and many of the effects involve situational factors. Fitting with our recurring theme, however, only some of these influences are defeaters while others are insubstantial.

On the one hand, substantial situational influences on honest and fair action often aren't defeaters because they tap into relevant reasons. In the moral hypocrisy experiments, situational factors could prod more people to employ the fair procedure of flipping a coin and actually following it. However, these effects were driven by moral reminders—such as the presence of a mirror or the salience of fairness—that draw one's attention to morally relevant considerations. It may be disappointing that our attention isn't always focused on doing what's right, but our sensitivity to differences in moral focus suggests morally relevant influences on our decisions. In the studies of dishonesty, cheating can be enhanced or inhibited by the signing of honor codes or pledges, exchanging tokens instead of cash, and so forth. Much of the cheating in these studies is also moderated by moral reminders. Participants cheat less, for instance, when they are first asked to recall as many of the Ten Commandments as they can. However, such primes appear to protect against dishonesty because they draw one's attention to morally relevant features of the situation and activate one's "moral identity" (Aquino et al. 2009). Related studies suggest that people are more inclined to honestly report their earnings from the experiment if they are asked to sign a statement of honesty at the top of the report form rather than at the bottom (Shu, Mazar, et al. 2012). The signature at the bottom apparently fails to prevent the rationalization of dishonesty that one engages in while completing the form—what's done is done and my signature isn't enough to undo it. So, when a participant does what's right (reports honestly), this appears due to another moral reminder, but here it forestalls her natural tendency to rationalize promoting self-interest, so the right motivating reasons remain intact.

On the other hand, situational influences on cheating that are morally irrelevant often make small differences. For example, in the studies of dishonesty, egoism no doubt plays some role in rationalizing cheating, which is after all in one's self-interest. As we've seen, however, Ariely and his collaborators have consistently demonstrated that self-interested rationalizations only slightly affect levels of cheating. While it's impressive that some manipulations entirely eliminate cheating, the levels are low to begin with (e.g., claiming to have solved only about 10% more of the math problems than one actually did). Of course, some experiments record more substantial increases in cheating when participants have the opportunity to do so. However, since our concern is with general trends in the data, we must look to more than one study. And so far it seems that illicit influences on dishonesty are generally small and just one factor among many other reasons for action that are morally appropriate (e.g., moral reminders). A related limitation of situational influences on dishonesty is their instability.
For example, watching eyes on a wall really do seem to affect whether people in the break room donate their fair share to the coffee or tea fund, but the effect on free riding appears not only to be weak but to diminish over time (as noted by Batson 2016: 213). And this trend seems to be
confirmed by a meta-analysis of dozens of watching eyes studies (Sparks & Barclay 2013). So the fear of punishment that eyes stimulate may be a minor and fickle deterrent to cheating (even if a cheery gimmick for the classroom). In fact, this is a general worry about influences uncovered in the situationist literature: they may either diminish naturally over time or after people become aware of them, explicitly or implicitly (cf. Nahmias 2007). The ease with which such influences lose their effect on choice further suggests that they aren’t frequently a main basis for action.

9.5 Containing the Threats

We've only considered a sample of situationist studies, but they are some of the more popular ones to draw on, and many others we've omitted even less clearly involve ethically arbitrary influences or even morally relevant behaviors. And we already have enough to extract a general trend consistent with the Defeater's Dilemma and the analysis of egoism. Indeed, the science suggests that both egoism and situational forces exert an influence on a wide range of ethical behavior but not that genuine defeaters are commonly the main basis of ethical behavior (see Table 9.2). The result favors a limited critique of our motives, not sweeping skepticism.

Table 9.2: Example Factors Subject to the Defeater's Dilemma

Category of Behavior | Substantial but Non-defeater | Defeater but Insubstantial
Helping/Beneficence  | mood (for discretionary duties), group, haste | implicit bias (e.g., race)
Non-maleficence      | some frames (e.g., gain/loss) | some frames (e.g., some order effects), implicit bias
Honesty/Fairness     | moral integrity (e.g., rationalization), moral reminders (e.g., honor code) | self-interested gains (e.g., money), fear of punishment (e.g., eyes)

Why would the Defeater's Dilemma arise? A plausible explanation is similar to the source of the Debunker's Dilemma: the heterogeneity of morally relevant actions and their influences. When examining a wide range of behavior (e.g., helping, dishonesty), we're unlikely to identify a single kind of influence that is morally problematic across most contexts. Experiments do often seem to uncover arbitrary factors, but matters are more complex once one digs deeper into the phenomenon. One complication is that what seems like a single kind of influence (e.g., being in an unresponsive group) turns out to be heterogeneous (diffusion of responsibility, social influence, audience inhibition), only some of which may be defeaters. Another related complication is that what seems like a universally arbitrary influence often isn't, as when moral reminders draw one's attention to relevant information and activate a concern to be moral (moral integrity).

At this point, a pessimist might be tempted by a kind of "whack-a-mole" rebuttal. There are just so many studies in the situationist literature: if you find a problem with one, we can find another that suggests an arbitrary influence. However, the previous discussion, while incomplete, is meant to focus on general trends in the scientific literature. We have not examined one-off studies here and there but rather identified a pattern that poses a problem for drawing sweeping conclusions on the basis of the relevant studies. While it's logically possible to avoid the
Defeater’s Dilemma, it looks empirically unlikely. Moreover, while the situationist literature contains studies that don’t involve morally relevant behavior, our inquiry limits us to that subset. For example, Doris is fond of the apparent effects one’s own name can have on one’s choice of profession or home state (e.g., people named “Dennis” are slightly more likely to become dentists). Or recall the experiment Nisbett and Wilson (1977) conducted in which people confabulated explanations for why they preferred the stockings on the far right, even though the items were in fact all identical. Such studies serve as striking examples of how funny factors influence some decisions. However, since they don’t clearly involve morally relevant decisions, one cannot appeal to them to attack moral motivation. It may be that, when the stakes are low and the situation is ambiguous and the choice is more personal than ethical, then truly arbitrary factors can gain a more significant grip. Another objection appeals to a kind of slippery slope argument. I’ve granted that some experiments do demonstrate a substantial effect of an illicit influence on morally relevant behavior. Doris seems to think we can still infer sweeping skepticism from this modest set. Despite making strong claims about the high frequency of defeaters, he alleges that the “critical question concerns not how often defeaters should be thought to obtain, but how their presence can be ruled out” (68). After all, the floodgates are now opened: “there’s a large, odorous, and ill-tempered animal under the awning of agency” and thus, “for all one knows, any decision may be infested by any number of rationally and ethically arbitrary influences” (2015: 64; cf. Rini 2016). Is the burden really on optimists to show that problematic influences aren’t widespread? Doris assumes that optimists can only shift the burden onto pessimists by appeal to little old “common sense” which “hasn’t yet accounted for the [empirical] literature” (68). The Defeater’s Dilemma, however, isn’t based on mere common sense but on a serious examination of the science that suggests the situationist threat can be contained. So, even if the optimist does have the argumentative burden, this chapter provides a way to discharge it (compare the discussion in Chapter 5, §5.2.1).

9.6 Conclusion

Many of those working in moral psychology are familiar with discussions of empirical research on egoism and situationism. However, this chapter has aimed to articulate a core target of this research: acting for the right reasons. Moreover, the chapter provides a general and principled way to defend this aspect of virtuous motivation. The science suggests that morally irrelevant influences are often insubstantial while substantial influences are often morally relevant. Like the Debunker's Dilemma (Chapter 4), the Defeater's Dilemma provides empirical reason to believe that a wide-ranging critique is likely to be circumscribed.

No doubt some of our actions are substantially affected by illicit influences. As the discussion of the capricious philanthropist suggests, however, an act can be virtuously motivated even if there is room for improvement. Our question isn't whether we could be more virtuously motivated but whether our motivations generally lack the status of being virtuous. As with debunking arguments, we're interested in whether the relevant honorific applies or not, rather than whether there are small changes in degree. So, in the end, effect sizes and frequencies matter for wide-ranging pessimism. Of course, effect sizes only provide a statistic of the aggregate, so a small effect of one variable across the group may be a main basis for some individual's behavior. But wide-ranging pessimists need both frequency and potency. Illicit
influences must be substantial and widespread—not substantial for few or insubstantial for many. There’s another similarity between moral motivation and our prior discussion of moral judgment: we haven’t addressed a deeper form of skepticism. Even if one agrees that our actions are typically influenced by what we tend to think are relevant reasons, one might distrust our ability to identify what counts as a good reason for action. But again, the skepticism here goes deeper than morality to all forms of cognition, even those often considered non-moral, such as our assessments of intent, causation, probability, and possibility—all of which are important inputs to moral judgment and action. As before, we’re not taking on this sort of global skepticism. The aim of this book has only been to resist the idea that there are special empirical problems for moral psychology in particular. Indeed, the rationalism of this book is committed to the idea that if we should be skeptical of the reliability of human reasoning generally, then as a result we should be skeptical of moral thought and motivation. Human reasoning and moral psychology are deeply connected. In the next chapter, we’ll take a step back and further examine the picture of our moral minds that has emerged in light of a careful, philosophical examination of the science. We’ll conclude with a cautious optimism and consider some implications of this position for more practical questions about the prospects of moral enhancement.


Conclusion


Ch. 10: Cautious Optimism Word count: 4,518

10.1 Introduction The science of ethics has given rise to many sweeping forms of pessimism. It is said that we form our moral beliefs primarily based on mere emotions divorced from reasoning; or on automatic emotional heuristics that frequently go awry; or on evolutionary influences that would only at best be coincidentally connected to moral facts. Even if we could know right from wrong, it’s said that we’re largely motivated by the wrong reasons. While it may seem that we’re often motivated by our moral beliefs, the science allegedly reveals that we’re primarily driven by various stable forms of self-interest, such as moral hypocrisy, as well as more transient factors, such as framing effects and implicit biases. The previous chapters, however, suggest that the science ultimately warrants a more optimistic picture of our moral minds in which reason plays a fundamental role—optimistic rationalism. Empirical evidence does force us to reckon with the fact that unconscious biases are more widespread than we typically expect and that they can lead even the best of us to poor judgment and rationalizing self-interested choices. The scientific challenges to moral judgment and motivation should be taken seriously but they are unfit for a wide-ranging critique. Our optimism should absolutely be tempered by what we’re learning from the empirical literature. Cautious optimism, however, is optimism nonetheless. The main claims of this book have already been explicitly stated. What’s less clear are the book’s main maneuvers, so to speak. In this concluding chapter, there is some recapitulation, but the goal is to draw out some of the more implicit lessons already meant to be contained in the preceding chapters. We’ll also briefly consider how cautious optimism suggests we can enhance moral virtue.

10.2 Lessons

10.2.1 Limits of the Reason/Emotion Dichotomy

We've seen that the divide between reason and emotion is a fuzzy one at best (Chapters 2 and 3). Reasoning can be rapid and unconscious while emotional processing can be drawn out and reflective. There is a growing scientific consensus that the reason/emotion dichotomy, intuitive as it may be, is either spurious or fruitless. Ultimately, this means that both reasoning and emotion play important roles in moral psychology. Moral deliberation is extended in time, sometimes over weeks or months, often with a mix of various types of processes that can appear both inferential and emotional.

However, this result fits best with the rationalist, not sentimentalist, tradition. In moral cognition, there is good empirical reason to place great weight on the cognitive aspects of emotion that can facilitate inference and related belief-forming processes. Gut feelings and
twinges of affect are required for moral cognition only insofar as they are required for reasoning and inference generally. We have not found any reason to embrace the distinctively sentimentalist idea that mere feelings, apart from their connection to inference, are essential ingredients in distinctively moral judgment. Moral judgment is just like other forms of cognition except that it involves moral matters. When it comes to moral behavior, we’ve seen that our reasoned moral beliefs frequently drive moral motivation. And we needn’t deny that reasoning can be the ultimate source of motivation. While this dispute might not be fully adjudicated by the scientific evidence to date, we’ve seen that the empirical literature does not force us to treat reason as a slave to unreasoned passions.

10.2.2 Reason Corrupted People have a tendency to admonish those who don’t remain cool, calm, and collected, and those who don’t consciously weigh the pros and cons. Contemporary sentimentalists are quite right to balk at this antiquated understanding of the moral life. When our mental lives go well, often we are deploying unreflective and automatic processes. Moreover, we now know that reflective deliberation can be easily derailed by cognitive biases, motivated reasoning, and self-deceptive rationalization. Sometimes, when slow and reflective processes facilitate moral depravity, the fault may ultimately lie with the cognitive biases and rationalizations that derail conscious deliberation. However, we should be wary of attributing our moral successes and failures to emotions. When we look carefully, good and bad reasoning does much of the explanatory work. Many of one’s moral intuitions and motivations have been unconsciously shaped over time by learning from one’s personal experience and the experience of one’s distant ancestors whose moral heuristics have been handed down by genetic and cultural transmission. When automatic and unreflective processes facilitate virtue, the credit should often go to unconscious learning and inference more than non-cognitive and unreasoned emotional responses.

10.2.3 Reason as Ruler, Affect as Advisor We should thus recast Haidt’s analogy of the elephant and the rider (from Chapter 3, §3.4.1). The emotional elephant is indeed powerful and is required for reaching one’s destination, but it’s still the slave if anything is, for it must seduce the reasoning rider with what at least appear to be reasons. The commonsense picture of having a divided mind is not so far off, but perhaps a better analogy in light of the science is to a ruler (reason) and her trusted advisor (emotion/passion). An effective advisor not only makes proposals to the ruler but persuades her through either sound reasoning or cleverly disguised casuistry. Sometimes this involves abruptly and dramatically distracting the ruler or forcing her to only see the evidence that promotes the advisor’s ends. However, while a clever advisor may sometimes appear to be the puppeteer, the exercise of such control works through the reasoning ruler.

10.2.4 Statistical Details Matter

We should be attentive not just to effects on moral judgment and motivation uncovered by experiments but also to the size and substance of such effects. In general, we should avoid
concluding that small effects have great moral import, given the following dilemma implicit throughout the book. Either small effects are ethically significant or they aren’t. If they are, then it cuts both ways: we can easily stack up small appropriate influences against small inappropriate influences. Yet wide-ranging pessimists can’t take comfort in morally appropriate influences equally explaining some variance in moral psychology. On the other hand, if small influences don’t warrant sweeping ethical conclusions, then we likewise have little support for pessimism.
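A toy simulation can make the aggregate/individual distinction from Chapter 9 vivid. All of the numbers below are invented for illustration: each simulated agent helps when the reasons she recognizes carry enough weight, and a small situational nudge is then added for everyone.

```python
import random

random.seed(0)

# Hypothetical model, not data: an agent helps when the weight of the
# reasons she recognizes exceeds a fixed threshold. A small situational
# nudge (say, a pleasant mood) is then added for every agent.
N = 10_000
THRESHOLD = 0.5
NUDGE = 0.05  # invented value for a small situational influence

reasons = [random.random() for _ in range(N)]
helped_without = [r > THRESHOLD for r in reasons]
helped_with = [r + NUDGE > THRESHOLD for r in reasons]

rate_without = sum(helped_without) / N
rate_with = sum(helped_with) / N
flipped = sum(1 for a, b in zip(helped_without, helped_with) if a != b)

# Aggregate effect: the helping rate rises by only a few percentage points.
print(f"helping rate: {rate_without:.1%} -> {rate_with:.1%}")
# For the flipped agents the nudge was decisive; for everyone else the
# main basis of action remains the reasons themselves.
print(f"choices flipped by the nudge: {flipped} of {N}")
```

In this caricature the nudge is the main basis of action for roughly one agent in twenty and a negligible factor for the rest. That is why a small aggregate effect cannot, by itself, show that illicit influences are both widespread and potent, which is what wide-ranging pessimism requires.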

10.2.5 The Power of Parity When examining pessimism about our moral minds, it is useful to draw comparisons with other domains of judgment and decision-making. Moral judgment may be prone to bias but the threat is much less significant when we realize it equally plagues non-moral judgment. If moral reasoning is only as problematic as reasoning about mathematics, geography, or the minds of others, then the situation is not so dire; ethics is a “companion in innocence” (and guilt, so far as it goes). Hence cautious optimism: just like all human cognition, we should recognize the limits of moral reasoning, but that doesn’t warrant sweeping pessimism about only the moral domain. Of course, this won’t satisfy skeptics who contend that all cognition, moral and nonmoral, is bunk. As I’ve said, my aim in this book has not been to take on such deep skepticism. I only hope to undermine the claim that the empirical literature reveals a special problem with moral psychology in particular.

10.2.6 Table Turning

Often what seems like a cause for pessimism is actually cause for optimism. For example, the focus on temptation when discussing moral motivation may seem an odd source of support for a rationalist approach to moral psychology. Since rationalization often corrupts one's judgment so as to serve self-interest, it might seem that this concedes that emotions play an integral role in moral psychology. However, this only affords emotions and passionate motives the power to affect moral judgment indirectly by influencing reasoning. Emotions and desires can achieve this by simply focusing one's attention on certain pieces of evidence. For example, one's lust for another person may motivate infidelity by focusing intently on the thought that one's significant other won't be hurt or that the thrill is worth the costs. Without a direct impact on moral judgment unmediated by reasoning, feelings retain their subsidiary role. In the end, self-deceptive rationalization may seem to be driven by blind emotion and self-interest, but in fact it reveals our regard for reason.

Similarly, our susceptibility to illicit influences provides hope for moral enhancement that helps us to overcome pernicious cognitive and behavioral biases. Rather than uncovering a devastating challenge to ordinary moral thought and action, we see ways we can improve. The rest of this chapter briefly explores such enhancement in light of the rationalist conception of moral psychology developed throughout the book.

10.3 Enhancing Moral Motivation

Much of our inquiry has presumed that we often do what's right and have accurate moral beliefs. The worry instead has been that in ethics we seldom act or believe for the right reasons, which
often precludes attributions of “virtue” and “knowledge.” Perhaps we should be more concerned with people who behave badly or are just morally ignorant. These problems lead to a great deal of immorality, which seems to crop up everywhere one looks. There is yet another mass shooting, sex scandal, terrorist attack, negligent corporation, corrupt politician, ungodly priest, plagiarizing student, or just another jerk cutting you off on the road. Many of the empirical threats we’ve considered can lead to flat out immorality. In many instances, one is derailed by self-interest or the interests of one’s group, as when a cardinal covers up a scandal for the church. What’s also striking, as situationists emphasize, is how frequently otherwise decent people err. One sometimes sees how easily one could have been the focus of the news cycle. In the right circumstances, you too could have bundled subprime mortgages, made a racially insensitive comment, embezzled from that thankless employer, or even killed an innocent person. Moral convictions may motivate us to some degree, but some behavior is significantly explained by morally irrelevant features of the situation—e.g., mood, implicit biases, or just bad luck. Fortunately, often what seem like threats to virtue point to ways to enhance it. So our examination of the reasons for one’s moral belief or action has implications for avoiding immorality or enhancing virtuous action.

10.3.1 The Need for Nudges

Improving virtue is nothing short of a task in human enhancement. We've seen that getting ourselves to behave better is difficult for many reasons. Even if we do form the right moral beliefs, we may simply rationalize bad behavior. Since moral deliberation is fundamentally a rational enterprise, somehow we need to engage people's reasoning capacities yet limit the risk that they'll become corrupted. This suggests that one of the most effective routes to moral enhancement will involve particular kinds of moral technology (Alfano 2013), such as so-called "nudges" (Thaler & Sunstein 2008), which manipulate the context of choice so as to facilitate better choices. One of the most famous examples is that many more people will become organ donors if the default option is to be one and a decision requires opting out. Similarly, we've seen that reminding people of moral norms can reduce cheating and putting them in a good mood can make them more charitable. Such moral technology structures our environments to nudge us toward doing what we ought.

Some worry that deliberately influencing moral decision-making is unethically manipulative, but that is less likely for moral technologies aimed at facilitating good reasoning. What's particularly troubling is when interventions bypass our rational capacities (Levy 2007: ch. 2). Unlike subliminal messages or commercial advertising, the exploitation of rational learning mechanisms isn't manipulative, even if the individual reasoner is unaware of the influence (Kumar 2016b). In this way, some interventions can improve moral reasoning in a way that's akin to feeding one's children healthy food for optimal brain development.

Nudges typically involve situational effects and these may often be small and diminish over time (cf. Batson 2016: 213ff). Some nudges, however, seem more resistant to the erosion of efficacy. More successful nudges, such as opt-out policies for organ donation, will counteract relatively weak preferences for self-interest and make a better choice the socially expected norm. Moreover, since problematic influences sometimes diminish or disappear once we're aware of them, we may need to educate people about such influences.

One general approach would be to increase moral integrity. There is some evidence that empathy can produce moral integrity in addition to altruism (Batson 2016: 117ff). Perhaps these
results should be unsurprising given the commonsense idea, frequently espoused by philosophers, that virtue requires putting oneself in another’s shoes. Another way to boost moral integrity is to make people feel “elevated” by witnessing good deeds and moral exemplars (see Miller 2013: ch. 4; Han et al. 2017). Empathy and elevation may ultimately be emotional inductions, which can seem in conflict with rationalism. However, in these cases the feelings are not merely incidental to the issue at hand but integrated in a way that facilitates moral learning and reasoning. So the relevant moral technology for motivation may well involve manipulation of morally relevant emotions. It might seem that we are bound to value being moral only instrumentally, given that being moral doesn’t come naturally without a struggle. Young children may display some altruistic motivation surprisingly early in development (Chapter 6), but adults must take great pains to teach virtue through reward and punishment, especially when doing what’s right conflicts with self-interest (Batson 2016: ch. 5). However, a more optimistic account of moral development is available, even if its beginnings are awash with egoism. Early in life we may desire to be moral only, or primarily, to gain praise or to avoid punishment, but this doesn’t preclude eventually valuing morality for its own sake. Some studies suggest that children do develop a sense of morality in infancy (e.g., Hamlin et al. 2007; Bloom 2013); the period of maturation is just excruciatingly long. The problem is not so much that we struggle to inculcate a desire to be moral from an amoral slate. Rather, morality conflicts so often with strong self-interested concerns that are difficult to override. Morality requires children to do so many things they don’t want to do, such as share toys, refrain from taking someone’s candy, admit fault, be quiet while the baby is sleeping, and avoid hitting. The struggle is real for much of human development, for even teens are notoriously impulsive and risk-seeking creatures. So the battle to get children to behave provides just as much support for moral integrity being overpowered by egoism as it does for moral hypocrisy.

10.3.2 The Limits of Moral Integrity

It may seem that we can be much more virtuous if we can just get people to possess moral integrity, to care about morality for its own sake rather than as a means to promoting self-interest or the interests of one's group. Batson takes precisely this position: "If we want people to live up to the moral standards, principles, and ideals they espouse, the best treatment would seem to be one that leads them to fully internalize their principles, integrating them into their core self as intrinsic values" (2016: 224). While Batson recognizes the limits of moral integrity, he remains focused on the "orchestration" of intrinsic motives like moral integrity and altruism in order to treat our "moral maladies."

However, it's certainly not enough to be motivated by one's moral convictions. After the historic U.S. Supreme Court ruling in 2015 that granted same-sex couples the right to marry (Obergefell v. Hodges), a county clerk in Kentucky launched herself into the limelight by refusing to do her job. Kim Davis was tasked with processing marriage licenses, but refused due to her staunch opposition to same-sex marriage. Liberals of course criticized her for failing to uphold the law, while conservatives praised her as a brave culture warrior. Davis was motivated by moral and religious principles, and ones that many other Americans apparently accepted as well. Shall we consider her motivation to be virtuous? As Neil Levy (2015b) put it, is Kim Davis a "virtuous homophobe"?
No. This case reveals that virtue requires accurate moral beliefs. Even Kant conceived of acting from duty as acting from genuine duties, not false ones. There is something admirable about those who stick to their principles, even if those principles are wrong-headed. However, virtue requires acting not just from one’s moral beliefs, but from moral knowledge. So it’s not enough to act from moral integrity. The situation is even more dire given that people motivated by moral integrity can easily rationalize self-interest. One of the themes of this book is that moral judgment and motivation are inextricably linked when it comes to virtuous and vicious actions. Moral beliefs exert a great influence on action (Chapter 7) and the rejection of Humeanism (Chapter 8) indicates that we can get people to adopt better motivations by improving their moral beliefs. So it will be useful to also identify enhancements to moral cognition.

10.4 Enhancing Moral Cognition

Understanding the nature of moral cognition informs how it can break down but also how it can function well. We saw that moral judgment involves plenty of implicit and explicit reasoning (Chapter 3), so it inherits all of the biases that plague cognition in other domains. We've acknowledged some relevant empirical threats in this regard but emphasized their limits (Chapters 4 and 5). There is further reason for optimism since the threats point toward ways we can do better.

10.4.1 Cognitive Biases

Empirical threats to moral judgment typically point to illicit influences. We began the discussion of debunking arguments (Chapter 4) with classical appeals to wishful thinking and other cognitive biases. There is every reason to believe such biases affect some ordinary moral thinking. Even in peer disagreement (Chapter 5), part of the problem is a failure of intellectual humility or excessive confidence that one is in a privileged epistemic position. The common barriers to moral knowledge are thus patterns of flawed belief formation, not a failure to have the right sentiments. It would be a mistake, of course, to assume that we can best make people more virtuous by "directing some piece of reasoning" at them, as John McDowell would put it (1995/1998: 101). Again, we may need to employ "nudges" or similar devices that shape more implicit learning mechanisms and unconscious reasoning.

Sometimes the flawed beliefs are distinctively moral or concern foundational moral values, but more often they will be non-moral beliefs that aim to extend basic moral knowledge. We already saw in Chapter 5 that many moral disagreements turn on non-moral disagreements, since one's own moral beliefs frequently rest on non-moral facts concerning complicated topics (e.g., universal health care, international trade deals, immigration policy, cloning, climate change). Empirical evidence can bring to light how irrational the relevant non-moral beliefs are too, particularly because they support cherished moral values one is keen to protect with rationalization or motivated reasoning. The problem, then, is not always or primarily with foundational values, since extending moral knowledge requires many sound non-moral beliefs about a wide range of topics. Even if we assume that cognitive biases are bugs of our general reasoning capacities, science suggests ways to stamp them out.
For example, we should be able to improve moral knowledge by enhancing everyone's intelligence through quality education, improved diets, and perhaps even pharmaceuticals (Persson & Savulescu 2012). Sometimes improvements can be made to individuals themselves, but others will require modifying the environment. Confirmation bias, for example, is notoriously recalcitrant, but we can counteract it by surrounding ourselves with intelligent peers who provide a source of healthy critique (Mercier & Sperber 2011; Graham et al. 2013). Indeed, a growing body of evidence suggests that both adults and children across cultures can mitigate the effects of cognitive biases by reasoning cooperatively in groups (see, e.g., Mercier et al. 2016). A promising method of improving moral knowledge, then, is to improve the social scaffolding in which we're embedded so that we can enhance collective reasoning, trust, and deference (Levy 2007: 308-16).

Of course, there is no magic elixir that will cure all cognitive biases. Some modifications may only mitigate their pernicious effects, not eliminate them. Recall the study indicating that more scientifically savvy thinkers are simply better at rationalizing the position on climate change that fits their politics (Kahan et al. 2012). However, while motivated reasoning is indeed a powerful force, experimental evidence does show that we can protect against its perils in various ways, such as strengthening the motivation to be accurate (Kunda 1990: 481-2). Moreover, these studies focus on people who already have firmly held moral views, which are embedded in an intricate and well-developed web of beliefs. We can't expect substantial revision immediately, or perhaps ever, in those already committed to a particular moral belief. The problem arises for non-moral issues as well. Kahneman notes that, "even compelling causal statistics will not change long-held beliefs or beliefs rooted in personal experience" (2011: 174). More provocatively, Max Planck (1950: 97) famously cautioned:

An important scientific innovation rarely makes its way by gradually winning over and converting its opponents…. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.

Sometimes you really can't teach an old dog new tricks—moral or non-moral ones. All the more reason to improve moral reasoning early in development. At any rate, while irrationality is often painfully resistant to treatment, the point is that cognitive biases are an appropriate target for efforts to improve moral knowledge; just don't expect such interventions to be very effective after moral development is complete.

10.4.2 Educated Emotions

Given the prominence of sentimentalism in recent years, many philosophers and scientists have claimed that improved emotions will be our saving grace. The idea is even popular in some areas of public discourse. Both Barack Obama and Mark Zuckerberg have attributed our moral ills to a deficit in empathy (see Coplan 2011), which is tied to compassion and other fellow feelings. Would cognitive enhancement and moral technology require changes in emotional response? Absolutely, but primarily because emotions have or affect cognitive elements that can facilitate good reasoning, particularly during moral learning (cf. Railton 2017).

Emotions can be unreliable when applied without regard for good reasoning. Empathy, for example, can generate compassion for those in need, but sometimes it leads people to care more about the adorable young white girl stuck in a well than the numerous children starving in Africa (Batson 2011: ch. 8; Bloom 2016).
Even among those far away, experimental evidence suggests that we feel more compassion for particular identifiable victims than for large groups of them (e.g., Jenni & Loewenstein 1997). Disgust can also distract us from morally relevant information. A conservative Christian's attention might be focused so intently on what he regards as a disgusting lifestyle that he misses the extent of the suffering caused by the AIDS epidemic. However, in some contexts, such as sexual assault, disgust may draw one's attention to important features of the violation that do make it particularly heinous.

Feelings alone may be simply too blunt an instrument to do the precise work required for moral judgment about complex and nuanced disputes. Indiscriminately amplifying emotions, such as anger or compassion, among the masses is likely to merely entrench existing moral beliefs (on disgust, see May 2018). Most conservatives would become more furious with immigrants and empathize more strongly with the unborn fetus whose life was taken away in an abortion clinic. Liberals would instead become even more outraged at income inequality and have greater compassion for those suffering racial or sexual discrimination. Regardless of who is right, merely amplifying emotions won't change someone's moral beliefs without guidance from relevant cognition (see Chapter 2, §2.2).

Contrast the emotional intervention with indiscriminately increasing education, focus, attention, working memory, and knowledge of science and policy. While cleverness can sometimes just lead to rationalizing bad moral principles, our powers of intellect are necessary for reasoning through nuanced moral problems. Sophisticated liberals and conservatives are still bound to disagree with one another, but they're likely to have more well-founded moral beliefs than those ignorant of the key details or more prone to cognitive errors. Moreover, enhancing inferential capacities is likely to work even better if implemented early in development. We've seen how some profound rational deficits during moral development can lead to psychopathy (see Chapter 2, §2.4).

Emotions can be educated, of course. Based on your considered judgment that you ought to donate 10% of your income to charities, you might deliberately inculcate more compassion for the less fortunate in other countries. Or, based on the judgment that justice should be served to violent offenders, you might take measures to feel less angry about your son being hauled off to prison. Emotions do, after all, affect attention and motivation, which in turn affect reasoning. So one way to protect moral knowledge is to improve one's affective responses. Importantly, though, such improvements must be guided by good data and sound inferential capacities.

10.5 Conclusion

Carefully scrutinizing our current science of morality yields an empirical defense of reason's power over the passions and an aggressive attack on sentimentalism—one of the most popular theories in moral psychology. We have also found a general rebuttal to an entire class of sweeping debunking arguments, which ultimately illuminates the sources of moral ignorance and disagreements. Moreover, reasoned moral cognition needn't be confined to merely telling us how to satisfy our desires, self-interested or otherwise. Genuine altruism is within the human repertoire, as is the ability to do what's right for the right reasons. This yields a more optimistic picture of our moral minds than many philosophers and scientists have admitted.

Of course, to cope with our twenty-first century understanding of the human mind, we must recognize its limits. We aren't especially reflective thinkers who consciously deliberate their way through life's convoluted social dilemmas. Unconscious processes do drive us more than we care to admit. The cautious optimism defended in this book is not the claim that most people possess extensive moral knowledge and are by and large virtuous.
Rather, a scientific understanding of our moral minds provides reason to embrace the possibility of moral knowledge and virtue without requiring radical revision in how we ordinarily conceive of ethics. The unconscious processes that dominate ordinary moral deliberation aren't fundamentally non-cognitive, irrational, or even arational. Our minds evolved to navigate interpersonal problems, with heuristics that do not always lead us astray. We are, moreover, capable of reflection and guiding our behavior by a conception of what's justifiable to others. As our social world becomes increasingly complicated, our automatic heuristics will have a great deal of learning and adapting to do. But we needn't abandon the basic non-consequentialist commitments of our moral minds. In fact, given their widespread acceptance, such commitments can serve as a common currency to reconcile conflicts among the multitude of groups and traditions around the globe. We'll witness more virtue not by rejecting our basic mode of moral thought but by deploying sound inferential abilities in an environment that allows them to flourish throughout the delicate process of moral learning that begins in childhood and continues throughout the ethical life.

pg. 184 of 206

Regard for Reason | J. May

References Word count: 8,407 Aboodi, R. 2015. “The Wrong Time to Aim at What’s Right: When is De Dicto Moral Motivation Less Virtuous?” Proceedings of the Aristotelian Society 115 (3pt3): 307-314. Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A. 2012. “Can psychopathic offenders discern moral wrongs? A new look at the moral/conventional distinction.” Journal of Abnormal Psychology 121(2): 484–497. Alfano, M. 2013. Character as Moral Fiction. Cambridge University Press. Aquino, K., Freeman, D., Reed, A., Lim, V. K. G., & Felps, W. 2009. “Testing a Social-Cognitive Model of Moral Behavior.” Journal of Personality and Social Psychology 97(1): 123–141. Aquino, K., & Reed, A. 2002. “The Self-Importance of Moral Identity.” Journal of Personality and Social Psychology 83(6): 1423–1440. Aquino, K., Reed, A., Thau, S., & Freeman, D. 2007. “A Grotesque and Dark Beauty: How Moral Identity and Mechanisms of Moral Disengagement Influence Cognitive and Emotional Reactions to War.” Journal of Experimental Social Psychology, 43(3): 385–392. Ariely, D. 2012. The Honest Truth About Dishonesty. HarperCollins. Ariely, D. & Loewenstein, G. 2006. “The Heat of the Moment: The Effect of Sexual Arousal on Sexual Decision Making.” Journal of Behavioral Decision Making 19: 87–98. Arpaly, N. 2003. Unprincipled Virtue. Oxford University Press. Arpaly, N., & Schroeder, T. (2014). In Praise of Desire. Oxford University Press. Badhwar, N. 1993. “Altruism versus Self-Interest: Sometimes a False Dichotomy.” Social Philosophy and Policy 10 (1): 90-117. Bandura, A. 1999. “Moral Disengagement in the Perpetration of Inhumanities.” Personality and Social Psychology Review 3(3): 193–209. Barak-Corren, N., Tsay, C.-J., Cushman, F. A., & Bazerman, M. H. Forthcoming. “If You’re Going to Do Wrong, At Least Do It Right: Considering Two Moral Dilemmas at the Same Time Promotes Moral Consistency.” Management Science. DOI: 10.1287/mnsc.2016.2659 Baron, M. 2002. “Acting from Duty.” Groundwork for the Metaphysics of Morals. Ed. and trans. by A. Wood. Yale University Press, pp. 92-110. Baron, R. A. 1997. “The Sweet Smell of... Helping: Effects of Pleasant Ambient Fragrance on Prosocial Behavior in Shopping Malls.” Personality and Social Psychology Bulletin 23(5): 498–503. Barrett, H. C., Bolyanatz, A. et al. 2016. “Small-scale societies exhibit fundamental variation in the role of intentions in moral judgment.” Proceedings of the National Academy of Sciences 113 (17): 4688–4693. Barry, M. 2010. “Humean Theories of Motivation.” Oxford Studies in Metaethics Vol. 5, ed. by R. Shafer-Landau, pp. 195-223. Oxford University Press. Bateson, M., Nettle, D., & Roberts, G. 2006. “Cues of being watched enhance cooperation in a real-world setting.” Biology Letters 2(3): 412–414. Batson, C. D. 1991. The Altruism Question: Toward a Social-Psychological Answer. Lawrence Erlbaum Associates. Batson, C. D. 2011. Altruism in Humans. Oxford University Press. Batson, C. D. 2016. What’s Wrong with Morality? Oxford University Press. pg. 185 of 206
Batson, C. D. 2017. “Help Thou My Unbelief: A Reply to May and Aquino.” Moral Psychology, Volume 5: Virtue & Happiness, eds. W. Sinnott-Armstrong & C. B. Miller. MIT Press. Batson, C. D., Duncan, B. D., Ackerman, P., Buckley, T., & Birch, K. 1981. “Is empathic emotion a source of altruistic motivation?” Journal of Personality and Social Psychology 40(2): 290-302. Batson, C. D., Klein, T. R., Highberger, L., & Shaw, L. L. 1995. “Immorality from Empathy-induced Altruism: When Compassion and Justice Conflict.” Journal of Personality and Social Psychology 68(6): 1042–1054. Batson, C. D., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., & Wilson, A. D. 1997. “In a Very Different Voice: Unmasking Moral Hypocrisy.” Journal of Personality and Social Psychology 72(6): 1335–1348. Batson, C. D., Sager, K., Garst, E., Kang, M., Rubchinsky, K., & Dawson, K. 1997. “Is Empathy-Induced Helping Due to Self-Other Merging?” Journal of Personality and Social Psychology 73(3): 495– 509. Batson, C. D. & Shaw, L. L. 1991. “Evidence for Altruism: Toward a Pluralism of Prosocial Motives.” Psychological Inquiry 2(2): 107–122. Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. 1999. “Moral Hypocrisy: Appearing Moral to Oneself Without Being So.” Journal of Personality and Social Psychology 77(3): 525–537. Batson, C. D., Thompson, E. R., & Chen, H. 2002. “Moral Hypocrisy: Addressing Some Alternatives.” Journal of Personality and Social Psychology 83(2): 330–339. Bauman, C. W., McGraw, A. P., Bartels, D. M., & Warren, C. 2014. “Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology.” Social and Personality Psychology Compass 8(9): 536–554. Berker, S. 2009. “The Normative Insignificance of Neuroscience.” Philosophy & Public Affairs 37, pp. 293-329. Berridge, K. C. 2009. “Wanting and Liking: Observations from the Neuroscience and Psychology Laboratory.” Inquiry 52(4): 378–398. Bertrand, M., & Mullainathan, S. 2004. “Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination.” The American Economic Review 94(4): 991–1013. Betancourt, H. 1990. “An Attribution-Empathy Model of Helping Behavior.” Personality and Social Psychology Bulletin 16(3): 573–591. Blackburn, S. 1998. Ruling Passions. Oxford University Press. Blair, R. J. R. 1995. “A Cognitive Developmental Approach to Morality: Investigating the Psychopath.” Cognition 57(1): 1-29. Blair, R. J. R. 1996. “Brief Report: Morality in the Autistic Child.” Journal of Autism and Developmental Disorders 26(5): 571–579. Blair, R. J. R. 2007. “The Amygdala and Ventromedial Prefrontal Cortex in Morality and Psychopathy.” Trends in Cognitive Sciences 11(9): 387-392. Blanken, I., van de Ven, N., & Zeelenberg, M. 2015. “A Meta-Analytic Review of Moral Licensing.” Personality and Social Psychology Bulletin 41(4): 540–558. Bloom, P. 2013. Just Babies: The Origins of Good and Evil. New York: Crown. Bloom, P. 2016. Against Empathy: The Case for Rational Compassion. New York: Ecco. Boghossian, P. 2012. “What is Inference?” Philosophical Studies 169(1): 1–18. Bourget, D., & Chalmers, D. J. 2014. “What Do Philosophers Believe?” Philosophical Studies 170(3): 465-500. Broad, C. D. 1930/2000. Five Types of Ethical Theory. London: Routledge. Brown, R. P., Tamborski, M., Wang, X., Barnes, C. D., et al. 2011. “Moral Credentialing and the Rationalization of Misconduct.” Ethics & Behavior 21(1): 1–12. pg. 186 of 206
Brownstein, M. 2015. “Implicit Bias.” In The Stanford Encyclopedia of Philosophy, Spring 2015 Edition, ed. by E. N. Zalta. . Cameron, C. D., Payne, B. K., & Doris, J. M. 2013. “Morality in High Definition: Emotion Differentiation Calibrates the Influence of Incidental Disgust on Moral Judgments.” Journal of Experimental Social Psychology 49(4): 719–725. Campbell, R. & Kumar, V. 2012. “Moral Reasoning on the Ground.” Ethics 122 (2):273-312. Carbonell, V. 2013. “De Dicto Desires and Morality as Fetish.” Philosophical Studies 163(2): 459-477. Carlson, M., Charlin, V., & Miller, N. 1988. “Positive Mood and Helping Behavior: A Test of Six Hypotheses.” Journal of Personality and Social Psychology 55(2): 211–229. Carlson, M., & Miller, N. 1987. Explanation of the relation between negative mood and helping. Psychological Bulletin 102(1): 91–108. Carpenter, A. 2014. Indian Buddhist Philosophy. New York: Routledge. CBS News Poll 2011. “One in four Americans think Obama was not born in U.S.” Accessed 1 July 2016, . Chapman, H. A. and A. K. Anderson, 2013. ‘Things Rank and Gross in Nature: A Review and Synthesis of Moral Disgust’. Psychological Bulletin 139(2): 300–327. Chapman University Poll 2014. “Study: Americans are as likely to believe in Bigfoot as in the big bang theory.” Accessed 22 July 2016, . Cialdini, Robert B., S. L. Brown, B. P. Lewis, C. Luce, & S. L. Neuberg 1997. “Reinterpreting the Empathy- Altruism Relationship: When One Into One Equals Oneness” Journal of Personality and Social Psychology 73(3): 481-494. Cialdini, R. B. 1991. “Altruism or Egoism? That Is (Still) the Question.” Psychological Inquiry 2: 124126. Ciaramelli, E., Muccioli, M., Ladavas, E., & di Pellegrino, G. 2007. “Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex.” Social Cognitive and Affective Neuroscience 2(2): 84–92. Cima, M., Tonnaer, F., & Hauser, M. D. 2010. “Psychopaths Know Right from Wrong but Don’t Care.” Social Cognitive and Affective Neuroscience 5(1): 59–67. Clarke-Doane, J. 2015. “Justification and Explanation in Mathematics and Morality.” In Oxford Studies in Metaethics Vol. 10, ed. by R. Shafer-Landau. Oxford University Press. Coplan, A. 2011. “Will the Real Empathy Please Stand Up? A Case for a Narrow Conceptualization.” The Southern Journal of Philosophy 49(s1): 40-65. Copp, D. 2001. “Realist-Expressivism: A Neglected Option for Moral Realism.” Social Philosophy and Policy 18(2): 1–43. Craigie, J. 2011. “Thinking and Feeling: Moral Deliberation in a Dual-process Framework.” Philosophical Psychology 24(1): 53–71. Crockett, M. 2013. “Models of Morality.” Trends in Cognitive Sciences 17(8):363-6. Cushman, F. 2008. “Crime and Punishment: Distinguishing the Roles of Causal and Intentional Analyses in Moral Judgment.” Cognition 108:353–80. Cushman, F. 2016. “The Psychological Origins of the Doctrine of Double Effect.” Criminal Law and Philosophy 10(4): 763–776. Cushman, F., Sheketoff, R., Wharton, S., & Carey, S. 2013. “The Development of Intent-based Moral Judgment.” Cognition 127: 6–21. Cushman, F., & Young, L. 2011. “Patterns of Moral Judgment Derive from Nonmoral Psychological Representations.” Cognitive Science 35(6): 1052-1075. Cushman, F., Young, L. and J. Greene 2010. “Multi-System Moral Psychology.” In The Moral Psychology Handbook, ed. J. M. Doris and The Moral Psychology Research Group, pp. 47–71. New York: Oxford University Press. pg. 187 of 206
Cushman, F., Young, L., and M. Hauser, 2006. “The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm.” Psychological Science 17(12): 1082–1089. D’Arms, J. & Jacobson, D. 2014. “Sentimentalism and Scientism.” In Moral Psychology and Human Agency, ed. by J. D’Arms & D. Jacobson. Oxford University Press. Damasio, A. 1994/2005. Descartes’ Error. Penguin Books. (Originally published by Putnam.) Dancy, J. 1993. Moral Reasons. Wiley-Blackwell. Danziger, S., Levav, J., & Avnaim-Pesso, L. 2011. “Extraneous Factors in Judicial Decisions.” Proceedings of the National Academy of Sciences 108(17): 6889-6892. Darley, J. M., & Batson, C. D. 1973. “From Jerusalem to Jericho.” Journal of Personality and Social Psychology 27(1): 100–108. Darwall, S. 1983. Impartial Reason. Ithica: Cornell University Press. Darwin, C. 1871. The Descent of Man, and Selection in Relation to Sex. London: John Murray. Davidson, D. 1963/2001. “Actions, Reasons, and Causes.” Essay 1 in Essays on Actions and Events. New York: Oxford University Press. Davis, W. A. 2005. “The Antecedent Motivation Theory – Discussion.” Philosophical Studies 123(3): 249–260. De Witt Huberts, J. C., Evers, C., & De Ridder, D. T. D. 2014. “‘Because I Am Worth It’: A Theoretical Framework and Empirical Review of a Justification-Based Account of Self-Regulation Failure.” Personality and Social Psychology Review 18(2): 119–138. Decety, J., & Cacioppo, S. 2012. “The Speed of Morality: A High-density Electrical Neuroimaging Study.” Journal of Neurophysiology 108(11): 3068–3072. Decety, J. & Jackson, P. L. 2004. “The Functional Architecture of Human Empathy.” Behavioral and Cognitive Neuroscience Reviews 3(2): 71-100. Deigh, J. 1995. “Empathy and Universalizability.” Ethics 105(4): 743-763. Demaree-Cotton, J. 2016. “Do Framing Effects make Moral Intuitions Unreliable?” Philosophical Psychology 29(1): 1-22. Ditto, P. H., Liu, B., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., & Zinger, J. F. 2017. “At Least Bias is Bipartisan: A Meta-Analytic Comparison of Partisan Bias in Liberals and Conservatives.” Unpublished manuscript. Doris, J. 2015. Talking to Our Selves: Reflection, Ignorance, and Agency. New York: Oxford University Press. Doris, J. M., & Plakias, A. 2008. “How to Argue about Disagreement.” In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 2. MIT Press. Drayson, Z. 2012. “The Uses and Abuses of the Personal/Subpersonal Distinction.” Philosophical Perspectives 26(1): 1–18. Dreier, J. 1997. “Humean Doubts about the Practical Justification of Morality.” In Ethics and Practical Reason, eds. G. Cullity & B. Gaut, pp. 81-100. Oxford: Clarendon Press. Dutton, D. G., & Aron, A. P. 1974. “Some Evidence for Heightened Sexual Attraction Under Conditions of High Anxiety.” Journal of Personality and Social Psychology 30(4): 510-517. Dwyer, S. 2009. “Moral Dumbfounding and the Linguistic Analogy.” Mind and Language 24(3): 274– 296. Effron, D. A., Cameron, J. S., & Monin, B. 2009. “Endorsing Obama Licenses Favoring Whites.” Journal of Experimental Social Psychology 45(3): 590–593. Enoch, D. 2013. “On Analogies, Disanalogies, and Moral Philosophy: A Comment on John Mikhail's Elements of Moral Cognition.” Jerusalem Review of Legal Studies 8(1): 1–25. Eskine, K. J., Kacinik, N. A., J. J. Prinz 2011. “A Bad Taste in the Mouth: Gustatory Disgust Influences Moral Judgment.” Psychological Science 22(3): 295–299. Estes, S. 2012. “The Myth of Self-Correcting Science.” The Atlantic, Accessed 6 Nov. 2015, . pg. 188 of 206
Feinberg, J. 1965/1999. “Psychological Egoism,” In Reason and Responsibility (10th edition), ed. by J. Feinberg & R. Shafer-Landau. Belmont, CA: Wadsworth. Feltz, A. & May, J. 2017. “The Means/Side-Effect Distinction in Moral Cognition: A Meta-Analysis.” Cognition 166: 314–327. Fessler, D. M., Arguello, A. P., Mekdara, J. M., & Macias, R. 2003. “Disgust Sensitivity and Meat Consumption: A Test of an Emotivist Account of Moral Vegetarianism.” Appetite 41(1): 31-41. Finlay, S. 2007. “Responding to Normativity.” In Oxford Studies in Metaethics Vol. 2, ed. R. ShaferLandau, pp. 220-39. Oxford University Press. Flanagan, O. 2017. The Geography of Morals: Varieties of Moral Possibility. Oxford University Press. Fodor, J. 1987. Psychosemantics. Cambridge, MA: MIT Press. Foot, P. 1967. “The Problem of Abortion and the Doctrine of the Double Effect.” Oxford Review, 5, 5–15. Foot, P. 1984. “Killing and Letting Die.” In Abortion: Moral and Legal Perspectives, ed. J. L. Garfield & P. Hennessey, pp. 177–85. Amherst, MA: University of Massachusetts Press. Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. 2017. “A Meta-Analysis of Change in Implicit Bias.” Unpublished manuscript. Foster, C. A., Witcher, B. S., Campbell, W. K., & Green, J. D. 1998. “Arousal and Attraction: Evidence for Automatic and Controlled Processes.” Journal of Personality and Social Psychology 74(1): 86-101. Gallup Poll 2014a. “New Record Highs in Moral Acceptability.” Gallup Politics, Accessed 15 June 2016, . Gallup Poll 2014b. “Evolution, Creationism, Intelligent Design.” Accessed 21 July 2016, . Gazzaniga, M. S. 2014. “Mental Life and Responsibility in Real Time with a Determined Brain.” In Moral Psychology, Vol. .4: Free Will and Moral Responsibility, ed. W. Sinnott-Armstrong. Cambridge, MA: MIT Press. Glenn, A. L., & Raine, A. 2014. Psychopathy: An Introduction to Biological Findings and Their Implications. New York University Press. Glenn, A. L., Schug, R. A., Young, L., & Hauser, M. D. 2009. “Increased DLPFC activity during moral decision-making in psychopathy.” Molecular Psychiatry 14(10): 909–911. Gold, N., Pulford, B. D., & Colman, A. M. 2013. “Your Money or Your Life: Comparing Judgements in Trolley Problems Involving Economic and Emotional Harms, Injury and Death.” Economics and Philosophy 29: 213–33. Gold, N., Colman, A. M., & Pulford, B. D. 2014. “Cultural Differences in Responses to Trolley Problems.” Judgment and Decision Making 9(1): 65-76. Goode, E. & Frosch, D. 2012. “From a Dark Theater, Tales of Protection and Loss.” New York Times Accessed 15 June 2016, . Graham, J., Haidt, J., & Nosek, B. A. 2009. “Liberals and Conservatives Rely on Different Sets of Moral Foundations.” Journal of Personality and Social Psychology 96(5): 1029–1046. Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. 2013. “Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism.” In Advances in Experimental Social Psychology 47: 55–130. Greene, J. 2008. “The Secret Joke of Kant’s Soul.” In Moral Psychology Vol. 3, ed. W. SinnottArmstrong, Cambridge, MA: MIT Press: 35–117. Greene, J. 2013. Moral Tribes. Penguin Press. Greene, J. D. 2014. “Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics.” Ethics 124(4): 695-726. Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., and Cohen, J. D. 2009. 
“Pushing Moral Buttons: The Interaction Between Personal Force and Intention in Moral Judgment.” Cognition 111(3): 364–371.
Greene, J. D., & Paxton, J. M. 2009. “Patterns of Neural Activity Associated with Honest and Dishonest Moral Decisions.” Proceedings of the National Academy of Sciences of the United States of America 106(30): 12506–12511. Greenwald, A. G., Banaji, M. R., & Nosek, B. A. 2015. “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects.” Journal of Personality and Social Psychology 108(4): 553–561. Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., & Banaji, M. R. 2009. “Understanding and Using the Implicit Association Test: III.” Journal of Personality and Social Psychology 97(1): 17–41. Haggard, P. 2005. “Conscious Intention and Motor Cognition.” Trends in Cognitive Sciences 9: 290–295. Haidt, J. 2001. “The Emotional Dog and Its Rational Tail.” Psychological Review 108(4): 814–834. Haidt, J. 2003. “The Moral Emotions.” In Handbook of Affective Sciences, ed. by R. J. Davidson, K. R. Scherer, & H. H. Goldsmith. Oxford University Press. Haidt, J. 2012. The Righteous Mind. New York: Pantheon. Haidt, J. & Bjorklund, F. 2008. “Social Intuitionists Answer Six Questions About Morality.” In Moral Psychology Vol. 2, ed. W. Sinnott-Armstrong, pp. 181-218. MIT Press. Haidt, J., Koller, S. H., and M. G. Dias 1993. “Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?” Journal of Personality and Social Psychology 65(4): 613–628. Hall, L., Johansson, P., & Strandberg, T. 2012. “Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey.” PLoS ONE 7(9): e45457–8. Hamlin, J. K., Wynn, K., & Bloom, P. 2007. “Social Evaluation by Preverbal Infants.” Nature 450(7169): 557–559. Han, H., Kim, J., Jeong, C., & Cohen, G. L. 2017. “Attainable and Relevant Moral Exemplars Are More Effective than Extraordinary Exemplars in Promoting Voluntary Service Engagement.” Frontiers in Psychology 8 (283): 1-14. Hare, R. D. 1993. Without Conscience. Guilford Press. Harman, G. 1999. “Moral Philosophy and Linguistics.” In Proceedings of the 20th World Congress of Philosophy: Vol. 1. Ethics, ed. K. Brinkmann, pp. 107–115. Bowling Green, OH: Philosophy Documentation Center. Harman, G., Mason, K., & Sinnott-Armstrong, W. 2010. “Moral Reasoning.” In The Moral Psychology Handbook, ed. J. M. Doris and the Moral Psychology Research Group. Oxford University Press. Hauser, M., Cushman, F., Young, L., Jin, R., & Mikhail, J. 2007. “A Dissociation Between Moral Judgments and Justifications.” Mind and Language 22(1): 1–21. Helion, C., & Pizarro, D. A. 2014. “Beyond Dual-Processes: The Interplay of Reason and Emotion in Moral Judgment.” In Handbook of Neuroethics, ed. by J. Clausen & N. Levy. Dordrecht: Springer Netherlands. Henrich, J. 2016. The Secret of Our Success. Princeton University Press. Henrich, J., Heine, S. J., & Norenzayan, A. 2010. “The Weirdest People in the World?” Behavioral and Brain Sciences 33(2-3): 61-83. Ho, D. E., & Imai, K. 2008. “Estimating Causal Effects of Ballot Order from a Randomized Natural Experiment the California Alphabet Lottery, 1978–2002.” Public Opinion Quarterly 72(2): 216240. Holle, H., Banissy, M. J., & Ward, J. 2013. “Functional and Structural Brain Differences Associated with Mirror-touch Synaesthesia.” Neuroimage 83:1041-1050. Holton, R. 2009. Willing, Wanting, Waiting. Oxford: Clarendon Press. Holyoak, K. J., & Powell, D. 2016. “Deontological Coherence: A Framework for Commonsense Moral Reasoning.” Psychological Bulletin 142(11): 1179-1203. Horberg, E. J., Oveis, C., & Keltner, D. 2011. 
“Emotions as Moral Amplifiers.” Emotion Review 3(3): 237–244. Horgan, T., & Timmons, M. 2007. “Morphological Rationalism and the Psychology of Moral Judgment.” Ethical Theory and Moral Practice 10(3): 279–295.
Horne, Z., & Livengood, J. 2017. “Ordering Effects, Updating Effects, and the Specter of Global Skepticism.” Synthese 194(4): 1189–1218. Horne, Z. & Powell, D. 2016. “How Large Is the Role of Emotion in Judgments of Moral Dilemmas?” PLoS ONE. Horne, Z., Powell, D., & Hummel, J. 2015. “A Single Counterexample Leads to Moral Belief Revision.” Cognitive Science 39(8): 1950–1964. Hornstein, H. 1991. “Empathic distress and altruism: Still inseparable.” Psychological Inquiry 2 (2): 133– 135. Huebner, B. 2015. “Do Emotions Play a Constitutive Role in Moral Cognition?” Topoi 34 (2): 427-440. Huebner, B., Dwyer, S., & Hauser, M. 2009. “The Role of Emotion in Moral Psychology.” Trends in Cognitive Sciences 13(1): 1–6. Huebner, B., Hauser, M. D., & Pettit, P. 2011. “How the Source, Inevitability and Means of Bringing About Harm Interact in Folk-Moral Judgments.” Mind & Language 26(2): 210-233. Hume, D. 1739-40/2000. A Treatise of Human Nature. Ed. by D. F. Norton and M. J. Norton. Oxford University Press. Hume, D. 1751/1998. An Enquiry Concerning the Principles of Morals. Ed. T. L. Beauchamp. Oxford University Press. Hurka, T. 2014. “Many Faces of Virtue.” Philosophy and Phenomenological Research 89(2): 496–503. Hutcheson, F. 1725/1991. An Inquiry into the Original of our Ideas of Beauty and Virtue. Reprinted in part in British Moralists: 1650-1800, 2 Vols, ed. D.D. Raphael. Hackett, pp. 260–321. Inbar, Y., Pizarro, D. A., Knobe, J. and P. Bloom 2009. “Disgust Sensitivity Predicts Intuitive Disapproval of Gays.” Emotion 9(3): 435– 43. Inbar, Y., Pizarro, D., Iyer, R., & Haidt, J. 2012. “Disgust sensitivity, political conservatism, and voting.” Social Psychological and Personality Science 3(5): 537-544. Itzkoff, D. 2010. “Florida Governor Will Seek Pardon for Jim Morrison.” The New York Times, Arts Beat, Accessed 16 Nov. 2010, . Jackson, F. 1982. “Epiphenomenal Qualia.” The Philosophical Quarterly 32(127): 127-136. Jacobson, D. 2012. “Moral Dumbfounding and Moral Stupefaction.” In Oxford Studies in Normative Ethics: Vol. 2, edited by Mark Timmons, Oxford University Press. James, S. M. 2009. “The Caveman's Conscience: Evolution and Moral Realism.” Australasian Journal of Philosophy 87(2): 215-233. Jenni, K., & Loewenstein, G. 1997. “Explaining the Identifiable Victim Effect.” Journal of Risk and Uncertainty 14(3): 235-257. Johnston, M. 2010. Surviving Death. Princeton University Press. Jones, K. 2006. “Quick and Smart? Modularity and the Pro-Emotion Consensus.” Canadian Journal of Philosophy 36(Supplement): 3–27. Joyce, R. 2006. The Evolution of Morality, Cambridge, MA: MIT Press. Joyce, R. 2013. “Arguments from Moral Disagreement to Moral Skepticism.” In D. Machuca (Ed.), Moral Skepticism: New Essays, Routledge. Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. 2012. “The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks.” Nature Climate Change 2(6): 1–4. Kahane, G. 2011. “Evolutionary Debunking Arguments.” Noûs 45(1):103-125. Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M., & Savulescu, J. 2015. “‘Utilitarian’ Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good.” Cognition 134(C): 193–209. Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. 2012. “The Neural Basis of Intuitive and Counterintuitive Moral Judgment.” Social Cognitive and Affective Neuroscience 7(4): 393–402. pg. 191 of 206
Kahneman, D. 2011. Thinking, Fast and Slow. Macmillan. Kamm, F. M. 2009. “Neuroscience and Moral Reasoning: A Note on Recent Research.” Philosophy & Public Affairs 37(4): 330-345. Kant, I. 1785/2002. Groundwork for the Metaphysics of Morals. Trans. Mary Gregor. Cambridge University Press. Karniol, R. & Miller, D. T. 1983. “Why Not Wait? A Cognitive Model of Self-Imposed Delay Termination.” Journal of Personality and Social Psychology 43 (4): 935-942 Kass, L. R. 1997. “The Wisdom of Repugnance.” The New Republic 216 (22): 17–26. Kauppinen, A. 2013. “A Humean Theory of Moral Intuition.” Canadian Journal of Philosophy 43(3): 360–381. Kelly, D., Stich, S., Haley, K. J., Eng, S. J., & Fessler, D. M. 2007. “Harm, Affect, and the Moral/Conventional Distinction.” Mind & Language 22(2): 117-131. Kelly, D. 2011. Yuck!: The Nature and Moral Significance of Disgust. MIT Press. Kenessey, B. de & Darwall, S. 2014. “Moral Psychology as Accountability.” In Moral Psychology and Human Agency, ed. by J. D’Arms and D. Jacobson. Oxford University Press. (Originally published under the name “Brendan Dill.”) Kennett, J. 1993. “Mixed Motives.” Australasian Journal of Philosophy 71(3): 256-269. Kennett, J. 2002. “Autism, Empathy and Moral Agency.” Philosophical Quarterly 52 (208): 340-357. Kennett, J. 2006. “Do Psychopaths Really Threaten Moral Rationalism?” Philosophical Explorations 9(1): 69-82. Kennett, J. & Fine, C. 2008. “Internalism and the Evidence from Psychopaths and ‘Acquired Sociopaths.’” In Moral Psychology, Vol. 3, ed. W. Sinnott-Armstrong, pp. 173–90. MIT Press. Kennett, J., & Fine, C. 2009. “Will the Real Moral Judgment Please Stand Up?” Ethical Theory and Moral Practice 12(1): 77-96. Knobe, J. 2010. “Person as Scientist, Person as Moralist.” Behavioral and Brain Sciences, 33, 315-329. Koenigs, M., Kruepke, M., Zeier, J., & Newman, J. P. 2012. “Utilitarian Moral Judgment in Psychopathy.” Social Cognitive and Affective Neuroscience 7(6): 708–714. Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M. D., & Damasio, A. R. 2007. “Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgements.” Nature 446(7138): 908–911. Kohlberg, L. 1973. “The Claim to Moral Adequacy of a Highest Stage of Moral Judgment.” The Journal of Philosophy 70(18): 630-646. Korsgaard, C. M. 1986/1996. “Skepticism about Practical Reason.” Ch.11 in Korsgaard’s Creating the Kingdom of Ends, pp. 311-34. Cambridge University Press. Korsgaard, C. M. 1996/2008. “From Duty and for the Sake of the Noble: Kant and Aristotle on Morally Good Action.” Ch. 6 in Korsgaards’s The Constitution of Agency, pp. 174-206. Oxford University Press. Korsgaard, C. M. 2009. Self-Constitution: Agency, Identity, Integrity. Oxford University Press. Kühberger, A. 1998. “The Influence of Framing on Risky Decisions: A Meta-Analysis.” Organizational Behavior and Human Decision Processes 75(1): 23–55. Kumar, V. 2017. “Foul Behavior.” Philosophers’ Imprint 17(15): 1-16. Kumar, V. 2015. “Moral Judgment as a Natural Kind.” Philosophical Studies 172(11): 2887-2910. Kumar, V. 2016a. “The Empirical Identity of Moral Judgment.” Philosophical Quarterly 66(265): 783804. Kumar, V. 2016b. “Nudges and Bumps.” Georgetown Journal of Law and Public Policy 14 (SI): 861876. Kumar, V. & Campbell, R. 2012. “On the Normative Significance of Experimental Moral Psychology.” Philosophical Psychology 25 (3): 311-330. Kumar, V. & May, J. Forthcoming. “How to Debunk Moral Beliefs.” In The New Methods of Ethics, ed. by J. Suikkanen & A. Kauppinen. pg. 192 of 206
Kunda, Z. 1990. “The Case for Motivated Reasoning.” Psychological Bulletin 108(3): 480-98. Lamont, T. 2015. “Joseph Gordon-Levitt: ‘Edward Snowden was Warm, Kind, Thoughtful.’” The Guardian, accessed 2 May 2016, . Landy, J. F. & Goodwin, G. P. 2015. “Does Incidental Disgust Amplify Moral Judgment? A MetaAnalytic Review of Experimental Evidence.” Perspectives on Psychological Science 10(4): 518536. Latané, B., & Nida, S. 1981. “Ten Years of Research on Group Size and Helping.” Psychological Bulletin, 89(2): 308-324. Lawrence, C., Horne, Z., & Rottman, J. 2017. “Relational Matching Induces Coherence Shifts in Moral Attitudes about Meat.” Unpublished manuscript. Lenman, J. 1996. “Belief, Desire and Motivation: an Essay in Quasi-Hydraulics.” American Philosophical Quarterly 33(3): 291-301. Levy, N. 2007. Neuroethics. Cambridge University Press. Levy, N. 2011. “Resisting ‘Weakness of the Will.’” Philosophy and Phenomenological Research 82 (1): 134-155. Levy, N. 2015a. “Neither Fish nor Fowl: Implicit Attitudes as Patchy Endorsements.” Noûs 49(4): 800– 823. Levy, N. 2015b. “The Virtuous Homophobe.” Practical Ethics. Accessed 15 September 2015, . Liao, S. M., A. Wiegmann, J. Alexander, and G. Vong. 2012. “Putting the Trolley in Order: Experimental Philosophy and the Loop Case.” Philosophical Psychology 25: 661–671. Libet, B. 1985. “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action.” Behavioral and Brain Sciences 8: 529–566. Lillehammer, H. 1997. “Smith on Moral Fetishism.” Analysis, 57(3): 187–195. Lillehammer, H. 2003. “Debunking Morality.” Biology and Philosophy 18(4): 567-581. Locke, D. 2014. “Darwinian Normative Skepticism.” In Challenges to Moral and Religious Belief: Disagreement and Evolution, ed. by M. Bergmann & P. Kain. Oxford University Press. Lombrozo, T. 2009. “The Role of Moral Commitments in Moral Judgment.” Cognitive Science 33(2): 273–286. Lord, C. G., Ross, L., & Lepper, M. R. 1979. “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.” Journal of Personality and Social Psychology 37(11): 2098–2109. Machery, E. & Doris, J. M. 2017. “An Open Letter to Our Students: Doing Interdisciplinary Moral Psychology.” In Moral Psychology: A Multidisciplinary Guide, ed. by B. G. Voyer and T. Tarantola. Springer. Maibom, H. L. 2005. “Moral Unreason: The Case of Psychopathy.” Mind and Language 20(2): 237-57. Maibom, H. L. 2010. “What Experimental Evidence Shows Us About the Role of Emotions in Moral Judgement.” Philosophy Compass 5 (11): 999-1012. Mallon, R. & Nichols, S. 2010. “Rules.” In The Moral Psychology Handbook, ed. J. M. Doris and The Moral Psychology Research Group, New York: Oxford University Press, 297–320. Maner, J. K., Luce, C. L., Neuberg, S. L., Cialdini, R. B., Brown, S., & Sagarin, B. J. 2002. “The Effects of Perspective Taking on Motivations for Helping: Still No Evidence for Altruism.” Personality and Social Psychology Bulletin 28(11): 1601-1610. Markovits, J. 2010. “Acting for the Right Reasons.” The Philosophical Review 119(2): 201–242. Marsh, A. A., & Blair, R. J. R. 2008. “Deficits in Facial Affect Recognition among Antisocial Populations: A Meta-analysis.” Neuroscience & Biobehavioral Reviews 32(3): 454–465. Marshall, J., Watts, A. L., & Lilienfeld, S. O. Forthcoming. “Do Psychopathic Individuals Possess a Misaligned Moral Compass? A Meta-Analytic Examination of Psychopathy’s Relations with pg. 193 of 206
Moral Judgment.” Personality Disorders: Theory, Research, and Treatment. DOI: 10.1037/per0000226 May, J. 2011a. “Relational Desires and Empirical Evidence against Psychological Egoism.” European Journal of Philosophy 19(1): 39-58. May, J. 2011b. “Egoism, Empathy, and Self-Other Merging.” Southern Journal of Philosophy 49(S1): 25–39, Spindel Supplement: Empathy & Ethics, ed. R. Debes. May, J. & Holton, R. 2012. “What in the World Is Weakness of Will?” Philosophical Studies 157(3): 341–360. May, J. 2013a. “Because I Believe It’s the Right Thing to Do.” Ethical Theory & Moral Practice 16(4): 791–808. May, J. 2013b. “Skeptical Hypotheses and Moral Skepticism.” Canadian Journal of Philosophy 43(3): 341–359. May, J. 2014a. “Does Disgust Influence Moral Judgment?” Australasian Journal of Philosophy 92(1): 125–141. May, J. 2014b. “On the Very Concept of Free Will.” Synthese 191(12): 2849-2866. May, J. 2016a. “Repugnance as Performance Error: The Role of Disgust in Bioethical Intuitions.” The Ethics of Human Enhancement: Understanding the Debate, ed. by S. Clarke, et al. Oxford University Press. May, J. 2016b. “Emotional Reactions to Human Reproductive Cloning.” Journal of Medical Ethics 42(1): 26-30. May, J. 2018. “The Moral & Political Limits of Disgust.” The Moral Psychology of Disgust, ed. by V. Kumar & N. Strohminger. Rowman & Littlefield. May, J. & Kumar, V. Forthcoming. “Moral Reasoning and Emotion.” In The Routledge Handbook of Moral Epistemology, ed. by K. Jones, M. Timmons, & A. Zimmerman. Routledge. Mazar, N., Amir, O., & Ariely, D. 2008. “The Dishonesty of Honest People: A Theory of Self-concept Maintenance.” Journal of Marketing Research 45(6): 633–644. Mazar, N., & Zhong, C. B. 2010. “Do Green Products Make Us Better People?” Psychological Science, 21(4): 494–498. McDowell, J. 1978/1998. “Are Moral Requirements Hypothetical Imperatives?” In McDowell’s Mind, Value, and Reality, pp. 77-94. Harvard University Press. McDowell, J. 1995/1998. “Might There Be External Reasons?” In McDowell’s Mind, Value, and Reality, pp. 95-111. Harvard University Press. McGrath, S. 2008. “Moral Disagreement and Moral Expertise.” In Oxford Studies in Metaethics, Vol. 3, ed. R. Shafer-Landau. Oxford University Press. McGuire, J., Langdon, R., Coltheart, M., C. Mackenzie 2009. “A Reanalysis of the Personal/Impersonal Distinction in Moral Psychology Research.” Journal of Experimental Social Psychology 45(3): 577–580. Mele, A. 2003. Motivation and Agency. New York: Oxford University Press. Mendez, M. F., Anderson, E., & Shapira, J. S. 2005. “An Investigation of Moral Judgement in Frontotemporal Dementia.” Cognitive and Behavioral Neurology 18(4): 193–197. Mercer, M. 2001. “In Defence Of Weak Psychological Egoism.” Erkenntnis, 55, 217–237. Mercier, H., Deguchi, M., Van der Henst, J. B., & Yama, H. 2016. “The Benefits of Argumentation Are Cross-culturally Robust: The Case of Japan.” Thinking & Reasoning 22(1): 1-15. Mercier, H., & Sperber, D. 2011. “Why Do Humans Reason? Arguments for an Argumentative Theory.” Behavioral and Brain Sciences 34(02): 57–74. Merritt, M. W., Doris, J. M., & Harman, G. 2010. “Character.” In The Moral Psychology Handbook, ed. by J. Doris and the Moral Psychology Research Group. Oxford University Press. Mikhail, J. 2007. “Universal Moral Grammar: Theory, Evidence and the Future.” TRENDS in Cognitive Sciences 11(4): 143–52. Mikhail, J. 2011. Elements of Moral Cognition. Cambridge University Press. pg. 194 of 206
Mikhail, J. 2013. “New Perspectives on Moral Cognition: Reply to Zimmerman, Enoch, and Chemla, Egré & Schlenker.” Jerusalem Review of Legal Studies 8(1): 66–114. Mikhail, J. 2014. “Any Animal Whatever? Harmful Battery and its Elements as Building Blocks of Moral Cognition.” Ethics 124(4): 750-786. Millar, J. C., Turri, J., & Friedman, O. 2014. “For the Greater Goods? Ownership Rights and Utilitarian Moral Judgment.” Cognition 133(1): 79–84. Miller, C. B. 2013. Moral Character: An Empirical Theory. Oxford University Press. Miller, C. B. 2016. “Honesty.” In Moral Psychology Vol. 5: Virtue & Happiness, ed. by W. SinnottArmstrong & C. Miller. MIT Press. Miller, R. W. 1985. “Ways of Moral Learning.” The Philosophical Review 94(4): 507–556. Mogensen, A. L. 2015. “Evolutionary Debunking Arguments and the Proximate/Ultimate Distinction.” Analysis 75(2): 196–203. Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., & Grafman, J. 2005. “The Neural Basis of Human Moral Cognition.” Nature Reviews Neuroscience 6(10): 799-809. Monroe, K. R., M. C. Barton, & U. Klingemann 1990. “Altruism and the Theory of Rational Action: Rescuers of Jews in Nazi Europe.” Ethics 101(1): 103-22. Moody-Adams, M. M. 1997. Fieldwork in familiar places: Morality, culture, and philosophy. Harvard University Press. Moran, J. M., Young, L. L., Saxe, R., Lee, S. M., O’Young, D., Mavros, P. L., & Gabrieli, J. D. 2011. “Impaired Theory of Mind for Moral Judgment in High-Functioning Autism.” Proceedings of the National Academy of Sciences of the United States of America 108(7): 2688–2692. Morillo, C. 1990. “The Reward Event and Motivation,” The Journal of Philosophy 87(4): 169–186. Morrow, D. 2009. “Moral Psychology and the ‘Mencian creature.’” Philosophical Psychology 22(3): 281304. Moore, A. B., Clark, B. A., & Kane, M. J. 2008. “Who Shalt Not Kill? Individual Differences in Working Memory Capacity, Executive Control, and Moral Judgment.” Psychological Science 19(6): 549557. Mukhopadhyay, A., & Johar, G. V. 2009. “Indulgence as Self-Reward For Prior Shopping Restraint: A Justification-Based Mechanism.” Journal of Consumer Psychology 19(3): 334–345. Musen, J. 2010. “The Moral Psychology of Obligations to Help Those in Need.” Honors thesis, Harvard. Nadelhoffer, T., & Feltz, A. 2008. “The Actor–Observer Bias and Moral Intuitions: Adding Fuel To Sinnott-Armstrong’s Fire.” Neuroethics 1(2): 133-144. Nado, J., Kelly, D., & Stich, S. 2009. “Moral Judgment.” In The Routledge Companion to Philosophy of Psychology, ed. by J. Symons & P. Calvo, pp. 621-633. Routledge. Nagel, T. 1970/1978. The Possibility of Altruism. Princeton University Press. Nahmias, E. 2007. “Autonomous Agency and Social Psychology.” In Cartographies of the Mind: Philosophy and Psychology in Intersection, ed. by Marraffa, Caro, and Ferretti, pp. 169-185. Springer. Nelkin, D. K. 2005. “Freedom, Responsibility and the Challenge of Situationism.” Midwest Studies in Philosophy 29(1): 181–206. Neuberg, Steven L., Cialdini, R. B., Brown, S.L., & C. Luce 1997. “Does Empathy Lead to Anything More Than Superficial Helping? Comment on Batson et al. (1997)” Journal of Personality and Social Psychology 7(3): 510-516. Nichols, S. 2002. “Norms with Feeling: Towards a Psychological Account of Moral Judgment.” Cognition 84(2): 221–236. Nichols, S. 2004. Sentimental Rules: On the Natural Foundations of Moral Judgment. New York: Oxford University Press. Nichols, S. 2008. “Moral Rationalism and Empirical Immunity.” In Moral Psychology Vol. 3, ed. W. Sinnott-Armstrong. 
Cambridge, MA: MIT Press. Nichols, S. 2014. “Process Debunking and Ethics.” Ethics 124: 727–749.
Nichols, S., Kumar, S. & Lopez, T. 2016. “Rational Learners and Moral Rules.” Mind & Language 31(5): 530-554. Nichols, S. & R. Mallon 2006. “Moral Dilemmas and Moral Rules.” Cognition 100(3): 530–542. Nisbett, R. E., & Cohen, D. 1996. Culture of honor: the psychology of violence in the south. Boulder: Westview Press. Nisbett, R. E. & Wilson, T. D. 1977. “Telling More Than We Can Know: Verbal Reports on Mental Processes.” Psychological Review 84(3):231–259. Nucci, L. P. 2001. Education in the Moral Domain. Cambridge University Press. Nussbaum, M. C. 2001. Upheavals of Thought. New York: Cambridge University Press. Nussbaum, M. C. 2004. Hiding from Humanity: Disgust, Shame, and the Law. Princeton, NJ: Princeton University Press. Open Science Collaboration 2015. “Estimating the Reproducibility of Psychological Science.” Science 349(6251): aac4716. Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. 2013. “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies.” Journal of Personality and Social Psychology 105(2): 171–192. Parfit, D. 1984. Reasons and Persons. New York: Oxford University Press. Parfit, D. 1997. “Reasons and Motivation.” Aristotelian Society, Suppl. Vol. 77, pp. 99-130. Park, J. J. 2011. “Prototypes, Exemplars, and Theoretical & Applied Ethics.” Neuroethics 6(2): 237–247. Paul, L. A. 2014. Transformative Experience. New York: Oxford University Press. Paul, L. A. 2017. “First Personal Modes of Presentation and the Structure of Empathy.” Inquiry 60(3): 189–207. Paxton, J. M., & Greene, J. D. 2010. “Moral Reasoning: Hints and Allegations.” Topics in Cognitive Science 2(3): 511–527. Paxton, J. M., Ungar, L., & Greene, J. D. 2012. “Reflection and Reasoning in Moral Judgment.” Cognitive Science 36(1): 163–177. Payne, B. K. 2001. “Prejudice and Perception: The Role of Automatic and Controlled Processes in Misperceiving a Weapon.” Journal of Personality and Social Psychology 81(2): 181–192. Pellizzoni, S., Siegal, M., & Surian, L. 2010. “The Contact Principle and Utilitarian Moral Judgments in Young Children.” Developmental Science 13(2): 265-270. Pence, G. E. 1998. Who’s Afraid of Human Cloning? Rowman & Littlefield. Perry, J. 1979. “The Problem of the Essential Indexical.” Noûs 13(1): 3–21. Persson, I., & Savulescu, J. 2012. Unfit for the future: The need for moral enhancement. Oxford University Press. Petrinovich, L., & O’Neill, P. 1996. “Influence of Wording and Framing Effects on Moral Intuitions.” Ethology and Sociobiology 17(3): 145–171. Pettit, D. & Knobe, J. 2009. “The Pervasive Impact of Moral Judgment.” Mind and Language 24(5): 586– 604. Pew Research Center 2014. “Religious Landscape Study.”